Edit: Surprised at all the vegans in this thread. I didn’t think there were so many of you. I’m glad you care so much about animal rights that you’re willing to forgo eating them and using products made from them. If you’re not vegan and have moral objections to this, maybe you should look at yourself first, and at all the animal abuse you sanction by eating animals and using animal products. Did you know dairy cows have to be pregnant to produce milk? They’re artificially inseminated throughout most of their lives. I hope everyone complaining about this also complains about ice cream and cheese. Otherwise they’re hypocrites who just want to blame others but never look at themselves.

  • @CarbonIceDragon
    English · −1 · 10 months ago

    It would be possible to find the most ethical solution to a problem given all the variables only if you have already selected a system to determine which combinations of values for each variable are more or less ethical. That is to say, if one goes with utilitarianism, one could hypothetically measure objectively how much happiness or suffering results from a given situation and pick the option that maximized or minimized it, or do the equivalent for a different ethical system; but you cannot objectively decide whether utilitarianism, or deontology, or whatever other ethical system one may wish to use, is even the right system to use in the first place. Before your hypothetical measurements of every variable can actually be used to determine which solution to an ethical problem is best, one has to decide what a solution even looks like, and that decision is ultimately arbitrary.

    • @ParsnipWitch@feddit.de
      English · 1 · 10 months ago

      It’s not arbitrary, though. It is just hard to define. Ethical theory uses certain axioms that aren’t subjective. I am not talking about your moral values, but about whether or not certain behaviour is ethical.

      As a drastic example, take driving over a person because it is faster than driving around them. We can certainly decide for some cases whether that is ethically good or not. For the harder-to-decide cases, it’s again just a matter of not knowing all the variables. If we knew all the variables, we could put each reason for driving over a person on a scale of “ethical goodness”. Since we have certain axioms in ethics, we can logically conclude a result for every ethical question (in theory).

      This is no more made up than mathematics is made up. The quantity (not the mass!) of objects, for example, is also just a thought we impose on objects. It is not in the nature of objects to have a quantity, and if we didn’t have an inherent concept of mathematics, quantity would not exist for us. In that way, it’s different from gravity and other such physical realities. It is the same with ethics.

      • @CarbonIceDragon
        English · 1 · 10 months ago

        Quantities do exist in nature, outside of our concepts, though. If there exist two electrons in a given area of space versus only one electron in a different but equivalent area of space, the implications of that for everything interacting with them would exist regardless of whether we had a word for what two was.

        That aside, though, I think that what you have just said confirms what I am trying to say, because you yourself state that ethics is based on axioms. The thing about axioms is that they are not proven; they are statements simply assumed to be true, from which all else follows. Mathematics is similarly founded, to my understanding, but mathematics can also be compared against measured reality to check whether the system derived from a given set of axioms describes that reality. For example, Euclidean geometry follows logically from a set of axioms, but since relativity seems to show that physical space does not always perfectly follow Euclidean geometry, observation of the physical universe can demonstrate that this mathematical model doesn’t completely fit reality. It’s still useful, of course, and it does still follow logically from its axioms, but those axioms can be verified as fitting observable reality or not; the choice isn’t arbitrary if you accept those axioms and you’re talking about our universe. Ethical axioms are a different matter. Sure, you can have an objective ethical system based on a set of ethical axioms, but to do this you have to accept that particular set of axioms in the first place. If one were to use a different set of axioms, one would get different ethics.

        Suppose we were to meet some aliens: intelligent and technologically sophisticated in the same way and to the same degree that we are. They’d probably possess ethics of some kind (since ethics is ultimately a tool for deciding what one should and should not do, and the aliens would need to make such decisions just as we do). They’d also probably possess mathematics, since they’re using science and technology. It might not look like our mathematics, having different symbols and phrasing and different ways to manipulate those symbols, but it should have roughly equivalent concepts, since they’re using it to model the same universe as us. We can assume with reasonable certainty that the axioms their mathematics uses when describing reality will translate to be the same as ours, assuming we both take the measurements needed to confirm whether our models fit; there’s no room for them to take arbitrarily different axioms here, because if they do, their results just won’t be useful except as a mathematical curiosity. However, there’s no particular reason that I can see to assume they have to accept ethical axioms equivalent to ours (if you can think of a reason to assume that they do have to arrive at the same ones, I’d be interested in hearing it). If they can create a completely different set of ethics starting from different axioms, and don’t run into any contradiction with observed reality in using that different model, that would seem to imply that the choice of which axioms to use is ultimately arbitrary, and that we can just choose whatever set of them results in an ethical system we happen to like.