October 24, 2013

Ethics Commentary Part VII: A Satisfactory Moral Theory

Chapter 13: What Would A Satisfactory Moral Theory Look Like?

In the final chapter of The Elements of Moral Philosophy, Rachels does something odd. He gives some guidelines for moral theories, as well as his own moral theory. While I do think it is good that he contextualizes the opinions he gives in his book, it would have been nice if he had owned up to this at the beginning, so readers could better understand his biases before reading his criticisms of other theories.

Rachels starts this chapter by saying that many theories have been put forth, but have met with "crippling objections" (by which I suppose he means whether or not they allow us to disagree on things, and the fact that some of them have different premises from his own--can't have that, can we, Rachels?). He says that some people refuse to give an answer because we don't know enough to reach the "final analysis," but that we do know a lot. I think this is presumptuous of him, since there is no way to "know" anything, so in that respect we have been in the same spot for a long time. What would we need to know in order to come up with a final analysis? Whether morality, as an objective good, exists. Settling that would basically solve a lot of philosophy, though even it would be problematic. So I think his point here is...pointless. We don't need to know very much to come up with a satisfactory moral theory, because all moral theories--so long as they have internal validity--are satisfactory.

Then he gets worse by talking about reason giving rise to ethics. Sure, maybe that is the case in a lot of theories, but it isn't a known fact. He's basically recapping all the disdain he has for any theory that didn't come from the Enlightenment and Revolution eras. And again, he says something stupid about Psychological Egoism: "If Psychological Egoism were true--if we could care only about ourselves--this would mean that reason demands more of us than we can manage. But Psychological Egoism is not true; it presents a false picture of human nature and the human condition."

And no, he doesn't give his reasons for that statement here. Rachels just doesn't know when to give up. Psychological Egoism would account for everything that reason leads us to. Psychological Egoism accounts for all human actions, necessarily; there are always ways to link human actions back to it. Yes, this means it cannot be falsified, but failing that test of scientific theories shouldn't, by itself, mean that a theory is wrong. The ideas we come up with through reason are usually ones which help us, help others we care about, help other people help us, give us peace of mind, etc. And why should it count against Psychological Egoism if, indeed, "this would mean that reason demands more of us than we can manage"? It's possible for reason to demand something we can't do. If that is right, and Psychological Egoism is right, it would merely mean that the theory tells us we can't manage it, which would be good information to have. I don't think he really makes a good case for this, though.

Of course, he only focuses on disproving it. In order to disprove Psychological Egoism, you would have to find a non-religious psychopath, ignorant of what rewards he might receive for an act of kindness, who sacrifices his life for another person he doesn't care about, when he was otherwise going to enjoy a happy life, who also has no moral compass telling him that what he is doing is a good thing. That's a tall order, and even then you might find a few chinks through which Psychological Egoism has found a way in. There just isn't a way to disprove it, and so Rachels does his readers a disservice by dismissing the theory outright, and making blanket statements about its possible uses.

Then he talks about treating people as they deserve, basically adopting Kant's respect for persons and the Golden Rule mentality. Now, this kind of makes sense, in that helping people who help you will foster more helpfulness, but I think it's bad to just assume that we shouldn't help people who don't help us. To a certain extent, yes, helping them out in such a case would just encourage them to continue doing what they do. But sometimes people really need help to change their ways. I don't think something like this can really be made into a code of any kind, since it really just depends on the situation.

The next section is about motives, and how, as he puts it, "Only a philosophical idiot would want to eliminate love, loyalty, and the like from our understanding of the moral life." While it is true that most moral systems take these kinds of things very seriously, and they do make people happier, I don't see how he can objectively assert that. He says that, "If such motives were eliminated, and instead people simply calculated what was best, we would all be much worse off." Worse off in what way? By "calculating what is best," don't we pick what actually benefits everyone? I will say that calculating these things is a very problematic endeavor anyway, but he's not really backing up his points. This section basically says, "Don't take love and friendship out of morality, because humans really like that stuff." Why don't you just say that love is part of that calculation? We know that we will feel bad if we don't include it, and because that will make us worse off, we will include it, thus rendering this argument invalid.

And now he gets to Multiple-Strategies Utilitarianism. He says of it, "This theory is utilitarian, because the ultimate goal is to maximize the general welfare. However, the theory recognizes that we may use diverse strategies to pursue that goal." In this, he includes directly working towards general welfare, such as with charity, following rules that help everyone, and also exceptions to those rules (as well as criteria for these exceptions). He says that making a list of the things which would perfectly benefit everyone and promote the general welfare is probably impossible, but that we can still try to create a "best plan" for ourselves, as individuals, to follow. A best plan would include prohibitions against killing, stealing, lying, etc., but also an understanding of when you can break these rules. And everyone has a different best plan, based on their circumstances and idiosyncrasies. Basically, what he's done is combine Anscombe's criticism of the Categorical Imperative with Utilitarianism.

The rest of his chapter is pretty pointless, straightforward stuff. He ends by not answering the very question he started with. So what would a satisfactory moral theory look like?

The answer is simple: it would look like a moral theory. Whether or not it is satisfactory is entirely subjective. For me, the best moral theory is Subjectivism. Personally, I follow a combination of Subjectivism, Social Contract Theory, and Utilitarianism (with the assumption of Psychological Egoism mixed in). I act in order to keep the social structure in place, for my own safety. Towards others, I try not to do anything that will make them resent me, so that I can benefit from society. I also seek the promotion of general welfare, though it is mostly so that my welfare is looked after, and so that I can feel good about it. I think that if everyone were to act this way, the world would be a much better place. But, at all times, I remind myself of Subjectivist truths: that nothing can be known to be "moral," "right," "bad," "evil," etc.

I also challenge the Principle of Equal Treatment, and absolutism. Given my Subjectivist background, I think it is perfectly reasonable to change what moral rules you follow. For example, I do think that abortion is murder, but I reason that murder shouldn't have such a drastic prohibition. In society, it can cause unrest, and I'm against it in that capacity, but...I don't think it's wrong. We murder just by doing nothing: sperm and eggs are inside us right now, dying, unless we join them and make babies. And even when we have sex, there are still millions of sperm that die. Should we then demand that each sperm cell be harvested, constantly? There's no way to escape death as a reality of life. So while I may be against murder in the macrocosmic, public sphere, where it causes a lot of problems, I think killing a fetus is totally justifiable (and I'm not saying that everyone who wants one should get one--I realize that some women get very depressed after having it done, and that some women end up glad that they didn't have an abortion. I'm just saying that it should be an option. After all, I'm pro-choice, not pro-death).

In other instances too, though, I think it's okay to fluctuate. Even if there isn't a "logical" reason to do so, there is no reason why people shouldn't be able to choose to do whatever the hell they want. Should we treat everyone equally, unless there is some qualifying factor that changes that? Not necessarily. If I want to treat people badly for no reason, there is no real objective "moral" to look to that says I can't. Now, I don't really act on this, but objectively I think that including this in your moral theory would still make it satisfactory.

What I'm trying to get at here is that Rachels's view--that reason begets ethics--is total baloney. Reason works based on absolutes. Based on what you decide is real. If you change the absolutes, reason will give you a different answer. So, sure, use reason to develop your ethics. Use reason to change your ethics as you learn more about the world, and feel differently from time to time. Use reason to decide whether a moral theory has internal validity, or needs revision. But there is no way in hell that humans will ever be able to reason out the one true, objective moral theory.
