November 26, 2013

MaddAddam (some minor spoilers)

I finally got around to reading the last book in Atwood's MaddAddam Trilogy, MaddAddam. Having read many books since The Year of the Flood, it was nice to see that this book included a synopsis of the previous novels, which I prefer over the author including little reminders in the text itself (though she ended up doing that anyway).

I was expecting her to continue in the vein of the previous novel, but with Zeb this time. Instead, the book is mostly from Toby's point of view, with Zeb telling her his life story in brief snippets. I liked that Atwood finally started writing more about the current happenings instead of doing mostly flashbacks, but...then I got annoyed because practically nothing was actually happening. They try to help Jimmy (who is by now just a crazy guy that no one really likes), and try to survive as best they can, while preparing for a battle with the Painballers who are still out there, posing a threat to them. The most action happens in the last fifty or so pages, and this is disappointing because it's no longer told from Toby's perspective, but from that of one of the Crakers. And, thus, the battle is told in a...less-than-engrossing way.

The worst thing about the book, and what really made me glad to finish it, was the fact that Toby was extremely needy and jealous. Zeb and Toby end up in a relationship early on in the book, and Toby spends at least one paragraph per chapter (and often more) making sure the reader remembers that she is in love with Zeb and hopes that he's going to be hers exclusively. It gets very tiring very quickly. You'd think that Atwood, a feminist, wouldn't make her female main character so stereotypical and young-adult-love-triangle-ish. I don't remember getting that vibe from her in The Year of the Flood. Maybe I just forgot it, or it wasn't that prevalent. In any case, MaddAddam really suffers from this.

In my opinion, Atwood should have stopped with Oryx and Crake, since that was a classic by itself. The sequels were pointless, unless she wanted to emphasize feminist concerns. And if that was her goal, why make her leading female character so dependent on a man? Why make her male characters so much cooler than the female characters? (Though I suppose she tried to cut back on this by making Jimmy seem stupid at the end, when I'd been sympathetic towards him before). It gives me the feeling that she was merely trying to milk this story for all it was worth, rather than actually trying to produce a good work of art that made people think about feminist, environmental, and dystopian ideas.

October 26, 2013

Religion and Morality

I've been on a morality/ethics kick for a while now, and I guess it's not really because I wanted to continue that old critique. A few things happened recently that made me think about this stuff again. It all started when, for my Intro to Ethnic Studies class, I had to go out and find some place to observe people, writing down what I saw and analyzing those observations to see what statements I could make about race relations, social behavior, etc. I went to the Farmer's Market, and my fiancee saw the anti-evolution stand that's there every week. She pointed out that they had both an American and an Israeli flag on their stand, so we went to ask about that. This quickly became a discussion about religion, as they pretty much always make it about that, whatever it is you originally meant to talk to them about. But silly arguments aside, they mentioned at one point that Atheism doesn't give people any basis for morality. I agreed, and this just led us to saying things they didn't really grasp (like the idea that we can't know anything objectively. They really couldn't wrap their minds around that one, unless they simply misunderstood us completely).

Then my fiancee saw two people sitting at a table, giving information about a philosophy group. She was interested, and the next Thursday, we joined them (she used this group for a project she had to do for a class). There weren't many people there, but they had a very long discussion about how we determine value. While that sounds cool...it was pretty superficial. Not once did anyone question value itself, and ask whether or not things had value in the first place. Rather, it was basically just about why we value certain types of work, and certain types of objects, more than others and at different times. Pretty basic stuff. Going there just reminded me how I don't really have many people that I can talk to about deep philosophical things, and that most philosophy classes are just like this group (or, rather, worse than them).

At first, I just started thinking again about why we value things, and the fact that intrinsic value doesn't make sense...but then I was pulled back to what the anti-evolutionists were saying about morality. It's something that I feel Atheists try very hard to explain away, but I think it's something important to delve into. The basic argument is that morality without religion is free from obligation, and is something we do because it is right, instead of something we are told to do. But I think this is a bad argument, since it presumes that something is "right." This is one of the reasons why I think Atheism is, in fact, a religion. It's kind of like Buddhism: there's no god, and it's not the same for everyone, but it still has "facts" that we can't explain without belief.

Now, I know that saying this will make Atheists angry. They don't like to be associated with such terms as "belief," "faith," and "religion." But if you are offended, I ask you to really think about what these terms mean. When I say that Atheism requires belief, I mean that it requires belief in "reality." You might not think of that as belief, but then you'd have to be able to prove that what we see around us is actually real. Which you can't. So you have to allow for belief. (For those who'll say that this is ridiculous, think about this: the brain can easily be fooled, and often is. How do we know, then, that everything in "reality" is not simply a misinterpretation of stimuli? Or that we have the kinds of brains we think we have? What if they work in a completely different way, which forces us to see this kind of "reality," in which our brains work this way instead? How do you know that the entire universe isn't all an illusion, a dream, etc.?)

Faith branches off of belief, but is less present in Atheism. Science, which most Atheists take as a founding principle, basically says that we should question everything, and determine whether things are true based on extensive tests of veracity. And even then, it allows for the idea that we might be wrong. But this system implies faith in the scientific method, in reasoning, and in logic. It is entirely possible that the universe is not governed by any logical laws. Perhaps what we witness is just random, or perhaps the universe changes based on our observation of it, only following laws when those who think laws must exist are near.

And religion basically comes from the combo of those two. The religion of Atheism says that there are no gods, and the universe can be explained purely by natural processes. Since there is no way of knowing if this is true, we can't see this statement as anything but a statement of belief. True, there are some who aren't this strict about things, but I would argue that they are not Atheists. Those who are on the side of uncertainty, and know that it is impossible to know anything, are Agnostic. There's a lot of confusion about terms, but I propose that the designation Agnostic Atheist be abolished, as well as Gnostic Atheist. A better, less confusing way of describing these things would be to change Gnostic Atheist to Atheist, and Agnostic Atheist to Agnostic.

(As for science, I want to clear something up: science, in and of itself, is not a religion or faith-based system. Gnostic Atheism, which uses science, is. Science is just the search for truth--it doesn't state that anything is true. Rather, it mostly gives guidelines for proving something false. The core of science is the endeavor of learning all there is to learn about the universe and everything, while being humble enough to realize that we can be wrong about the answer we come up with. If all of our "progress" in science up to this point were to be proven wrong, that wouldn't be a problem for science, since it doesn't depend on the veracity of its theories. But if a god appeared and showed that he was the one behind what we thought were natural processes, then Gnostic Atheism would be proven entirely wrong. So science goes best with Agnostic Atheism, but is not the exclusive method of either.)

But I digress. Atheism, as I have just defined it, presumes that something is "right." This is a belief, and because Atheists believe that certain principles are right, it can be argued that they do have a basis for morality. For most Atheists, this basis is reason. It's Agnostics who don't have a basis, since they have no concrete "right" and "wrong" to cling to.

The question I have, though, goes beyond this little semantics problem I brought up. What I have to wonder is...why should it matter? The anti-evolutionists made a point that there is no basis for morality in Atheism. And a lot of people make this point. But why is this relevant? Does the truth require morals? Is morality something we know to be true, regardless of which religion it comes from? No, of course not. Atheism can be right whether or not morals are part of the equation. Morality shouldn't be a prerequisite.

It's entirely possible that morality is just something we came up with, because humans (and some other animals) have an inclination towards fairness. That doesn't mean fairness is necessarily "right." I have an inclination towards reading, but that doesn't mean it is "right" for me to read. People have an inclination towards watching sports, but that doesn't make that "right." Anything we do is simply what we do, and does not necessarily become what we "should" do just because most of us have an inclination towards doing that thing.

So if someone kills another person, and we feel bad about it, that doesn't necessarily make it "wrong." If someone kills the entire human race, and our "progress" comes to an end, that isn't necessarily "wrong." If a comet hits a planet, that's not "wrong." If a star goes supernova, that isn't "wrong." Why should our actions be considered "right" or "wrong," when they are merely things which happen in the universe? This is how I, an Agnostic and Subjectivist, see morality. There is no basis for morality without an objective standard that is either observable or mandated. And when you look at it this way, morals based on reason seem empty.

You can read my last post to see what I think about morals derived from reason alone, which is what a lot of Atheists would be into. The basis for morality for most Atheists and Agnostics is either function (based on living in society), instinct (just being nice because you feel you should), or both. The distinction I would make is that there are two bases for morality: one for what you think should be done, and one for what you actually practice. For the first, I have no basis, because I believe nothing. But for the second, my basis for morality is a combination of function and instinct. I don't believe that the morals I act upon are "right" or "wrong," but I live by them anyway because they work for me, and they feel good.

The more I think about it, the more I sound hypocritical. I think there is a major difference, though, in the way I approach morality and the way most people do. There must be a difference somewhere in there between choosing a moral code to live by...and living to find out that what you do is close to a moral code someone else thought up through reason. After all, I'm not doing this out of any feeling that it is right or wrong, or that reason demands I do it. I just do it. I'm sure someone could argue that I'm working off of a reason-based ethic, but I realize that reason won't always lead one to a single moral theory. I realize that an objective moral standard could actually exist. Function, unlike reason, just means that I do what works so that I can survive. It's something that benefits from reasoning, but they aren't the same thing. So...unless someone can tell me why I'm wrong in saying this, I think I am not a hypocrite.

I think that if there's one thing we can take away from this, it's that one must first define what a basis for morality would have to be. If we go by "what we should do," then it is arguable that neither Atheists nor Agnostics have a basis for morality. If, however, we go by "what we actually practice," then it is clear that almost everyone, whether Atheist, Agnostic, or theist, has a basis for morality in function and instinct. You can say that Atheists and Agnostics don't have a basis for believing in moral value, but it is completely ignorant to say that they have no basis for acting in a moral way.

October 24, 2013

Ethics Commentary Part VII: A Satisfactory Moral Theory

Chapter 13: What Would A Satisfactory Moral Theory Look Like?

In the final chapter of The Elements of Moral Philosophy, Rachels does something odd. He gives some guidelines for moral theories, as well as his own moral theory. While I do think it is good to contextualize the opinions he gives in his book, it would be nice if he owned up to this stuff in the beginning, so readers can get a better understanding of what his bias might be before they read his criticisms of other theories.

Rachels starts this chapter by saying that many theories have been put forth, but have met with "crippling objections" (by which I suppose he means whether or not they allow us to disagree on things, and the fact that some of them have different premises than his own--can't have that, can we, Rachels?). He says that some people refuse to give an answer because we don't know enough to reach the "final analysis," but that we do know a lot. I think this is presumptuous of him, since there is no way to "know" anything, so as far as that goes we have been in the same spot for a long time. What would we need to know in order to come up with a final analysis? Whether morality, as an objective good, exists. This would basically solve a lot of philosophy, though even this would be problematic. So I think his point here is...pointless. We don't need to know very much to come up with a satisfactory moral theory, because all moral theories--so long as they have internal validity--are satisfactory.

Then he gets worse by talking about reason giving rise to ethics. Sure, maybe that is the case in a lot of theories, but it isn't a known fact. He's basically recapping all the disdain he has for any theory that didn't come from the Enlightenment and Revolution eras. And again, he says something stupid about Psychological Egoism: "If Psychological Egoism were true--if we could care only about ourselves--this would mean that reason demands more of us than we can manage. But Psychological Egoism is not true; it presents a false picture of human nature and the human condition."

And no, he doesn't give his reasons for that statement here. Rachels just doesn't know when to give up. Psychological Egoism would account for everything that reason leads us to. Psychological Egoism accounts for all human actions, necessarily. There are always ways to link human actions back to it. Yes, this means it cannot be falsified, but that test of scientific theories shouldn't mean that a theory is wrong. The ideas we come up with due to reason are usually ones which help us, help others we care about, help other people help us, give us peace of mind, etc. And why should it count against Psychological Egoism if, indeed, "this would mean that reason demands more of us than we can manage?" It's possible for reason to demand something we can't do. If this is right, and Psychological Egoism is right, that would merely mean that the theory tells us that we can't manage it, which would be good information to have. I don't think he really makes a good case for this, though.

Of course, he only focuses on disproving it. In order to disprove Psychological Egoism, you would have to find a non-religious psychopath, ignorant of what rewards he might receive for an act of kindness, who sacrifices his life for another person he doesn't care about, when he was otherwise going to enjoy a happy life, who also has no moral compass telling him that what he is doing is a good thing. That's a tall order, and even then you might find a few chinks through which Psychological Egoism has found a way in. There just isn't a way to disprove it, and so Rachels does his readers a disservice by dismissing the theory outright, and making blanket statements about its possible uses.

Then he talks about treating people as they deserve, basically taking Kant's view of the respect for persons and the Golden Rule mentality. Now, this kind of makes sense, in that helping people who help you will foster more helpfulness, but I think it's bad to just assume that we shouldn't help people who don't help us. To a certain extent, yes, helping them out in such a case would just encourage them to continue doing what they do. But sometimes people really need help to change their ways. I don't think something like this can really be made into a code of any kind, since it really just depends on the situation.

The next section is about motives, and how, as he puts it, "Only a philosophical idiot would want to eliminate love, loyalty, and the like from our understanding of the moral life." While it is true that most moral systems take these kinds of things very seriously, and they do make people happier, I don't see how he can objectively assert that. He says that, "If such motives were eliminated, and instead people simply calculated what was best, we would all be much worse off." Worse off in what way? By "calculating what is best," don't we pick what actually benefits everyone? I will say that calculating these things is a very problematic endeavor anyway, but he's not really backing up his points. This section basically says, "Don't take love and friendship out of morality, because humans really like that stuff." Why don't you just say that love is part of that calculation? We know that we will feel bad if we don't include it, and because that will make us worse off, we will include it, thus rendering this argument invalid.

And now he gets to Multiple-Strategies Utilitarianism. He says of it, "This theory is utilitarian, because the ultimate goal is to maximize the general welfare. However, the theory recognizes that we may use diverse strategies to pursue that goal." In this, he includes directly working towards general welfare, such as with charity, following rules that help everyone, and also exceptions to those rules (as well as criteria for these exceptions). He says that making a list of the things which would perfectly benefit everyone and promote the general welfare is probably impossible, but that we can still try to create a "best plan" for ourselves, as individuals, to follow. A best plan would include prohibitions against killing, stealing, lying, etc., but also an understanding of when you can break these rules. And everyone has a different best plan, based on their circumstances and idiosyncrasies. Basically, what he's done is combine Anscombe's criticism of the Categorical Imperative with Utilitarianism.

The rest of his chapter is pretty pointless, straightforward stuff. He ends by not answering the very question he started with. So what would a satisfactory moral theory look like?

The answer is simple: it would look like a moral theory. Whether or not it is satisfactory is entirely subjective. For me, I think the best moral theory is Subjectivism. Personally, I follow a combination of Subjectivism, Social Contract Theory, and Utilitarianism (with the assumption of Psychological Egoism mixed in). I act in order to keep the social structure in place, for my own safety. Towards others, I try not to do anything that will make them resent me, so that I can benefit from society. I also seek the promotion of general welfare, though it is mostly so that my welfare is looked after, and so that I can feel good about it. I think that if everyone were to act this way, the world would be a much better place. But, at all times, I remind myself of Subjectivist truths: that nothing can be known to be "moral," "right," "bad," "evil," etc.

I also challenge the Principle of Equal Treatment, and absolutism. Given my Subjectivist background, I think it is perfectly reasonable to change what moral rules you follow. For example, I do think that abortion is murder, but I reason that murder shouldn't have such a drastic prohibition. In society, it can cause unrest, and I'm against it in that capacity, but...I don't think it's wrong. We murder just by doing nothing: sperm and eggs are inside us right now, dying, unless we join them and make babies. And even when we have sex, there are still millions of sperm that die. Should we then demand that each sperm cell be harvested, constantly? There's no way to escape death as a reality of life. So while I may be against murder in the macrocosmic, public sphere, where it causes a lot of problems, I think killing a fetus is totally justifiable (and I'm not saying that everyone who wants one should get one--I realize that some women get very depressed after having it done, and that some women end up glad that they didn't have an abortion. I'm just saying that it should be an option. After all, I'm pro-choice, not pro-death).

In other instances too, though, I think it's okay to fluctuate. Even if there isn't a "logical" reason to do so, there is no reason why people shouldn't be able to choose to do whatever the hell they want. Should we treat everyone equally, unless there is some qualifying factor that changes that? Not necessarily. If I want to treat people badly for no reason, there is no real objective "moral" to look to that says I can't. Now, I don't really act on this, but objectively I think that including this in your moral theory would still make it satisfactory.

What I'm trying to get at here is that Rachels's view--that reason begets ethics--is total baloney. Reason works based on absolutes. Based on what you decide is real. If you change the absolutes, reason will give you a different answer. So, sure, use reason to develop your ethics. Use reason to change your ethics as you learn more about the world, and feel differently from time to time. Use reason to decide whether a moral theory has internal validity, or needs revision. But there is no way in hell that humans will ever be able to reason out the one true, objective moral theory.

October 17, 2013

Ethics Commentary Part VI: Feminism, Care, and Virtue

Chapter 11: Feminism and the Ethics of Care

This chapter starts out by talking about whether or not men and women think differently about ethics. Many people have answered this in different ways, but I think the best way to look at it is that society makes "male" and "female" constructs, such that most "male" people think one way, and most "female" people think the other. Individually, though, every person thinks differently from others. Gender is not a good way to make that divide, because it usually means we don't look at all of the other factors that make people believe things. That being said, I will admit that there are some trends worth discussing.

First, Rachels shows us Kohlberg's Stages of Development, and how he asked children of different ages to solve what is known as Heinz's Dilemma. Heinz's wife is fatally ill, and the only thing that can save her is a drug that is extremely expensive. Heinz cannot afford it, and the pharmacist will not accept any other arrangement. Should Heinz steal the drug, in order to save his wife? Kohlberg developed his six stages of moral development from the answers he received. They are:

"[O]beying authority and avoiding punishment (stage 1);
satisfying one's own desires and letting others do the same, through fair exchanges (stage 2);
cultivating one's relationships and performing the duties of one's social roles (stage 3);
obeying the law and maintaining the welfare of the group (stage 4);
upholding the basic rights and values of one's society (stage 5);
abiding by abstract, universal moral principles (stage 6)."

Personally, I think this is a ridiculous way to divide things--not only because they are not mutually exclusive, but because Kohlberg presumes that any stage is better than the preceding one. For instance, an absolutist might like these stages because they lead to universal moral principles, but by Subjectivist standards, that would be a silly thing to strive for, since morals are seen as subjective. Also, "the basic rights and values of one's society" might not be good things to uphold (though Cultural Relativists might like that stage).

Rachels picks two subjects of Kohlberg's study, Jake and Amy (both 11) to illustrate something about the difference between men and women. Jake, when asked about the Heinz Dilemma, says that he should definitely steal the drug, because his wife's life is worth more than money. It is a moral injustice that he isn't able to get her the medicine she needs, so stealing it is justified. Amy, on the other hand, is hesitant, and tries to think up other ways of getting the drug. She suggests borrowing the money, getting the loan, or just talking it out, rather than stealing. After all, if Heinz does steal it, and gets caught, he will go to jail, and be unable to get the medicine if she becomes sick again.

Kohlberg sees Amy as being at a lower stage of moral development and Jake at a higher one, since Jake works from impersonal principles while Amy is more concerned with personal relationships (though I think, since the stages aren't mutually exclusive, it could be argued that Amy is in a later stage, but that the moral principles she applied just aren't ones Kohlberg acknowledges). In 1982, Carol Gilligan objected to Kohlberg in her book, In a Different Voice, saying that while the two think differently, Amy's way of thinking is not inferior. Rachels summarizes that "Jake's response will be judged 'at a higher level' only if one assumes, as Kohlberg does, that an ethic of principle is superior to an ethic of intimacy and caring. But why should we assume that?" Amy takes into consideration a lot of the circumstances that could make the Dilemma problematic, such as Heinz going to jail, but Jake completely ignores them.

Then Rachels goes into whether or not men and women think differently. It is true that a male-dominated world brought about such logical fallacies as Kohlberg's Stages of Moral Development, but this is more of a cultural construct than a biological difference. Rachels shows this, and I applaud him for it. The thing he overlooks, because most people don't know about it, is that men and women, biologically, are not very different. All humans start out the same, and the only thing that really changes them is testosterone. The amount of it, how their body reacts to it, etc. This is what decides whether the sexual organ develops into a penis with testes, or remains a vagina with ovaries. The X and Y chromosomes might try to control this, but it is not a done deal, and this accounts for all the variations of human sexuality we see in the world: "heterosexuals," "homosexuals," "bisexuals," and the like (as well as the more obvious biological variations). Humans, however, are basically all the same, but with different paths of biological development, based mainly on the presence of that one substance.

So I would say that, if we want to use a biological argument, we should say that, generally, more testosterone leads to more aggression, which leads to a justice-oriented predilection. Less testosterone, on the other hand, leads to less aggression, and fosters a more caring-oriented predilection. "Men/women" and "masculine/feminine" are merely social constructs to easily categorize people. The real distinction is between people with more or less testosterone. And even then, more factors can be said to intrude, such as other genetic predilections, upbringing/family background, life experiences, and social context. As such, I would say that the gender differences we perceive, which are really just testosterone differences, only serve as an indicator for what someone's ethics are likely to be. There is no inherent quality of men and women (since they are social constructs anyway) that makes them lean one way or the other, but there is one in people in general. This is hard to determine, however, and so the social constructs can be useful. But there are far too many variables to simply assign all the responsibility of moral derivation to gender alone.

(For example, I'm pretty sure I have a normal level of testosterone for a "guy," but I am also an introvert. While many guys might think in a more justice-oriented way, my reluctance to engage in confrontation makes me less aggressive, and more likely to consider other possibilities for solving conflicts. When told about a problem, I still think logically about it first, finding an immediate solution, but I also take time to reflect on consequences, and the ways these decisions will affect others--mostly because I'm already in my head a lot, thinking about the ways other people's decisions affect me. Add to that my social awkwardness and separation (peers seeing me as "smart" and not being comfortable around me), the fact that my upbringing was mostly around my mom and sister, and the fact that I want to be a writer and thus know a lot about character development and plotting out how events will affect others... Eventually, a lot of these factors bring me to the point where I have to accept that I have something of a predilection towards "feminine" moral ethics of care. That doesn't make me any less a "man," but it makes me question the idea of separating genders in that fashion. We all have bits of masculine and feminine qualities, so really it just makes me human.)

Rachels uses the rest of his chapter to talk about ethics of care, and how they might be applied. What I will say about this is that our male-dominated culture can gain a lot by deferring to so-called "female" qualities every once in a while. For instance, I think more significance needs to be placed on therapeutic practices for solving social issues. Rather than making more laws to curtail trends in criminal activity, perhaps more support for people to get psychological help would better solve the problem, and increase the quality of life. I firmly believe that everyone in the world can benefit from therapy, because everyone has problems, all the time. Instead of the clichéd male/justice/logic position that you should only fix what's broken, perhaps it would be better to sustain things before they can get broken in the first place. People should be constantly trying to get better, rather than waiting until they make a mistake.

Chapter 12: The Ethics of Virtue

Rachels starts by introducing how Aristotle and other ancient thinkers saw ethics in relation to character. The question for them was, "What traits of character make someone a good person?" These traits became known as virtues. But then Christianity came along, and said that obeying God made someone a good person. Then, as the Enlightenment came around, this was replaced by a search for morals from reason. But some people have seen this last effort as a failure, and want to return to ethics of virtue.

As Rachels says, "A theory of virtue should have several components: a statement of what a virtue is, a list of the virtues, an account of what these virtues consist in, and an explanation of why these qualities are good for a person to have." Personally, I don't think that last one can be answered without recourse to another moral theory (such as Social Contract Theory), but that gets mentioned later, so I'll wait to talk about that. Rachels goes through each component.

Virtue, according to Aristotle, is a "trait of character manifested in habitual action," and specifically a good trait--one which would make others prefer to be around a person who has it. Rachels points out that we seek out people for different things, and so, depending on what we are looking for (his examples are an auto mechanic and a teacher), we look for different qualities. But beyond that, there are qualities we judge people by as people. So Rachels's restatement of Aristotle is that a moral virtue is "a trait of character, manifested in habitual action, that it is good for anyone to have."

So what are these virtues? Rachels gives a small list of some suggestions, like benevolence, civility, courage, fairness, honesty, tolerance, etc. Then he jumps right into what these virtues consist of. He says, "According to Aristotle, virtues are midpoints between extremes." First, he talks about courage: "a mean between the extremes of cowardice and foolhardiness--it is cowardly to run away from all danger, yet it is foolhardy to risk too much." Likewise, generosity falls between being stingy and being "extravagant," giving everything. Of course Rachels brings up generosity so that he can talk about Utilitarianism again, and how it sees generosity as something to further the total happiness of humanity. And then a section on honesty, which is always a troublesome virtue for absolutists.

In the courage section, Rachels gives a controversial example to show some debate on the subject: September 11, 2001. Bill Maher implied that the hijackers were courageous, and he lost his show because of it. Peter Geach said that "Courage in an unworthy cause is no virtue; still less is courage in an evil cause." However, Rachels puts forth a middle position: they were courageous in their "steadfastness in facing danger," but at the same time they took part in vice (killing innocent people). I would argue that one could make the case that they weren't courageous, but rather went to the extreme of foolhardiness. They had nothing to gain from this act, and their motivation for doing it was based on a very faulty interpretation of the Qur'an. They weren't facing an immediate danger--they sought it out. One could argue, perhaps, that their skewed view of what their religion taught brought them to see America as an immediate danger, and I guess that really just asks the question of whether courage is still courage if the motivation behind it is based on false information. Perhaps we could say that a prerequisite to courage is knowing beforehand that there is no other option to facing a danger? Or that the action that requires courage is the best option? In this case, their act would be foolhardy, since it disregards the Qur'an's call for peaceful resolution, and chooses an option that is highly destructive and ineffective.

Next, why are virtues important? Aristotle basically said that they're important because those who have them fare better in life. Obviously I have a problem with this, since that doesn't really give a reason beyond that of Ethical Egoism or Social Contract Theory. Then Rachels goes on to talk about whether or not virtues are the same for everyone. Should all people try to be like this? He gives a Nietzsche quote, which I will put in here because I love it:

"[H]ow naive it is altogether to say: 'Man ought to be such-and-such!' Reality shows us an enchanting wealth of types, the abundance of a lavish play and change of forms--and some wretched loafer of a moralist comments: 'No! Man ought to be different.' He even knows what man should be like, this wretched bigot and prig: he paints himself on the wall and comments, 'Ecce homo!' ['Behold the man!']"

Even though Rachels says this has its merits, he turns around and says that Aristotle's view that "certain virtues will be needed by all people in all times" was probably right. While these virtues might help people, though, that doesn't change the fact that Virtue Theory tries to do exactly what Nietzsche is criticizing. Rachels even goes so far as to say that an advantage of Virtue Theory is that it is based on character and not action--thus, instead of doing things out of a feeling of action-based duty, you do it because you have the virtues that go along with that action. His example (put forth by Michael Stocker) is of a man who visits a sick friend, but eventually lets him know that he only did it because he felt obligated to do so. The gesture seems cold now, because the friend knows he didn't do it because he wanted to, but because he thought it was a necessary chore. I think, though, that Virtue Theory could lead to the same problem. How can you practice such a moral theory without assessing all of your actions for whether or not they express a certain virtue? Such as, in this example, loyalty to friends.

At the end of the chapter, Rachels deals with the problem of incompleteness: the fact that Virtue Theory isn't very good at actually being a full theory. It doesn't explain very well why virtues are good, as I've already pointed out. It also doesn't tell us how to apply certain virtues, or which ones are more important than others. Rachels gives the example of getting a haircut. If you know your friend's haircut is bad, and they ask you what you think, what should you do? Should you be honest, and hurt their feelings? Or should you be kind, and dishonest? Which one takes precedence? To use a more drastic example, what about the terrorists on 9/11 I referred to earlier? They acted courageously in the face of death, but at the same time they were killing innocent people. If we take this to be a courageous act (which I'm not convinced we should), then does the virtue outweigh the vice, or does the vice outweigh the virtue? The theory doesn't give us a reason to decide one way or the other.

Rachels simply ends this chapter by saying that maybe this theory should be a part of other theories. And I have to say...duh. It already is. If you look at the definition of virtue, it is clear that every theory that talks about doing good runs on a similar value system for actions. So this chapter feels really pointless. All I really got out of it was the debate about what makes something courageous, and that's not even a big deal. I think Nietzsche was right, and that picking out a way for all people to be is definitely arrogant, and there is no real reason to pick certain virtues as essential.

October 3, 2013

Ethics Commentary Part V: Absolutism and Kant

The end of The Elements of Moral Philosophy leaves us with a few half-baked chapters. First we see absolutism, then Immanuel Kant, and then Rachels kind of drops feminism, ethics of care, and ethics of virtue in there haphazardly. Much of the book's substance is over by this point, and our class finished with the book at chapter 10. Still, now that I have read some more stuff dealing with these latter topics, I actually feel like continuing with this critique of Rachels.

Chapter 9: Are There Absolute Moral Rules?

My answer: No, and there shouldn't be. Rachels starts out detailing the end of World War II, and specifically the atomic bombs dropped on Hiroshima and Nagasaki. Whereas most people see this as a very bad move by the US, it did stop the Japanese. It could be argued that fewer lives were lost by doing that than would have been lost if the fighting had continued. Whether that's true or not can never be known. The difference, of course, is that at least those people who would've died would've known beforehand that it was a likely thing, and/or they would've been risking their lives willingly. Rachels points out that there was such consideration at the time, since obviously people don't like the thought of killing innocent noncombatants.

But Elizabeth Anscombe, whom Rachels calls, "one of the 20th century's most distinguished philosophers, and the greatest woman philosopher in history," says that such an act was tantamount to murder. Of course, no one really argues against that fact. Maybe it was a pacifying move, one that potentially saved more lives than it ended (taking it back to a Utilitarian perspective), but it definitely is still murder. However, Anscombe goes so far as to say that certain things should never be done, no matter what. "Come now," she says, "if you had to choose between boiling one baby and letting some frightful disaster befall a thousand people--or a million people, if a thousand is not enough--what would you do?"

I always thought this was a dumb example. Obviously you'd boil the baby, since not doing so would be murder as well. Anscombe basically asks, "would you give one innocent infant a horrible death, or give thousands/millions of people horrible deaths?" If you have no other choice, the Utilitarian response makes the most sense. Her insistence that "some things may not be done, no matter what" doesn't make sense. As I pointed out, and which Rachels even briefly hints at...absolute moral rules don't make sense unless there is an accounting of our actions after death. If what we do is best for everyone, does it really matter what rules we broke? Absolute moral rules only make sense in a theological context, where our strict adherence will eventually benefit us (and then it is a form of Ethical Egoism, in a way). If we allow for something like Divine Command Theory, then Anscombe would probably be right. I don't think she really gives a good reason why the rules have to be absolute, though.

Next, Rachels brings out Kant. If you've never read Kant, be warned. If you never will, count your blessings. He is, by far, one of the toughest writers to understand. I actually had to look at some of his stuff for my Intro to Literary Studies class, and I had no fucking clue what he was talking about until we discussed it in depth. Luckily, Rachels provides a very simple way of grasping his core ideas. Kant, as he points out, "argued that lying is wrong under any circumstances. He did not appeal to theological considerations; he held, instead, that reason always forbids lying." First, he shows Kant's imperatives. The "hypothetical imperative" is the thing you "ought" to do if you want to achieve a goal. Examples: "If you want to become a better chess player, you ought to study the games of Garry Kasparov. If you want to go to college, you ought to take the SAT." They are simply reasonable things to do if you want a certain outcome. You aren't morally bound to do them.

But the Categorical Imperative is another story. Moral oughts, as he says, are categorical: "They have the form 'You ought to do such-and-such, period,'" regardless of your desires. Whereas "hypothetical 'oughts' are possible because we have desires, categorical oughts are possible because we have reason." The Categorical Imperative, in one of its forms, is stated thus: "Act only according to that maxim by which you can at the same time will that it should become a universal law." Whenever you do something, ask what it would mean if everyone in the world did the same thing in your position. An example: "Suppose, he says, a man needs money, but no one will lend it to him unless he promises to pay it back--which he knows he won't be able to do. Should he make a false promise to get the loan? If he did, his maxim would be: Whenever you need a loan, promise to repay it, even if you know you can't. Now, could he will that this rule become a universal law? Obviously not, because it would be self-defeating. Once this rule became a universal practice, no one would believe such promises, and so no one would make loans based on them."

Which is a sound argument. Now, let's return to lying:

"1. We should do only those actions that conform to rules that we could will to be adopted universally.
2. If you were to lie, you would be following the rule "It is okay to lie."
3. This rule could not be adopted universally, because it would be self-defeating: People would stop believing one another, and then it would do no good to lie.
4. Therefore, you should not lie."

Then Anscombe, while she does believe lying is wrong, actually redeems herself as a philosopher by pointing out an error. "Why should we say that, if you lied, you would be following the rule, "It is okay to lie?" Perhaps your maxim would be: 'I will lie when doing so would save someone's life.' That rule would not be self-defeating. It could become a universal law. And so, by Kant's own theory, it would be all right for you to lie. The Categorical Imperative is useless, Anscombe says, without some guidance as to how to formulate rules." This isn't necessarily a problem for the theory, since if we allow for this nuance of discretion, then we would end up taking each moral issue on a case-by-case basis. This new branch of the Categorical Imperative could create a set of absolute rules that don't generalize. Generalization, after all, leads to hasty decisions that gloss over the specific circumstances, not giving proper consideration to the consequences of your actions.

Another case is that of the Inquiring Murderer. It was a challenge given to Kant by his contemporaries, in which "someone is fleeing from a murderer and tells you that he is going home to hide. Then the murderer comes by and asks you where the man is. You believe that, if you tell the truth, you will be aiding in murder. Furthermore, the killer is already headed the right way, so if you simply remain silent, the worst result is likely." In this scenario, Kant still says you should tell the truth, because you can't be sure that lying will save the man's life. It is better, then, to avoid the known evil of lying, and let whatever consequences there are happen. Rachels is good enough to point out the huge flaw in this logic: that no one knows what will happen either way, and Kant is being rather pessimistic about what lying can achieve. After all, if you tell the truth, there is a very good chance that the murder will occur, while lying exponentially increases the probability that it won't. Giving exceptions as examples against lying makes no sense, because the whole point of lying is to make the murder less likely to happen. As Rachels says, "This points to the main difficulty for the belief in absolute rules: shouldn't a rule be broken when following it would be disastrous?"

The next section talks about what would happen if you must choose between two wrong actions. If you can only choose between two actions, and both are things that absolute rules say are wrong to ever do, what choice should you make? Peter Geach, Anscombe's husband, says that such situations do not occur. His argument is that God does not allow such things to happen. Rachels gives an example of when it did happen: Dutch fishermen, during World War II, smuggled Jewish refugees to England in their boats, and were stopped by a Nazi patrol. They were asked who was on board. Now, they had two choices: either lie, and save the Jews, or tell the truth, and let them get killed. Obviously, either choice breaks an absolute moral rule (lying or aiding murder). Rachels gives one limitation of this argument: it only works when there is a pair of alternatives. As he says, "The argument won't stop someone from believing that there is just one absolute rule. And, in a way, everyone does. 'Do what is right.'" He goes on to say that such a rule, "Do what is right," is "so formal that it is trivial--we believe it because it doesn't really say anything." While I agree that it is a very vague thing, it at least solves problems like these, turning two absolutes into one, and thus making the prohibition against killing more important than the one against lying.

Rachels then has a very silly section where he talks about what we can salvage from Kant's philosophy. He points to the idea that we should only accept those rules that we can accept everyone following all the time, and I guess that makes sense, but I wouldn't say that this is anything new. Social Contract Theory was pretty much the same--better, even, because it allowed for change in its formulation, whereas Kant's idea is more strict. So Rachels is okay in this area, but I still think it gives Kant too much credit.

None of this is to say that absolute moral rules are bad in and of themselves. It's not really a theory, but I still think that objectively this idea has its merits of internal validity. If Divine Command Theory is right, for instance, then absolute moral rules would work. If you work from a non-religious viewpoint, they could still be true. Since we don't know objectively what the right moral code is, absolute moral rules still have a chance of being true. (Plus, you could say that most theories follow a variation of absolute moral rules, in that they all tackle what it means to "Do what is right," and basically just take Anscombe's idea about how formulating rules doesn't have to be such a broad, general thing.)

Chapter 10: Kant and Respect for Persons

Then he has another chapter just for Kant: not a pointless demarcation like he did with Utilitarianism, but rather one that focuses on completely different aspects of Kant's philosophy. As Rachels says in the beginning of this chapter, "Immanuel Kant thought that human beings occupy a special place in creation." I remember when I used to think the way he did... Basically, he thought that only humans had moral worth, because they could reason. Animals don't deserve the same treatment, and are merely here for humans to enjoy. However, he didn't say animal cruelty was okay. Rather, he said that "'He who is cruel to animals also becomes hard in his dealings with men.'" So the only real reason hurting animals is bad is because it makes us meaner towards humans. I don't really think that makes sense, though, since we have a better capacity for compartmentalization than he credits us with (if I kill a cat, I'm not automatically going to kill the next human I see).

In this chapter, I agree with Rachels's criticism of Kant: in most cases, Kant makes flimsy arguments. This is probably the hardest place for me to stay objective, since I pretty much hate Kant. But his ideas do point out something important I want to highlight.

He believed that humans have moral worth because they are rational agents, and so have the capacity to value things. Basically, the fact that we can value things gives us some sort of intrinsic moral value. Or, said another way, the ability to realize moral value gives us the responsibility to act in a moral way, but only towards other rational agents. After all, only rational agents can reason with you/hear you out/care about morality. In another formulation of the Categorical Imperative, Kant charges, "Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means."

What does that mean? Basically, don't manipulate people. Kant's example is of a man needing money (yet again) who asks a friend to loan him some, even though he will not be able to pay it back. If he lies to his friend about being able to pay it back, he is manipulating her, and thus using her as a means. If, however, he is truthful with her, she may decide whether or not to loan him the money. In this instance, he is treating her "as an end," giving her the ability "to make that purpose her own," as Rachels puts it. She will be able to make that end her end as well, instead of merely being a means towards it. This is respecting her rational agency.

The next section talks about retribution, and Rachels leads it off by talking about how some people see punishment as bad, since it doesn't change the fact that pain was already caused, and it merely inflicts more pain. He goes into how Utilitarianism handles this issue, saying that it's okay if it causes enough good to outweigh the bad. Whether it does is debatable all around, of course, and changes based on the circumstances. The point of Utilitarianism being part of the discussion makes a little more sense here, since it was the main philosophy around Kant's time. Kant didn't like Utilitarianism, because he said it was incompatible with human dignity. It sees people as a means to an end (happiness), and the rehabilitation process it fosters changes people, making them into what society wants them to be, instead of what they want to be. It forces them to be a means towards a different end than the one they want.

Now, I do agree that society itself changes people in such a way (look into the Panopticon and Michel Foucault for this idea). But as far as rehabilitation goes? It changes people, sure, but only in such a way that they are able to better respect others. You'd think Kant would be in favor of it for that very reason. It helps people be rational again! I guess there are good arguments you could make on either side, and I don't think there is any clear answer to this in Kant's worldview. Kant goes on to say that people should instead be punished solely for their crime, and that their punishment should be proportionate to it. I think it's kinda pointless to say that someone gets punished just because they committed a crime (it creates an obvious implication that they shouldn't do it again, and thus does exactly what Kant doesn't want it to do). The second point, however, is something I like. I don't like how far he takes it, though.

It makes perfect sense that worse offenders should get worse punishments. But it doesn't make sense that murderers should always get the death penalty. First of all, obviously not all people accused of murder are actually guilty, and that alone means we should be very, very careful with that form of punishment. Second, there's the age-old argument that an eye for an eye makes the world blind: if we murder a murderer, we become murderers. Third, it takes away their ability to change. What if they rationally decided not to do such a thing ever again? In effect, killing them then would only be using them as a means to reinforce the idea that you shouldn't murder, instead of allowing them to make "not murdering" their end as well.

Then Rachels goes on to give some more ideas Kant has about this. He brings in the idea of responsibility: that since humans are rational, they are responsible for their actions. Reward and punishment, according to Kant, are the only right ways to show your gratitude or resentment towards the rational decisions people make. Then he goes on to say, "Why should you treat everyone alike, regardless of how they have chosen to behave?" He wants people to respond in kind to others, basically making a Golden Rule philosophy. If someone does something terrible to us, they are basically telling us that that is how they want us to act towards them. Rachels makes a good point against him: why should we stoop to their level? If they think it's okay to be mean to us, then Kant would say it's okay to be mean to them. But then wouldn't they assume that our conduct meant that we saw that as an acceptable way to act towards us? It's a vicious circle. If we try to be the better man, though, then they should see that as the way to act towards us, right?

Kant also says that people shouldn't be punished if they aren't rational agents, the same way you wouldn't put animals in jail for acting out. This applies to the mentally handicapped, of course. I can at least credit him with this, but overall I think he puts way too much significance on rationality. Like I said in my last post, the distinction between man and animal is arbitrary: Kant is a blatant speciesist. But does his theory have internal validity? Maybe. I'm not sure what to say about it as far as that goes, since there are a lot of points where his logic doesn't really make sense, but it's possible, I guess.

What I do want to point out, though, is that Kant presumes that intrinsic value exists. That humans have a value above other things because they can reason, and that we should respect that value. It doesn't follow, though, that humans have intrinsic value, or that value comes from rationality. In my opinion, the only reason we believe anything has value is because certain things are beneficial to our survival. As soon as we decide that something is good because we need it to live, we assume that it has intrinsic value, and create hierarchies based on this assumption. But what, really, makes anything valuable? I think that if you consider Kant's viewpoint, you really have to ponder that.

Many people point to a god as the creator of value. If God values something, and he's perfect, what he says about value must be true. But if you go by that argument, you have to ask another question: what gives God value? And what makes that value valuable? Really, this is a pointless question, since no amount of asking these questions will get us to some concrete value system that creates all other subordinate values--unless you take the whole God-is-eternal-and-so-is-value argument, which is just a silly way to get around the question without really giving it consideration. You would still have to answer why that value is valuable. Personally, I think value is always something applied, a measure of how much someone wants, needs, or likes a thing. Nothing is valuable in and of itself. If a God-given intrinsic value did exist, though, I'm not so sure Kant's respect for persons would be the right moral philosophy.

October 1, 2013

Ethics Commentary Part IV: Utilitarianism

Chapter 7: The Utilitarian Approach

Halfway through the book now (page-wise), we get to one of the more instinctual philosophies, and one which is closer to Rachels's own. Utilitarianism, plain and simple, says that the only thing we should consider in morality is whether our actions will cause happiness or pain. If someone/something can consciously feel pain, we are obligated to consider his/her/its interests in our decisions. The goal is to be a positive force, creating a higher amount of happiness than pain in the world. Now, this kinda feels arbitrary to me, since there is nothing making happiness "good" or pain "bad." How do we know it's not the right thing to maximize pain rather than happiness? But, of course, the theory doesn't try to explain that (and lest I be like Rachels, I won't discredit it for that fact).

Rachels gives three examples to illustrate the Utilitarian approach. The first is euthanasia. He talks about Sigmund Freud's death, one of the more famous cases of this practice. He mentions how traditional views of murder shape the way we tend to look at this issue, particularly since Christianity has such a huge influence on our culture's moral ideals. By assuming that killing is always wrong, we want to say euthanasia is wrong (of course, you could point to war and self-defense as hypocritical allowances we make). But, of course, from the Utilitarian perspective, such an idea as "killing is always wrong" is itself wrong. If killing would cause more happiness than pain, then Utilitarianism would say it's the right thing to do. If a war would save more lives than it destroys, or will root out a problem which outweighs those lives, then it is just. If killing one person will save millions, it is a just act. And if killing someone can end an unimaginable amount of misery that cannot otherwise be alleviated, then it is just. Rachels even gives Jeremy Bentham's point that a benevolent God wouldn't want someone to suffer before dying for no reason.

The second example is marijuana. While some simply say drugs are bad/wrong, Utilitarians would look at the effects. People get a good feeling from drugs, and besides some ill effects that should be taken into consideration, the major downside is the way people who use them are treated. The government has come down on drug use rather harshly, even though marijuana, on the whole, isn't as negative a force as alcohol (drunks have worse driving impairment, have to deal with hangovers, and are more violent. As Rachels puts it, "One possible benefit of legalizing pot would be fewer alcoholics"). The only real reason marijuana use causes pain is because of the system stacked against it. With such restrictions lifted, marijuana wouldn't cause as many problems as it does today. (I disregard addicts because those'll be around either way.) Legalizing marijuana would increase happiness and decrease pain, and so Utilitarianism is in favor.

Third, animal rights. Here's where the "something" part of my first paragraph comes into play. Animals may not be humans, but they still feel pain and pleasure. To Utilitarians, animals have the same moral weight as humans because of this. Again, religion might lead some to think otherwise, what with the whole natural order and God giving humans "dominion" over the Earth in Genesis and whatnot... But even with those considerations, it could be wrong to be cruel to animals. Rachels goes into experimentation and treatment of animals before we slaughter them for our sustenance. In all of these cases, we treat animals cruelly, disregarding the pain they feel. Utilitarianism "insists that the moral community must be expanded to include all creatures whose interests are affected by what we do."

And as long as I'm on that topic, I'll connect it to the Principle of Equal Treatment. Is there a "relevant difference" between humans and animals? No. Because humans are animals. Every animal is different. Just being more cognitively developed than the others doesn't make us "better" or more worthy of moral weight and deference. Anyone who thinks we deserve better treatment than animals is simply a speciesist...and in my next post, I'll tackle one philosopher who falls into this category, and is rather pointed about it.

Chapter 8: The Debate over Utilitarianism

Another chapter on Utilitarianism? Yup, Rachels seriously just did that (instead of merging the two, which probably would've made more sense). So, what is the debate over Utilitarianism? After all, it seems a pretty straightforward philosophy: do what makes everyone happy. But, of course, many people are able to come up with situations in which this is not favorable, and so they'll discredit it. This chapter is devoted to Rachels's defense of the philosophy, basically.

One interesting example is the first Rachels gives: "You think someone is your friend, but he ridicules you behind your back. No one tells you, so you never know. Is this unfortunate for you? Hedonists [or Utilitarians] would have to say it is not, because you are never caused any pain. Yet we feel there is something bad going on." Yes, we feel... In this case, I agree with Utilitarians. Life is what we perceive it to be, after all. If we think they're our friends, and we couldn't possibly tell the difference between a good friend and a bad one, then as long as it doesn't hurt us, they may as well be good friends. Now, I don't condone this, since it would be really hard to pull off in most cases. I wouldn't do it myself either, since... Well, first of all, I couldn't pull it off since I'm so big on honesty, but my other point is that I would feel guilty, and so I would feel bad. As it applies to Utilitarianism, though, it works perfectly, and is just another one of the theory's fun little idiosyncrasies. Rachels goes on about how this aspect of the theory, hedonism, is sometimes eschewed by Utilitarians, though he doesn't really delve into that too deeply. At least he's being fair enough to be objective on that point.

Then he gives more examples. If race riots are going on because of a crime committed by a black man, should you testify against an innocent man in order to dispel the riots/lynchings and lessen the overall pain? Rachels says here that this could make Utilitarianism incompatible with justice. Which, though it doesn't come directly from him, is another example of a useless claim. The second example is about Peeping Toms, though the actual case given is one of manipulation, and doesn't really help the argument. But then he talks about Peeping Toms in general: if they're never detected, Utilitarians would say there's nothing wrong with it, since no one is hurt by it--in fact, only net happiness is gained. Basically, these two point out that the theory disregards justice and rights. But really, the point of Utilitarianism isn't justice, but happiness.

He goes next to "backward-looking reasons," which is just ridiculous. If you made a promise, but breaking it would bring more happiness, then Utilitarianism would say to break the promise. Now, this whole thing is pretty shaky, since keeping promises itself brings happiness, and breaking promises really pours on the pain in a lot of relationships. So I'd say there shouldn't be a definitive rule about this. Then there's an argument that Utilitarianism is "too demanding." That if we try to create the most happiness we can, we should always give our money to the poor, we should always help others even when it inconveniences us, etc. We would never do anything for ourselves, basically. Not that this is a problem, really, since...if everyone acted like that, eventually you wouldn't be giving up too much, because everyone would be at the same level. After that, he ends the section with a point about Utilitarianism saying we shouldn't value our own family above others. Which is true, we shouldn't. We only do that because our brains are wired that way, and as animals we tend to take care of our own before helping others. Of course, the pain and negative psychological effect of a damaged familial bond could tell Utilitarians to value them more anyway, so...yeah, that argument doesn't even work.

Now we come to Rachels's defenses of the theory. Citing the above examples, he says, "this strategy succeeds only if we agree that the actions described really would have the best consequences." Good ol' Rachels, he agrees with me. In the race riots case...what if they found out about your false testimony? That would bring about even more pain than telling the truth! But Rachels thinks this isn't enough, for some reason, calling this defense "weak." Cue the rolling of eyes. 

The second defense is that Utility could be a guide for rules, and not actions. "If what we care about is the consequences of particular actions, then we can always dream up circumstances in which a horrific action will have the best consequences." Rule-Utilitarianism, as this new branch is called, holds that instead of looking at each individual action, we should look at the rules we stipulate when determining morality. We should "ask what set of rules is optimal, from a utilitarian viewpoint." That is, what rules would create the most happiness? And then all acts would be judged by those rules. But, as Rachels does point out, this would basically be "rule worship" or, as it deserves to be called, absolutism. And if we ever decided that there could be exceptions to rules...it wouldn't be Rule-Utilitarianism anymore.
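To make the act/rule distinction concrete, here's a minimal sketch with toy numbers of my own (none of this comes from Rachels): an act-utilitarian scores each action by itself, while a rule-utilitarian first picks the rule set with the best overall payoff and then judges actions by whether they conform to it.

# A toy contrast between act- and rule-utilitarian evaluation.
# All utilities are invented numbers standing in for "net happiness produced."

def act_utilitarian_choice(actions):
    """Pick whichever single action yields the most net happiness."""
    return max(actions, key=actions.get)

def rule_utilitarian_allows(action, rule_sets):
    """Pick the rule set with the best long-run utility, then ask if it permits the action."""
    best = max(rule_sets, key=lambda name: rule_sets[name]["average_utility"])
    return action in rule_sets[best]["permitted"]

actions = {"lie to spare someone's feelings": 3, "tell the hard truth": 1}
rule_sets = {
    "never lie":   {"average_utility": 10, "permitted": {"tell the hard truth"}},
    "lie at will": {"average_utility": 4,  "permitted": set(actions)},
}

print(act_utilitarian_choice(actions))                                       # lying wins act-by-act
print(rule_utilitarian_allows("lie to spare someone's feelings", rule_sets)) # False: the optimal rule forbids it

The contrast is the whole point: the same act comes out permitted under one method and forbidden under the other, which is exactly why Rachels calls the rule version "rule worship."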

The third--and best--defense he gives is also a hypocritical one. As I said in my last post, Rachels doesn't seem to grasp the internal validity of the philosophies he debunks. Lo and behold, the next section is titled "'Common Sense' Is Wrong." I wouldn't have too much of a problem with this normally, since it was a Utilitarian he quoted when bringing up this subject, but he doesn't even mention that this could be applied to other philosophical debates. This gives the reader the feeling that Rachels is shooting down other theories for being against common sense, but once his own comes under attack, he says to throw common sense out the window. What really bites is when he says, "Utilitarianism is a radical doctrine that challenges many commonsense assumptions. In this respect, it does what good philosophy does--it makes us think about things that we take for granted." Things that Subjectivism made us think about, perhaps? Or Cultural Relativism? Believe me, Rachels, those are much more radical than this. Then he has the gall to say that the man he quotes "is right to warn us that 'common sense' cannot be trusted." What I've been saying all along!

Then he goes into detail about this idea. First he says that those things we see as "always wrong" can still be seen as wrong, but you just have to look at why. Utilitarianism gives a reason: for example, lying hurts people and relationships when found out. It's not wrong "in itself," but wrong because of the consequences it can bring about. Second, he talks about our gut reaction being wrong, which it usually is. We might immediately associate certain acts with bad results in the past, but that doesn't necessarily mean they will have the same results under different circumstances. And third, we should consider all the consequences. While convicting an innocent man may seem "unjust," you also have to think of the fact that you are saving those innocent people who would've been hurt by the continuation of race riots.

Another thing I'd like to bring up about Utilitarianism, which will illustrate just how far from common sense it is, is that it sees happiness as a quantity. Jeremy Bentham actually came up with an equation for it! Tim Hansel told our class about two cases. In the first, you are on a trolley that is headed downhill, and it goes out of control. You can steer it, but there is no way of stopping it. Ahead of you, there are two paths: down one path, there is a group of five people--down the other, there is only one person. You must decide which path to put the trolley on. A Utilitarian would say that you should pick the path with one person on it, simply because there would be only one death, and therefore less pain. Common sense would say that either choice is terrible, and people almost always try to give some scenario in which they miraculously stop the trolley, warn the people, or what have you. Of course, no one really picks killing the five people instead, unless they're a "bad person" or something. Still, there's always hesitation when killing is involved. But for a Utilitarian, the choice is clear.

The second is similar to the first, but with a twist. You are a surgeon, and have five patients, each missing one specific organ, no two missing the same one. You only have an hour left to save them, but no donors are available. Then a man comes in, Chuck, as he's usually called in this story. Chuck just came by to get something, but you see an opportunity. Chuck is perfectly healthy, and has each organ you need. It would be easy to kill Chuck and give his organs to your patients, saving their lives. Or you could leave Chuck alive and let your patients die. A Utilitarian (at least a hardcore Utilitarian) would have to say that you should kill Chuck and take his organs. After all, you're low on time so there aren't any other alternatives for saving these people, and killing one person will cause less overall pain than letting five people die.
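Just to make the happiness-as-a-quantity idea concrete, here's a minimal sketch (my own toy scoring, not Bentham's actual calculus and not anything from Rachels): treat each death as one unit of pain and each life saved as one unit of happiness, sum them up, and pick the option with the highest total.

# A toy "hedonic arithmetic" for the two cases above; the numbers are assumptions.

def utilitarian_pick(options):
    """Return the option whose outcomes sum to the highest net utility."""
    return max(options, key=lambda name: sum(options[name]))

# Trolley case: each death counts as -1.
trolley = {
    "steer toward the five": [-1] * 5,
    "steer toward the one":  [-1],
}

# Transplant case: five patients saved (+1 each) versus Chuck killed (-1).
transplant = {
    "leave Chuck alone":   [-1] * 5,
    "take Chuck's organs": [+1] * 5 + [-1],
}

print(utilitarian_pick(trolley))     # "steer toward the one"
print(utilitarian_pick(transplant))  # "take Chuck's organs"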

Now, the Chop-Up Chuck case is a bit more controversial, since you aren't being forced by gravity to make a split-second decision, and it opens the door for more problems. Would we kill everyone that fell into this situation, in order to keep the majority happy? But the real question is...would that be wrong? Personally, I don't think it would be wrong, but I sure as hell don't like it. Which, of course, was the gut reaction this case was meant to illustrate. It says nothing against Utilitarianism, though. The theory makes enough sense, and I found myself liking it as we discussed it in class. I wouldn't live by it, but it does share similarities with my own approach to life (as in the fact that I try not to cause harm to others if I can help it).

Tom Regan, who wrote an animal rights article we read in class, attacks Utilitarianism, saying, "[A] cup contains different liquids—sometimes sweet, sometimes bitter, sometimes a mix of the two. What has value are the liquids: the sweeter the better, the bitter the worse. The cup—the container—has no value. It's what goes into it, not what they go into, that has value. For the Utilitarian, you and I are like the cup; we have no value as individuals and thus no equal value." Quite aptly, he points out that the theory doesn't care about the individual, but rather the feelings the individual feels. Stuff like this detracts from my respect for the theory, since it just makes it seem arbitrary, as mentioned before. But I don't know if it's fair to say that it doesn't value the individual because it values their feelings only. It also values their interests as they affect their happiness. So maybe it's not really that simple.

I'll end this section here.

Even though it took a while, Rachels finally says we should get rid of common sense. Which was obvious from the start...but I'm not going to end this by talking about his inconsistency. Instead, I will address the questions, "Whose interests matter? Why do they matter?" From a logical point of view, it makes sense to say that only personal interests matter: we are subjective beings, and we only know for a fact that we exist. So why care about anyone else? If I'm going to die some day, why shouldn't I just go out and do whatever pleases me? The Social Contract suggests we should all cooperate so that we all benefit. Ethical Egoism says benefiting only ourselves benefits everyone in the long run. And Utilitarianism says only the ratio of happiness to pain matters, and thus we should take everyone's interests into account.

Whichever view you take, you cannot deny the effect interests have on other interests. My interests won't always coincide with yours, and sometimes people's interests will be in direct conflict. I would side with Social Contract theory and try to find the best option for everyone, since I'm on the side of rational behavior. Not to mention that the Social Contract actually gives us a good reason to follow the law and work together for a better society. Not to say that the laws we have are always right. But this gives us a way to tackle that sort of thing: changing the laws so that they benefit everyone. For that reason, I think Social Contract Theory should be taught widely, and discussed more often. It would be in everyone's best interests, after all.

September 25, 2013

Ethics Commentary Part III: Egoism and the Social Contract

Continuing from my last post, I come to the middle of The Elements of Moral Philosophy, and two chapters which come very close to the kind of philosophy I like. (After these, Rachels spends two chapters on Utilitarianism, a philosophy close to the one he believes in.) Of course, he brings up a point that I think makes him a bit hypocritical, so I will mention that. This part of the book focuses mainly on interests: whose interests are important, what kind of interests are important, why interests matter in philosophy, and some of the main approaches to how we should balance interests.

Chapter 5: Ethical Egoism

This chapter looks at two different kinds of Egoism: Psychological and Ethical. The former is a more scientific idea, whereas the latter is a moral code. Psychological Egoism states that everyone works towards their own self-interest, all the time. We are always doing either what we want to do (which makes us feel good), or what will help us (making us feel good in the long run). Ethical Egoism, on the other hand, is the philosophy that we should only do what benefits us. It is a morality that values only self-interest. Instead of helping others, Ethical Egoism calls on us to only help ourselves. Rachels gives the distinction: "It is one thing to say that people are self-interested and so our neighbors therefore will not give to charity. It is quite another thing to say that people ought to be self-interested and so our neighbors ought not to give to charity."

Psychological Egoism has a lot going for it. It's an evolutionary idea; whatever creature does those things which lead to its survival...survives. So it only makes sense that, at this point in our development, we should be wired such that we seek out our own interests. If we only sought others' interests, we wouldn't survive in the wild, would we? (Sure, you could point out caring for young, but that's an extension of the self.) DNA looks out for its own, and in order for the species to have a good chance of continuing, each individual protects their own interests, so that there are more chances for reproduction.

Rachels doesn't actually touch on that, though. His first example is of Raoul Wallenberg, a Swedish businessman who, during World War II, volunteered to go to Hungary to persuade the government to stop sending Jews to death camps. When the government was taken over by the Nazis, and deportation continued, Wallenberg still helped Jews by issuing them Swedish Protective Passes, finding them places to hide, standing up for them when they were found, etc. He is credited with saving as many as 15,000 lives. Rachels gives numerous other examples, such as when people build homeless shelters, volunteer in hospitals, read to the blind, and give money to charity. But none of these things are problematic for Psychological Egoism. At all.

As for Wallenberg, Rachels gives one interpretation against altruism: "According to some of Raoul Wallenberg's friends, before traveling to Hungary, he was depressed and unhappy that his life wasn't amounting to much. So he undertook deeds that would make him a heroic figure." He even points out that Mother Teresa thought she was going to heaven for her acts of faith. Similarly, people may do other things, such as every example he gave previously, to improve their reputation or get a reward for the act. People volunteer because it makes them look good, especially on applications. Rich people give to charities because it makes them look good (think about it: they don't really need all that money anyway, so it's not that big a gesture. Unless they're giving so much that their own quality of living decreases...they're not really making a sacrifice. It's nice to give, but they're just passing out their surplus, which I personally think should be taken from them anyway if they're never gonna use it...in the form of higher taxes, that is).

The other reason we do "altruistic" stuff? It makes us feel good. Rachels provides a story from the Springfield, Illinois, Monitor:

"Mr. Lincoln once remarked to a fellow-passenger on an old-time mud coach that all men were prompted by selfishness in doing good. His fellow-passenger was antagonizing this position when they were passing over a corduroy bridge that spanned a slough. As they crossed this bridge they espied an old razor-backed sow on the bank making a terrible noise because her pigs had got into the slough and were in danger of drowning. As the old coach began to climb the hill, Mr. Lincoln called out, 'Driver, can't you stop just a moment?' Then Mr. Lincoln jumped out, ran back, and lifted the little pigs out of the mud and water and placed them on the bank. When he returned, his companion remarked: 'Now, Abe, where does selfishness come in on this little episode?' 'Why, bless your soul, Ed, that was the very essence of selfishness. I should have had no peace of mind all day had I gone on and left that suffering old sow worrying over those pigs. I did it to get peace of mind, don't you see?'"

Rachels doesn't see, and says that Abe "employs a time-honored tactic of Psychological Egoism: the strategy of reinterpreting motives." Personally, I see it as simply interpreting motives. Others see it as altruism, we see it as doing what makes you feel good. I believe Psychological Egoism is most likely correct. Heck, I certainly know that, given the choice, I would never do anything out of pure altruism. For a while, I tried never to do anything which helped others. As a result, I felt pretty shitty (and like only worrying about my interests might not be in my best interests). I once told my fiancée that I don't do stuff for other people for the thanks I will receive. But I had to amend that by pointing out that this wasn't out of some noble altruistic ideal, but simply because I know about Psychological Egoism. I'm not doing it for them, I'm doing it because it makes me feel good, or because it's become a habit (such as opening the door for people). There are some people who would do things for another reason: because others would be likely to help them in return. Personally, I tend not to fall into this category only because I don't like thinking that I am manipulating people (that would make me feel bad, after all. I mean, I'm not naive enough to think that I'm not manipulating them anyway, but I like to keep that to a minimum). Doing things that feel good, and are seen as good, generally makes life more enjoyable, and that's why I do it--not because it is "right," but because it feels "right."

Rachels brings up the point that, though there may be some trace of self-interest in every action we take, that doesn't mean there aren't also some altruistic intentions in there as well. It is entirely possible that our motives can be from self-interest, but it is equally possible that we are altruistic, or that we have a mixture of motives (in cases where I might think that something benefits me as well as others, and perhaps I see that as a bonus). Okay, that makes sense. Psychological Egoism doesn't prove anything; it's just a theory that sounds right. Either you agree with it or you don't. There's no reason to say it is "not a credible theory" just because you told us how one can disagree with it. Like the Lincoln example: just because the reasons behind his action can be interpreted either way doesn't mean that he's wrong in calling it Psychological Egoism--it just means that it's not conclusive either way.

Ethical Egoism is a whole 'nother beast. It's a selfish philosophy that approves only those acts which benefit oneself. You can help others only if you help yourself in the same instance. Rachels gives three arguments for Ethical Egoism, the first being the most contradictory. "The Argument That Altruism Is Self-Defeating" says that, when we try to help others, we screw up most of the time. Not only is it an intrusion into their privacy, but we don't perfectly know what others need, and thus we are likely to do more harm than good (and people we help become dependent, we are effectively saying they are incapable of helping themselves, they will be resentful rather than appreciative, etc.). The argument, as Rachels summarizes it, is: "1) We ought to do whatever will best promote everyone's interests. 2) The best way to promote everyone's interests is for each of us to pursue our own interests exclusively. 3) Therefore, each of us should pursue our own interests exclusively."

Now, obviously this is a bit weird, since pursuing our own interests "exclusively" seems like it would disregard everyone else's interests. Technically, this argument says that we must be altruistic by caring only about ourselves. Rachels points this out, saying, "rather than being egoists, we turn out to be altruists with a peculiar view of what promotes the general welfare." But that doesn't mean anything's wrong with that theory, just that it's probably got the wrong name. Does it make sense? Well...I don't think so. Just because we screw up sometimes doesn't mean we shouldn't still attempt to help. If you don't help, there's a greater chance of things not working out. It's possible that we could make it worse, that the person will be resentful, but does that really mean we should never try? Any project has the possibility of failure, after all.

The second argument is from Ayn Rand. Her idea was that altruism is poisonous to our individuality. Rachels quotes her saying, "'If a man accepts the ethics of altruism, his first concern is not how to live his life, but how to sacrifice it.'" She, like me, valued the individual. But unlike me, she thought that altruism "does not take seriously the value of the individual." Because of this, she said that Ethical Egoism was the way to go, since it "does take the individual seriously--it is, in fact, the only philosophy that does." Rachels, smartly, points out that her argument only allows for two choices: Altruism or Ethical Egoism. There might be other choices, middle grounds that mix and balance interests (such as saying that one must help if it doesn't inconvenience them, one must help only if they are the only one who can, etc., with multiple exceptions to rules to accommodate the circumstances). It doesn't have to be as black-and-white as she proposes. Plus, if someone wants to be altruistic, that is part of their individuality.

The third says Ethical Egoism can be seen as a Commonsense Morality, kind of in the vein I used to talk about Psychological Egoism: don't harm others because they will harm you in return, and that won't be to your best interests. This makes it another Golden Rule, basically. Rachels points out that this doesn't actually give a reason why we shouldn't do "bad" stuff anyway, when we can get away with it...but I'll leave some of that stuff till the next chapter, since that's where it really belongs.

Then the arguments against...First, "Ethical Egoism Endorses Wickedness." Yes it does, Rachels. But, wonderfully, he actually says something reasonable here: "However, this objection might be unfair to Ethical Egoism, because in saying that these actions are wicked, we are appealing to a nonegoistic conception of wickedness." Really, Rachels? Are you sure? Because it was so fair when you did the same thing to the other philosophies that "endorsed wickedness" in some way.

Next, "Ethical Egoism is Logically Inconsistent." There is an example showing that Ethical Egoism can't work because people's interests clash. Kurt Baier gives some steps displaying two people in an election:

"1. Suppose it is each person's duty to do what is in his own best interest.
2. It is in D's best interest to kill R, so D will win the election.
3. It is in R's best interest to prevent D from killing her.
4. Therefore, D's duty is to kill R, and R's duty is to prevent D from doing it.
5. But it is wrong to prevent someone from doing his duty.
6. Therefore, it is wrong for R to prevent D from killing her.
7. Therefore, it is wrong and not wrong for R to prevent D from killing her.
8. But no act can be wrong and not wrong; that is a self-contradiction.
9. Therefore, the assumption with which we started--that it is each person's duty to do what is in his own best interest--cannot be true."

To his credit, Rachels points out the flaw: Baier has added his own rule, #5. Ethical Egoism doesn't say we can't prevent someone else from doing their duty. As such, it could end up being more of a case where it doesn't matter who prevails--they both did their duty and are right. So this argument doesn't work.

The last argument says that Ethical Egoism is "Unacceptably Arbitrary." Citing the Principle of Equal Treatment, Rachels likens Ethical Egoism to racism and sexism. The Principle of Equal Treatment says that "We should treat people in the same way unless there is a relevant difference between them." Now, I think this is a good idea, and it feels nice and reasonable (notice how he points out racism, because that always wins people over). He even points out the use of fairness in the draft lottery. Everyone has the same chance. But then he talks about when you have two tickets to an event, and have to decide which friend to take with you. Picking one would be unfair to the others. But the main point he is making here is that racism favors one "race," sexism favors one sex over the other, and Ethical Egoism favors the self over everyone else.

The problem I see with this is that Ethical Egoism does have you treat everyone the same. You are caring for yourself out of care for others, and so this doesn't exactly match up. Perhaps you could say that racists and sexists have used that excuse before ("we're doing this for their own good"), and I suppose that's true. Makes me think of the way a dictator might care only for himself, allowing the country to fall into degradation. So I suppose it all hinges on whether or not Ethical Egoism would actually benefit everyone...which I doubt. Still, it could, and then this argument would be invalid.

Chapter 6: The Idea of a Social Contract

Ah, this one's got to be my favorite moral theory. Partly because it's connected to Thomas Hobbes, which always makes me think of Calvin and Hobbes...but also because this theory actually gives a decent reason why we should follow a moral code even if we assume that "right" and "wrong" don't exist. Hobbes's famous theory is that of the "state of nature," the eventual destiny of people without government. Anarchy, where everyone does as they please. Without rules, people take what they need, and kill those that get in their way. Life is "solitary, poor, nasty, brutish, and short." In order to avoid such a life, we have the social contract.

The reason the social contract works is that, as rational creatures, we realize that working together will increase our own chances of surviving and thriving. In the state of nature, we would have to constantly be wary of others, knowing they could kill us at any time. For this reason, a social contract must start with the condition that those involved cannot kill each other. If we agree to this, we all protect our own interests, as well as those of others. It's rational to agree, because it gives you safety. You agree to give up one freedom in order to protect your best interests, and that is the essence of social contract theory. Rachels ends the first section by summarizing it as, "Morality consists in the set of rules, governing behavior, that rational people will accept, on the condition that others accept them as well."

Then, to show another reason for the Social Contract, he goes into the Prisoner's Dilemma: You and Mr. Smith are accused of a crime, and taken aside separately for interrogation. But there are some weird rules to it: 

1. If Smith does not confess, but you do...you go free, and Smith gets 10 years in prison
2. If Smith confesses, but you don't...Smith goes free, and you get 10 years
3. If you both confess, you both get 5 years
4. If neither of you confess, you both get 1 year

The best choice is to confess, since you don't know what Smith will do. "Suppose Smith confesses. Then, if you confess you will get 5 years, whereas if you do not confess you will get 10. Therefore, if he confesses, you are better off confessing. On the other hand, suppose Smith does not confess. Then, if you confess you will go free, whereas if you do not confess you get one year. Therefore, if Smith does not confess, you will still be better off confessing." Rachels extends this dilemma to all morality. You can easily reason that it is best for you to look after your own interests exclusively, kinda like Ethical Egoism, except this isn't out of any desire to help others/keep the balance. "Either people will respect your interests or they won't. If they do respect your interests, you will be better off not respecting theirs, at least whenever that would be to your advantage... If they do not respect your interests, then it will be foolish for you to respect theirs... Therefore, regardless of what other people do, you are better off adopting the policy of looking out for yourself."

This makes perfect sense...except that it inevitably leads to the state of nature. And since we're not separated from others every single time we have to make a moral decision, we don't necessarily have to go that route. We can recognize what the rational choices are, and cooperate to choose something that is even better. If you and Smith, for instance, could come together and talk it out, you would obviously both stay silent, since one year each beats the five years you'd each get by both confessing. That, in a nutshell, is Social Contract Theory: people coming together to agree on what moral rules to follow in order to benefit everyone.
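Here's a minimal sketch of the payoff table above (the payoffs come straight from Rachels's version of the dilemma; the code itself is just my illustration), showing why confessing dominates when you can't coordinate, and why mutual silence is the better deal when you can:

# The Prisoner's Dilemma payoffs from the list above, as (your years, Smith's years).
years = {
    ("confess", "silent"):  (0, 10),
    ("silent",  "confess"): (10, 0),
    ("confess", "confess"): (5, 5),
    ("silent",  "silent"):  (1, 1),
}

def best_reply(smith_choice):
    """Your best move for a given choice by Smith (fewer years is better)."""
    return min(["confess", "silent"], key=lambda you: years[(you, smith_choice)][0])

print(best_reply("confess"))  # confess: 5 years beats 10
print(best_reply("silent"))   # confess: 0 years beats 1
# Confessing dominates either way -- yet if you could coordinate, staying silent
# together (1 year each) is far better than both confessing (5 years each).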

Rachels brings up the "Problem of Civil Disobedience," but it really isn't much of a problem, and Rachels does a good job of showing this. After all, the Social Contract isn't about "law" per se, but about mutual agreement for mutual benefit. Civil Disobedience is just a claim that the agreed-upon rules aren't mutually beneficial. Take civil rights for example (since everyone does). While centuries of discrimination may have benefited whites, slaves certainly got a bad deal, and even when African-Americans were seen as citizens, many laws still stood in their way. The civil rights movement was justified in saying that the laws weren't justified. Social Contract Theory doesn't prohibit civil disobedience; civil disobedience (when used correctly) reinforces Social Contract Theory. Kind of like what democracy is supposed to be: if the government/law doesn't help the people, the people should abolish it in favor of one that does.

But then Rachels gives some "difficulties" with Social Contract Theory. First is that it is "based on a historical fiction." Which is a silly argument, by the way. As I have already talked about, self-interest is an evolutionary quality, one which was arrived at by the workings of nature, hence the phrase "state of nature." We just need to look at the way civilizations started to see that, once there was more cooperation, things started moving more smoothly, and quality of life increased overall. So no, it's not based on one contract signed thousands of years ago, but who ever said it was? It's based on logic.

Though this point does remind me of something which I frequently think about: the fact that we are not bound by our past. I mean, sure, we're genetically bound by our past, we're economically and geographically bound by our past, but we can make our own decisions now in the present, and those actions taken by others years, decades, centuries and millennia before shouldn't force any course of action, or any guilt, onto us. That's the reason I feel whites shouldn't have to apologize for slavery--none of those who participated in it back then are even alive anymore. We didn't do it. Sins of the father are only sins of the father, and no one else. Now, reparations may still be in order because of the long-term effects we're still feeling, but there should be some recognition of that fact nonetheless. I'm of German descent, but I don't feel guilty about the Holocaust. (Heck, I'm also of African-American descent, so I should be pissed about the slavery I didn't experience as well, eh? Guilt and righteous indignation? And I'm pretty sure I've got some Native American in there too...)

But I digress. Rachels comes close to this by talking about the contract like a "game" that you join. But, of course, that suggests you chose to join, which none of us did. The Social Contract Theorist could, of course, fall back on the argument "if you don't want to play, then leave," but that's really not fair, since so much of the world and its resources are run by the system. So I guess this is one thing to think about, though it doesn't entirely go against the theory, since joining the game is itself in your best interest, unless you can always avoid the authorities and never need medical attention, without fail. It would be rational to join the system/game, and thus the theory would say you should do so. Does this give you a choice to "agree"? Yes. So Rachels is wrong when he says this "abandons the idea that morality is based on agreement."

The second objection is about those parties that "cannot benefit us." Rachels lists them as "Human infants, Nonhuman animals, future generations, and oppressed populations." Now, I have to say again that this doesn't make any sense for Rachels to say, simply because it doesn't cause a problem for the theory. So we don't care about the offspring that can't benefit us. So what? That's part of the theory, and it's logical. I won't be around to see my great-great-great-great-great grandchildren, let alone benefit from helping them (unless we hurry up and find a way of prolonging life), and so I have no logical reason to care about the things which affect them.

My take on this:

Human infants -- No, we don't owe them anything, unless we think that keeping them safe helps us (don't hurt someone else's babies, or they'll hurt you; don't hurt a child that will grow up to hurt you; don't hurt them if you think there's a good chance of them helping you in the future, such as your own child who can take care of you in your old age). Since this one has so many exceptions, I'd say it really doesn't make Social Contract Theory all that disturbing. But if you find an orphan somewhere, and know that no one knows about it, or will ever know about it (which you can't guarantee), its parents and extended family are dead, etc...go ahead.

Animals -- Keeping them alive does benefit us, if only for food, testing, and keeping a balanced ecosystem. I think it's kinda stupid for Rachels to even bring them up, seeing as their survival does help us in so many ways. Plus, there are animals that would attack us if we did stuff to them, so it's not always in our best interests to do so.

Future generations -- As mentioned above, who cares? I mean, they do, but since that won't affect you at all, it shouldn't matter. Makes perfect sense not to care a bit about them. Unless, of course, you run into people who are young and care about what you do to their future community.

Oppressed populations -- Really, Rachels? After all that stuff about the civil rights movement? Yes, perhaps you can benefit from enslaving somebody, but that's just inviting them to rebel. And who are you to say that cooperating with other cultures can't benefit you more than enslaving them? You could learn new things that could benefit your society, you could add willing workers to your projects (which would benefit you because they wouldn't be likely to hurt you/run away in the course of their employment), and you could perhaps learn of dangers in the region from them instead of learning firsthand...the hard way. There could be situations where slavery would be more beneficial to you, and I suppose then there wouldn't be any problem...(heck, you could say that everyone in the world is enslaved by the Social Contract anyway, since the other choice is a sucky life and/or death.) His idea that Social Contract Theory would allow us to harm people who don't benefit us, though, kinda goes against the whole point of the theory: harming them would be something you would do in the state of nature. So instead of saying that noncooperation makes the most sense in this theory, he should instead be looking at colonization as an opportunity to create a new contract.

Another thing I envisioned while in the class was that perhaps we should extend the definition of the Social Contract to include future generations. But now that I think about it, that's really just a silly notion brought about by my own squeamishness. Social Contract Theory has no reason to promote environmental issues that don't affect our generation. But that doesn't mean the theory is a bad one. Rachels says that "it seems unable to recognize the moral duties we have to individuals who cannot benefit us." But of course, he fails to realize that since that's the point of the theory, it isn't a problem.