Friday, November 11, 2011

Short and Sharp: A History of Dishonesty

Recently I was involved in a discussion about a claim made by Noam Chomsky in his Melbourne lecture. He stated that if the US implemented a public healthcare program similar to other first world nations, they would wipe out their yearly deficit. After much back and forth and number crunching by the ever-patient Dylan Nickelson, it was concluded that this claim is factually wrong. After this, I found myself doubting some of the other claims made by Chomsky. Not that I have outright rejected them; just that I have the feeling that if he made a mistake on this claim, he might be wrong on other claims too. This is not to say that my feeling is justified or should carry any intellectual weight, but I do think it warrants a discussion.

It seems quite obvious where this feeling would come from; in social settings, an individual’s past history of honesty/dishonesty is generally a good predictor of future behaviour (well, better than chance guessing at any rate). The issue would be whether or not this predictive value translates into the academic world and if it does, whether it should affect analysis of the individual’s future arguments.

I can see two main reasons why an individual would make a false claim, whether mistaken or intentional: lack of rigorous research, and agenda/ideological bias. In both cases, there appears to be the potential for future occurrences; a lack of rigorous research implies a lazy methodology, and an agenda/ideological bias implies that they have a reason to bend or change facts to suit their views.

So the question then becomes ‘how does this affect analysis of a dishonest individual’s claims?’. The most obvious consequence may be a lower threshold for triggering investigation into the validity of their claims. Another possibility is that less in-depth research is required to debunk the individual’s claim; for example, if one or two sources you consult contradict a claim made by a dishonest individual, it is probably wrong, whereas a claim from a previously honest individual would require more contradicting sources before being dismissed.

In bringing up these reasons as to why individuals may be dishonest and what should occur because of it, my suggested points of view are meant as just that: suggestions. I would welcome input from anyone on the following two questions: does a past history of wrong claims by an individual mean that other claims they have made are likely wrong, and in what way does this alter how we interact with their claims?

Tuesday, September 6, 2011

Why I Care (And Why You Should Too)

I have been noticing lately that there seem to be certain social taboos around discussing certain topics. The ones I have witnessed most predominantly are religion, politics, philosophy, morality (specifically abortion and gay marriage), medicine vs. alternative medicine and science vs. pseudoscience. On more than one occasion, I have either been told or seen someone else be told not to talk about such issues because the offended individual believed they should not be discussed in polite society. This of course doesn’t sit right with me, as these are perhaps my favourite topics to discuss (and what I write about predominantly on my blog). As such, I wish to discuss this phenomenon and offer my take on the situation.

When I have probed individuals who find said topics taboo, the reasons given fall into the following broad categories, to each of which I will give a response:
  • These are topics that are personal in nature; therefore people should be free to decide for themselves what they wish to believe
This kind of objection has a tinge of a superficial understanding of postmodernist ‘relative truth’ to it: it doesn’t matter what individuals choose to believe, as there is no real truth. While, to some extent, I believe that this is the case (or, more precisely, that we will never be able to determine what the truth is in any absolute sense), these individuals are ignoring the impact that an individual’s beliefs have on those around them. The decisions we make are based on what we believe; for example, if I believe that gays do not have the right to marry, I will not vote for a politician or party that wishes to allow gays to marry, thus affecting homosexuals who wish to get married. As such, while our beliefs are our own, the fact that they have an impact on those around us obligates us to ensure that these beliefs are indeed correct (or, at the very least, defensible). This necessitates discussions of the issues, specifically public ones, to ensure that as many people as possible are exposed to all the arguments that exist.
  • These are topics that people will never change their opinions on
I disagree with this line of reasoning on two fronts; firstly, I think that people can and do change their minds, and often do so as a result of discussions of said issues. Hell, even I have changed my mind on these issues; I used to be pro-life, think evolution was wrong and be against gay marriage. Now, I hold the polar opposite view on these topics, as well as having made minor tweaks to my other beliefs. And I would not have changed my mind on these topics if people hadn’t challenged me and pointed out the flaws in my thought process.

Secondly, if you are having a discussion with someone who states that they will never change their position on the topic, you should bring the topic up with them even more. If you are the type of individual who believes your mind can never be changed, you are by definition closed-minded and need to re-evaluate your life. If you think you have a perfect understanding of reality, to the extent that you can’t be wrong and so do not need to discuss the issue, you are quite arrogant.

That being said, I do believe it is possible to reach a point where discussion of a topic between two individuals becomes fruitless. This generally occurs when it has been identified precisely where the point of disagreement arises, both individuals have explained why they disagree with the opposing point of view and still disagree (agreeing to disagree essentially). However, if the discussion has arrived at this point, it has occurred to a sufficient level as to make the original objection irrelevant.
  • These are topics that are too serious to discuss
This objection is often put forward in specific social settings; i.e. Facebook, parties, or anywhere the individual feels should be a casual environment. This objection essentially comes down to taste: what constitutes a topic that is too serious, and whether one derives enjoyment from such discussions. I do not find these discussions to be too serious and always enjoy them. As it is an issue of personal taste, there really isn’t much more that can be said other than this: if you are the only person who appears to find the topic too serious, exclude yourself from the conversation rather than demand others stop for your sake alone. The same is true for the opposite, of course; if you are the only one who wants to talk about these issues, don’t force others to.
  • These topics are unimportant to the individual who does not wish to discuss them
Like the previous objection, this one comes down to a simple taste preference. However, people often underestimate how these topics could affect them. To be fair, there are probably some circumstances where the issue is entirely unimportant to a person; for example, someone who is not homosexual and knows no one who is homosexual would be understandably uninterested in the topic of gay marriage. However, I think situations like this are particularly rare, in that most beliefs have an effect on the majority of society. And even when a topic really is unimportant to someone, a discussion is still required to determine that it is unimportant; effectively, a meta-discussion about whether the discussion is worth having. If they aren’t even willing to engage in the meta-discussion because they think the topic could never possibly affect them, it becomes the same as the ‘never change opinions’ objection.

I think after reading this, most people should understand where I am coming from. If, however, you feel I have missed an important reason why discussions of this nature should not occur, please feel free to let me know (if you can; I'm unaware if a meta-discussion about a topic you find taboo would be breaking the taboo).

Sunday, July 10, 2011

Short and Sharp - Limitation of Liberty

I know I’ve been on a kind of ‘ethical bender’ of late, so I promise this will be my last post on ethics for a while (isn’t that what all addicts say?). However, I wish to have a final quick exploration of the topic of limiting the liberty of individuals in a society. I think that by being a member of society, we agree (explicitly in some cases, implicitly in most) to give up certain rights, such as the right to murder each other, for the safety and benefits that living in a society offers (yes, I am a fan of Hobbes, if you can’t tell; at least in regard to social contract theory). The point of this post is to explore what constitutes the line between what can and cannot be limited by society.

While this is related to my last post on bone marrow donation, I’m going to use the example of vaccination to draw out the potential points of disagreement. We all (well, the significant majority) accept the limit that we cannot kill (either intentionally or through recklessness on our part). The same is true for causing harm to others that falls short of killing them (again, both intentionally and through recklessness). So, given these two fairly uncontroversial points, why do we allow people, be they adults or children by the choice of their parents, to opt out of vaccination?

For those of you not familiar with herd immunity, this is the phenomenon whereby, when a certain portion of the population is immune to a disease, their immunity acts to protect those who are not immune. The percentage of immunity required to reach this threshold varies for every contagion and is based on factors like the route of infection (airborne, food etc.) and the virulence of the pathogen. Within any given population, there is a certain proportion of individuals who, for medical reasons, cannot be given vaccinations (effectively, anyone with impaired immune function, often due to age, genetic conditions or other factors). So, with an already reduced population to work with, allowing others to opt out of vaccinations further reduces the number who are immunised, putting everyone at an increased risk of infection.
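The threshold described above has a standard back-of-envelope formula: for a pathogen with basic reproduction number R0 (the average number of people one case infects in a fully susceptible population), the herd immunity threshold is roughly 1 − 1/R0. A minimal sketch of this; the R0 values below are illustrative ballpark figures, not data from this post:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so that each case
    infects, on average, fewer than one new person."""
    return 1.0 - 1.0 / r0

# Illustrative R0 values only: a highly contagious airborne disease vs. a milder one
print(round(herd_immunity_threshold(15.0), 2))  # highly contagious -> 0.93
print(round(herd_immunity_threshold(2.0), 2))   # milder pathogen   -> 0.5
```

The higher R0 is, the closer the threshold sits to 100%, which is why allowing opt-outs on top of the pool of people who medically cannot be vaccinated erodes the safety margin quickly.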

My point is not that vaccination should be mandatory (I do think that, but that is not the case I am making here), but rather to ask what reasons there are for not making it mandatory. I’ve heard people suggest that it is to do with liberty, but as I’ve already said, we give up liberties all the time to benefit from living in a society. Does doing something to people rather than asking them not to do something change the issue (positive vs. negative liberty)? Does removing the option to opt out change anything for people who would have chosen to be vaccinated regardless? What is the threshold at which the harm of an action to the public becomes significant enough to justify reducing the liberty of individuals to undertake said action?

Thursday, July 7, 2011

Is Donating Bone Marrow A Charitable Act?

A few weeks ago, the Australian Bone Marrow Donor Registry contacted me and informed me that I had matched with a person who may require a bone marrow transplant. I told them that I was interested and went in last week for some confirmatory testing. An interesting thing I have noticed is that a significant number of people who I have told about this have responded with “that is so generous; I could never do that”. This is the idea that I wish to explore. I’d like to begin with an apparently unrelated hypothetical:
A man is out for a late night walk down a country road when he notices a car stopped on the side of the road with its hazard lights on. Upon investigation, he finds a woman breathing heavily and clutching her chest. She tells him she believes she is having a heart attack. Unfortunately, neither of them has a mobile phone. She asks him to use her car to drive her to the hospital, as she is in too much pain to do so herself. The man does have a licence. After a few seconds, the man says that he would rather not, as the potential risk of crashing the car is too high.
While I cannot say for certain, I think most people would find the justification for not offering help to be quite weak; no one sees driving a car as so risky that they would not offer help to someone who may die without it. To get an idea of the risk associated with driving a car, the world death rate for motor vehicle accidents is 20.8 per 100,000 people (from the Wikipedia page, which quotes WHO statistics).

The point of this hypothetical is to highlight a contradiction in the way that the people I spoke of at the beginning think about bone marrow donation. Statistically speaking, donating bone marrow is safer than driving a car. For clarification, there are two procedures used to harvest bone marrow. The first and most common is a ‘peripheral bone marrow harvest’; this is where the donor is given a drug to stimulate their bone marrow to grow and then they give blood, from which the bone marrow cells are harvested (in the same way as white blood cells are harvested for donation). As such, donating bone marrow by this method carries a similar risk to donating blood; that is, a negligible risk. When most people think of a bone marrow donation, they think of extraction from the hip bone. This requires a general anaesthetic in most cases, and it is this that presents the only risk of death (in that no deaths have ever been recorded due to the actual extraction process). However, using even the most conservative figures (i.e. the ones that show the highest mortality rate), the death rate for general anaesthesia is around 14 per 100,000 people (from this study).

So, statistically speaking, it would be safer to give bone marrow to someone than to drive them to the hospital. However, I do not think that the people I spoke of would change their view about not wanting to donate bone marrow because of this (I did specifically express this point to one of them and they did indeed not change their view). I am not entirely sure why. I see only minor differences between the scenarios and nothing to make them categorically different (well, as far as I can tell). So I throw it open to my highly intelligent audience; is there anything that would make not donating bone marrow more justifiable than not driving someone to a hospital?

[One potential criticism I could see is that I have used the worldwide motor vehicle accident death rate, which will be much higher than that of any given first world country (for example, the death rate in Australia is around 5 per 100,000; much lower than the world figure). However, the same is true of the general anaesthesia figure; the study used data from any published study (excluding only those that were not in English). As such, it would probably be much lower in any given first world country (as is the case with the death rate from motor vehicle accidents).]

Saturday, June 25, 2011

Short and Sharp: What if you're wrong?

If you are familiar with the evangelical style of Ray Comfort and Kirk Cameron, then you will be familiar with the question “what if you’re wrong?”. For those of you who are not, it is a tactic that attempts to highlight in the minds of atheists the repercussions if they are wrong about the existence of the god that Comfort and Cameron believe in (i.e. that the atheist will go to Hell). I have seen many refutations of this argument (which is essentially Pascal’s Wager); however, I am going to do the opposite; I think it is a valid question in certain contexts and should be answerable by any person about any belief that they hold.

To demonstrate why I think this is the case, I would like to modify a scenario used by Richard Carrier in his book, Sense and Goodness Without God;  
Suppose a friend told you they had purchased a new car; would you believe them? As this is a fairly unremarkable claim (many people own cars), it would require very little evidence for you to believe them, perhaps even just their word alone. However, suppose now that you were relying on this friend to drive you to a very important meeting. Would you be willing to rely on just their word, or would you require more evidence now that the claim has the potential to impact upon your life? If you believe them, and they are wrong (either by lying or just being misinformed; say they thought the car would be ready for their use on that day, but it was delayed), you are now stuck without a way to get to your meeting.
The point that I am trying to drive at is that the amount of evidence needed to support a claim depends not simply on how ordinary or extraordinary the claim is, but also on how much of an impact the claim’s truth or falsity will have. Claims that will have very little effect require less evidence than claims that will have a profound effect, all other things being equal. The way in which Comfort and Cameron use this question is still wrong, in that they are essentially throwing in a possibility, Hell, which has such a low probability of actually existing that it isn’t worth considering. As such, the question is only valid when used in the context of known negative outcomes. However, when used in this way, it is very useful at highlighting how consequences can impact upon our evidential standards.

Wednesday, June 22, 2011

'Ethical' Egoists

While watching the season finale of Grey’s Anatomy recently, I was reminded of my general disdain for ‘ethical’ egoists. These are people who believe it is okay to do what is in the best interests of themselves and their in-group (family, friends, lovers etc.), even if it leads to the otherwise preventable harm of others. As such, it is a kind of very shallow version of ethical egoism, the branch of moral philosophy that says that moral agents should act in their own self-interest. While the actual theory is a lot more detailed than my one sentence summary indicates, I still believe it has problems.

To demonstrate my problem with ‘ethical’ egoists, I’ll explain the scenario that occurred in the episode. Earlier in the season, Meredith had switched a placebo for an active drug in an Alzheimer’s clinical trial she was a part of. This was due to the fact that the patient who was to get the placebo was close to her. In the finale, her deception is revealed and the shit hits the fan. This is because it is a randomised clinical trial; the doctors do not get to assign who gets the active therapy and who gets the placebo. This prevents them from, either intentionally or unintentionally, giving the active treatment to patients they believe are more likely to recover anyway and skewing the results (i.e. making the treatment look better than it really is). So Meredith tampering with who gets the treatment invalidates the whole trial. Now, even after she is informed of this and how now no one will get access to the new drug (due to the trial not being able to go ahead, so it can’t be demonstrated to be effective) she still says she would do it again because it was a person who meant so much to her.

This sort of attitude (which is not at all uncommon) really drives me up the wall; effectively Meredith, and others in similar situations, are giving a big middle finger to everyone else just to help someone they care about. It really shows how self-centred someone is that they can’t step back and realise that while they are trying to help someone they love, so is everyone else. In the case of Meredith, she wanted to help someone she loved, but at the same time prevented hundreds of others from helping their loved ones.

I also ran into a similar phenomenon during an ethics class in my undergrad course; we were given the following scenario and asked whether we thought the decision in it was moral (paraphrased from memory);
An earthquake occurs in China and buries a man’s family in rubble. In the process of digging them out, he discovers that across the road an important official and his family are buried. The man decides to stop trying to rescue his family and rescue the official and his family instead. He successfully rescues them, but his own family dies in the process. When asked why he made the choice that he did, he said that by rescuing the official, he could go on to coordinate the rescue effort (by virtue of having extensive knowledge of the local area) and end up saving more people.
Now, as a utilitarian, I said that the action was moral because it could effectively save more lives (increasing overall well-being). While the majority said that, although they would have saved their own family had they been in the situation, they respected the man for thinking of others (a position I don’t find unacceptable), a small minority believed that he had acted immorally and should have saved his family (invoking duty to family primarily). After some discussion back and forth, I presented them with a new hypothetical to try to demonstrate the point I was trying to make;
A serial killer has you locked in a chair. In front of you is your family in a cage, ten families you do not know in another. You are given the choice of who dies; either your family or the ten families. Which would you choose?
I thought that this scenario was entirely black and white; that only a monster would say that it was moral to choose their family. However, I was wrong. Not only did these individuals say they would choose their families, they were defending it as the moral choice. I mean, I could understand someone saying that they aren’t strong enough to do the right thing, but to actually believe that ten other families dying so yours can survive is moral is downright insane. Could these people not understand that each of those families had people who loved them just as much as they loved their families? What makes them think that their love for their family trumps everyone else’s?

A bit more of a rant than usual, but there it is.

Wednesday, June 15, 2011

Short and Sharp - Reformation of Child Molesters

This post is prompted by this story (and the comment section found within). For those of you who are too lazy to click on the link, the article basically tells of a man who was convicted of indecently assaulting a 16 year old boy in 2005, and who has now won, through VCAT, a working with children certificate because VCAT believes he no longer poses a danger to children.

The thing that most interests me is the response to this situation; at the time of writing this, not a single one of the 32 comments on the article was even willing to accept the possibility that this man has rehabilitated and is not a threat; every comment either explicitly or implicitly says that a person who molests a child is incapable of rehabilitating.

Whether or not this case is an example of reformation of a child molester (I personally think it is, based on the limited information available), it seems clear that the majority of individuals believe that it is impossible for a person who has sexually assaulted a child to be rehabilitated. I’m not quite sure whether it is that they think rehabilitation is actually impossible or that the cost of wrongly assessing a person as rehabilitated is too high to ever bother attempting it (a kind of ‘think of the children’ argument).

What do my intelligent audience think? Is it possible for a child molester to ever be rehabilitated? If so, why? If not, why not?

Saturday, May 7, 2011

Common Questions to Atheists

I found this list of questions on Lady Atheist's blog that atheists are often asked. While I don’t particularly have anything original to add, I think that it is worth answering as it will demonstrate my views on a wide variety of issues. 

Q:  Where do you go when you die?
A:  While I wouldn’t claim to know with certainty, my view is that, upon death, we will cease to exist. From the evidence I have come across, it appears the mind is a function of the brain; so without a working brain, you will have no mind and therefore cease to exist. 

Q:  Aren't you worried that you might be wrong and you might go to hell?
A:  Not at all. Hell was never something I believed in (even when I was a half-assed Christian). Do I think it is possible that I am wrong? Of course. I just don’t think the probability of me being wrong about the existence of hell is high enough to worry about. Take, for example, the chance that I will be hit by a car tomorrow; this is a scenario which I view as quite likely in comparison with Hell being real, yet I’m not worried about being hit by a car. 

Q:  How can you be moral without God?
A:  How can you be moral with God? That isn’t a snide reply, but a serious question. Are you moral with God because God has defined what is good or because God is intelligent and powerful enough to determine what is good? If it is the first one, I would argue that you aren’t moral in any meaningful sense of the word. If it is the second, then you have just answered the question yourself; we can also determine what is good. We may not be able to do it as well as God could (assuming he exists), but since he isn’t putting his two cents in on relevant issues, we are left to do it ourselves. 

Q:  You're really just angry with God.
A:  Sometimes, but this is irrelevant to why I am an atheist. To be clear, it is possible to be angry at a being regardless of whether it exists or not. For example, I am often infuriated by the character of Nikki on Big Love, yet that doesn’t mean I think she really exists. I am angry at the portrayed actions of her character. This is similar to when I find aspects of God’s personality (as depicted by the Bible) to be immoral/anger-inducing. Again, this doesn’t mean I think God exists; just that I find the actions that are portrayed by his character to be immoral.

Q:  You're really just angry at the abuses of the Church.
A:  Again, while I may sometimes be angry at the actions of Christians, this is irrelevant to my atheism. It may inform the actions I choose to take (e.g. opposing the homophobia of Christians directly), but it isn’t why I don’t believe in god. 

Q:  The church has been responsible for great works of art.
A:  So? There is much art in other religions, so either their gods also exist or inspiration can come from any source, real or imaginary. 

Q:  How do you know the Bible isn't true?
A:  In the same way you know that the Koran, the Tao Te Ching or any other religious text isn’t true; lack of supporting evidence and logical inconsistencies. 

Q:  Isn't it arrogant to presume you're right and all those Christians are wrong?
A:  Why is it arrogant for me to presume I am right, yet not arrogant for Christians to presume they are right? I fully admit I may be wrong, but I will only change my position when given a good argument and evidence that my position is wrong. 

Q:  You think you know everything, don't you?  (Also: You think you have all the answers!)
A:  No. Just no. 

Q:  Science can't answer everything.  What about love?
A:  I’m not advocating that it can. Though I do think science can explain love; interactions between memories, emotions and social situations that are governed by neurons and chemicals in our brains. 

Q:  How do you explain the human need to believe in God?  God made humans different from the animals.
A:  I think the ‘need’ (I use quotation marks because I don't think it is really a need) can be explained by a number of facts known about human psychology. Firstly, humans are pattern- and agent-seeking creatures; our minds are built for detecting patterns and, often, attributing those patterns to an animate agent. An example of this is the thought some people get that their computer intentionally crashes when they haven’t saved; they are detecting a pattern (the computer crashing when they haven’t saved) and blaming their computer for it. Now, most of us would agree that this is neither a meaningful pattern (i.e. there isn’t actually a causative link between the chance of your computer crashing and whether you have saved your work or not) nor an intentional act on behalf of the computer. I think a similar phenomenon explains how humans came to believe in god (mistakenly identifying patterns in nature and attributing them to an agent).
The second point I would bring up is that this pattern-seeking behaviour increases when we find ourselves in situations that are out of our control. There is a good evolutionary reason for this; our ancestors frequently found themselves in situations that were out of their control, such as attacks from predators. Increased pattern detection in such situations would aid survival, as these individuals could determine whether any activity they were engaged in was affecting the attack rates of predators. For example, it could be noticed that if meat is left uncovered for too long, predators are more likely to attack. Thus, this would lead to the covering of meat and decreased predator attacks as a consequence. This could also be used to explain another fact about our current world: the countries with the highest societal health (which is a good proxy for control over our situation) have the lowest levels of religious belief, and vice versa. 

Q: What about the miracles of the Bible?
A: What about the miracles of the Koran? The Bhagavad Gita? 

Q:  [insert seemingly miraculous prayer story here]. How do you explain that?
A:  I normally take a two-pronged approach to such questions. Firstly, unless it is an event that personally happened to the individual telling me the story, the question of authenticity is hard to answer. Secondly, I ask why there is a need to explain some fortunate event with reference to the supernatural. An example I have had presented to me is of a family who had their unborn child diagnosed with a serious condition (unspecified as to which); they prayed, and when the child was born, it had no problems whatsoever. The issue here dissolves when you realise that most, if not all, medical tests have an error rate; that is, a percentage of test results are either false positives or false negatives. This is often quite small, but when applied to a large population, is not an insignificant number. To demonstrate this, let’s say that the test in question had a 0.1% false positive rate; 0.1% of the time, the result indicated the condition tested for when it really wasn’t present. If this test is administered to all pregnant mothers (roughly 300,000 in 2010), then we can expect 300 false positives for 2010 alone. That is, 300 mothers will be told that their child has that condition, only to find at birth that they don’t. Not really miraculous at all. 
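The base-rate arithmetic in that answer can be sketched directly; the 300,000 tests and 0.1% false positive rate are the post's illustrative figures, not real screening statistics:

```python
def expected_false_positives(tests_administered: int, false_positive_rate: float) -> float:
    """Expected number of healthy patients incorrectly flagged as positive."""
    return tests_administered * false_positive_rate

# ~300,000 expectant mothers tested at a 0.1% (0.001) false positive rate
print(expected_false_positives(300_000, 0.001))  # 300.0
```

Even a tiny per-test error rate produces hundreds of 'miraculous recoveries' per year once enough tests are run, with no supernatural input required.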

Q:  Christianity has been around for 2,000 years.  How could it survive if it were false?
A:  How has Hinduism survived for 4,000 years if it was false? 

Q:  There are millions of Christians.  They can't all be wrong.
A:  Yes, they can, just as the billions of Muslims and Hindus can be wrong.

Q:  Nothing can exist without a creator, so the fact that things exist proves there's a God.
A:  If nothing can exist without a creator, then neither can God. If things can exist without a creator, who is to say that the cosmos isn’t one of those things? 

Q:  You can't prove that God doesn't exist.
A:  And you can’t prove Santa Claus doesn’t exist either. I am not an atheist because I think God has been proven not to exist, but because there is no evidence to prove that he does. 

Q:  If you're an atheist doesn't that mean that you don't believe in anything?
A:  I believe in things for which there is evidence that they exist. 

 Q:  If you don't believe in God, that means you want to be God.
A:  Depends what you mean by that statement; if you mean I want humans to fulfil all the functions that have normally been attributed to God (morality, purpose etc.), then yes. If you mean I want to be a dictator in the sky, then no. 

Q:  You just left the Church because you want to sin.
A:  I was never really in any church. 

Q:  So then your life has no meaning?
A:  Yes and no. I do not believe life has any inherent meaning. But that is different from saying it has no meaning. Life has the meaning that I (and all of us) choose to give it.

Sunday, April 3, 2011

Short and Sharp - Conformity

A thing that really bugs me is when people get all snooty about conformity; the attitude that anyone who does what the mainstream does is just a mindless sheep. Now, I won’t dispute that there are people who legitimately participate in activities they hate just because the majority does them. But I believe these individuals to be a minority; most people who make up the mainstream are simply prioritizing their desires.

Let me explain further; in life, we have to prioritize our desires because it is impossible for us to undertake them all concurrently. Some desires are mutually exclusive (i.e. wanting the security of a stable partner and wanting to experience the thrill of getting to know someone new), whereas others are just temporally exclusive (i.e. they can’t be undertaken concurrently because they take up too much time and resources). We have to order our desires and select which are most likely to make us happy and which are most practical.

Conformity comes into this when you realise it too is a desire; the desire to be a part of the group. Take the following example:
Person A’s desires (ranked by preference):
  1. Person A wants to be a porn-star.
  2. Person A wants to be a nurse.
  3. Person A wants to conform to a group.
Now a lot of people would say that they should just go with their most powerful desire and become a porn-star and, if they chose nursing instead, would criticize them for conforming to what society wants. But what these critics fail to realise is that, by becoming a nurse, Person A fulfils two lesser desires, which combined may outweigh the first.

My point is simply this; don’t criticize the choice a person makes just because it happens to align with what mainstream society does. The desire to conform is no more rational than any other emotional desire (to be loved, to feel happy etc.).

Saturday, March 26, 2011

Short and Sharp - Basic Beliefs

In the essay ‘Is Belief in God Properly Basic?’ Alvin Plantinga argues that belief in God does not require evidential proof, as it is a properly basic belief (1). By this, he means it is a belief that cannot be based on any other belief. Another example of a basic belief is our memory; the belief that I have a memory cannot be based on any other belief; it is properly basic. Plantinga contends that belief in God is the same.

It should be noted that I agree with Plantinga’s foundationalist approach to epistemology; that is, I think that every belief we have can be boiled down to basic beliefs, which are self-evident and therefore do not require proof. For me, these basic beliefs are our senses, emotions, thoughts and memory (henceforth referred to as experiences). This is different from saying that our beliefs about our experiences are properly basic; just that the experiences themselves are properly basic. This is known as basic empiricism and is discussed by Richard Carrier in his book ‘Sense and Goodness Without God: A Defence of Metaphysical Naturalism’ (2) or in this web article (3).

This is an important distinction to make, as I believe it is primarily where Plantinga’s argument for God as a properly basic belief fails. There is a difference between experience and our interpretation of experience. Think of acknowledging that you are having an experience compared with claiming what that experience actually means. The first is indisputable; the second is quite easily disputed. To show the difference further, here are the examples that Plantinga uses to demonstrate basic beliefs:
  1. I see a tree.
  2. I had breakfast this morning.
  3. That person is angry.
The problem is that none of these are basic beliefs; they are interpretations of experiences and, therefore, can be wrong. The tree could be a realistic fake; you could have dreamed you had breakfast this morning and mistaken it for reality; you could misread how that person displays anger. If you construct a foundationalist epistemology based on incorrect basic beliefs, you’re going to be wrong a lot of the time (even by the standards of your own epistemology). 


1. Cottingham J. Western Philosophy: An Anthology. 2nd ed. Malden, Mass.: Blackwell Pub.; 2008. (Blackwell Philosophy Anthologies).
2. Carrier R. Sense & Goodness Without God: A Defence of Metaphysical Naturalism. Bloomington, Indiana: AuthorHouse; 2005.
3.  Carrier R. Defending Naturalism as a Worldview: A Rebuttal to Michael Rea's World Without Design; 2003 [27/03/2011]; Available from:

Saturday, February 5, 2011

Biological Immortality

In the February 2nd, 2010 Deakin Philosophical Society meeting, we discussed the idea that humans may one day no longer age (after watching a TED lecture on the subject by Aubrey de Grey, which can be found here). In the discussion that followed, there seemed to be a disagreement which, in my opinion, hinged on a fundamental misunderstanding of how each party was defining the term ‘immortal’; one side (my side, for those of you playing at home) was using a more biological definition of immortality, while the other was using the more colloquial sense of the word (namely, never having to die).

My take on the word immortal (at least, my usage of it during this debate) was more in line with biological immortality. The basic, one-sentence summary of this concept is that the death rate of an organism is not affected by the age of the organism. With humans (and animals in general), once adulthood is reached, the probability of an individual dying in the following year increases with age (that is, the older you are, the more likely you are to die). This graph from the Australian Bureau of Statistics illustrates the idea nicely:

Biological immortality would be represented in a graph like this:

That is, once adulthood was reached, the death rate would remain static (more or less). This is not to say people would not die, just that there would be no correlation between age and death rate. There would still be a correlation between lifestyle choices and death rate (i.e. if you drink and drive, you’d have a higher probability of dying than someone who didn’t).
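To make the distinction concrete, here is a minimal sketch comparing the two patterns; the 0.5% base death rate and the doubling-per-decade pattern are my own illustrative assumptions, not ABS figures.

```python
# Compare survival under a flat (biologically immortal) death rate with
# survival under a death rate that climbs with age. Survival to a given
# age is the product of each year's survival probability.

def survival(annual_death_rates):
    p = 1.0
    for rate in annual_death_rates:
        p *= (1 - rate)
    return p

# Biologically immortal: a constant 0.5% chance of dying each year, ages 20-99.
immortal_rates = [0.005] * 80
# Ageing: the death rate doubles every decade, starting from 0.5% at age 20.
ageing_rates = [0.005 * 2 ** ((age - 20) // 10) for age in range(20, 100)]

print(survival(immortal_rates))  # chance of reaching 100: roughly 2 in 3
print(survival(ageing_rates))    # chance of reaching 100: vanishingly small
```

Under the flat rate, most adults reach 100 even though people still die every year; under the ageing rate, essentially nobody does. That is the whole difference between the two senses of ‘immortal’.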

In using ‘immortal’ in this sense, I believe it is perfectly acceptable to say that humans will one day become immortal.

It should be noted that there may still be an indirect correlation between age and death rate in a biologically immortal race; it is possible that life would become mind-numbingly boring after many hundreds or thousands of years. Thus, the older people get in a biologically immortal society, the more likely they are to choose death.

Friday, January 21, 2011

Being wrong for the right reasons

In a discussion recently with a fellow atheist, I was asked which of the two following options I would prefer; a world full of rational Christians/Muslims/Jews etc. or a world full of irrational atheists. After a brief moment of thought, I responded that I would prefer the rational religious to the irrational atheists. This perplexed my friend, who said that they would prefer the opposite. They couldn't seem to understand why, given that we are both atheists, I would choose a world where everyone is wrong (from our point of view). My response is the topic of this post.

I place a higher value on the method by which people reach their conclusions (i.e. reason, logic, evidence etc.) than on whether the conclusions they reach are correct (or whether they match my own). This is because, if someone is at least willing to base their views on the same method as mine (or society’s in general), we can actually have a valid discussion of the issue. A person who rejects reasoned argument and evidence as a source of truth can’t be reasoned with.

This brings us back to the original hypothetical; while the irrational atheists agree with a single conclusion I have drawn, the fact that they use an alternative method for reaching it (say blind faith; accepting a claim as true regardless of the evidence for or against it) means they are more likely to hold false views in other areas (politics, science etc.). And given that the theists in this hypothetical use reason and logic to form their opinions, they are more likely to have better opinions in those areas. So I would much rather a world where people came to the wrong conclusion on whether a god exists, but still used the preferable method of coming to their other conclusions.

An analogy of what I mean can be found in mathematics classes; from my experience, teachers would give marks on tests if you used the right formulas, but made an error in calculation (and therefore had the wrong answer). Some teachers would even go so far as to not give full marks for a question if working was not shown, even with the right answer. This is pretty much exactly what I mean; it is better to use the right formula, make a mistake in calculation and come to the wrong answer than to use no formula at all and get the answer right by chance.

Sunday, January 9, 2011

Intentional Utilitarianism - A New Approach?

For those of you who are unfamiliar with utilitarianism, it is a view of morality that holds that an action is right if it leads to an increase in wellbeing or, conversely, a decrease in suffering (Thiroux and Krasemann, 2009). Therefore, actions that are immoral are those that lead to a decrease in wellbeing or an increase in suffering. It is part of a much broader category of moral theories described as teleological or consequentialist; moral actions should be judged on their consequences (the other category being the deontological theories; moral actions should be judged by their adherence to rules i.e. the Ten Commandments).

One of the problems I have with utilitarianism (even though I describe myself as a utilitarian) is that it seems to discount the motivations that lead one to one’s actions. I’ll give an example of a scenario where this becomes an issue;

Scenario 1:

Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.

Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.

Under utilitarianism, both of these actions are equally immoral; they both lead to the same consequence of Eve being dead. Yet most of us would agree that there is a difference between the two (that Adam* is not as immoral as Adam). While it might be possible that our moral intuition about this is wrong (and they really are equally immoral), I think this is a case where utilitarianism fails to accurately describe what is moral.

However, I believe there is a simple addendum that can be added to utilitarianism that rectifies this (and many other similar) problems; taking intentions into account. The way to do this, in my opinion, is to have a second set of consequences: the intended consequences. These can be compared to the actual consequences to determine the morality of an action. For example, let us assign some values with which to evaluate the previous scenario;

Eve/Eve*’s death = -1,000


Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = (-1,000 + -1000)/2 = -1,000

Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = (-1,000 + 0)/2 = -500
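The averaging above can be put into code; a minimal sketch, using the arbitrary -1,000 value assigned to Eve/Eve*’s death.

```python
# Equal (50:50) weighting of actual and intended consequences,
# as in the worked example above.

def average_consequences(actual, intended):
    return (actual + intended) / 2

EVE_DEATH = -1000  # arbitrary value assigned above

adam = average_consequences(EVE_DEATH, EVE_DEATH)  # intended her death
adam_star = average_consequences(EVE_DEATH, 0)     # intended no harm
print(adam)       # -1000.0
print(adam_star)  # -500.0
```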

Therefore, we can say that Adam’s actions are more immoral than Adam*’s actions. It should be noted that while the numbers are just arbitrarily assigned, the underlying principle still holds; if the intended consequences are moral or neutral, it lessens the immorality of the actual consequence. I’ll also now provide an example in the positive to show that it works both ways;

Scenario 2:

Gill Bates gives a billion dollars to charity to help those in need and unintentionally gets a tax break.

Gill Bates* gives a billion dollars to charity for the express purpose of getting a tax break.

Again, conventional utilitarianism would say both Gill Bates and Gill Bates* are equally moral; in both cases the charity gets a billion dollars and both givers get a tax break. Yet most of us would say that Gill Bates is more moral. The solution, again, is to take their intentions into consideration;

Giving a billion dollars to charity = +500
A billionaire getting a tax break = -50 (the extra money he saved could have been better spent by the government/it is unlikely to increase his own wellbeing).


Gill Bates’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates’s intended consequences = +500 (he intended to help those in need)
Gill Bates’s average consequences = (+450 + +500)/2 = +475

Gill Bates*’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates*’s intended consequences = -50 (he only intended to get the tax break)
Gill Bates*’s average consequences = (+450 + -50)/2 = +200

Again, it is not so much that the numbers I have used accurately reflect the proportional differences in the consequences, just that the underlying principle holds; intended consequences have some bearing on the morality of an action.

It should be noted that this is just a rough outline of my idea. I can see many potential problems that need to be sorted out. An example is that, in my scenarios, I assumed that actual and intended consequences are equally important (i.e. 50:50). In my opinion, the split would be closer to 75:25 (in that actual consequences are more relevant in determining morality than intended consequences). Another example is that intentions have to be conveyed by the person and, therefore, a person could lie about their intentions to seem more moral than they are (i.e. Gill Bates* could claim he was really doing it for charity, when his true intention is simply the tax break).

So that is my idea; feel free to rip it to shreds if you see any problems or offer any advice on improving it.

UPDATED (10/01/2011)

A fellow medical student (thanks Ben) has suggested a possible addition to this take on utilitarianism; taking potential consequences into account. This is most applicable to actions that do not always have consequences, such as driving under the influence. I will again go through a scenario to demonstrate the two competing ideas (utilitarianism vs. intentional utilitarianism);

Scenario 3:

Clyde drives his car sober, not causing any accidents.

Clyde* drives his car intoxicated, not causing any accidents.

Once more, utilitarianism would have us believe that these two actions are morally equivalent; neither causes any suffering. Yet we would all recognise that driving while intoxicated is clearly immoral. This can be rectified by taking potential consequences into account;

Potential consequences of driving while intoxicated = -500
Potential consequences while driving sober = -10


Clyde’s actual consequences = 0 (he didn’t cause any suffering)
Clyde’s potential consequences = -10 (he drove while sober)
Clyde’s average consequences = (0 + -10)/2 = -5

Clyde*’s actual consequences = 0 (he didn’t cause any suffering)
Clyde*’s potential consequences = -500 (he drove while intoxicated)
Clyde*’s average consequences = (0 + -500)/2 = -250

One might ask why I am giving a negative value to driving while sober; the reason is that even a perfectly lucid individual who follows the road rules to the letter could still be involved in an accident (i.e. a child running out in front of their car with little warning). Therefore, one accepts a certain level of potential consequences when one gets behind the wheel of a vehicle.

It should also be noted that the relationship between the potential consequences of driving sober and intoxicated is proportional; I am assuming that driving intoxicated increases one’s potential of causing suffering by a factor of fifty. If, in reality, it only increases it by a factor of twenty, then the average would change accordingly.

A further point to take in is how I am defining the terms;

Potential consequences – The predicted consequences before an action is taken. For example, Russian Roulette with a six shooter, with death being a value of -1000, would have a potential consequences value of approximately -166.67 (-1000/6).

Intended consequences – The hypothetical consequences if an action goes exactly as one intends (i.e. a perfect execution of the action).

It is possible to combine all three sets of consequences: actual, intended and potential. Also, to give a more accurate representation of the completed theory (which is a long way off, assuming no one can point out any critical issues as it is built), I will proportion the consequences in a 3:2:1 ratio (i.e. actual consequences are weighted 3, potential consequences 2, and intended consequences 1). To demonstrate, I will revise the initial scenario;

Scenario 1 (revised):

Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.

Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.

Eve/Eve*’s death = -1,000
Potential consequences of intending to cause Eve/Eve*’s death = -950
Potential consequences of driving with no impairments = -10


Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s potential consequences = -950 (he intended to cause Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = ((-1,000 * 3) + (-950 * 2) + (-1000 * 1))/6 = -983.33

Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s potential consequences = -10 (he was driving with no impairments)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = ((-1,000 * 3) + (-10 * 2) + (0 * 1))/6 = -503.33
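The 3:2:1 weighting can be sketched as follows; a minimal sketch, using the assumed values from the revised scenario.

```python
# Weight actual, potential and intended consequences 3:2:1, as above.

def weighted_consequences(actual, potential, intended):
    return (3 * actual + 2 * potential + 1 * intended) / 6

adam = weighted_consequences(-1000, -950, -1000)  # intentional killing
adam_star = weighted_consequences(-1000, -10, 0)  # tragic accident
print(round(adam, 2))       # -983.33
print(round(adam_star, 2))  # -503.33
```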

The reason I do not value the potential consequences of intending to cause Eve/Eve*’s death the same as actually causing it is that it is possible to fail; one could attempt to kill someone and not succeed. Due to that possibility, it is valued slightly less negatively.

Again, any criticism will be greatly appreciated.

UPDATED (13/01/2011)

It has been pointed out by Dylan that what I really mean by potential consequences is better described as a ‘recklessness index’; a measure of the risk associated with a particular action leading to a particular outcome. I am choosing, however, to call it a ‘Risk Index’, as I believe this can be equally applied to both positive actions (like donating to charity) and negative actions (such as drink driving).

The Risk Index (RI) is based on another suggestion made by Dylan: Best Available Evidence (BAE). The BAE is the information an actor has available, prior to performing an action, about how likely different outcomes are (i.e. foresight). In my new equation (see below), I will express it as a percentage. So, to summarize;

The Risk Index (RI) is the percentage chance that an action will lead to an outcome based upon the actor’s Best Available Evidence (BAE). RI is not exactly the same as BAE; RI is the probability that a normal, rational individual would judge from the BAE. This distinguishes the case of a person who has the evidence available to accurately determine the RI, yet fails to do so through a fault a normal, rational individual would not make (i.e. not understanding the evidence well enough).

Example: a person is aware that drink driving increases the risk of crashing twenty-fold, yet believes their own risk of crashing is lower because they think they are a better-than-average driver. The RI this person calculates for their own drink driving is therefore wrong, as the twenty-fold figure is based upon all other things being equal (i.e. any driver who is drunk is twenty times as likely to crash as they would be sober).

Also, upon review of the values I calculated in Scenario 1 and Scenario 1 (revised), I feel that the average consequences of the two individuals (Adam and Adam*) are too close, given their vast difference in moral responsibility (intending to kill versus a simple mistake). As such, I am altering my equation to the following;

Consequences attributable to actor = ((2 * AC) + (1 * IC))/3 * RI


AC = Actual consequences
IC = Intended consequences
RI = Risk Index

Rather than revise the previous scenario, I will begin with two new examples, one negative and one positive;

Scenario 4:

John intentionally punches Jim in the face.
John*, due to waving his arms about, unintentionally punches Jim* in the face.


Jim/Jim* being punched in the face = -50
RI of attempting to punch someone = 95% (in this case, I am assuming that John attempted his punch while Jim was not looking; if the scenario were different and Jim was expecting the punch, the RI would be lower)
RI of waving arms about = 6%


John’s AC = -50 (he punched Jim in the face)
John’s IC = -50 (he intended to punch Jim in the face)
John’s RI = 95% (he attempted to punch Jim in the face)
Consequences attributable to John = ((2 * -50) + (1* -50))/3 * 95% = -47.5

John*’s AC = -50 (he punched Jim in the face)
John*’s IC = 0 (he didn’t intend to punch Jim in the face)
John*s RI = 6% (he was waving his arms)
Consequences attributable to John* = ((2 * -50) + (1 * 0))/3 * 6% = -2
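The revised equation can be expressed as a short function; a minimal sketch, reproducing the assumed values from Scenario 4.

```python
# Consequences attributable to actor = ((2 * AC) + (1 * IC))/3 * RI,
# where AC = actual consequences, IC = intended consequences,
# and RI = Risk Index (a probability).

def attributable_consequences(ac, ic, ri):
    return (2 * ac + 1 * ic) / 3 * ri

john = attributable_consequences(-50, -50, 0.95)     # intentional punch
john_star = attributable_consequences(-50, 0, 0.06)  # flailing arms
print(round(john, 2))       # -47.5
print(round(john_star, 2))  # -2.0
```

The same function reproduces the charity figures in Scenario 5 below; only the inputs change.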

And now for the positive example (negatives are so much simpler);

Scenario 5:

Jane gives $1000 to a well-established charity organisation, and the money is put to good use.
Jane* gives $1000 to a newly created charity organisation, and the money is put to good use.

NOTE: the two charity organisations share the exact same goal; therefore the only variable between them is the experience that each one of them has.


The charity effectively using the $1000 = +100
RI of giving to a reputable charity = 98% (in that, there is a small possibility that the money will be wasted)
RI of giving to a newly founded charity = 75% (there is a greater chance that this charity may be ineffective, as it has yet to demonstrate it is reliable)


Jane’s AC = +100 (her money was effectively used by the charity)
Jane’s IC = +100 (she intended her money to be used effectively)
Jane’s RI = 98% (it is a reputable charity)
Consequences attributable to Jane = ((2 * +100) + (1 * +100))/3 * 98% = 98

Jane*’s AC = +100 (her money was effectively used by the charity)
Jane*’s IC = +100 (she intended her money to be used effectively)
Jane*’s RI = 75% (it is a new charity)
Consequences attributable to Jane* = ((2 * +100) + (1 * +100))/3 * 75% = 75

The reasoning behind this is that, even though the money was effectively used by both charities, there was a greater risk of the money being wasted when donating to the new charity. This does not imply that donating to new charities is in any way immoral (the figure is still positive), just that, given the risk associated with the new charity, it is slightly more moral to give to the well-established one. This of course becomes null and void when the choice is between a new charity (75) and no charity at all (0).

I know I sound like a broken record by now, but comment, criticise and otherwise make me your bitch (intellectually speaking, of course).


THIROUX, J. & KRASEMANN, K. 2009. Ethics: Theory and Practice, Pearson International Education.