Friday, January 21, 2011

Being wrong for the right reasons

In a recent discussion with a fellow atheist, I was asked which of the two following options I would prefer: a world full of rational Christians/Muslims/Jews etc. or a world full of irrational atheists. After a brief moment of thought, I responded that I would prefer the rational religious to the irrational atheists. This perplexed my friend, who said that they would prefer the opposite. They couldn't seem to understand why, given that we are both atheists, I would choose a world where everyone is wrong (from our point of view). My response is the topic of this post.

I place a higher value on the method by which people reach their conclusions (i.e. reason, logic, evidence etc.) than on whether the conclusions they reach are correct (or whether they match my own). This is because, if someone is at least willing to base their views on the same method as mine (or society's in general), we can actually have a valid discussion of the issue. A person who rejects reasoned argument and evidence as a source of truth can’t be reasoned with.

This brings us back to the original hypothetical; while the irrational atheists agree with a single conclusion I have drawn, the fact that they use an alternative method for reaching it (say blind faith: accepting a claim as true regardless of the evidence for or against it) means they are more likely to hold false views in other areas (politics, science etc.). And since the theists in this hypothetical are using reason and logic to form their opinions, they are more likely to have better opinions in those areas. So I would much prefer a world where people came to the wrong conclusion on the question of whether a god exists, but were still using the preferable method to reach their other conclusions.

An analogy for what I mean can be found in mathematics classes; in my experience, teachers would give marks on tests if you used the right formulas but made an error in calculation (and therefore got the wrong answer). Some teachers would even go so far as to withhold full marks for a question if working was not shown, even with the right answer. This is pretty much exactly what I mean; it is better to use the right formula, make a mistake in calculation and come to the wrong answer than to use no formula at all and get the answer right by chance.

Sunday, January 9, 2011

Intentional Utilitarianism - A New Approach?

For those of you who are unfamiliar with utilitarianism, it is a view of morality that holds that an action is right if it leads to an increase in wellbeing or, conversely, a decrease in suffering (Thiroux and Krasemann, 2009). Therefore, actions that are immoral are those that lead to a decrease in wellbeing or an increase in suffering. It is part of a much broader category of moral theories described as teleological or consequentialist (moral actions should be judged on their consequences), the other category being the deontological theories (moral actions should be judged by their adherence to rules, e.g. the Ten Commandments).

One of the problems I have with utilitarianism (even though I describe myself as a utilitarian) is that it seems to discount the motivations behind one's actions. I’ll give an example of a scenario where this becomes an issue:

Scenario 1:

Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.

Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.

Under utilitarianism, both of these actions are equally immoral; they both lead to the same consequence of Eve being dead. Yet most of us would agree that there is a difference between the two (that Adam* is not as immoral as Adam). While it is possible that our moral intuition about this is wrong (and they really are equally immoral), I think this is a case where utilitarianism fails to accurately describe what is moral.

However, I believe there is a simple addendum to utilitarianism that rectifies this (and many similar) problems: taking intentions into account. The way to do this, in my opinion, is to have a second set of consequences: the intended consequences. The intended consequences can be compared to the actual consequences to determine the morality of an action. For example, let us assign some values with which to evaluate the previous scenario:

Eve/Eve*’s death = -1,000

Therefore;

Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = (-1,000 + -1,000)/2 = -1,000

Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = (-1,000 + 0)/2 = -500

Therefore, we can say that Adam’s actions are more immoral than Adam*’s actions. It should be noted that while the numbers are arbitrarily assigned, the underlying principle still holds; if the intended consequences are moral or neutral, it lessens the immorality of the actual consequence. I’ll now provide an example in the positive to show that it works both ways:
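This 50:50 averaging can be sketched in a few lines of Python (purely illustrative; the function name is my own, and the values are the arbitrary ones assigned above):

```python
# 50:50 average of actual and intended consequences (the initial proposal)
def average_consequences(actual, intended):
    return (actual + intended) / 2

EVE_DEATH = -1000  # arbitrary value assigned to Eve/Eve*'s death

adam = average_consequences(actual=EVE_DEATH, intended=EVE_DEATH)  # intended her death
adam_star = average_consequences(actual=EVE_DEATH, intended=0)     # intended no harm

print(adam, adam_star)  # -1000.0 -500.0
```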

Scenario 2:

Gill Bates gives a billion dollars to charity to help those in need and gets a tax break unintentionally.

Gill Bates* gives a billion dollars to charity for the express purpose of getting a tax break.

Again, conventional utilitarianism would say both Gill Bates and Gill Bates* are equally moral; in both cases the charity gets a billion dollars and they both get a tax break. Yet most of us would say that Gill Bates is more moral. The solution, again, is to take their intentions into consideration:

Giving a billion dollars to charity = +500
A billionaire getting a tax break = -50 (the extra money he saved could have been better spent by the government/it is unlikely to increase his own wellbeing).

Therefore;

Gill Bates’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates’s intended consequences = +500 (he intended to help those in need)
Gill Bates’s average consequences = (+450 + +500)/2 = +475

Gill Bates*’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates*’s intended consequences = -50 (he only intended to get the tax break)
Gill Bates*’s average consequences = (+450 + -50)/2 = +200

Again, it is not so much that the numbers I have used accurately reflect the proportional differences in the consequences, just that the underlying principle holds; intended consequences have some bearing on the morality of an action.

It should be noted that this is just a rough outline of my idea, and I can see many potential problems that need to be sorted out. One example is that, in my scenarios, I assumed actual and intended consequences are equally important (i.e. 50:50). In my opinion, it would be closer to 75:25 (in that actual consequences are more relevant to determining morality than intended consequences). Another is that intentions have to be conveyed by the person and, therefore, a person could lie about their intentions to seem more moral than they are (i.e. Gill Bates* could lie and say that he was really doing it for charity, when his true intention was simply the tax break).

So that is my idea; feel free to rip it to shreds if you see any problems or offer any advice on improving it.

UPDATED (10/01/2011)

A fellow medical student (thanks Ben) has suggested a possible addition to this take on utilitarianism: taking potential consequences into account. This is most applicable to actions that do not always have consequences, such as driving while under the influence. I will again go through a scenario to demonstrate the two competing ideas (utilitarianism vs. intentional utilitarianism):

Scenario 3:

Clyde drives his car sober, not causing any accidents.

Clyde* drives his car intoxicated, not causing any accidents.

Once more, utilitarianism would have us believe that these two actions are morally equivalent; neither causes any suffering. Yet we would all recognise that driving while intoxicated is clearly immoral. This can be rectified by taking potential consequences into account:

Potential consequences of driving while intoxicated = -500
Potential consequences of driving sober = -10

Therefore;

Clyde’s actual consequences = 0 (he didn’t cause any suffering)
Clyde’s potential consequences = -10 (he drove while sober)
Clyde’s average consequences = (0 + -10)/2 = -5

Clyde*’s actual consequences = 0 (he didn’t cause any suffering)
Clyde*’s potential consequences = -500 (he drove while intoxicated)
Clyde*’s average consequences = (0 + -500)/2 = -250

One might ask why I am giving a negative value to driving while sober; the reason is that even a perfectly lucid individual who follows the road rules to the letter could still be involved in an accident (i.e. a child running out in front of their car with little warning). Therefore, one accepts a certain level of potential consequences when one gets behind the wheel of a vehicle.

It should also be noted that the relationship between the potential consequences of driving sober and intoxicated is proportional; I am assuming that driving intoxicated increases one’s potential for causing suffering by a factor of fifty. If, in reality, it only increases it by a factor of twenty, then the average would change accordingly.

A further point to note is how I am defining the terms:

Potential consequences – The predicted consequences before an action is taken. For example, Russian Roulette with a six shooter, with death being a value of -1,000, would have a potential consequences value of -166.67 (-1,000/6).

Intended consequences – The hypothetical consequences if an action goes exactly as one intends (i.e. a perfect execution of the action).

It is possible to combine all three sets of consequences: actual, intended and potential. To give a more accurate representation of the completed theory (which is a long way off, assuming no one can point out any critical issues as it builds), I will weight the consequences in a 3:2:1 ratio (i.e. actual consequences are 3, potential consequences are 2, and intended consequences are 1). To demonstrate, I will revise the initial scenario:

Scenario 1 (revised):

Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.

Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.

Eve/Eve*’s death = -1,000
Potential consequences of intending to cause Eve/Eve*’s death = -950
Potential consequences of driving with no impairments = -10

Therefore;

Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s potential consequences = -950 (he intended to cause Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = ((-1,000 * 3) + (-950 * 2) + (-1,000 * 1))/6 = -983.33

Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s potential consequences = -10 (he was driving with no impairments)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = ((-1,000 * 3) + (-10 * 2) + (0 * 1))/6 = -503.33

The reason I do not value the potential consequences of intending to cause Eve/Eve*’s death as highly as actually causing her death is that it is possible to fail; one could attempt to kill someone and not succeed. Due to that possibility, it is valued slightly less negatively.
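The 3:2:1 weighting can likewise be sketched in Python (again purely illustrative, using the arbitrary values assigned in this scenario):

```python
# 3:2:1 weighting of actual, potential and intended consequences
def weighted_consequences(actual, potential, intended):
    return (3 * actual + 2 * potential + 1 * intended) / 6

adam = weighted_consequences(actual=-1000, potential=-950, intended=-1000)
adam_star = weighted_consequences(actual=-1000, potential=-10, intended=0)

print(round(adam, 2), round(adam_star, 2))  # -983.33 -503.33
```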

Again, any criticism will be greatly appreciated.

UPDATED (13/01/2011)

It has been pointed out by Dylan that what I really mean by potential consequences is better described as a ‘recklessness index’: a measure of the risk associated with a particular action leading to a particular outcome. I am choosing, however, to call it a ‘Risk Index’, as I believe it can be applied equally to both positive actions (like donating to charity) and negative actions (such as drink driving).

The Risk Index (RI) is based on another suggestion of Dylan’s: Best Available Evidence (BAE). The BAE is the information an actor has available on how likely different outcomes are, prior to performing the action (i.e. foresight). In my new equation (see below), I will express it as a percentage. So, to summarise:

The Risk Index (RI) is the percentage chance that an action will lead to an outcome, as judged from the actor’s Best Available Evidence (BAE). RI is not exactly the same as BAE; RI is the probability that a normal, rational individual would assign based on the BAE. This rules out the case of a person who has the evidence needed to accurately determine the RI, yet fails to do so through a fault of their own that a normal, rational individual would not make (i.e. not understanding the evidence well enough).

Example: a person is aware that drink driving increases the risk of crashing twenty-fold, yet believes their own risk of crashing is lower because they think they are a better-than-average driver. The RI this person calculates for their drink driving is therefore wrong, as the twenty-fold increase is based upon all other things being equal (i.e. any driver who is drunk is twenty times as likely to crash as they would be sober).

Also, upon review of the values I calculated in Scenario 1 and Scenario 1 (revised), I feel that the average consequences of the two individuals (Adam and Adam*) are too close, given their vast difference in moral responsibility (intending to kill versus a simple mistake). As such, I am altering my equation as follows:

Consequences attributable to actor = ((2 * AC) + (1 * IC))/3 * RI

Where;

AC = Actual consequences
IC = Intended consequences
RI = Risk Index

Rather than revise the previous scenario, I will begin with two new examples, one negative and one positive;

Scenario 4:

John intentionally punches Jim in the face.
John*, due to waving his arms about, unintentionally punches Jim* in the face.

Where;

Jim/Jim* being punched in the face = -50
RI of attempting to punch someone = 95% (in this case, I am assuming that John attempted his punch while Jim was not looking; if the scenario were different and Jim were expecting the punch, the RI would be lower)
RI of waving arms about = 6%

Therefore;

John’s AC = -50 (he punched Jim in the face)
John’s IC = -50 (he intended to punch Jim in the face)
John’s RI = 95% (he attempted to punch Jim in the face)
Consequences attributable to John = ((2 * -50) + (1* -50))/3 * 95% = -47.5

John*’s AC = -50 (he punched Jim in the face)
John*’s IC = 0 (he didn’t intend to punch Jim in the face)
John*s RI = 6% (he was waving his arms)
Consequences attributable to John* = ((2 * -50) + (1 * 0))/3 * 6% = -2
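The new equation can be sketched in Python as well (illustrative only; the function name is my own, the RI is expressed as a fraction rather than a percentage, and the values are the arbitrary ones assigned above):

```python
# ((2 * AC) + (1 * IC)) / 3, scaled by the Risk Index (here a fraction, 0-1)
def attributable_consequences(actual, intended, risk_index):
    return (2 * actual + 1 * intended) / 3 * risk_index

john = attributable_consequences(actual=-50, intended=-50, risk_index=0.95)
john_star = attributable_consequences(actual=-50, intended=0, risk_index=0.06)

print(round(john, 2), round(john_star, 2))  # -47.5 -2.0
```

The same function reproduces the charity example below (e.g. +100 at an RI of 98% gives 98).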

And now for the positive example (negatives are so much simpler);

Scenario 5:

Jane gives $1000 to a well-established charity organisation, and the money is put to good use.
Jane* gives $1000 to a newly created charity organisation, and the money is put to good use.

NOTE: the two charity organisations share the exact same goal; therefore the only variable between them is the experience that each one of them has.

Where;

The charity effectively using the $1000 = +100
RI of giving to a reputable charity = 98% (in that, there is a small possibility that the money will be wasted)
RI of giving to a newly founded charity = 75% (there is a greater chance that this charity may be ineffective, as it has yet to demonstrate it is reliable)

Therefore;

Jane’s AC = +100 (her money was effectively used by the charity)
Jane’s IC = +100 (she intended her money to be used effectively)
RI = 98% (it is a reputable charity)
Consequences attributable to Jane = ((2 * +100) + (1 * +100))/3 * 98% = 98

Jane*’s AC = +100 (her money was effectively used by the charity)
Jane*’s IC = +100 (she intended her money to be used effectively)
RI = 75% (it is a new charity)
Consequences attributable to Jane* = ((2 * +100) + (1 * +100))/3 * 75% = 75

The reasoning behind this is that, even though the money was effectively used by both charities, there was a greater risk of wasted money in donating to the new charity. This does not imply that donating to new charities is in any way immoral (it still has a positive figure), just that, given the risk associated with giving money to the new charity, it is slightly more moral to give it to the well-established one. This of course becomes null and void when the choice is between a new charity (75) and no charity at all (0).

I know I sound like a broken record by now, but comment, criticise and otherwise make me your bitch (intellectually speaking, of course).

References:

Thiroux, J. & Krasemann, K. 2009. Ethics: Theory and Practice. Pearson International Education.