My thoughts on science, philosophy, politics, religion and everything else.
Friday, November 11, 2011
Short and Sharp: A History of Dishonesty
Tuesday, September 6, 2011
Why I Care (And Why You Should Too)
- These are topics that are personal in nature; therefore people should be free to decide for themselves what they wish to believe
- These are topics that people will never change their opinions on
- These are topics that are too serious to discuss
- These topics are unimportant to the individual who does not wish to discuss them
Sunday, July 10, 2011
Short and Sharp - Limitation of Liberty
Thursday, July 7, 2011
Is Donating Bone Marrow A Charitable Act?
A man is out for a late night walk down a country road and he notices a car stopped on the side of the road with its hazard lights on. Upon investigation, he finds a woman breathing heavily and clutching her chest. She tells him she believes she is having a heart attack. Unfortunately, neither of them has a mobile phone on them. She asks him to use her car to drive her to the hospital, as she is in too much pain to do so. The man does have a license. After a few seconds, the man says that he would rather not, as the potential risk of crashing the car is too high.
Saturday, June 25, 2011
Short and Sharp: What if you're wrong?
Suppose a friend told you they had purchased a new car; would you believe them? As this is a fairly unremarkable claim (many people own cars), it would require very little evidence for you to believe them, perhaps even just their word alone. However, suppose now that you were relying on this friend to drive you to a very important meeting. Would you be willing to rely on just their word, or would you require more evidence now that the claim has the potential to impact upon your life? If you believe them, and they are wrong (either by lying or just being misinformed; say they thought the car would be ready for their use on that day, but it was delayed), you are now stuck without a way to get to your meeting.
Wednesday, June 22, 2011
'Ethical' Egoists
An earthquake occurs in China and buries a man’s family in rubble. In the process of digging them out, he discovers that across the road an important official and his family are buried. The man decides to stop trying to rescue his family and rescue the official and his family instead. He successfully rescues them, but his own family dies in the process. When asked why he made the choice that he did, he said that by rescuing the official, he could go on to coordinate the rescue effort (by virtue of having extensive knowledge of the local area) and end up saving more people.
A serial killer has you locked in a chair. In front of you is your family in a cage, ten families you do not know in another. You are given the choice of who dies; either your family or the ten families. Which would you choose?
Wednesday, June 15, 2011
Short and Sharp - Reformation of Child Molesters
Saturday, May 7, 2011
Common Questions to Atheists
Sunday, April 3, 2011
Short and Sharp - Conformity
Person A’s desires (ranked by preference):
- Person A wants to be a porn-star.
- Person A wants to be a nurse.
- Person A wants to conform to a group.
Saturday, March 26, 2011
Short and Sharp - Basic Beliefs
- I see a tree.
- I had breakfast this morning.
- That person is angry.
Saturday, February 5, 2011
Biological Immortality
Friday, January 21, 2011
Being wrong for the right reasons
I place a higher value on the method by which people reach their conclusions (i.e. reason, logic, evidence etc.) than on whether the conclusions they reach are correct (or whether they match my own). This is because, if someone is at least willing to base their views on the same method as mine (or society in general), we can actually have a valid discussion of the issue. A person who rejects reasoned argument and evidence as a source of truth can’t be reasoned with.
This brings us back to the original hypothetical; while the irrational atheists agree with a single conclusion I have drawn, the fact that they use an alternative method for deducing it (say blind faith; accepting a claim as true regardless of the evidence for or against it) means they are more likely to have false views in other areas (politics, science etc.). And given that the theists in this hypothetical are using reason and logic to come to their opinions, they are more likely to have better opinions in these areas. So I would much rather have a world where people came to the wrong conclusion on whether a god exists, but still used the preferable method when coming to their other conclusions.
An analogy of what I mean can be found in mathematics classes; from my experience, teachers would give marks on tests if you used the right formulas, but made an error in calculation (and therefore had the wrong answer). Some teachers would even go so far as to not give full marks for a question if working was not shown, even with the right answer. This is pretty much exactly what I mean; it is better to use the right formula, make a mistake in calculation and come to the wrong answer than to use no formula at all and get the answer right by chance.
Sunday, January 9, 2011
Intentional Utilitarianism - A New Approach?
Scenario 1:
Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.
Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.
However, I believe there is a simple addendum that can be added to utilitarianism that rectifies this (and many other similar) problems; taking intentions into account. The way to do this, in my opinion, is to have a second set of consequences; the intended consequences. The intended consequences can be compared to the actual consequences to determine the morality of an action. For example, let us assign some values upon which to evaluate the previous scenario;
Eve/Eve*’s death = -1,000
Therefore;
Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = (-1,000 + -1,000)/2 = -1,000
Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = (-1,000 + 0)/2 = -500
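The averaging step above is easy to sketch in code. Here is a minimal Python version; the function name is mine and the -1,000 figure is just the illustrative value assigned in the scenario, not anything empirically derived:

```python
def average_consequences(actual, intended):
    """Equal-weight (50:50) average of actual and intended consequences."""
    return (actual + intended) / 2

EVE_DEATH = -1000  # illustrative value from the scenario

# Adam intended and caused Eve's death; Adam* caused it but intended no harm.
adam = average_consequences(actual=EVE_DEATH, intended=EVE_DEATH)
adam_star = average_consequences(actual=EVE_DEATH, intended=0)
print(adam)       # -1000.0
print(adam_star)  # -500.0
```

Changing the 50:50 weighting (as discussed further below) would change how far apart Adam and Adam* end up.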
Scenario 2:
Gill Bates gives a billion dollars to charity to help those in need and gets a tax break unintentionally.
Gill Bates* gives a billion dollars to charity for the express purpose of getting a tax break.
Again, conventional utilitarianism would say both Gill Bates and Gill Bates* are equally moral; in both cases the charity gets a billion dollars and they both get a tax break. Yet, most of us would say that Gill Bates is more moral. The solution, again, is to take their intentions into consideration;
Giving a billion dollars to charity = +500
A billionaire getting a tax break = -50 (that extra money he saved could have been better spent by the government/it is unlikely to increase his own well being).
Therefore;
Gill Bates’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates’s intended consequences = +500 (he intended to help those in need)
Gill Bates’s average consequences = (+450 + +500)/2 = +475
Gill Bates*’s actual consequences = +500 (he gave a billion dollars to charity) and -50 (he got a tax break) = +450
Gill Bates*’s intended consequences = -50 (he only intended to get the tax break)
Gill Bates*’s average consequences = (+450 + -50)/2 = +200
Again, it is not so much that the numbers I have used accurately reflect the proportional differences in the consequences, just that the underlying principle holds; intended consequences have some bearing on the morality of an action.
It should be noted that this is just a rough outline of my idea. I can see many potential problems that need to be sorted out. One example is that, in my scenarios, I assumed that actual and intended consequences are equally important (i.e. 50:50). In my opinion, it would be closer to 75:25 (in that actual consequences are more relevant in determining morality than intended consequences). Another example is that intentions have to be conveyed by the person and, therefore, a person could lie about their intentions to seem more moral than they are (i.e. Gill Bates* could lie and say that he was really doing it for charity, when his true intention is simply the tax break).
So that is my idea; feel free to rip it to shreds if you see any problems or offer any advice on improving it.
UPDATED (10/01/2011)
A fellow medical student (thanks Ben) has suggested to me a possible addition to this take on utilitarianism; taking potential consequences into account. This is most applicable to actions that do not always have consequences, such as driving while under the influence. I will again go through a scenario to demonstrate the two competing ideas (utilitarianism vs. intentional utilitarianism);
Scenario 3:
Clyde drives his car sober, not causing any accidents.
Clyde* drives his car intoxicated, not causing any accidents.
Once more, utilitarianism would have us believe that these two actions are morally equivalent; neither causes any suffering. Yet, we would all recognise that driving while intoxicated is clearly immoral. This can be rectified by taking potential consequences into account;
Potential consequences of driving while intoxicated = -500
Potential consequences of driving while sober = -10
Therefore;
Clyde’s actual consequences = 0 (he didn’t cause any suffering)
Clyde’s potential consequences = -10 (he drove while sober)
Clyde’s average consequences = (0 + -10)/2 = -5
Clyde*’s actual consequences = 0 (he didn’t cause any suffering)
Clyde*’s potential consequences = -500 (he drove while intoxicated)
Clyde*’s average consequences = (0 + -500)/2 = -250
One might ask why I am giving a negative value to driving while sober; the reason is that even a perfectly lucid individual who follows the road rules to the letter could still be involved in an accident (i.e. a child running out in front of their car with little warning). Therefore, one accepts a certain level of potential consequences when one gets behind the wheel of a vehicle.
It should also be noted that the relationship between the potential consequences of driving sober and intoxicated is proportional; I am assuming that driving intoxicated increases one’s potential of causing suffering by a factor of fifty. If, in reality, it only increases it by a factor of twenty, then the average would change accordingly.
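The same equal-weight averaging applies to actual and potential consequences, and reproduces the Clyde figures above (again, the function name is mine and the values are the post's illustrative ones):

```python
def average_with_potential(actual, potential):
    """Equal-weight average of actual and potential consequences."""
    return (actual + potential) / 2

clyde = average_with_potential(actual=0, potential=-10)        # sober driver
clyde_star = average_with_potential(actual=0, potential=-500)  # intoxicated driver
print(clyde)       # -5.0
print(clyde_star)  # -250.0
```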
A further point to take in is how I am defining the terms;
Potential consequences – The predicted consequences before an action is taken. For example, Russian Roulette with a six-shooter, with death being a value of -1,000, would have a potential consequences value of -166.67 (-1,000/6).
Intended consequences – The hypothetical consequences if an action goes exactly as one intends (i.e. a perfect execution of the action).
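The definition of potential consequences above is essentially an expected value: the outcome's value weighted by its probability. A small sketch of the Russian Roulette example (helper name is mine):

```python
def potential_consequences(outcome_value, probability):
    """Probability-weighted value of an outcome, judged before acting."""
    return outcome_value * probability

# Russian Roulette with a six-shooter: a 1-in-6 chance of death (valued at -1,000).
roulette = potential_consequences(-1000, 1 / 6)
print(round(roulette, 2))  # -166.67
```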
It is possible to combine all three sets of consequences; actual, intended and potential. Also, to give a more accurate representation of the completed theory (which is a long way off, assuming no one can point out any critical issues as it builds), I will proportion the consequences in a 3:2:1 ratio (i.e. actual consequences are 3, potential consequences are 2, and intended consequences are 1). To demonstrate an example of this, I will revise the initial scenario;
Scenario 1 (revised):
Adam is a 22 year old who, in a fit of rage, intentionally runs over his girlfriend Eve with his car and kills her.
Adam* is a 22 year old who, through a small mistake on his part, runs over his girlfriend Eve* with his car and kills her.
Eve/Eve*’s death = -1,000
Potential consequences of intending to cause Eve/Eve*’s death = -950
Potential consequences of driving with no impairments = -10
Therefore;
Adam’s actual consequences = -1,000 (he caused Eve’s death)
Adam’s potential consequences = -950 (he intended to cause Eve’s death)
Adam’s intended consequences = -1,000 (he intended to cause Eve’s death)
Adam’s average consequences = ((-1,000 * 3) + (-950 * 2) + (-1,000 * 1))/6 = -983.33
Adam*’s actual consequences = -1,000 (he caused Eve*’s death)
Adam*’s potential consequences = -10 (he was driving with no impairments)
Adam*’s intended consequences = 0 (he intended no harm)
Adam*’s average consequences = ((-1,000 * 3) + (-10 * 2) + (0 * 1))/6 = -503.33
The reason I do not value the potential consequences of intending to cause Eve/Eve*’s death the same as actually causing her death is that it is possible to fail; one could attempt to kill someone and not succeed. Due to that possibility, it is valued slightly less negatively.
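The 3:2:1 weighting used in the revised scenario can be written as a single helper (names and values are illustrative, matching the figures assigned in the scenario):

```python
def weighted_consequences(actual, potential, intended):
    """Combine the three consequence sets in a 3:2:1 ratio (actual:potential:intended)."""
    return (3 * actual + 2 * potential + 1 * intended) / 6

adam = weighted_consequences(actual=-1000, potential=-950, intended=-1000)
adam_star = weighted_consequences(actual=-1000, potential=-10, intended=0)
print(round(adam, 2))       # -983.33
print(round(adam_star, 2))  # -503.33
```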
Again, any criticism will be greatly appreciated.
UPDATED (13/01/2011)
It has been pointed out by Dylan that what I really mean by potential consequences is better described as a ‘recklessness index’; a measure of the risk associated with a particular action leading to a particular outcome. I am choosing, however, to call it a ‘Risk Index’, as I believe this can be equally applied to both positive actions (like donating to charity) and negative actions (such as drink driving).
The Risk Index (RI) is based on another suggestion made by Dylan; Best Available Evidence (BAE). The BAE is the information an actor has available to them on how likely different outcomes are based on their actions prior to performing them (i.e. foresight). In my new equation (see below), I will express it as a percentage. So, to summarize;
The Risk Index (RI) is the percentage chance that an action will lead to an outcome based upon the actor’s Best Available Evidence (BAE). RI is not exactly the same as BAE; RI is what a normal, rational individual would judge the probability to be based on the BAE. This excludes the case of a person who has the evidence available to accurately determine the RI, yet fails to do so due to a fault on their part that a normal, rational individual would not make (i.e. not understanding the evidence well enough).
Example: a person is aware that drink driving increases the risk of crashing by twenty times, yet believes their own risk of crashing is lower because they think they are a better-than-average driver. The RI this person would calculate for their own drink driving is therefore wrong, as the twenty-times-increase figure is based upon all other things being equal (i.e. any driver who is drunk is 20 times as likely to crash as they would be sober).
Also, upon review of the values I calculated in Scenario 1 and Scenario 1 (revised), I feel that the average consequences of the two individuals (Adam and Adam*) are too close, given their vast differences in moral responsibility (intending to kill versus a simple mistake). As such, I am altering my equation to be as follows;
Consequences attributable to actor = ((2 * AC) + (1 * IC))/3 * RI
Where;
AC = Actual consequences
IC = Intended consequences
RI = Risk Index
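The revised equation can be sketched directly. The helper below (name is mine) takes RI as a fraction (0.95 for 95%) and reproduces the Scenario 4 figures worked through below:

```python
def attributable_consequences(ac, ic, ri):
    """Consequences attributable to the actor: ((2 * AC) + (1 * IC)) / 3 * RI."""
    return (2 * ac + 1 * ic) / 3 * ri

# Scenario 4 values: a punch to the face = -50.
john = attributable_consequences(ac=-50, ic=-50, ri=0.95)     # deliberate punch
john_star = attributable_consequences(ac=-50, ic=0, ri=0.06)  # accidental punch
print(round(john, 2))       # -47.5
print(round(john_star, 2))  # -2.0
```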
Rather than revise the previous scenario, I will begin with two new examples, one negative and one positive;
Scenario 4:
John intentionally punches Jim in the face.
John*, due to waving his arms about, unintentionally punches Jim* in the face.
Where;
Jim/Jim* being punched in the face = -50
RI of attempting to punch someone = 95% (in this case, I am assuming that John has attempted his punch while Jim was not looking; therefore, if the scenario was different and John was aware that Jim were expecting the punch, the RI would be lower)
RI of waving arms about = 6%
Therefore;
John’s AC = -50 (he punched Jim in the face)
John’s IC = -50 (he intended to punch Jim in the face)
John’s RI = 95% (he attempted to punch Jim in the face)
Consequences attributable to John = ((2 * -50) + (1 * -50))/3 * 95% = -47.5
John*’s AC = -50 (he punched Jim in the face)
John*’s IC = 0 (he didn’t intend to punch Jim in the face)
John*s RI = 6% (he was waving his arms)
Consequences attributable to John* = ((2 * -50) + (1 * 0))/3 * 6% = -2
And now for the positive example (negatives are so much simpler);
Scenario 5:
Jane gives $1000 to a well-established charity organisation, and the money is put to good use.
Jane* gives $1000 to a newly created charity organisation, and the money is put to good use.
NOTE: the two charity organisations share the exact same goal; therefore, the only variable between them is the experience that each one of them has.
Where;
The charity effectively using the $1000 = +100
RI of giving to a reputable charity = 98% (in that, there is a small possibility that the money will be wasted)
RI of giving to a newly founded charity = 75% (there is a greater chance that this charity may be ineffective, as it has yet to demonstrate it is reliable)
Therefore;
Jane’s AC = +100 (her money was effectively used by the charity)
Jane’s IC = +100 (she intended her money to be used effectively)
Jane’s RI = 98% (it is a reputable charity)
Consequences attributable to Jane = ((2 * +100) + (1 * +100))/3 * 98% = 98
Jane*’s AC = +100 (her money was effectively used by the charity)
Jane*’s IC = +100 (she intended her money to be used effectively)
Jane*’s RI = 75% (it is a new charity)
Consequences attributable to Jane* = ((2 * +100) + (1 * +100))/3 * 75% = 75
The reasoning behind this is that, even though the money was effectively used by both charities, there was a greater risk of the money being wasted by donating to the new charity. This does not imply that donating to new charities is in any way immoral (it still has a positive figure), just that, given the potential risk associated with giving money to the new charity, it is slightly more moral to give it to the well-established charity. This of course becomes null and void when the choice is between a new charity (75) and no charity at all (0).
I know I sound like a broken record by now, but comment, criticise and otherwise make me your bitch (intellectually speaking, of course).