Morality and the Subjunctive: Ordering Hypothetical Situations

Read that carefully – I wrote “subjunctive”, not subjective. There’s an important difference between the two. A whole bunch has been written about right and wrong and whether they are subjective or objective. That discussion has gone on a long time and gotten us nowhere useful. Even people who agree that objective morality exists can’t agree on what is moral, what isn’t, or how we can decide. Their agreement is not much help to someone who wants to know what is right.

So let’s take a look at morality from another angle – that of language and grammar. The subjunctive mood in language deals with hypothetical situations. “If you were to come over to my house and threaten me, I would ask you to leave.” This statement describes two possible situations and a direct path from one to the other. Hypothetical situation A (you come over and threaten me) is linked to hypothetical situation B (I ask you to leave) with certainty.

There are many different ways I could respond if you were to come over to my house and threaten me. Let’s consider three possible responses:

1) I ask you to leave my house.
2) I rip your windpipe out of your throat.
3) I inform you that you are on my property, that I am holding a weapon, and that I will use the weapon on you if you do not leave.

These three responses can all be considered “possible futures” from the situation where you come to my house and threaten me. In the language of physics and the multiverse, they are three alternate configurations in different realities, each reachable through a different path from hypothetical situation A. In the language of games, when you come to my house and threaten me, you have made a move which alters the game board. There are now many moves available for me to choose from; I could choose any of the above, or I could choose to spin in a circle and put my thumb on my nose. In the language of morality, some of those responses are “right” and some are “wrong”. There is widespread disagreement about which of those choices is right, if any. Even people who agree that it is meaningful and correct to insist that “right and wrong are objective truths” will disagree over just which choices are objectively right. Morality also says some choices – worldtracks between configurations in the language of the multiverse, or moves in the language of the game – are invalid. Even the people who agree with that claim, however, rarely agree on which choices are valid and which are invalid. They agree only on the validity of ‘valid choices’ as a concept.

Like morality, the languages of physics and games both give us the ability to determine which moves are valid and which are not. I can’t, for example, construct a perpetual motion machine in response to you threatening me. Nor can I reduce the total amount of entropy in the universe, or turn you into a bowl of jello just by thinking about it. Those moves are invalid. The reasons I can’t do those things have to do with the laws of physics. Just as in morality, when it comes to physics and games, people agree that “you can’t just do whatever you want” – but unlike in morality, they also agree on which things can be done and which can’t. Everyone will accept the claim “you can’t build a perpetual motion machine in response to a person who threatens you in your home.” In games – such as chess – there is perfect agreement about which moves are valid and which aren’t. In physics, there is less agreement, but it is still far stronger than what exists in morality. In physics, people disagree only over what is possible; in morality, we aren’t just disagreeing over what is possible but over what should or ought to be done.

So we have established that the languages of morality, physics, and games all place restrictions on which moves, choices, or transitions between situations are valid and which are invalid. The difference between these languages is that the questions of morality are far more contested. There is no ‘standard model’ of morality, accepted the way the standard model of physics is accepted. There are no ‘official rules of morality’ accepted the way the rules of chess are.

Why am I talking about this? When I heard a typical debate of the form “Is behavior X moral?” before age 14 or so, I had an answer of which I was certain. Like many people who believe in objective morality, I became offended when people suggested that objective morality didn’t exist. I thought they were cowards, weak, or selfish people who refused to acknowledge that it was wrong to hurt the innocent. Years later, working through a few hypothetical scenarios, I eventually came to the conclusion that morality was entirely subjective. I concluded that the total lack of agreement on which choices were moral meant it was meaningless to say that some choices were objectively moral and some were objectively immoral.

This conclusion – that morality was entirely subjective – was very painful for me, and destructive in my personal life, because I lived without a strong sense of direction. If ‘north’ tells you which way you are going on earth, a sense of right and wrong – a moral compass – helps you navigate the complexities of the physical world and the choices it presents. I still had a moral compass – I still had strong feelings about the choices I made – but I had no way to structure them. I was like a man lost in the wilderness, finding some areas he liked and some he didn’t, rushed along the river of time. I could paddle furiously this way and that, and yet the river always seemed to carry me, sometimes dashing me into rocks along the way. I had a compass in my pocket, but I didn’t trust it, because sometimes when I paddled north I banged into rocks just as much as when I paddled south.

When I was in this period of not believing in morality – not believing in any meaningful way to order hypothetical situations – I rapidly became frustrated, because debates about morality just seemed silly or pointless. They never reached a meaningful conclusion and never really went anywhere.

I finally got out of that mess with the help of Alan Turing. You see, Alan Turing developed mathematics showing that there are functions which can’t be computed. “What does morality have to do with computers,” you ask?

Most people would agree that you should not blow up the entire world and kill everyone. Most people would also agree that writing a computer program which blows up the entire world and kills everyone would be a bad choice. And Turing’s work shows that no computer program can answer, in general, the question: will this other program blow up the entire world? It’s the same kind of impossibility as the halting problem – if you could build such an analyzer, you could turn it against itself and force a contradiction.
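To make the shape of that impossibility concrete, here is a minimal sketch in Python. Every name in it (will_blow_up_the_world, the troublemaker) is invented purely for illustration; the structure is the standard diagonal argument behind the halting problem, pointed at our question.

```python
# A minimal sketch of the shape of Turing's argument, not his actual proof.
# Everything here is hypothetical and purely illustrative.

def will_blow_up_the_world(program_source: str, program_input: str) -> bool:
    """Pretend this is a perfect analyzer: it returns True exactly when
    running `program_source` on `program_input` would blow up the world."""
    raise NotImplementedError("Turing: no such general analyzer can exist.")

# The troublemaker: handed its own source code, it does the opposite of
# whatever the analyzer predicts about it.
TROUBLEMAKER_SOURCE = """
def troublemaker(my_own_source):
    if will_blow_up_the_world(my_own_source, my_own_source):
        return                 # predicted "blows up" -> do nothing at all
    else:
        blow_up_the_world()    # predicted "safe" -> blow up the world
"""

# Whatever answer the analyzer gives about the troublemaker run on its own
# source, the troublemaker makes that answer false. The assumption that a
# perfect analyzer exists leads to a contradiction, so it cannot be written.
```

The same construction works for essentially any interesting question about what a program will eventually do, which is why “is this program’s behavior right or wrong?” can’t be mechanized either.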

Morality is uncomputable. Boom. There it was.

The problem I had with all the moral debates of the past was that they all proceeded under the assumption that we should be able to take morality and boil it down to a set of basic principles which we can apply mechanistically to any situation – ignoring any context not covered by the principles – and say “this choice is right, that choice is wrong.” This assumption was never brought out into the open, but it underlay everything people were saying about right and wrong. “It is always wrong to kill innocent people” – that is a mechanistic principle which ignores context. It is like a very simple computer program. Some people will insist that always following this computer program – this fixed set of rules for responding to a given hypothetical situation in a given way – will stop you from doing wrong.
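To see what I mean by “a very simple computer program”, here is that principle written out as code. This is a hypothetical sketch (the action object and its single attribute are invented for illustration), and its tininess is exactly the point:

```python
# "It is always wrong to kill innocent people", as the mechanistic rule it is.
# Hypothetical and illustrative only.

def is_wrong(action) -> bool:
    # One boolean attribute of the situation is consulted; everything else
    # (intent, alternatives, consequences, context) is thrown away.
    return action.kills_innocent_person
```

Applying a rule like this is trivially computable. The claim of this essay is that morality as a whole is not, so no pile of such rules can be the final arbiter.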

We were probably making this wrong assumption because it has worked pretty well for us in physics and mathematics. Except, around the same time Alan Turing proved the existence of uncomputable functions, a guy named Kurt Gödel proved that there are hard limits on axiomatic, mechanical systems of logic. We have yet to apply Gödel’s incompleteness theorem and the existence of uncomputable functions to morality, and this is why we are always fighting about what is right and wrong.

A Gödel-aware morality is one which rejects principles as the ultimate arbiter of truth and says instead that every situation must be considered fully, according to all of the information you have available, including your conscience. The Catholic theology constructed by St. Augustine was built before Gödel and Turing showed the limits of what logic is capable of. It was built on the assumption that we could use logic to find the truth mechanistically – the assumption that every serious mathematician held, until a gay man and a soft-spoken schizophrenic proved that they were all wrong.

I dig that the new pope is all about protecting the poor and the marginalized. I’m all for that. The Catholic Church’s opposition to homosexuality, though – well, that all comes from the broken use of axiomatic logic in a place where it’s not appropriate. All humans are infinitely valuable – even the weak and suffering. OK, I buy that. I agree. Therefore, the mechanism by which humans come into being – sex – should be treated with dignity and respect. OK, I buy that too, but there are caveats and contextual variables which must be considered. Therefore, sex that doesn’t lead to the possibility of procreation is wrong – NO, I don’t agree there, and I don’t have to, because I reject your use of axiomatic logic to solve a problem which is uncomputable.

I believe in right and wrong again. They are uncomputable. They must be. If right and wrong were computable, I could offer no opinions on the morality of computer programs. And I’ve seen some messed up things done with computers.

If you take nothing else from this, please understand this last argument: morality is uncomputable.

There are many meaningful ways to order hypothetical situations – but no computational process, no set of lifelessly applied rules, can allow you to traverse the multiverse, to move from one configuration to another, to make choices, without causing problems if it becomes the sole mechanism by which you choose. That is what Isaac Asimov argued in I, Robot, that is what Jesus argued when he healed the blind on the Sabbath, and that is what I’m trying to tell you now.
