Life, Liberty, and the Pursuit of Happiness. In that order.
Moral codes are tricky things. Dangerous, even. Even with the most straightforward of intentions, after a few generations of interpretation, they have a tendency to spin wildly out of control. You start out with a set of rules that are meant to ensure that people treat each other decently, and you end up with people bludgeoning each other to death with your holy tablets.
In our world, the nice thing about moral codes is the same thing that’s nice about standards: there’s so many to choose from. In today’s exercise, I humbly propose to examine some of the prevailing moral codes that currently bestride the planet, and in the end, propose a new — or at least, newly argued — one. Heady stuff, indeed — so let’s see if I can firewalk these coals without getting too badly burned.
The first option most folks consider when shopping for a moral code is what we in the software business like to call a “packaged system”. Take it out of the shrink-wrap, a bit of installation, and you’re ready to run — soup to nuts. No muss, no fuss, no thought required — or encouraged. The biggest vendors in this particular market are of course the major established religions of the world. Islam, Christianity, Judaism: all come complete with often surprisingly detailed instructions for exactly how to tell right from wrong; good from evil. Happily, the Big Three tend to agree on which category the vast majority of things fall into. Less happily, the small percentage of things they disagree on has fed enough hard feelings to keep the planet pretty well engulfed in war for the past few millennia.
The Big Three aren’t the only game in town, of course: there are more religions begging to tell you exactly how to live your life than you can shake a stick at. (Just try it sometime, you’ll run out of shake or stick real fast). But religions aren’t the only packaged systems out there by any means.
You can also get all the benefits of a packaged system without any of that tedious God stuff, if that kind of thing troubles you. Marxism, Socialism — pretty much anything ending in “ism” will get you up and running with a set of ideas that are meant to be taken as fundamental truths; ideas that you can live your life by.
But what if the idea of a packaged system doesn’t appeal? Not a problem: roll your own.
The folks who roll their own moral codes are generally an ornery, sometimes even antisocial lot. Usually, they’ve flat-out rejected the Big Three’s pretensions to own universal truth; often they label themselves agnostic, atheist, or even (the grumpier ones) antitheist. And they don’t necessarily like the idea of the “isms”, either; the idea of having their moral system handed to them on a plate makes them inherently suspicious. Unfortunately, by telling you what they are against, they haven’t actually told you what they are for.
So how do most people who roll their own moral code do it? Usually, they start with a fundamental principle which they feel is the most important to uphold in their lives. And it seems that however they phrase it, most folks tend to pick the same general idea: do unto others as you would have them do unto you. Or: Do no harm.
Or: maximize happiness in the world. Make people happy.
These all reduce down to the same basic fundamental concept — and it’s the same one generally followed by those who haven’t ever even thought in any explicit terms about their own moral code: to maximize “happiness” in the world, and minimize “suffering”. Do good, not bad.
This sounds great, on a superficial level. But I am here to argue that it’s an absolutely lousy foundation to build a moral framework on.
The biggest problem is that “happiness” and “suffering” are totally and unavoidably subjective measures. Nobody is ever going to be able to define human happiness in a way that would allow an objective scale of it. You wouldn’t even know where to start. Is physical pleasure happiness? Emotional joy? Which is more important? How about satisfaction from a job well done?
It’s a mess. Most people don’t even stand a chance of assessing their own happiness — let alone judging what makes other people happy. And yet that basic assumption — that you can objectively assess what will make other people happy — lies at the heart of the moral systems on which a very large number of people on our fair planet base their decisions, day in and day out.
So what happens? You end up with perfectly well-meaning people — people following that nice moral code — who disagree about what happiness is. And guess what? They start thinking that they can decide what will make other people happy. Unfortunately, those other people don’t particularly like the idea of happy that the first group came up with for them, which of course makes the first group pissed off that the ungrateful bastards aren’t appreciating all the happy they’ve got in store for them — and before you know it, you’re back to people getting whacked over the head with stone tablets.
OK, smart guy, you say, that’s all fine and good. But it’s a moral system, man, it’s got to be subjective. Haven’t you ever heard of moral relativism?
Shudder. Let’s just say we’ve met, and that it didn’t go well.
I will accept that in a truly rigorous scientific sense, there’s no way to build a truly, 100% objective moral system. At the heart of it, you’ve got to pick something — some principle to start with that you decide is more important than the infinity of other possible principles that you could have selected. And I don’t think there’s really any way to objectively and/or scientifically argue that any one principle is “better” than any other in a rigorously provable sense.
But…. but! If you pick the right starting principle to use as your foundation, I claim you can arrive at a system that from there on up can be completely objective.
I’ve already argued that nice as it sounds, “happiness” makes a crummy first principle for a moral system. It’s just too squishy, too difficult to measure — too subjective. So we need something more rigorous, something that can actually be judged objectively. Something that you could legitimately measure and, more importantly, measure in a way that two different people would come up with the same answer. And not so incidentally: it would certainly be nice if the value was something that you truly believed was a valuable and good thing (and yes, that’s subjective) — something you’d be comfortable seeing treated as the most important thing in the world.
And so my modest proposal: Freedom.
Yup, freedom. Big lead up just to get to that, right? Freedom; everyone’s for freedom. Duh. You made me read this whole boring thing just to get to freedom?
But I challenge you to bear with me, and think through the implications of replacing that squishy “make people happy” in the standard model moral system with “make people free.”
The implications, I think, are subtle, but profound. And the reason is that freedom is actually a concept that, theoretically at least, can be measured objectively.
Think of every human life as a decision tree starting at birth, and branching outward in a huge forest of possible decisions and actions that all, eventually, lead down a path to that person’s eventual demise. Some paths are short; some are long. At any given moment, you can picture a person sitting at one spot on that tree of possibilities. And he’s got a finite set of options at any moment; a finite set of choices that will lead him down the paths of his life. At some moments, he’ll have many paths to choose from — at others, he’ll have few.
To use a crude example: a man in a maximum security prison serving a life sentence without parole has a very low freedom quotient, because in a very rigorous sense, he simply doesn’t have many branches to choose from. Whereas that same man, were he never to have been convicted, would have a significantly higher quotient.
Of course, we don’t have any way to actually rigorously measure the exact freedom quotient of a person. But just because we can’t take the measurement doesn’t mean the value doesn’t exist. And yes, we’ll still have arguments between people who, examining the same set of possible courses of action, disagree as to which course will maximize freedom. But I argue that comparing these potential disagreements with the ones we’re already stuck with over what will increase “happiness” argues strongly in favor of a freedom-based code. People arguing over what will maximize freedom would look like two refs arguing over whether the ball was in the end zone or not. There’s an objective answer, but neither one has a perfect way to measure reality to get at it. People arguing about maximizing happiness, on the other hand, are analogous to those same two refs arguing — except one of them thinks the game is football, and the other thought they were judging hockey.
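Since we’re borrowing software metaphors anyway, the decision-tree picture above can be made concrete with a toy sketch. This is purely illustrative — the tree structure, the names, and the idea of counting reachable paths as a crude “freedom quotient” are my own stand-ins, not anything rigorously measurable:

```python
# Toy model: a stretch of a life as a decision tree, where each node maps
# to the branches available from it. The "freedom quotient" here is just
# the number of distinct paths still reachable from the current position.

def freedom_quotient(choices: dict, node: str) -> int:
    """Count distinct paths from `node` down to any leaf of the tree."""
    branches = choices.get(node, [])
    if not branches:  # a leaf: only one way forward from here
        return 1
    return sum(freedom_quotient(choices, branch) for branch in branches)

# Someone with options has many branches at each step...
free_life = {
    "now": ["study", "travel", "work"],
    "study": ["teach", "research"],
    "travel": ["settle abroad", "return home"],
    "work": ["promotion", "startup", "retire early"],
}

# ...while the prisoner in the example above has almost none.
prison_life = {
    "now": ["comply"],
    "comply": ["comply again"],
}

print(freedom_quotient(free_life, "now"))    # 7 reachable paths
print(freedom_quotient(prison_life, "now"))  # 1 reachable path
```

The point of the sketch is only that, unlike “happiness”, a branch count is the kind of thing two observers could in principle agree on — the hard part is that in real life nobody has the tree.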
This is not to say that happiness has no place in a moral system. Particularly in small-scale, interpersonal relations, it is not clear to me that applying the freedom-test really tells you much about how you should act. (Will it “increase freedom” if I do a favor for a friend? If no, does that mean I shouldn’t do it?). And so I think that there is still a place to fall back on the old “what do I think will increase happiness” question. But only after you’ve tried to find a course that maximizes freedom.
I’ve been mulling this idea over in my mind for some time, struggling to find an appropriate way to convey my thoughts. And tonight, it struck me that some very wise men already laid out the roadmap — intentionally, or not, I’m not historian enough to know for sure. But it is there, if you look for it:
Life: For without preserving life, there is nothing.
Liberty: Because freedom is the foundation upon which all else rests.
The pursuit of happiness: For when maximizing freedom doesn’t tell you which way to go.
It’s all there. Just make sure you get the order right.