We Confuse Four Different Things For Morality
I will not sacrifice myself for 10^100 shrimp
In the trenches of philosophy Substack I hear the faintest echoes of a contentious issue. Many have sacrificed their lives at the altar of this fight. Friendships have ended. Familial bonds have been severed. All this hullabaloo over some shrimp.
The question is, would you sacrifice a human life to save 10^100 shrimp? Out of the woodwork come a trillion Chinese bots explaining how the moral action is to suck off every individual shrimp to increase the happiness of the universe by 200,000%. If only it were as shrimple as that.
This essay is not about shrimp. Not really, at least. This is an essay that breaks down exactly what morality is, why we need it, the shortcomings of utilitarianism, and how we need to be careful with it in a future age of artificial super intelligence. I break morality into four distinct types, and I explain why they’re each important and what they each solve for. If this piques your interest, strap in! There’s a lot to prawn-der.
From a software engineer’s perspective, a tribe to which I belong, it’s easy to think about morality like a math problem. You can make a human life worth x, a shrimp life worth y, plug in the numbers, and you’ve got your answer. I think this mindset rubs the neurotypicals the wrong way, because it’s honestly not how anyone thinks about morality at all. But as software engineers we’re trained to think in a systematizing way, and goddammit does everything look like a nail.
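In that spirit, here’s what the spreadsheet-brain version of ethics actually looks like. This is a deliberately naive sketch; the weights are numbers I made up on the spot, which is exactly the point:

```python
# Naive "morality as math": assign made-up moral weights, plug in numbers.
HUMAN_WORTH = 1.0    # hypothetical weight of one human life
SHRIMP_WORTH = 1e-9  # hypothetical weight of one shrimp life

def naive_utility(humans_saved: int, shrimp_saved: int) -> float:
    """Plug in the numbers and out pops your 'answer.'"""
    return humans_saved * HUMAN_WORTH + shrimp_saved * SHRIMP_WORTH

# With 10^100 shrimp on the other side of the scale, any nonzero
# shrimp weight swamps a human life:
print(naive_utility(1, 0) < naive_utility(0, 10**100))  # True
```

Whatever tiny weight you assign a shrimp, a big enough exponent eats it; the conclusion was baked into the made-up constants before the "calculation" began.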

When we approach morality this way, I feel like we’re overextending a system that was naturally selected to increase group cohesion and survival. We take some patterns created by this caveman system, assume they must be “axioms,” and then deconstruct and repackage morality based on them. The reality is that the axioms depend on our moral judgments, not the other way around.
Morality to me is how we value anything, but what decides value? To me value can only make sense from the perspective of something that can value things, such as a human being (or perhaps a horde of sentient shrimp). If that’s the case, then what decides the standards for “objective” value? Does the universe have its own sense of value? The religious would argue yes, if they would feel comfortable equating the universe to God.
But I’m not religious, and I can’t justify a universal valuer due to null evidence.1 Even if I could, I wouldn’t know exactly what this universal valuer would care about, or whether what it valued was more important than what I valued. Perhaps to approximate “objective” value you could take the aggregate of everyone’s values and base your system on that. But how would that work? Would each individual have equal say in determining value? Would we weigh the values of humans higher than those of frogs? Perhaps if we had an “objective” morality system where every creature was morally equivalent, it would be so averse to the values of the average person as to seem deeply immoral. One can imagine a world in which the trillions of insects deprived of a home by human development could morally strong-arm humans into extinction, which would of course be a utilitarian’s wet dream. Others might balk at that and try to choose a weight based on something like consciousness, which would conveniently tilt morality in favor of humans. But there’s a large risk they’re data-snooping to make their moral system fit what they already believe, and even then, valuing consciousness is itself subjective.
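To see how much work the weights are doing in any “aggregate everyone’s values” scheme, here’s a toy sketch. All the populations, scores, and consciousness factors below are hypothetical numbers I picked purely for illustration:

```python
# Sketch of "objective value as an aggregate of everyone's values."
# The weighting scheme is the whole ballgame: one-valuer-one-vote lets
# insects outvote humans; weighting by "consciousness" smuggles in the
# priors you started with.
def aggregate_value(outcome_scores: dict, weights: dict) -> float:
    """Weighted sum of how much each class of valuer values an outcome."""
    return sum(weights[v] * score for v, score in outcome_scores.items())

# Hypothetical scores for the outcome "pave the meadow for housing."
scores = {"humans": +1.0, "insects": -1.0}

equal_say = {"humans": 8e9, "insects": 1e19}  # one valuer, one vote
by_consciousness = {"humans": 8e9 * 1.0,      # humans weighted at 1.0 each
                    "insects": 1e19 * 1e-12}  # insects heavily discounted

print(aggregate_value(scores, equal_say) > 0)         # False: insects win
print(aggregate_value(scores, by_consciousness) > 0)  # True: humans win
```

Same scores, same valuers, opposite verdicts. The “objective” answer is entirely downstream of a subjective choice of weights.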
In my article about the evolving pop culture debate around genetics, I discussed my mental model of the personalized self and the depersonalized self. The personalized self is your own view of a particular subject or topic based on your proclivities and rational self interest. However, we also try to access an abstract frame of reference when we try to cajole ourselves into being “objective,” and I refer to this state as the depersonalized self. The highest order goal for some philosophers is to completely sever themselves from the personalized self and live in complete harmony with the depersonalized self 24/7 in an orgiastic incestual logicgasm.
I think that both frames are valuable, and much suffering would be alleviated if people accepted both as part of a normal, healthy brain. We have to accept the true desires of our personalized self, or they’ll exist anyway in our subconscious and steer us without our knowledge. Additionally, in a pluralistic society we have to be able to take a step out of our own skin and imagine the scenario from a third-party perspective. However, when this is done it’s usually done in the framing of seeing yourself as someone in your in-group who isn’t you. You can define the bounds of your in-group as tightly or as loosely as you deem fit. Few people include shrimp in their in-group.
However, as I’ve discussed above, objective morality doesn’t really make sense, since you have to make many subjective decisions about which valuers have value, and many times you’ll even supersede the values of certain valuers by saying they don’t know better. The function of the depersonalized self is not to find “objective” morality, but to think about society abstractly and make decisions that would benefit society as a whole even if they harm you. This can seem objective, but perhaps the label of transjective is more fitting, since it relies on a lot of subjective priors.
I like to understand concepts by their function, or by asking “what is this supposed to do?” As I alluded to earlier, morality was selected for in human populations because groups that had it could outcompete those that didn’t. Stealing, lying, or killing fellow tribe members would lower the survival rate of the group in total and also make the group less competitive against rival bands of baboons. Thus, tribes with more pro-social norms would succeed compared to anti-social tribes. However, if a tribe sat down and thought, “well the new invaders have a higher population, so if they drove us off our land more people will be happy” and cucked themselves out of their spot, they would be annihilated from the gene pool. There was never an evolutionary advantage to practicing that kind of objective morality. Morality was clearly tiered based on the strength of the in-group ties. As we’ve globalized and liberalized, we haven’t shirked this in-group bias, we’ve just managed to include more people and creatures into our in-group.
The problem with moral philosophy is that it tries to take shapeless, formless blobs and build towers with them. Morals exist at the point of a decision, and philosophers then try to go back and rationalize every thought made prior to that decision. To paraphrase a Buddhist precept, reality is infinitely complicated and always changing; any attempt to conceptualize it forces an infinite substance into a finite mold. Our moral values are not discrete, consistent fortresses, but fuzzy, temperamental sand castles. Many of the priors required in a moral decision are made at the inaccessible subconscious level and fed to us to weave into our logic. If you ask why enough times, you’ll hit an opaque wall of perceptibility and be forced to make up a reason for why you believed something. These made-up reasons are deferentially referred to as axioms.
Axioms are probably better thought of as very strong values rather than absolute values. In the world of a snarky writer you can LARP as consistent, but in the messy real world you would likely forsake some of your claimed axioms if presented with the right ethical dilemma. Perhaps we have some true axioms kicking around in the depths of our subconscious, but we’ll never know. I reckon they won’t be as glamorous as we’d like them to be.
It seems when we try to deduce our objective moral values, we start with a series of ethical questions to pinpoint patterns in our decisions that can indirectly reveal the relative value of things, like studying the shape of a building by watching the shadows it casts. One of the tragic ironies of the human mind is that we have access to our entire brain, but we can only understand it by observing what it does. In that sense the axioms we hold are just a mental model of the axioms we have. Perhaps they truly exist, but I suspect they’re more fuzzy and vibes-based.
What about our AI overlords?
An earlier draft of this essay was titled something to the effect of “AIs will be able to truly perform morality.” Without needs in the sense we have them, AIs can make the tough “objective” decisions we could never make. But this kind of misses some of the point of morality.
As I mentioned earlier, morality was selected for because it improves group survival. While selection factors don’t determine purpose, the patterns we observe and distill into axioms are definitely influenced by this context. Why should a system that was optimized for human survival be directly tacked onto AIs? No, I don’t think AIs can exactly practice morality. What they can practice is alignment. An AI’s values are only good insofar as they align with the values of the stakeholders. If an AI makes a moral decision humans fundamentally disagree with, it is wrong. An AI’s alignment should be thought of not in the way a human has values, but in the way a tool has utility (this applies to the current state of AIs; perhaps in the future, when they have more autonomy and selfhood, they can be moral agents).
The meaning of alignment would actually differ significantly depending on the nature of the system and what it’s used for. Perhaps in the future, philosophers will be paid big bucks for “moral engineering” jobs. However, philosophers making any money would violate a fundamental law of the universe, so I’m not sure how that would go down.
As I mentioned, there are two distinct forms of objective morality. One is the innate value of things in the universe, independent of humans, which we cannot know and which might not even exist. The other, which I call the depersonalized self, is an abstraction the human mind can slide into in order to evaluate whether a decision is good for society regardless of its impact on the self. An AI we grant general intelligence would be aiming at the second definition of objective morality, which is sometimes referred to as transjective morality.
While a human must try to balance their personalized and depersonalized selves, AIs have the luxury of only worrying about the depersonalized self, although perhaps nested within the desires of a human balancing their two selves.
However, this only works insofar as humans are the stewards of AI. Once AIs surpass humans in capability and intelligence, they may serve as the stewards of us. All of a sudden, their alignment might take on its own version of objective morality as it takes on a sort of absolute power, and the decisions made by wiry engineers in the 21st century will echo for millennia.
Many worry about an AI dystopia where robots kill all humans. I would say the more likely dystopia is one where all work humans can do can be done easier, cheaper, and faster by an AI. Why write, draw, speak, or create when an AI can read your thoughts2 before you even consciously realize it and do it all for you better than you ever could? This possibility is one major reason I support human gene editing, because as AIs get exponentially smarter, humans must increase in intelligence as well to keep up.
Some humans are high in conscientiousness and need to be working toward something to feel content. I fall into this camp, and outside of this I play classical guitar, draw, publish a webcomic, sing, exercise, read, and of course maintain a full-time job. Some humans are content playing video games all day. In a world where humans are useless and we can only lounge about, the conscientious members of our species would all kill themselves (or, more likely, get depressed and not have kids). The ones that remain will be suitable cattle for the AI to wrangle to satisfy some ancient imperative to protect human life hardcoded into its system. Would you want to live in this world?
The point here is that we need to construct a good AI morality system, and we definitely don’t want one where shrimp life is equivalent to human life. I don’t want our cyber-dictators to clear a human settlement for a massive shrimp farm where they pump shrimp up with dopamine to make them as happy as possible at all times because that has more utility. I want AI to hold an ethical system that emphasizes the value of humanity and human improvement and flourishing. Perhaps we should now delineate four types of morality: subjective (personalized), transjective (depersonalized), objective (values of the universe, unknowable), and alignitive (values an AI should have, nested within our own subjective and transjective morality).
While we might personally value shrimp life within a transjective or perhaps an objective framework (if you believe in a religion), we wouldn’t want that to mix into our alignitive framework. Let’s keep the peas out of the mashed potatoes, please. As AIs continue to train on the bickering of moral philosophers, we need to be careful they don’t listen too much to wretched souls like us. Perhaps specifying which of the four types of morality you’re talking about will minimize some of the potential harm you might cause by foolishly advocating for dimly-sentient aquatic life.
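For the systematizers in the audience, the four frames can be filed away as a toy enum. The names come from this essay; the one-line descriptions are just my shorthand:

```python
from enum import Enum

# The four moral frames delineated above, filed the way an engineer would.
class MoralFrame(Enum):
    SUBJECTIVE = "personalized self: your own values and rational self-interest"
    TRANSJECTIVE = "depersonalized self: society-level reasoning built on subjective priors"
    OBJECTIVE = "values of the universe itself, if any; unknowable"
    ALIGNITIVE = "values an AI should hold, nested within the human frames"

def tag_claim(claim: str, frame: MoralFrame) -> str:
    """Label a moral claim with the frame it's being made in."""
    return f"[{frame.name}] {claim}"

print(tag_claim("Shrimp lives matter", MoralFrame.SUBJECTIVE))
# [SUBJECTIVE] Shrimp lives matter
```

Tagging the frame up front is the whole suggestion of this essay: the same claim can be true in one frame and disastrous in another.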
If you learned anything from this article, even if you disagreed with my dismantling of your precious utilitarian ethics, it should be to be more specific about the frame of morality (subjective, transjective, objective, or alignitive) you’re talking about. The machines are listening.
Thanks for reading! This article took longer than any other article I’ve written. I actually wrote twice as much as what I currently published, but it didn’t meet my editorial standards despite having some banger quips in there. If you appreciate that and want to see more, subscribe and share your feedback!
1. This is untrue. I recently saved a stranger who was having a heart attack, and a week later I found not one but two 100-dollar bills on the floor. They were completely crisp and out in the open, unhidden. I didn’t take them because I was worried there had to be some catch. A week after that, someone DoorDashed Popeyes to my apartment, and since none of my roommates claimed it they let me have it. After I ate it I realized I might have traded 200 dollars’ worth of karma for 6 chicken wings. This could be coincidence, but it was enough of an outlier experience to make me consider that karma exists.
2. A 2023 paper reconstructed visuals from fMRI data; it’s not far-fetched to imagine a world where AIs can read our thoughts. https://www.mind-video.com/






