The Meaning Crisis … and AGI
The “Meaning Crisis” is popularly discussed and debated these days: John Vervaeke has a fifty-episode YouTube series on it, David Chapman has a multi-volume website-book on it, and there is an endless collection of posts on it across the Culture War. Meaning is important to understand, as it has implications for transhumanism and AGI/StrongAI.
Why do people crave meaning?
This is a rhetorical question, and I propose an ‘operationalist’ answer below. Now, meaning is important for the ‘deep’ questions in life, but it’s easier to get started with an unfraught example. Consider the sentence: “If the engine stalls under load, that means the power is inadequate.” This is of the form “If situation X, that means fact Y is true.” Another form is “If you perform X, that means Y will happen.”
Basic Darwinian survival “means” that humans, and indeed all animals, must have an ability to assess this kind of operational, quotidian concept of meaning. Not all meaning is created equal: “If I stop to smell this rose, I will feel good” has a meaning subordinate to “If I stop to smell this rose, the bear who is chasing me will eat me.” Thus, one concludes that (almost) all animals have priority-ranking systems.
… Although, apparently, jellyfish don’t. They give equal weighting to eating and fleeing predators. The evolution of the Bilateria provides the needed neural circuitry to make up one’s mind about this. (Reference: Tony J. Prescott, “Forced moves or good tricks in design space? Landmarks in the evolution of neural mechanisms for action selection” (2007))
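To make the idea of a priority-ranking system concrete, here is a minimal winner-take-all sketch of action selection, in the spirit of the basal-ganglia-style models Prescott surveys. This is my own illustration, not anything from the paper: the action names and urgency values are invented.

```python
# Illustrative sketch: a minimal winner-take-all action selector.
# A bilaterian-style agent arbitrates among competing drives; a
# jellyfish-style agent, lacking this, would weight them all equally.
def select_action(drives):
    """Pick the action with the highest urgency.

    drives: dict mapping action name -> urgency (higher = more pressing).
    """
    return max(drives, key=drives.get)

drives = {"smell_rose": 0.3, "flee_bear": 0.95, "eat": 0.5}
print(select_action(drives))  # -> flee_bear
```

The point of the sketch is only that *some* arbitration circuit must rank competing meanings; the bear outranks the rose regardless of how the ranking is implemented.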
People Crave Meaning because Biology
Which brings me to the operationalist answer to the question posed above. David Chapman presents multiple scenarios involving a crisis of meaning; one example involves an extramarital affair, and what it “means”. Now, in the case of having an affair, one is engaging some very deep and old parts of the brain, dealing with mate selection and procreation. It’s vital to get these right, as the survival of the offspring is in question. Or rather: get this wrong, and the offspring don’t survive, and such faulty behavior is bred out of the population. Animals know how to mate.
Thus, having an affair triggers sustained activity in large parts of the brain. Basically, the brain is saying “Pay attention! What is happening right now is really important! Deal with it!” The various neural layers dealing with ‘quotidian’ meaning spring into action, trying to extract if-then relationships between perceptions and actions. The claim here is that the meaning crisis arises from the fact that solving this particular problem is hard … it’s effectively unsolvable. Faced with unsolved problems, we have brain circuitry that says “think harder, think more, your survival depends on this”, and we’re now stuck in a feedback loop of pondering the answer to an unsolvable problem.
Of course, this feels like a crisis: when the bear is chasing us, and we cannot find an answer to the problem, a feeling of crisis ensues, expressed variously as helplessness, hopelessness, fear, determination, sheer will-power, grasping-at-straws, try-anything, pray to God, …
I propose that the ‘meaning’ of the affair “feels” just like being chased by that bear, but in slow motion. By “feels”, I mean, the brain senses that this is important, the brain is demanding an answer to the predicament, an answer is not being found, crisis and loss of ‘meaning’ ensues. This is a deeply ingrained feedback loop in the brain, and once switched on, it is not easily switched off.
Indeed, resolutions to the meaning crisis seem to often involve demoting the importance of the original problem: the crisis often resolves when one convinces oneself that “Ah, whatever, it didn’t really matter anyway. No need to feel miserable over it.”
The converse is also true: awakened at night, you find some issue has gained immense importance, and you are tortured, anxious, unable to sleep, as the issue has no solution. Usually the torment dissolves by the next day, but it can linger on for days or months. The claim is that these are old, deep neural circuits trying to do the job they’ve evolved to do, and are making your life a miniature hell as a side-effect.
David Chapman spills oceans of virtual ink on nihilism and eternalism, and how these are faulty foundations on which to build meaning. What he does not seem to say is that the reason nihilism and eternalism are popular is that they provide the missing answer to the question driving the crisis: with the answer in hand, the feedback loop is halted, one is relieved and can get back to ordinary life. (The bear is no longer chasing you; nihilism/eternalism resolved the crisis.) People are terrible at ‘rational’ reasoning, and don’t notice the inadequacy of nihilism/eternalism. And it mostly doesn’t matter: it not only resolves this crisis, it’s a cure-all for (almost) all crises: it’s a rock you can depend on (as the Christians call it).
To be clear, the ‘feedback loop’ I am describing here is meant to be taken as a literal feedback loop: it really is one set of neural circuitry raising an alarm: ‘this is important, find out what it means’, and a different set of circuitry saying ‘I don’t know, I can’t figure it out’, feeding into those reinforcement circuits saying ‘well try again, this is important, your survival as a species depends on finding the answer.’ Failure is not an option: you are flooded with hormones, and thoughts, each elevating the problem to high importance, and driving senses of high anxiety and/or loss-of-meaning and all the other stereotypical psychological responses.
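The loop just described can be caricatured in a few lines of code. This is a toy model of my own devising, not a claim about actual neural circuitry: an ‘alarm’ signal escalates urgency each time the ‘solver’ fails, and any answer at all (even a poor one, such as nihilism/eternalism) halts the escalation.

```python
# Toy model of the described feedback loop: urgency ratchets upward
# while the problem stays unsolved; any accepted answer stops the loop.
def meaning_crisis_loop(importance, solvable, gain=1.2, max_steps=10):
    """Return the urgency trajectory over repeated solution attempts."""
    urgency = importance
    trajectory = [urgency]
    for _ in range(max_steps):
        if solvable:           # an answer in hand, however inadequate,
            return trajectory  # halts the loop and relieves the pressure
        urgency *= gain        # 'try harder, this matters' reinforcement
        trajectory.append(urgency)
    return trajectory

# An unsolvable, high-importance problem: after ten failed attempts,
# urgency has grown by a factor of 1.2**10, roughly sixfold.
print(meaning_crisis_loop(importance=1.0, solvable=False)[-1])
```

Note the asymmetry: nothing in the loop checks whether the answer is *correct*, only whether one exists. That is the hedged, mechanical restatement of why nihilism/eternalism “work”.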
So, it seems to me that the crisis of meaning is the confluence of multiple forces:
- (a) Old, deep brain structures that make you unhappy when you cannot find resolutions to ‘important’ problems, where ‘importance’ is ranked by other old, deep brain structures.
- (b) Sloppy reasoning and intellectual laziness allow nihilism/eternalism to provide adequate answers, adequate guidance to shut down the feedback loops that are making you miserable.
- (c) This shutdown and relief from pain are so significant that the ‘answer’ of nihilism/eternalism is marked as ‘really good stuff’, ‘generically useful’, and ‘apply whenever you feel bad’: these are perceived as cure-alls.
- (d) Whenever there is some obvious social or political problem, one’s favorite personal cure-all is trotted out as the right solution for everyone else, too.
This last point results in Culture Wars, especially evident when religion is providing one of the bedrocks of meaning for one group, even as another group rejects religion without proposing a clearly defined, simple foundation on which to build meaning. Or, more directly: the rejecting group fails to provide an easy, simple cure-all for resolving all the emotional and psychological crises that one might face. It’s no wonder that those of faith reject those without it: faith is the tool for resolving the crisis. If you are not clear on this, you should ponder the lyrics to Amazing Grace.
Neural Feedback Loops
I’ve posted before on ‘Endorphin Supply Chains’, which is about the tobacco (nicotine) feedback loop that couples neural circuits in your brain to a hundred-billion-dollar capitalist industry. It is a huge feedback loop, of tremendous importance, over which we, as individuals, have precious little control. We live in a capitalist society: it has organizational structures that are bigger than us.
Likewise, in this post: there is another deep feedback loop. It is the one that searches for meaning. It is primitive, because finding meaning correctly allows a species to survive and thrive. Yet we, as individuals, cannot find meaning for everything. If the things we cannot solve are also given high importance by other neural circuits, a sensation of unease takes over. Allowed to grow (fester?), it can mature into full-blown crisis. The feedback loop does not stop at the individual level; it is coupled to the global brain, and has resulted in two World Wars, as Chapman so marvelously explains.
The Risk to StrongAI
The relevance for AGI, for StrongAI, is that we must be cautious with our feedback loops, with the systems that provide motivation for behavior. They can very easily amplify negative, unwanted actions. Wrong thinking, as David Hume noted in the 18th century, results in wrong action.
Feedback loops establish the basins of attraction in dynamical systems. In this (mathematical) sense, the activity of any AGI/StrongAI is a dynamical system. (Well, anything and everything in our Universe is a dynamical system.) In the case of software having the ability to act in the physical world, the “laws” of unintended consequences apply. A system smart enough to “think” but not smart enough to escape its own destructive feedback loops risks not only killing itself, but taking down Humanity with it.
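For readers unfamiliar with the term, a basin of attraction can be shown in a few lines. This is a standard textbook illustration, not a model of any AI system: the one-dimensional dynamics x′ = x − x³ has stable fixed points at ±1 and an unstable one at 0, so tiny initial differences decide which attractor the system falls into.

```python
# Illustrative only: basins of attraction in the one-dimensional
# dynamical system x' = x - x**3.  Fixed points: -1, 0, +1; the
# origin is unstable, so feedback drives trajectories toward +-1.
def attractor(x, dt=0.01, steps=10_000):
    """Integrate forward (Euler) and return the state it settles into."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return round(x, 3)

print(attractor(0.1))   # -> 1.0   (small positive nudge: right basin)
print(attractor(-0.1))  # -> -1.0  (small negative nudge: left basin)
```

The analogy back to the post: which basin an AGI’s motivational dynamics settle into is set by its feedback loops, and a small initial bias can commit the whole trajectory.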