People are often surprised when someone they consider extremely smart does something that appears stupid. This may seem counterintuitive, but we can explain it without going into a deep philosophical discussion about intelligence. We could start with Daniel Kahneman’s simple distinction between two basic systems.
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations.
Here is an example of extreme system 1 performance by Diego Maradona. In that play it is clear that he is not thinking through every action the way an optimal computer player might. Reasoning is expensive and slow. Mathematicians, programmers, or chess players may focus their rational thinking on very constrained domains, and they are considered very intelligent because those domains demand a lot of rational power. However, it would be impossible for a human to navigate the entirety of daily life using system 2 only; we are not even particularly good at sustained system 2 thinking. If you would like to learn more about why, I recommend listening to this interview with Yann LeCun.
As a thought exercise, we can try to imagine how a purely rational being (let's call it Rational Being) might function. This entity could perform these tasks in a loop:
Read the state of the world.
Evaluate the inputs, the current state of the world, and the desired state of the world.
Decide which action among all available actions is optimal.
It is easy to come up with toy scenarios in which Rational Being can thrive. We could have a simulated world with a limited number of inputs, for example a game. The desired state could be to win, or to avoid losing. The possible actions are moves in the game. If the whole state space of the game is within the exploration capability of this being, it can play optimally. We may not be able to build a machine that could be guaranteed to never lose at chess, because the computational complexity is too high. However, we know it is possible in theory.
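To make that concrete, here is a minimal sketch in Python, entirely my own toy example: a Rational Being playing a Nim-like game (players alternate removing 1 to 3 sticks from a pile, and whoever takes the last stick wins) by exhaustively exploring the state space. The game, the 1-to-3 rule, and the function names are all just illustrative choices; any game small enough to enumerate would do.

```python
from functools import lru_cache

# A toy Rational Being for a Nim-like game: players alternate removing 1-3
# sticks from a pile, and whoever takes the last stick wins. Everything here
# is illustrative; the point is only that a small state space can be explored
# exhaustively.

@lru_cache(maxsize=None)
def can_win(sticks):
    """Return True if the player to move can force a win from this position."""
    if sticks == 0:
        return False  # no sticks left: the previous player just took the last one and won
    # A position is winning if some legal action leaves the opponent in a
    # losing position. This recursion visits the entire state space.
    return any(not can_win(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Step 3 of the loop: decide which of the available actions is optimal."""
    for take in (1, 2, 3):
        if take <= sticks and not can_win(sticks - take):
            return take
    return 1  # every move loses against perfect play, so just take the minimum

def play(pile):
    """The full loop: read the state, evaluate, pick the optimal action, act."""
    player = "A"
    while pile > 0:
        take = best_move(pile)  # evaluate and decide
        pile -= take            # act on the world
        print(f"Player {player} takes {take}, {pile} sticks left")
        player = "B" if player == "A" else "A"

play(10)  # Player A moves first and, playing optimally, takes the last stick
```

With a pile of ten sticks the exploration is trivial. The reason we cannot do the same for chess is not that the loop is any different, but that the state space is astronomically larger.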
What about human life? Could a human being forgo all other mechanisms and live life purely by reason? The answer is obviously no. Why?
There are too many inputs.
The computational speed of a human brain is absurdly slow compared to how fast the world evolves.
The desired state of the world for a human being (what we want at a given moment) is constantly changing, and we are not particularly good at modeling it.
Imagine I wake up, and I have to operate on pure reasoning. No intuition or automatic actions allowed. What do I do?
“I am awake. I am now going to examine my current state and determine what should be my most immediate goal. Let’s see… I need to pee, and I’m hungry. This means that I need to get out of bed, pee, then get some food. But also I’m cold. Should I open my phone and use Nest to set the thermostat to a higher temperature, and wait a few minutes before I get out of bed? If so, what is the optimal amount of time to wait given that I am uncomfortable now? How do I minimize discomfort over time?”
I have been awake for thirty seconds and I am already exhausted from thinking. Imagine doing that all day. Obviously we must rely on automatic processes for the vast majority of the things we do. Some may be evolved mechanisms that we share with other animals; others may be learned from experience (like driving), processes that started out as system 2 but have been “compressed” into faster and more efficient system 1 mechanisms.
The most obvious problem is that there are situations in which we would like to engage reasoning, but first some other mechanism needs to turn it on! As a rudimentary analogy, I like to imagine a warrior carrying a heavy, powerful sword at all times. This sword is reasoning. Our warrior encounters challenges and obstacles every day. We would not expect him to unsheathe the sword upon hearing a fly buzz by. We know he is fast and lethal when he needs to be, but most of the time he will solve problems differently. However, once in a while he will face a dangerous animal, or a potential enemy who could kill him. How does he decide when to use the sword?
Imagine that we had an app that told us “hey, this situation may require reasoning. Stop and think.” Or the opposite: “you are wasting your precious rational juice on this, the choice is not that relevant, just toss a coin and move on.” We could try to apply it to a situation that happened to me a few days ago:
Last Friday I was ordering a jigsaw puzzle to put together with my son. The price on Amazon was $15, and I wanted to have it the next day so we could put it together on Saturday night. Amazon gave me two choices:
Buy $35 worth of same-day shipping products, get it for free.
Pay $3 to have it shipped same-day.
My mind immediately started thinking of what else I needed so I could get free shipping. After thirty seconds I could not come up with anything, so I just paid the $3. Either way, it is not an important decision. What matters is that Amazon tricked me into spending my scarce reasoning budget on something unimportant. The app might have caught this and saved me those thirty seconds. Okay, here is a very different situation, one in which the app could have been effective only by making me freeze and not act on system 1:
A few years ago I was sitting at a Starbucks working on my laptop. I was one of many people doing this, but only two of us had MacBooks. Two thieves came in and snatched our computers. The other Mac owner sat there in shock. I did not. I got up instinctively and chased the thieves to their car. The guy who took mine was at least twice my weight. I got in front of him, and he knocked me to the ground easily. I got up, kept going, and got in front of the car. As they started to back out, I grabbed one of the windshield wipers and ended up holding on to it as it detached from the car. Their faces had been caught on camera inside the store; I ended up testifying in court against one of the thieves, but I never got my laptop back.
Would I have gone after them if I had reasoned about it? Of course not. The cost/benefit was atrociously bad. A laptop is not worth that much, and I had no valuable information on it. I know of at least one person who died in a similar situation. This left an impression on me, so I made a point of trying to engage reason should a similar situation ever arise again. So far the closest was when a thief broke my car window in a Target parking lot. A witness said to me, “the store must have cameras, you should go inside and speak with them.” My rational system took over, and I thought:
If they have footage, there is nothing I can do with it. It is not even worth my time going inside.
Fixing the window costs $265. I should go home and call to have it fixed, then move on. No point in wasting any more mental effort on this.
What makes matters even more complicated is that in the two examples above my best interest was pretty obvious. This “rationality exoskeleton app” could help me only because it knew what was best for me. In the laptop case it might have simplified things like this: I wanted to keep my laptop, but only if I could do so with minimal risk. In the window case, I wanted to get the window fixed and, ideally, have the perpetrators punished. What if what you want is not even that clear? It is easy to imagine situations in which a very rational person faces an urge to react, with no obvious benefit from the action and no link between the action and any meaningful goals or desires. If I want to see examples of this, all I need to do is open Twitter.
The whole point of this post is to say: if you know me and see me doing something dumb, please cut me some slack. My rational exoskeleton nanny app is still in an early alpha.