One of Star Trek’s most popular alien species is the Vulcans — a “purely logical” people who eschew emotion. While the Vulcans are an interesting species, they were unfortunately often used as straw men to attack the supposed logic vs. emotion dichotomy. In fact, this attack has become so clichéd that it has earned a name — the Straw Vulcan — and there’s even a video essay on the subject.
Star Trek’s treatment of Vulcans is so simplistic that it fails in its very premise — the false dichotomy between logic and emotion. Rather than being opposing forces, logic SERVES emotion by charting an efficient course to emotional satisfaction. As such, agents are logical to the extent that they efficiently satisfy their emotional goals, whatever those may be. Without emotion, logic would have no goal: no reason for anyone to explore, play 3D chess, or even get out of bed in the morning.
Or to use Hume’s more succinct phrasing: Reason [logic] is the Slave of the Passions.
However, the problems go beyond that, for Star Trek’s favorite Vulcan trope — how logical beings would act around humans — was consistently bungled. I hope to show this using a branch of mathematics designed to study the interactions of agents. That branch is called Game Theory.
Cutting a Cake
To get a taste (ha!) of Game Theory, let’s look at a very simple game involving two players; call them X and Y. X cuts a cake in two, and Y chooses which piece to keep. Assuming X wants the biggest piece possible, how should X cut the cake?
X knows that Y gets to choose the piece and assumes Y wants the biggest piece possible. So X knows that cutting uneven pieces means Y will just pick the bigger piece, leaving X with the smaller one. With this in mind, X cuts the cake into two equal pieces, thus guaranteeing X the biggest piece X can possibly secure — exactly half.
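X’s reasoning can be sketched as a small optimization. The following is a minimal illustration, not anything canonical: the cake is normalized to size 1.0, cuts are tried at 1% increments, and all function names are my own.

```python
# A sketch of the cake-cutting game, assuming a cake of size 1.0
# and candidate cuts at 1% increments (both are illustrative choices).

def cutter_share(cut: float) -> float:
    """X cuts the cake into pieces of size `cut` and 1 - cut.
    Y takes the bigger piece, so X is left with the smaller one."""
    return min(cut, 1 - cut)

# X reasons backward: for every possible cut, compute the piece X would keep,
# then pick the cut that maximizes it.
cuts = [i / 100 for i in range(1, 100)]
best_cut = max(cuts, key=cutter_share)

print(best_cut)                # 0.5 -- an even split
print(cutter_share(best_cut))  # 0.5 -- the largest piece X can guarantee
```

Note that `cutter_share` already bakes in Y’s behavior (Y always takes the bigger piece) — which is exactly the point: X cannot compute a best move without modeling Y.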
Notice how X HAD to take Y into account. Y was part of this game, and ignoring Y would have been delusional. Contrast this with the Vulcans in Star Trek; they consistently ignore the “emotional nature” of humans when acting among them, then get surprised at the result.
Back to cake cutting. X chose to cut based on assuming Y wanted the bigger piece. However, what if Y wanted the smaller piece? Does this mean Y is illogical? Not necessarily. For instance, maybe Y is on a diet or is an altruist. Therefore, X cannot judge Y’s logic until X understands Y’s payoffs. Again, contrast this with Star Trek’s Vulcans who dismiss humans as “emotional” without first understanding the goals humans seek.
Payoffs are the wins or losses from a game (interaction), and they can vary. Some people value money, others fairness, others altruism, yet others some complex combination of factors. It’s only by understanding the payoffs of the players that we can form an effective strategy. This is a tough task; game theorists use a concept called utility to try to quantify it. Whether or not utility succeeds, a big part of playing a game is quantifying player payoffs.
As another example of disparate payoffs, imagine X and Y play the following game. X has $100 and can share any amount of it with Y. If Y accepts that amount, then X and Y keep their respective shares. If Y rejects the amount, X and Y get nothing. How should X play?
X could reason that offering no money to Y will give Y no incentive to accept the offer, but offering any amount is better than nothing. So X keeps $99.99 and generously offers Y 1 cent.
And Y rejects it, and nobody gets anything.
Now was it logical for Y to reject free money? Yes, if Y gets a payoff from punishing unfairness. In fact, Y’s sense of fairness can run so far that Y would reject anything other than a 50/50 split (rejecting even $49.99 on principle). On the other hand, Y’s fairness may go only so far, so that Y might look the other way and accept a $30 share.
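One way to make Y’s behavior precise is to model Y’s sense of fairness as part of Y’s payoff. The sketch below assumes that rejecting an unfair offer is worth a fixed dollar-equivalent amount of satisfaction to Y; that modeling choice, and all the dollar figures, are my own illustrations, not part of the original game.

```python
# A sketch of the ultimatum game above, assuming Y's fairness enters Y's
# payoff as a fixed "satisfaction from punishing unfairness" (the `spite`
# values below are made-up illustrations).

def y_payoff(offer: float, accept: bool, spite: float) -> float:
    """Y's payoff: the cash if Y accepts, or the satisfaction of
    punishing an unfair split (worth `spite` dollars to Y) if Y rejects."""
    return offer if accept else spite

def y_accepts(offer: float, spite: float) -> bool:
    """Y accepts only when the cash beats the satisfaction of rejecting."""
    return y_payoff(offer, True, spite) >= y_payoff(offer, False, spite)

# A strongly fair Y (rejection satisfaction worth $50) holds out for 50/50:
print(y_accepts(0.01, 50.0))   # False: 1 cent is rejected
print(y_accepts(49.99, 50.0))  # False: even $49.99 is rejected on principle
print(y_accepts(50.00, 50.0))  # True:  only a 50/50 split clears the bar

# A milder Y (rejection satisfaction worth $25) looks the other way at $30:
print(y_accepts(30.00, 25.0))  # True
```

Once Y’s payoffs are quantified this way, Y’s “irrational” rejection of free money becomes a straightforwardly logical move — which is the section’s point.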
Again, to act logically when another agent is involved, you must understand the agent’s payoffs.
The Prisoner’s Dilemma
Interestingly, Game Theory offers a critique of logic in the form of its most famous game — The Prisoner’s Dilemma. This game shows how two agents can end up with poor payoffs by acting logically to maximize their payoffs! This game can be explained using the following example.
X and Y are arrested for a crime and separated so they cannot communicate. Now each faces the same choice. If both stay silent, they each go to prison for 1 year. If one confesses and the other stays silent, the confessor gets no prison time, but the silent one gets 4 years. Finally, if both confess, they each get 3 years. Given this, what should they do?
Notice the difference from the cake-cutting game. Previously, X went first, Y chose based on X’s move, and X — knowing Y would choose based on X’s move — then chose the first move, a process called backward induction (reasoning from the end of the game back to the start). In The Prisoner’s Dilemma, however, X and Y must make their decisions without knowing what the other is doing.
So what does X do? X reasons thus… if X stays silent and Y confesses, X gets the worst result possible (4 years). If X confesses and Y confesses, X’s outcome is a little better (3 years). What’s more, if X confesses and Y stays silent, X gets the best outcome possible (no prison time). So whatever Y does, X does better by confessing — confessing is what game theorists call a dominant strategy — and it also minimizes X’s worst-case scenario, making it the logical choice, especially for one who is risk averse.
Y reasons similarly and confesses as well.
As a result, they both serve 3 years in prison, whereas they could have done only a year each had they both stayed silent.
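The whole argument can be checked mechanically from the payoff table. Here is a minimal sketch (the dictionary layout is my own) verifying that confessing beats staying silent for X no matter what Y does, yet mutual confession leaves both worse off than mutual silence:

```python
# The Prisoner's Dilemma payoffs from the text, as years in prison
# (lower is better). years[(x_move, y_move)] = (X's sentence, Y's sentence).
years = {
    ("silent",  "silent"):  (1, 1),
    ("silent",  "confess"): (4, 0),
    ("confess", "silent"):  (0, 4),
    ("confess", "confess"): (3, 3),
}

# For each possible move by Y, confessing gives X a strictly shorter sentence:
for y_move in ("silent", "confess"):
    x_if_silent  = years[("silent",  y_move)][0]
    x_if_confess = years[("confess", y_move)][0]
    assert x_if_confess < x_if_silent  # confessing dominates staying silent

# Yet mutual confession (3, 3) is worse for BOTH than mutual silence (1, 1).
print(years[("confess", "confess")], years[("silent", "silent")])
```

By symmetry the same check holds for Y, which is why both “logical” players end up at (3, 3).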
Again, X and Y acted logically — each confessing preemptively (do unto others before they do unto you) — and as a result, they both ended up worse off.
The Prisoner’s Dilemma applies to a variety of situations. For instance, assume X and Y are nations that share an exhaustible resource. Both would benefit by reducing their consumption to preserve the resource, though each would lose some economic growth. But if X reduces its consumption and Y doesn’t, then Y keeps its economic growth and out-competes X, causing Y to gain and X to lose heavily. Yes, but X and Y are in communication, right? Well, they can sign a treaty, but who knows what they’ll REALLY do when the other isn’t looking? Mutual mistrust and treaty violations are equivalent to no communication as far as this game is concerned — neither party knows what the other is REALLY doing. In fact, some argue that many of our environmental challenges, including global climate change, are instances of this game.
Star Trek often used aliens as straw men to attack human tendencies: Vulcans for logic, Klingons for violence, the Ferengi for capitalism. Sometimes the entertainment value (especially when these tropes were subverted) made it worthwhile. Other times, they simply dumbed down what could have been a very valid critique.
In particular, Star Trek bungled logic from the get-go. There is no logic/emotion dichotomy, the Vulcans were not logical, and humans are not necessarily illogical. Furthermore, the obvious flaws in Star Trek’s straw men blinded its writers to more profitable avenues for critiquing logic — like The Prisoner’s Dilemma, in which logic can leave everyone worse off, and can even encourage immorality by pushing people to preemptively screw others out of fear of being screwed first.
On the other hand, insofar as Star Trek’s simplistic treatment invited or inspired some people to do independent research on the subject, it had value. For instance, for all my criticisms of Vulcans, they’re still my favorite Star Trek species, and they have even influenced my philosophical interests and tendencies (Buddhism, Stoicism, even Computer Science and Math).
Besides, they gave (what I hope was) an entertaining angle for introducing Game Theory.
So maybe I shouldn’t be too hard on them.