Straw Vulcans, Logic and Game Theory

Introduction

Among Star Trek’s most popular aliens are the Vulcans — a “purely logical” species that eschews emotion. While the Vulcans are an interesting species, they were unfortunately often used as straw men to attack logic via a false logic-vs-emotion dichotomy. In fact, this attack has become so clichéd that it earned a name: the Straw Vulcan. There’s even a video on the subject.

Star Trek’s treatment of Vulcans is so simplistic that it fails in its very premise — the false dichotomy between logic and emotion. See, rather than being opposing forces, logic SERVES emotion by charting an efficient course to emotional satisfaction. As such, agents are logical to the extent that they efficiently satisfy their emotional goals — whatever those goals may be. In fact, without emotion there would be no goal for logic; no reason for anyone to explore, play 3D chess or even get out of bed in the morning.

Or, to use Hume’s more succinct phrasing: “Reason [logic] is the slave of the passions.”

However, the problems go beyond that, for Star Trek consistently bungled its favorite Vulcan trope — how logical beings would act around humans. I hope to show this using a branch of mathematics designed to study the interaction of agents: Game Theory.

Cutting a Cake

To get a taste (ha!) of Game Theory, let’s look at a very simple game involving two players, call them X and Y. X cuts a cake in two, and Y chooses which piece to keep. Assuming X wants the biggest piece possible, how should X cut the cake?

X knows that Y gets to choose the piece, and assumes Y wants the biggest piece possible. So X knows that cutting uneven pieces means Y will simply take the bigger one, leaving X with the smaller one. With this in mind, X cuts the cake into two equal pieces, thus guaranteeing X the biggest piece X can get.
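To make X’s backward reasoning concrete, here’s a minimal Python sketch. The 10-unit cake, the whole-unit cut points, and the function names are all assumptions for illustration:

```python
# Divide-and-choose, solved by backward reasoning: for every cut X could
# make, predict Y's choice, then pick the cut that leaves X the most.

CAKE = 10  # hypothetical cake size, in arbitrary units

def x_best_cut():
    best_cut, best_payoff = None, -1
    for cut in range(1, CAKE):      # every possible whole-unit cut
        pieces = (cut, CAKE - cut)
        # Y moves second and (X assumes) grabs the bigger piece...
        x_payoff = min(pieces)      # ...so X is left with the smaller one
        if x_payoff > best_payoff:
            best_cut, best_payoff = cut, x_payoff
    return best_cut, best_payoff

print(x_best_cut())  # -> (5, 5): an even split is X's best guarantee
```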

Notice how X HAD to take Y into account.  Y was part of this game, and ignoring Y would have been delusional.  Contrast this with the Vulcans in Star Trek; they consistently ignore the “emotional nature” of humans when acting among them, then get surprised at the result.

Highly illogical.

Back to cake cutting. X chose the cut on the assumption that Y wanted the bigger piece. However, what if Y wanted the smaller piece? Does this mean Y is illogical? Not necessarily. For instance, maybe Y is on a diet or is an altruist. Therefore, X cannot judge Y’s logic until X understands Y’s payoffs. Again, contrast this with Star Trek’s Vulcans, who dismiss humans as “emotional” without first understanding the goals humans seek.
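Continuing the earlier sketch, we can make Y’s preference an explicit parameter and watch X’s “logical” cut change with it. The dieting-Y rule below is, again, just an illustrative assumption:

```python
# Same game, but Y's preference is now an explicit parameter: X must reason
# about Y's ACTUAL payoffs, not the payoffs X assumes Y "should" have.

CAKE = 10  # hypothetical cake size, as before

def x_best_cut(y_prefers):
    best_cut, best_payoff = None, -1
    for cut in range(1, CAKE):
        pieces = (cut, CAKE - cut)
        y_piece = y_prefers(pieces)      # Y chooses by Y's own payoff
        x_piece = sum(pieces) - y_piece  # X keeps the other piece
        if x_piece > best_payoff:
            best_cut, best_payoff = cut, x_piece
    return best_cut, best_payoff

print(x_best_cut(max))  # greedy Y  -> (5, 5): cut evenly
print(x_best_cut(min))  # dieting Y -> (1, 9): cut as unevenly as possible
```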

Payoffs

Payoffs are the wins or losses from a game (interaction), and they vary from player to player. Some people value money, others fairness, others altruism, yet others some complex combination of factors. It’s only by understanding the payoffs of the players that we can form an effective strategy. This is a tough task, and game theorists use a concept called utility to try to quantify it. Whether or not utility fully succeeds, a big part of playing a game well is quantifying the players’ payoffs.

As another example of disparate payoffs, imagine X and Y play the following game (game theorists call it the Ultimatum Game). X has $100 and can share any amount of it with Y. If Y accepts that amount, then X and Y keep their respective shares. If Y rejects it, X and Y get nothing. How should X play?

X could reason as follows: offering Y nothing gives Y no incentive to accept, but any positive amount is better for Y than nothing. So X keeps $99.99 and generously offers Y one cent.

And Y rejects it, and nobody gets anything.

Now, was it logical for Y to reject free money? Yes, if Y gets a payoff from punishing unfairness. In fact, Y’s sense of fairness may run so deep that Y rejects anything other than a 50/50 split (turning down even $49.99 on principle). On the other hand, Y’s fairness may only go so far, so that Y looks the other way and accepts a $30 share.
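Here’s a rough sketch of the Ultimatum Game, assuming (purely for illustration) that Y’s payoff is the cash received minus a penalty proportional to how lopsided the split is. The penalty weights are made-up numbers, not values from the literature:

```python
# Ultimatum Game sketch. Y's payoff is modeled as cash received minus a
# penalty for unfairness; the penalty weights are assumed, illustrative knobs.

POT = 10_000  # the pot in cents ($100.00), to avoid floating-point drift

def y_payoff(offer, fairness_weight):
    unfairness = max(0, (POT - offer) - offer)  # how far X's share exceeds Y's
    return offer - fairness_weight * unfairness

def x_best_offer(fairness_weight):
    # X searches for the smallest offer Y still prefers over rejecting (payoff 0).
    for offer in range(POT + 1):
        if y_payoff(offer, fairness_weight) > 0:
            return offer / 100  # back to dollars
    return POT / 100

print(x_best_offer(0.0))   # Y ignores fairness   -> 0.01 (a token cent works)
print(x_best_offer(0.5))   # mildly fair-minded Y -> 25.01
print(x_best_offer(10.0))  # principled Y         -> 47.62 (nearly 50/50)
```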

Again, to act logically when another agent is involved, you must understand the agent’s payoffs.

The Prisoner’s Dilemma

Interestingly, Game Theory offers a critique of logic in the form of its most famous game — The Prisoner’s Dilemma. This game shows how two agents can end up with poor payoffs precisely by acting logically to maximize their payoffs! The classic setup goes like this.

X and Y are arrested for a crime and separated so they cannot communicate.  Now each faces the same choice.  If both stay silent, they each go to prison for 1 year.  If one confesses and the other stays silent, the confessor gets no prison time, but the silent one gets 4 years.  Finally, if both confess, they each get 3 years. Given this, what should they do?

Notice the difference from the cake-cutting game. There, X went first, Y chose based on X’s move, and X, knowing Y would choose based on X’s move, picked that first move accordingly — a process called backward reasoning (known in game theory as backward induction). In The Prisoner’s Dilemma, however, X and Y must decide without knowing what the other is doing.

So what does X do? X reasons thus: if X stays silent and Y confesses, X gets the worst result possible (4 years). If X confesses and Y confesses, X’s outcome is a little better (3 years). What’s more, if X confesses and Y stays silent, X gets the best outcome possible (no time at all). In other words, whatever Y does, X does better by confessing: confessing is what game theorists call a dominant strategy. It also happens to minimize X’s worst-case scenario, which appeals to the risk-averse.

Y reasons similarly and confesses as well.

As a result, they both serve 3 years in prison, whereas they could have done only a year each had they both stayed silent.
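Here’s a small sketch that encodes the payoff matrix (years in prison, so lower is better) and checks X’s best reply to each move Y might make; the names are mine, for illustration:

```python
# Prisoner's Dilemma payoff matrix: YEARS[(x_move, y_move)] -> (x_years, y_years).
# Payoffs are years in prison, so lower is better.

YEARS = {
    ("silent",  "silent"):  (1, 1),
    ("silent",  "confess"): (4, 0),
    ("confess", "silent"):  (0, 4),
    ("confess", "confess"): (3, 3),
}

def x_best_reply(y_move):
    # Whatever Y does, X picks the move that minimizes X's own sentence.
    return min(("silent", "confess"), key=lambda m: YEARS[(m, y_move)][0])

for y_move in ("silent", "confess"):
    print(f"If Y plays {y_move!r}, X's best reply is {x_best_reply(y_move)!r}")
# X confesses either way: a dominant strategy. By symmetry, so does Y,
# and both serve 3 years instead of the 1 year mutual silence would cost.
```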

Again, X and Y acted logically, each applying the so-called silver rule (do unto others before they do it to you), and as a result they ended up worse off.

The Prisoner’s Dilemma applies to a variety of situations. For instance, suppose X and Y are nations that share an exhaustible resource. Both would benefit by reducing their consumption to preserve the resource, though each would lose some economic growth. But if X reduces its consumption and Y doesn’t, then Y keeps its economic growth and out-competes X, causing Y to gain and X to lose heavily. Yes, but X and Y can communicate, right? Well, they can sign a treaty, but who knows what each will REALLY do when the other isn’t looking? As far as this game is concerned, mutual mistrust and treaty violations are equivalent to not communicating at all — neither party knows what the other is REALLY doing. In fact, some argue that many of our environmental challenges, including global climate change, are instances of this very game.
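For what it’s worth, the same best-reply check carries over if we simply relabel the matrix for the resource game. The benefit numbers below are made up, but they preserve the dilemma’s structure:

```python
# The resource game has the same structure, just relabeled. Payoffs here are
# made-up units of economic benefit (higher is better), not years in prison.

BENEFIT = {
    ("reduce",  "reduce"):  (3, 3),   # resource preserved, modest growth for both
    ("reduce",  "consume"): (0, 4),   # Y out-competes X; X loses heavily
    ("consume", "reduce"):  (4, 0),
    ("consume", "consume"): (1, 1),   # resource exhausted; both lose
}

def x_best_reply(y_policy):
    # Higher is better now, so X maximizes its own benefit.
    return max(("reduce", "consume"), key=lambda p: BENEFIT[(p, y_policy)][0])

for y_policy in ("reduce", "consume"):
    print(f"If Y plays {y_policy!r}, X's best reply is {x_best_reply(y_policy)!r}")
# "consume" dominates for both nations: the same trap with new labels.
```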

Conclusions

Star Trek often used aliens as straw men to attack human tendencies: Vulcans for logic, Klingons for violence, Ferengi for capitalism. Sometimes the entertainment value (especially when these tropes were subverted) was worthwhile. Other times, the show simply dumbed down what could have been a very valid critique.

In particular, Star Trek bungled logic from the get-go. There is no logic/emotion dichotomy, Vulcans were not logical, and humans are not necessarily illogical. Furthermore, the obvious flaws in Star Trek’s straw men blinded the writers to more profitable avenues for critiquing logic — like The Prisoner’s Dilemma, in which logic can screw all involved, and even encourage immorality by pushing people to preemptively screw others out of fear of being screwed first.

On the other hand, insofar as Star Trek’s simplistic treatment invited or inspired some people to research the subject independently, it had value. For instance, for all my criticisms of the Vulcans, they’re still my favorite Star Trek species, and they have even influenced my philosophical interests and tendencies (Buddhism, Stoicism, even Computer Science and Math).

Besides, they gave (what I hope was) an entertaining angle for introducing Game Theory.

So maybe I shouldn’t be too hard on them.

Comments

    • Thank you! I think this was one of those posts that just happened; a friend sent me the link to the Straw Vulcan video around the same time I was watching a great series of Game Theory lectures (Games People Play). This was a while ago, but the idea just kept bouncing around in my head. When I encountered another game theory book (a game theoretic analysis of language), I figured it was time 🙂

  1. Reads as both a quick but strong defense of the integration of logic and emotion, and a brief introduction to game theory; and your criticism of Star Trek’s weird single-value alien characterizations is right on.

  2. This is a great entry. I really like the way you use Game Theory to make your point. 🙂

    One thing, however, that I think complicates this issue a bit is the distinction between reason and logic, and the distinction between emotion and desire. You are surely right that the notion of a purely logical being is problematic if not incoherent, but what about a fully rational being? Reason is wider than logic: it may be irrational for me to smoke because that will lower my life expectancy, even though it is not illogical for me to do so, because I have no explicit belief that living a longer life is better than living a shorter one. Thus reason, unlike logic, can influence what we value and pursue, as on a certain conception reason can determine ends, not just means. Of course this requires a non-instrumental conception of reason, but while such views are not in vogue they still seem quite powerful.

    Likewise, there is a tendency to conflate emotion and desire. Desire is the wider category because it merely involves a “want” of some kind. Desire may be necessary for action, but emotion certainly is not. I may want to do well on my exam and consequently study, but this hardly makes me emotional.

    I am not disagreeing with anything you have said, but trying to point out that dichotomies we tend to use interchangeably, like reason vs. desire and logic vs. emotion, are actually quite distinct.

    • Thank you for reading and for your comment.

      When writing the article, I was a bit concerned about the differences among logic, rationality and reason on the one hand, and emotion, desire and payoffs on the other, but felt I could ignore those differences in the context of the article. However, I had never thought of reason as determining values.

      I would love it if you would elaborate further on this. How, for instance, would reason lead to a change in values?

      • Sure, I have no problem saying a little bit more, and thanks for replying to my comment. Also, it makes perfect sense to me that you avoided distinguishing reason and logic, and emotion and desire in a discussion of game theory.

        The way we typically use reason in an academic context tends to denote the calculative or instrumental side of reason. For example, reason as something that allows us to most efficiently achieve certain ends.

        But when we reflect on what is important to us (our values), is not this activity a form of reason, albeit a very different form? We assess our sense of what matters and think, “I was wrong about what matters because I thought worldly success was more important than friendship.” Furthermore, we reinterpret our values in light of new experiences as we have them. Our values are not just brute sentiments that we happen to have, but rather changing understandings of what matters.

        Now, some people are hesitant to call these activities a form of reason, but it seems to me that we have one fairly strong reason to consider them to be such. Given that we think we can be mistaken and value the wrong things, it seems possible for us to be wrong about our values, and being wrong in this way is a kind of irrationality. For example, we might say someone who values wearing purple on Sundays as the most important aspect of his life is foolhardy because he misses out on what really matters. Consequently, valuing the wrong things seems to be a form of irrationality, and conversely we might say understanding what matters is a form of rationality.

        I think I can show how this form of reason operates through an example from my own life. I used to play in bands, and I treated some friends who were members of the band poorly because I wanted to ensure that the band was as successful as possible. Looking back on my actions now, I see that I was irrational: I failed to understand that praise and recognition would not provide much of significance in my life, whereas friendship is deeply important to my life’s success.

        In this sense, this form of reason operates through judgement, reflection and introspection and is much less exact than the instrumental conception. But it still seems plausible to me.

        Of course it will be noted that this conception of reason hearkens back to Aristotle’s notion of phronesis which is not merely about instrumentally reaching certain ends, but properly understanding and acting on the basis of what matters. So my views are hardly original here, but I hope I have stated them clearly.

      • Thanks for the detailed explanation; this is really thought-provoking, and suggests some fascinating questions. Off the top of my head…

        1. What is the mechanism by which we (re-)evaluate our goals?

        2. Where do our goals come from?

        3. Is (re-)evaluating our goals a case of turning logic on itself, or is there a higher value system in place to which these other goals are lesser values? Put another way, are our logic+goals milestones in the service of higher logic+goals, such that these form a nested/recursive/fractal structure to our life?

      • I will do my best to answer your questions, but my answers are a bit sketchy as I struggle with conceptualizing these issues.

        1. The mechanism is driven by our capacity for ethical reflection (i.e., reflection on what matters). But in a more specific sense, I think we are driven to reevaluate our goals by events in our own lives. It is not as if we can mechanically reevaluate them successfully at any time just by thinking really hard. Instead, our reflection is triggered by an event that we experience. After having this experience, our previous goals begin to seem hollow, and we turn back towards them with an openness to their being wrong. In this sense reason does not play the role of master, as fate or fortune seem to play a role as well, but reason still plays a significant role.

        2. At the most basic level, our goals come from our natural constitution as biological beings and from our culture. We are given a set, or conflicting sets, of goals when we are born into the world that reflect our human biological tendencies and our specific culture. We use this background to reinterpret and make our own goals through rational reflection, sometimes conforming to previous goals, and in other contexts re-imagining them.

        3. I tend not to see reevaluating our goals as a form of turning logic on itself. But I would see our goals as constitutive of, or instrumental to, the pursuit of higher goals. For example, my goal to develop a friendship with this person may matter intrinsically, but it could be part of a wider goal to connect intimately with other beings (both rational and non-rational). So I think I am positing a nested structure to our life. If we dig deep enough, each person will have a set of fundamental values that most of their other goals reflect. And these fundamental values are each person’s answer to the question of what matters.

  3. Hmm, very interesting idea. However, I wonder if it’s possible to be self-interested without emotion? Perhaps you could program a computer to, for example, enrich itself on the stock market. Would that not be a “purely logical” system operating with its own non-emotional motivations?
