Note: This is part 2 of a 5-part series. In part 1, I outline a model of emotion: namely, that our emotions give us information about the implied structure of future possibilities. In this, part 2, I explore some mathematical properties of this model. In part 3, I analyze economic activity in terms of this model. In part 4, I look at politics through the lens of this model. In part 5, I tie things together.
Emotion We Can Think Clearly About
In the first post of this series, I proposed a model of emotion: namely, that our emotions inform us of the “Structure of Possibility” corresponding to the world as we understand it. I gave a very simple example scenario involving emotion, and showed how the model predicted the emotions experienced by the people in that scenario. In this post, I will explain the benefits of this model from a computational perspective. In short, the “Structure of Possibility” model is amenable to existing mathematical techniques. By thinking of a person’s emotions as describing the ‘structure of possibility’ of that person’s reality model, we can use the precise language of mathematics to reason clearly about emotion.
The ‘Structure of Possibility’ of a system is only a little more elaborate than its entropy. The entropy of a system is a measure of the number of configurations available to it. You may have heard entropy explained as the measure of ‘disorder’ in a system, but this is a massive oversimplification. A better way of looking at entropy – and one which is very easily defined and reasoned about – is to interpret it the way it is defined in the field of statistical mechanics: it is a measure of how many possible ways a system could be. The claim that entropy is a measure of ‘disorder’ comes from the fact that there are many more ways to distribute elements randomly than there are to distribute them in a pattern.
The ‘Structure of Possibility’ of a system is the relationship between those configurations. For a simple example, consider the game ‘tic-tac-toe’. In this game, players take turns putting symbols onto an initially-empty 3×3 grid. We can ask the question “what is the entropy of a game of tic-tac-toe, having been played for 3 turns?” The answer is the total number of possible configurations of a tic-tac-toe game after three turns – that is, the number of possible game boards after three turns. This is equal to (9×8×7)/2 = 252: the first player has 9 possible choices, and the second player has 8, so at the end of the second turn there are 9×8 possible configurations of the board. The first player then has 7 options for the third turn. We divide by 2 to account for the fact that it doesn’t matter which order the first player made their two moves in. For example, both of these boards here:
can lead to this board here:
The entropy of the game of tic-tac-toe – or of any system – does not take history or the future into account. It is only concerned with the “present state” of the board. The “Structure of Possibility” of tic-tac-toe is like entropy, but taking history into account. What is the entropy of tic-tac-toe after N turns? It’s the natural log of the number of possible boards after N turns. Those raw board counts are as follows: [ 1, 9, 72, 252, 756, 1260, 1680, 1260, 630, 630 ]. I computed those numbers for use in a little site I made to show you something far more interesting than the entropy of tic-tac-toe: the structure of its possibility. The structure of possibility for tic-tac-toe is here – but only for four turns. It gets too big after that. You can use the WASD keys, the arrow keys, and the mouse to move around this massive, connected structure.
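The early entries of that list (turns 0 through 4, before any game can end in a win) can be reproduced with a short brute-force enumeration. This is a sketch: it counts every board reachable by alternating X and O marks, ignoring wins, and takes the natural log of each count to get the entropy.

```python
import math

def boards_after(plies):
    """Enumerate the distinct tic-tac-toe boards reachable after `plies` moves.
    Boards are 9-character strings for the 3x3 grid; wins are ignored, which
    matches the counts for the early turns, before a win is possible."""
    boards = {"." * 9}  # start from the empty board
    for ply in range(plies):
        mark = "X" if ply % 2 == 0 else "O"  # players alternate
        nxt = set()
        for board in boards:
            for i, cell in enumerate(board):
                if cell == ".":
                    nxt.add(board[:i] + mark + board[i + 1:])
        boards = nxt  # the set automatically merges duplicate boards
    return boards

counts = [len(boards_after(n)) for n in range(5)]
print(counts)  # [1, 9, 72, 252, 756]
entropy = [math.log(c) for c in counts]  # entropy = natural log of the count
print([round(s, 3) for s in entropy])
```

Note that the count after three turns comes out to 252, matching the (9×8×7)/2 calculation above: the set of boards collapses the two move orders that lead to the same position.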
If that seems big to you, try imagining one of those for “physics”: each possible mapping of the total quantity of energy of the universe into space, time, matter, heat, and momentum. It would be very large, to put things succinctly. And yet, for tic-tac-toe, that massive, gargantuan structure you see in the link – that big, tangled, confusing thing – is no more complex than the simple strategy everyone knows. You don’t lose at tic-tac-toe unless you make a mistake. Could physics work the same way?
Chess AI and Happiness
Chess players use a simple heuristic to evaluate the strength of a possible move: they count how many moves they could make from the board which results if they take that move. In other words, they take a measure of the Structure of Possibility. The Structure of Possibility model predicts that the “just do what makes you happy” strategy works for the same reason, and is susceptible to the same sorts of problems and flaws, as that chess heuristic. Both strategies work reasonably well for beginners – chess novices counting the moves available afterwards, poker players counting outs, children playing – well enough that many people suspect they should just work everywhere, in both chess and life.
This heuristic – maximizing possible future outcomes by simply counting the number of ways things could turn out as a result of different choices, and then making the choice that maximizes that number – was recently proposed as a basis for intelligence by a team of physicists. They built a system that worked on that principle alone, and it was capable of learning physical motion and even investing successfully in the stock market, without any conceptual understanding of kinetics, money, portfolios, or balancing – without even being told that more money is better than less money. It learned that on its own, just by operating on the simple principle of maximizing possible future histories.
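As a toy sketch of that principle – the grid world, walls, starting position, and lookahead horizon below are all invented for illustration – an agent can score each candidate move by counting how many distinct cells remain reachable within a few steps, then take the move with the highest count. This steers the agent toward open space and away from dead ends, with no other goal programmed in.

```python
# '#' is a wall, '.' is open floor. The top-left open cell is a dead end.
GRID = [
    "#######",
    "#.#...#",
    "#.#.#.#",
    "#...#.#",
    "#######",
]
MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))  # down, up, right, left

def reachable(start, steps):
    """Count distinct open cells reachable from `start` within `steps` moves."""
    seen = {start}
    frontier = {start}
    for _ in range(steps):
        nxt = set()
        for r, c in frontier:
            for dr, dc in MOVES:
                cell = (r + dr, c + dc)
                if GRID[cell[0]][cell[1]] == "." and cell not in seen:
                    nxt.add(cell)
        seen |= nxt
        frontier = nxt
    return len(seen)

def best_move(pos, horizon=4):
    """Pick the neighboring cell that keeps the most future states open."""
    options = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES
               if GRID[pos[0] + dr][pos[1] + dc] == "."]
    return max(options, key=lambda cell: reachable(cell, horizon))

print(best_move((3, 1)))  # (3, 2): toward the open area, away from the dead end
```

From (3, 1), moving up leads toward the two-cell dead end (6 cells reachable within the horizon), while moving right opens onto the rest of the grid (8 cells reachable), so the heuristic picks the move to the right.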
Of course, if your model of the world is broken, trying to maximize possible outcomes according to that model is not going to work for you. The “structure of possibility” model of emotion says that a person operating under a broken model of reality is going to have a much less pleasant emotional experience than someone who understands the world.
Language: Hot and Cold
The temperature of a system is actually defined in terms of how its entropy changes when energy is added to or removed from it. Most people don’t know this. If you ask people what temperature is, they think the answer is obvious: it’s how hot or cold something is. But what does that mean? People will also say that emotional responses are “obvious”, but they can never describe how they work, or why. When you break one of these “mysterious social rules” that govern our lives, they’ll look at you with a sort of disdain for not understanding something that is “obvious” to them. It’s so obvious, they’d say – you just can’t explain it. You just have to know these things. The aggressive, unpleasant responses to people who don’t understand “the rules of the game” make it much harder for a person like myself to learn emotional intelligence. That “obvious” claim, though, reminds me of the history of mathematics. The chest-puffing “these statements are so obviously true, no reasonable man could doubt them” lay at the basis of geometry for thousands of years, until people realized that hiding under these “obvious” and “clearly correct” notions was something very terrifying. Non-Euclidean geometry – which was built by discarding an unprovable but “obvious” truth and showing that things could still make sense – changed our notions of what was right and normal and true.
The fact that people respond negatively to you if you don’t understand the emotional rules of a situation is itself interesting, because that response is an emotional one. Imagine living in a world where you asked “hey guys, how does sound work?” – and everyone shouted at you that it was obvious, so just shut the fuck up about it, OK? Just do what you’re told! The advice we give people these days is not “do what you are told” – it’s “follow your heart” – which is great advice if you have an accurate understanding of reality, but if not, it’s just going to lead you into a dead end. If our emotions operate under principles like sound or vision, then telling a person “just follow your heart” would make as much sense as telling them “just do whatever you are told”, or “execute all instructions received in the visual context” – it’s really messed up. Of course, it also serves the interests of people who understand emotion but choose not to explain it to others, because they know that controlling other people’s emotions gives you power over them. If there’s a room full of men shouting and yelling, and one guy is calm – that guy is either already in charge, or he will be soon.
Let’s explore that relationship between temperature and language further. Someone who responds angrily and aggressively is said to be ‘hot-headed.’ Someone who responds without excessive energy or passion is said to be “cool” or “cold.” In terms of the definition of temperature, a cool system is one that experiences a large entropy increase when given a small amount of energy. A hot system is one whose entropy doesn’t increase as much, given the same amount of energy. Energy flows from hot systems to cold ones, as a consequence of the second law of thermodynamics. It’s almost as if the basic stuff of the universe – energy – is doing the same “maximize happiness” routine, trying to maximize possibility wherever it goes. A hot system can lose some energy without losing nearly as much entropy as the cold system will gain from that same amount of energy – which is why the hot system leaks energy to the cool system, the way a room full of men who are shouting and upset are easily led under the influence of someone who is calm and patient with them. How could they not be taken under this man’s sway? He must understand something we don’t – how else does he stay so calm when there are ghosts running loose?
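That definition can be made concrete with a toy numerical illustration (the entropy function and every number here are invented for the sketch): give each system an entropy S(E) = N·ln(E), so that 1/T = dS/dE = N/E and temperature works out to T = E/N. Moving a small parcel of energy from the hot system to the cold one then costs the hot system less entropy than the cold system gains, so the total entropy rises.

```python
import math

# Toy model: entropy S(E) = N * ln(E), so 1/T = dS/dE = N/E, i.e. T = E/N.
def S(E, N):
    return N * math.log(E)

def T(E, N):
    return E / N

hot  = {"E": 100.0, "N": 5}  # lots of energy per particle -> high temperature
cold = {"E": 20.0,  "N": 5}  # little energy per particle  -> low temperature

dE = 1.0  # move a small parcel of energy from hot to cold
dS_hot  = S(hot["E"] - dE, hot["N"]) - S(hot["E"], hot["N"])     # small entropy loss
dS_cold = S(cold["E"] + dE, cold["N"]) - S(cold["E"], cold["N"]) # larger entropy gain
print(dS_hot, dS_cold)
print(dS_hot + dS_cold > 0)  # True: total entropy increases
```

The flow is spontaneous in exactly one direction: reversing the transfer (cold to hot) would make the total entropy change negative, which is what the second law forbids.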
“I am hot,” a person says. “I hear a truck, I see a bird, and I am hot.” You’d never say “I am a bird” – that’s just what you are hearing. You’d never say “I am a truck” – that’s just what you are seeing. Why, then, do we say “I am sad” rather than “I feel sad”? Could it possibly be for the same reason we say “I am hot” when everyone knows we mean “I feel hot”? Entropy and temperature are very, very closely related. It’s not a stretch at all to say that people can sense temperature – this is obviously true. Given that we alternate between the word “feel” and the verb “to be” when describing both our temperature and our emotional state, is it absurd to suggest that emotional state is connected to entropy? Can a person sense entropy? What would that feel like? Perhaps the Catholic belief that conscience is innate in all humans is simply a belief that human beings can naturally sense the entropy of their environments, and that this is what our emotional states are. This hypothesis – that humans can naturally sense entropy – is a throwaway, and should not be considered central to the larger notion here: that emotion can be understood just as sound, light, and heat are all understood – as mathematical phenomena.
If you feel any emotions upon reading this, explore what those emotions mean in the light of the Structure of Possibility model. If you think my references to religion make no sense in this scientific context, ask yourself what role religion plays, besides giving some structure to people’s emotional experiences – particularly those revolving around unanswered questions with big implications for the large-scale structure of the world’s possibility. Aside, of course, from power. Perhaps a board of directors is a room full of people extracting energy from a heat reservoir and moving it to a cold place, like a Carnot engine.
Please leave comments with your thoughts, then move on to part 3.