Evaluative judgments come into play even in chess, at which HAL also demonstrates proficiency (see chapter 5). Before his 1996 match with Deep Blue, IBM's chess-playing computer, Garry Kasparov contrasted human and machine play: "Every computer has a fixed set of priorities. The safety of the king, active pieces, open diagonals, and so forth. We humans don't have in our head a fixed list; we feel the most important things to evaluate."
Daniel Goleman uses the term emotional intelligence for a successful balance between healthy, intelligent control of one's own emotions and the ability to recognize emotions in others and empathize with them. Although the discussion to date has focused on human emotional intelligence, the concept is equally applicable to computers, especially those that interact with us. If we give computers only the fundamental ability to recognize emotions, they might, like the tutor in our example, detect increasing levels of distress in us. Yet it is one thing to detect distress and quite another to know how to respond to it intelligently. Emotional intelligence goes beyond recognizing, expressing, and having emotions; it requires knowledge and wisdom about how to use them.
Reporter: One gets the sense that he is capable of emotional responses. When I asked him about his abilities I sensed a sort of pride ...
Bowman: Well, he acts like he has genuine emotions. Of course he's programmed that way to make it easier for us to talk with him. But whether or not he has real feelings is something I do not think anyone can truly answer.
Bowman's answer parries a difficult question that lies more in the domain of philosophy than in that of science: Can computers have emotions? The answer, of course, depends on the definition of emotions, which theorists still argue about; so at present there is no good answer. This question parallels the question "Can computers have consciousness?" where consciousness is also difficult to define. In the novel, Clarke endows HAL with self-consciousness, a necessary prerequisite for certain kinds of emotions, such as shame or guilt (see chapter 16).
Let's consider two scenarios in which a computer might be seen as having emotions. In the first, the emphasis is on primary emotions (the more innate, hard-wired kind); in the second, on secondary emotions, which typically involve cognitive evaluation.
Scenario 1. A robot used to explore a new planet is given some basic emotions in order to improve its chances of survival. In its usual, nonemotional state, it traverses the planet, gathering data, analyzing it, and communicating results back to earth. At one point, however, the robot senses that it has been physically damaged and changes to a new internal state, perhaps named "fear." In this state it behaves differently, quickly reallocating resources to drive its perceptual sensors (e.g., its "eyes" might open wider) and to give extra power to its motor system so it can move rapidly away from the source of danger. As long as the robot remains in a state of fear, however, it has insufficient resources to perform its data analysis (like human beings, who can't concentrate on a task when they are in danger). Its communication priorities also change: instead of reporting scientific results, it puts out a call for help. This so-called fear state lasts until the threat passes, then decays gradually over time, returning the robot to the no-emotion state in which it resumes its scientific goals.
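Read as a program, this scenario is essentially a small state machine: a trigger jumps the robot into "fear," resources are reallocated while that state is active, and the state decays back to neutral. The Python sketch below shows one possible reading; the 0.5 activation threshold, the decay constant, and the resource labels are illustrative assumptions rather than details from the scenario.

```python
# A minimal sketch of Scenario 1 as a simple state machine. The threshold,
# decay constant, and resource labels are assumptions for illustration.

class ExplorerRobot:
    FEAR_DECAY = 0.8  # assumed fraction of fear retained per time step

    def __init__(self):
        self.fear = 0.0  # 0.0 = "no emotion", 1.0 = full-blown fear

    def tick(self, damaged=False, threat_present=False):
        """Advance one time step and return the robot's resource allocation."""
        if damaged:
            self.fear = 1.0               # hard-wired jump into "fear"
        elif not threat_present:
            self.fear *= self.FEAR_DECAY  # gradual return toward neutral
        if self.fear > 0.5:
            # Fear state: widen sensors, flee, call for help; data analysis
            # is starved of resources, as in the scenario.
            return {"sensors": "wide open", "motors": "flee",
                    "comms": "call for help", "science": "suspended"}
        return {"sensors": "normal", "motors": "rove",
                "comms": "report results", "science": "running"}

robot = ExplorerRobot()
print(robot.tick(damaged=True))   # enters fear: flees and calls for help
for _ in range(5):
    print(robot.tick())           # fear decays; science eventually resumes
```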
Scenario 2. A computer is learning to be a smart personal assistant, helping you schedule meetings and retrieve important information. It has two ways of getting feedback. In the first, you give it feedback directly by selecting preferences (essentially programming it). Alternatively, it watches how you respond to its assistance and programs itself. It enters a state called "feel good" when (1) you feel good or express pleasure at its performance, and (2) you succeed at a task more efficiently and accurately than usual. It might also have a corresponding "feel bad" state for the reverse situation, as well as a neutral "no emotion" state, a "feeling curious" state, and an "I'm puzzled" state. When the system has been in its feel-good state for several days, it becomes more curious, trying out new ways to help you and taking more risks. When it lingers in a feel-bad state, it allocates more resources to trying to understand your wishes. When you make a complicated set of demands, it weighs the feel-good and feel-bad associations and tries to choose an action that satisfies goals (1) and (2). Unlike a fixed computer program, it doesn't expect you to behave consistently or require precise rules telling it how you want it to behave. It copes with your human fickleness by aiming for a dynamic balance, recognizing that you will often not show pleasure when it performs well and will sometimes complain or show approval inconsistently. At such times, depending on how calm or agitated you are (measured against your norm), it either asks for clarification or makes a note to come back later and try to understand the situation - perhaps when you are not so agitated. Its use of emotions helps it make flexible, creative, and intelligent decisions.
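This scenario, too, can be sketched in code: a mood value nudged by the two feedback conditions, plus simple rules mapping mood to behavior. In the Python sketch below, the equal weighting of goals (1) and (2), the smoothing factor, the thresholds, and the agitation scale are all assumptions; the scenario specifies only the qualitative behavior.

```python
# A minimal sketch of Scenario 2's feel-good/feel-bad balance. All numeric
# parameters here are illustrative assumptions, not from the scenario.

class AssistantMood:
    def __init__(self):
        self.mood = 0.0       # > 0 ~ "feel good", < 0 ~ "feel bad", 0 = neutral
        self.good_streak = 0  # consecutive feedback rounds spent feeling good

    def feedback(self, user_pleased, task_succeeded):
        """Update mood from goals (1) user pleasure and (2) task success."""
        delta = (0.5 if user_pleased else -0.5) + (0.5 if task_succeeded else -0.5)
        # Blend new evidence into a running average rather than trusting any
        # single signal: the "dynamic balance" that absorbs human fickleness.
        self.mood = 0.8 * self.mood + 0.2 * delta
        self.good_streak = self.good_streak + 1 if self.mood > 0.3 else 0

    def policy(self, user_agitation=0.0):
        """Pick a behavior; user_agitation is measured against the user's norm."""
        if self.good_streak >= 3:
            return "curious: try new ways to help, take more risks"
        if self.mood < -0.3:
            if user_agitation > 0.7:  # user well above their norm (assumed scale)
                return "note the episode and revisit it when the user is calmer"
            return "ask for clarification and study the user's wishes"
        return "neutral: follow current preferences"

assistant = AssistantMood()
for _ in range(4):
    assistant.feedback(user_pleased=True, task_succeeded=True)
print(assistant.policy())  # after a sustained good stretch, it explores
```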
In both scenarios, the computer's emotions are labels for states that may not exactly match the analogous human feelings, but that initiate behavior we would expect someone in that state to display.
In both cases, giving the computer emotions serves some ostensibly greater human good, such as survival (saving humans the cost of building and dispatching another robot) or performance (saving humans time, money, and frustration). In neither case are emotions provided to dignify the machine by creating it in the image of a human being. Doing so would raise issues of computer slavery and computer rights that are many decades down the road! In any case, discussing them would take us far from the aims of this chapter.