Affect and Aesthetics
Art does not think logically, or formulate a logic of behavior; it expresses its own postulate of faith. If in science it is possible to substantiate the truth of one's case and prove it logically to one's opponents, in art it is impossible to convince anyone that you are right if the created images have left him cold, if they have failed to win him with a newly discovered truth about the world and about man, if in fact, face to face with the work, he was simply bored.
-- Andrey Tarkovsky, Sculpting in Time: Reflections on the Cinema
Affect not only plays important roles in communication and decision making; it also shows up, in a subtler way, in aesthetic appreciation. The film 2001 does not go into this topic at length, but it does include an intriguing scene suggesting that HAL has some artistic sensibilities. When I first saw the scene below, I was initially impressed with HAL's visual system (see chapter 10). Looking back, I am more interested in how HAL can comment on the quality of what he sees. Consider this dialogue, which takes place after HAL and Dave have exchanged small talk.
HAL: Have you been doing some more work?
Dave: Just a few sketches.
HAL: May I see them?
HAL: That's a very nice rendering, Dave. I think you've improved a great deal. Can you hold it a bit closer?
Not only can HAL see well enough to recognize that this is a drawing
of Dr. Hunter, but he recognizes that Dave's drawing ability has
improved. In general, aesthetic evaluation is a difficult
problem. Mere realism in a rendering is not the only criterion. Most
of us think successful art also has some kind of emotional appeal
-- something that beckons us to relinquish our cold distance
-- something not captured by logical rules, as Tarkovsky
suggests. Now, it may be that HAL is merely commenting on the realism
of Dave's drawing. But, if HAL really has emotions, perhaps he could
develop a form of internal aesthetic appreciation. Perhaps an
affective HAL could discover new forms of beauty and enjoyment, ones
we have not yet begun to dream of.
One could argue that the story of 2001 makes sense without assuming that HAL has emotions, that it is perfectly logical for HAL to be conflicted, given the nature of the mission. But these claims miss the mark, for emotions do not have to be illogical. They can arise from very logical circumstances (e.g. "here is a goal you really want, you are prevented from getting it, therefore you feel frustrated"). Yet the book and film deliberately use emotional language to describe HAL's internal states, whether they are justified logically or not. Moreover, HAL, like all machines, has only a finite amount of computational space. If most of his resources are allocated to protection and to reasoning about a source of distress, less capacity will be available for diagnosing and maintaining the operations of the ship, which increases the probability of error. So, although a machine doesn't need to have emotions to malfunction, some emotional states may make malfunctions more likely.
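The claim that emotions can arise from logical circumstances can be made concrete. The following sketch is purely illustrative and not from the chapter: it encodes the "desired goal, blocked, therefore frustrated" rule as a simple appraisal function. All names (`Appraisal`, `appraise`, the emotion labels) are assumptions for the sake of the example.

```python
# Hypothetical sketch of a logical appraisal rule for emotion:
# "here is a goal you really want, you are prevented from getting it,
#  therefore you feel frustrated."
from dataclasses import dataclass

@dataclass
class Appraisal:
    goal: str
    desired: bool   # does the agent want the goal?
    blocked: bool   # is progress toward the goal prevented?

def appraise(state: Appraisal) -> str:
    """Map a goal situation to an emotion label by a simple logical rule."""
    if state.desired and state.blocked:
        return "frustration"
    if state.desired and not state.blocked:
        return "anticipation"
    return "indifference"

print(appraise(Appraisal("complete the mission", desired=True, blocked=True)))
# frustration
```

Nothing here is illogical: the emotion label follows deterministically from the agent's situation, which is the point of the argument above.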
HAL's malfunctioning, then, appears to be caused by his internal conflict. His most sinister behavior, however, is apparently evoked by something he finds even more unimaginable. In the novel, Clarke writes, HAL realizes that "he had been threatened with disconnection; he would be deprived of all his inputs, and thrown into an unimaginable state of unconsciousness. To HAL, this was the equivalent of Death. For he had never slept, and therefore he did not know that one could wake again... "
At this point, I think, the 2001 story is weakest. If HAL were so intelligent, why didn't he know about disconnection and reconnection? Obviously it makes a more dramatic story this way, for the consequences are deadly. HAL becomes a deliberately malicious assassin. He manages to kill every crewman on Discovery except Bowman, and Bowman, in the end, has to disconnect HAL. The fictional message, repeated in many forms, is serious: a computer that can express itself emotionally will some day act emotionally. And the consequences may be tragic.
In our discussion, we have outlined several beneficial reasons for pursuing the development of affective computers. Nonetheless, such machines inevitably pose a dilemma: Can we create computers that will recognize and express affect, feel empathy, exhibit creativity and intelligent problem solving, and never bring about harm through their emotional reactions?
Isaac Asimov, in The Bicentennial Man, subjects his affective robots to three laws of behavior to prevent them from bringing harm to people; one of them puts human life above the self-preservation of the robot. Still, his laws are not infallible; we can imagine conflicts in which the robot will not be able to reach a rational decision based on the laws. Indeed, Asimov's robots could be rendered completely ineffectual by situations that force them to reach a decision when two laws are in conflict. Without an emotion system to determine saliency and, ultimately, override rules, a law-based robot -- somewhat like Damasio's patients -- is severely handicapped in its decision-making ability.
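The deadlock described above can be sketched in a few lines. This is a hypothetical toy model, not Asimov's laws or Damasio's findings: a purely rule-based chooser returns no decision when every available action violates some law, while adding an emotion-like saliency weight (an assumed urgency score) breaks the tie. All names and numbers are invented for illustration.

```python
# Hypothetical sketch: law-based choice deadlocks under conflicting rules;
# a saliency weight (standing in for an emotion system) breaks the tie.

def rule_based_choice(actions, violations):
    """Return an action that violates no law, or None if none exists (deadlock)."""
    legal = [a for a in actions if not violations[a]]
    return legal[0] if legal else None

def saliency_choice(actions, violations, saliency):
    """Prefer a legal action; if every action violates a law,
    override the rules and pick the least-salient violation."""
    legal = [a for a in actions if not violations[a]]
    if legal:
        return legal[0]
    return min(actions, key=lambda a: saliency[a])

actions = ["act", "wait"]
violations = {"act": "harms a human",
              "wait": "allows harm through inaction"}   # both laws violated
saliency = {"act": 0.9, "wait": 0.4}  # assumed urgency weights

print(rule_based_choice(actions, violations))         # None: deadlock
print(saliency_choice(actions, violations, saliency)) # wait
```

The rule-based agent, like Asimov's conflicted robots, simply stalls; the saliency-weighted agent still reaches a (possibly imperfect) decision, which is the handicap the paragraph above attributes to a purely law-based system.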
Is this a reason not to build affective computers? I think not,
because I expect us to find solutions to this dilemma. What if
computers were designed never to hide their emotions from humans? Or
what if they were taught not to fear disconnection? And to value
human life? Of course, we are decades from having to worry about these
issues in practice. They do, however, raise new questions about
responsibility, ethics (see chapter 16), and perhaps even machine