Thursday, 26 October 2017

Do intelligent robots need emotion?

It is becoming increasingly clear that the parts of our brains processing emotions are not tidily separated from those dealing with reason, cognition, perception, motivation, and action. This leads Pessoa to suggest that intelligent robots built to act like humans should not merely include emotion-related components in their information-processing architecture; rather, cognitive-emotional integration should be a key design principle. Here are a few clips from his essay in Trends in Cognitive Sciences:
In the past two decades, a steady stream of researchers has advocated the inclusion of emotion-related components in the general information-processing architecture of autonomous agents. One type of argument is that emotion components are necessary to instill urgency in actions and decisions. Others have advocated emotion components to aid in understanding emotion in humans, or to generate human-like expressions. In this literature, including affect is frequently associated with the addition of an emotion module that can influence some of the components of the architecture.
The framework advanced here goes beyond these approaches and proposes that emotion (and motivation) needs to be integrated with all aspects of the architecture. In particular, emotion-related mechanisms influence processing beyond the modulatory aspects of ‘moods’ linked to internal states (hunger, sex-drive, etc.). Emotion can be thought of as a set of valuating mechanisms that help to organize behavior, for instance by helping take into account both the costs and benefits linked to percepts and actions. At a general level, it can be viewed as a biasing mechanism, much like the ‘cognitive’ function of attention. However, such a conceptualization is still overly simplistic, because emotion does not amount to merely providing an extra boost to a specific sensory input, potential plan, or action. When the brain is conceptualized as a complex system of highly interacting networks of regions, we see that emotion is interlocked with perception, cognition, motivation, and action. While we can refer to particular behaviors as ‘emotional’ or ‘cognitive’, this is only a linguistic shortcut. Thus, the idea of a biasing mechanism is too limited. From the perspective of designing intelligent agents, all components of the architecture should be influenced by emotional and motivational variables (and vice versa). Thus, the architecture should be strongly non-modular.
...the central argument described here is not whether emotion is needed (the answer is ‘yes’), but that emotion and motivation need to be integrated with all information-processing components. This implies that cognitive–emotional integration needs to be a principle of the architecture. In particular, emotion is not an ‘add-on’ that endows a robot with ‘feelings’, allowing it, for instance, to report or express its internal state. Rather, it allows the significance of percepts, plans, and actions to be an integral part of all of its computations. Future research needs to integrate emotion and cognition if intelligent, autonomous robots are to be built.
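
One way to picture the contrast Pessoa draws is in code. The sketch below is a minimal, hypothetical illustration (not from the essay): instead of a separate "emotion module" bolted onto a pipeline, a shared valuation signal is read and updated by perception, planning, and action alike. All function names and numbers are illustrative assumptions.

# Minimal sketch of cognitive-emotional integration: a shared valuation
# signal biases every stage rather than living in a separate module.
# Illustrative only; names and numbers are arbitrary assumptions.

from dataclasses import dataclass

@dataclass
class Valuation:
    """Running cost/benefit estimates that every stage reads and updates."""
    threat: float = 0.0   # estimated cost signaled by current percepts
    reward: float = 0.0   # estimated benefit of the current goal

def perceive(stimuli, val):
    """Perception is valuation-weighted: higher threat lowers the salience cutoff."""
    threshold = 0.5 - 0.3 * val.threat
    salient = [s for s in stimuli if s["salience"] >= threshold]
    # Perceiving a threatening stimulus also updates valuation (two-way integration).
    val.threat = max([s["threat"] for s in salient] + [val.threat])
    return salient

def plan(goals, val):
    """Plan selection trades off benefit against valuation-weighted cost."""
    def utility(g):
        return g["benefit"] * (1 + val.reward) - g["cost"] * (1 + val.threat)
    return max(goals, key=utility)

def act(chosen, val):
    """Action urgency scales with valuation rather than being fixed."""
    urgency = 1.0 + val.threat + val.reward
    return {"action": chosen["name"], "urgency": round(urgency, 2)}

if __name__ == "__main__":
    val = Valuation()
    stimuli = [
        {"name": "shadow", "salience": 0.6, "threat": 0.7},
        {"name": "food cue", "salience": 0.8, "threat": 0.0},
    ]
    goals = [
        {"name": "forage", "benefit": 1.0, "cost": 0.6},
        {"name": "retreat", "benefit": 0.4, "cost": 0.1},
    ]
    percepts = perceive(stimuli, val)   # raises val.threat to 0.7
    chosen = plan(goals, val)           # threat tips the choice toward "retreat"
    print([p["name"] for p in percepts], act(chosen, val))

In this toy example the perceived threat changes which plan wins and how urgently it is executed; deleting the Valuation object would break every stage, which is the sense in which the design is non-modular.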


from Deric's MindBlog http://ift.tt/2lhN64q
