
Chapter 2 Background

2.1 The role of emotions as a coordinating smoother in joint action

Humans have a remarkable ability to organize joint action fluently, achieved by anticipating the motor intentions of others (Sebanz et al., 2006). In everyday social interactions, humans continuously monitor the actions of their partners and effortlessly interpret them in terms of their outcomes. These predictions are used to select adequate complementary behaviours, and this can happen without the need for explicit verbal communication.

To achieve a useful and efficient human-robot collaboration, both partners are required to coordinate their actions and decisions in a shared task. The robot must then be endowed with cognitive capacities such as action understanding, error detection and complementary action selection.

Classical accounts of joint action involve the notion of shared intentions, which require that the participants have common knowledge of a complex, interconnected structure of intentions and other mental states (see Bratman, 1992, 1993, 1997, 2009; Gilbert, 1990; Tuomela, 2005).

Individual participants’ actions within a joint action make sense in light of each other and in light of a shared intention to which they are committed. An example is a group playing music together: each individual must be aware of the intentions of the others so that their own actions make sense.

The classical accounts of joint action can explain complex actions involving rational deliberation and planning, and thus presuppose that participants in a joint action are capable of rational deliberation and planning. This is also their limitation: while suitable for explaining complex interactions, they fail to explain simpler cooperative tasks. In particular, they cannot explain joint action performed by non-human animals or young children, since these lack the cognitive skills the accounts require (Michael, 2011).

In recent years there have been proposals of minimalist accounts of joint action that assume either that no shared intention is necessary or that the participants do not require common knowledge of each other’s intentions, other mental states, or the relations among the various participants’ mental states.

Tollefsen (2005) suggests a minimalist account of joint action in which joint attention is sufficient as a substitute for common knowledge of an interconnected structure of intentions.

Joint attention involves two individuals attending to the same object or event, and being mutually aware that the other is also attending to the same object or event.

Another minimalist model of joint action states that the participants do not even need to represent each other as intentional agents. The model proposed by Vesper et al. (2010) relies on three aspects that play a role in joint action (see the sketch after this list):

• Representations: An agent represents its own task and the goal state, but not necessarily any other agents’ task;

• Processes: prediction (of the sensory consequences of one’s own or another agent’s actions) and monitoring (to identify errors);

• Coordination smoothers: exaggeration of one’s movements to make them easier for the other participant to interpret; giving signals, such as nods; and synchronization, which makes partners in a joint action more similar and thus more easily predictable for each other.
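The minimal architecture above can be made concrete with a short sketch. The Python fragment below is purely illustrative: the class and field names (MinimalJointAgent, coordination_smoothers) and the placeholder prediction and monitoring logic are assumptions of this sketch, not part of Vesper et al.’s (2010) proposal.

```python
from dataclasses import dataclass, field


@dataclass
class MinimalJointAgent:
    """Illustrative sketch of a minimal joint-action architecture."""

    own_task: str                 # representation of the agent's own task
    goal_state: str               # representation of the shared goal state
    coordination_smoothers: list = field(default_factory=list)

    def predict(self, action: str) -> str:
        # Prediction process: anticipate the sensory consequences of an
        # action (one's own or the partner's). Placeholder logic only.
        return f"expected outcome of '{action}'"

    def monitor(self, observed: str, expected: str) -> bool:
        # Monitoring process: compare what happened with what was predicted
        # in order to identify errors.
        return observed == expected


# Minimal usage: the agent stores only its own task and the goal state.
agent = MinimalJointAgent(own_task="place the block",
                          goal_state="tower assembled",
                          coordination_smoothers=["exaggerated reach", "nod"])
expected = agent.predict("partner grasps block")
no_error = agent.monitor(observed=expected, expected=expected)
```

The point of the sketch is that the agent does not represent the partner’s mental states: coordination arises from prediction, monitoring and the coordination smoothers alone.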

Even though there are multiple accounts that attempt to explain human-human joint action, none of the above described accounts addresses the potential role of emotions as coordinating factors in joint actions. Michael (2011) aims to fill this gap by showing how emotions, more specifically shared emotions, can facilitate coordination in joint actions.

Before discussing how shared emotions can be used to facilitate coordination in joint action, it is important to define the necessary conditions for a shared emotion. Suppose we have an interaction between two agents A and B; two necessary conditions must be met for an emotion to be considered shared:

• A expresses its affective state verbally or otherwise (facial expressions, posture, . . . );

• B perceives this affective state.

When these two conditions are satisfied we are in the presence of a shared emotion. Michael (2011) discusses in more detail different varieties of shared emotions: (a) emotion detection; (b) emotion/mood contagion; (c) empathy; (d) rapport.

Of the various forms of shared emotion, the most helpful in a human-robot joint action scenario is emotion detection: it occurs when agent B perceives agent A’s emotional expression and, as a consequence, detects A’s emotional state. Emotion detection can facilitate coordination in joint action in three ways:

• Facilitate prediction;

• Facilitate monitoring;

• Serve a signalling function.

Emotion detection can facilitate prediction in a joint action. Imagine that agent A expresses an emotion in response to a certain action performed by agent B; if agent B detects this emotion and agent A is aware of this detection, then agent A can predict that the decisions made by agent B will take its emotional state into account.

Regarding monitoring, a person’s emotional expressions can transmit information about how she appraises her progress toward the goal of her own task, or the group’s progress toward the global goal of a joint action.

Emotion detection can also serve a signalling function: a positive emotional expression such as a smile may signal approval of another participant’s action or proposed action, or the continued presence of rapport within the group.
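To illustrate these three uses together, the sketch below maps a detected emotional expression onto prediction, monitoring and signalling cues. The function name, labels and mapping (coordination_cues, the expression strings) are hypothetical choices made for this sketch; Michael (2011) does not prescribe any particular computational mapping.

```python
def coordination_cues(detected_expression: str) -> dict:
    """Map a detected expression onto the three coordination functions.

    Hypothetical labels and mapping, used only to illustrate the idea.
    """
    positive = {"smile", "nod"}
    negative = {"frown", "confusion"}

    if detected_expression in positive:
        return {
            "prediction": "partner is likely to continue the current plan",
            "monitoring": "progress toward the shared goal appraised as good",
            "signal": "approval of the last (or proposed) action",
        }
    if detected_expression in negative:
        return {
            "prediction": "partner may change or repair the current plan",
            "monitoring": "possible error or obstacle in the joint task",
            "signal": "disapproval or request for a different action",
        }
    # Unknown expression: no coordination information extracted.
    return {"prediction": "no update", "monitoring": "no update", "signal": "none"}


print(coordination_cues("smile")["signal"])
```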

Emotional mechanisms can contribute to fast adaptation (allowing faster or slower reactions), help resolve the choice among multiple conflicting goals, and, through their external manifestations, signal relevant events to others (Cañamero, 2001).

In the context of human-robot joint action, the use of emotions can benefit the team: since the robot can harness more information about its partner (e.g., facial expressions, gesture velocity and body movement), its decisions are expected to take into account not only the actions performed but also the state of the partner, resulting in better decisions. From the human perspective, collaborating with a robot that has emotion recognition abilities could contribute to a less rigid interaction, making it more human-like and fluent.

Autonomous agents can benefit from the inclusion of emotions in their architectures as far as adaptation is concerned (Cañamero, 2001). If the robot is able to interpret the meaning of the user’s facial expression in the context of a joint task, it can select more appropriate behaviours and actions to improve the interaction with the human.
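As a rough illustration of this idea, the fragment below selects the robot’s next behaviour from both the partner’s last action and an interpreted expression. The function and labels (select_behaviour, the action and expression strings) are hypothetical and purely illustrative.

```python
def select_behaviour(partner_action: str, interpreted_expression: str) -> str:
    """Choose the robot's next behaviour from the task and the partner's state.

    Hypothetical sketch: the interpreted expression biases action selection
    instead of being ignored.
    """
    if interpreted_expression == "negative":
        # Apparent disapproval: repair or clarify before proceeding.
        return f"pause and clarify before responding to '{partner_action}'"
    if interpreted_expression == "positive":
        # Apparent approval: carry on with the complementary action.
        return f"perform the complementary action to '{partner_action}'"
    # No reliable reading of the partner's state: fall back to a task-only policy.
    return f"default response to '{partner_action}'"


print(select_behaviour("hands over a part", "positive"))
```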

In the context of this work, the facial expressions displayed by the human might not be actual manifestations of an emotion. If the robot interprets a facial expression as sadness, it does not mean that the human is actually feeling sad; this cannot be measured accurately by the robot’s sensing capabilities, and the human may only be displaying a communicative signal to work with the robot. However, the interpretation that the robot makes of this expression, in the context of the task, will have consequences for its behaviour and decisions.

In order to integrate emotion recognition into the robot architecture in a biologically plausible way, one must first investigate how humans understand emotions.

2.2 Neuro-cognitive mechanisms underlying the