21-07-2012, 01:39 PM
Emotional annotation of text
INTRODUCTION
Emotional annotation of text is a step towards implementing affective computing. Affective Computing is computing that relates to, arises from, or deliberately influences emotion and other affective phenomena. The field was originally named and defined treating affect and emotion essentially synonymously, and there is still no widely agreed-upon definition of either the term "emotion" or "affect" in the literature; however, there is general acceptance that affect is the broader term, and that states such as "interest" are affects, whether or not they are emotions, while states such as "anger" are both an emotion and an affect. Regardless of the resolution of the precise definitions of emotion and affect, research in Affective Computing addresses the broader sense of the two terms and contributes to Artificial Intelligence, Pattern Recognition, Machine Learning, Human-Computer Interaction, Social Robotics, Autonomous Agents, Cognitive and Affective Sciences, Affective Neuroscience, Neuroeconomics, Health-behavior Change, and many other areas where technology is used to detect, recognize, measure, model, simulate, communicate, elicit, handle, or otherwise understand and directly influence emotion and other affective phenomena.
Imagine your robot entering the kitchen as you prepare breakfast for guests. The robot looks happy to see you and greets you with a cheery "Good morning." You mumble something it does not understand. It notices your face, vocal tone, the smoke above the stove, and your slamming of a pot into the sink, and infers that you do not appear to be having a good morning. Immediately, it adjusts its internal state to "subdued," which has the effect of lowering its vocal pitch and amplitude settings, eliminating cheery behavioral displays, and suppressing unnecessary conversation. Suppose you exclaim unprintable curses that are out of character for you, yank your hand from the hot stove, rush to run your fingers under cold water, and mutter "... ruined the sauce." While the robot's speech recognition may not have high confidence that it accurately recognized all of your words, its assessment of your affect and actions indicates a high probability that you are upset, possibly hurt. At this moment it might turn its head with a look of concern, search its empathetic phrases and select "Burn-Ouch ... Are you OK?", and wait to see if you are, before selecting the semantically closest helpful response, "Shall I list sauce recipes?" As it goes about its work helping you, it watches for signs of your affective state changing, positive or negative. It may modify an internal model of you, representing what is likely to elicit displays such as pleasure or displeasure from you, and it may later try to generalize this model to other people, and to the development of common sense about people's affective states. It looks for things it does that are associated with improvements in the positive nature of your state, as well as things that might frustrate or annoy you. As it finds these, it also updates its own internal learning system, indicating which of its behaviours you prefer, so it becomes more effective in how it works with you.
This seminar deals with the various technologies behind emotional annotation of text. Given a text, how can the emotion corresponding to it be determined? The way in which the meaning of a sentence is built from the meaning of its words has long been a subject of study in computational linguistics. No comparable study has been carried out for the way in which the emotional connotations of a sentence are affected by the emotional connotations of its words. Existing approaches to this task most often rely on a simplified representation of the sentence as a bag of words, where all words contribute in equal measure, much in the way information retrieval simplifies the treatment of text. Intuitively, however, certain words can be considered more significant, depending on the role they play in the sentence as given by its syntactic or semantic structure. An important hypothesis is that if this kind of sentence structure were represented computationally in a way that modelled how the emotional contributions of words affect the emotional connotations of the sentence, it would provide the means for capturing these intuitions. A static ontology of word dependencies within a sentence fulfils the requirements for such a representation.
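The contrast drawn above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the system's implementation: the lexicon values and the role weights are invented for the example, and a real system would derive weights from a dependency analysis rather than a hand-written table.

```python
# Toy affective lexicon: word -> evaluation score in [-1, 1] (values invented).
LEXICON = {"ruined": -0.8, "sauce": 0.0, "happy": 0.9, "burn": -0.7}

def bag_of_words_score(words):
    """Bag-of-words approach: every known word contributes in equal measure."""
    known = [LEXICON[w] for w in words if w in LEXICON]
    return sum(known) / len(known) if known else 0.0

def weighted_score(words, weights):
    """Structure-aware variant: words weighted by their (assumed) syntactic role."""
    num = den = 0.0
    for w in words:
        if w in LEXICON:
            num += weights.get(w, 1.0) * LEXICON[w]
            den += weights.get(w, 1.0)
    return num / den if den else 0.0

sentence = ["ruined", "the", "sauce"]
print(bag_of_words_score(sentence))               # -0.4: both known words equal
print(weighted_score(sentence, {"ruined": 2.0}))  # the head verb now dominates
```

In the weighted variant the negative head verb "ruined" pulls the sentence score further down than the neutral noun can compensate, which is the intuition the dependency representation is meant to capture.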
An important challenge in addressing issues of affective computing is having an adequate representation of emotions. Existing approaches vary from identifying a set of basic categories, with a name tag assigned to each one of them, to designing a multidimensional space in terms of primitive elements, or emotional dimensions, such that any particular emotion can be defined as a tuple of values along the different dimensions. If one were to operate computationally with representations of emotions expressed in more than one format, one is faced with the task of converting from one to another. This task is reasonably easy when converting from emotional categories to emotional dimensions: it would suffice to assign a particular tuple of dimension values to each emotional category. Converting in the opposite direction, from values expressed in terms of emotional dimensions to a representation in terms of emotional categories, is not so simple. The problem lies in the fact that, given the subjectivity of emotional perception, the values assigned to a given impression by one person usually deviate slightly from what a different person would have assigned. This suggests that the conversion from emotional dimensions to emotional categories should be carried out with a certain tolerance, so that a region of the space of emotional dimensions is assigned to each emotional category, rather than just a single point.
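The two conversion directions can be sketched as follows. All coordinates and tolerance widths here are illustrative assumptions, not values from the system: the point is only that category-to-dimensions is a table lookup, while dimensions-to-category requires matching against a region.

```python
# Category -> dimensions: a single point (evaluation, activation, power).
# All coordinates below are invented for illustration.
CATEGORY_POINT = {
    "joy":     (0.8, 0.6, 0.4),
    "anger":   (-0.6, 0.8, 0.6),
    "sadness": (-0.7, -0.5, -0.4),
}

# Dimensions -> category: each category owns a tolerance *region*,
# here a box of width 0.5 around its point on every dimension.
CATEGORY_REGION = {
    cat: tuple((v - 0.25, v + 0.25) for v in point)
    for cat, point in CATEGORY_POINT.items()
}

def to_dimensions(category):
    """The easy direction: one tuple of values per category."""
    return CATEGORY_POINT[category]

def to_category(point):
    """Return the first category whose region contains the point."""
    for cat, region in CATEGORY_REGION.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(point, region)):
            return cat
    return None  # the point falls outside every category's region

print(to_dimensions("anger"))        # (-0.6, 0.8, 0.6)
print(to_category((0.7, 0.5, 0.3)))  # 'joy': within joy's tolerance region
```

Note that a slightly different impression, say (0.7, 0.5, 0.3) instead of exactly (0.8, 0.6, 0.4), still maps to "joy", which is precisely the tolerance the paragraph above argues for.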
A separate problem arises from the fact that there are a large number of emotional categories, and the differences and similarities between them are not clear cut. In some cases, it is reasonable to assume that certain emotional categories may be subsumed by others. For example, the emotion anger subsumes the emotions sulking, displeasure and annoyance, which may be seen as different types of anger. This suggests that organizing emotional categories as a taxonomy, in the form of a hierarchy, might be useful in finding correspondences between more specific and more general emotional categories.
In this context, the development of an ontology of emotional categories based on description logics, where each element is defined in terms of a range of values along the space of emotional dimensions, provides a simple and elegant solution.
This seminar describes the development of a system that relies on two ontologies, together with its application as an interface between text input marked up in terms of emotional dimensions and a set of rules for configuring an emotionally enabled voice synthesizer. By reasoning over the word dependency ontology, the emotional connotations of any node in the structure can be computed separately, from the contribution of single words to the contribution of entire sentences, including intervening substructures. By reasoning over the emotion ontology, insertion of new instances of emotional concepts into the ontology results in their automatic classification under the corresponding branch of the hierarchy. The system can then trace the ascendants of the corresponding value in the ontology until a more general concept is found for which specific rules are available for generating an appropriate voice synthesis configuration to express the intended emotional impression.
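The rule-lookup step described above, climbing the hierarchy until a concept with an available synthesizer rule is found, can be sketched as follows. The parent links and the rule table are invented placeholders; in the actual system both would come from the DL ontology and the voice-synthesis rule set.

```python
# Hypothetical fragment of the emotion hierarchy (child -> parent).
PARENT = {"annoyance": "anger", "anger": "negative", "negative": "emotion"}

# Only some of the more general concepts have synthesizer configuration
# rules attached; the settings here are invented examples.
SYNTH_RULES = {
    "anger":   {"pitch": "low", "rate": "fast"},
    "emotion": {"pitch": "neutral", "rate": "normal"},
}

def rule_for(concept):
    """Trace ascendants until a concept with an available rule is found."""
    while concept is not None:
        if concept in SYNTH_RULES:
            return concept, SYNTH_RULES[concept]
        concept = PARENT.get(concept)   # step up to the more general concept
    return None, None                   # reached the top without finding a rule

print(rule_for("annoyance"))  # no rule of its own, so it inherits anger's
```

Here "annoyance" has no rule of its own, so the walk stops at its ascendant "anger", mirroring how a specific classified instance falls back on the rules of a more general emotional category.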
PROBLEM DOMAIN AND THE TECHNOLOGIES
Four basic topics fall under the problem domain and technologies: the computational representation of emotions, the Semantic Web technologies that have been employed, the natural language processing technique used to obtain the syntactic structure of sentences (dependency analysis), and a review of the system chosen as the domain of application.
2.1 Computational representations of emotions
Emotions are not easy to define, because many factors contribute to them. A good definition of emotion must take into consideration the conscious feeling of emotion, the processes that take place in the nervous system and in the brain, and the expressive models of emotion. Emotions take place when something unexpected happens and the so-called "emotional effects" begin to take control.
2.1.1 Classification of emotions
Many of the terms used to describe emotions and their effects are difficult to tell apart from one another, as they are usually not well defined. This is due to the fact that the abstract concepts and the feelings associated with such concepts are very difficult to express with words. For this reason, several methods have been proposed for describing the characteristics of emotions; two of the most common are emotional categories and emotional dimensions, which represent the essential aspects of emotional concepts.
Emotional categories: The most common method for describing emotions is the use of emotional words or affective labels. Different languages provide assorted labels of varying degrees of expressiveness for the description of emotional states. There are significant differences between languages in terms of the granularity with which these labels describe particular areas of emotional experience. Even within a given language, some areas of emotional experience have a higher density of labels than others. This diversity presents an additional difficulty. Various methods have been proposed to reduce the number of labels used to identify emotions: basic emotions, superordinate emotional categories, and essential everyday emotion terms…
Emotional dimensions: Emotional dimensions represent the essential aspects of emotional concepts. There are three basic dimensions:
– Evaluation: Represents how positive or negative an emotion is. For example, on a scale for the evaluation dimension, at one extreme we have emotions such as happy, satisfied, hopeful; the other end of the scale is for emotions such as unhappy, unsatisfied, despairing.
– Activation: Represents an active/passive scale for emotions. At one extreme of the activation are emotions such as excited, aroused…and at the other end of this scale are emotions such as calm, relaxed….
– Power: Represents the control exerted by the emotion. At one end of the scale we have emotions characterized as completely controlled, such as care for, submissive…and at the opposite end of this scale we have emotions such as dominant, autonomous…
This method is very useful because it provides a way of measuring the similarity between emotional states. Another important property is that, by shifting the representational weight away from the actual labels employed, it allows a relative arbitrariness in naming the different dimensions.
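The similarity measurement mentioned above can be sketched as a distance in the three-dimensional (evaluation, activation, power) space. The coordinates below are invented for illustration, and Euclidean distance is only one reasonable choice of metric.

```python
import math

# Invented positions in (evaluation, activation, power) space.
EMOTIONS = {
    "happy":     (0.9, 0.4, 0.3),
    "satisfied": (0.7, 0.1, 0.2),
    "unhappy":   (-0.8, -0.2, -0.3),
}

def distance(a, b):
    """Euclidean distance between two emotions in dimension space."""
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(EMOTIONS[a], EMOTIONS[b])))

# 'happy' is closer to 'satisfied' than to 'unhappy':
print(distance("happy", "satisfied") < distance("happy", "unhappy"))  # True
```

Because the comparison is purely geometric, the result does not depend on what the dimensions are called, which is the "relative arbitrariness" of naming noted above.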
2.1.2 Structure of emotions
There are several approaches in the literature for determining which emotions are basic, or which emotions are more general than others. There is general agreement that some full-blown emotions are more basic than others, and the number of basic emotions proposed is usually small. Some emotion categories have been proposed as more fundamental than others on the grounds that they include the others. Scherer and Ortony suggest that an emotion A is more fundamental than another emotion B if the set of evaluation components of emotion A is a subset of the evaluation components of emotion B. As an example, five prototypes have been proposed as underlying all emotional categories: anger, love, joy, fear and sadness. Joy, for example, would be subdivided into pride, contentment, and zest.
Many psychologists have claimed that certain emotions are more basic than others. Plutchik postulates that there is a small number of basic, primary, or prototype emotions (anger, anticipation, disgust, joy, fear, sadness and surprise). All other emotions are mixed or derivative states; that is, they occur as combinations, mixtures, or compounds of the primary emotions. Plutchik states that all emotions vary in their degree of similarity to one another and that each emotion can exist in varying degrees of intensity or levels of arousal. Ekman has focused on a set of six basic emotions that have associated facial expressions: anger, disgust, fear, joy, sadness and surprise. These emotions are distinguished, among other properties, by the facial expression characteristic of each one. Izard determines that the basic emotions are anger, contempt, disgust, distress, fear, guilt, interest, joy, shame and surprise. The OCC Model has established itself as the standard model for emotional synthesis. It presents 22 emotional categories arranged in pairs: pride–shame, admiration–reproach, happy-for–resentment, gloating–pity, hope–fear, joy–distress, satisfaction–fears-confirmed, relief–disappointment, gratification–remorse, gratitude–anger and love–hate. The OCC Model considers these categories to be based on valenced reactions to situations construed as goal-relevant events, actions of agents, and attractive or unattractive objects. Parrott presents a deeper list of emotions, categorized into a short tree structure with three levels: primary, secondary and tertiary emotions. As primary emotions, Parrott presents love, joy, surprise, anger, sadness, and fear.
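The three-level tree attributed to Parrott above can be represented as a nested mapping. Only an abbreviated, illustrative fragment is shown here; the real classification contains far more secondary and tertiary entries.

```python
# Fragment of a Parrott-style tree: primary -> secondary -> tertiary.
# Entries are an abbreviated illustration, not the full published list.
PARROTT = {
    "joy": {
        "contentment": ["pleasure"],
        "pride": ["triumph"],
    },
    "anger": {
        "irritation": ["annoyance", "grouchiness"],
        "rage": ["fury", "wrath"],
    },
}

def tertiary_of(primary):
    """Flatten all tertiary emotions found under one primary emotion."""
    return [t for terts in PARROTT[primary].values() for t in terts]

print(tertiary_of("anger"))  # ['annoyance', 'grouchiness', 'fury', 'wrath']
```

A structure like this makes the later hierarchy reasoning straightforward: a specific tertiary emotion can always be traced back through its secondary emotion to exactly one primary emotion.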
2.2 Semantic Web technologies
The Semantic Web is being developed with the intention of providing a global framework for describing data, its properties and relationships in a standard fashion. Many developers and researchers on knowledge systems are taking the approach of using Semantic Web technologies to obtain more interoperability and reusability with existing software and to take advantage of the strong trend of development that these technologies are experiencing nowadays.
2.2.1 Web Ontology Language (OWL)
The Semantic Web relies heavily on ontologies. Concretely, ontologies based on the Description Logics paradigm include definitions of concepts (OWL classes), roles (OWL properties) and individuals. The most common language for formalizing Semantic Web ontologies is OWL (Web Ontology Language), a proposal of the W3C. The goal of this standard is to formalize the semantics that was created ad hoc in older frame systems and semantic networks. OWL has three increasingly expressive sub-languages: OWL Lite, OWL DL, and OWL Full.
OWL Lite is the simplest subset of OWL, specially designed to provide a quick migration path for other taxonomical structures.
OWL DL is the subset of OWL designed for applications that need maximum expressiveness without losing computational completeness and decidability. It is based on Description Logics, a decidable fragment of first-order logic, in which concepts, roles, individuals, and the axioms that relate them (using universal and existential restrictions, negation, etc.) are defined. Reasoning entailments may be based on a single document or on multiple distributed documents combined using the OWL import mechanism. The reasoning capabilities of OWL DL rely on the good computational properties of DLs.
OWL Full removes some significant restrictions of OWL DL, making it a more powerful language for representing complex statements, but less useful for reasoning with them because of its computational properties.