Acquisition of Linguistic Negation for a Humanoid

Recent research on language acquisition in (developmental) robotics and symbol grounding has focused on the acquisition of object labels (typically nouns), words for concrete actions such as "push", "pull", or "put" (typically verbs), and words for object properties such as colour and size. Systems of this kind can recognize simple commands like "push the red block" and execute the corresponding action, or describe a scene in which such an action occurred. Negation has so far not been within the scope of these systems; that is, there has been no grounding mechanism to give meaning to a word as simple as "no".
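To make concrete what "grounding" means here, consider the toy sketch below, which resolves a command against a perceived scene. All names and the scene format are invented for illustration and do not describe any particular published system; it merely shows why a word like "no" falls outside this scheme, having no referent in the scene.

```python
# Toy sketch of how existing grounding systems map command words onto
# percepts and actions. Everything here (names, scene format) is invented
# for illustration and does not describe any particular published system.

# Property and object words are grounded as predicates over perceived objects.
WORD_GROUNDINGS = {
    "red":   lambda obj: obj["colour"] == "red",
    "green": lambda obj: obj["colour"] == "green",
    "block": lambda obj: obj["shape"] == "block",
    "ball":  lambda obj: obj["shape"] == "ball",
}

# Action words are grounded as names of motor routines.
ACTION_WORDS = {"push", "pull", "put"}

def interpret(command, scene):
    """Resolve a command like 'push the red block' against a perceived scene."""
    words = command.lower().split()
    action = next((w for w in words if w in ACTION_WORDS), None)
    candidates = [
        obj for obj in scene
        if all(pred(obj) for w, pred in WORD_GROUNDINGS.items() if w in words)
    ]
    return action, candidates

scene = [{"colour": "red", "shape": "block"},
         {"colour": "green", "shape": "ball"}]
print(interpret("push the red block", scene))
# -> ('push', [{'colour': 'red', 'shape': 'block'}])
# A word like "no" has no referent in the scene, so nothing in this
# scheme can ground it.
```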
For computer scientists and linguists, who typically attend logic courses on their way to their respective degrees, the natural way to think about negation is as a logical operator that negates a proposition. During my bachelor's and master's degrees in computer science I attended quite a few courses in logic and learned to think in these terms about formal as well as natural languages. It therefore comes as no surprise that existing symbol grounding systems focus on grounding the elements of propositions, such as nouns, verbs, and adjectives, but do not take into account the communicative functions of the utterances from which these propositions are taken or constructed.
In the literature on early child language development we find an emphasis on the communicative functions of single-word utterances. This is also the case for accounts of the development of negation, which typically list rejective "no"s as the first type of negation used by young children.
In order to tackle the problem of enabling a robot to acquire and use negation words and their meanings, I therefore decided to begin where a child's language typically also begins: with a simple "no". In other words, I started with the acquisition of rejective utterances as a first target.



Early Negation and its Link to Affect

Reading accounts of the development of negation in early child language made it clear that the affective states of toddlers play a very important role when it comes to the meaning of the different observed types of negation. Curiously, none of the publications on existing symbol grounding architectures ever mentioned any links to affective/emotional/motivational systems. Consequently, I decided to extend an existing language acquisition and symbol grounding architecture with a simple motivational or affective module.
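As a first approximation, such a module can be thought of as assigning a valence to objects and coupling negative valence to rejective behaviour. The sketch below is a minimal illustration of this idea, not the actual module; the object labels and the valence scale are made up.

```python
# Minimal sketch of a motivation/affect module: the robot assigns a valence
# to each object, and negative valence drives rejective negation. The object
# labels and numbers are invented for illustration only.

LIKED = {"ball"}       # hypothetical set of liked objects
DISLIKED = {"box"}     # hypothetical set of disliked objects

def valence(obj_label):
    """Map an object label to a scalar affective valence in [-1, 1]."""
    if obj_label in LIKED:
        return 1.0
    if obj_label in DISLIKED:
        return -1.0
    return 0.0

def respond(obj_label):
    """Couple affect to (proto-)linguistic behaviour."""
    v = valence(obj_label)
    if v < 0:
        return "no!"             # rejective negation, grounded in negative affect
    if v > 0:
        return "yes, " + obj_label
    return obj_label             # neutral: just name the object

print(respond("box"))   # -> "no!"
```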
Note that I use the adjectives "motivational" and "affective" synonymously, as to my knowledge there is no unified theory of emotions, affect, and/or motivation. What I mean when I use these words is that there is some process in humans that makes us like certain things and dislike others, or that makes us want to have certain things and reject others. It is not clear to me whether "liking something" is linked rather to affect whereas "wanting something" is linked rather to volition, whether affect, motivation, and volition stand in some causal relationship to each other, whether there is a real cognitive distinction between the three, or whether we are reifying states of affairs when insisting on such a clear distinction. (If you know of a well-founded theory that elucidates the relation between these three terms and with which most psychologists agree, drop me an email.) Read more on the basic considerations in my >ECAL paper. For a more philosophy-leaning 'treatise', which gives a semiotic perspective on the problem, see my contribution to >ECAP.


A Cognitive Architecture for Language Acquisition

Fortunately, my second supervisor >Dr. Joe Saunders had already developed a language acquisition architecture that enables humanoids to acquire object words from unconstrained dialogue with naïve participants. I therefore decided to extend his architecture on a conceptual level with a motivation module that makes the humanoid >iCub want/like certain simple objects and dislike others. On a software level I decided to develop my own system in accordance with the >software development guidelines for this particular platform. (My system is open source and available via SVN >here, but be warned that it is still in beta and some modules still lack proper documentation.) A big part of this architecture consists of a behavioural system that enables the iCub to act in ways that seemed beneficial for the HRI experiments described in the next section.
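The sketch below illustrates the intended data flow between perception, motivation, and behaviour. The real system follows the iCub guidelines, with separate modules communicating over middleware ports, so this plain-Python version, with invented class and method names, only shows the conceptual wiring.

```python
# Rough sketch of the conceptual module wiring. The real system consists of
# separate modules talking over middleware ports, per the iCub software
# guidelines; all class and method names here are invented for illustration.

class Perception:
    def current_object(self):
        return "red block"       # stub: would come from the vision system

class Motivation:
    def __init__(self, liked, disliked):
        self.liked, self.disliked = liked, disliked
    def valence(self, label):
        # Same valence idea as in the earlier sketch.
        return 1.0 if label in self.liked else -1.0 if label in self.disliked else 0.0

class Behaviour:
    def act(self, label, valence):
        if valence < 0:
            return "turn head away, say 'no'"   # rejective behaviour
        if valence > 0:
            return "reach towards " + label
        return "look at " + label

perception = Perception()
motivation = Motivation(liked={"ball"}, disliked={"red block"})
behaviour = Behaviour()

label = perception.current_object()
print(behaviour.act(label, motivation.valence(label)))
# -> "turn head away, say 'no'"
```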


Human-Robot Interaction for Language Acquisition

I conducted Human-Robot Interaction (HRI) experiments with 20 naïve participants in order to test the above architecture. All experiments were single-blind, that is, participants did not know what the experiments were really about, so as not to bias the way they spoke to the robot. The results of these experiments were very encouraging and can be found in my thesis (not public yet due to press embargo) and in a paper which is about to be published (the link will be given here upon publication).