So, quite a long time ago really (back in January, or maybe towards the end of last year), I was thinking about examples of machines that could interact with each other, and with people, without needing to look like humans…

and I thought, Luxo Lamp, © Pixar Animation Studios.

Then recently I found out that Guy Hoffman at the MIT Media Lab has created a real-life version called AUR! Well, it’s sort of similar. OK, it doesn’t hop around, but it does interact with humans, and could be used as part of an interactive office environment. A promotional video on the MIT site shows how the lamp might help someone at work. The video has also been put on YouTube:

[kml_flashembed movie="http://www.youtube.com/v/4oCVZTrWrKw" width="425" height="350" wmode="transparent" /]

You can see here that AUR has been designed to attend to the human’s point of interest, and moves to light the part of the workspace where their attention is directed. In the video this is emphasised by making the office environment pretty dark. Some have used this as an excuse to question AUR’s usefulness, suggesting that the invention of the light switch has made the research redundant (see the comments on the video), but I think AUR is an interesting development.

Of course, I am mostly interested not because I particularly want an interactive work environment, but because AUR is a great example of a non-humanoid robot that draws out a variety of responses from humans during interactions. Hoffman’s research has included experiments in which humans and AUR work in partnership to complete a repetitive task, learning from one another as they go, and questionnaires have been used to evaluate the humans’ responses to the robot.

There are many things about this robot that may help me to focus some of my ideas about human-robot interaction:

  • the importance of fluency, rhythm and joint action – the idea that turn-taking is all very well, but not that natural in many situations (I’ve tried to sketch this contrast in the toy example after this list)
  • the combined use of bottom-up and top-down approaches to perception and perceptual analysis
  • working with anticipation and perceptual simulation
  • looking for and acting on patterns of perception between different modalities – searching for meaning through a more holistic view of perception
  • simplifying the perceptual space – looking for the most salient messages and ignoring the others
  • the effect of using a non-human form – although it was disappointing, in some ways, to see how this lowered expectations enough to skew the results of the user experiments. The human side of the team was so impressed that the lamp could take voice commands and follow hand signals that it was rated highly for intelligence and commitment even when not programmed to act fluently (i.e. even when not using anticipation and perceptual simulation)
  • while non-humanoid, this robot still elicits anthropomorphisation by humans
  • the fact that the robot learned with the human led the human to feel that the lamp was somehow like them
  • humans working with the fluent robot were self-deprecating: they spoke about their own mistakes during the task, and some felt that the robot was the more important partner in the team
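
To make the first of those points a bit more concrete for myself, here is a toy sketch (entirely my own, not Hoffman’s code) of the difference between a lamp that waits its turn and one that anticipates. The task locations, the class names and the simple frequency-counting “prediction” are all made up for illustration.

```python
# A toy sketch of my own (not Hoffman's implementation): contrasting strict
# turn-taking with anticipation in a repetitive joint task. The locations,
# class names and frequency-counting "prediction" are invented for illustration.
from collections import defaultdict

TASK_SEQUENCE = ["desk", "shelf", "desk", "whiteboard"]  # one loop of the task


class ReactiveLamp:
    """Strict turn-taking: waits for an explicit request, then moves."""

    def next_target(self, request):
        return request  # acts only once the human has asked


class AnticipatoryLamp:
    """Counts observed transitions and pre-positions at the likely next spot."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, request):
        if self.previous is not None:
            self.transitions[self.previous][request] += 1
        self.previous = request

    def next_target(self, request=None):
        if request is not None:  # an explicit request always wins
            self.observe(request)
            return request
        learned = self.transitions.get(self.previous, {})
        if learned:
            # anticipate: move to the most frequently observed successor
            return max(learned, key=learned.get)
        return self.previous  # nothing learned yet, so stay put


if __name__ == "__main__":
    lamp = AnticipatoryLamp()
    for cycle in range(3):
        for spot in TASK_SEQUENCE:
            guess = lamp.next_target()       # where the lamp goes unprompted
            actual = lamp.next_target(spot)  # the human's actual request
            print(f"cycle {cycle}: anticipated {guess!r}, asked for {actual!r}")
```

After a couple of repetitions the anticipatory version starts pre-positioning itself before it is asked, which is roughly the quality that makes the real robot feel fluent rather than merely obedient.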

This project highlights the idea that the way a robot moves is at least as important as its form, and possibly more so, in supporting human-robot interactions.

In his thesis defense, Hoffman mentions that when a robot (in this case in a computer simulation) and a human are working well together (with the robot in its “fluent” state), it is like watching a dance. This makes me think of Grace State Machines (Bill Vorn), where a robot and a human dance as a performance piece, and the link seems all the more appropriate because AUR has also appeared in a play with human actors (although in that role AUR was not acting autonomously).

Hoffman is strongly drawn to creating non-humanoid robots and, I think, would prefer them to be anthropomorphised as little as possible by humans. The idea that using other forms enables a more creative process certainly makes sense to me, although I would not necessarily want the robots to look like existing objects. It might be harder to come up with a novel design, but in some ways that is the way I’d like to see robotics go, particularly for robots destined to be more than partners in working relationships.

However, making familiar objects autonomous does open up many possibilities, and another good example is the Fish-Bird project, in which autonomous machines were built in wheelchair form. In this case it is particularly important to consider the compromises made for the initial implementation, where the writing arms the artist originally specified were replaced with miniature printers. The characters of Fish and Bird were still created; the practical design constraint could be overcome by compromise because the final form of the robots was not completely fixed. Hoffman argues that the aim of building a humanoid robot removes this freedom by imposing a final form and behaviour that cannot be compromised: the robot will always be “evaluated with respect to the human original” (Hoffman, Thesis 2007).

Now, I haven’t really got to grips with this yet, but what I want to do next is to consider these human-robot interactions in more depth. I would like to link this with the ideas I already have about the encounter between self and other in the work of Emmanuel Levinas, and also to consider a theory I have just come across that uses Levinas to open up a new way of thinking about communication.