You are currently browsing the tag archive for the 'Human-Robot Interaction' tag.
Baxter is a new workplace robot developed by a company called Rethink Robotics, run by Rodney Brooks (a very well-known robot designer who also founded iRobot, the company famous for the Roomba robotic vacuum cleaner). Baxter is designed to work alongside people in manufacturing environments. It is human-like in form, consisting of a torso and two arms, together with a screen “face” (well, one consisting of eyes and eyebrows at least).
Most interesting to me is the way in which people communicate with Baxter, using touch first to get the robot’s attention and then to move the robot’s arms into particular positions. This reminds me of the touch of a yoga teacher, for example, in helping to position people into a particular pose. Baxter also has a graphical interface, displayed on the same screen that more often shows the robot’s eyes, which is controlled with buttons on each arm. In order to “program” Baxter to complete a task, a person can therefore show the robot what to do by moving its arms into position and choosing the appropriate action, e.g. to pick up or place down a part, from the user interface. Importantly, it is the way in which Baxter learns a movement from being placed into position that seems to separate it from various other manufacturing robots currently in production.
As Rodney Brooks explains in the interview with IEEE Spectrum writers Erico Guizzo and Evan Ackerman, “[w]hen you hold the cuff, the robot goes into gravity-compensation, zero-force mode” such that “the arm is essentially floating”. This makes the robot easy to position, and as Mike Bugda notes in the video below, Baxter is therefore understood to be “very compliant to the user”. Although “compliant” is used here in part to emphasise that the robot is flexible and therefore able to deal with “an unstructured environment” (Matthew Williamson, in conversation with Guizzo and Ackerman), there is also a sense in which this robot is being placed as a servant, or possibly even a slave, by virtue of its immediate compliance to a human’s touch. This design decision in itself is probably a pragmatic response to making the robot easy to program in the workplace, but from my perspective it raises some issues, since this is clearly a robot designed to be read as human-like, yet also as a compliant servant/slave.
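To make Brooks’s description a little more concrete, here is a minimal sketch of what gravity-compensation control looks like for a simple two-link planar arm. All of the parameter values and the function name are my own invented illustrations, not anything from Baxter’s actual design: the idea is just that if the motors command exactly the torques that gravity imposes, the arm neither rises nor falls, so a person’s touch can move it freely.

```python
import math

# Hypothetical 2-link planar arm parameters (masses in kg, lengths in m);
# none of these numbers come from Baxter's real specifications.
M1, M2 = 2.0, 1.5        # link masses
L1 = 0.4                 # length of link 1
LC1, LC2 = 0.2, 0.25     # distances to each link's centre of mass
G = 9.81                 # gravitational acceleration

def gravity_compensation(q1, q2):
    """Joint torques that exactly cancel gravity for a 2-link planar arm.

    Commanding these torques (and nothing else) leaves the arm
    "floating": any external push, such as a person guiding the cuff,
    moves it freely, because gravity is already accounted for.
    """
    tau1 = ((M1 * LC1 + M2 * L1) * G * math.cos(q1)
            + M2 * LC2 * G * math.cos(q1 + q2))
    tau2 = M2 * LC2 * G * math.cos(q1 + q2)
    return tau1, tau2

# With the arm held straight out horizontally (q1 = q2 = 0), both
# torques are positive, holding the links up against gravity; with the
# arm pointing straight up (q1 = pi/2), the required torques vanish.
print(gravity_compensation(0.0, 0.0))
print(gravity_compensation(math.pi / 2, 0.0))
```

A real controller would of course also compensate for friction and use a full dynamic model, but this is the essence of why the arm feels weightless when you guide it.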
The idea of Baxter as human-like is reinforced when Bugda (in the video below) explains that Baxter “exhibits a certain level of interaction, collaborative friendliness that makes it easy to understand and introduce into a manufacturing environment”. For example, when you touch its arm the robot stops what it is doing and looks towards you, acknowledging your presence, and once its instruction is complete the robot acknowledges that it understands with a nod of its “head”. Guizzo and Ackerman take this idea further when they suggest that while at work Baxter’s “animated face” displays “an expression of quiet concentration”, while a touch on the arm causes Baxter to “stop whatever it’s doing and look at you with the calm, confident eyes”.
Although this video is simply a demonstration, Baxter has obviously had some limited field testing, and Brooks (again in conversation with Guizzo and Ackerman) notes that after people have worked with the robot for a while “something interesting happens … People sort of personify the robot. They say, ‘It’s my buddy!’”. It is at this point that the perception of the robot as a friend is reinforced.
This type of reading of Baxter as a “buddy” or friend might be assumed to be closely linked to the robot’s human-like form. However, my research considering the ALAVs, the Fish-Bird project and also Guy Hoffman’s robotic desk lamp, AUR, along with anecdotal evidence from friends who are owners of Roomba robotic vacuum cleaners, indicates that robots of pretty much any form encourage this kind of personification. In addition, for Baxter, I suspect that the use of touch between human and robot might also serve to support the perception of this robot as a social subject, and eventually a friend. The importance of touch in working with Baxter would seem to set this robot apart from others that I have considered in my research to date. This might also suggest that Baxter could be made more machine-like (and less human-like) in a move that would reduce my discomfort in placing humanoid robots as servants/slaves, as I have suggested occurs with this robot worker.
I couldn’t overlook this new robot, erm, pillow-bear.
While its communication skills might seem limited to pawing at your head, this robot listens for your snores while its cute companion monitors your blood oxygen. Should your sleep become less than silent, or your oxygen levels drop alarmingly, the pillow teddy will wake you with a gentle paw on your head (although sound sleepers might require a more energetic thump, I suppose).
It’s a little difficult to see from the video just how effective the tickling/thumping action might be, and it’s also a pity that the level of background noise makes it hard to relate the bear’s reaction to the snore of the presumably willing test subject (or more likely the creator of the robot).
At last, an entertaining robot with which to enter the new year. I particularly like the interaction with the cat, and also the squeaks of surprise that result even when one knows that it’s going to jump!
Yes, I want one of these too, and I haven’t even had time to construct my other robot kits at home.
And, yes, while I acknowledge that there are differences between the two robots, I think that they also share many elements in common, not least cuteness!
But of course, Keepon is really famous for his dancing…
I may have shared that once already, but once just isn’t enough for that video.
I’m really impressed by how expressive both Keepon and Tofu are, in particular by their ability to show where their “attention” lies. These robots are seemingly quite simple (although I’m sure the underlying technology is still pretty complex), but they show considerable possibilities as expressive and communicative robots.
No, I don’t mean that these are robots under development, I mean that I’m hoping to build my own Blubber Bot (or two) by around the time I finish my thesis next year.
Ok, the first thing to clarify is, no this will not stand in the way of my completion. However, maybe it does indicate that I’m more positive that I am going to complete, at last, in the first quarter of next year. I mean, I’m already planning the party, so it must be true, mustn’t it??!!
Anyway, I thought that I should construct some guests of honour for my robot themed party, hence my decision to track down at least one, or maybe two, Blubber Bot kits. The Blubber Bots are a “transitional species” of robot closely related to the ALAV (Autonomous Light Air Vessel).
I’m just hoping that my technical skills are going to be up to the task. They should be great guests, and a nice talking point as they “graze the landscape in search of light and cellphone signals”.
UPDATE: The purchase has been made, now I just hope that I’m capable of putting them together (and that is assuming that the kits arrive safely and with no damage)!
“Regarding appearance, as other studies already showed, the humanoid option is not a good one”
Ok, so this is a study from 2008, which immediately makes me think I should have found it ages ago, but BotJunkie only got hold of it this year, so maybe I’m not that far behind the times. In common with Evan Ackerman, I have been spending a lot of time (a LOT of time, some would say too long) “wondering why robotics researchers persist in designing humanoid robots specifically for domestic applications…” and, of course, my answer is becoming tinged with communication theory.
I would therefore argue that one of the drivers behind the design of humanoid robots is the expectation, which was identified in this research paper, that robots need to communicate in humanlike ways. In terms of the effect on robot design, this assumption encourages the development of robots with expressive faces, although these faces can range from being very humanlike (as seen in Jules, erm, and Eva, although you may have some misgivings there) to more of a mechanical cartoon (as seen in Kismet, pictured below).
Maybe this design decision is popular because developing robots that can understand and produce human language really well is still pretty hard to achieve, and the provision of facial expressions offers an alternative way to provide more humanlike communication. Maybe even with more complete language capabilities roboticists will still think expressive faces add value.
From a personal perspective I’m really not drawn in by such faces on robots. In many ways I find them pretty off-putting, and I’d be really interested to see more designs that work with movement and sound (and possibly even light) to provide a more “machinelike” communicative feedback that is nonetheless understandable to humans.
… or possibly a fish?
So, I’m all for experimenting with novel locomotion for robots, and often it does seem that nature provides interesting templates to help designers with this type of problem. Interestingly the inspiration for this robot is not a snake, but rather the sandfish lizard, although it is the way that the lizard tucks its legs in to “swim” through sand using snake-like undulations that caught the attention of the designers.
However, I am rather worried about the idea that this robot could be used to “help find people trapped in the loose debris resulting from an earthquake”. In the main I’m wondering how this robot might be expected to communicate in order to alleviate the panic it might cause in any survivors it found. Would it help if it was talking or even singing a song as it wriggled towards you? I’m not sure, but it might be better than it appearing silently alongside your trapped body.
Here is the video from the New Scientist report:
This is, at least from my point of view, an important development for robotics reported in the New Scientist.
Programmers have now managed to write “sentiment-analysing” software trained to recognise sarcasm by collating a bank of comments judged by human readers to contain sarcastic content. For some reason I find it amusing that the comments were taken from Amazon.com product reviews, as well as from Twitter.
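For anyone curious what “training on a bank of human-labelled comments” amounts to in practice, here is a toy sketch of the general technique (a naive Bayes text classifier). The training sentences and labels below are invented for illustration and have nothing to do with the actual dataset or software described in the report; a real system would train on thousands of labelled examples and use richer features than bare word counts.

```python
from collections import Counter
import math
import re

# Toy training data standing in for a human-labelled comment bank
# (1 = judged sarcastic, 0 = not); these sentences are invented.
TRAIN = [
    ("oh great another update that breaks everything", 1),
    ("wow what a surprise it failed again", 1),
    ("sure because that worked so well last time", 1),
    ("this product works well and arrived quickly", 0),
    ("the battery life is excellent", 0),
    ("setup was easy and support was helpful", 0),
]

def tokens(text):
    """Lowercase a string and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Minimal multinomial naive Bayes classifier over word counts."""

    def __init__(self, data):
        self.counts = {0: Counter(), 1: Counter()}  # word counts per class
        self.docs = Counter()                       # documents per class
        for text, label in data:
            self.docs[label] += 1
            self.counts[label].update(tokens(text))
        self.vocab = set(self.counts[0]) | set(self.counts[1])

    def score(self, text, label):
        # log prior plus add-one-smoothed log likelihood of each token
        total = sum(self.counts[label].values()) + len(self.vocab)
        s = math.log(self.docs[label] / sum(self.docs.values()))
        for t in tokens(text):
            s += math.log((self.counts[label][t] + 1) / total)
        return s

    def predict(self, text):
        return 1 if self.score(text, 1) > self.score(text, 0) else 0

clf = NaiveBayes(TRAIN)
print(clf.predict("oh great it failed again"))        # → 1 (sarcastic)
print(clf.predict("the battery life is excellent"))   # → 0 (not sarcastic)
```

The point of the sketch is simply that “recognising sarcasm” here means statistical pattern-matching against human judgements, not any deeper grasp of irony.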
I would assume that adding this ability to analyse a statement for sarcastic content to the existing ability of some robots to read tone of voice might bring us a step closer to building robot companions that can take part in life more fully by appreciating all of the joys of human(like) existence. After all, without an understanding of sarcasm, how could we expect robots to understand comedy? “You know in another life, maybe we could have been brothers…” (Black Books)