Baxter is a new workplace robot developed by Rethink Robotics, a company run by Rodney Brooks (a very well-known robot designer, who also founded iRobot, the company famous for the Roomba robotic vacuum cleaner). Baxter is designed to work next to people in manufacturing environments and is human-like in form, consisting of a torso and two arms, together with a screen “face” (well, one consisting of eyes and eyebrows at least).
Most interesting to me is the way in which people communicate with Baxter, using touch first to get the robot’s attention and then to move the robot’s arms into particular positions. This reminds me of the touch of a yoga teacher, for example, in helping to position people into a particular pose. Baxter also has a graphical interface, displayed on the same screen that more often shows the robot’s eyes, which is controlled with buttons on each arm. To “program” Baxter to complete a task, a person can therefore show the robot what to do by moving its arms into position and choosing the appropriate action, e.g. to pick up or place down a part, from the user interface. Importantly, it is the way in which Baxter learns a movement from being placed into position that seems to separate it from various other manufacturing robots currently in production.
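As a toy illustration of this kind of “programming by demonstration” (a sketch of the general idea only; the poses, action names and helper functions here are my own inventions, not Rethink’s interface):

```python
from dataclasses import dataclass

@dataclass
class Step:
    joint_angles: tuple  # arm pose captured while the user positions the arm
    action: str          # e.g. "pick" or "place", chosen from the on-arm menu

def record_step(program, joint_angles, action):
    """Store one demonstrated pose together with the action to perform there."""
    program.append(Step(joint_angles, action))

def replay(program):
    """Turn the demonstrated steps back into a sequence of robot commands."""
    commands = []
    for step in program:
        commands.append(("move", step.joint_angles))
        commands.append((step.action,))
    return commands
```

So a person might record a “pick” pose over the parts bin and a “place” pose over the conveyor, and the robot would then simply repeat that move-pick-move-place cycle.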
As Rodney Brooks explains in the interview with IEEE Spectrum writers Erico Guizzo and Evan Ackerman, “[w]hen you hold the cuff, the robot goes into gravity-compensation, zero-force mode” such that “the arm is essentially floating”. This makes the robot easy to position, and as Mike Bugda notes in the video below, Baxter is therefore understood to be “very compliant to the user”. Although “compliant” is used here in part to emphasise that the robot is flexible and therefore able to deal with “an unstructured environment” (Matthew Williamson, in conversation with Guizzo and Ackerman), there is also a sense in which this robot is being positioned as a servant, or possibly even a slave, by virtue of its immediate compliance to a human’s touch. This design decision is probably a pragmatic response to the need to make the robot easy to program in the workplace, but from my perspective it raises some issues, since this is clearly a robot designed to be read both as human-like and as a compliant servant/slave.
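For anyone curious what “gravity-compensation, zero-force mode” means in practice, here is a rough sketch for a planar two-link arm (a toy model with made-up masses and lengths, not Baxter’s actual controller): if the motors output exactly the torque that gravity exerts on the links, the net torque is zero and the arm “floats”, moving freely under a person’s touch.

```python
import math

def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.4, l2=0.3, g=9.81):
    """Gravity torques at the two joints of a planar two-link arm.

    Angles are in radians, measured from the horizontal; each link's mass
    is assumed to be concentrated at its midpoint.
    """
    # Torque about joint 2 from link 2's weight at its midpoint.
    tau2 = m2 * g * (l2 / 2) * math.cos(q1 + q2)
    # Torque about joint 1 from link 1's weight plus all of link 2's weight.
    tau1 = (m1 * g * (l1 / 2) * math.cos(q1)
            + m2 * g * (l1 * math.cos(q1) + (l2 / 2) * math.cos(q1 + q2)))
    return tau1, tau2

def zero_force_command(q1, q2):
    """In zero-force mode the motors supply exactly the gravity torques,
    so the arm neither rises nor falls: it floats wherever it is placed."""
    return gravity_torques(q1, q2)
```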
The idea of Baxter as human-like is reinforced when Bugda (in the video below) explains that Baxter “exhibits a certain level of interaction, collaborative friendliness that makes it easy to understand and introduce into a manufacturing environment”. For example, when you touch its arm the robot stops what it is doing and looks towards you, acknowledging your presence, and once its instruction is complete the robot acknowledges that it understands with a nod of its “head”. Guizzo and Ackerman take this idea further when they suggest that while at work Baxter’s “animated face” displays “an expression of quiet concentration”, while a touch on the arm causes Baxter to “stop whatever it’s doing and look at you with the calm, confident eyes”.
Although this video is simply a demonstration, Baxter has obviously had some limited field testing, and Brooks (again in conversation with Guizzo and Ackerman) notes that after people have worked with the robot for a while “something interesting happens … People sort of personify the robot. They say, ‘It’s my buddy!’”. It is at this point that the perception of the robot as a friend is reinforced.
This type of reading of Baxter as a “buddy” or friend might be assumed to be closely linked to the robot’s human-like form. However, my research considering the ALAVs, the Fish-Bird project and also Guy Hoffman’s robotic desk lamp, AUR, along with anecdotal evidence from friends who own Roomba robotic vacuum cleaners, indicates that robots of pretty much any form encourage this kind of personification. In addition, for Baxter, I suspect that the use of touch between human and robot might also serve to support the perception of this robot as a social subject, and eventually a friend. The importance of touch in working with Baxter would seem to set this robot apart from others that I have considered in my research to date. This might also suggest that Baxter could be made more machine-like (and less human-like), a move that would reduce my discomfort in placing humanoid robots as servants/slaves, as I have suggested occurs with this robot worker.
I couldn’t overlook this new robot, erm, pillow-bear.
While its communication skills might seem limited to pawing at your head, this robot listens for your snores while its cute companion monitors your blood oxygen. Should your sleep become less than silent, or your oxygen levels drop alarmingly, the pillow teddy will wake you with a gentle paw on your head (although sound sleepers might require a more energetic thump, I suppose).
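The decision logic can only be guessed at from the video, but presumably it is something as simple as this sketch (the thresholds are entirely my invention, not the real robot’s values):

```python
def bear_response(snore_level_db, spo2_percent):
    """Choose the bear's reaction from the sound level and blood-oxygen reading.

    The thresholds here are illustrative guesses only.
    """
    if spo2_percent < 90:       # alarmingly low blood oxygen: act firmly
        return "firm thump"
    if snore_level_db > 60:     # louder than quiet breathing: a gentle nudge
        return "gentle paw on the head"
    return "keep watching"
```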
It’s a little difficult to see from the video just how effective the tickling/thumping action might be, and it’s also a pity that the level of background noise makes it hard to relate the bear’s reaction to the snore of the presumably willing test subject (or more likely the creator of the robot).
At last, an entertaining robot with which to enter the new year. I particularly like the interaction with the cat, and also the squeaks of surprise that result even when one knows that it’s going to jump!
Yes, I want one of these too, and I haven’t even had time to construct my other robot kits at home.
And, yes, while I acknowledge that there are differences between the two robots, I think that they also share many elements in common, not least cuteness!
But of course, Keepon is really famous for his dancing…
I may have shared that once already, but once just isn’t enough for that video.
I’m really impressed by how expressive both Keepon and Tofu are, in particular by their ability to show where their “attention” lies. These robots are seemingly quite simple (although I’m sure the underlying technology is still pretty complex), but they show considerable possibilities as expressive and communicative robots.
No, I don’t mean that these are robots under development, I mean that I’m hoping to build my own Blubber Bot (or two) by around the time I finish my thesis next year.
OK, the first thing to clarify: no, this will not stand in the way of my completion. However, maybe it does indicate that I'm more confident that I am going to complete, at last, in the first quarter of next year. I mean, I'm already planning the party, so it must be true, mustn't it??!!
Anyway, I thought that I should construct some guests of honour for my robot themed party, hence my decision to track down at least one, or maybe two, Blubber Bot kits. The Blubber Bots are a “transitional species” of robot closely related to the ALAV (Autonomous Light Air Vessel).
I’m just hoping that my technical skills are going to be up to the task. They should be great guests, and a nice talking point as they “graze the landscape in search of light and cellphone signals”.
UPDATE: The purchase has been made, now I just hope that I’m capable of putting them together (and that is assuming that the kits arrive safely and with no damage)!
… or possibly a fish?
So, I’m all for experimenting with novel locomotion for robots, and often it does seem that nature provides interesting templates to help designers with this type of problem. Interestingly the inspiration for this robot is not a snake, but rather the sandfish lizard, although it is the way that the lizard tucks its legs in to “swim” through sand using snake-like undulations that caught the attention of the designers.
However, I am rather worried about the idea that this robot could be used to “help find people trapped in the loose debris resulting from an earthquake”. In the main I’m wondering how this robot might be expected to communicate in order to alleviate the panic it might cause in any survivors it found. Would it help if it was talking or even singing a song as it wriggled towards you? I’m not sure, but it might be better than it appearing silently alongside your trapped body.
Here is the video from the New Scientist report:
Telepresence robots are not really what my research is about; I'm more interested in autonomous robots. However, these examples cause me to question ideas about the best way to embody someone's presence through a robot. Recently the AnyBots QB was in the news, and it's pretty odd looking if you ask me:
The idea is that this robot can not only provide a presence in meeting rooms, as is the case with existing teleconference facilities, but will also allow the operator to continue to talk to colleagues as they move back into the office after the meeting. It also allows people to be more involved in the office even when working from a distance, for example being able to look at prototypes or help with specific problems, anything that requires them to be present in a particular physical space.
According to the IEEE Spectrum report the robot includes “a laser pointer that shoots green light from one of its eyes”. One can only hope that this truly is only used to highlight items in a presentation, as opposed to taking over the office by deadly force! In any case, this robot’s eyes don’t seem to add much to its character, although Wired argues that they give the QB an “aesthetic similar to Pixar’s Wall-E”.
Then again, if you want something more aesthetically pleasing than this, but still without a somewhat creepy robot head, how about the VGo?
The Pioneer Navi Robo is a robot in the form of a crab. It has been designed to sit on the dashboard of your car and translate the directions from your GPS into easy-to-interpret claw movements.
So, here it is: the crab that tells you where to go…
There are a lot of reasons why this is one of my favourite robots of the moment. For one, of course, it’s definitely not humanoid, but maybe more important is the clever use of a form that seems non-intuitive but works well in this context.
Here it is close up:
I have always been fascinated by watching videos of crabs signaling to one another; in fact they’re even more entertaining and interesting when you watch them in real life (but you have to creep up on them or they all scuttle back home). Rather than communicating with other crabs, the Navi Robo’s claws really lend themselves to signaling the direction to take in your car. It would seem to be easy to catch sight of the robot out of the corner of your eye, while remaining primarily focused on the road. This is just a prototype, but I like the way that the crab calmly signals in the run-up to the turn, and then flashes its eyes and jiggles the appropriate claw as the turn becomes imminent.
While some people might ask whether this robot would be too distracting for drivers, it is also possible to argue that by utilising peripheral vision, as opposed to encouraging the driver to focus on the GPS screen, this robot could well be a positive safety development. In addition, it might be a vital component of a GPS system for someone who is deaf or finds it difficult to hear the spoken instructions provided by most GPS systems.
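My guess at the behaviour shown in the prototype video can be sketched like this (the distance thresholds and gesture names are mine, not Pioneer’s):

```python
def claw_signal(direction, metres_to_turn):
    """Map a GPS turn instruction to a crab gesture, escalating as the
    turn approaches. 'direction' is "left" or "right"."""
    claw = f"{direction} claw"
    if metres_to_turn > 300:
        return f"raise {claw} calmly"      # early warning, easy to ignore
    if metres_to_turn > 50:
        return f"wave {claw}"              # turn coming up soon
    return f"flash eyes and jiggle {claw}" # turn is imminent
```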
Ultimately, though, this robot wins me over because it’s something I never expected to see, certainly not in this context. It’s just excellent!
So, no blogging from me for a while then. I think I stopped because someone I know requested that I blog about robots again, but I have fallen out with the robots, so no blogging from me…
At some point in this and the next month I hope to reconnect with ideas of communication theory using examples of human-robot communication as illustrations, but I haven’t managed yet. Meanwhile, I am teaching in an upper level Communication Studies unit and enjoying pretty much every minute of that. It’s possible that some of my students may drop by the blog this week or next, so I thought I owed them a more recent post.
What bits of information could I share here which have some bearing on the tutorials for next week?
- My favourite theory uses stories as illustrations; almost all the theorists I am interested in, and whose ideas I quote, do this
- John Durham Peters is someone I cite a lot (and he’s quoted in the reading for this week)
- My life choices and the work I have done can be linked back to stories I have heard that have captured my imagination, from school, through my first degree, at work, in moving to Australia and in my research and teaching
Back to robots soon, yes, I really will get back to the robots… one day…
Given how much of my blog has been recently concerned with the trials and tribulations of thesis writing, I felt that a “Robot of the day” post was in order. So who would you rate: ASIMO v Robbie? You can see where I stand…
Not an academic analysis, but I feel probably the most likely result!
After a side trip to New Brunswick to visit a friend I made at last year's British Society for Literature and Science conference, I travelled back into the US to visit Boston. My main aim was to visit MIT. I had an appointment with someone in the Personal Robotics group at the MIT Media Lab, and I also wanted to visit the MIT Museum.
I had originally planned to visit Guy Hoffman, designer and builder of AUR the robotic lighting assistant, but unfortunately he ended up being out of the country when I was there (some people will go to any lengths to avoid meeting with me)! However, Mikey Siegel kindly agreed to talk to me about his work, and to show me around the Media Lab.
It was an interesting tour, and the lab is just as cluttered with boxes and wires as any other I’ve visited. The only difference in the Personal Robotics section is the large number of cuddly toys that are strewn about the place. I should have asked if I could take some photos, but for some reason felt a bit awkward about this, as if they were bound to say no. I did, however, take some in the museum, just so that I could prove I had “met” Kismet and Cog.
I also spent some time just walking around MIT:
Then I headed off to the Harvard end of town, and into the best book store that I have ever visited. The Harvard Book Store's shelves are piled high, the staff are helpful, and it was packed with browsers.
I know you’re not supposed to do this, or maybe there are no rules for blogging? I decided to back-post a little just as a means of jogging my memory.
While in Montreal I also had the opportunity to meet with Bill Vorn, whom I have mentioned before (very briefly) in this blog. In particular, I was interested in talking to him about his work on a project called Grace State Machines, but I was also keen to see all of the machines he has made which are scattered about his laboratory at Concordia.
I really love visiting labs/studios, they’re usually cluttered, with nowhere to sit down, and bits and pieces of metal and wire everywhere. It’s just great – and I’m really beginning to wonder if I should make my own machines!
I also went back to look at Jessica Field’s work in the museum for a second time. Jessica had obviously dropped in to fix Clara, because she was much more talkative on my second visit (or maybe she just recognised me from before?!)
Yesterday I went to visit Jessica Field, a Canadian artist/roboticist at her studio in Montréal.
Jessica has been building robots for more than ten years, and has an exhibit in the Communicating Vessels: New Technologies and Contemporary Art exhibition I mentioned in the previous post. In this work, four static robots (Alan, Clara, Brad and Daphne) interact with one another to “watch” and “discuss” the movements of their visitors. A video explaining this work is available online. I went to visit these robots on Tuesday, and again today (Thursday). I saw Jessica in between, and mentioned that Clara didn’t seem to be saying much. I suspect that some maintenance work may have taken place, because today both Alan and Clara were working well, and I had fun moving about the space in front of them, in particular moving close to Clara’s “eyes”, which provoked an interesting reaction. You have to spend time with these robots in order to see how they interact, and the problems that they experience in communicating with one another. They “see” the world in very different ways, and cannot therefore agree on what is happening around them.
Jessica is now working on a new set of four robots, three of which can move around a sort of robot play-pen. As far as I am aware these robots do not yet have names, but they do have clearly defined characteristics and different levels of personality. The static robot reacts to sounds it “hears” with its two ears. If a sound reaches both ears then it switches on a light while the sound continues. If it only “hears” with one ear, then it moves around, orienting itself to the sound. One of the moving robots can show either a phototropic or photophobic response, and it moves appropriately. As it does this it draws a line on the ground. Another moving robot follows lines it finds on the ground, and when it reaches the end of a line it stops and “tells” you what it has read with sound. It then becomes attracted to sound, and will move towards this until it finds another line and reverts to line following. The third moving robot follows light in a more “intelligent” way than the robot with a hard-wired response. It considers its movement, and moves more smoothly. However, I didn’t see this robot in action as it was in parts on Jessica’s desk!
As you can probably tell from the description above, these four robots are designed to form a robot ecosystem. They interact with one another, and also, to a certain extent, with their visitors when they follow sound.
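The behaviour rules are simple enough to sketch; here is the static robot’s hearing rule as I understood it from our conversation (my paraphrase of Jessica’s design, not her code):

```python
def static_robot_reaction(left_ear_hears, right_ear_hears):
    """The static robot's rule: sound in both ears switches the light on;
    sound in one ear makes it orient towards that side; silence does nothing."""
    if left_ear_hears and right_ear_hears:
        return "light on"
    if left_ear_hears:
        return "orient left"
    if right_ear_hears:
        return "orient right"
    return "idle"
```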
Although I took photos of these robots it’s not appropriate for me to post them here. These robots are Jessica’s work in progress, and are being prepared for exhibit in January. As Jessica works on the robots she keeps a book of observations. These include scientific information about the circuit diagrams and programming of the robots, but also textual descriptions, stories and narratives based on her observations of the robots.
These robots are going to be presented in tandem with a video. This will take the form of what sounds like a “nature programme” about the robots and how they behave. This video is actually going to overstate what the robots are capable of doing, and Jessica is interested to see how visitors then understand the actual movements and behaviours of the robots in the installation.
For the last four days I have been attending the SLSA (Society for Literature, Science and the Arts), apparently pronounced “salsa”, conference in Portland, Maine.
Above is the view of the harbour from my hotel.
The theme of the conference was “CODE” and I presented a paper called “Machine codes in conversations with embodied emotional robots”, which went surprisingly well considering the level of jet lag I was experiencing at the time! I was on the panel, “Robots & Zombies”, with Nick Knouf and Jentery Sayers, both of whom gave great papers. Nick’s, which was about his robot called Syngvan (n here indicates the version of the project a, b, c, etc), had a particular resonance with my own, as we share an interest in non-humanoid, non-anthropomorphic robots.
In addition to attending the conference, with N. Katherine Hayles and Brian Massumi as plenary speakers, I also had a little time to explore Portland. Here is a picture of the only weatherboard observatory I have ever seen (rather like a windmill which has had its wings pulled off),
and another view of the water from where I ate lunch in the park.
You can see that there is some construction going on in Portland, but it was still a nice place to walk around, and the seafood was great.
Tomorrow I take the early train to Boston, and then fly straight out to Montreal. I’m going to visit Bill Vorn and Jessica Field, both of whom create robotic art installations.
These robots have been built to look like slightly smaller than standard wheelchairs. They are beautifully finished; the materials are in keeping with the idea of the wheelchair, but also seem lighter and more delicate. Their wiring and circuitry is cleverly hidden beneath the seat section. One of the most surprising, and I think important, things is that these robots are autonomous in such a way that the complete installation is robust. They have simple switches: on, off and charging. They have been designed to be easy to look after (for the curators of exhibitions and their staff); there are no complex processes that need to be followed, for example to install certain programs as part of their set-up. These robots have been designed to work over a long period of time, with minimal technical attention.
The only thing that exhibition staff need to be taught is how to catch them! You need to have a strategy to get hold of them and stop them “running away” when you need to recharge them or “rest” them overnight. I think this is just fantastic!
Of course, and unfortunately, these robots were out of action when I visited. However, it is still really good to have seen them up close. It was also very useful to be able to discuss their design, and the future plans of the CSR project team. In particular, I had the opportunity to talk to the artist Mari Velonaki and the roboticists Steve Scheding and David Rye at the same time. I got a clear idea of their technical goals, philosophical ideas and the way in which they all work together as a project team. All of this is relevant to my thesis work, and it was a good visit to have made just before my research trip to the US and Canada.
So, quite a long time ago really, back in January or maybe towards the end of last year, I was thinking about examples of machines that interact with each other, and could interact with people, but without needing to look like humans…
and I thought, Luxo Lamp ©Pixar Animation Studios.
Then recently I found out that Guy Hoffman at MIT Media Lab has created a real-life version called AUR! Well, it's sort of similar. OK, it doesn't hop around, but it does interact with humans, and could be used as part of an interactive office environment. A promotional video on the MIT site shows how the lamp might help someone at work. This video has also been put onto YouTube:
[kml_flashembed movie="http://www.youtube.com/v/4oCVZTrWrKw" width="425" height="350" wmode="transparent" /]
You can see here that AUR has been designed to attend to the human’s point of interest, and moves to light the workspace where their attention is directed. In the video this has been emphasised by making the office environment pretty dark. Some have used this as an excuse to question the usefulness of AUR, suggesting that maybe the invention of the light switch has made this research redundant (look at the comments for this), but I think AUR is an interesting development.
Of course, I am mostly interested not because I particularly want an interactive work environment, but because AUR is a great example of a non-humanoid robot that draws out a variety of responses from humans during interactions. Hoffman’s research has included experiments in which humans and AUR work in partnership to complete a repetitive task, learning from one another as they go, and questionnaires have been used to evaluate the humans’ responses to the robot.
There are many things about this robot that may help me to focus some of my ideas about human-robot interaction.
- the importance of fluency, rhythm and joint action – the idea that turn-taking is all very well, but not that natural in many situations
- the combined use of bottom-up and top-down approaches to perception and perceptual analysis
- working with anticipation and perceptual simulation
- looking for and acting on patterns of perception between different modalities – searching for meaning through a more holistic view of perception
- simplifying the perceptual space – looking for the most salient messages and ignoring the others
- the effect of using non-human form – although it was disappointing in some ways to see how this lowered expectations sufficiently to skew the results of the user experiments. The human side of the team was so impressed that the lamp could take voice commands and follow hand signals that it was marked highly for intelligence and commitment even when not programmed to act fluently (i.e. even when not using anticipation and perceptual simulation)
- while non-humanoid this robot does elicit anthropomorphisation by humans
- the fact that the robot learned with the human led the human to feel that the lamp was somehow like them
- humans working with the fluent robot were self-deprecating; they spoke about their mistakes during the task, and some felt that the robot was the more important partner in the team
This project highlights the idea that the way a robot moves is at least as, and possibly more, important than its form in supporting human-robot interactions.
In his thesis defense, Hoffman mentions that when a robot (in this case in a computer simulation) and a human are working well together (and the robot is in its “fluent” state), it is like watching a dance. This makes me think of Grace State Machines (Bill Vorn), where a robot and human dance as a performance piece, and the link seems all the more appropriate because AUR has also appeared in a play with human actors (although in this role AUR was not acting autonomously).
Hoffman is strongly drawn to creating non-humanoid robots and, I think, would prefer them to be anthropomorphised as little as possible by humans. The idea that using other forms enables a more creative process certainly makes sense to me, although I would not necessarily want the robots to look like existing objects. It might be harder to come up with a novel design, but in some ways that is the way I’d like to see robotics go, in particular for robots destined to be more than partners in working relationships.
However, making familiar objects autonomous does have many possibilities, and another good example is the Fish-Bird project, where autonomous machines were made in wheelchair form. In this case it is particularly important to consider the compromises made for the initial implementation, where the writing arms the artist originally specified were replaced with miniature printers. The characters of Fish and Bird were still created; the practical design constraint was successfully overcome by compromise because the final form of the robots was not completely fixed. Hoffman argues that the aim of building a humanoid robot removes this freedom by providing a final form and behaviour that cannot be compromised: the robot will always be “evaluated with respect to the human original” (Hoffman, Thesis 2007).
Now, I haven’t really got to grips with this yet, but what I want to do next is to consider these human-robot interactions in more depth. I would like to link this with the ideas that I already have in relation to the encounter between self and other in Emmanuel Levinas, and also to consider a theory that I have just come across that uses Levinas to open up a new consideration of communication.
Time for a robot of the day. This is Bar Bot (to the right of this picture taken by Ewald Elmecker and Flickred by Alexander Barth) at a video shoot:
Bar Bot’s makers explain that this is probably the most humanoid robot ever built, because it is “driven by self interest”. Bar Bot exists to drink beer, and the drinks are on you! Bar Bot interacts with humans, but its objective is not to get to know you; rather, it just wants your change. As soon as enough money has been collected, Bar Bot turns to the bar to order a beer.
Although the makers don’t stress this, I like the fact that when Bar Bot finishes its drink it just drops the empty can on the ground. Another clear reference to human traits there I think!
Bar Bot takes the goal of roboticists – to create the ultimate humanoid robot as a helpful worker or companion – and twists this around to identify a very different and challenging outcome.
So, is this relevant for me?
[kml_flashembed movie="http://www.youtube.com/v/7mTb7LYj7KE" width="425" height="350" wmode="transparent" /]
The cockroach controlled mobile robot created by Garnet Hertz. Above is his movie about the project, and below is one from Daily Planet.
[kml_flashembed movie="http://www.youtube.com/v/6_wKE83vxdk" width="425" height="350" wmode="transparent" /]
While this project has resulted in what is strictly a cyborg development, I think that it is interesting that Hertz sees the cockroach as the archetypal posthuman, a more literal successor to humanity “than Fukuyama, Stock or Hayles envisions”. I think this is related to my obsession with the importance of other-than-human robots.
It is pleasing that putting the cockroach in the robot alters people’s reactions to the roach. The cockroach becomes cool, rather than disgusting, although it still appears to be rather scary if it moves towards you!
Of course, I also like the way he is pleased to have “cornered the market” in “designing wearable technology or exoskeletons for cockroaches”. I also appreciate the idea that “after we’ve all killed each other in WWIII with biomimetic robots, the earth will be happily inhabited by cockroaches. These insects will need something to drive on all of the abandoned freeways.”
Given my interest in machines that look like machines, but still interact with humans, it should come as no surprise that I like the work of Bill Vorn. Of his current projects two are of particular relevance:
- Grace State Machines – a performance in which a human dances with a machine
- Protozoic Machine – a machine built to interact with people, but deliberately designed to look like a machine, and not like any living being
I’m sure that I’ll write more about these projects soon, and might be able to visit Bill Vorn towards the end of this year.
The Fish-Bird Project was an art-science collaboration that resulted in an installation exploring the possibilities of creating a dialogue between two robot wheelchairs and human visitors using movement and written text.
There is a lot of information about the project available from the above link. The particular ideas behind this project that interest me are:
- Non-anthropomorphic representation
- Not cute
- Movement implying being and being alive
- Movement as communication
- Movement and text creating the “sense of a person” (aided by the absence implied by the wheelchairs)
- Movement indicating awareness, mood, intention
This looks like a great example for my thesis (thanks, Chantal!), and if I can make it to Sydney I should be able to make arrangements to meet Fish and Bird, although I don’t know if I’ll be able to interact with them in the way shown in the video on the website.
One of the things I have found in my research so far is that artists seem more prepared than roboticists to investigate human interactions with a wide range of robot forms. This is a huge generalisation, I suppose, but there certainly seems to be more acceptance of the possibilities of a wide range of interaction types in installation or performance art.
Here, as an illustration, is a link to the Autonomous Light Air Vessels website. These flying robot “creatures” form an interactive flock, and in version 2 people can use mobile phones to communicate with either one ALAV or the group as a whole; this communication alters the individual or flock behaviour.
It is sometimes difficult to see the ALAVs’ reactions in the videos, but I find them fascinating, and would love to have the opportunity to interact with them myself. The fact that they fly brings them close to some of my science-fiction robot inspirations (more of these in a future post) and maybe this is why I am so drawn to these creations.
[kml_flashembed movie="http://www.youtube.com/v/c_IkUysQASQ" width="425" height="350" wmode="transparent" /]
And here, just for fun, is the revenge of the robot arm from the previous post. Set to the Chemical Brothers song Believe, this one was pointed out to me by George after a conference presentation in which I showed the GM Advertisement.
[kml_flashembed movie="http://www.youtube.com/v/UQKk3PI-DW8" width="425" height="350" wmode="transparent" /]
So, as you can see from this General Motors advertisement maybe robots don’t need to be humanoid or to have faces in order to convey their feelings in such a way that they can be understood. (Although the music obviously helps in this video!)
I find this idea fascinating. I suppose it appeals to me because I am working to support the idea that robots could be of many varied forms, and yet still be able to take part in sophisticated human-robot interactions.
Cut from original image © Jared C. Benedict in Wikimedia Commons
The robot of the day is Kismet, designed and built at MIT. Kismet was probably one of the robots that first made me start thinking along the lines of my current research.
In recent months my research has used Kismet mainly as an example of a robot where the concentration of design has been on the face. My research questions whether faces are a requirement for successful human-robot interactions, and more broadly, whether robots need to be recognisably human-like in order to support sophisticated human-robot communication.
In general, I would like to argue that in fact there are tremendous possibilities and advantages in using other forms for robot design.