You are currently browsing the tag archive for the 'Research' tag.
At last, an entertaining robot with which to enter the new year. I particularly like the interaction with the cat, and also the squeaks of surprise that result even when one knows that it’s going to jump!
Yes, I want one of these too, and I haven’t even had time to construct my other robot kits at home.
No, I don’t mean that these are robots under development, I mean that I’m hoping to build my own Blubber Bot (or two) by around the time I finish my thesis next year.
Ok, the first thing to clarify: no, this will not stand in the way of my completion. However, maybe it does indicate that I'm more positive that I am going to complete, at last, in the first quarter of next year. I mean, I'm already planning the party, so it must be true, mustn't it??!!
Anyway, I thought that I should construct some guests of honour for my robot themed party, hence my decision to track down at least one, or maybe two, Blubber Bot kits. The Blubber Bots are a “transitional species” of robot closely related to the ALAV (Autonomous Light Air Vessel).
I’m just hoping that my technical skills are going to be up to the task. They should be great guests, and a nice talking point as they “graze the landscape in search of light and cellphone signals”.
UPDATE: The purchase has been made, now I just hope that I’m capable of putting them together (and that is assuming that the kits arrive safely and with no damage)!
… or possibly a fish?
So, I’m all for experimenting with novel locomotion for robots, and often it does seem that nature provides interesting templates to help designers with this type of problem. Interestingly the inspiration for this robot is not a snake, but rather the sandfish lizard, although it is the way that the lizard tucks its legs in to “swim” through sand using snake-like undulations that caught the attention of the designers.
However, I am rather worried about the idea that this robot could be used to “help find people trapped in the loose debris resulting from an earthquake”. In the main I’m wondering how this robot might be expected to communicate in order to alleviate the panic it might cause in any survivors it found. Would it help if it was talking or even singing a song as it wriggled towards you? I’m not sure, but it might be better than it appearing silently alongside your trapped body.
Here is the video from the New Scientist report:
Telepresence robots are not really what my research is about; I'm more interested in autonomous robots. However, these examples cause me to question ideas about the best way to embody someone's presence through a robot. Recently the AnyBots QB was in the news, and it's pretty odd looking if you ask me:
The idea is that this robot can not only provide a presence in meeting rooms, as is the case with existing teleconference facilities, but will also allow the operator to continue to talk to colleagues as they move back into the office after the meeting. It also allows people to be more involved in the office even when working from a distance, for example being able to look at prototypes or help with specific problems, anything that requires them to be present in a particular physical space.
According to the IEEE Spectrum report the robot includes “a laser pointer that shoots green light from one of its eyes”. One can only hope that this truly is only used to highlight items in a presentation, as opposed to taking over the office by deadly force! In any case, this robot’s eyes don’t seem to add much to its character, although Wired argues that they give the QB an “aesthetic similar to Pixar’s Wall-E”.
Then again, if you want something more aesthetically pleasing than this, but still without a somewhat creepy robot head, how about the VGo?
The Pioneer Navi Robo is a robot in the form of a crab. It has been designed to sit on the dashboard of your car and translate the directions from your GPS into easy-to-interpret claw movements.
So, here it is: the crab that tells you where to go…
There are lots of reasons why this is one of my favourite robots of the moment. For one, of course, it's definitely not humanoid, but maybe more important is the clever use of a form that seems non-intuitive, but works well in this context.
Here it is close up
I have always been fascinated by watching videos of crabs signalling to one another; in fact they're even more entertaining and interesting when you watch them in real life (but you have to creep up on them or they all scuttle back home). Rather than communicating with other crabs, the Navi Robo's claws really lend themselves to signalling the direction to take in your car. It would seem to be easy to catch sight of the robot out of the corner of your eye, while remaining primarily focused on the road. This is just a prototype, but I like the way that the crab calmly signals on the run up to the turn, and then flashes its eyes and jiggles the appropriate claw as the turn becomes imminent.
While some people might ask whether this robot would be too distracting for drivers, it is also possible to argue that by utilising peripheral vision, as opposed to encouraging the driver to focus on the GPS screen, this robot could well be a positive safety development. In addition, it might be a vital component of a GPS system for someone who is deaf or finds it difficult to hear the spoken instructions provided by most GPS systems.
Ultimately, though, this robot wins me over because it's something I never expected to see, certainly not in this context. It's just excellent!
The second of the seminars was about managing resources. Speaking personally, and as an interdisciplinary researcher with a lot of resources on the go at the same time, my bibliography and research notes are in a real mess. I’m pretty sure that I’m doing better on my computer than I would be with a card system, but only barely!
This disorganisation is leading to a certain amount of anxiety, as I always feel that my research is out of control, and I keep thinking that I'm missing out lots of things I meant to mention. Time to sort it all out before it's too late!
Part of my problem up until now has been a deep-seated hatred of EndNote. It works ok, and inserts citations into Word documents etc., but it is so painful to edit references and make notes. I also couldn't find a satisfactory way to organise my references into themes and chapters (and I tried using keywords and groups).
I have decided to switch to Zotero (on the advice of a friend, and after a quick trial run over the last couple of days). I still have the EndNote files as a backup, but from now on I'm organising and note-taking in my new browser-based interface (much more satisfying and less clunky).
Anyway, on to what I took away from the seminar…
Never just read a resource (unless you decide after a quick look that it's of no importance to your research). Make your read-through worthwhile: even if you don't have time to make exhaustive notes, always record a summary and a critique, so that you've got something to jog your memory when you see the reference again.
The summary, um, should be a summary.
The critique should identify problems you see with the text, and pick out aspects that are particularly pertinent for your own research.
I’m sure I knew that I should have been doing this all along, but I haven’t. Maybe everyone else has been much better and more organised than me. However, all is not lost, and next week I’m going to work on categorising my resources in Zotero, deleting the things I now know are of no use and writing quick summaries and critiques for resources where I haven’t already done this. (Yes, that probably is a huge number, even though I have lots of notes for many of them, but I’ll work from the most relevant to the least relevant).
As part of my trip back to the UK I had also arranged a follow-up visit to the Bristol Robotics Laboratory. I visited for the first time about a year ago, and received a slightly bemused reception, although my visit turned out to be very interesting and worthwhile.
This year I offered to give a lunchtime seminar, before going round to spend time with the various project teams in the laboratory. I based my presentation on a summary of my research that I had prepared as a lecture last year for teaching in a Communication Studies unit at UWA. My seminar was very well received, by an interested audience who proceeded to ask lots of good questions. I definitely found that talking to people in the lab this year was even more fruitful than last year, because they had a better idea about where I was coming from and the direction of my own research.
I was encouraged by the response I received, and have since tried (although, thinking about it now, not tried hard enough) to set up some joint research with members of the lab. I should really follow this up again, now that I am feeling more positive about my own research.
The last day of my trip (not including a day and a bit of travelling to get back to Perth, which I wasn’t looking forward to very much) was spent wandering around Boston. I had a purposeful morning waiting to get my laptop fixed at the Apple “Genius Bar” (well, I think they’re geniuses, they gave me a new battery in spite of me being just outside my warranty period). Then I headed back into town and lunched at the Union Oyster House – they claim to be the oldest restaurant in America est. 1826 – on Clam Chowder and corn bread, very nice (if a little chewy).
I wandered around the shops, but wasn't inspired, and then the weather began to set in. I made it to the aquarium before it started to rain and spent a happy time watching penguins and looking at pretty fish (the ones not being eaten by penguins). Even here I had a clear aim: to get pictures of some cuttlefish, if they had any. It turns out that cuttlefish are very hard to photograph because they move pretty quickly. Here's one photo that's actually in focus!
When I got out of the aquarium it was tipping down, but for some reason I decided to walk back to the hotel. Getting soaked wasn’t a great idea, but it did mean that I got to walk by the original Cheers bar (as opposed to the fake one in the middle of town). It wasn’t that photogenic, which is just as well, because the rain clouds weren’t going to clear for any photographic work on my part.
After a side trip to New Brunswick to visit a friend I made at last year's British Society for Literature and Science conference, I travelled back into the US to visit Boston. My main aim was to visit MIT. I had an appointment with someone in the Personal Robotics group at MIT Media Lab, and I also wanted to visit the MIT Museum.
I had originally planned to visit Guy Hoffman, designer and builder of AUR the robotic lighting assistant, but unfortunately he ended up being out of the country when I was there (some people will go to any lengths to avoid meeting with me)! However, Mikey Siegel kindly agreed to talk to me about his work, and to show me around the Media Lab.
It was an interesting tour, and the lab is just as cluttered with boxes and wires as any other I’ve visited. The only difference in the Personal Robotics section is the large number of cuddly toys that are strewn about the place. I should have asked if I could take some photos, but for some reason felt a bit awkward about this, as if they were bound to say no. I did, however, take some in the museum, just so that I could prove I had “met” Kismet and Cog.
I also spent some time just walking around MIT:
Then I headed off to the Harvard end of town, and into the best book store that I have ever visited. The Harvard Book Store shelves are piled high, the staff are helpful, and it was packed with browsers.
I know you're not supposed to do this, or maybe there are no rules for blogging? I decided to back-post a little, just as a means of jogging my memory.
While in Montreal I also had the opportunity to meet with Bill Vorn, whom I have mentioned before (very briefly) in this blog. In particular, I was interested in talking to him about his work on a project called Grace State Machines, but I was also really interested to see all of the machines he has made, which are scattered about his laboratory at Concordia.
I really love visiting labs/studios, they’re usually cluttered, with nowhere to sit down, and bits and pieces of metal and wire everywhere. It’s just great – and I’m really beginning to wonder if I should make my own machines!
I also went back to look at Jessica Field’s work in the museum for a second time. Jessica had obviously dropped in to fix Clara, because she was much more talkative on my second visit (or maybe she just recognised me from before)?!
Yesterday I went to visit Jessica Field, a Canadian artist/roboticist at her studio in Montréal.
Jessica has been building robots for more than ten years, and has an exhibit in the Communicating Vessels: New Technologies and Contemporary Art exhibition I mentioned in the previous post. In this work the static robots Alan, Clara, Brad and Daphne interact with one another to "watch" and "discuss" the movements of their visitors. A video explaining this work is available online. I went to visit these robots on Tuesday, and again today (Thursday). I saw Jessica in between, and mentioned that Clara didn't seem to be saying much. I suspect that some maintenance work may have taken place, because today both Alan and Clara were working well, and I had fun moving about the space in front of them, in particular moving close to Clara's "eyes", which provoked an interesting reaction. You have to spend time with these robots in order to see how they interact, and the problems that they experience in communicating with one another. They "see" the world in very different ways, and cannot therefore agree on what is happening around them.
Jessica is now working on a new set of four robots, three of which can move around a sort of robot play-pen. As far as I am aware these robots do not yet have names, but they do have clearly defined characteristics and different levels of personality. The static robot reacts to sounds it "hears" with its two ears. If a sound reaches both ears then it switches on a light while the sound continues. If it only "hears" with one ear, then it moves around, orienting itself to the sound. One of the moving robots can show either a phototropic or photophobic response, and it moves appropriately. As it does this it draws a line on the ground. Another moving robot follows lines it finds on the ground, and when it reaches the end of a line it stops, and "tells" you what it has read with sound. It then becomes attracted to sound, and will move towards this until it finds another line, when it reverts to line following. The third moving robot follows light in a more "intelligent" way than the robot with a hard-wired response. It considers its movement, and moves more smoothly. However, I didn't see this robot in action as it was in parts on Jessica's desk!
As you can probably tell from the description above, these four robots are designed to form a robot ecosystem. They interact with one another, and also, to a certain extent, with their visitors when they follow sound.
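The line-following robot's switching between "read lines" and "chase sound" reads like a simple two-mode state machine. Here is a toy sketch in Python, purely my own illustration of that behaviour (the mode names and actions are made up; this is not Jessica's actual implementation):

```python
# Sketch of the line-follower's two modes, as described above:
# follow a line on the ground; when the line ends, "read" it out
# as sound and switch to seeking sound until a new line is found.

def step(robot):
    """Advance the robot one control cycle between its two modes."""
    if robot["mode"] == "follow_line":
        if robot["sees_line"]:
            robot["action"] = "follow the line"
        else:
            # End of the line: announce what was read, then seek sound
            robot["action"] = "play back the line as sound"
            robot["mode"] = "seek_sound"
    elif robot["mode"] == "seek_sound":
        if robot["sees_line"]:
            # A new line has been found: revert to line following
            robot["mode"] = "follow_line"
            robot["action"] = "follow the line"
        else:
            robot["action"] = "move towards the loudest sound"
    return robot["action"]
```

Even at this cartoon level you can see why the robots form an ecosystem: the sound this robot emits at the end of a line is exactly the kind of event the sound-reactive static robot responds to, and the lines it follows are drawn by the phototropic/photophobic robot.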
Although I took photos of these robots it’s not appropriate for me to post them here. These robots are Jessica’s work in progress, and are being prepared for exhibit in January. As Jessica works on the robots she keeps a book of observations. These include scientific information about the circuit diagrams and programming of the robots, but also textual descriptions, stories and narratives based on her observations of the robots.
These robots are going to be presented in tandem with a video. This will take the form of what sounds like a “nature programme” about the robots and how they behave. This video is actually going to overstate what the robots are capable of doing, and Jessica is interested to see how visitors then understand the actual movements and behaviours of the robots in the installation.
For the last four days I have been attending the SLSA (Society for Literature, Science and the Arts), apparently pronounced “salsa”, conference in Portland, Maine.
Above is the view of the harbour from my hotel.
The theme of the conference was “CODE” and I presented a paper called “Machine codes in conversations with embodied emotional robots”, which went surprisingly well considering the level of jet lag I was experiencing at the time! I was on the panel, “Robots & Zombies”, with Nick Knouf and Jentery Sayers, both of whom gave great papers. Nick’s, which was about his robot called Syngvan (n here indicates the version of the project a, b, c, etc), had a particular resonance with my own, as we share an interest in non-humanoid, non-anthropomorphic robots.
In addition to attending the conference, with N. Katherine Hayles and Brian Massumi as plenary speakers, I also had a little time to explore Portland. Here is a picture of the only weatherboard observatory I have ever seen (rather like a windmill which has had its wings pulled off),
and another view of the water from where I ate lunch in the park.
You can see that there is some construction going on in Portland, but it was still a nice place to walk around, and the seafood was great.
Tomorrow I take the early train to Boston, and then fly straight out to Montreal. I’m going to visit Bill Vorn and Jessica Field, both of whom create robotic art installations.
These robots have been built to look like slightly smaller than standard wheelchairs. They are beautifully finished; the materials are in keeping with the idea of the wheelchair, but also seem lighter and more delicate. Their wiring and circuitry is cleverly hidden beneath the seat section. One of the most surprising, and I think important, things is that these robots are autonomous in such a way that the complete installation is robust. They have simple switches: on, off and charging. They have been designed to be easy to look after (for the curators of exhibitions and their staff); there are no complex processes that need to be followed, for example to install certain programs as part of their set-up. These robots have been designed to work over a long period of time, with minimal technical attention.
The only thing that exhibition staff need to be taught is how to catch them! You need to have a strategy to get hold of them and stop them “running away” when you need to recharge them or “rest” them overnight. I think this is just fantastic!
Of course, and unfortunately, these robots were out of action when I visited. However, it is still really good to have seen them up close. It was also very useful to be able to discuss their design, and the future plans of the CSR project team. In particular, I had the opportunity to talk to the artist Mari Velonaki and the roboticists Steve Scheding and David Rye at the same time. I got a clear idea of their technical goals, philosophical ideas and the way in which they all work together as a project team. All of this is relevant to my thesis work, and it was a good visit to have made just before my research trip to the US and Canada.
So, quite a long time ago really, back in January or maybe towards the end of last year, I was thinking about examples of machines that interact with each other, and could interact with people, but without needing to look like humans…
and I thought, Luxo Lamp ©Pixar Animation Studios.
Then recently I found out that Guy Hoffman at MIT Media Lab has created a real-life version called AUR! Well, it's sort of similar. Ok, it doesn't hop around, but it does interact with humans, and could be used as part of an interactive office environment. A promotional video on the MIT site shows how the lamp might help someone at work. This video has also been put onto YouTube:
[kml_flashembed movie="http://www.youtube.com/v/4oCVZTrWrKw" width="425" height="350" wmode="transparent" /]
You can see here that AUR has been designed to attend to the human’s point of interest, and moves to light the workspace where their attention is directed. In the video this has been emphasised by making the office environment pretty dark. Some have used this as an excuse to question the usefulness of AUR, suggesting that maybe the invention of the light switch has made this research redundant (look at the comments for this), but I think AUR is an interesting development.
Of course, I am mostly interested not because I particularly want an interactive work environment, but because AUR is a great example of a non-humanoid robot that draws out a variety of responses from humans during interactions. Hoffman’s research has included experiments in which humans and AUR work in partnership to complete a repetitive task, learning from one another as they go, and questionnaires have been used to evaluate the humans’ responses to the robot.
There are many things about this robot that may help me to focus some of my ideas about human-robot interaction.
- the importance of fluency, rhythm and joint action – the idea that turn-taking is all very well, but not that natural in many situations
- the combined use of bottom-up and top-down approaches to perception and perceptual analysis
- working with anticipation and perceptual simulation
- looking for and acting on patterns of perception between different modalities – searching for meaning through a more holistic view of perception
- simplifying the perceptual space – looking for the most salient messages and ignoring the others
- the effect of using non-human form – although it was disappointing, in some ways, to see how this lowered expectations sufficiently to skew the results of the user experiments. The human side of the team was so impressed that the lamp could take voice commands and follow hand signals that it was marked highly for intelligence and commitment even when not programmed to act fluently (i.e. even when not using anticipation and perceptual simulation)
- while non-humanoid this robot does elicit anthropomorphisation by humans
- the fact that the robot learned with the human led the human to feel that the lamp was somehow like them
- humans working with the fluent robot were self-deprecating; they spoke about their mistakes during the task, and some felt that the robot was the more important partner in the team
This project highlights the idea that the way a robot moves is at least as important as, and possibly more important than, its form in supporting human-robot interactions.
In his thesis defense, Hoffman mentions the way that when a robot (in this case in a computer simulation) and a human are working well together (and the robot is in its "fluent" state) it is like watching a dance. This makes me think of Grace State Machines (Bill Vorn), where a robot and human dance as a performance piece, and the link seems all the more appropriate because AUR has also appeared in a play with human actors (although in this role AUR was not acting autonomously).
Hoffman is strongly drawn to creating non-humanoid robots and, I think, would prefer them to be anthropomorphised as little as possible by humans. The idea that using other forms enables a more creative process certainly makes sense to me, although I would not necessarily want the robots to look like existing objects. It might be harder to come up with a novel design, but in some ways that is the way I'd like to see robotics go, in particular for robots destined to be more than partners in working relationships.
However, making familiar objects autonomous does have many possibilities, and another good example is that of the Fish-Bird project where autonomous machines were made in wheelchair form. In this case it is particularly important to consider the compromises made for the initial implementation, where the writing arms the artist originally specified were replaced with miniature printers. Here the characters of Fish and Bird were still created, the practical design constraint was successfully overcome by compromise because the final form of the robots was not completely fixed. Hoffman argues that the aim of building a humanoid robot removes this freedom by providing a final form and behaviour that cannot be compromised, the robot will always be “evaluated with respect to the human original” (Hoffman, Thesis 2007).
Now, I haven’t really got to grips with this yet, but what I want to do next is to consider these human-robot interactions in more depth. I would like to link this with the ideas that I already have in relation to the encounter between self and other in Emmanuel Levinas, and also to consider a theory that I have just come across that uses Levinas to open up a new consideration of communication.
Time for a robot of the day. This is Bar Bot (to the right of this picture taken by Ewald Elmecker and Flickred by Alexander Barth) at a video shoot:
Bar Bot's makers explain that this is probably the most humanoid robot ever built, because it is "driven by self-interest". Bar Bot exists to drink beer, and the drinks are on you! Bar Bot interacts with humans, but its objective is not to get to know you; rather, it just wants your change. As soon as enough money has been collected Bar Bot turns to the bar to order a beer.
Although the makers don’t stress this, I like the fact that when Bar Bot finishes its drink it just drops the empty can on the ground. Another clear reference to human traits there I think!
Bar Bot takes the goal of roboticists – to create the ultimate humanoid robot as a helpful worker or companion – and twists this around to identify a very different and challenging outcome.
not a robot?
Tama links to the Times Online “50 best Robot movies” today. Of course, I have my own reservations about the list (although I am very distracted by SF robots that have only appeared in print, and often don’t think so much about those on film), but the main thing that interested me was looking at all the comments. There were so many little arguments over what belongs in the category “robot”, and just so many people who were absolutely sure they were right and everyone else was wrong!
The classification of something as a robot/android/droid/drone/cyborg(/human/person/animal) is obviously something that I mull over pretty much every day, and I still haven’t really found an answer. Mind you, I’m not really looking that hard as I don’t think one exists, at least not in any clearcut way.
It made me smile, though. In particular, when I realised that no one had yet mentioned the origin of the word robot, and the fact that in Čapek's play, R. U. R., the robots were assembled out of organic material. That's always an interesting spanner in the works when trying to clarify the differences between robots, animals and humans.
So, is this relevant for me?
[kml_flashembed movie="http://www.youtube.com/v/7mTb7LYj7KE" width="425" height="350" wmode="transparent" /]
The cockroach controlled mobile robot created by Garnet Hertz. Above is his movie about the project, and below is one from Daily Planet.
[kml_flashembed movie="http://www.youtube.com/v/6_wKE83vxdk" width="425" height="350" wmode="transparent" /]
While this project has resulted in what is strictly a cyborg development, I think that it is interesting that Hertz sees the cockroach as the archetypal posthuman, a more literal successor to humanity “than Fukuyama, Stock or Hayles envisions”. I think this is related to my obsession with the importance of other-than-human robots.
It is pleasing that putting the cockroach in the robot alters people’s reactions to the roach. The cockroach becomes cool, rather than disgusting, although it still appears to be rather scary if it moves towards you!
Of course, I also like the way he is pleased to have “cornered the market” in “designing wearable technology or exoskeletons for cockroaches”. I also appreciate the idea that “after we’ve all killed each other in WWIII with biomimetic robots, the earth will be happily inhabited by cockroaches. These insects will need something to drive on all of the abandoned freeways.”
Given my interest in machines that look like machines, but still interact with humans, it should come as no surprise that I like the work of Bill Vorn. Of his current projects two are of particular relevance:
- Grace State Machines – a performance in which a human dances with a machine
- Protozoic Machine – a machine built to interact with people, but deliberately designed to look like a machine, and not like any living being
I’m sure that I’ll write more about these projects soon, and might be able to visit Bill Vorn towards the end of this year.
The Fish-Bird Project was an art-science collaboration that resulted in an installation exploring the possibilities of creating a dialogue between two robot wheelchairs and human visitors using movement and written text.
There is a lot of information about the project available from the above link. The particular ideas behind this project that interest me are:
- Non-anthropomorphic representation
- Not cute
- Movement implying being and being alive
- Movement as communication
- Movement and text creating the “sense of a person” (aided by the absence implied by the wheelchairs)
- Movement indicating awareness, mood, intention
This looks like a great example for my thesis (thanks, Chantal!), and if I can make it to Sydney I should be able to make arrangements to meet Fish and Bird, although I don't know if I'll be able to interact with them in the way shown in the video on the website.
That’s probably because my research has been having an identity crisis, and I have been trying to sort this out, while also completing curriculum development for next semester.
Curriculum development takes me forever. Maybe that's just because I am a beginner at this teaching and learning stuff. Maybe that's just because it's hard, particularly if you are a reflective practitioner, which of course I must be because I'm a Teaching Intern.
Anyway, it’s back to research today, with an emphasis on making a workable plan for writing, rather than an outline that looks good until you start trying to do something with it. I have been advised to break what seem to be huge all-encompassing chapters into bite-size chunks. This should work for me better as a writer, but also work for my examiner as a reader. They should find my work easier to chew and maybe swallow, or possibly to spit out in disgust!
The other positive note is that the book I ordered a week and a bit ago should be making its way to Perth by now. This time it’s not just “another one about robots/emotions” to read for my research. It’s about how to write your dissertation in fifteen minutes a day (although the author admits this was a lie to get you to buy the book). Maybe I’m just clutching at straws, but it received good reviews on Amazon, and sounded like it might help with the depression of blank-page-itis.
which means, I was surfing the net. Yeah, really, that is research… only it does tend to result in very easy sidetracking, and also a tendency to become overwhelmed and demoralised by the sheer amount of stuff out there about robots.
I decided that I need to work out a way to immediately categorise things I find into: interesting and useful for research; interesting; and not interesting. I was thinking about this because my basic problem is that I find most things “interesting” and of course if they’re amusing then that’s even better, so I end up trying to consider, or at least feeling that I should consider, all of these things as part of my research (not a good idea)!
The decision I made was that in order to be counted as “interesting and useful for research” the robot in question (whether fictional or factual) must be capable of interacting with humans. The robot should just be given “interesting” status if it simply interacts with the world in such a way as to make humans wish that it also interacted with them.
Now, I realise that this does not help to reduce the size of my research project that well, but strangely it does seem to help with my focus. I can now see that the following are “interesting”:
While these ones are “interesting” and at least might be “useful for research”:
Autonomous Light Air Vessels (see previous post)
Of course, since they’re all “interesting” they might all turn up on this blog from time to time in any case!
One of the things I have found in my research so far is that artists seem to be more prepared than roboticists to investigate human interactions with a wide range of forms. This is a huge generalisation, I suppose, but there certainly seems to be more acceptance of the possibilities of a wide range of interaction types in installation or performance art.
Here, as an illustration, is a link to the Autonomous Light Air Vessels website. These flying robot "creatures" form an interactive flock, and in version 2 people can use mobile phones to communicate with either one ALAV or the group as a whole; this communication alters the individual or flock behaviour.
It is sometimes difficult to see the ALAVs' reactions in the videos, but I find them fascinating, and would love to have the opportunity to interact with them myself. The fact that they fly brings them close to some of my science fiction robot inspirations (more of these in a future post) and maybe this is why I am so drawn to these creations.
[kml_flashembed movie="http://www.youtube.com/v/UQKk3PI-DW8" width="425" height="350" wmode="transparent" /]
So, as you can see from this General Motors advertisement maybe robots don’t need to be humanoid or to have faces in order to convey their feelings in such a way that they can be understood. (Although the music obviously helps in this video!)
I find this idea fascinating. I suppose it appeals to me because I am working to support the idea that robots could be of many varied forms, and yet still be able to take part in sophisticated human-robot interactions.
Cut from original image © Jared C. Benedict in Wikimedia Commons
The robot of the day is Kismet, designed and built at MIT. Kismet was probably one of the robots that first made me start thinking along the lines of my current research.
In recent months my research has used Kismet mainly as an example of a robot where the concentration of design has been on the face. My research questions whether faces are a requirement for successful human-robot interactions, and more broadly, whether robots need to be recognisably human-like in order to support sophisticated human-robot communication.
In general, I would like to argue that in fact there are tremendous possibilities and advantages in using other forms for robot design.
This blog was created primarily to hold pages of information that I might want to direct people towards. For example, a curriculum vitae and academic portfolio information.
It is vaguely possible that I’ll get around to actually making posts to this blog as well. At least I feel better about this forum, rather than the university managed one that for some reason made me feel like BB was watching me.