Research


Slideshare have decided to remove the slidecast functionality from their service, so I have been looking for a new home for my talk about human-robot teams, and sending robots into dangerous situations. I think I’ve found an alternative now, but I just realised that sharing the link on Twitter doesn’t work, since the site wants people to sign up (or sign in) in order to view the presentation. Here is an embedded version, which I hope anyone will be able to view:

This isn’t one of my best talks, which is irritating given that it’s the one I have recorded and been able to share in this way, but I’m hoping that I’ll be able to do something similar in the future with other presentations.

You know what they say about buses?

You wait for ages… and then they all arrive at once.

That actually happened to me in London once, although it was even worse than that, since no buses came for ages, followed by a whole line of number 45s. Not the bus I was looking for. :(

Apparently, the same thing happens with new teaching sessions. I’m about to start Curtin’s Semester 1, lecturing and teaching Web Media 207/507, and simultaneously Open Universities Australia Study Period 1, teaching Web300 Web Production.

If any students end up looking for me online, then this is what they’re going to see… so I’d better say something positive! ;)

Luckily that’s easy. I love teaching both of these units. I particularly like the way that my students are asked to think about theory, but also create web media of their own to share online. I have to say that I enjoy marking web media products more than essays.

So, let the new Semester/SP begin, as introduced by my mascot for 2014:

yay_reaction

Last Sunday (24 November) was Curtin University’s RoboFair, an annual event where anyone who is interested can visit and find out more about robots and robotic engineering. I was invited to be a part of the event this year as a representative of the Centre for Culture and Technology (CCAT) at Curtin. There were lots of stands with interesting robots, and a whole heap of interactive displays for children and adults to enjoy. In some ways the most exciting thing for me was my poster (sad, I know)! This is the first time I’ve ever had a poster describing the types of humanities perspectives that I work with in relation to robots and communication:

CCAT RoboFair Poster

I also decided to try to run a survey during the event, but this didn’t quite work out as planned… i.e. I didn’t really get anyone to complete the survey (except for about four people for whom I entered the answers myself). Note to self: don’t expect people wandering round an open-day style event to scan QR codes or go to (even shortened) web addresses unless there’s a prize on offer!

However, in spite of this failure I did get to talk to a lot of great people, and certainly got the overall sense that many are interested in the relationship between technology and human culture/society to the extent that they would like to attend seminars/workshops to think about and discuss the topic. I’m pretty sure that no-one I spoke to had ever heard the word MOOC though.

I did have a very brief wander around the rest of RoboFair, and I met a very interesting artist, Nathan Thompson, with a “robo-guitar briefcase” (yes, that’s how he describes it). I’d link to his website, but I think it’s having a few technical problems at the moment; however, assuming I catch up with Nathan and his robo-guitar again I’ll write a post about this analogue-based device, which has toured Japan recently and I’m hoping will perform in Perth sometime soon.

My talk, “Send in the Robots”, for the Adventures in Culture & Technology seminar series arranged by the Centre for Culture & Technology at Curtin was yesterday, and I have just uploaded the slides and audio to SlideShare:

It was, as I think I mentioned in my previous post, definitely a work in progress, but it went pretty well and resulted in a lot of debate on the day.

Next week I’m presenting a talk, Send in the Robots, for Adventures in Culture and Technology (ACAT), which is the seminar series for the Centre for Culture and Technology (CCAT) at Curtin. My talk is very much a work in progress, and will develop into a chapter for the book I’m writing currently as part of my brief research fellowship at CCAT. The format for these seminars allows me to speak for 20-40 minutes, after which time I pose three questions to encourage audience debate. Finally, the seminar is opened up more generally to audience questions and comments. I hope that the talk will be interesting and that, with the help of the audience, I’ll get a better idea of what the book chapter should be about!

Poster for Send in the Robots

As the poster indicates, the robot cat image has been used courtesy of Martin Fisch on Flickr.

All this talk about perspectives, windows, maps and travelers etc. and no mention of robots… well, I’d better do something about that!

Alan, Brad, Clara and Daphne are “cybernetic machines” designed and built by the artist Jessica Field. They are all linked together to form an art installation, a system that is able to perceive human visitors. I saw these ‘robots’ when I was in Canada in 2007, at the Musée des beaux-arts de Montréal as part of the Communicating Vessels: New Technologies and Contemporary Art exhibition.

Semiotic Investigation into Cybernetic Behaviour from Jessica Field on Vimeo.

Alan, Brad, Clara and Daphne can’t move around, so they can’t really be thought of as travelers, but the ‘conversation’ between Alan and Clara offers an extreme illustration of interaction between beings that perceive the world from incommensurable perspectives. Alan is able to sense the motion of visitors over time, whereas Clara senses their distance from her in space. Alan and Clara’s perceptions of their environment are communicated to human visitors by the other two robots/computers in the system. Brad produces noises indicating particular aspects of their emotional state or “mood”, while Daphne translates their interactions into a conversational exchange in English. Although Alan and Clara aren’t really communicating with each other directly, their potential interaction is played out for visitors to the installation. As you move around the room you begin to ‘experiment’ with the robots (at least that’s what I did) in order to try to work out what their conversation means, what they can and cannot ‘see’.

Alan and Clara’s conversation highlights the difficulty involved in discussing the world with an other that senses its environment in an entirely different way from you. They see the world through different windows, and most of the time they are unable to agree on what is happening. Occasionally, Alan and Clara both ‘catch sight’ of a visitor at almost the same moment, “WOW! YOU SAW IT TOO”, and they are able to agree that something is there, but for much of the time the conversation is one of confusion over what, if anything, is out there in the installation space.

The difficulty in their interaction unfolds in part because of the extreme difference in their perceptions, but also because Alan and Clara are unable to develop any strong sense of trust for each other or respect for the other’s judgement. This means that, while they appear to find their disagreements over what is in the room unsettling, they don’t take any steps to try to work together in developing a sense of what is happening in the room. Of course, the installation is designed precisely not to explore this idea, but rather to focus on the incommensurable nature of Alan and Clara’s ideas about the world. It offers a great illustration to help explain why I’m particularly interested in how trust and respect can develop between disparate team members who sense the world in different ways. Attaining a level of trust and respect is key in effective human-dog teams, for example, and I think it could also be vital in human-robot teams.


now we have come to travelers who multiply meanings as they move, we should be wary of getting too comfortable with any single line of analysis. These stories have as many senses as the contexts of their telling. Their tracks point every which way. Odysseus’ oar may also be a winnowing fan, but that hardly exhausts its meanings. Burying the handle of the winnowing fan in a heap of grain is a sign that the harvest is done. Burying a sailor’s oar in a heap of earth is the sign that marks that sailor’s grave. Maybe when an oar stands over a grave it does come to the end of its meanings, for then the traveler’s journey is done. But who would want such closure? “Rabbit jumped over Coyote four times. He came back to life and went on his way.”

(Lewis Hyde, Trickster Makes This World, 1998, p. 80)

I really feel that I should have read this book many years ago, but it was shoved under my nose recently, and this section appeals to me in light of yesterday’s post about windows on the world. Hyde provides some rather beautiful examples of how a different perspective changes the meaning of something.

The point about multiple meanings, and developing a wary eye towards “any single line of analysis”, makes a lot of sense to me. However, I also think that it is one of the reasons that I find research writing so difficult. I try to be very careful to see through more than one window (as Mary Midgley suggests) to see more than one meaning, and I then find it very difficult to place descriptions and analyses of the different views together on the page. It is easier in a verbal presentation, probably because the ideas are performed in a particular space and time, for a particular audience, and often because there is only time to discuss a few points. In writing it always seems so much more complicated…

Rabbit and Coyote

 

While Chomsky, at least in his political work, operates most comfortably at the level of empirical data and relies on his encyclopedic grasp of the facts to assault his opponents, Žižek is more interested in the ways people comprehend those facts, in the symbolic laws and regulations that frame their understanding of the world. Thus, if Chomsky emphasises facts, Žižek’s primary concern is the ideological framework colouring their interpretation.

Importantly, these two positions are not as diametrically opposed as they may initially appear. What we have here is not an irreconcilable contradiction but a case of different dimensions. In their remarks, Chomsky and Žižek simply do not inhabit the same plane. They are operating from different levels of abstraction, both of which, I claim, are important and necessary for political struggle.

(Greg Burris, What the Chomsky-Žižek debate tells us about Snowden’s NSA revelations, in The Guardian, Comment is Free, Sunday 11 August, 2013)

Since I am interested in looking at communication from a range of theoretical perspectives (some of which are incommensurable at least some of the time), this article was relevant both because it unpicked an example of communication between theorists who “do not inhabit the same plane”, and because it went on to consider how useful it is to analyse a political situation from the disparate perspectives they offer.

I was also reminded of the work of Mary Midgley and her explanations of the importance of realising that pluralism in science is useful as opposed to being a problem that must be overcome. Midgley equates different scientific perspectives with the range of world maps on the early pages of an atlas, which may show population, climate, political boundaries etc. and therefore appear very different from one another. She goes on to suggest:

We have to see the different maps as answering different kinds of question, questions which arise from different angles in different contexts. … The plurality that results is still perfectly rational. It does not drop us into anarchy or chaos.

(Science and Poetry, 2002, p. 82)

In The Myths We Live By, Midgley offers another way of thinking about the idea of pluralism (and this is the one referred to in the title of my post):

another image that I have found helpful on this point is that of the world as a huge aquarium. We cannot see it as a whole from above, so we peer in at it through a number of small windows. … We can eventually make quite a lot of sense of this habitat if we patiently put together the data from different angles.

(2004, p. 40)

I am aware, even as I consider the different world views of Chomsky and Žižek, that I place a great weight on the importance of identifying personal or ideological biases and assumptions that colour one’s argument. Therefore, while I might find Žižek very difficult to understand much of the time, I think that this makes me more open to his style of critique, as opposed to Chomsky’s empirical stance which, as Burris notes, “downplays or even ignores his own ideological presuppositions”. However, in spite of my personal bias, I can see it is important always to remember Midgley’s warning that “if we insist that our own window is the only one worth looking through, we shall not get very far” (2004, p. 40).

Tomorrow (11 June, 2013) I am giving a seminar at the Bristol Robotics Laboratory (BRL). It is five years since I last visited the lab, and I’m looking forward to seeing how the projects I saw back then have developed, as well as getting the opportunity to see new projects that have started more recently.

This is the title and overview of what I have planned for the seminar:

“Tempered” Anthropomorphism and/or Zoomorphism in Human-Robot Interactions

In this talk I consider how a level of “tempered” anthropomorphism and/or zoomorphism can facilitate perceptions of, and interactions between, overtly different communicators such as humans and non-humanoid robots.

My argument interrogates the tendency within social robotics simply to accept the ascription of human characteristics to machines as important in the facilitation of meaningful human-robot interactions. Many scientists and other academics might argue that this decision is flawed in a similar way to scholarship that attributes human characteristics to animals. In contrast, my analysis suggests that it is possible to adopt a “tempered” approach, in particular when the robot other is overtly non-humanoid. I suggest that a level of projection is unavoidable, and is quite possibly the only way to attempt to understand autonomous or semi-autonomous robots. However, being constantly reminded of the “otherness” of the machine is also vital, and is of practical value in creating effective multi-skilled teams consisting of humans and robots.

I will try to alter my talk as required in response to my audience’s reactions, since it can be quite challenging to present humanities-type research to a predominantly technical audience. My aim is to emphasise the practical use of reconsidering human-robot interactions in this way.

Baxter is a new workplace robot developed by a company called Rethink Robotics, run by Rodney Brooks (a very well known robot designer, who also started the company iRobot, famous for developing Roomba robotic vacuum cleaners).  Baxter is designed to work next to people in manufacturing environments, being human-like in form and consisting of a torso and two arms, together with a screen “face” (well, one consisting of eyes and eyebrows at least).

Most interesting to me is the way in which people communicate with Baxter, using touch first to get the robot’s attention and then to move the robot’s arms into particular positions.  This reminds me of the touch of a yoga teacher, for example, in helping to position people into a particular pose.  Baxter also has a graphical interface, displayed on the same screen that more often shows the robot’s eyes, which is controlled with buttons on each arm.  In order to “program” Baxter to complete a task a person can therefore show the robot what to do by moving its arms into position and choosing the appropriate action, e.g. to pick up or place down a part, from the user interface.  Importantly, it is the way in which Baxter learns a movement from being placed into position that seems to separate it from various other manufacturing robots currently in production.

As Rodney Brooks explains in the interview with IEEE Spectrum writers Erico Guizzo and Evan Ackerman, “[w]hen you hold the cuff, the robot goes into gravity-compensation, zero-force mode” such that “the arm is essentially floating”.  This makes the robot easy to position, and as Mike Bugda notes in the video below, Baxter is therefore understood to be “very compliant to the user”.  Although “compliant” is used here in part to emphasise that the robot is flexible and therefore able to deal with “an unstructured environment” (Matthew Williamson, in conversation with Guizzo and Ackerman), there is also a sense in which this robot is being placed as a servant, or possibly even a slave, by virtue of its immediate compliance to a human’s touch.  This design decision is probably a pragmatic response to making the robot easy to program in the workplace, but from my perspective it raises some issues, since this is a robot clearly designed to be read as human-like, yet also as a compliant servant/slave.
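For readers curious about what this kind of “programming by demonstration” amounts to in the abstract, here is a purely illustrative Python sketch: the operator guides a compliant arm through poses, each pose is recorded along with an action chosen on the interface, and the sequence can then be replayed autonomously. The class and method names here are entirely my own invention for illustration; they are not Rethink Robotics’ actual software.

```python
# A minimal, hypothetical sketch of teach-by-demonstration:
# guide the arm (zero-force mode), record waypoints, replay them.
# All names are illustrative, not any real robot's API.

class TeachableArm:
    def __init__(self):
        self.pose = (0.0, 0.0)   # simplified two-joint arm position
        self.waypoints = []      # recorded (pose, action) steps

    def guide_to(self, pose):
        """The operator physically moves the compliant arm."""
        self.pose = pose

    def record(self, action):
        """Store the current pose with an action picked on the UI."""
        self.waypoints.append((self.pose, action))

    def replay(self):
        """Re-run the demonstrated sequence autonomously."""
        performed = []
        for pose, action in self.waypoints:
            self.pose = pose          # move back to the taught pose
            performed.append((pose, action))
        return performed

arm = TeachableArm()
arm.guide_to((0.3, 1.2)); arm.record("pick")
arm.guide_to((1.1, 0.4)); arm.record("place")
print(arm.replay())  # [((0.3, 1.2), 'pick'), ((1.1, 0.4), 'place')]
```

The point of the sketch is simply that the “program” is a list of demonstrated states rather than written code, which is what makes the interface accessible to workers without programming experience.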

The idea of Baxter as human-like is reinforced when Bugda (in the video below) explains that Baxter “exhibits a certain level of interaction, collaborative friendliness that makes it easy to understand and introduce into a manufacturing environment”.  For example, when you touch its arm the robot stops what it is doing and looks towards you, acknowledging your presence, and once its instruction is complete the robot acknowledges that it understands with a nod of its “head”.  Guizzo and Ackerman take this idea further, when they suggest that while at work Baxter’s “animated face” displays “an expression of quiet concentration”, while a touch on the arm causes Baxter to “stop whatever it’s doing and look at you with the calm, confident eyes”.

Although this video is simply a demonstration, Baxter has obviously had some limited field testing, and Brooks (again in conversations with Guizzo and Ackerman) notes that after people have worked with the robot for a while “something interesting happens … People sort of personify the robot. They say, ‘It’s my buddy!’”.  It is at this point that the perception of the robot as a friend is reinforced.

This type of reading of Baxter as a “buddy” or friend might be assumed to be closely linked to the robot’s human-like form.  However, my research considering the ALAVs, the Fish-Bird project and also Guy Hoffman’s robotic desk lamp, AUR, along with anecdotal evidence from friends who are owners of Roomba robotic vacuum cleaners, indicates that robots of pretty much any form encourage this kind of personification.  In addition, for Baxter, I suspect that the use of touch between human and robot might also serve to support the perception of this robot as a social subject, and eventually a friend.  The importance of touch in working with Baxter would seem to set this robot apart from others that I have considered in my research to date.  This might also suggest that Baxter could be made more machine-like (and less human-like) in a move that would reduce my discomfort in placing humanoid robots as servants/slaves, as I have suggested occurs with this robot worker.

IEEE Spectrum report and video of Baxter, “How Rethink Robotics Built Its New Baxter Robot Worker” (Erico Guizzo, Evan Ackerman, October 2012)

I couldn’t overlook this new robot, erm, pillow-bear.

Robot teddy bear to monitor your sleep

Jukusui-Kun-Robot

While its communication skills might seem limited to pawing at your head, this robot listens for your snores while its cute companion monitors your blood oxygen. Should your sleep become less than silent, or your oxygen levels drop alarmingly, the pillow teddy will wake you with a gentle paw on your head (although sound sleepers might require a more energetic thump, I suppose).


It’s a little difficult to see from the video just how effective the tickling/thumping action might be, and it’s also a pity that the level of background noise makes it hard to relate the bear’s reaction to the snore of the presumably willing test subject (or more likely the creator of the robot).

Someone stole Shrek’s ears!!

So, another robot with a rather odd face, oh yes, and its cheeks change colour too.  I’m sorry, but in spite of the alien-chic this one is still a bit too cute for my liking.  This robot is a Media Centre PC (whatever that is, I think it’s supposed to help you work out the connection between the PC and your television) and also an expressive robot.  Here it is, looking stunned:

Reeti - looking somewhat stunned

Well, you’d be stunned too, if you found yourself stuck in a vase up to chin height!

This robot can watch people and faces, can speak and change its expressions.  In addition, it has lights under its cheeks and these can be used to express erm, stuff, the examples being: I’m feeling hot (red) or cold (blue), but I think moods are the underlying idea here.  I’m not sure how autonomously it really does any of these things.  A large part of one video relating to this robot seems to be about programming it to act in certain ways as it says certain scripted things.  There is also “an app for that”, so that you can remote control the robot with your iPhone or iPad.

More information can be found at the Reeti website.  However, I felt bound to link to the rather disturbing video below, which seems to place Reeti firmly as an alien visitor to Earth:

I realise that I have neglected my blog, but I haven’t seen any inspiring robots recently (although I have today, so there’ll be a post on that one soon).  I have, of course, also been busy writing my thesis.  However, I have now been tagged by Gwyneth at Groteskology to share seven things (I really hesitate to use the word ‘fact’ now, in reference to anything, ever) about me, so here goes:

  1. I’m a scientist… no wait, I’m an artist… oh, hold on a minute… what am I?
    I am confused!
  2. I’m a dog person…
    Jemma

    oh, but I like cats too… and bunnies… and birds… and geckos… and crickets…
    Ok, I’m an animal person.  Put me in a context with people and animals and I’ll be the one talking to the animals.

  3. I procrastinate a lot.  I’m procrastinating right now.  I like to think it’s because I suffer (see, I must be an artist) from writer’s block, but I’m beginning to think it’s just because I’m plain lazy (hmm, and back to science).
  4. I get wound up about things because I’m impatient, and I want to know now!  Currently I’m waiting to hear about a full paper I submitted to a conference.  It’s being peer reviewed for publication in the proceedings.  This is therefore a serious distraction, resulting in much procrastination and a huge desire to continuously check email.
  5. I watch a lot of television, but almost always unreality TV.
  6. I’m beginning to think this list is really boring, but that’s in part because…
  7. I used to do lots of things, now I just write my thesis, and when I’m not doing that I’m worrying about not writing my thesis.  It’s not a good way to be, and it’ll be over by the end of July unless something really awful occurs.

I almost decided not to post this (oh no, this is thing no. 8, I suppose) but what the hell.  So, my life is boring (9), but one day soon maybe it won’t be so bad (10??)!  I’m certainly not going to tag anyone else, they’ll only show me up.

At last, an entertaining robot with which to enter the new year. I particularly like the interaction with the cat, and also the squeaks of surprise that result even when one knows that it’s going to jump!

Yes, I want one of these too, and I haven’t even had time to construct my other robot kits at home.
(via Engadget)

And, yes, while I acknowledge that there are differences between the two robots, I think that they also have many elements in common, not least cuteness!

But of course, Keepon is really famous for his dancing…

I may have shared that once already, but once just isn’t enough for that video.

I’m really impressed by how expressive both Keepon and Tofu are, in particular by their ability to show where their “attention” lies. These robots are seemingly quite simple (although I’m sure the underlying technology is still pretty complex), but they show considerable possibilities as expressive and communicative robots.

Meet Tofu, and I think it’s quite obvious why this robot has been added to the Robot of the Day category here.

Meet TOFU from ryan wistort on Vimeo.

Although this robot does remind me of Keepon, it does have the addition of moving eyes, which just add to the awesomeness.

No, I don’t mean that these are robots under development, I mean that I’m hoping to build my own Blubber Bot (or two) by around the time I finish my thesis next year.

Ok, the first thing to clarify is, no this will not stand in the way of my completion.  However, maybe it does indicate that I’m more positive that I am going to complete, at last, in the first quarter of next year.  I mean, I’m already planning the party, so it must be true, mustn’t it??!!

Anyway, I thought that I should construct some guests of honour for my robot themed party, hence my decision to track down at least one, or maybe two, Blubber Bot kits.  The Blubber Bots are a “transitional species” of robot closely related to the ALAV (Autonomous Light Air Vessel).

Blubber Exhibit, Brandts

I’m just hoping that my technical skills are going to be up to the task.  They should be great guests, and a nice talking point as they “graze the landscape in search of light and cellphone signals”.

UPDATE: The purchase has been made, now I just hope that I’m capable of putting them together (and that is assuming that the kits arrive safely and with no damage)!

Humanoid robots freak people out!

“Regarding appearance, as other studies already showed, the humanoid option is not a good one”

Ok, so this is a study from 2008, which immediately makes me think I should have found it ages ago, but BotJunkie only got hold of it this year, so maybe I’m not that far behind the times.  In common with Evan Ackerman, I have been spending a lot of time (a LOT of time, some would say too long) “wondering why robotics researchers persist in designing humanoid robots specifically for domestic applications…” and, of course, my answer is becoming tinged with communication theory.

I would therefore argue that one of the drivers behind the design of humanoid robots is the expectation, which was identified in this research paper, that robots need to communicate in humanlike ways.  In terms of the effect on robot design, this assumption encourages the development of robots with expressive faces, although these faces can range from being very humanlike (as seen in Jules, erm, and Eva, although you may have some misgivings there) to more of a mechanical cartoon (as seen in Kismet pictured below).

Kismet

Maybe this design decision is popular because developing robots that can understand and produce human language really well is still pretty hard to achieve, and the provision of facial expressions offers an alternative way to provide more humanlike communication.  Maybe even with more complete language capabilities roboticists will still think expressive faces add value.

From a personal perspective I’m really not drawn in by such faces on robots.  In many ways I find them pretty off-putting, and I’d be really interested to see more designs that work with movement and sound (and possibly even light) to provide a more “machinelike” communicative feedback that is nonetheless understandable to humans.

… or possibly a fish?

So, I’m all for experimenting with novel locomotion for robots, and often it does seem that nature provides interesting templates to help designers with this type of problem.  Interestingly the inspiration for this robot is not a snake, but rather the sandfish lizard, although it is the way that the lizard tucks its legs in to “swim” through sand using snake-like undulations that caught the attention of the designers.

However, I am rather worried about the idea that this robot could be used to “help find people trapped in the loose debris resulting from an earthquake”.  In the main I’m wondering how this robot might be expected to communicate in order to alleviate the panic it might cause in any survivors it found.  Would it help if it was talking or even singing a song as it wriggled towards you?  I’m not sure, but it might be better than it appearing silently alongside your trapped body.

Here is the video from New Scientist report:

For a completely different, and also more technical, viewpoint on those telepresence robots I was talking about a couple of posts ago please see: So, Where’s My Robot.

This is, at least from my point of view, an important development for robotics reported in the New Scientist.

Programmers have now managed to write “sentiment-analysing” software that has been trained, through collating a bank of comments judged by human readers to contain sarcastic content, to recognise sarcasm.  For some reason I find it amusing that the comments were taken from Amazon.com product reviews, as well as from Twitter.
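The approach described, training on a bank of human-labelled examples, can be sketched in a very simplified form. The toy Python below scores new text by how strongly its words overlap each label’s training vocabulary; real sentiment-analysis systems use far richer features and statistical models, so treat this purely as an illustration of the labelled-corpus idea, with all names and example comments invented by me.

```python
# A toy sketch of learning "sarcastic vs literal" from a bank of
# comments labelled by human readers. Purely illustrative.

from collections import Counter

def train(labelled_comments):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"sarcastic": Counter(), "literal": Counter()}
    for text, label in labelled_comments:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

bank = [
    ("great just great another delay", "sarcastic"),
    ("oh wonderful it broke again", "sarcastic"),
    ("the delivery arrived on time", "literal"),
    ("battery life is good", "literal"),
]
model = train(bank)
print(classify(model, "oh great it broke"))  # prints "sarcastic"
```

Even this crude version shows why the training data matters so much: sarcasm lives in words like “great” and “wonderful” appearing in negative contexts, which is exactly the pattern human labellers can flag and software alone struggles to infer.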

I would assume that adding this ability, to analyse the contents of a statement for sarcastic components, to the existing ability of some robots to read tones of voice, might bring us a step closer to building robot companions that can take part in life more fully by appreciating all of the joys of human(like) existence.  After all, without an understanding of sarcasm how could we expect robots to understand comedy?  “You know in another life, maybe we could have been brothers…” (Black Books)

Telepresence robots are not really what my research is about; I’m more interested in autonomous robots.  However, these examples cause me to question ideas about the best way to embody someone’s presence through a robot.  Recently the AnyBots QB was in the news, and it’s pretty odd looking if you ask me:

The idea is that this robot can not only provide a presence in meeting rooms, as is the case with existing teleconference facilities, but will also allow the operator to continue to talk to colleagues as they move back into the office after the meeting.  It also allows people to be more involved in the office even when working from a distance, for example being able to look at prototypes or help with specific problems, anything that requires them to be present in a particular physical space.

According to the IEEE Spectrum report the robot includes “a laser pointer that shoots green light from one of its eyes”.  One can only hope that this truly is only used to highlight items in a presentation, as opposed to taking over the office by deadly force!  In any case, this robot’s eyes don’t seem to add much to its character, although Wired argues that they give the QB an “aesthetic similar to Pixar’s Wall-E”.

So, my real question, though, is whether the head-like section of this robot really adds anything in terms of useful, or desirable, embodiment.  This robot definitely reminds me of something, and I’m pretty sure it’s not a great memory.  From my perspective I still think it makes more sense simply to mount a screen in a similar way, as seen in this telepresence robot from Willow Garage, the Texai:

Then again, if you want something more aesthetically pleasing than this, but still without a somewhat creepy robot head, how about the VGo.

AnyBots QB in Wired Magazine and in IEEE Spectrum.

The Pioneer Navi Robo is a robot in the form of a crab.  It has been designed to sit on the dashboard of your car to translate the directions from your GPS into easy to interpret claw movements.

So, here it is: the crab that tells you where to go…

There’s a lot of reasons why this is one of my favourite robots of the moment. For one, of course, it’s definitely not humanoid, but maybe more important is the clever use of a form that seems non-intuitive, but works well in this context.

Here it is close up:

I have always been fascinated by watching videos of crabs signalling to one another; in fact, they're even more entertaining and interesting when you watch them in real life (but you have to creep up on them, or they all scuttle back home). Rather than communicating with other crabs, the Navi Robo's claws really lend themselves to signalling the direction to take in your car. It would seem to be easy to catch sight of the robot out of the corner of your eye, while remaining primarily focused on the road. This is just a prototype, but I like the way that the crab calmly signals on the run up to the turn, and then flashes its eyes and jiggles the appropriate claw as the turn becomes imminent.

While some people might ask whether this robot would be too distracting for drivers, it is also possible to argue that by utilising peripheral vision, as opposed to encouraging the driver to focus on the GPS screen, this robot could well be a positive safety development. In addition, it might be a vital component of a GPS system for someone who is deaf or finds it difficult to hear the spoken instructions provided by most GPS systems.

Ultimately, though, this robot wins me over because it's something I never expected to see, certainly not in this context. It's just excellent!

So, no blogging from me for a while then.  I think I stopped because someone I know requested that I blog about robots again, but I have fallen out with the robots, so no blogging from me…

At some point over this month and the next I hope to reconnect with ideas of communication theory, using examples of human-robot communication as illustrations, but I haven't managed it yet. Meanwhile, I am teaching in an upper-level Communication Studies unit and enjoying pretty much every minute of that. It's possible that some of my students may drop by the blog this week or next, so I thought I owed them a more recent post.

What bits of information could I share here which have some bearing on the tutorials for next week?

  • My favourite theory uses stories as illustrations; almost all of the theorists I am interested in, and whose ideas I quote, do this
  • John Durham Peters is someone I cite a lot (and he’s quoted in the reading for this week)
  • My life choices and the work I have done can be linked back to stories I have heard that have captured my imagination, from school, through my first degree, at work, in moving to Australia and in my research and teaching

Back to robots soon, yes, I really will get back to the robots… one day…

Robot by Jessica Field

Last week I went to Sydney for, amongst a couple of other things, STEP 2008. STEP stands for Science, Technology and Economic Progress, and is described as a National Doctoral Program. It is the brain-child of Dr Don Lamberton and has been running for the last 17 years, although I had never heard about it until this year when a call for applications appeared on the CSAA mailing list. The week was filled with presentations by visiting academics (although a number were no-shows for various reasons), student presentations and time working on group projects.

I had a mixed response to attending STEP. Organisationally the whole thing was a shambles, but I enjoyed the student presentations and met some very pleasant and interesting people. The “networking” experience was undoubtedly more positive for those who were all staying together in the accommodation provided close to the University of Western Sydney campus in Parramatta. This was partly because shared adversity always supports the growth of friendships, and also simply because we spent that much more time together as we wandered the streets of Parramatta looking for somewhere nice to eat within everyone’s budget.

My presentation as part of the program wasn't bad, but by the end of the week I felt that maybe I had missed an opportunity. I chose to try to fit a rundown of humanoid robots, "traditional" communication theory, "alternative" communication theory, companion species and non-humanoid robots into my 20-25 minutes. While I actually managed this quite well, it would have been interesting to present later in the week (instead of my timeslot on Wednesday) because I think I might have been better off using STEP itself as an example of the possibilities of complex partial communication, situated knowledges and the importance of respecting otherness-in-relation.

I think that Dr Lamberton wished that there were more pure scientists and engineers in the group, his main goal being to challenge each person's particular point of view and disciplinary bias. However, I thought that the diversity of cultural and academic backgrounds, and PhD topics from narrow, broad and inter- disciplines, lent its own interesting flavour to the week. The fact that most people were very open to all of the research perspectives that were represented meant that the student presentations garnered positive and encouraging feedback, although towards the end I think there might have been a slight lack of respect from some, as the sheer horror of having to listen to yet another presentation wore people down.

For me STEP was a gift as an example of incomplete communication, with its mixture of language difficulties, startling cultural differences, specialist (and sometimes obscure) terminology, huge range of theory, and artistic and scientific perspectives. However, I suppose if I had gone down that path, using STEP as my example, there'd have been fewer robots and therefore fewer videos in my presentation. Maybe that would have been too much of a loss, particularly for an audience who probably needed some bizarre visual stimulation at that point in the week!

In this writing seminar we concentrated on writing conclusions.  Although none of us (bar one, I think) are at the stage of writing the final conclusion chapter to our theses, the suggestion was that thinking about the conclusion earlier in the process can be useful.

In particular, by considering your conclusion you are forced to make a reality check, to see that your thesis is really focused on the things that you most wanted to discuss.  Thinking about the conclusion throughout the project can also help to prevent what I would call “project creep”, which is when you allow your subject to continually grow, and thus constantly move the finishing post.

We discussed the fact that introductions and conclusions bear a striking resemblance to one another, because both summarise what you are talking about, in particular the value of what you are about to say or have said.  However, in general the introduction should concentrate on the value of the questions you have decided to ask, whereas the conclusion should concentrate on the value of the answers you have found, or the arguments that you have drawn out, in your thesis.  Many people seemed to find this distinction helpful in thinking about writing both introductions and conclusions.

Before you ask, no I don’t think that I can link any of the words in the following list with robots or robotics, although maybe the Czech or Slovak etymology for Robota might link with the same Latin root as Roborant, but I doubt it.  Anyway, these are the twenty-four words that have recently been identified as at risk of extinction by the compilers of the Collins English Dictionary.

Abstergent Cleansing or scouring
Agrestic Rural; rustic; unpolished; uncouth
Apodeictic Unquestionably true by virtue of demonstration
Caducity Perishableness; senility
Caliginosity Dimness; darkness
Compossible Possible in coexistence with something else
Embrangle To confuse or entangle
Exuviate To shed (a skin or similar outer covering)
Fatidical Prophetic
Fubsy Short and stout; squat
Griseous Streaked or mixed with grey; somewhat grey
Malison A curse
Mansuetude Gentleness or mildness
Muliebrity The condition of being a woman
Niddering Cowardly
Nitid Bright; glistening
Olid Foul-smelling
Oppugnant Combative, antagonistic or contrary
Periapt A charm or amulet
Recrement Waste matter; refuse; dross
Roborant Tending to fortify or increase strength
Skirr A whirring or grating sound, as of the wings of birds in flight
Vaticinate To foretell; prophesy
Vilipend To treat or regard with contempt

I’m personally working with embrangle and all its derivatives at the moment!

This seminar related to bottlenecks in your research.  Rather depressing when the whole thing feels like you’re stuck in the neck of the bottle, like a cartoon character with your head bulging out, or should that be in?

The key is to stop procrastinating (ha!) and just to start.  So if there’s a section in your thesis that is weighing heavily on your mind, and you don’t know what to do about it, start writing using one of the techniques from the first session.  So for example, use freewriting, freefalling (the one where you make the text white on white so that you can’t make edits) or writing in a different genre.  Even if you only do 20 minutes to start with, you should gradually find that you’ve worked through the bottleneck and gone some way towards writing the section that was causing all that anxiety.

The second of the seminars was about managing resources.  Speaking personally, and as an interdisciplinary researcher with a lot of resources on the go at the same time, my bibliography and research notes are in a real mess.  I’m pretty sure that I’m doing better on my computer than I would be with a card system, but only barely!

This disorganisation is leading to a certain amount of anxiety, as I always feel that my research is out of control, and keep thinking that I’m missing out lots of things I meant to mention.  Time to sort it all out before it’s too late!

Part of my problem up until now has been a deep-seated hatred of EndNote. It works OK, inserts citations into Word documents and so on, but it is so painful to edit references and make notes in. I also couldn't find a satisfactory way to organise my references into themes and chapters (and I tried using both keywords and groups).

I have decided to switch to Zotero (on the advice of a friend, and after a quick trial run over the last couple of days). I still have the EndNote files as a backup, but from now on I'm organising and note-taking in my new browser-based interface (much more satisfying and less clunky).

Anyway, on to what I took away from the seminar…

Never just read a resource (unless you decide after a quick look that it's of no importance to your research). Make your read-through worthwhile: even if you don't have time to make exhaustive notes, always record a summary and a critique, so that you've got something to jog your memory when you see the reference again.

The summary, um, should be a summary.

The critique should: identify problems you see with the text and identify aspects that are particularly pertinent for your own research.

I’m sure I knew that I should have been doing this all along, but I haven’t.  Maybe everyone else has been much better and more organised than me.  However, all is not lost, and next week I’m going to work on categorising my resources in Zotero, deleting the things I now know are of no use and writing quick summaries and critiques for resources where I haven’t already done this.  (Yes, that probably is a huge number, even though I have lots of notes for many of them, but I’ll work from the most relevant to the least relevant).

Plan Y is based on an idea from the first Moving Forward seminar: trying to write in a different genre, but with a twist.  I have been trying to write to a deadline this week, but I’m experiencing the same old problem of being unable to move along with what I want to say.  I seem to get tied up in prose.

Having tried free-writing, but finding that it just leads to the same old rants, I have decided to try something new. I have Plan X's Radical Chop-up Über Document to work with, and I've decided just to write my first draft as if I was giving a presentation or a lecture. The twist is, you see, that I'm not really using a writing genre at all; it's more like trying to access the way that I'd explain it to an audience directly, face to face.

The reason that I think this might help is that I almost always feel clearer about what I'm doing when I'm talking about it, as opposed to when I'm trying to write it down in "scholarly prose" (whatever that is). Last week I even recorded myself talking through the chop-up document, because it helped me to get on with reviewing the document rather than miserably trying to work at improving it from beginning to end. This did help, but when I went back to writing my positive ideas fell apart too quickly.

So, Plan Y:

  • Write as if I’m presenting the material live, talking it through, using my examples etc.
  • If I get stuck then record a section and then write from the recording

Obviously I’m aware that this will only result in an early draft, and it’ll need to be rewritten to make it a “real thesis”, but at least I might end up with the precious draft to work with :).  I’m hoping that by accessing the speaking as opposed to the writing parts of my brain I’ll bypass all the negativity that keeps on tangling me up in knots.

This morning was the first of the “Moving Forward” thesis writing seminars.  It was mostly to do with writing block, which while it is still there to an extent is not currently my major problem.

The seminar was worrying to start with, as I began to wonder how useful it would be for me, and there were (as usual) many people from sciences and social sciences and not many from arts, humanities and cultural studies.  However, it was good, and there was lots of writing time which worked pretty well for me.

The standout thing I took away was the rather depressing fact that writing never gets any easier.  It is always like getting into a cold pool for a swim – every day starting to write is going to seem like a really bad idea, and you’re only going to begin to feel better once you’ve taken the plunge and got going.

So, I could finish on that note, but that would be bad!  Here’s an idea:

Sometimes I can dive in if I promise myself I just have to do 20 minutes – this doesn’t always work, but if you remember to take a short break after that initial 20 minutes you may well find that you’re ok to continue and write for longer.  Even if this doesn’t happen, at least you’ve done that initial 20 minutes!

What if the 20 minutes just doesn't happen? Well, if I'm just time-wasting (i.e. browsing/networking) then I suspect (although I haven't done this very often and should do it more) that I need to remove my internet connection!

If you’re just stuck then try free writing, although I’d try writing about a specific piece of your thesis in this way, rather than just babbling, that way you are more productive, and will hopefully build up your focus so that you can continue on thesis-related stuff.

What is free-writing in this context?

Well, I think I’d describe it as writing for yourself.  The element of freedom is more in the style, rather than in the content.  Don’t worry about being academic, don’t worry about your supervisor reading it, use “I” to focus on your argument.  You can use this piece of writing later, rework it so that it fits into whatever style you need to use, but if you start like this it really is much easier.  It also has the advantage of making you write more about what you think, rather than just piecing together what other people have said.

If you find that you can’t flow with your writing, ie you keep stopping, making corrections, going back and editing etc. then a suggestion from this morning’s seminar was to switch your screen off, use a white font on a white background, use a small font so that you can’t read exactly what you’ve written.  This sounds extreme, but I think it would help if you find that you’re thinking too much while you’re writing.  You want to write something that says roughly what you want it to say rather than being perfect (in any respect).

Trying to write an interdisciplinary PhD thesis is great – really, it is exciting and you never get bored – but as I suggested in my previous post, it is also confusing and demoralising a lot of the time.   The problem is that, however much you enjoy your research, eventually you want to run away from your computer screaming.

So, I’d got as far as breaking writing block, and getting (many) words down onto paper.  The problem that remained was how to get those words into the correct order!  I was still procrastinating, and feeling afraid of “chapter documents”.

Enter the Radical Chop-up Über Document (RCUD).

My supervisor asked me what my strategy was in writing my chapters, and I said "err". As I clearly had no strategy she suggested the RCUD. I am still working with this technique, but the bare bones are:

  • Create a new document
  • Save it, and make sure the name contains the word “radical” somewhere.  Note that this is essential, it may sound silly, but you need to be reminded that what you are doing is “radical” otherwise you’re never going to chop it all up.
  • Now open up your other documents in turn and cut and paste the best bits from them into your new document.
  • But, as you do this you must be radical.
  • Don’t take pieces that you don’t think are good enough
  • Feel free to write yourself notes in capitals
  • Put subtitles in for the sections as you add them
  • Reorder sections at will
  • Cut bits when you find you’ve written a better version elsewhere

(And I’ll have to add other instructions as I work out what they are!)

I find that this is helping me to put together my chapters.  Previously, when I have tried to write a chapter from beginning to end I have become paralysed.  I have constantly felt that I’m forgetting important stuff, and I have ended up writing loads of detail on areas outside my main focus of interest.

The radical document works for me because I have lots of documents where I’ve written some good bits and some bad bits, and I also have notes from many presentations and even a couple of lectures that are also relevant.  My brain is too small to hold all the ideas I have for my chapters at once, and by using this technique I don’t feel that I’m leaving things out all the time.

I wonder if that’ll help anyone else.  At least by writing it down here I’m going to remember to use it again!

I have decided to write a bit about writing.

The title says “challenges and opportunities” rather than problems or issues, or even impossibilities, simply because that’s what we used to say when working in Information Technology.  There were never problems, and even the word issue went out of fashion, but there were always challenges, and sometimes opportunities.  There are a number of IT people out there who would smile to read that heading, and immediately know what I was about to discuss!

So, what’s my “problem” with writing?  Well, I just think that it’s really difficult.  In fact, I’m having so many “issues” writing my PhD thesis that I’m currently taking part in a seminar series called “Moving Forward”, designed to help me work more productively and optimistically.  Someone pointed out that the title, “Moving Forward”, made them think about personal relationship advice, and when I thought about this I realised that it is about a form of relationship, the one between me and my thesis.

So, before I talk more about the seminars and the writing ideas that they have sparked off for me, here is a summary of a few of the problems I’ve been wrestling with recently:

  • Panic over too many resources
  • Writing block
  • Structuring, both the whole thesis and individual chapters
  • Fear of failure
  • Procrastination

I expect there are more, but that’ll do for now (and just writing these ones down makes my heart race.)

And here are some of the strategies I’ve already tried to improve the situation (and turn those challenges into opportunities)!

  • Free writing each and every day – just 15 minutes a day, about anything at all to start with, and then gradually working down to more thesis related ideas.

    This really does work to break writing blocks.  You must remember to write constantly, don’t stop to think, and even just write nonsense words if you get stuck.  The idea is to write slightly ahead of your detailed thinking, and therefore to avoid listening to negative thoughts (“You can’t do this”, “You can’t say that” etc.) and also positive, but tangential, thoughts (“I need to check that reference”, “Didn’t I see something about that on a web page”, immediately followed by going to check the reference, or looking up the web page etc. and therefore a complete stop in writing).

  • Planning, replanning and planning again

    I find it hard to break up my topic in a way that supports what I (think I) want to say.  I’m getting used to the idea that each of my thesis plans is a positive step, but that things may still change in the future.  I still haven’t cracked the structuring challenges within each chapter.

  • Using spreadsheets to plan and record chapter word counts and progress

    This method got me to write in bulk, and therefore made me realise that I can do this, I can write that many words.  However, the chapter I produced didn’t say what I wanted it to say.  I let myself go off on tangents, and lost my focus.

  • Using a timesheet

    This does help me to prevent my procrastination time overtaking my working time, although I’m not using my timesheet at present (not sure why, maybe it got too depressing).

For me, procrastination is all about fear of failure, so getting myself to write (at all) and planning what I’m doing so that I feel it’s under some level of control all helps.  Even though that chapter I wrote under pressure of weekly deadlines and keeping a progress spreadsheet was pretty crap it did prove to me that I could produce words on paper!

But, where has this left me?  Well, I have a new challenge, just how do I go about making two coherent chapters out of all that free writing and attempted chapter writing (and presentations and lectures) that I have in files on my computer?  Because, just trying to start at the beginning and work to the end certainly isn’t getting the sort of results I want…

This was quite possibly my last conference for quite a while: one organised in Perth by the Australian Women's and Gender Studies Association. Given that Gender Studies isn't my home field I only registered for one day. (Well, to be honest, that wasn't the only reason. I've run out of non-competitive funding money to apply for, which means I couldn't really afford to go for one day, let alone the whole thing!)

I arrived in time for the first session in the morning only to find that my friend Sandra seemed to be the only presenter in that panel who had actually come to the conference!  Luckily, someone else from UWA switched panels to keep her company.  Both of their papers were excellent, and I was really interested to hear about their research.

My paper was first in the “Science” panel, and I was glad to get it over with.  It went ok, and I got some interesting comments and questions, although I felt that I had lost quite a few members of the audience.  Given my communication theory bias, towards a phenomenological understanding of the irreducible otherness of the Other, I suppose I shouldn’t have found my inability to share all of my ideas that much of a surprise.  Anyway, I definitely feel that my suggestion that machine-like robots could possibly escape being gendered, which I was already aware was colander-like, was irrefutably shot down.

I was so depressed after the conference that I went to Geraldton.  Many people I know in Perth wouldn’t go there if you paid them, but I actually had a very pleasant time staying with friends, having a really nice meal out with my husband and also missing the worst of the rain (which really hit Perth that week).

As part of my trip back to the UK I had also arranged a follow-up visit to the Bristol Robotics Laboratory. I visited for the first time about a year ago, and received a slightly bemused reception, although my visit turned out to be very interesting and worthwhile.

This year I offered to give a lunchtime seminar, before going round to spend time with the various project teams in the laboratory.  I based my presentation on a summary of my research that I had prepared as a lecture last year for teaching in a Communication Studies unit at UWA.  My seminar was very well received, by an interested audience who proceeded to ask lots of good questions.  I definitely found that talking to people in the lab this year was even more fruitful than last year, because they had a better idea about where I was coming from and the direction of my own research.

I was encouraged by the response I received, and have since tried (although, thinking about it now, not tried hard enough) to set up some joint research with members of the lab.  I should really follow this up again, now that I am feeling more positive about my own research.

This year the British Society for Literature and Science conference was at Keele, and I had a particularly good time because I had arranged a panel with my friend from Canada, and therefore had someone to discuss all the papers and panels with, as well as someone to team up with for dinners and drinks.  Of course, we were both heavily jetlagged in opposite directions, so neither of us was exactly the life and soul of the party, but we had a nice time nonetheless.

The panel, Beauty/Aesthetics in Science and Literature, went really well, and people seemed to enjoy all of the papers. It was lucky that we presented when we did, as John Bryden came up to me in the lunch break and introduced himself. It turned out he was giving a paper about a dancing robot in a panel the following day. I'd never have known this if John hadn't told me, because only the paper titles were available in the programme. Anyway, crisis averted, I went to the paper, and it sounded like an excellent robot.

(Writing all of this so long after the date just reminds me that I really need to contact John again to ask him for more information about this robot!)

The conference also included excellent plenaries from Helen Small, Frank Close and Steven Connor.

The last day of my trip (not including a day and a bit of travelling to get back to Perth, which I wasn’t looking forward to very much) was spent wandering around Boston.  I had a purposeful morning waiting to get my laptop fixed at the Apple “Genius Bar” (well, I think they’re geniuses, they gave me a new battery in spite of me being just outside my warranty period).  Then I headed back into town and lunched at the Union Oyster House – they claim to be the oldest restaurant in America est. 1826 – on Clam Chowder and corn bread, very nice (if a little chewy).

I wandered around the shops, but wasn’t inspired and then the weather began to set in.  I made it to the aquarium before it started to rain and spent a happy time watching penguins and looking at pretty fish (the ones not being eaten by penguins).  Even here I did have a clear aim to get pictures of some cuttlefish, if they had any.  It turns out that cuttlefish are very hard to photograph because they move pretty quickly.  Here’s one photo that’s actually in focus!

Boston Cuttlefish

When I got out of the aquarium it was tipping down, but for some reason I decided to walk back to the hotel.  Getting soaked wasn’t a great idea, but it did mean that I got to walk by the original Cheers bar (as opposed to the fake one in the middle of town).  It wasn’t that photogenic, which is just as well, because the rain clouds weren’t going to clear for any photographic work on my part.

After a side trip to New Brunswick to visit a friend I made at last year's British Society for Literature and Science conference, I travelled back into the US to visit Boston. My main aim was to visit MIT. I had an appointment with someone in the Personal Robotics group at MIT Media Lab, and I also wanted to visit the MIT Museum.

I had originally planned to visit Guy Hoffman, designer and builder of AUR the robotic lighting assistant, but unfortunately he ended up being out of the country when I was there (some people will go to any lengths to avoid meeting with me)! :)  However, Mikey Siegel kindly agreed to talk to me about his work, and to show me around the Media Lab.

It was an interesting tour, and the lab is just as cluttered with boxes and wires as any other I’ve visited.  The only difference in the Personal Robotics section is the large number of cuddly toys that are strewn about the place.  I should have asked if I could take some photos, but for some reason felt a bit awkward about this, as if they were bound to say no.  I did, however, take some in the museum, just so that I could prove I had “met” Kismet and Cog.

Kismet Cog

I also spent some time just walking around MIT:

MIT Buildings

Then I headed off to the Harvard end of town, and into the best book store that I have ever visited. The Harvard Book Store shelves are piled high, the staff are helpful, and it was packed with browsers.

I know you’re not suppose to do this, or maybe there are no rules for blogging?  I decided to back-post a little just as a means of jogging my memory.

While in Montreal I also had the opportunity to meet with Bill Vorn, whom I have mentioned before (very briefly) in this blog. In particular, I was interested in talking to him about his work on a project called Grace State Machines, but I was really interested to see all of the machines he has made which are scattered about his laboratory at Concordia.

One of the Grace State Machines

I really love visiting labs/studios, they’re usually cluttered, with nowhere to sit down, and bits and pieces of metal and wire everywhere.  It’s just great – and I’m really beginning to wonder if I should make my own machines!

I also went back to look at Jessica Field's work in the museum for a second time. Jessica had obviously dropped in to fix Clara, because she was much more talkative on my second visit (or maybe she just recognised me from before?!)

Yesterday I went to visit Jessica Field, a Canadian artist/roboticist at her studio in Montréal.

Jessica has been building robots for more than ten years, and has an exhibit in the Communicating Vessels: New Technologies and Contemporary Art exhibition I mentioned in the previous post. In this work, the static robots Alan, Clara, Brad and Daphne interact with one another to "watch" and "discuss" the movements of their visitors; a video explaining the work is available online. I went to visit these robots on Tuesday, and again today (Thursday). I saw Jessica in between, and mentioned that Clara didn't seem to be saying much. I suspect that some maintenance work may have taken place, because today both Alan and Clara were working well, and I had fun moving about the space in front of them, in particular moving close to Clara's "eyes", which provoked an interesting reaction. You have to spend time with these robots in order to see how they interact, and the problems that they experience in communicating with one another. They "see" the world in very different ways, and cannot therefore agree on what is happening around them.

Jessica is now working on a new set of four robots, three of which can move around a sort of robot play-pen. As far as I am aware these robots do not yet have names, but they do have clearly defined characteristics and different levels of personality. The static robot reacts to sounds it "hears" with its two ears. If a sound reaches both ears then it switches on a light while the sound continues. If it only "hears" with one ear, then it moves around orienting itself to the sound. One of the moving robots can show either a phototropic or photophobic response, and it moves appropriately. As it does this it draws a line on the ground. Another moving robot follows lines it finds on the ground, and when it reaches the end of a line it stops, and "tells" you what it has read with sound. It then becomes attracted to sound, and will move towards this until it finds another line and reverts to line following. The third moving robot follows light in a more "intelligent" way than the robot with a hard-wired response. It considers its movement, and moves more smoothly. However, I didn't see this robot in action as it was in parts on Jessica's desk!

As you can probably tell from the description above, these four robots are designed to form a robot ecosystem.  They interact with one another, and also, to a certain extent, with their visitors when they follow sound.

Although I took photos of these robots it’s not appropriate for me to post them here.  These robots are Jessica’s work in progress, and are being prepared for exhibit in January.  As Jessica works on the robots she keeps a book of observations.  These include scientific information about the circuit diagrams and programming of the robots, but also textual descriptions, stories and narratives based on her observations of the robots.

These robots are going to be presented in tandem with a video.  This will take the form of what sounds like a “nature programme” about the robots and how they behave.  This video is actually going to overstate what the robots are capable of doing, and Jessica is interested to see how visitors then understand the actual movements and behaviours of the robots in the installation.

As I mentioned in the previous post, one of the plenary speakers at the SLSA conference that has just finished in Portland, Maine, was N. Katherine Hayles, who spoke on Friday night.  As this was after my panel, I was still recovering from “presentation stress”, so off the top of my head I can hardly remember what happened at this session, except that it was closely related to the theory found in Hayles’ book, My Mother Was a Computer.

However, I did take some notes :-)!

Given that the conference theme was “CODE”, it was unsurprising that Hayles’ talk stressed the need to treat computation as fundamental, rather than merely peripheral, to our understanding of the world. Hayles therefore spoke about concepts such as hierarchy versus heterarchy, intermediation, complexity and emergence.  In particular she drew on the work of Douglas Hofstadter, not so much on his first tome, Gödel, Escher, Bach, as on his second, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, and spoke about understanding cognition as recognition and the importance of analogy.

Having talked about programming and different codes, Hayles then moved to consider the idea that “the meaning of information is given by the process that interprets it” (Fredkin), and therefore an understanding of objects coming from processes.

These ideas were then brought together as Hayles talked about computers as providing a level of subcognition and the ground for analogies, which then allowed humans to work at the level of creating analogies between analogies.  This supports the understanding that as humans engineer computers, computers re-engineer humans, in a constant process of coevolution.

Then there were some examples, all taken (I think) from Volume 1 of the Electronic Literature Collection, which looks really interesting.

Hayles also talked about the way that, in the media-intensive environment young people experience, they develop a talent for hyperattention, which does not prepare them to embrace the still more valued capacity for deep attention that most university literature courses stress in the close reading of novels.  This is something I have heard talked about before, and yet I still often hear scholars complaining about their students not reading the novel they have been set each week :-)!

Anyway, I hope that gives some sort of impression of what the plenary contained, incomplete I’m sure, but I hope not incorrect!

For the last four days I have been attending the SLSA (Society for Literature, Science and the Arts), apparently pronounced “salsa”, conference in Portland, Maine.

View from my hotel

Above is the view of the harbour from my hotel.

The theme of the conference was “CODE” and I presented a paper called “Machine codes in conversations with embodied emotional robots”, which went surprisingly well considering the level of jet lag I was experiencing at the time! I was on the panel, “Robots & Zombies”, with Nick Knouf and Jentery Sayers, both of whom gave great papers. Nick’s, which was about his robot called Syngvan (where n indicates the version of the project: a, b, c, etc.), had a particular resonance with my own, as we share an interest in non-humanoid, non-anthropomorphic robots.

In addition to attending the conference, with N. Katherine Hayles and Brian Massumi as plenary speakers, I also had a little time to explore Portland. Here is a picture of the only weatherboard observatory I have ever seen (rather like a windmill which has had its wings pulled off),

Portland observatory

and another view of the water from where I ate lunch in the park.

View from the park

You can see that there is some construction going on in Portland, but it was still a nice place to walk around, and the seafood was great :-).

Tomorrow I take the early train to Boston, and then fly straight out to Montreal. I’m going to visit Bill Vorn and Jessica Field, both of whom create robotic art installations.

I have just been to visit the wheelchair robots Fish and Bird, at their home, the Centre for Social Robotics (CSR) part of the Australian Centre for Field Robotics (ACFR).

These robots have been built to look like slightly smaller than standard wheelchairs. They are beautifully finished; the materials are in keeping with the idea of the wheelchair, but also seem lighter and more delicate. Their wiring and circuitry is cleverly hidden beneath the seat section. One of the most surprising, and I think important, things is that these robots are autonomous in such a way that the complete installation is robust. They have simple switches: on, off and charging. They have been designed to be easy to look after (for the curators of exhibitions and their staff); there are no complex processes that need to be followed, for example to install certain programs as part of their set-up. These robots have been designed to work over a long period of time, with minimal technical attention.

The only thing that exhibition staff need to be taught is how to catch them! You need to have a strategy to get hold of them and stop them “running away” when you need to recharge them or “rest” them overnight. I think this is just fantastic!

Of course, and unfortunately, these robots were out of action when I visited. However, it is still really good to have seen them up close. It was also very useful to be able to discuss their design, and the future plans of the CSR project team. In particular, I had the opportunity to talk to the artist Mari Velonaki and the roboticists Steve Scheding and David Rye at the same time. I got a clear idea of their technical goals, philosophical ideas and the way in which they all work together as a project team. All of this is relevant to my thesis work, and it was a good visit to have made just before my research trip to the US and Canada.

So, quite a long time ago really, back in January or maybe towards the end of last year, I was thinking about examples of machines that interact with each other, and could interact with people, but without needing to look like humans…

and I thought, Luxo Lamp ©Pixar Animation Studios.

Then recently I found out that Guy Hoffman at MIT Media Lab has created a real-life version called AUR! Well, it’s sort of similar. OK, it doesn’t hop around, but it does interact with humans, and could be used as part of an interactive office environment. A promotional video on the MIT site shows how the lamp might help someone at work. This video has also been put onto YouTube:

[kml_flashembed movie="http://www.youtube.com/v/4oCVZTrWrKw" width="425" height="350" wmode="transparent" /]

You can see here that AUR has been designed to attend to the human’s point of interest, and moves to light the workspace where their attention is directed. In the video this has been emphasised by making the office environment pretty dark. Some have used this as an excuse to question the usefulness of AUR, suggesting that maybe the invention of the light switch has made this research redundant (look at the comments for this), but I think AUR is an interesting development.

Of course, I am mostly interested not because I particularly want an interactive work environment, but because AUR is a great example of a non-humanoid robot that draws out a variety of responses from humans during interactions. Hoffman’s research has included experiments in which humans and AUR work in partnership to complete a repetitive task, learning from one another as they go, and questionnaires have been used to evaluate the humans’ responses to the robot.

There are many things about this robot that may help me to focus some of my ideas about human-robot interaction.

  • the importance of fluency, rhythm and joint action – the idea that turn-taking is all very well, but not that natural in many situations
  • the combined use of bottom-up and top-down approaches to perception and perceptual analysis
  • working with anticipation and perceptual simulation
  • looking for and acting on patterns of perception between different modalities – searching for meaning through a more holistic view of perception
  • simplifying the perceptual space – looking for the most salient messages and ignoring the others
  • the effect of using non-human form – although it was disappointing, in some ways, to see how this lowered expectations enough to skew the results of the user experiments. The human side of the team was so impressed that the lamp could take voice commands and follow hand signals that it was marked highly for intelligence and commitment even when not programmed to act fluently (i.e. even when not using anticipation and perceptual simulation)
  • while non-humanoid this robot does elicit anthropomorphisation by humans
  • the fact that the robot learned with the human led the human to feel that the lamp was somehow like them
  • humans working with the fluent robot were self-deprecating; they spoke about their mistakes during the task, and some felt that the robot was the more important partner in the team

This project highlights the idea that the way a robot moves is at least as important as, and possibly more important than, its form in supporting human-robot interactions.

In his thesis defense, Hoffman mentions the way that when a robot (in this case in a computer simulation) and a human are working well together (and the robot is in its “fluent” state) it is like watching a dance. This makes me think of Grace State Machines (Bill Vorn), where a robot and human dance as a performance piece, and the link seems all the more appropriate because AUR has also appeared in a play with human actors (although in this role AUR was not acting autonomously).

Hoffman is strongly drawn to creating non-humanoid robots and, I think, would prefer them to be anthropomorphised as little as possible by humans. The idea that using other forms enables a more creative process certainly makes sense to me, although I would not necessarily want the robots to look like existing objects. It might be harder to come up with a novel design, but in some ways that is the way I’d like to see robotics go, in particular for robots destined to be more than partners in working relationships.

However, making familiar objects autonomous does have many possibilities, and another good example is the Fish-Bird project, where autonomous machines were made in wheelchair form. In this case it is particularly important to consider the compromises made for the initial implementation, where the writing arms the artist originally specified were replaced with miniature printers. Here the characters of Fish and Bird were still created; the practical design constraint was successfully overcome by compromise, because the final form of the robots was not completely fixed. Hoffman argues that the aim of building a humanoid robot removes this freedom by providing a final form and behaviour that cannot be compromised: the robot will always be “evaluated with respect to the human original” (Hoffman, Thesis 2007).

Now, I haven’t really got to grips with this yet, but what I want to do next is to consider these human-robot interactions in more depth. I would like to link this with the ideas that I already have in relation to the encounter between self and other in Emmanuel Levinas, and also to consider a theory that I have just come across that uses Levinas to open up a new consideration of communication.

Time for a robot of the day.  This is Bar Bot (to the right :-) of this picture taken by Ewald Elmecker and Flickred by Alexander Barth) at a video shoot:

Bar Bot

Bar Bot’s makers explain that this is probably the most humanoid robot ever built, because it is “driven by self-interest”.  Bar Bot exists to drink beer, and the drinks are on you!  Bar Bot interacts with humans, but its objective is not to get to know you; rather, it just wants your change.  As soon as enough money has been collected, Bar Bot turns to the bar to order a beer.

Although the makers don’t stress this, I like the fact that when Bar Bot finishes its drink it just drops the empty can on the ground.  Another clear reference to human traits there I think!

Bar Bot takes the goal of roboticists – to create the ultimate humanoid robot as a helpful worker or companion – and twists this around to identify a very different and challenging outcome.

not a robot?

Tama links to the Times Online “50 best Robot movies” today.  Of course, I have my own reservations about the list (although I am very distracted by SF robots that have only appeared in print, and often don’t think so much about those on film), but the main thing that interested me was looking at all the comments.  There were so many little arguments over what belongs in the category “robot”, and just so many people who were absolutely sure they were right and everyone else was wrong!

The classification of something as a robot/android/droid/drone/cyborg(/human/person/animal) is obviously something that I mull over pretty much every day, and I still haven’t really found an answer.  Mind you, I’m not really looking that hard as I don’t think one exists, at least not in any clearcut way.

It made me smile, though :-).  In particular, when I realised that no one had yet mentioned the origin of the word robot, and the fact that in Capek’s play, R. U. R., the robots were assembled out of organic material.  That’s always an interesting spanner in the works when trying to clarify the differences between robots, animals and humans.

So, is this relevant for me?

[kml_flashembed movie="http://www.youtube.com/v/7mTb7LYj7KE" width="425" height="350" wmode="transparent" /]

The cockroach controlled mobile robot created by Garnet Hertz. Above is his movie about the project, and below is one from Daily Planet.

[kml_flashembed movie="http://www.youtube.com/v/6_wKE83vxdk" width="425" height="350" wmode="transparent" /]

While this project has resulted in what is strictly a cyborg development, I think that it is interesting that Hertz sees the cockroach as the archetypal posthuman, a more literal successor to humanity “than Fukuyama, Stock or Hayles envisions”. I think this is related to my obsession with the importance of other-than-human robots.

It is pleasing that putting the cockroach in the robot alters people’s reactions to the roach.  The cockroach becomes cool, rather than disgusting, although it still appears to be rather scary if it moves towards you!

Of course, I also like the way he is pleased to have “cornered the market” in “designing wearable technology or exoskeletons for cockroaches”. I also appreciate the idea that “after we’ve all killed each other in WWIII with biomimetic robots, the earth will be happily inhabited by cockroaches. These insects will need something to drive on all of the abandoned freeways.” :-)

Given my interest in machines that look like machines, but still interact with humans, it should come as no surprise that I like the work of Bill Vorn. Of his current projects two are of particular relevance:

  • Grace State Machines – a performance in which a human dances with a machine
  • Protozoic Machine – a machine built to interact with people, but deliberately designed to look like a machine, and not like any living being

I’m sure that I’ll write more about these projects soon, and might be able to visit Bill Vorn towards the end of this year.

The Fish-Bird Project was an art-science collaboration that resulted in an installation exploring the possibilities of creating a dialogue between two robot wheelchairs and human visitors using movement and written text.

There is a lot of information about the project available from the above link. The particular ideas behind this project that interest me are:

  • Trust
  • Intimacy
  • Non-anthropomorphic representation
  • Not cute
  • Movement implying being and being alive
  • Movement as communication
  • Movement and text creating the “sense of a person” (aided by the absence implied by the wheelchairs)
  • Movement indicating awareness, mood, intention

This looks like a great example for my thesis (thanks, Chantal :-), and if I can make it to Sydney I should be able to make arrangements to meet Fish and Bird, although I don’t know if I’ll be able to interact with them in the way shown in the video on the website.

between posts.

That’s probably because my research has been having an identity crisis, and I have been trying to sort this out, while also completing curriculum development for next semester.

Curriculum development takes me forever. Maybe that’s just because I am a beginner at this teaching and learning stuff. Maybe that’s just because it’s hard, particularly if you are a reflective practitioner, which of course I must be because I’m a Teaching Intern ;-).

Anyway, it’s back to research today, with an emphasis on making a workable plan for writing, rather than an outline that looks good until you start trying to do something with it. I have been advised to break what seem to be huge all-encompassing chapters into bite-size chunks. This should work for me better as a writer, but also work for my examiner as a reader. They should find my work easier to chew and maybe swallow, or possibly to spit out in disgust!

The other positive note is that the book I ordered a week and a bit ago should be making its way to Perth by now. This time it’s not just “another one about robots/emotions” to read for my research. It’s about how to write your dissertation in fifteen minutes a day (although the author admits this was a lie to get you to buy the book). Maybe I’m just clutching at straws, but it received good reviews on Amazon, and sounded like it might help with the depression of blank-page-itis.

which means, I was surfing the net. Yeah, really, that is research… only it does tend to result in very easy sidetracking, and also a tendency to become overwhelmed and demoralised by the sheer amount of stuff out there about robots.

So…

I decided that I need to work out a way to immediately categorise things I find into: interesting and useful for research; interesting; and not interesting. I was thinking about this because my basic problem is that I find most things “interesting” and of course if they’re amusing then that’s even better, so I end up trying to consider, or at least feeling that I should consider, all of these things as part of my research (not a good idea)!

The decision I made was that in order to be counted as “interesting and useful for research” the robot in question (whether fictional or factual) must be capable of interacting with humans. The robot should just be given “interesting” status if it simply interacts with the world in such a way as to make humans wish that it also interacted with them.

Now, I realise that this does not help to reduce the size of my research project that well, but strangely it does seem to help with my focus. I can now see that the following are “interesting”:

Theo Jansen

The Mascarillons

While these ones are “interesting” and at least might be “useful for research”:

Autonomous Light Air Vessels (see previous post)

Orirobotics

Of course, since they’re all “interesting” they might all turn up on this blog from time to time in any case!

One of the things I have found in my research so far is that artists seem to be more prepared than roboticists to investigate human interactions with a wide range of forms.  This is a huge generalisation, I suppose, but there certainly seems to be more acceptance of the possibilities of a wide range of interaction types in installation or performance art.

Here, as an illustration, is a link to the Autonomous Light Air Vessels website.  These flying robot “creatures” form an interactive flock, and in version 2 people can use mobile phones to communicate with either one ALAV or the group as a whole, with this communication altering the individual or flock behaviour.

It is sometimes difficult to see the ALAVs’ reactions in the videos, but I find them fascinating, and would love the opportunity to interact with them myself.  The fact that they fly brings them close to some of my science fiction robot inspirations (more on these in a future post) and maybe this is why I am so drawn to these creations.

[kml_flashembed movie="http://www.youtube.com/v/c_IkUysQASQ" width="425" height="350" wmode="transparent" /]

And here, just for fun, is the revenge of the robot arm from the previous post. Set to the Chemical Brothers song Believe, this one was pointed out to me by George after a conference presentation in which I showed the GM Advertisement.

[kml_flashembed movie="http://www.youtube.com/v/UQKk3PI-DW8" width="425" height="350" wmode="transparent" /]

So, as you can see from this General Motors advertisement maybe robots don’t need to be humanoid or to have faces in order to convey their feelings in such a way that they can be understood. (Although the music obviously helps in this video!)

I find this idea fascinating. I suppose it appeals to me because I am working to support the idea that robots could be of many varied forms, and yet still be able to take part in sophisticated human-robot interactions.

kismet

Cut from original image © Jared C. Benedict in Wikimedia Commons

The robot of the day is Kismet, designed and built at MIT. Kismet was probably one of the robots that first made me start thinking along the line of my current research.

In recent months my research has used Kismet mainly as an example of a robot where the concentration of design has been on the face. My research questions whether faces are a requirement for successful human-robot interactions, and more broadly, whether robots need to be recognisably human-like in order to support sophisticated human-robot communication.

In general, I would like to argue that in fact there are tremendous possibilities and advantages in using other forms for robot design.

This blog was created primarily to hold pages of information that I might want to direct people towards. For example, a curriculum vitae and academic portfolio information.

It is vaguely possible that I’ll get around to actually making posts to this blog as well. At least I feel better about this forum, rather than the university managed one that for some reason made me feel like BB was watching me.

Eleanor Sandry
