Recently I had the opportunity to present about robots and emotion. This was a topic I’ve talked about before, and wrote about in my PhD thesis. Some of the ideas also came through in Robots and Communication, but as I prepared for The Future of Emotions conference I realised that quite a few of my thoughts had never really appeared in any of my publications. There were lots of new robots (and ideas) to be included, of course, and I had a great time giving this talk.
For once, I remembered to record my talk, so I’ve decided to upload it here, along with my notes as a form of transcript (because I did stick to my notes on this occasion). I’ve also included a list of links below the recording, so that you can locate the websites for the robots I discuss. I’ve not put up my slides, because they contained some images that I only used under Fair Dealing for Research and Study, and so don’t want to publish on the public web. However, by visiting the websites you’ll get to see the robots and bots in detail.
Websites for the robots that I discuss (in the order they appear):
It’s always a bit of a gift when an international conference takes place just down the road. (At least, that’s what I think, as someone who doesn’t really enjoy travelling to attend conferences, although I always have a great time once I’m there.) The University of Western Australia is where I studied for my PhD. It has a beautiful campus, which I admit I skived off in order to explore in the beautiful sunshine (and, yes, winter in Western Australia is like this quite a lot of the time).
Please let me know, via a comment or an email, if you have any problems accessing the recording or the notes. This is the first time I’ve put up a talk in this way, so it’s an experiment.
We have to see the different maps as answering different kinds of question, questions which arise from different angles in different contexts. … The plurality that results is still perfectly rational. It does not drop us into anarchy or chaos (Midgley, 2002, p. 82).
The conclusion to Robots and Communication is very short, because really Part III was designed to wrap up the arguments of the chapters, while also taking the opportunity to draw together and further extend some overarching themes. In the conclusion, my aim was to explain one way of envisioning how the use of a number of theories can be helpful in understanding communicative situations, without the need to value one theory over another. In doing this, I drew on Mary Midgley’s writing about scientific theories, which I had found very helpful as I grappled with the research that was the basis for the book.
Midgley, M. (2002). Science and poetry. London; New York: Routledge.
[R]ecognizing all forms of agency precisely allows us to speak of ethics and responsibility in a very practical and incarnated way (Cooren, 2010, p. 6).
As a continuation of the previous chapter’s musings about boundaries between different types of being, this chapter revisited some of the ideas in earlier discussions that related to the balancing act of paying attention to individual communicators and also the dynamic system of communication that forms over the course of an interaction. The analysis draws upon François Cooren’s arguments about activity and agency, which allow one to, amongst other things, re-position responsibility within a system, as opposed to with an individual. Part of the goal of this chapter was also to move beyond discussing short-term interactions between humans and robots, to consider the way that these relations might develop over longer periods of time. In doing this I tried to get across the sense that such relations are always asymmetrical, but that this asymmetry is flexible if the relation responds to changing situations, and the participants have developed trust and respect for each other (where in a human-robot pairing, it is the human’s trust in, and respect for, the robot that will be most important to the adaptability of the team).
Cooren, F. (2010). Action and agency in dialogue: Passion, incarnation and ventriloquism. Amsterdam; Philadelphia: John Benjamins Pub. Co. Available at: http://public.eblib.com/choice/publicfullrecord.aspx?p=623314 (Accessed: 10 December 2014).
‘Incredible!’ breathed Arthur, ‘the people … ! The things … ?’ ‘The things,’ said Ford Prefect quietly, ‘are also people.’ ‘The people …’ resumed Arthur, ‘the … other people …’ (Adams, 1980, p. 83).
This chapter (as the first in Part III, Rethinking Robots and Communication) was an opportunity to draw out some ideas about perceptions of, and assumptions about, boundaries between humans, animals and machines. It built in particular on the argument in the previous chapter, which identified the importance of trust and respect in working relations between humans and robots collaborating to complete a joint task. The quotation, quite possibly the one that I was most disappointed to lose, sums up the idea that it is better to think about animals and robots as (at least somewhat) like people, as opposed to as things. I think that language makes it difficult to articulate the complex boundary relations between different beings, whether they are human animals, other animals, plants or machines, and the more I think about it now (more than a year after my book was published) the more I find myself still struggling to express ideas about a range of beings as agents, while also keeping a clear recognition of the absolute differences between them (where difference is not framed as negative, but rather as of positive value to interrelationships).
Adams, D. (1980). The restaurant at the end of the universe. London: Pan Books.
In this blog’s profile image I appear with Keepon, Sparki and Chris Lynas’ depiction of a drone from Iain M. Banks’ Culture.
The photographs of Keepon and Sparki are my own and the drone image appears courtesy of Chris Lynas.