We have to see the different maps as answering different kinds of question, questions which arise from different angles in different contexts. … The plurality that results is still perfectly rational. It does not drop us into anarchy or chaos (Midgley, 2002, p. 82).
The conclusion to Robots and Communication is very short, because Part III was really designed to wrap up the arguments of the chapters, while also taking the opportunity to draw together and further extend some overarching themes. In the conclusion, my aim was to explain one way of envisioning how the use of a number of theories can be helpful in understanding communicative situations, without the need to value one theory over another. In doing this, I drew on Mary Midgley’s writing about scientific theories, which I had found very helpful as I grappled with the research that was the basis for the book.
Midgley, M. (2002) Science and poetry. London; New York: Routledge.
Chapter 7 Communication, Individuals and Systems
[R]ecognizing all forms of agency precisely allows us to speak of ethics and responsibility in a very practical and incarnated way (Cooren, 2010, p. 6).
As a continuation of the previous chapter’s musings about boundaries between different types of being, this chapter revisited some of the ideas in earlier discussions that related to the balancing act of paying attention to individual communicators while also attending to the dynamic system of communication that forms over the course of an interaction. The analysis drew upon François Cooren’s arguments about activity and agency, which allow one, amongst other things, to re-position responsibility within a system, as opposed to with an individual. Part of the goal of this chapter was also to move beyond discussing short-term interactions between humans and robots, to consider the way that these relations might develop over longer periods of time. In doing this I tried to get across the sense that such relations are always asymmetrical, but that this asymmetry is flexible if the relation responds to changing situations and the participants have developed trust and respect for each other (where, in a human-robot pairing, it is the human’s trust in, and respect for, the robot that will be most important to the adaptability of the team).
Cooren, F. (2010) Action and agency in dialogue: Passion, incarnation and ventriloquism. Amsterdam; Philadelphia: John Benjamins Pub. Co. Available at: http://public.eblib.com/choice/publicfullrecord.aspx?p=623314 (Accessed: 10 December 2014).
Chapter 6 Humans, Animals and Machines
‘Incredible!’ breathed Arthur, ‘the people … ! The things … ?’ ‘The things,’ said Ford Prefect quietly, ‘are also people.’ ‘The people …’ resumed Arthur, ‘the … other people …’ (Adams, 1980, p. 83).
This chapter (as the first in Part III, Rethinking Robots and Communication) was an opportunity to draw out some ideas about perceptions of, and assumptions about, boundaries between humans, animals and machines. It built in particular on the argument in the previous chapter, which identified the importance of trust and respect in working relations between humans and robots collaborating to complete a joint task. The quotation, quite possibly the one that I was most disappointed to lose, sums up the idea that it is better to think about animals and robots as (at least somewhat) like people, as opposed to as things. I think that language makes it difficult to articulate the complex boundary relations between different beings, whether they are human animals, other animals, plants or machines, and the more I think about it now (more than a year after my book was published), the more I find myself still struggling to express ideas about a range of beings as agents, while also keeping a clear recognition of the absolute differences between them (where difference is not framed as negative, but rather as of positive value to interrelationships).
Adams, D. (1980) The restaurant at the end of the universe. London: Pan Books.
Chapter 5 Collaboration and Trust
‘[M]ethod’ is not what matters most among companion species; ‘communication’ across irreducible difference is what matters. Situated partial connection is what matters. … Respect is the name of the game (Haraway, 2003, p. 49).
In order to take things a bit further, and to explore whether and how humans and robots can work together, this chapter considers Guy Hoffman’s creation, AUR, the robotic desk lamp. AUR was created to find out how flexible relations between humans and robots might support better interactions, in particular when they are working together to complete a task. Some key attributes of the relation between human and robot that support such interaction are summed up in Haraway’s quotation. Communication between humans and AUR definitely takes place across an overtly presented level of “irreducible difference”, and that difference is what allows the partnership to work at all (given that the human directs the movement of the lamp, but relies upon the lamp to turn and shine the right coloured light on cue). Human and robot are in a state of “situated partial connection”, as they work together to complete a repetitive task that both begin to learn about as the iterations of the experiment build up over time. Finally, AUR follows the human’s instructions (depicting a respect for these, although it is impossible to say that this robot ‘respects’ the human exactly), but nonetheless communicates its own understanding of the task when misdirected. The double-take of the lamp (as its ‘head’ moves the way it ‘thinks’ it should go, but then turns back to the human as if to question the misdirection) causes the human to reconsider and issue the correct instruction (in line with the lamp’s ‘understanding’ of the situation). This, alongside the comments of those taking part in the experiments with AUR, demonstrates how human participants develop a respect for the robot and its ability to learn the task, a vital part of completing the task correctly.
Haraway, D. (2003) The companion species manifesto: dogs, people, and significant otherness. Chicago: Prickly Paradigm Press.