John Wyatt
 

Artificial intelligence and simulated relationships (II)

Should we teach our children to be polite to Alexa, to say please and thank you, to respect its ‘virtual’ feelings? Or is it of no significance if children abuse, tease and bully a simulated slave-person?

JUBILEE CENTRE · John Wyatt · 26 February 2020, 15:23
Photo: Mahdis Mousavi. Unsplash (CC0).

This is the second part of a Cambridge Paper published by Prof. John Wyatt with the Jubilee Centre. You can read the first part here.



Anthropomorphism



At the root of many of the issues surrounding human–machine interactions is our profound inbuilt tendency to anthropomorphism: our capacity to project human characteristics onto non-human animals and inanimate objects.



Commercial manufacturers and software developers are expending considerable effort to find ways to elicit our anthropomorphic tendencies. Most companies are not aiming to replicate the human form exactly, since robots that are too similar to humans are often perceived as ‘creepy’ and repellent (the so-called ‘uncanny valley’). Instead, the focus is on creating ‘cute’, friendly, child-like beings, to enable us to suspend disbelief and enjoy the interaction.



A particular focus is on the robotic or virtual face, and especially on the eyes. Eye movements and gaze appear to be a central non-verbal cue in helping humans understand the intentions of other social agents.



Establishing eye contact, and experiencing the other as ‘looking back at us’, enables us to reach out to another person who is conceived and rationalised as being ‘behind the eyes’.



A second focus for eliciting anthropomorphism is the development of increasingly human-like speech. This, of course, is a vital difference between human–animal relationships and human–AI relationships. Although a dog may appear to understand human speech (at least in part), it cannot speak back to its owner.



As I gaze into my dog’s eyes, I have no idea what it is thinking – or even if it is thinking at all.



But when my pet robot or AI chatbot speaks back to me, something momentous has happened. The machine appears to be communicating to me the hidden thoughts of its ‘mind’ – revealing its own subjective experience, conscious awareness, desires and intentions.



Except, of course, that it has none of these. So to give a machine the capacity for human-like speech seems both powerful and manipulative.



The commercial use of powerful anthropomorphic mechanisms opens us up to new forms of manipulation and even abuse. As journalist David Polgar put it, ‘Human compassion can be gamed. It is the ultimate psychological hack; a glitch in human response that can be exploited in an attempt to make a sticky product. That’s why designers give AIs human characteristics in the first place: they want us to like them.’[9]



 



Analogous personhood or simulated personhood?



From a positive perspective, some intelligent machines may be perceived as holding, in Nigel Cameron’s phrase, at least ‘analogous personhood’.[10] They are not real human persons but they can play to a limited extent some of the same social roles.



They can give us an experience which is analogous to human friendship, and this may have many beneficial consequences. An analogous friend can teach me how to build friendships with a real person and can play the role of friend when my real friend is absent. An analogous carer can give me part of the experience of being cared for.



There has been some discussion as to whether social robots should be designed to supplement existing human relationships or to replace them. Some have emphasised that social robots are meant to partner with humans and should be designed to ‘support human empowerment’.



It is argued that, when used correctly, social robots can even be a catalyst for human–human interaction. This inevitably raises new questions. Children who grow up with a sophisticated digital assistant, such as Alexa or Google Home, are practising relationships with an entity that is human-like, but whose sole purpose and function is to attend obediently to every human command and whim.



This is to model a relationship with a (usually female) human slave. So should we teach our children to be polite to Alexa, to say please and thank you, to respect its ‘virtual’ feelings? Or is it of no significance if children abuse, tease and bully a simulated slave-person?



The ‘relationally sensitive’ responses that chatbots generate are those that their programmers have prioritised. So it is inevitable that they reflect what programmers think is desirable in a submissive and obedient relationship.



The relationship that is offered reflects the demographic of the programmers: mainly male, young, white, single, materialistic and tech-loving. The image of the Silicon Valley engineer is reflected in their machines; these are the hidden humans who are haunting the robots in our homes.



Many young tech specialists seem to have an instrumentalist understanding of relationships.[11] At the risk of over-simplification, they seem to assume that the purpose of a relationship is to meet my emotional needs, to give me positive internal feelings.



So if a ‘relationship’ with a machine is capable of evoking warm and positive emotions it can be viewed as an effective substitute for a human being.



Sherry Turkle reported that children who interacted with robots knew that they were not alive in the way that an animal was alive. But children often described the robot as being ‘alive enough’ to be a companion or a friend.[12]



In a study of children who grew up with robots that were lifelike in appearance or in social interaction, Severson and Carlson reported that children developed a ‘new ontological category’ for them.



‘It may well be that a generational shift occurs wherein those children who grow up knowing and interacting with lifelike robots will understand them in fundamentally different ways from previous generations’.[13] What effects will this have on the emotional development of children?



As Sherry Turkle put it, ‘The question is not whether children will grow up loving their robots. The question is “what will loving mean?”’[14] In other words, how may human relationships become distorted in the future if children increasingly learn about relationships from their interactions with machines?



It is important to consider the wider societal context in which relationships with machines are being promoted and are likely to become increasingly common. There is an epidemic of relational breakdown within families and marriages, as well as increasing social isolation and loneliness.



This leads to a pervasive sense of relational deficiency, and a technologically simulated relationship seems an ideal fix. A private chatbot, available any time and anywhere, appears to offer a degree of intimacy and availability which no human bond can ever match.



There is no doubt that human–machine relationships raise complex ethical, social and philosophical issues, and there have been a number of recent initiatives within the UK and elsewhere aimed at developing regulatory frameworks and ethical codes for manufacturers of AI and robotic technology.[15]



From a Christian perspective there are a number of fundamental questions which seem of special significance:



1.  How should we think of ‘relationships’ with machines within the context of the biblical revelation? What is the fundamental difference between a relationship with another human and a relationship with a machine?



2.  Will the promotion of ‘relationships’ with machines contribute to societal wellbeing and human flourishing? How will simulated relationships influence and change the web of human relationships on which a healthy society is founded? What potential harms may flow from this?



3.  What practical steps can be taken to minimise the potential harms and manipulative potential of machine relationships?



In the remainder of this paper, I shall outline some initial responses from a distinctively Christian and biblical perspective.



John Wyatt is Emeritus Professor of Neonatal Paediatrics, Ethics & Perinatology at University College London, and a senior researcher at the Faraday Institute for Science and Religion, Cambridge.



This paper was first published by the Jubilee Centre.



NOTES



[9] David Polgar, quoted at https://qz.com/1010828/is-it-unethical-to-design-robots-to-resemble-humans/



[10] Personal communication.



[11] Sherry Turkle, Alone Together, Basic Books, 2011.



[12] Ibid.



[13] R. L. Severson and S. M. Carlson, ‘Behaving as or behaving as if? Children’s conceptions of personified robots and the emergence of a new ontological category’, Neural Networks, 2010, 23:1099–1103.



[14] Sherry Turkle, ‘Authenticity in an age of synthetic companions’, Interaction Studies, 2007, 8, 501–517.



[15] House of Lords Select Committee on Artificial Intelligence report, 2018, ‘AI in the UK: ready, willing and able?’, https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf; Nuffield Foundation, ‘Ethical and Societal Implications of Data and AI’, 2018, https://nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf; European Commission, ‘Ethics Guidelines for trustworthy AI’, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai


 

 

