Artificial Intelligence: Is pausing experimentation to avoid risks for humanity possible?

Three experts react to a widely distributed open letter calling to “immediately pause for at least 6 months the training of Artificial Intelligence systems” such as the popular GPT-4.

AUTHOR Evangelical Focus, Joel Forster · 31 MARCH 2023, 09:51 h
A recreation of the logo of OpenAI, the company behind the famous GPT-4 tool. / Photo: [link]Ilgmyzin[/link], Unsplash, CC0.

ChatGPT is on everyone’s lips these weeks. Many have signed up to the Artificial Intelligence (AI) text platform to try it out. What will it tell me if I ask the machine which is the best football team in the world? Could it do my homework for me? What is its view on the latest theological controversy?



With millions of people around the world testing the tool, the capabilities of the chatbot (whose technical name is ‘Generative Pre-trained Transformer’) created by the company OpenAI have accelerated the race in the field of Artificial Intelligence.



GPT-4 is the latest version, and has introduced the ability to analyse images. The model has also passed a simulated bar exam.



But many in the field have expressed their concerns about the rapid expansion of these tools. An open letter published by the Future of Life Institute (a non-profit research group that analyses threats to human existence, especially in the field of technology) says the AI race is getting out of control and “giant AI experiments should be paused”. Such new “powerful digital minds”, which could deeply affect social communication, the labour market, the arts and politics, are a “threat to our civilisation”, they say, since “no one – not even their creators – can understand, predict, or reliably control” their outcomes.







The signatories of the letter include people like Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, as well as academics in the communications technology field.



Against this open scenario, Evangelical Focus asked three experts working at the intersection of ethics and technology. Read what they told us.



 





[photo_footer] Patricia Shaw, John Wyatt and Juan Parras Moral shared their views about the AI race with Evangelical Focus. [/photo_footer]


Patricia Shaw



CEO and Founder of Beyond Reach – a tech ethics consultancy.



I am quite cynical about the real-world practical impact this open letter, asking for a six-month voluntary moratorium on the training of large language models at or beyond GPT-4’s capabilities, is going to have. However, the sentiments of the letter are important to note:




  • Upholding truth

  • Maintaining control of the technologies we create – not relinquishing our agency or autonomy

  • Recognising the risks that AI may present which undermine humanity

  • Managing those risks

  • Being accountable for the outcomes of AI

  • Calling for independent experts to develop and help implement safety protocols

  • Calling for independent review



What this letter highlights to me is the clear need for organisations to take responsibility for the societal and economic outcomes of the Artificial Intelligence they create; and that requires ethical and responsible AI governance.



[destacate]“Building an AI ethics governance ecosystem requires funding, staffing and capacity building”

[/destacate]Being involved in responsible AI principles and frameworks, socio-technical standardisation, AI ethics certification and AI governance, all at the intersection of policy, law, technology and ethics, I can see that many of the tools the letter asks for are ready or near-ready.


Organisations designing, developing, deploying, using and monitoring AI need to harness the AI ethics governance ecosystem of regulatory and non-regulatory measures available, but that requires funding, staffing and capacity building.



To rephrase the letter, and to put the boot on the other foot:




  • Are businesses and governments ready to be transparent and to have a serious societal-level dialogue about the kind of AI we can create? Are they ready to risk-assess what it is doing for us and to us, in a way that may affect their go/no-go decisions?

  • Are businesses and governments ready to implement ethical and responsible AI governance and, when they do, willing to truly engage the stakeholders impacted or influenced by an AI system, in a way which is participatory and gives them an empowered voice?

  • Are businesses and governments who create and use AI ‘ready, willing and able’ to be regulated?



Last but by no means least: let us call for bold and innovative leadership that focuses on creating AI which tackles the serious world problems we face (rather than creating more of them), for the benefit of all humanity.



 



John Wyatt



Doctor, author and research scientist specialising in bioethics.



I am fundamentally supportive of the Future of Life Institute’s call for the major tech companies to pause the release of further sophisticated generative AI programs to the general public for a period of six months. This call has already been supported by a significant number of leading academic figures in the AI world.



However, it is important to understand the reasons for calling for this pause. The threat is not that the technology is already developing super-human abilities capable of directly threatening the interests and survival of the human race. That is a dystopian fear derived from science fiction, and the current technology is very far from this level of capability.



As far as I am concerned, the realistic threats relate much more to two aspects.



[destacate]“We should be concerned about fallen human beings using this technology for malign purposes”

[/destacate]1. Recently released technologies like ChatGPT and Bard are already having unanticipated consequences in areas as diverse as:



- High school and university education and examinations - plagiarism using these programs is already rife and largely undetectable. An arms race between the fraudsters and the educators is now under way and likely to continue. 



- Academic research - the creation of fraudulent scientific articles has already become a major problem.



- The creation of fraudulent books, articles, blogs, etc.



- A wave of false and misleading information on scientific, medical and other topics being released onto the internet.



2. This technology offers a huge potential for evil to fallen human beings with malevolent purposes and goals.



A tsunami of disinformation in politically sensitive areas is already happening. The creation of 'deepfakes' has become much more effective and harder to identify and control.



There is a danger that the internet will become flooded with false and misleading content, and that it will become increasingly hard to distinguish truth from subtle lies and disinformation.



So, in retrospect, it was deeply irresponsible for OpenAI, Microsoft and Google to release these programs to the general public with virtually no controls at all. Their motives seem to be entirely commercial: to gain an advantage over their competitors, without any concern for the possible adverse consequences for society as a whole.


I think it is appropriate to support the call for a six-month pause on releasing further generative AI programs whilst appropriate controls and regulatory frameworks are developed.



But we should not give way to dystopian fears about super-human artificial intelligences, since these are clearly not a present reality. We should be concerned about fallen human beings using this technology for malign purposes, rather than about the technology itself becoming malign.





[photo_footer]The website of OpenAI, where the ChatGPT tool can be accessed. / Photo: Emiliano Vittoriosi. [/photo_footer]


Juan Parras Moral



Researcher and lecturer in AI at the Polytechnic University of Madrid (UPM).



To give some context: in the last year, some Artificial Intelligence tools have come out that are starting to have very powerful capabilities: language models (ChatGPT, GPT-4), image generation models (DALL-E, Midjourney), audio transcribers (Whisper)...



What is getting the most attention (and, I think, what motivates the open letter of the Future of Life Institute) are the language models, because they are an evolution of the Google search engine. In the search engine you had to search for keywords, but with these new models you ask questions and they answer you. They have a great advantage (they can save a lot of time by elaborating an answer from potentially numerous sources, combining them; you can even use them to write articles for a newspaper!), but also a great disadvantage (they tend to “hallucinate”, to make things up, so their output is not always reliable and should be understood as a first draft, to be confirmed and refined).
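To make the “first draft” caveat concrete, here is a minimal sketch of how such a model is queried programmatically. It is illustrative only: it assumes the `openai` Python package as it existed in early 2023, and the API key, model name and question are placeholders, not values taken from this article.

    # Minimal sketch: asking a chat-based language model a question in
    # natural language, using the openai Python package (early-2023 API).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; "gpt-4" where available
        messages=[
            {"role": "user",
             "content": "What are the main risks of large language models?"}
        ],
    )

    # Unlike a keyword search, nothing here retrieves documents: the reply
    # is generated text, so it can be fluent and still wrong.
    answer = response["choices"][0]["message"]["content"]
    print(answer)  # treat as a first draft, to be confirmed and refined

The point of the sketch is the last comment: the output is generated rather than retrieved, which is why it should be checked against primary sources before being relied on.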



As of today, there is a lot of hype about the possibilities of these models, and for OpenAI (the company) they are an economic goldmine.



[destacate]“A calm and thorough evaluation of these tools will probably not take place”

[/destacate]As for the letter: it is true that AI companies are in a race to see who can come up with the next big thing, without giving themselves time to evaluate and test the tools and models they already have. This part of the letter is reasonable but, unfortunately, it is going to be ignored (companies prefer profits to having proper models).


My impression: a calm and thorough evaluation of these tools would be necessary because, just as they have great potential (much as Google’s search engine revolutionised life), they also carry great risks: the possibility of generating fake news, phishing schemes and scams, etc. But I believe that this assessment will not take place, because AI right now is a race to see who builds the tool that will set the pace in the next few years. OpenAI is in the lead there; who is going to tell them to slow down?


