29 August 2023 In All Contents

ARTIFICIAL INTELLIGENCE: A FRIEND OR A FOE*

There is a consensus among AI scientists that advanced digital technology poses potential risks and can have adverse effects on humans in the wrong hands. However, they have different predictions concerning the nature of these effects and how they may manifest. The global community, on the other hand, is concerned about the rapid advancement of AI and calls upon states to cooperate in enacting regulations. Is it possible to find a common ground for global AI regulations?

Since the research company OpenAI[1] introduced the Generative Pre-trained Transformer (GPT-4) on 14 March 2023; an increasing number of leading AI researchers, backers, and players have voiced their concerns about the profound risks that Artificial General Intelligence (AGI) poses to humanity.

Soon after ChatGPT-4’s release, the Future of Life (FIL), a globally influential non-profit[2], issued an open letter to the public calling for “all AI labs to immediately pause”. The authors expressed their concerns about the potential threat advanced AI systems pose to information channels, the workforce, and even human civilisation. They called on the US Government to interfere in case the labs do not pause, expressing that digital minds have already surpassed human intelligence and even their creators are unable to control them[3].

The open letter was signed by prominent figures in the AI industry, and it gained significant public attention. Later, it became controversial as some signatories were later revealed to be fake, and others backed their support[4].

Is AI more intelligent than humans?

Unlike humans, AI systems have neither consciousness nor intention. They do not generate perceptions, thoughts, emotions or wishes. They have specified objectives given by humans and an optimisation algorithm to perform the most favourable action to achieve them[5]. An AI is considered intelligent if it chooses actions that are expected to achieve the given objectives. It does not make choices. However, it has a greater capacity for data storage than humans, allowing it to excel at gaming, for instance. Nevertheless, AI’s learning capacity and performance are limited by the volume of collected data. It lacks the context-based intelligence, common sense, awareness, intuition, imagination, and creativity humans have[6].

To this matter, Michael Wooldridge, senior researcher in computer science, calls the assumption of AGI taking over human civilisation exaggerated and grand. He explains that getting AI to perform human-level intelligence is very difficult, if not impossible since only the human brain can produce human-level intelligent behaviours. Moreover, the human brain is very complex, and humanity is far from understanding how it works, let alone modelling it[7].

Currently, AI does not have all aspects of human cognition, but researchers keep working on replicating them. Their goal is to realise AGI which is able to learn, understand and solve any problem that a human can[8]. They are undoubtedly making progress towards this direction. Renowned brain surgeon İsmail Hakkı Aydın has recently announced that ChatGPT4 scored %97 in the brain surgery test he conducted. Such a result means that the AGI model possesses %97 of the knowledge required to be a brain surgeon, excluding practical experience. Prof. Aydın remarks that although AI systems do not emulate emotions, they can recognise and respond to them. Furthermore, he anticipates that emotions will eventually be incorporated into AI systems[9].

While some AI researchers and practitioners predict AI as an “existential risk” for humanity, others call these predictions “futuristic sci-fi worries”.

The critical question is, what happens if AI’s actions are not aligned with the given objective?

Stuart Russel, a prominent AI scientist and a signatory of the FIL’s open letter, states that humans can always switch the machine off. However, if the AI is more intelligent than humans, then it can prevent the switch off to achieve the given objective. According to Prof. Stuart, “the more intelligent the machine, the worse the outcome for humans” as it becomes less prone to human intervention[10].

Another signatory of the open letter, Geoffrey Hinton, also known as the “godfather of AI”, shares similar concerns about the level of intelligence that AI has reached in such a short time. According to Dr Hinton, AI systems’ ability to learn unexpected behaviour from vast amounts of data and their capability to analyse and run their code pose the risk of killer robots becoming a reality. He states that generative AI can be used as a tool for misinformation, and an average person will not be able to distinguish AI-generated photos, videos or text from authentic ones, leading to mass manipulation. He also warns the public about a significant change in the job market as AI will replace jobs that involve rote tasks. Moreover, big tech companies like Google and Microsoft disregard these risks to win the competition in the market. Nevertheless, he acknowledges that AI offers more benefits than risks in the short term, and its development should continue[11].

AI researchers acknowledge the tremendous positive developments that AGI systems may provide for humankind in healthcare, education, engineering, research and science, finance, agriculture, government administration, business and many other fields. However, they also agree that AI has potential dangers and drawbacks. The current AI systems are not accident-free and might be used harmfully by ill will[12]. Still, many researchers disagree with AI posing an “existential risk” to humanity; instead, they point to the risks already occurring.

There is a need for regulations against AI’s actual and present harms.

In this context, Ardinand Narayanan, an award-winning professor in computer science, dismisses the predictions of the FIL’s open letter regarding misinformation, the labour market and existential risks as “speculative, futuristic sci-fi worries”. He argues that such predictions divert the public attention from the present and actual problems and that focus should instead be placed on issues such as overreliance on inaccurate digital tools, labour exploitation by centralised powers and personal data leaks[13].

In a similar vein, DAIR, an AI research institute, in its counterpoint statement, refers to the FIL’s open letter as “a fantasised apocalypse” and further states that such claims cast a shadow on the current and real harms of AI systems such as worker exploitation, massive data theft, misinformation, the concentration of power and social inequalities. Acknowledging AI’s “real and present” risks, the authors stress the need for regulations that enforce transparency and accountability and prevent exploitative labour practices[14].

The international community urges global cooperation to tackle the challenges of AI.

Like the AI community, the international community discusses the benefits and challenges of the rapid advancement of AI technologies. In this regard, the Council of Europe (CoE) warns the European public against an “AI-generated dystopian society” where AI turns against its users to perpetrate injustices and restrict people’s rights[15]. It also points out the lack of transparency, accountability, or safeguards in their development process. Likewise, the United Nations (UN) Secretary-General suggests global cooperation among states, the private sector, civil society, international organisations, academic institutions, and the technical community to maximise the benefits of AI while minimising its risks[16]. It is noteworthy that the FIL provides significant support to the development of the framework of this global cooperation as a civil society organisation[17].

Governments also recognise the immense economic and practical benefits that AI technologies can offer, and the challenges that come along with their use, thereby addressing the need for AI capacity building[18]. The EU refers to AI as “one of the most important applications of data economy” and stresses the importance of its development and deployment on European values and fundamental rights such as human dignity and privacy[19].

AI has transformed and will continue to transform our lives.

AI is a rapidly growing field with tremendous potential to elevate humanity’s living standards. Our lives are transformed with AI-based chatbots, search engines, robots, self-driving cars, drones, security and surveillance, medical care and health treatment, trading, smart cities and more. Even more are coming, and AI will continue to transform our lives in ways beyond our imagination. However, there is also the possibility that this rapid advancement may catch humanity off guard.

As explained by AI experts, the use of AI technologies and systems raises various legal and ethical concerns, including;

  • the protection of privacy and personal data,
  • high-tech profiling,
  • discriminatory practices,
  • disinformation and manipulation,
  • automated decision-making,
  • job displacement,
  • unintended and unexpected consequences,
  • autonomous weapons,
  • the concentration of power and
  • systemic socio-economic inequalities.

Humanity must regulate the development, deployment and use of AI to safeguard potential risks and protect the safety and liberty of individuals to avoid an Orwellian future. However, there are challenges associated with drafting and implementing such regulations.

Firstly, there is no clarity on the problems that AI systems may cause. They have self-learning capacity, which means they can acquire knowledge independently, without human coding. Even the inventors do not know the exact dangers; they can only make predictions. Therefore, it is impossible to foresee future issues and challenges related to the use of AI with certainty. For this matter, only comprehensive regulatory frameworks may be drafted against various predicted risks and potential harms. These frameworks may include a general list of potential risks and negative consequences compiled from different perspectives, experiences and opinions of AI experts, academics, thinkers, and practitioners.

Secondly, the potential impact of AI is not limited to the territory of states. There is a need for a globally common ground to be able to monitor and apply standards effectively. Establishing common standards or even a shared benchmark for AI poses another challenge in and of itself. There are no precise and absolute universal standards, particularly in ethics and human rights. For instance, the Universal Declaration of Human Rights[20] is the most widely known and recognised instrument in international human rights by states. However, there has yet to be a worldwide consensus on the standards set under the Declaration. While it holds a strong moral value, it is not legally binding.

In this respect, we may expect only a framework of general principles and guidelines on a global level regarding ethics and the protection of fundamental rights. Nevertheless, the critical issue is to strengthen the sovereignty of individuals through establishing conscious and well-informed societies which function upon freedom of thought and unmanipulated information channels.

References

Bates Alex, Augmented Mind, Neocortex Ventures, (kindle version), 2018

BBC News, “AI “godfather” Geoffrey Hinton warns of dangers as he quits Google”, By Zoe Kleinman & Chris Vallance; https://www.bbc.com/news/world-us-canada-65452940

Centre for the Study of Existential Risk, “Risks from Artificial Intelligence”, University of Cambridge, https://www.cser.ac.uk/research/risks-from-artificial-intelligence/

CIRCLS, “Glossary for Artificial Intelligence Terms for Educators”; https://circls.org/educatorcircls/ai-glossary

Council of Europe, “Safeguarding human rights in the era of artificial intelligence”, Human Rights Comment, https://www.coe.int/en/web/commissioner/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence?redirect=%2Fen%2Fweb%2Fcommissioner%2Fthematic-work%2Fartificial-intelligence

Council of the European Union, General Secretariat, “ChatGPT in the Public Sector-overhyped or overlooked?”, ART Research Paper, 24 April 2023; https://www.consilium.europa.eu/media/63818/art-paper-chatgpt-in-the-public-sector-overhyped-or-overlooked-24-april-2023_ext.pdf

DAIR, “Statement from the listed authors of Stochastic Parrots on the “AI pause” letter; Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, Margaret Mitchell”; https://www.dair-institute.org/blog/letter-statement-March2023

European Commission, White Paper On Artificial Intelligence-A European approach to excellence and trust, COM (2020) 65 final, Brussels, 19.02.2020; https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065

Ekovitrin, “Yapay zeka ChatGPT dünyada ilk sınavına Türkiye’de girdi”, 22.Mayıs.2023; https://www.ekovitrin.com/yapay-zeka-chatgpt-dunyada-ilk-sinavina-turkiyede-girdi

Future of Life, “FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments”;https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/

Future of Life (FIL), “United Nations”, https://futureoflife.org/person/united-nations/

G20 Ministerial Statement on Trade and Digital Economy, para.17-18; https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf

Kapoor Sayash & Narayan Arvind, “A misleading open letter about sci-fi AI dangers ignores the real risks”, AI Snake Oil, 20.March.2023; https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci

OpenAI, “GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses”; https://openai.com/product/gpt-4

“Policy Work”, https://futureoflife.org/our-work/policy-work/

 “Pause Giant AI Experiments: An Open Letter”; https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Russel Stuart, “Human-Compatible Artificial Intelligence”, Muggleton, Stephen, and Nicholas Chater (eds), Human-Like Machine Intelligence, Oxford University Press, 2021

The Conversation, “AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?”, By Olivier Salvado and Jon Whittle https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911

The Universal Declaration of Human Rights; https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf

The Guardian, “Letter signed by Elon Musk demanding AI research pause sparks controversy”, by Keri Paul and Agencies,; https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt,

New York Times, “The Godfather of AI Leaves Google and Warns of Danger Ahead”, By Cade Metz, 01.May.2023;

UN Secretary-General, Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation, A/74/821, 29.May.2020 https://documents-dds-ny.un.org/doc/UNDOC/GEN/N20/102/51/PDF/N2010251.pdf?OpenElement

Vice, “The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess”, by Chloe Xiang,

https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

Wooldridge Mael, A Brief History of Artificial Intelligence: what it is, where we are and where we are going, Flatiron Books, NY, US Edition, 2021

Zhuang Simon, Hadfield-Menell Dylan, “Consequences of Misaligned AI”, 7 April 2021, NeurIPS 2020; https://arxiv.org/pdf/2102.03896.pdf

* The author got assistance from the free version of ChatGPT as an AGI tool in editing this article.

[1] OpenAI, “GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses”; https://openai.com/product/gpt-4, (ref. date: 30.05.2023); OpenAI was founded in 2015 by Sam Altman, Elon Musk and several former researchers from other AI companies. To read about the company history; Council of the European Union, General Secretariat, “ChatGPT in the Public Sector-overhyped or overlooked?”, ART Research Paper, 24 April 2023, p.5-7; https://www.consilium.europa.eu/media/63818/art-paper-chatgpt-in-the-public-sector-overhyped-or-overlooked-24-april-2023_ext.pdf

[2] The Future of Life (FIL) advocates the governance of AI at national and international levels; “Policy Work”, https://futureoflife.org/our-work/policy-work/, (ref. date: 23.04.2023)

[3] “Pause Giant AI Experiments: An Open Letter”, 22. March. 2023; https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (ref. date:20.04.2023); “FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments”, 31 March 2023,https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/,(ref. date: 20.04.2023)

[4] The Guardian, “Letter signed by Elon Musk demanding AI research pause sparks controversy”, by Keri Paul and Agencies, 01.April.2023; https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt, (ref. date: 25.05.2023); Vice, “The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess”, by Chloe Xiang, 29 March 2023,

https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess, (ref. date: 25.05.2023)

[5] Simon Zhuang, Dylan Hadfield-Menell, “Consequences of Misaligned AI”, 7 April 2021, NeurIPS 2020, p.1; https://arxiv.org/pdf/2102.03896.pdf

[6] Alex Bates, Augmented Mind, Neocortex Ventures, (kindle version), 2018, p.9-10,194

[7] Michael Wooldridge, A Brief History of Artificial Intelligence: what it is, where we are and where we are going, Flatiron Books, NY, US Edition, 2021, p. 9, 33, 34-35

[8] For definition of AGI; CIRCLS, “Glossary for Artificial Intelligence Terms for Educators”; https://circls.org/educatorcircls/ai-glossary

[9] Ekovitrin, “Yapay zeka ChatGPT dünyada ilk sınavına Türkiye’de girdi”, 22.Mayıs.2023; https://www.ekovitrin.com/yapay-zeka-chatgpt-dunyada-ilk-sinavina-turkiyede-girdi, (ref. date: 14.06.2023)

[10] Stuart Russel, “Human-Compatible Artificial Intelligence”, Muggleton, Stephen, and Nicholas Chater (eds), Human-Like Machine Intelligence, Oxford University Press, 2021, p.1

[11] “The Godfather of AI Leaves Google and Warns of Danger Ahead”, New York Times, By Cade Metz, 01.May.2023; https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?campaign_id=190&emc=edit_ufn_20230501&instance_id=91516&nl=from-the-times&regi_id=204759275&segment_id=131839&te=1&user_id=8f553b1414ef89a4e88bc03de0099c02, (ref. date: 02.05.2023); The Conversation, “AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?”, By Olivier Salvado and Jon Whittle, 04.May.2023; https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911, (ref. date: 06.05.2023); BBC News, “AI “godfather” Geoffrey Hinton warns of dangers as he quits Google”, By Zoe Kleinman & Chris Vallance 02.May.2023; https://www.bbc.com/news/world-us-canada-65452940 (ref. date: 09.05.2023)

[12] Centre for the Study of Existential Risk, “Risks from Artificial Intelligence”, University of Cambridge, https://www.cser.ac.uk/research/risks-from-artificial-intelligence/, (ref. date: 22.05.2023); https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/ (ref. date: 20.04.2023)

[13] Sayash Kapoor & Arvind Narayan, “A misleading open letter about sci-fi AI dangers ignores the real risks”, AI Snake Oil, 20.March.2023; https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci, (ref. date: 09.05.2023)

[14] DAIR, “Statement from the listed authors of Stochastic Parrots on the “AI pause” letter; Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, Margaret Mitchell”, 31.March.2023; https://www.dair-institute.org/blog/letter-statement-March2023, (ref. date. 25.05.2023)

[15] Council of Europe, “Safeguarding human rights in the era of artificial intelligence”, Human Rights Comment, https://www.coe.int/en/web/commissioner/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence?redirect=%2Fen%2Fweb%2Fcommissioner%2Fthematic-work%2Fartificial-intelligence, (ref. date: 21.05.2023)

[16] UN Secretary-General, Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation, A/74/821, 29.May.2020, p.73,70  https://documents-dds-ny.un.org/doc/UNDOC/GEN/N20/102/51/PDF/N2010251.pdf?OpenElement

[17] Future of Life (FIL), “United Nations”, https://futureoflife.org/person/united-nations/, (09.05.2023)

[18] G20 Ministerial Statement on Trade and Digital Economy, para.17-18; https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf

[19] European Commission, White Paper On Artificial Intelligence-A European approach to excellence and trust, COM (2020) 65 final, Brussels, 19.02.2020, p.1-2; https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065

Source

Leave a Reply

eleven + fourteen =