In recent months, artificial intelligence (AI) has entered the global conversation following the widespread adoption of AI-powered generative tools such as chatbots and automatic image generation programs. Prominent AI scientists and technologists have expressed concern about the hypothetical existential risks posed by these developments.
Having worked in AI for decades, we were taken by surprise by this surge in popularity and the sensationalism that followed. Our goal with this article is not to antagonize, but to balance a public perception that seems disproportionately dominated by fears of existential threats related to AI.
It is not for us to say that one cannot, or should not, worry about the more exotic risks. As members of the European Laboratory for Learning and Intelligent Systems (ELLIS), a research organization focused on machine learning, we believe it is our job to put these risks into perspective, particularly in the context of government organizations contemplating regulatory action with input from technology companies.
What is Artificial Intelligence?
Artificial intelligence is a discipline within computer science or engineering that took shape in the 1950s. Its aspiration is to build intelligent computational systems, taking human intelligence as a reference. In the same way that human intelligence is complex and diverse, there are many areas within artificial intelligence that aim to emulate aspects of human intelligence, from perception to reasoning, planning and decision-making.
Depending on their level of competence, AI systems can be divided into three categories:
- Narrow or weak AI, which refers to artificial intelligence systems capable of performing specific tasks or solving particular problems, nowadays often with a higher level of performance than humans. All AI systems today are narrow AI. Examples include chatbots such as ChatGPT, voice assistants such as Siri and Alexa, image recognition systems and recommendation algorithms.
- General or strong AI, which refers to artificial intelligence systems that exhibit a human-like level of intelligence, including the ability to understand, learn, and apply knowledge across a broad range of tasks and incorporating concepts such as consciousness. Artificial General Intelligence is largely hypothetical and has not been achieved to date.
- Super AI, which refers to artificial intelligence systems with intelligence superior to human intelligence in all activities. By definition, we are unable to understand this type of intelligence in the same way that an ant is unable to understand our intelligence. Super AI is an even more speculative concept than general AI.
AI can be applied to any field, from education to transportation to healthcare to law or manufacturing. Therefore, it is profoundly changing all aspects of society. Even in its narrow AI form, it has significant potential to generate sustainable economic growth and help us address the most pressing challenges of the 21st century, such as climate change, pandemics and inequality.
Challenges posed by today’s artificial intelligence systems
The adoption of AI-powered decision-making systems over the past decade across a wide range of domains, from social media to the labor market, also poses significant societal risks and challenges that need to be understood and addressed.
The recent emergence of large, highly capable generative pre-trained transformer (GPT) models exacerbates many of the existing challenges while creating new ones that deserve special attention. The unprecedented scale and speed with which these tools have been adopted by hundreds of millions of people around the world are placing additional stress on our social and regulatory systems.
There are a few key challenges that should be our priority:
- The manipulation of human behavior by artificial intelligence algorithms, with potentially devastating social consequences for the dissemination of false information, the formation of public opinion and the outcomes of democratic processes.
- Algorithmic bias and discrimination, which not only perpetuate but exacerbate stereotypes, patterns of discrimination or even oppression.
- The lack of transparency in both models and their uses.
- The invasion of privacy and the use of huge amounts of training data without the consent of, or compensation for, its creators.
- The exploitation of the workers who annotate, train and debug AI systems, many of whom are located in low-wage developing countries.
- The massive carbon footprint of the large data centers and neural networks needed to build these AI systems.
- The lack of veracity in generative artificial intelligence systems, which invent credible content (images, text, audio, video) that does not correspond to the real world.
- The fragility of these large models, which can make mistakes and be deceived.
- The displacement of jobs and professions.
- The concentration of power in the hands of an oligopoly of those who control today’s artificial intelligence systems.
Is artificial intelligence really an existential risk for humanity?
Unfortunately, instead of focusing on these tangible risks, the public conversation, especially recent open letters, has focused primarily on hypothetical existential risks of AI.
An existential risk refers to a potential event or scenario that poses a threat to the continued existence of humanity with consequences that could irreversibly damage or destroy human civilization, and thereby lead to the extinction of our species. A global cataclysmic event (such as an asteroid impact or a pandemic), the destruction of a livable planet (due to climate change, deforestation, or depletion of critical resources such as water and clean air), or a global nuclear war are examples of existential risks.
Our world certainly faces a number of risks, and future developments are difficult to predict. In the face of this uncertainty, we must prioritize our efforts. The remote possibility of a runaway superintelligence must therefore be seen in context, and this includes the context of the 3.6 billion people in the world who are highly vulnerable to climate change; the approximately 1 billion people who live on less than one US dollar a day; and the 2 billion people who are affected by conflict. These are real human beings whose lives are in grave danger today, a danger certainly not caused by super AI.
Focusing on a hypothetical existential risk distracts our attention from the serious documented challenges AI poses today, misunderstands the diverse perspectives of the broader research community, and contributes to needless panic among the population.
Society would certainly benefit from acknowledging the diversity, complexity and nuance of these problems, and from designing concrete and coordinated actionable solutions to address today’s AI challenges, including regulation. Addressing these challenges requires the collaboration and involvement of the most affected sectors of society, together with the necessary technical and governance expertise. It is time to act, now, with ambition and wisdom, and in cooperation.
The authors of this article are board members of The European Lab for Learning & Intelligent Systems (ELLIS).
Nuria Oliver, Director of the Fundación ELLIS Alicante and Honorary Professor, University of Alicante; Bernhard Schölkopf, Max Planck Institute for Intelligent Systems; Florence d’Alché-Buc, Professor, Télécom Paris, Institut Mines-Télécom; Nada Lavrač, PhD, Research Advisor, Department of Knowledge Technologies, Jožef Stefan Institute, and Professor, University of Nova Gorica; Nicolò Cesa-Bianchi, Professor, University of Milan; Sepp Hochreiter, Johannes Kepler University Linz; and Serge Belongie, Professor, University of Copenhagen.
This article is republished from The Conversation under a Creative Commons license. Read the original article.