Discrimination, manipulation, and job destruction...the biggest risks related to AI





Global concern about artificial intelligence is growing by the day. Against this backdrop, the United Kingdom hosted the first global summit on the dangers of artificial intelligence on November 1 and 2. Held at Bletchley Park, the Second World War codebreaking centre, the summit aimed to identify and discuss the risks posed by generative artificial intelligence, such as chatbots like ChatGPT.


From smartphones to cars, artificial intelligence is already present in many aspects of our lives. In recent years, progress in its applications has accelerated, especially with the development of generative artificial intelligence, which can produce text, sound, and images in a matter of seconds. The potential of this new technology has raised enormous hopes: it could revolutionize many fields, such as medicine, education, or the environment.


But its unrestrained development could also pose serious risks to humanity, such as attacks on privacy, disinformation campaigns, or even, according to British Prime Minister Rishi Sunak, the possibility of “manufacturing chemical or biological weapons.”


Here is an overview of the main risks of artificial intelligence.

Misinformation


Artificial intelligence systems are used to generate text, sound, and images, and are capable of producing content that is increasingly indistinguishable from that produced by humans. This content can be misused to deceive users, in particular by creating fake videos or testimonials that appear authentic.


This has been especially true since October 7 and the start of the war between Israel and Hamas, which has been marked by new misleading content shared every day on social networks. An image generated by artificial intelligence, showing Atletico Madrid fans raising a giant Palestinian flag in the stands of their stadium, was widely shared on Twitter and Facebook. A video of the model Bella Hadid, who is of Palestinian descent, was also manipulated to make her say that she supports Israel.


In this information war between Israel and Hamas, such content is used to influence public opinion and damage the reputation of the opposing side.


These “deepfake” images, created from scratch by artificial intelligence, have reached an unprecedented level of realism and pose a threat to political leaders. Pictures of Emmanuel Macron as a garbage collector, Pope Francis in a white puffer jacket, Donald Trump under arrest... these and many other misleading images have been widely shared on social networks, reaching millions of views.

Manipulation


Deception, pressure, exploitation of weaknesses... According to researchers in the field, manipulation is one of the main ethical problems of artificial intelligence. Among the best-known examples is the suicide, a few months ago, of a Belgian man who had formed a strong relationship with a chatbot, an artificial intelligence capable of answering Internet users' questions in real time. Another is the case of the personal assistant Alexa suggesting that a child touch an electrical socket with a coin.


“Even if the chatbot is clearly identified as a conversational agent, users can project human qualities onto it,” says Giada Pistilli, an ethicist at the startup Hugging Face and a researcher at Sorbonne University, pointing to the danger of personification, the tendency to attribute human reactions to animals and objects. “This is because chatbots are becoming ever more effective at simulating human conversation. It is very easy to fall into the trap. In some cases, the user becomes so vulnerable that he is willing to do anything to maintain the relationship” with the robot.


Last February, the Replika app, which lets users create and chat with a custom chatbot, decided to suspend its sexually explicit features. “This decision caused emotional and psychological shock among some users, who felt they had lost a close relationship,” the expert notes. “The engineers who develop these tools often assume that their users will use them safely and responsibly. But that is not always the case. Minors or the elderly, for example, may be particularly vulnerable.”

Destruction of jobs


The arrival of ChatGPT in the lives of millions of people in the fall of 2022 raised many concerns about the transformation of the world of work and its impact on employment. In the medium term, experts fear that artificial intelligence will make it possible to eliminate many positions, from administrative staff and lawyers to doctors, journalists, and teachers.


A study by the American bank Goldman Sachs, published in March 2023, concluded that content-generating artificial intelligence could automate a quarter of current jobs. For the United States and the European Union, the bank estimates a loss equivalent to 300 million full-time jobs. Administrative and legal functions would be the most affected.


“These are the arguments being put forward to raise awareness of the arrival of artificial intelligence,” explains Clementine Bouzier, a doctoral student in artificial intelligence and European law at Jean Moulin Lyon 3 University. “Of course, some jobs will disappear. But other new jobs will emerge around artificial intelligence and digital technology in general, fields that will take on growing importance in our societies.”


Along the same lines, a study by the International Labour Organization published last August indicates that most jobs and industries are only partially exposed to automation. The UN agency believes that this technology “will complement certain activities rather than replace them.”

Discrimination


Currently, the risk of discrimination is one of the main weaknesses of artificial intelligence, according to researchers: its algorithms can propagate racist or sexist stereotypes. A telling example is the hiring algorithm Amazon used a few years ago. In October 2018, it was revealed that the program systematically penalized applications from women, and Amazon ultimately abandoned the tool.






