Artificial intelligence (AI) is a complex field on which many technology companies pin their hopes. While technical progress is increasingly impressive, researchers are calling on companies to temper their claims in order to remain fair and ethical. This is what the Wall Street Journal highlights in an investigation published on June 29.
The awakening of artificial intelligence
Such is the case of LaMDA, one of Google's flagship AIs, known for holding complex conversations with humans while embodying an entity, which alarmed ethics researchers in early June. Blake Lemoine, a software engineer at Alphabet, Google's parent company, claimed the AI he was working on had become sentient. In particular, it reportedly asked for the right to consent to the research of which it is the subject. After an extensive review by about a hundred researchers and engineers, Google refuted the claims of its employee, who has been on administrative leave since June 6 following the affair.
This fervor around AIs capable of meticulously replicating human behavior is pushing some companies to exaggerate the capabilities of their technology. This can distort regulators' view of how reliable these systems really are.
Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a Seattle-based AI research institute, told the Wall Street Journal that "we are no longer objective". According to him, this lack of objectivity shows in the decisions companies make when a scandal breaks out.
Ethics and AI
Several people have already warned about the ethical dangers posed by artificial intelligence. At the end of 2020, Timnit Gebru, co-lead of Google's AI ethics team, was abruptly fired after conducting a study revealing the weaknesses of an AI technology powering the company's famous search engine. She was followed two months later by Margaret Mitchell, who held the same position and was fired after writing a paper pointing out how Google sold its AI to shareholders as "the best in the world".
In their last paper written as employees, the two researchers argued that by mimicking human capabilities, AI technologies have the capacity to cause harm. These hasty dismissals led, in April 2021, to the resignation of another Google executive, Samy Bengio, who said he was "stunned" by the situation. He was a co-founder of Google Brain, a division of Google dedicated to artificial intelligence.
While companies mainly use AI to collect user data or improve search, some push the concept further. At the start of the pandemic, IBM received a proposal to design an AI capable of identifying a feverish, masked person. The company refused the offer, deeming it too intrusive and disproportionate.
Other ambitious projects have also been abandoned. In 2015, Google used AI to analyze emotions such as joy, anger, surprise, and sadness. When the company considered extending the study to other emotions, its ethics committee, the Advanced Technology Review Council, decided not to continue. Its members felt that facial signals vary across cultures and that the risk of bias was too great. More recently, Microsoft chose to restrict access to Custom Neural Voice, its voice-mimicking software, for fear that people's voices would be reproduced without their consent.
Authorities worldwide are questioning the ethics of AI. In November 2021, UNESCO adopted the first global agreement on the ethics of artificial intelligence. It requires tech companies to be transparent about their research and about how their AI systems operate. Its objective is to give individuals more control over their personal data.
For its part, the European Union is seeking to provide a legislative framework for artificial intelligence. The European Parliament's Special Committee on Artificial Intelligence in the Digital Age met last March to set minimum standards for the responsible use of this technology. It is particularly focused on citizens' security, their right to privacy, and data protection.