According to Microsoft President Brad Smith, artificial intelligence could lead to an Orwellian future if legislation to protect the public isn't enacted soon.
Smith made the comments on May 26 in an episode of the BBC news program "Panorama" focusing on the potential dangers of artificial intelligence (AI) and the race between the United States and China to develop the technology. The warning comes about a month after the European Union published proposed regulations that would place limits on how AI can be used. There are few similar efforts in the United States, where legislation has largely focused on limiting regulation and on promoting AI for national security purposes.
"I'm constantly reminded of the lessons of George Orwell's book '1984,'" Smith said. "The fundamental story was about a government that could see everything that everyone did and hear everything that everyone said. That didn't come to pass in 1984, but if we're not careful, it could come to pass in 2024."
Artificial intelligence is an ill-defined term, but it generally refers to machines that can learn and solve problems automatically, without direction from a human operator. Many AI programs today rely on machine learning, a suite of computational methods used to recognize patterns in large amounts of data and then apply those lessons to the next round of data, theoretically becoming more and more accurate with each pass.
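The "more accurate with each pass" idea can be sketched with the simplest machine-learning model, the perceptron, which nudges its internal weights every time it misclassifies an example. The toy pattern, learning rate, and data below are invented for illustration and are not drawn from any system described in this article.

```python
import random

random.seed(0)

# Toy pattern to learn: a point (x, y) is labeled 1 when x + y > 1, else 0.
points = [(random.random(), random.random()) for _ in range(200)]
labeled = [((x, y), 1 if x + y > 1 else 0) for x, y in points]

# Perceptron: two weights and a bias, adjusted toward the correct answer
# whenever the model misclassifies an example.
w = [0.0, 0.0]
b = 0.0

def predict(point):
    x, y = point
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

def accuracy():
    return sum(predict(p) == label for p, label in labeled) / len(labeled)

history = []
for epoch in range(5):                  # each pass over the data...
    for point, label in labeled:
        error = label - predict(point)  # -1, 0, or +1
        w[0] += 0.1 * error * point[0]
        w[1] += 0.1 * error * point[1]
        b += 0.1 * error
    history.append(accuracy())          # ...tends to raise the accuracy
    print(f"pass {epoch + 1}: accuracy = {history[-1]:.2f}")
```

Real systems use far larger models and datasets, but the loop structure — predict, compare against the data, adjust, repeat — is the same basic mechanism.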
This is a powerful approach that has been applied to everything from basic mathematical theory to simulations of the early universe, but experts argue it can be dangerous when applied to social data: human data comes pre-installed with human biases. For example, a recent study in the journal JAMA Psychiatry found that algorithms meant to predict suicide risk performed far worse on Black and American Indian/Alaska Native individuals than on white individuals, in part because there were fewer patients of color in the medical system. Patients of color were also less likely to receive treatment, meaning the original data was skewed to underestimate their risk.
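The skewed-data problem the study points to can be simulated in a few lines. In the sketch below, two hypothetical groups have the same true underlying risk, but the second group's at-risk cases make it into the records only half as often (for instance, because its members are less likely to be treated), so a naive model that learns base rates from the records underestimates that group's risk. All of the numbers here (group sizes, risk level, recording rates) are made up purely for illustration.

```python
import random

random.seed(1)

TRUE_RISK = 0.30  # identical underlying risk for both hypothetical groups

# Group "B" has its at-risk cases recorded only half the time, simulating
# underrepresentation in the data the model will later learn from.
records = []
for group, n, record_rate in [("A", 1000, 1.0), ("B", 1000, 0.5)]:
    for _ in range(n):
        at_risk = random.random() < TRUE_RISK
        recorded = at_risk and random.random() < record_rate
        records.append((group, recorded))

# A naive "model" that estimates each group's risk from its recorded base rate.
estimates = {}
for group in ("A", "B"):
    outcomes = [rec for g, rec in records if g == group]
    estimates[group] = sum(outcomes) / len(outcomes)
    print(f"group {group}: estimated risk = {estimates[group]:.2f}"
          f" (true risk = {TRUE_RISK})")
```

The model's estimate for group A lands near the true 30%, while its estimate for group B lands near 15% — not because group B is at lower risk, but because the records it learned from were incomplete.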
Bias can never be completely avoided, but it can be addressed, said Bernhardt Trout, a professor of chemical engineering at the Massachusetts Institute of Technology who teaches a course on AI and ethics. The good news, Trout told Live Science, is that reducing bias is a top priority in both academia and the AI industry.
"People in the community are very aware of that problem and are trying to address it," he said.
Government misuse of AI, on the other hand, is perhaps more challenging, Trout said. How AI is used isn't just a technical question; it is just as much a political and moral one, and those values vary widely from country to country.
"Facial recognition is an extraordinarily powerful tool in some ways to do good things, but if you want to surveil everyone on a street, or everyone who shows up at a demonstration, you can put AI to work," Smith told the BBC. "And we're seeing that in certain parts of the world."
China has already begun using artificial intelligence technology in both mundane and alarming ways. Facial recognition, for example, is used in some cities instead of tickets on buses and trains. But this also means the government has access to copious data on citizens' movements and interactions, the BBC's "Panorama" found. IPVM, a U.S.-based advocacy group focused on the ethics of video surveillance, has found documents suggesting plans in China to develop a system called "one file per person."
"I don't think Orwell would ever [have] imagined that a government would be capable of this kind of analysis," IPVM director Connor Healy told the BBC.
Orwell's famous novel "1984" depicts a society in which the government watches its citizens through "telescreens," even at home. But Orwell did not imagine the capabilities that artificial intelligence would add to surveillance: in his novel, characters find ways to evade the video monitoring, only to be turned in by fellow citizens.
In the Xinjiang Uighur Autonomous Region, where the Uighur minority has accused the Chinese government of torture and cultural genocide, the BBC found that AI is also being used to track people and even to assess their guilt when they are arrested and interrogated. It is an example of technology enabling widespread human rights abuse: the Council on Foreign Relations estimates that a million Uighurs have been forcibly detained in "re-education" camps since 2017, typically without any criminal charges or legal recourse.
The EU's proposed regulations on AI would ban systems that attempt to subvert users' free will, as well as systems that enable any kind of government "social scoring." Other types of applications are considered "high risk" and must meet requirements for transparency, security, and oversight before going on the market. These include AI for critical infrastructure, law enforcement, border control, and biometric identification such as face- or voice-recognition systems. Other systems, such as customer-service chatbots or AI-enabled video games, are considered low risk and are not subject to strict scrutiny.
The U.S. federal government's interest in artificial intelligence, by contrast, has largely focused on encouraging the development of AI for national security and military purposes. This focus has occasionally led to controversy. In 2018, for example, Google killed Project Maven, a contract with the Pentagon that would have automatically analyzed video taken by military aircraft and drones. The company argued that the goal was only to flag objects for human review, but critics feared the technology could be used to automatically target people and places for drone strikes. Whistleblowers within Google brought the project to light, and ultimately public pressure was strong enough that the company called off the effort.
Nevertheless, the Pentagon now spends over $1 billion a year on AI, and given China's enthusiasm for achieving AI supremacy, military and national security applications of machine learning are inevitable, Trout said.
"You can't do very much to hinder foreign nations' desires to develop these technologies," Trout told Live Science. "Therefore, the best you can do is to develop them yourself, so that you can understand them and protect yourself, while being the moral leader."
Meanwhile, efforts to rein in AI domestically are being led by state and local governments. King County, Washington's largest county, just banned government use of facial-recognition software, becoming the first county in the United States to do so. The city of San Francisco made the same move in 2019, followed by a handful of other cities.
There have already been cases of facial-recognition software leading to false arrests. In June 2020, a Black man in Detroit was arrested and held for 30 hours in detention because an algorithm falsely identified him as a suspect in a shoplifting case. A 2019 study by the National Institute of Standards and Technology found that such software returns more false matches for Black and Asian individuals than for white individuals.
"If we don't enact the laws that will protect the public, we're going to find the technology racing ahead, and it's going to be very difficult to catch up," Smith said.
The full documentary is available on YouTube.
Originally published on Live Science.