
Elon Musk vs. OpenAI: Tech giants are stoking existential fear to avoid oversight | Kenan Malik

In 1914, on the eve of World War I, H.G. Wells published a novel about the possibility of an even greater conflagration. The World Set Free imagined, three decades before the Manhattan Project, humankind able "to carry about in a handbag an amount of latent energy sufficient to wreck half a city". A global war breaks out, precipitating a nuclear apocalypse. Peace comes only through the establishment of a world government.

Wells was concerned not just with the dangers of new technology but also with the dangers of democracy. His world government was not created by democratic will; it was imposed as a benevolent dictatorship. "The governed will show their consent by silence," England's King Egbert says menacingly. For Wells, the "common man" was "a violent fool in social and public affairs". Only an educated, scientifically minded elite could "save democracy from itself".

A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, tech moguls and academics exult at the immense benefits AI will bring, while also fearing that it may herald the demise of humanity as superintelligent machines come to rule the world. And, as a century ago, questions of democracy and social control are at the heart of the debate.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the technology company that burst into public consciousness two years ago with the release of ChatGPT, a chatbot that can seem deceptively human. Fearful of the potential impact of AI, Silicon Valley moguls had set up the company as a nonprofit charitable trust, with the aim of developing the technology ethically to benefit "humanity as a whole".

Levy asked Musk and Altman about the future of AI. "There are two schools of thought," Musk mused. "Do you want many AIs, or a small number of AIs? We think probably many is good."

"Wouldn't that empower a Dr Evil?" Levy asked. Altman responded that a Dr Evil was more likely to be empowered if only a few people controlled the technology: "Then we'd be in a really bad place."

In reality, that "bad place" is being created by the tech companies themselves. Musk left OpenAI's board six years ago to develop his own AI projects, and is now suing his former company for breach of contract, claiming it has put profit before the public good and failed to develop AI "for the benefit of humanity".

In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the inner workings of the model were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI's founders and at the time its chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using the technology to "cause a great deal of harm". Fear of the technology became a cover for creating a shield against scrutiny.

In response to Musk's lawsuit, OpenAI last week published a series of emails between Musk and other board members. These make clear that, from the beginning, all the board members agreed that OpenAI could never truly be open.

As AI developed, Sutskever wrote to Musk, "it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it's totally OK to not share the science". "Yup," Musk replied. Whatever the merits of the lawsuit, Musk, no less than other tech moguls, has hardly been a champion of openness. His legal challenge to OpenAI is less an attempt at accountability than a power struggle within Silicon Valley.

Wells wrote The World Set Free at a time of great political turbulence, when many were questioning the wisdom of extending the franchise to the working class.

"Was it really desirable, was it safe, to entrust" to the masses, the Fabian Beatrice Webb wondered, "the creation and control through the ballot box of the government of Britain, with its enormous wealth and far-flung dominions"? This was the question at the heart of Wells's novel, too: to whom can one entrust the future?

A century later, we are once again embroiled in fierce debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing the irrational and the uneducated a say in key decisions. "It is unfair to thrust on to unqualified simpletons the responsibility to take historic decisions of great complexity and sophistication," Richard Dawkins said after the Brexit referendum. Wells would have agreed.

For others, it is precisely such contempt for ordinary people that feeds democracy's flaws, leaving large sections of the population feeling deprived of a say in how society is run.

It is a disdain that also shapes discussions of technology. Like The World Set Free, the debate about AI fixates not just on the technology itself but on questions of openness and control. Despite the hype, we are far from "superintelligent" machines. Today's AI models, such as ChatGPT, or Claude 3, released last week by another AI company, Anthropic, are so good at predicting what the next word in a sequence should be that they can trick us into believing they can hold human-like conversations. But they are not intelligent in any human sense, have negligible understanding of the real world, and are not about to destroy humanity.

The problems posed by AI are not existential but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should be not that machines may one day exercise power over humans, but that they already work in ways that reinforce inequalities and injustices, providing those in power with tools to consolidate their authority.

That is why what we might call the "Egbert manoeuvre", the argument that some technologies are so dangerous that they must be controlled by a select few in defiance of democratic pressure for openness, is so menacing. The problem is not just a Dr Evil, but also those who use the fear of Dr Evil to shield themselves from scrutiny.

Kenan Malik is a columnist for the Observer

https://www.theguardian.com/commentisfree/2024/mar/10/ai-wont-destroy-us-but-tech-giants-use-fear-it-will-to-evade-scrutiny
