
OpenAI CEO says AI could be smarter than ‘experts’ within 10 years

Artificial intelligence could surpass "expert skill level" in most fields within 10 years, and trying to stop "superintelligence" from emerging would not work, OpenAI CEO Sam Altman said in a Monday blog post.

"In terms of both potential good and bad, superintelligence will be more powerful than any other technology humanity has had to contend with in the past," Altman wrote in the post, which was co-authored with two other OpenAI executives, Greg Brockman and Ilya Sutskever.

Altman's predictions and warnings come after he told a Senate committee that artificial intelligence could go "in the wrong direction." The rapid emergence of AI tools like OpenAI's ChatGPT and Google's Bard has sparked debate and concern about the technology's impact on everything from jobs, with some experts suggesting nearly 1 in 5 jobs could be affected, to education, where students now use AI to help write papers.

The growing power of AI could help humanity, OpenAI executives wrote in a blog post. But as AI develops into “superintelligence,” they added, there will likely be a need to regulate the technology to prevent it from causing harm.

“Given the potential existential risk, we can’t just react after the fact,” Altman and co-authors write. “Nuclear power is commonly used as a historical example of a technology with this property, and synthetic biology is another example.”

They added, "Today's AI technology also needs to be de-risked, but superintelligence will require special treatment and coordination."

That may require an agency like the nuclear industry's International Atomic Energy Agency to regulate superintelligence, they noted. Some lawmakers have also proposed a committee to oversee AI.

"Any effort above a certain capability (or resource, such as computing) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security," they wrote.

They added that trying to stop the emergence of superintelligence would not work.

Superintelligence is "essentially part of the technological path we are on; stopping it would require some kind of global surveillance system, and even that is not guaranteed to work," they wrote. "So we have to get it right."


Source: https://www.cbsnews.com/news/ai-smarter-than-experts-in-10-years-openai-ceo/
