OpenAI presents its preferred version of AI regulation in a new ‘blueprint’

OpenAI on Monday published what it called an “economic blueprint” for AI: a living document that lays out the policies the company thinks it can build on with the US government and its allies.

The plan, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, asserts that the US must act now to attract billions in funding for the chips, data, energy, and talent needed to “win on AI.” “Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the US government can pave the way for its AI industry to continue the country’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the US government to take more substantive action on AI and infrastructure to support the technology’s development. The federal government has largely left AI regulation to the states, a situation the blueprint describes as untenable. In 2024 alone, state lawmakers introduced nearly 700 AI-related bills, some of which conflict with one another. The Texas Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has also criticized federal laws on the books, such as the CHIPS Act, which aims to revitalize the US semiconductor industry by attracting domestic investment from the world’s top chipmakers. In a recent interview with Bloomberg, Altman said the CHIPS Act “[hasn’t] been as effective as expected,” and that he thinks there is a “real opportunity” for the Trump administration to “do something much better as a follow-on.” “Something I totally agree with [Trump] on is, it’s wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, etc. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the US to lead AI.”
And the US really does need to lead on AI.

To power the data centers needed to develop and run AI, OpenAI’s blueprint recommends “dramatically” increasing federal spending on power and data transmission, and building out “new energy sources” such as solar, wind farms, and nuclear. OpenAI – along with its AI competitors – has previously backed nuclear power projects, arguing that nuclear is needed to meet the electricity demands of next-generation server farms. Tech giants Meta and AWS have run into snags with their nuclear efforts, though for reasons unrelated to nuclear power itself.

In more immediate terms, OpenAI’s blueprint proposes that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that enable the sharing of models with allies while “limit[ing]” their export to “adversary nations.” In addition, the blueprint encourages the government to share national-security-related information, such as briefings on threats to the AI industry, with vendors, and to help vendors secure the resources to evaluate their models for risks.

“The federal government’s approach to frontier model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on US technology, not technology funded by the Chinese Communist Party.”

OpenAI already counts several US government departments as partners, and – if its blueprint gains currency among policymakers – it stands to add more. The company has a deal with the Pentagon for cybersecurity work and other related projects, and it has teamed up with defense startup Anduril to supply its AI technology to systems the US military uses to counter drone attacks.
In its blueprint, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the US private sector. But the company stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” reads the blueprint.

That roughly aligns with former President Joe Biden’s executive order on AI, which sought to establish high-level, largely voluntary safety and security standards. The executive order created the US AI Safety Institute (AISI), a federal government body that studies risks in AI systems and has worked with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification – and the AISI with it – at risk of being undone.

OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.

OpenAI, along with many other AI companies, trains models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and it offers limited ways for creators to “opt out” of its model development. But OpenAI has also said it would be “impossible” to train AI models without using copyrighted material, and a number of creators have sued the company for allegedly training on their works without permission.

“[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” reads the blueprint. “If the US and like-minded nations don’t address this imbalance through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but for the benefit of other economies.”
“[The government should ensure] AI has the ability to learn from universal, publicly available information, just as humans do, while also protecting creators from unauthorized digital replicas.”

Which parts of OpenAI’s blueprint, if any, end up shaping legislation remains to be seen. But the proposal is a signal that OpenAI intends to remain a key player in the race toward a unified US AI policy. In the first half of last year, the company more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. OpenAI has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly chief economist at the Commerce Department under President Joe Biden.

As it makes hires and expands its global affairs division, OpenAI has become more vocal about which AI laws and regulations it prefers, for instance throwing its weight behind a Senate bill that would establish a federal rulemaking body for AI and provide federal scholarships for AI R&D. The company has also opposed bills, in particular California’s SB 1047, arguing that it would stifle AI innovation and drive away talent.
