
OpenAI lays out its preferred version of AI regulation in new 'blueprint'

    OpenAI on Monday released what it calls an “economic blueprint” for artificial intelligence: a living document that lays out the policies the company believes it can develop with the U.S. government and its allies.

    The blueprint includes a foreword from Chris Lehane, OpenAI's vice president of global affairs, which asserts that the United States must act to attract billions of dollars in funding for the chips, data, energy and talent needed to “win in artificial intelligence.”

    “Today, while some countries ignore AI and its economic potential, the U.S. government can pave the way for its AI industry to maintain the country's global leadership in innovation while protecting national security,” Lehane wrote.

    OpenAI has repeatedly called on the U.S. government to take more substantive action on artificial intelligence and the infrastructure needed to support the technology's development. The federal government has largely left AI regulation to the states, a situation OpenAI's blueprint describes as untenable.

    In 2024 alone, state lawmakers introduced nearly 700 AI-related bills, some of which conflict with each other. For example, Texas’ Responsible Artificial Intelligence Governance Act imposes onerous liability requirements on developers of open source AI models.

    OpenAI CEO Sam Altman has also criticized existing federal laws such as the CHIPS Act, which seeks to revitalize the U.S. semiconductor industry by attracting domestic investment from the world's top chipmakers. In a recent interview with Bloomberg, Altman said the CHIPS Act “(didn't) work as well as any of us had hoped” and that he believed the Trump administration had “a real opportunity” to do better as a follow-up.

    “One thing I really agree with (Trump) on is how difficult it has become to build things in America,” Altman said in the interview. “Power plants, data centers, anything like that. I understand how bureaucracy builds up, but it doesn't help the country as a whole. It's especially unhelpful when you think about what the U.S. needs to do to lead in artificial intelligence. And the U.S. really needs to lead in artificial intelligence.”

    To power the data centers needed to develop and run artificial intelligence, OpenAI's blueprint recommends “significantly” increasing federal spending on power and data transmission, as well as meaningfully building out “new energy sources” such as solar, wind farms and nuclear power. OpenAI and its artificial intelligence rivals have previously backed nuclear power projects, arguing they are needed to power the next generation of server farms.

    The nuclear energy efforts of tech giants such as Meta and AWS have hit snags, albeit for reasons that have nothing to do with nuclear power itself.

    In the near term, OpenAI's blueprint recommends that governments “develop best practices” for model deployment to prevent abuse, “streamline” the AI industry's cooperation with national security agencies, and establish export controls that allow models to be shared with allies while “limiting” their export to “hostile countries.” Additionally, the blueprint encourages the government to share certain national security-related information with vendors, such as briefings on threats to the AI industry, and to help vendors obtain the resources needed to assess the risks of their models.

    “The federal government’s approach to the safety and security of cutting-edge models should simplify requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them build their own AI ecosystems, including their own communities of developers that innovate with AI and distribute its benefits, while also building AI on American technology, not technology funded by the Chinese Communist Party.”

    OpenAI already counts some U.S. government agencies as partners and will add more if its blueprint gains buy-in among policymakers. The company has an agreement with the Pentagon for cybersecurity work and other related projects, and has partnered with defense startup Anduril to provide artificial intelligence technology for systems used by the U.S. military to respond to drone attacks.

    In its blueprint, OpenAI calls for drafting standards on behalf of the U.S. private sector that other countries and international bodies “recognize and respect.” But the company stops short of endorsing mandatory rules or decrees. “(The government can create) a clear, voluntary pathway for companies developing (artificial intelligence) to work with government to define model evaluations, test models and exchange information to support companies' safeguards,” the blueprint reads.

    The Biden administration took a similar tack with its artificial intelligence executive order, which sought to establish several high-level, voluntary AI safety standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies the risks of AI systems and works with companies like OpenAI to assess model safety. But Trump and his allies have pledged to repeal Biden's executive order, putting both its codification and the AISI at risk of being dismantled.

    OpenAI’s blueprint also addresses copyright issues related to artificial intelligence, a hot topic. The company believes that AI developers should be able to develop models using “publicly available information,” including copyrighted content.

    OpenAI, like many other artificial intelligence companies, trains its models on public data from the web. The company has licensing agreements with a number of platforms and publishers and offers creators limited ways to “opt out” of its model development. But OpenAI has also said it would be “impossible” to train AI models without using copyrighted material, and some creators have sued the company for allegedly training on their works without permission.

    “Other participants, including developers in other countries, make no effort to respect or engage with the owners of intellectual property rights,” the blueprint reads. “If the United States and like-minded countries do not address this imbalance through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. (Governments should ensure) that AI has the ability to learn from universal, publicly available information, just as humans do, while also protecting creators from unauthorized digital replicas.”

    It remains to be seen which parts of the OpenAI blueprint, if any, will influence legislation. But the proposals are a signal that OpenAI intends to continue to play a key role in the race to unify U.S. artificial intelligence policy.

    OpenAI more than tripled its lobbying spending in the first half of last year, spending $800,000, compared with $260,000 for all of 2023. The company has also added former government leaders to its executive ranks, including former Defense Department official Sasha Baker, former NSA Director Paul Nakasone and Aaron Chatterji, formerly the chief economist at the Commerce Department under President Joe Biden.

    As it hires and expands its global affairs unit, OpenAI has become more vocal about which AI laws and rules it prefers, for instance backing a Senate bill that would create a federal AI rulemaking body and establish federal scholarships for AI R&D. The company has also opposed bills, notably California's SB 1047, arguing it would stifle AI innovation and drive out talent.

