For more than two years, technology leaders at the forefront of artificial intelligence development made an unusual request of lawmakers: they wanted Washington to regulate them.
Tech executives warned legislators that generative AI, which can produce text and images that mimic human creations, could disrupt national security and elections, and could eventually eliminate millions of jobs.
AI could go “pretty wrong,” Sam Altman, OpenAI’s chief executive, testified to Congress in May 2023. “We want to work with the government to prevent that from happening.”
But since President Trump’s election, tech leaders and their companies have changed their tune, and in some cases reversed course, making aggressive demands of the government in what has become an all-out push to promote their products.
In recent weeks, Meta, Google, OpenAI and others have asked the Trump administration to block state AI laws and to declare it legal for them to use copyrighted material to train their AI models. They are also lobbying to use federal data to develop the technology, as well as for easier access to energy sources for their computing demands. And they have asked for tax breaks, grants and other incentives.
The shift has been enabled by Mr. Trump, who has declared that AI is the nation’s most valuable weapon for outpacing China in advanced technologies.
On his first day in office, Mr. Trump signed an executive order rolling back safety-testing rules for AI used by the government. Two days later, he signed another order soliciting industry proposals to create a policy to “sustain and enhance America’s global AI dominance.”
Tech companies “are really emboldened by the Trump administration, and even issues like safety and responsible AI have completely disappeared from their concerns,” said Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies. “The only thing that matters is establishing U.S. leadership in AI.”
Many AI policy experts worry that such unbridled growth could be accompanied by, among other potential problems, the rapid spread of political and health misinformation; discrimination by automated screeners of financial, job and housing applications; and cyberattacks.
The reversal by tech leaders is stark. In September 2023, more than a dozen of them endorsed AI regulation at a summit on Capitol Hill organized by Senator Chuck Schumer, the New York Democrat and majority leader at the time. At the meeting, Mr. Musk warned of the “civilizational risks” posed by AI.
In the aftermath, the Biden administration began working with the biggest AI companies to voluntarily test their systems for safety and security weaknesses, and mandated safety standards for the government. States like California introduced legislation to regulate the technology with safety standards. And publishers, authors and actors sued tech companies over their use of copyrighted material to train AI models.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to AI systems.)
But after Mr. Trump won the election in November, tech companies and their leaders immediately ramped up their lobbying. Google, Meta and Microsoft each donated $1 million to Mr. Trump’s inauguration, as did Mr. Altman and Apple’s Tim Cook. Meta’s Mark Zuckerberg threw an inauguration party and has met with Mr. Trump numerous times. Mr. Musk, who has his own AI company, xAI, has spent nearly every day at the president’s side.
In turn, Mr. Trump has hailed AI announcements, including a plan by OpenAI, Oracle and SoftBank to invest $100 billion in AI data centers, which are huge buildings full of servers that provide computing power.
“We have to lean into the AI future with optimism and hope,” Vice President JD Vance told government officials and tech leaders last week.
At an AI summit in Paris last month, Mr. Vance also called for “pro-growth” AI policies and warned world leaders against “excessive regulation” that could “kill a transformative industry just as it’s taking off.”
Now tech companies and others affected by AI are submitting responses to the president’s second AI executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” which mandated the development of a pro-growth AI policy within 180 days. Hundreds of them have filed comments with the National Science Foundation and the Office of Science and Technology Policy to influence that policy.
OpenAI filed 15 pages of comments, asking the federal government to pre-empt states from creating AI laws. The San Francisco-based company also invoked DeepSeek, a Chinese chatbot created for a small fraction of the cost of U.S.-developed chatbots, calling it an important “gauge of the state of this competition” with China.
If Chinese developers “have unfettered access to data and American companies are left without fair-use access, the race for AI is effectively over,” OpenAI said, asking the U.S. government to free up data to feed into its systems.
Many tech companies also argued that their use of copyrighted works to train AI models was legal, and that the administration should take their side. OpenAI, Google and Meta said they believed they had legal, fair-use access to copyrighted works such as books, films and art for training.
Meta, which has its own AI model, called Llama, urged the White House to issue an executive order or other action to “clarify that the use of publicly available data to train models is unequivocally fair use.”
Google, Meta, OpenAI and Microsoft have argued that their use of copyrighted data was legal because the information is transformed in the process of training their models and is not used to replicate the rights holders’ intellectual property. Actors, authors, musicians and publishers have countered that the tech companies should compensate them for obtaining and using their works.
Some tech companies have also lobbied the Trump administration to endorse “open source” AI, which essentially makes computer code freely available to be copied, modified and reused.
Meta, which owns Facebook, Instagram and WhatsApp, has pushed hardest for open-source policies, which other AI companies, like Anthropic, have described as heightening safety risks. Meta has said open-source technology speeds up AI development and can help startups catch up with more established companies.
Andreessen Horowitz, a Silicon Valley venture capital firm with stakes in dozens of AI startups, also called for support of open-source models, which many of its companies rely on to create AI products.
Andreessen Horowitz also made the bluntest arguments against new regulations: existing laws on safety, consumer protection and civil rights are sufficient, the firm said.
Existing laws “prohibit harm and punish bad actors, but they do not require developers to jump through onerous regulatory hoops based on speculative fear,” Andreessen Horowitz said in its comments.
Others continued to warn that AI needs to be regulated. Civil rights groups called for audits of systems to ensure they do not discriminate against vulnerable populations in housing and employment decisions.
Artists and publishers said AI companies should disclose their use of copyrighted material, and asked the White House to reject the tech industry’s argument that its unauthorized use of intellectual property to train models is within the bounds of the law. The Center for AI Policy, a think tank and lobbying group, called for third-party audits of systems for national security vulnerabilities.
“In any other industry, if a product harms or negatively impacts consumers, that product is considered defective, and the same standard should apply to AI,” said KJ Bagchi, vice president of the Center for Civil Rights and Technology, which submitted one of the requests.