Saturday, June 7, 2025
The European Union's AI regulation is both a model and a warning for US lawmakers, experts say

Members of the Initiative Urheberrecht (Authors' Rights Initiative) demonstrate in Berlin on June 16, 2023, calling for the regulation of artificial intelligence. The AI Act, later adopted by the European Union, is a model for many US lawmakers interested in consumer protections, but a cautionary tale for others concerned with robust innovation, experts say. (Photo by Sean Gallup/Getty Images)

The European Union's groundbreaking AI Act, which went into effect last year, is inspiring some US lawmakers to pursue broad consumer protections. Others cite it as a cautionary tale about overregulation leading to a less competitive digital economy.

The European Union enacted its law to prevent what is currently happening in the United States, a patchwork of AI legislation across the states, said Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, during an exploratory congressional subcommittee hearing on May 21.

“America’s AI innovators risk being squeezed between the so-called Brussels Effect of overzealous European regulation and the so-called Sacramento Effect of excessive state and local mandates,” said Adam Thierer, a senior fellow at the R Street Institute, at the hearing.

The EU’s AI Act is comprehensive, placing regulatory responsibility on AI developers to mitigate the risk of harm from their systems. It also requires developers to provide technical documentation and training summaries of their models for review by EU officials. If the United States adopted similar policies, it would knock the country out of its leading position in the global AI race, Thierer said.

The “Brussels Effect,” Thierer said, is the idea that EU regulations will influence the global market. But much of the world has not followed suit so far: Canada, Brazil and Peru are working on similar laws, while the United Kingdom and countries such as Australia, New Zealand, Switzerland, Singapore and Japan have taken a less restrictive approach.

When Jeff Le, founder of tech policy consultancy 100 Mile Strategies LLC, talks to lawmakers on each side of the aisle, he said he hears that they don’t want another country’s laws deciding American rules.

“Maybe there is a place for it in our regulatory debate,” Le said. “But I think the point here is that American voters should be governed by American rules, and those rules are not very complicated.”

Is the EU’s AI Act keeping Europe out of the competition?

Critics of the AI Act say its language is overly broad, slowing the development of AI systems as companies work to meet regulatory requirements. France and Germany rank among the top 10 global AI leaders, and China is second, according to Stanford’s AI Index, but the United States currently holds a commanding lead in the number of leading AI models and in AI research, experts told the congressional committee.

Peter Salib, a professor at the University of Houston Law Center, said he believes the EU’s AI Act is a factor, but not the only one, keeping European countries out of the top spots. First, the law has only been in effect for about nine months, not long enough to hold back Europe’s participation in the global AI economy, he said.

Second, the EU’s AI Act is part of a broader attitude toward digital protection in Europe, Salib said. The General Data Protection Regulation, a law that went into effect in 2018 and gives individuals control over their personal data, follows a similarly strict regulatory mindset.

“It’s part of a much longer-term trend in Europe that places a very, very high priority on things like privacy and transparency,” Salib said. “Which is good for Europeans, if that’s what you want, but it does seem to carry serious costs in terms of where innovation happens.”

Stavros Gadinis, a professor at the Berkeley Center for Law and Business who has worked in both the US and Europe, said most concerns about innovation in the EU lie outside the AI Act. Europe’s tech labor market is not as robust as that of the US, and it cannot compete with the vast funding available to Silicon Valley and Chinese companies, he said.

“That’s what’s holding them back, more than this regulation,” Gadinis said. “That, and the law hasn’t really had the chance to show its teeth yet.”

During the May 21 hearing, Rep. Lori Trahan, a Democrat from Massachusetts, called the Republicans’ position that any AI regulation would kill tech startups and growing companies “a false choice.”

The United States invests heavily in science and innovation, has founder-friendly immigration policies, flexible bankruptcy laws and a “cultural tolerance for risk-taking,” Trahan said, all policies the EU does not offer.

“It is therefore wrong and misleading to blame the EU’s tech regulation for its small number of major technology companies,” Trahan said. “The story is much more complicated, but just as the EU may have something to learn from the United States’ innovation policy, we would be wise to study their approach to protecting consumers online.”

Self-governance

The EU law places much of the responsibility on AI developers, requiring transparency, reporting, third-party testing and copyright compliance. These are things AI companies in the US already do, Gadinis said.

“They all say they do this to some extent,” he said. “But the question is how extensive these efforts have to be, especially if you need to convince a regulatory authority.”

AI companies in the United States currently self-police, meaning they test their models for some of the societal and cybersecurity risks that concern many legislators. But there is no universal standard: what one company deems safe may differ from another’s judgment, Gadinis said. Universal regulations would create a baseline for the rollout of advanced models and features, he said.

Even a single company’s safety testing can look different from one year to the next. As recently as 2024, OpenAI CEO Sam Altman supported federal AI regulation and sat on the company’s safety committee, which regularly evaluated OpenAI’s processes and safeguards over 90-day periods.

He left the committee in September and has since spoken out against federal AI legislation. OpenAI’s safety committee has since become an independent body, Time reported. The committee recently published recommendations on improving safety measures, increasing transparency about OpenAI’s work and the company’s safety framework.

Although Altman has changed his stance on federal regulation, OpenAI’s mission still centers on the benefits AI brings to society. “They wanted to create [artificial general intelligence] that would benefit humanity instead of destroying it,” Salib said.

AI company Anthropic, maker of the chatbot Claude, was founded in 2021 by former OpenAI employees and focuses on responsible AI development. Google, Microsoft and Meta are other American AI companies that conduct some form of self-imposed safety testing, and they were all recently evaluated by the AI Safety Project.

The project asked experts to weigh in on the companies’ strategies for risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. Anthropic scored highest, but all the companies fell short on “existential safety,” or the harms their AI models could cause society if left unchecked.

Just by developing these internal policies, most AI leaders are acknowledging the need for some kind of safeguard, Salib said.

“I don’t want to say there’s broad industry agreement, because some changed their tune last summer,” Salib said. “But there’s at least a lot of evidence that this is serious and worth thinking about.”

What could the United States gain from the EU’s practices?

Salib said he believes a law like the EU’s AI Act would be too sweeping for the United States.

Many of the AI concerns lawmakers are now addressing, such as algorithmic discrimination or self-driving cars, could be covered by existing laws: “It’s not clear to me that we need special AI laws for these things.”

However, he said the case-by-case specifics that states have passed have been effective at targeting harmful AI practices and ensuring compliance from AI companies.

Gadinis said he was not sure why Congress opposes the state-by-state legislative model, since most state laws are consumer-oriented and very specific, such as deciding how a state uses AI in education, preventing discrimination in health data, or keeping children away from sexually explicit AI content.

“I wouldn’t consider this particularly controversial, right?” Gadinis said. “I don’t think the big AI companies actually want to be associated with problems in this area.”

Gadinis said the EU’s AI Act originally reflected this specific, case-by-case approach, addressing AI concerns around sexual imagery, minors, consumer fraud and the exploitation of consumer data. When ChatGPT was released in 2022, EU lawmakers went back to the drawing board and added provisions covering large language models, systemic risk and high-risk strategies and training, which made the range of what companies must comply with much broader.

After 10 months with the law in effect, the European Commission said this month it is open to “simplify the implementation” to make it easier for companies to comply.

The US is unlikely to adopt AI regulations as comprehensive as the EU’s, Gadinis and Salib said. President Trump’s administration has so far taken a deregulatory approach to technology, and Republicans passed a 10-year moratorium on state-level AI laws in the “big, beautiful” bill, which now moves to the Senate for consideration.

Gadinis predicts the federal government won’t take much action to regulate AI at all, but mounting public pressure could prompt the industry to form a self-regulatory body. That is where he believes the EU, which has relied on public-private partnerships to develop strategy, will be most influential.

“The majority of the action will either come from the private sector itself, with companies coming together, or from what the EU does in bringing experts together and trying to find a sort of half-industry, half-governmental approach,” Gadinis said.
