
Is AI Regulation Falling Behind in the U.S.? Experts Disagree


Although there isn’t much extensive AI regulation from the U.S. government right now, some legal experts say existing privacy, IP and security laws keep AI companies in check.


Both AI experts and the American public believe that the U.S. government is not regulating AI enough, and are not confident that the government will regulate AI effectively, according to a Pew Research Center survey published earlier this month.


The survey, "How the U.S. Public and AI Experts View Artificial Intelligence," received 5,410 responses from U.S.-based adults and artificial intelligence experts.


Fifty-eight percent of U.S. adults and 56% of AI experts surveyed said they were concerned that the government will not go far enough to regulate the use of AI as opposed to going too far.


The survey also found that 62% of U.S. adults and 53% of AI experts said they have little to no confidence in the U.S. regulating AI effectively.


Brian Hengesbaugh, Baker McKenzie partner and North America IP and technology practice group chair, said that right now, he isn’t worried about the U.S. government’s regulation of AI as companies developing and selling it still have to adhere to existing privacy and security regulations on technology and products.


“I am not concerned that we are not specifically regulating AI fast enough, I am not concerned about that because there already are,” he said. “There's privacy regs that attach, there's employment considerations that come into play, there's IP considerations that come into play, there's public company disclosure—anything that you want to do anyway with AI, there's already a bunch of regulation around it.”


Hengesbaugh likened the government’s current regulation of AI to the early days of internet regulation in the early 2000s.


“It really wasn't until past 2000 and past the dot com bust that the internet really settled into business and really became the way business was done … I kind of feel like we're in that [early] cycle still with AI, companies are still trying to sort out what they're going to do with it,” he said.


Since President Donald Trump took office earlier this year, the status of existing AI regulation and standards remains largely unchanged. The Trump administration's approach is to be supportive of AI development and lean toward loosening regulation. During the AI Action Summit in Paris in February, Vice President J.D. Vance said in his speech that “excessive regulation of the AI sector could kill a transformative industry."


Vance was also critical of the European Union’s AI Act, which came into effect this year and prohibits certain AI systems that pose "unacceptable risk.”


Hengesbaugh noted that under the new administration, federal deregulation of AI is possible, but AI regulation also comes down to what individual states decide.


“My working assumption though is that states will fill in … although you never know, will they perceive a gap in federal regulation? They might step in and follow Colorado, and there's some California rules on automated decision making technology under their privacy law that they're getting into,” he said.


Some states already regulate AI, including Utah, which requires anyone deploying gen AI to disclose that the user is interacting with it. Colorado also has AI disclosure requirements and has legislation protecting users from algorithmic discrimination. California has begun to regulate AI, with legislation prompting developers to create tools to help users identify deepfakes and to explain how their AI was trained, among other laws.

Author: Ella Sherman
Source: law.com
