U.S. experts urge flexible AI regulation to spur development

GENEVA, May 31, 2024 (Xinhua) -- A participant visits the exhibition booth of China Mobile at the 2024 Artificial Intelligence (AI) for Good Global Summit in Geneva, Switzerland, May 30, 2024. The summit began on Thursday in the Swiss city of Geneva. (Xinhua/Lian Yi)

United States experts have urged policymakers to prioritise flexibility in regulating artificial intelligence (AI) technologies, arguing that overly strict rules could deter investment in AI innovation.

A heated debate over AI regulation was sparked by the controversial Senate Bill 1047 in California, which would have placed liability on AI developers for severe harm caused by their models.

The bill would also have required developers, before training began, to publicly disclose their testing methods for assessing the likelihood of critical harm and the conditions under which models would be shut down.

The bill, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was vetoed by Governor Gavin Newsom on Sept. 29 over concerns that it could curtail innovation and jeopardize the state’s leadership in AI development.

Some AI industry investors echo this concern.

Lu Zhang, founder and managing partner of venture capital firm Fusion Fund, told Xinhua that many AI researchers and tech leaders were worried that the regulation would slow down the evolution of AI technology.


Newsom’s veto “was celebrated among the industry to continue investing the resources, capital, and talent into this amazing future technology,” she added.

A new Brookings Institution report shared similar views, saying that premature or heavy-handed regulation of AI could inadvertently slow progress and limit potential benefits.

The report, published just a few days before the veto, highlighted the challenges of regulating a rapidly evolving technology.


“Regulating new, fast-changing technology is very difficult, both in theory and empirically,” wrote economists Kevin A. Bryan and Florenta Teodoridis, authors of the report.

They warned that since innovation was usually under-incentivized, overly strict AI regulation could exacerbate underinvestment in the sector and reduce vital experimentation.

The Brookings report suggested that policymakers should consider how regulation impacts innovation, not just end products.

It recommended fostering collaboration between private companies and universities to balance the strengths and weaknesses of each sector in advancing AI responsibly.

Flexibility in regulation is another key point emphasized by the authors, given AI’s rapid evolution.

They noted that AI regulations proposed in 2021 by the European Union and some U.S. states failed to address large language models or generative AI, which have become major focuses in 2024.

“An AI regulation set today without flexibility is unlikely to foresee either the harms or benefits of AI as it will exist in 2027, let alone 2037,” the report said.

It suggested regulators focus on specific, foreseeable harms rather than broad restrictions, given the uncertainty surrounding AI’s future capabilities.

Zhang also noted that the near-term fears about AI may be overblown.

“I think we are exaggerating the threat of AI, at least in the near term,” she said.

She argued that for at least the next five years, AI would still require significant human oversight because of limitations in the data used for training and other technical challenges, such as the huge energy consumption of data centers.

Nonetheless, Zhang agreed that some regulation would eventually be necessary.

“I do agree regulations are necessary for a new technology, and I think the tech industry is open to embracing regulation when it’s proper time,” she said.

“However, we should not slow down AI innovation, because we’re in the critical moment of AI innovation right now.”

The regulation of AI has sparked widespread discussion since the technology’s emergence in recent years.

A global study released in 2023 found that three out of five people, about 61 per cent, were wary of trusting AI systems, while 71 per cent of respondents believed that AI regulation is required.

The study, conducted by researchers from the University of Queensland and KPMG Australia, surveyed more than 17,000 people from 17 countries including Australia, China, France and the United States.

In August, the European Union’s Artificial Intelligence Act (AI Act) officially entered into force as the world’s first comprehensive regulation on AI.

Companies that violate the rules of the AI Act could face fines of varying degrees, with the maximum being up to 7 per cent of their global annual turnover. (www.nannews.ng) (Xinhua/NAN)
