Washington confronts a new AI fight

A second day of hearings on Capitol Hill crystallizes new worries — and creates new fault lines.

By MOHAR CHATTERJEE and REBECCA KERN

Washington’s mounting struggle to deal with artificial intelligence took a sharp turn Wednesday morning, as House members wrestled with new and unsettling questions raised by human-like digital brains, including: Who even owns what a robot creates?

A day after OpenAI CEO Sam Altman grabbed Washington’s attention with a call to regulate his own industry — and a senator opened with a public stunt he then called “one of the more scary moments” in Senate history — political leaders are diving deeper into the thorny details of how, or even whether, to regulate these new systems.

The hearings followed a private White House meeting earlier this month with tech-company leadership, as well as a raft of new initiatives intended to test and guide AI technology. Altman also met privately with the House AI caucus and House leadership from both parties to talk about AI yesterday.

But as concern mounts about AI’s possible effects on people’s lives, the economy and even human survival, American leaders are finding they have relatively few tools to put safety reins on new technological platforms. Even in Europe, where officials are armed with far stronger data protection and privacy rules, governments have struggled to find a balance between consumer protections and their support for tech-industry competitiveness.

As the Washington conversation takes shape, fault lines are already emerging in the seemingly bipartisan call to create new regulations.

The most concrete idea to emerge on Tuesday, a new agency to license the most powerful AI platforms — supported yesterday by senators of both parties as well as Altman himself — was dismissed afterward as “baffling” by a key tech lobbyist, and “seriously flawed” by Hodan Omaar, a senior policy analyst at the Center for Data Innovation, a nonpartisan think tank.

“It’s baffling that the U.S. would even consider a restrictive licensing scheme for artificial intelligence development, let alone the idea that some international governing body could get countries to comply with such a thing,” said Steve DelBianco, president and CEO of NetChoice, a trade group that represents companies including Meta, Google and Amazon.

Without a clear path forward on new AI regulations for now, a House subcommittee dove into a different question on Wednesday, one of growing importance as the new technology develops and threatens to swamp whole industries: Who owns it all?

While the bulk of the focus has been on the safety and privacy concerns of generative AI, especially so-called large language models like the ones behind ChatGPT and Google’s Bard, copyright issues have become an increasing concern.

Many publishers and artists have realized just how much of their work is used in training AI models, and lawyers have begun circling the complex, and new, issue of who will own the work created by machines.

At stake could be billions of dollars in the new industries that could rise around AI models — and the financial health of publishers and other creators whose work the AI both uses and could replace.

Though not as high-profile as the Altman hearing, which attracted top officials and throngs of journalists, the House Judiciary Subcommittee on Courts, Intellectual Property and the Internet aired out some key emerging concerns during its meeting Wednesday.

One of the biggest issues is how to compensate or credit artists, whether musicians, writers or photographers, when their work is used to train a model, or is the inspiration for an AI’s creation.

Ashley Irwin, president of the Society of Composers and Lyricists, put it bluntly in the hearing: Generative AI poses an “existential threat to our livelihood.” His concerns echo those of artists, writers and other creative professionals increasingly worried that AI tools could easily imitate their work — and put them out of business.

“It’s very Orwellian how the tech industry manages to change terminology on us,” Irwin said. “It’s not data and content to us, it’s music, it’s photographs. It’s not file-sharing, it’s stealing. Very simple.”

Beneath the AI models lies a vast amount of training data — often material created and owned by someone other than the AI developer. One key issue that lawmakers are being pressed to address is who should be compensated for all that material, and how such a system would work.

Subcommittee Chair Darrell Issa, whose business background is in the electronics industry, proposed one mechanism, a database to track the sources of training data: “Credit would seem to be one that Congress could mandate — that the database input could be searchable so you would know that your work or your name or something was in the database.”

Issa has criticized the federal government for being too slow to act on tech matters, and has previously gone on the record to express his concerns about AI being used to appropriate and generate copyrighted work.

Issa told POLITICO after the hearing that he’s planning to hold field hearings in the future in Nashville and potentially Los Angeles about the impact of AI on artists. And his next Judiciary subcommittee hearing will focus on generative AI and patents.

“Even though they might turn into different legislation, the entity that will be regulated — the AI entity — will actually be the same entity,” he said. “So what we do in copyright cannot be ignored between patentability and copyright for the actions of AI.”

Source: https://www.politico.com/news/2023/05/17/washington-confronts-a-new-ai-fight-00097425