This morning, the Senate Subcommittee on Privacy, Technology, and the Law held a hearing with AI experts, including OpenAI CEO Sam Altman, to discuss the dangers of AI and how it should be regulated.
The nearly three-hour hearing featured the usual committee hearing trappings, with lawmakers asking tough questions and a predictable amount of theatrics. Here are a few moments that stood out:
Subcommittee Chairman Senator Richard Blumenthal started the hearing by playing a recording of himself giving opening remarks. However, the voice was not actually his; it was produced by an AI voice cloning tool trained on his past floor speeches. The senator had asked a generative AI to write the opening remarks based on his history of advocating for consumer rights and data privacy. After the humorous demonstration, Senator Blumenthal switched tones to remark on the potential dangers of voice cloning tools falling into the wrong hands.
Continuing with the theme of AI doing its work, Senator Marsha Blackburn from Tennessee said that she had asked ChatGPT whether Congress should regulate AI. The AI chatbot gave “four pros, four cons, and ultimately the decision rests with Congress and deserves careful consideration,” she said. She then pressed Altman on OpenAI’s efforts to protect artists’ work from copyright infringement.
On that topic, Altman suggested that content owners should receive a “significant upside benefit” if their copyrighted materials are used to train AI models. He also said that artists should have the option of refusing to allow their voices, songs, images, and likeness to be used to train AIs, and indicated that OpenAI is working on a copyright system to compensate artists whose material is incorporated into AI-generated works.
Senator Lindsey Graham of South Carolina asked Altman about the potential military use of AI, specifically whether “AI could create a situation where a drone could select a target itself.” Altman responded that he thinks “we shouldn’t allow that.” When pressed further on whether it can be done, Altman said, “Sure.”
Awesome.
In a somewhat stranger exchange, Senator John Kennedy of Louisiana asked Altman if he made a lot of money. Altman replied that he pays enough for health insurance but has “no equity in OpenAI” and is doing the job because he loves it. The Senator jokingly suggested that Altman “get a lawyer or an agent.”
It’s interesting to see the US government at least looking into the potential harm AI companies can cause if left unregulated. The European Parliament went a little further last week, voting on a proposal to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly,” although how exactly all of that will be enforced is an open question for now. In a similar vein, Dr. Geoffrey Hinton, the “Godfather of Deep Learning” who recently left his position as an AI researcher at Google, also warned at the beginning of May that such systems shouldn’t be scaled up any further “until they have understood whether they can control it.”
For fun, I asked ChatGPT if today’s Senate subcommittee hearing about AI was good for the AI industry, and it told me that the impact may not be “immediately apparent,” but it does have the “potential to be a positive force for the development and responsible use of AI in the future.”
That is a good political answer, I think. But was it just telling me what I wanted to hear? You know, since sometimes AI chatbots straight up lie to people.