
WASHINGTON –  At a Senate Commerce Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” U.S. Senator Amy Klobuchar (D-MN) pressed tech leaders on the future of AI development.

Testifying at the hearing were Sam Altman, Co-Founder and CEO of OpenAI; Lisa Su, CEO and Chair of Advanced Micro Devices; Michael Intrator, CEO and Co-Founder of CoreWeave; and Brad Smith, Vice Chair and President of Microsoft. 

“I think David Brooks put it the best when he said, ‘I’ve found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell.’ We want it to lead us to heaven, and I think we do that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards because of scams or because of use by people who want to do us harm,” said Klobuchar.

Klobuchar is a leader on efforts to put in place guardrails around the use and development of AI. Last Congress, Klobuchar and Majority Leader John Thune (R-SD) partnered on the Artificial Intelligence (AI) Research, Innovation, and Accountability Act, which would create baseline accountability for AI deployment in high-risk areas, like managing critical infrastructure. The bill would also boost transparency for AI systems that are used to decide a person’s access to health care or housing, or to decide who to hire and fire.

Last month, Klobuchar reintroduced the bipartisan Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act with Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), and Thom Tillis (R-NC). This legislation aims to protect Americans' voice and likeness and combat the proliferation of AI deepfakes.

Klobuchar’s and Senator Ted Cruz’s (R-TX) bipartisan TAKE IT DOWN Act passed Congress last week – the bill is now headed to the President’s desk to be signed into law. The TAKE IT DOWN Act would criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated NCII, and require social media and similar websites to have in place procedures to remove such content within 48 hours of notice from a victim.

A rough transcript of Klobuchar’s questions is available below. Video is available HERE for download.

Senator Klobuchar: Thank you very much, Senator Cruz. A lot of exciting things with AI, especially from a state like mine that's home to the Mayo Clinic, with the potential to unleash scientific research. While we've mapped the human genome, we have rare diseases that can be solved, so there's a lot of positive. But we all know, as you've all expressed, there are challenges that we need to get at with permitting reform. I'm a big believer in that. Energy development – thank you, Mr. Smith, for mentioning this – with wind and solar and the potential for more fusion and nuclear. But wind and solar, the price has gone down dramatically in the last few years, and to get there, we're going to have to do a lot better. 

I think David Brooks put it the best when he said, “I found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell.” We want it to lead us to heaven, and I think we do that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards because of scams or because of use by people who want to do us harm. 

As mentioned by Senator Cantwell, Senator Thune and I have teamed up on legislation to set up basic guardrails for the riskiest non-defense applications of AI. Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation? 

Sam Altman: I do, that makes a lot of sense to me. 

Klobuchar: Okay, thanks. And did you figure that out in your attic?

Altman: No, that was a more recent discovery. 

Klobuchar: Thank you, very good. Just want to make sure. Our bill directs the Commerce Department, Mr. Smith, to develop ways of educating consumers on how to safely use AI systems. Do you agree that consumers need to be more educated? This was one of your answers in your five words, so I assume you do. 

Brad Smith: Yes, and I think it's incumbent upon us as companies and across the business community to contribute to that education as well.

Klobuchar: Okay, very good. Back to you, Mr. Altman. Americans rely on AI, as we know, increasingly, on some high-impact problems, and for them to be able to trust it, we need to make sure that we can trust the model outputs. The New York Times reported earlier this week that AI hallucinations – a new word to me – where models generate incorrect or misleading results, are getting worse. That's their words. What standards or metrics does OpenAI use to evaluate the quality of its training data and model outputs for correctness?

Altman: On the whole, AI hallucinations are getting much better. We have not solved the problem entirely yet, but we've made pretty remarkable progress over the last few years. When we first launched ChatGPT, it would hallucinate things all the time. This idea of robustness, being sure you can trust the information, we've made huge progress there. We cite sources. The models have gotten much smarter. A lot of people use these systems all the time. And we were worried that if it was not 100.0% accurate, which is still a challenge with these systems, it would cause a bunch of problems. But users are smart. People understand what these systems are good at, when to use them, and when not to. And as that robustness increases, which it will continue to do, people will use it for more and more things. But as an industry, we've made pretty remarkable progress in that direction over the last couple of years.

Klobuchar: I know we'll be watching that. Another challenge that we've seen – and Senator Cruz and I worked on a bill together for quite a while, and that's the TAKE IT DOWN Act – is that we are increasingly seeing internet activity where kids looking for a boyfriend or girlfriend maybe put out a real picture of themselves, and it ends up being distributed at their school, or somehow someone tries to scam them for financial gain, or it's AI, as we've increasingly seen, where it's not even someone's photos, but someone puts a fake body on there. And we've had over 20 suicides in one year of young people, because they felt like their life was ruined, because they were going to be exposed in this way. So this bill we passed through the Senate and the House, the First Lady supported it, and it's headed to the President's desk. Could you talk about how we can build models that can better detect harmful deepfakes? Mr. Smith?

Smith: Yeah. I mean, we're doing that. OpenAI is doing that, and a number of us are. And I think the goal is to first identify content that is generated by AI, and then, often, it is to identify what kind of content is harmful. And I think we've made a lot of strides in our ability to do both of those things. There's a lot of work that's going on across the private sector and in partnership with groups like NCMEC to then collaboratively identify that kind of content so it can be taken down. We've been doing this in some ways for 25 years, since the internet, and we're going to need to do more of it.

Klobuchar: And on the issue of newspapers – last question, Mr. Chair, and since the last one was about your bill, I figure it's okay. You testified before the Senate Judiciary Committee, Mr. Smith, about the bill Senator Kennedy and I have. I still think that there's an issue here about negotiating content rates. We've seen some action recently in Canada and other places. Can you talk about those evolving dynamics with AI developers and what's happening here to make sure that content providers and journalists get paid for their work? 

Smith: Yeah, it's a complicated topic, but I'll just say a couple of things. First, I think we should all want to see newspapers in some form flourish across the country, including, say, rural counties that increasingly have become news deserts, where newspapers have disappeared. Second, and this has been the issue that we discussed in the Judiciary Committee, there should be an opportunity for newspapers to get together and negotiate collectively. We've supported that. That will enable them to basically do better. Third, every time there's new technology, there is a new generation of a copyright debate. That is taking place now. Some of it will probably be decided by Congress, some by the courts. A lot of it is also being addressed through collaborative action, and we should hope for all of these things to, I'll just say, strike a balance. We want people to make a living creating content, and we want AI to advance by having access to data.

[Sen. Klobuchar followed up with an additional round of questions.] 

Klobuchar: I had one more question that I wanted to ask, and it's related to the whole deepfake issue, just because Senator Blackburn and Senator Coons and Senator Tillis and I have worked on this really hard, and Blackburn and Coons are in the lead on the bill. But we have recently seen deepfake videos of Al Roker promoting a cure for high blood pressure, a deepfake of Brad Pitt asking for money from a hospital bed. Sony Music has worked with platforms to remove more than 75,000 songs with unauthorized deepfakes, including the voices of Harry Styles and Beyoncé. I recently met – it's not just famous people – a Grammy-nominated artist from Minnesota, and talked to him about what's going on with digital replicas. So there's a real concern, and it kind of gets at what Senator Schatz and I were talking about earlier with the news bill. But I just wanted to make you all aware of this legislation, because there were some differences on this, and now we have gotten a coalition supporting it, including YouTube, as well as the Recording Industry Association, the Motion Picture Association, and SAG-AFTRA. So it's a big deal, and I'm hoping it's something that you will all look at. But could you just comment – I would go to you first, Mr. Smith – about protecting people from having their likenesses replicated through AI without permission? Even if you all pledge to do it, our obvious concern is that there will maybe be other companies that wouldn't, and that's why I think, as we look at what these guardrails are, the protection of people's digital rights should be part of this.

Smith: No, I think you're right to point to it. It has become a growing area of concern. During the presidential election last year, both campaigns, both political parties, were concerned about the potential for deepfakes to be created. We worked with both campaigns and both parties to address that. We see it being used in ways that I would call abusive, including of celebrities and the like. I think it starts with an ability to identify when something has been created by AI and is not a genuine, say, photographic or video image. And we do find that AI is much more capable of doing that than, say, the human eye and human judgment. I think it's right that there be certain guardrails, and some of these we can apply voluntarily. We've been doing that across the industry; OpenAI and Microsoft were both part of that last year. And there are certain uses that probably should be considered across the line and therefore should be unlawful. And I think that's where the kinds of initiatives that you're describing have a particularly important role to play.

Klobuchar: And could you look at that legislation? 

Smith: Absolutely.

Klobuchar: I appreciate it. Mr. Altman, just the same question, same thing.

Altman: Of course, we'd be happy to look at the legislation. I think this is a big issue, and it's one coming quickly… I think there are a few areas to attack it: AI that generates content, platforms that distribute it, how takedowns work, how we educate society, and how we build in robustness to expect that this is going to happen. I do not believe it will be possible to stop the generation of the content. I think open-source, open-weight models are a great thing on the whole, and something we need to pursue, but it does mean that there are going to be a lot of these models floating around that can do this. On the mass distribution, I think it's possible to put some more guardrails in place, and that seems important, but I don't want to neglect the societal education piece. With every new technology there are almost always some new scams that come, and the sooner we can get people to understand these, be on the lookout for them, and talk about this as a thing that's coming, the better. And I think that's happening. I think people are very quickly understanding that content can be AI-generated, and building new kinds of defenses in their own minds about it. But still, you know, if you get a call and it sounds exactly like someone you know and they're panicked and they need help, or if you see a video like the videos you talked about, this gets at us in a very deep psychological way. And I think we need to build societal resilience, because this is coming.

Klobuchar: It’s coming, but there’s got to be some way to – you’ve got to have some way to either enforce it, damages, whatever. Otherwise there’s just not going to be any consequences.

Altman: Absolutely, we should have all of that. Bad actors don’t always follow the laws, so I think we need an additional shield wherever we can have one. But yes, we should absolutely have that.

Klobuchar: All right. Look forward to working with you on it.

###