Get your popcorn ready: At 10 a.m. ET, OpenAI CEO Sam Altman makes his first appearance before a U.S. Senate panel, testifying at a session of the Senate Judiciary Committee's subcommittee on privacy, technology and the law titled “Oversight of AI: Rules for Artificial Intelligence.” Also testifying are longtime AI critic Gary Marcus and Christina Montgomery, chief privacy and trust officer at IBM. Watch it live here.
The testimony comes at a critical AI moment, as lawmakers struggle to understand the latest AI technology and grapple with how to regulate it. It will be tough for them to keep pace with the rate of AI development that has become normal in the six months since ChatGPT was released in November 2022. Still, several different approaches are on the table, including proposals that focus regulation on the highest-risk use cases of generative AI and others that home in on bias and discrimination.
A loud backdrop of AI criticism
Altman’s testimony also comes with an increasingly loud backdrop of criticism from a variety of stakeholders. Just yesterday, for example, computer scientist and actress Justine Bateman tweeted a thread that went viral calling on Screen Actors Guild members to be conscious of AI and how it will affect them — warning of a future filled with AI-written scripts and digital scans of actors.
“AI has to be addressed now or never,” she wrote. “I believe this is the last time any labor action will be effective in our business. If we don’t make strong rules now, they simply won’t notice if we strike in three years, because at that point they won’t need us.”
AI regulation is ramping up
The Senate panel session also comes as discussions about AI regulation are front and center around the world.
Last Thursday, a committee of lawmakers in the European Parliament approved a draft of the long-awaited EU AI Act, moving it closer to becoming law. It offers a risk-based approach to regulating AI and includes requirements for developers of foundational models such as ChatGPT — including ensuring that training data does not violate copyright law. But critics say the ambitious effort attempts to “boil the ocean,” and some argue it is too cautious.
Open-source AI is on everyone’s mind
Adding to the complex web of AI issues is how open-source AI fits into the future of both Big Tech and AI regulation. With a wave of new open-source LLMs, Big Tech companies are concerned about their moats — last week a leaked Google memo from one of its engineers, titled “We have no moat,” claimed that the “uncomfortable truth” is that neither Google nor OpenAI is positioned to “win this arms race.” And according to The Information, even OpenAI is preparing to release its own open-source model to try to ride this wave.
But would AI regulation have a chilling effect on open-source AI? And if Big Tech responds by closing off access to its models for fear of open-source competition, what does that mean for the safety of state-of-the-art LLMs?
I’ve got my popcorn. Munch, munch.