Feb 6, 2023
OpenAI says ChatGPT must be regulated. Meanwhile, get ready for AI audits | The AI Beat


Check out all the on-demand sessions from the Intelligent Security Summit here.

OpenAI CTO Mira Murati made the company’s stance on AI regulation crystal clear in a TIME article published over the weekend: Yes, ChatGPT and other generative AI tools should be regulated.

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in the interview. “But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies: certainly regulators and governments and everyone else.”

And when asked whether it was too early for policymakers and regulators to get involved, over fears that government involvement could slow innovation, she said: “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”

>>Follow VentureBeat’s ongoing ChatGPT coverage<<



AI regulations — and AI audits — are coming

In a way, Murati’s opinion matters little: AI regulation is coming, and quickly, according to Andrew Burt, managing partner of BNH.ai, a boutique law firm founded in 2020 made up of lawyers and data scientists that focuses squarely on AI and analytics.

And those laws will often require AI audits, he said, so companies need to get ready now.

“We didn’t anticipate that there would [already] be these new AI laws on the books that say if you’re using an AI system in this area, or if you’re just using AI in general, you need audits,” he told VentureBeat. Many of these AI regulations and auditing requirements coming on the books in the US, he explained, are mostly at the state and municipal level and vary wildly, including New York City’s Automated Employment Decision Tool (AEDT) law and a similar New Jersey bill in the works.

Audits are a necessary requirement in a fast-evolving field like AI, Burt explained.

“AI is moving so fast, regulators don’t have a fully nuanced understanding of the technologies,” he said. “They’re trying not to stifle innovation, so if you’re a regulator, what can you actually do? The best answer that regulators are coming up with is to have some independent party look at your system, assess it for risks, and then you manage those risks and document how you did all of that.”

How to prepare for AI audits

The bottom line is, you don’t need to be a soothsayer to know that audits are going to be a central component of AI regulation and risk management. The question is, how can organizations get ready?

The answer, said Burt, is getting easier and easier. “I think the best answer is to first have a program for AI risk management. You need some program to systematically, and in a standardized fashion, manage AI risk across your enterprise.”

Number two, he emphasized, is organizations should adopt the new NIST AI risk management framework (RMF) that was released last week.

“It’s very easy to create a risk management framework and align it to the NIST AI risk management framework within an enterprise,” he said. “It’s flexible, so I think it’s easy to implement and operationalize.”

Four core functions to prepare for AI audits

The NIST RMF has four core functions, he explained: First is map, or assess what risks the AI could create. Then, measure, quantitatively or qualitatively — so you have a program to actually test. Once you’re done testing, manage — that is, reduce or otherwise document and justify the risks that are appropriate for the system. Finally, govern — make sure you have policies and procedures in place that apply not just to one specific system.
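As a rough illustration of how those four functions fit together (this is a sketch of the workflow described above, not code from the NIST framework itself, and all class and method names here are hypothetical), a simple enterprise risk register might look like:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a risk register loosely following the NIST AI RMF's
# four functions: map, measure, manage, govern. All names are illustrative.

@dataclass
class Risk:
    system: str           # which AI system the risk belongs to
    description: str      # what could go wrong
    severity: int = 0     # measured score (e.g., 1-5); 0 = not yet measured
    mitigation: str = ""  # how the risk is reduced, or why it is acceptable

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)
    policies: list[str] = field(default_factory=list)  # enterprise-wide rules

    def map_risk(self, system: str, description: str) -> Risk:
        """Map: record a risk the AI system could create."""
        risk = Risk(system, description)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: int) -> None:
        """Measure: score the risk quantitatively or qualitatively."""
        risk.severity = severity

    def manage(self, risk: Risk, mitigation: str) -> None:
        """Manage: document how the risk is reduced or justified."""
        risk.mitigation = mitigation

    def govern(self, policy: str) -> None:
        """Govern: policies that apply across all systems, not just one."""
        self.policies.append(policy)

    def unmanaged(self) -> list[Risk]:
        """Risks with no documented mitigation: what an audit would flag."""
        return [r for r in self.risks if not r.mitigation]
```

For example, a hiring-tool risk could be mapped with `map_risk("resume-screener", "disparate impact on protected groups")`, scored with `measure`, and answered with `manage(risk, "quarterly bias testing with documented thresholds")`, while `govern` captures the policies that cover every system across the board.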

“You’re not doing this on an ad hoc basis, but you’re doing this across the board on an enterprise level,” Burt pointed out. “You can create a very flexible AI risk management program around this; a small organization can do it, and we’ve helped a Fortune 500 company do it.”

So the RMF is easy to operationalize, he continued, but he added that he did not want people to mistake its flexibility for something too generic to actually be implemented.

“It’s intended to be useful,” he said. “We’ve already started to see that. We have clients come to us saying, this is the standard that we want to implement.”

It’s time for companies to get their AI audit act together

Even though the laws aren’t “fully baked,” Burt said, their arrival is not going to be a surprise. So if you’re an organization investing in AI, it’s time to get your AI auditing act together.

The easiest answer is aligning to the NIST AI RMF, he said, because unlike in cybersecurity, which has standardized playbooks, for big enterprise organizations the way AI is trained and deployed is not standardized — so the way it is assessed and documented isn’t either.

“Everything is subjective, but you don’t want that to create liability because it creates additional risks,” he said. “What we tell clients is the best and easiest place to start is model documentation — create a standard documentation template and make sure that every AI system is being documented in accordance with that standard. As you build that out, you start to get what I’ll just call a report for every model that can provide the foundation for all of these audits.”
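To make the documentation advice concrete, here is a minimal sketch of what such a standard per-model template might contain. The field names are assumptions chosen for illustration, not a prescribed or official format:

```python
from dataclasses import dataclass, asdict

# Hypothetical standard documentation template applied to every AI system.
# Field names are illustrative assumptions, not an official standard.

@dataclass
class ModelReport:
    name: str               # model identifier
    purpose: str            # what decision or task the model supports
    training_data: str      # provenance and description of training data
    known_risks: list[str]  # risks identified when mapping the system
    test_results: str       # how the model was measured and tested
    mitigations: list[str]  # how each identified risk is managed
    owner: str              # who is accountable for the model (governance)

    def to_record(self) -> dict:
        """Flatten to a plain dict: the per-model 'report' that can serve
        as the foundation for an audit."""
        return asdict(self)
```

Filling in one such report per model, in the same format every time, is one way to produce the standardized foundation Burt describes: an independent auditor can then see, for any system, what risks were identified and how they were handled.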

Care about AI? Invest in managing its risks

According to Burt, organizations won’t get the most value out of AI if they are not thinking about its risks.

“You can deploy an AI system and get value out of it today, but in the future something is going to come back and bite you,” he said. “So I would say if you care about AI, invest in managing its risks. Period.”

To get the most ROI from their AI efforts, he continued, companies need to make sure they are not violating privacy, creating security vulnerabilities or perpetuating bias, any of which could open them up to lawsuits, regulatory fines and reputational damage.

“Auditing to me is just a fancy word for some independent party looking at the system and understanding how you assess it for risks and how you manage those risks,” he said. “And if you didn’t do either of those things, the audit is going to be pretty clear. It’s going to be pretty negative.”

