
Is it time to ‘shield’ AI with a firewall? Arthur AI thinks so

By newsmagzines

May 4, 2023




With the risks of hallucinations, private data leakage and regulatory compliance facing AI, a growing chorus of experts and vendors say there is a clear need for some form of protection.

One organization now building technology to protect against AI data risks is New York City-based Arthur AI. The company was founded in 2018 and has raised over $60 million to date, largely to fund machine learning monitoring and observability technology. Among the companies Arthur AI claims as customers are three of the top five U.S. banks, Humana, John Deere and the U.S. Department of Defense (DoD). Arthur AI takes its name as an homage to Arthur Samuel, who is widely credited with coining the term machine learning in 1959 and helping to develop some of the earliest models on record.

Arthur AI is now taking its AI observability a step further with the launch today of Arthur Shield, which is essentially a firewall for AI data. With Arthur Shield, organizations can deploy a firewall that sits in front of Large Language Models (LLMs) to check data going both in and out for potential risks and policy violations.
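The article does not describe Arthur Shield's actual interface, but the "firewall that sits in front of an LLM" idea can be sketched as a thin proxy that screens both the inbound prompt and the outbound response against policy checks. Everything below — the function names, the patterns, the blocking behavior — is a hypothetical illustration of the concept, not Arthur Shield's real API.

```python
import re

# Hypothetical sketch of an "LLM firewall": screen data going in (prompts)
# and coming out (responses) before either reaches the other side.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like pattern
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card-like number
]

def violates_policy(text: str) -> bool:
    """Return True if text matches any known sensitive-data pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def shielded_call(prompt: str, llm) -> str:
    """Screen the prompt on the way in and the response on the way out."""
    if violates_policy(prompt):
        return "[blocked: prompt contains sensitive data]"
    response = llm(prompt)  # llm is any callable taking and returning text
    if violates_policy(response):
        return "[blocked: response contains sensitive data]"
    return response
```

A production system would use learned classifiers rather than regexes, but the two-sided check — inspect the prompt, call the model, inspect the response — is the firewall analogy the article describes.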


“There’s a number of attack vectors and potential problems like data leakage that are huge issues and blockers to actually deploying LLMs,” Adam Wenchel, the co-founder and CEO of Arthur AI, told VentureBeat. “We have customers who are basically falling all over themselves to deploy LLMs, but they’re stuck right now, and they’re going to be using this product to get unstuck.”

Do organizations need AI Guardrails or an AI firewall?

The challenge of providing some form of protection against potentially risky output from generative AI is one that multiple vendors are trying to solve.

Nvidia recently announced its NeMo Guardrails technology, which provides a policy language to help protect LLMs from leaking sensitive data or hallucinating incorrect responses. Wenchel commented that, from his perspective, while guardrails are interesting, they tend to be more focused on developers.

In contrast, he said Arthur AI is aiming to differentiate Arthur Shield by providing a tool designed specifically for organizations, to help prevent real-world attacks. The technology also benefits from the observability of Arthur’s ML monitoring platform, which provides a continuous feedback loop to improve the efficacy of the firewall.

How Arthur Shield works to minimize LLM risks

In the networking world, a firewall is a tried-and-true technology, filtering data packets flowing in and out of a network.

It’s the same basic approach that Arthur Shield is taking, except with prompts coming into an LLM, and data coming out. Wenchel noted that some prompts that are used with LLMs today can be fairly complicated. Prompts can include user and database inputs as well as sideloading embeddings.

“So you’re taking all this different data, chaining it together, feeding it into the LLM prompt, and then getting a response,” Wenchel said. “Along with that, there’s a number of areas where you can get the model to make stuff up and hallucinate and if you maliciously construct a prompt, you can get it to return very sensitive data.”

Arthur Shield provides a set of pre-built filters that are continuously learning and can also be customized. Those filters are designed to block known risks, such as potentially sensitive or toxic data, from being passed into or returned by an LLM.
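One way to picture "pre-built filters that can also be customized" is a filter registry: a set of default checks shipped with the product, plus a hook for adding organization-specific ones. This is purely an illustrative sketch — the filter names, the toy blocklist, and the `FilterSet` class are assumptions, not Arthur Shield's real design.

```python
from typing import Callable

# A filter takes text and returns True when that text should be blocked.
Filter = Callable[[str], bool]

def toxicity_filter(text: str) -> bool:
    # Stand-in for a real toxicity model: a tiny keyword blocklist.
    blocklist = {"slur1", "slur2"}
    return any(word in text.lower() for word in blocklist)

def pii_filter(text: str) -> bool:
    # Stand-in for a real PII detector.
    return "ssn:" in text.lower()

class FilterSet:
    """Pre-built filters plus a customization hook for new ones."""

    def __init__(self) -> None:
        self.filters: list[Filter] = [toxicity_filter, pii_filter]

    def add(self, f: Filter) -> None:
        """Register an organization-specific custom filter."""
        self.filters.append(f)

    def blocks(self, text: str) -> bool:
        """Block if any filter flags the text."""
        return any(f(text) for f in self.filters)
```

The same `FilterSet` could screen prompts on the way in and responses on the way out; the "continuously learning" aspect the article mentions would correspond to updating the underlying detectors from observed traffic.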

“We have a great research department and they’ve really done some pioneering work in terms of applying LLMs to evaluate the output of LLMs,” Wenchel said. “If you’re upping the sophistication of the core system, then you need to upgrade the sophistication of the monitoring that goes with it.”

