Anthropic, a startup backed by Google and founded by former OpenAI employees, today introduced its highly anticipated AI chat assistant, Claude, which many experts view as a leading rival to OpenAI's ChatGPT.
Like ChatGPT, Claude is accessed through a chat interface and can handle a wide variety of conversational and text-processing tasks. The assistant is designed to help users with summarization, search, collaborative writing, Q&A, coding and more.
One of Claude's key points of differentiation is that it is designed to produce less harmful outputs than many of the AI chatbots that came before it. The company describes Claude as a "helpful, honest, and harmless AI system."
Anthropic says it has spent the past several months working with partners like Notion, Quora and DuckDuckGo in a closed alpha to improve Claude's capabilities. "Users describe Claude's answers as detailed and easily understood, and they like that exchanges feel like natural conversation," said Autumn Besselman, head of people and comms at Quora, in a statement.
Anthropic offers Claude to businesses through an API
One of the key elements of today’s announcement is that Anthropic is now offering Claude via API to support businesses and nonprofits. (You can sign up for early access here.) Pricing has not yet been revealed for API access.
Anthropic said in its announcement that Claude is “much less likely to produce harmful outputs, easier to converse with, and more steerable — so you can get your desired output with less effort.” The company said in addition to summarization, search, creative writing, and coding, it can also take direction on personality, tone and behavior, making it a prime candidate for customer service and other business solutions that engage with customers.
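For illustration, here is a minimal sketch of what steering Claude's tone and behavior through the API might look like, assuming Anthropic's Python SDK and its Messages interface. The model name, system instruction and prompt below are placeholders for demonstration, not details from the announcement.

```python
# Minimal sketch of calling Claude via Anthropic's Python SDK.
# The model identifier, system instruction and prompt are illustrative
# assumptions, not details from Anthropic's announcement.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-instant-1.2",  # assumed model identifier
    max_tokens=300,
    # Steering personality, tone and behavior via a system instruction
    system="You are a friendly, concise customer-service agent.",
    messages=[
        {"role": "user", "content": "Summarize our return policy in two sentences."}
    ],
)
print(response.content[0].text)
```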
The company is currently offering two versions of Claude: Claude and Claude Instant. Claude is a high-performance model, while Claude Instant is lighter, less expensive and much faster.
Anthropic’s ties to Sam Bankman-Fried
Anthropic was founded in 2021 by researchers who left OpenAI. It gained attention last April when, after less than a year in existence, it suddenly announced a whopping $580 million in funding, most of which, it turns out, came from Sam Bankman-Fried and the folks at FTX, the now-bankrupt cryptocurrency platform accused of fraud. There have been questions as to whether that money could be recovered by a bankruptcy court.
Anthropic, like FTX, has also been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru recently called out in a Wired opinion piece as a "dangerous brand of AI safety."
Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence and autonomy.
According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning phase and a reinforcement learning phase: "As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them."
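Very roughly, the supervised phase described in the paper has the model critique and revise its own responses against constitutional principles before fine-tuning on the revisions. The sketch below is a simplified illustration of that critique-and-revise loop, not Anthropic's training code; the `generate` function and the principle text are placeholders.

```python
# Simplified illustration of the supervised (critique-and-revise) phase
# described in Anthropic's Constitutional AI paper. This is not Anthropic's
# training code; `generate` stands in for any language-model sampling call.
from typing import Callable, List

def constitutional_revision(
    generate: Callable[[str], str],  # placeholder: prompt -> model completion
    prompt: str,
    principles: List[str],
) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(prompt)
    for principle in principles:
        critique = generate(
            f"Critique the assistant response below according to this principle:\n"
            f"{principle}\n\nPrompt: {prompt}\nResponse: {response}\nCritique:"
        )
        response = generate(
            f"Rewrite the response to address the critique while still answering "
            f"the prompt.\nPrompt: {prompt}\nResponse: {response}\n"
            f"Critique: {critique}\nRevised response:"
        )
    # The revised responses would then be used for supervised fine-tuning,
    # followed by the reinforcement learning phase the paper describes.
    return response
```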