
Anthropic unveils Claude 2, an AI model that produces longer, safer responses

By newsmagzines

Jul 11, 2023




Anthropic, an AI safety startup based in San Francisco, announced today the release of Claude 2, a more capable version of its AI model Claude that produces longer and safer conversations with humans.

The updated model has been trained on additional data and can generate responses of up to 4,000 tokens, up from roughly 512 tokens in Claude 1.3, the previous version released just four months ago. According to Anthropic, Claude 2 also significantly improves performance on coding, math, and logic benchmarks while producing responses that are less likely to be harmful, addressing concerns about potential misuse.

“Claude 2 has improved performance, longer responses, and can be accessed via API as well as a new public-facing beta website, Claude.ai,” Sandy Banerjee, Anthropic’s go-to-market (GTM) lead, said in a recent interview with VentureBeat. “We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs, and has a longer memory.”
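For readers curious what API access looked like at launch, here is a minimal sketch of assembling a Claude 2 completion request. Claude's original text-completions API used a "Human:/Assistant:" turn format; the model name and parameter values below reflect the launch-era API, but the company name and task in the prompt are hypothetical, and the actual network call (which needs an API key) is shown only as a comment.

```python
# Sketch of a launch-era Claude 2 completion request. The Human:/Assistant:
# prompt format and max_tokens_to_sample parameter are from the original
# text-completions API; the prompt contents are illustrative.

def build_claude_prompt(user_message: str) -> str:
    """Wrap a user message in the Human/Assistant turn format Claude expects."""
    return f"\n\nHuman: {user_message}\n\nAssistant:"

request = {
    "model": "claude-2",            # model id at launch
    "max_tokens_to_sample": 4000,   # Claude 2 can emit up to ~4,000 tokens
    "prompt": build_claude_prompt(
        "Treat yourself as my eager new colleague. I'm a support lead at a "
        "software company (hypothetical). Summarize this ticket thread in "
        "five bullet points, and note any open questions."
    ),
}

# With the `anthropic` Python SDK installed and an API key configured,
# the call would look roughly like:
#   client = anthropic.Anthropic()
#   completion = client.completions.create(**request)
print(request["prompt"].startswith("\n\nHuman:"))
```

This mirrors Banerjee's advice above: the prompt states who you are, what you want, and the context of the task.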

“I’m excited for people to try Claude 2,” she added. “Users should treat it as their eager new colleague with little context. Provide information about who you are, what you want from the AI, and the context of the task you’re giving it. Claude can iterate and take feedback really well.”


Anthropic’s approach appears to be resonating with enterprises. The startup is working with “thousands of businesses” using the Claude API, including productivity companies like Slack and Notion, according to Banerjee. She said the 100K-token context window (i.e., the amount of text the model can accept as input) in Claude 2 is enabling new use cases like summarizing long conversations or drafting memos and op-eds.
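A 100K-token window is large but still finite, so callers summarizing long transcripts typically check that a document fits before sending it. The sketch below uses the common rough heuristic of about four characters per token for English text; the constants and function names are hypothetical, and production code would use the provider's actual tokenizer rather than this estimate.

```python
# Rough pre-flight check before sending a long document to a model with a
# 100K-token context window. The ~4-characters-per-token ratio is a rule of
# thumb for English prose, not an exact tokenizer.

CONTEXT_WINDOW = 100_000   # Claude 2's advertised context size, in tokens
RESPONSE_BUDGET = 4_000    # leave room for the model's reply

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(document: str, prompt_overhead: int = 500) -> bool:
    """True if the document plus prompt scaffolding leaves room for a reply."""
    return (estimate_tokens(document) + prompt_overhead
            + RESPONSE_BUDGET) <= CONTEXT_WINDOW

# A long support transcript easily fits in a 100K-token window.
long_transcript = "user: hello\nagent: hi, how can I help?\n" * 2000
print(fits_in_context(long_transcript))
```

If the check fails, the usual fallback is to split the document into chunks, summarize each, and then summarize the summaries.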

Banerjee said that Claude 2 was designed to be helpful, harmless, and honest, and that the company is always trying to improve on these axes in tandem. She also said that Anthropic is following a responsible and measured deployment approach, beginning with a few markets — the US and UK to start — with plans to expand to more regions.

Direct challenge to ChatGPT

Founded in 2021 by former OpenAI research executives Dario Amodei, Daniela Amodei, Jack Clark, Sam McCandlish, and Tom Brown, Anthropic aims to build AI products that people can rely on and to publish research about the opportunities and risks of AI.

The company has raised $1.5 billion in funding to date from investors including Google, Salesforce Ventures, Spark Capital, Sound Ventures, Zoom Ventures, and others. Anthropic has also published over 15 safety research papers on topics such as constitutional AI, societal impacts, interpretability, red teaming, and scaling laws.

Anthropic has also partnered with several companies that are using Claude 2 for various use cases. Some of these partners include:

  • Slack and Notion: These productivity tools use Claude to summarize conversations, draft documentation, iterate based on feedback, create detailed business content, and more.
  • Midjourney: This popular AI tool uses Claude as a content moderator on its Discord channel to make quick categorizations of user-generated content.
  • Zoom: This popular video conferencing platform uses Claude to empower its contact center agents to respond faster and more efficiently to customer queries.
  • Robin AI: This legal service platform uses Claude to detect loopholes and provide recommended language to improve the strength of contracts.
  • Sourcegraph: This code AI platform uses Claude’s improved reasoning ability to give more accurate answers to user queries while also passing along more codebase context.
  • Jasper: This generative AI platform uses Claude to enable individuals and teams to scale their content strategies more quickly.

The growing need for ‘safe’ enterprise chatbots

In an industry dominated by major players like OpenAI, Anthropic is gaining traction by focusing on developing responsible, transparent, and easy-to-use AI solutions. Banerjee highlighted the company’s measured approach to deployment and continuous improvement as key factors in their success. “We measure things a lot. It’s a continuous deployment process,” she said.

Anthropic has also attracted significant attention for its approach to AI security and ethics. The company’s red-teaming dataset, published on Hugging Face, is one of the most widely used datasets in the field, underscoring Anthropic’s commitment to ethical AI practices and to helping clients improve the performance of their AI systems.

The launch of Claude 2 signifies a major milestone for Anthropic as it continues to challenge the status quo in the AI industry. Companies interested in leveraging the power of AI to streamline their operations, improve decision-making, and stay ahead of the competition should keep a close eye on Anthropic’s latest offering.

Anthropic’s launch of Claude 2 comes at a time when the demand for AI technologies is growing rapidly across various industries and domains. However, it also comes with challenges such as ensuring the safety, reliability, transparency, and alignment of AI systems with human values. Anthropic’s approach of combining frontier research with product development aims to address these challenges and create AI systems that are truly helpful for businesses and consumers alike.

