Pinecone, the buzzy New York City-based vector database company that provides long-term memory for LLMs like OpenAI’s GPT-4, announced today that it has raised $100 million in Series B funding at a $750 million valuation. The funding round was led by Andreessen Horowitz.
Pinecone introduced its vector database in 2021: a managed service that lets engineers build fast, scalable applications on top of embeddings from AI models and get them into production quickly. In today’s generative AI era, Pinecone helps engineers connect chatbots to their own company data so the bots return accurate answers rather than hallucinations.
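The retrieval pattern described above can be illustrated with a toy sketch: documents are converted to embedding vectors and stored, and at query time the database returns the stored items most similar to the question’s embedding, which are then fed to the chatbot as grounding context. The minimal in-memory Python sketch below (hypothetical documents and hand-written vectors standing in for a real embedding model and a managed index like Pinecone’s) shows the core idea:

```python
import math

# A toy in-memory "vector index": maps IDs to (embedding, text) pairs.
# A real vector database adds persistence, metadata filtering, and
# approximate nearest-neighbor search at scale on top of this idea.
index = {
    "doc1": ([0.9, 0.1, 0.0], "Q2 revenue grew 12% year over year."),
    "doc2": ([0.0, 0.8, 0.2], "The refund policy allows returns within 30 days."),
    "doc3": ([0.1, 0.1, 0.9], "Our headquarters moved to Austin in 2022."),
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def query(embedding, top_k=1):
    """Return the top_k stored documents most similar to the query embedding."""
    scored = sorted(
        index.items(),
        key=lambda item: cosine_similarity(embedding, item[1][0]),
        reverse=True,
    )
    return [(doc_id, text) for doc_id, (vec, text) in scored[:top_k]]

# An embedding model would normally produce this vector from the user's
# question, e.g. "What is the return policy?"
question_vector = [0.05, 0.9, 0.1]
results = query(question_vector, top_k=1)
print(results[0][1])  # prints the refund-policy document
```

The retrieved text, not the raw vectors, is what gets passed into the LLM’s prompt, which is why this layer is often called the model’s "long-term memory."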
The rise of ChatGPT last fall sent Pinecone soaring, with the tool quickly becoming an integral part of the software stack — the memory layer — for AI applications. The company said that so far in 2023 it has seen an explosion in paying customers — including Gong and Zapier — across all industries and sizes.
Pinecone took off with the explosive shift to generative AI
While the company was founded with an eye on the rise of large language models, the speed and scale of the generative AI shift came as a surprise, Edo Liberty, founder and CEO of Pinecone (and former director of research and head of Amazon AI Labs), told VentureBeat in a Zoom interview.
“It kind of breached the collective psyche,” he said. “It grew gradually but then it just took off overnight.” When ChatGPT launched, he explained, “millions of developers all over the world got excited and got super-creative about the kinds of stuff that you can do with this — they started building amazing applications.”
In addition, he pointed out that generative AI suddenly became a boardroom-level discussion. “It doesn’t matter if you’re an architect or a law firm or a consulting company, this is potentially going to undermine or strengthen and you have to figure out what to do with it,” he said. “I don’t think there’s a single company that I speak with that doesn’t have something going on related to language and AI.”
And interest in Pinecone keeps building among developers, who continue to explore new ways to use LLMs. For example, over the past two months the AI community has been buzzing about the long-term potential of autonomous AI agents, with tools such as Auto-GPT and BabyAGI popping up. “Both of those projects use Pinecone,” said Liberty. “Again, that’s something that drove tremendous growth. I think at some point we were getting 10,000 signups a day.”
The long-term outlook for vector databases
Coincidentally, this week there was a great deal of Twitter chatter about a research paper on the Recurrent Memory Transformer (RMT), an architecture that could allow LLMs to retain information across up to 2 million tokens. Some said this could lessen the need for vector databases; others argued it would not, because the RMT requires much longer inference time.
But Greg Kogan, VP of marketing at Pinecone, told VentureBeat earlier this week that while the company had no comment on the specific paper, “there’s a big gap between something that works in the lab and something that works for large-scale, real-world applications where cost, performance, ease of use, and engineering overhead are important factors. That’s the gap we want to bridge.” He added that chatbots are a breakthrough technology Pinecone leaned into and “found a way to empower for real-world, large-scale applications.”