Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success.
When and why should any enterprise consider a generative AI deployment?
That’s a question Rackspace is hoping to help answer with a series of new services, announced today, designed to help organizations better understand, build and deploy artificial intelligence (AI) workloads in both the public and private cloud.
Rackspace has undergone tremendous changes in recent years. The company got its start literally as a rack space — that is, a space where an organization could rent racks of servers — in a model commonly referred to as co-location. The company was an early pioneer in the cloud space as a cofounder of the OpenStack open source cloud infrastructure project. At one point, Rackspace had aspirations of being a large public cloud provider that could rival Amazon, but that didn’t come to pass. Today, Rackspace provides services that help enterprises operate workloads on public cloud vendors. Rackspace also has a large private cloud business that uses OpenStack.
AI is a use case that cuts across both public and private cloud, and it is a key priority for Rackspace. This is where the new Foundry for Generative AI by Rackspace (FAIR) services fit in. The platform provides capabilities to help organizations understand GenAI use cases, incubate development, and then create industrial-grade deployments with governance and analytics.
“After all of the excitement created with ChatGPT, you’re trying to look at it to understand the real-life applications and how quickly it can transform everything,” Rackspace EVP and CTO Srini Koushik told VentureBeat.
How Rackspace is using generative AI to boost its own capabilities
Rackspace’s journey in offering generative AI services started when the company was trying to figure out how it could use the technology itself.
In February of this year, the company began to develop its own GenAI service for an internal use case. Koushik noted that Rackspace has decades of information about all of its work around different customers, configurations and technologies. Like many organizations, Rackspace’s data about its own operations and customers is spread across different applications.
In a bid to unify that data in a way that makes it easily searchable and queryable, the company developed Rackspace Intelligent Co-pilot for the Enterprise (Rackspace ICE). The platform uses a large language model (LLM) trained on a corpus of the company's data, all of it accessible through a natural language processing (NLP)-based interface.
This same internal use case can now be leveraged by its customers to help them get a better handle on enterprise data.
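Rackspace has not published ICE's internals, but a query interface over scattered enterprise documents is commonly built as a retrieval layer that selects relevant records and hands them to an LLM as context. The sketch below is purely illustrative of that general pattern: the document names, the naive keyword-overlap scoring, and the prompt format are all assumptions, not Rackspace's implementation.

```python
# Illustrative sketch only: a toy retrieval layer that picks relevant
# internal documents and assembles a prompt for an LLM. Rackspace ICE's
# actual design is not public; all names and scoring here are hypothetical.

def score(query: str, doc: str) -> int:
    """Naive relevance score: how many query words appear in the doc."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.lower())

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the ids of the k highest-scoring documents."""
    ranked = sorted(corpus, key=lambda doc_id: score(query, corpus[doc_id]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Join the retrieved context with the user question for an LLM call."""
    context = "\n".join(corpus[i] for i in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical records standing in for tickets and configs spread
# across different internal applications.
corpus = {
    "ticket-001": "Customer Acme runs OpenStack private cloud with Kubernetes.",
    "ticket-002": "Billing escalation for Globex resolved in March.",
    "config-007": "Acme cluster: 64 vCPUs, 4 GPUs, low-latency storage tier.",
}

prompt = build_prompt("What GPU capacity does Acme have?", corpus)
# `prompt` would then be sent to whichever LLM backs the NLP interface.
```

In a production system the keyword overlap would typically be replaced by embedding-based similarity search, but the shape of the pipeline (retrieve, assemble context, query the model) stays the same.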
Who needs GenAI anyway?
Amidst all the hype and interest surrounding generative AI, many organizations are exploring options for using the technology.
The first step of FAIR is what Koushik referred to as “ideation.” The goal is to understand what use cases are possible with GenAI and how they might fit into a given enterprise environment. Part of this phase includes analyzing an organization’s readiness to adopt a generative AI strategy.
“A big part of that is ‘Where’s your data and what type of data is available to train?’” he said.
The second phase of the FAIR model is incubation, during which Rackspace looks at the technology and its viability within an enterprise. The third and final phase is “industrialize,” where Rackspace looks to bring the AI model to production with the right security, governance and analytics.
OpenStack private cloud and AI
When it comes to AI deployments, Rackspace will be leaning on both the public cloud as well as its own private cloud.
Koushik noted that on the private cloud side, Rackspace is able to provide high-performance computing, low-latency storage and high-speed networking across the globe. This is evolving into a next-generation private cloud that can help optimize AI and machine learning (ML) workloads.
This next-generation private cloud gives customers the ability to create clusters with virtual CPU and GPU capacity in a Kubernetes container environment running on OpenStack infrastructure.
“That allows us to create these high performance computing environments on a custom basis for customers,” Koushik said.
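Koushik does not detail the cluster configuration, but requesting GPU and vCPU capacity for a workload in a Kubernetes environment like the one described generally comes down to a pod manifest with resource requests and limits. The sketch below builds such a manifest as a plain Python dict; the pod and image names are placeholders, and the `nvidia.com/gpu` resource name assumes NVIDIA's Kubernetes device plugin is installed on the cluster.

```python
# Illustrative only: a minimal Kubernetes pod manifest requesting vCPU and
# GPU capacity, built as a plain dict. Pod/image names are placeholders;
# "nvidia.com/gpu" assumes the NVIDIA device plugin is present.
import json

gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ml-training-pod"},  # placeholder name
    "spec": {
        "containers": [
            {
                "name": "trainer",
                "image": "example.registry/ml-trainer:latest",  # placeholder
                "resources": {
                    # The scheduler places the pod on a node that can
                    # satisfy both the CPU/memory requests and GPU limit.
                    "requests": {"cpu": "8", "memory": "32Gi"},
                    "limits": {"cpu": "16", "memory": "64Gi",
                               "nvidia.com/gpu": "2"},
                },
            }
        ]
    },
}

# Serialized to JSON (or YAML), this is the kind of manifest that
# `kubectl apply -f -` accepts against a GPU-enabled cluster.
manifest = json.dumps(gpu_pod, indent=2)
```

In Kubernetes, GPUs are extended resources: they can only appear in `limits` (the request is implied), which is why the GPU count sits there rather than under `requests`.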
Making it easier for organizations to build and deploy AI is a critical challenge, said Koushik, although he doesn’t see AI as an existential threat to humanity.
“I’m not one of these guys who says AI is going to rule the world and they’re going to be our overlords,” he said. “But the impact of what we do from a knowledge worker standpoint is very noticeable and you can see it today, so for Rackspace, one thing we’ve got to do is help our customers be ready for that — and that’s what we’re doing.”