Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
The sprint to develop LinkedIn’s recently unveiled generative AI tools took only three months, Ya Xu, VP of engineering and head of data and artificial intelligence (AI), told VentureBeat in an interview.
The timeline, she said, was “unprecedented” for a large company like LinkedIn, given the many changes engineering and product teams executed around OpenAI’s latest GPT models, like ChatGPT and GPT-4, as well as some open-source models. These include generative AI-powered collaborative articles, job descriptions and personalized writing suggestions for LinkedIn profiles.
For example, she explained, her teams were able to generate job descriptions and serve live traffic within just one month. Cross-functional teams with shared goals and needs are critical, she added: “It’s not about working 20-hour days or leaving the office late. It’s about dropping other things and focusing on what is important to get the work done.”
Since LinkedIn is owned by Microsoft, Xu said she does get a “front-row seat in seeing the potential of this technology ahead of time.” So together with LinkedIn CEO Ryan Roslansky and other colleagues, Xu quickly moved last fall to envision how ChatGPT and other GPT models could create more economic opportunities for LinkedIn members and customers.
LinkedIn prioritized an engineering philosophy
Early on, Xu said that her team prioritized an engineering philosophy “rooted in exploration over building a mature final product.” The maturity for the right features and experiences would occur over time, she explained, but the exploration was encouraged by putting generative AI technology in the hands of every engineer and product manager that was interested.
That exploration was boosted by creating the LinkedIn Gateway, which allows access to OpenAI models and open-source models from Hugging Face, as well as offering LinkedIn’s Generative AI Playground, which allows engineers to explore LinkedIn data with the advanced generative AI models from OpenAI and other sources. The company also brought together engineers for LinkedIn’s largest-ever internal hackathon, featuring thousands of participants.
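The article doesn’t detail how the LinkedIn Gateway works internally, but the idea it describes — one internal interface that routes requests to either a hosted model API or an open-source model — can be sketched in a few lines. All names below are hypothetical illustrations, not LinkedIn’s actual Gateway API, and the backends are stubs rather than real model calls:

```python
# Hypothetical sketch of a model "gateway": a single internal interface
# that routes completion requests to different generative AI backends
# (e.g. a hosted API vs. a local open-source model). All names are
# illustrative, not LinkedIn's actual Gateway.
from typing import Callable, Dict


class ModelGateway:
    """Route completion requests to a registered backend by model name."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model: str, backend: Callable[[str], str]) -> None:
        self._backends[model] = backend

    def complete(self, model: str, prompt: str) -> str:
        if model not in self._backends:
            raise KeyError(f"unknown model: {model}")
        return self._backends[model](prompt)


# Stub backends standing in for a hosted API and a local open-source model.
gateway = ModelGateway()
gateway.register("hosted-gpt", lambda p: f"[hosted] {p}")
gateway.register("open-source-llm", lambda p: f"[local] {p}")

print(gateway.complete("hosted-gpt", "Draft a job description"))
```

The value of such a layer is that every engineer and product manager gets the same entry point regardless of which model family sits behind it, which matches the broad-access philosophy Xu describes.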
In addition, all LinkedIn employees needed to develop a better understanding of how large language models work, said Xu, including how to do prompt engineering, and what potential problems and limitations they have.
“We provided education at different levels, such as company-wide meetings, lunch and learn sessions, and deeper education for those more heavily involved in AI development and R&D,” she said.
Being collaborative was also a big part of integrating and supporting generative AI. “Because of our collaborative culture, we encouraged different teams to share resources,” she said, so that they could quickly develop in a time when the number of developers who could access certain generative AI models was limited due to capacity. “We passed on learnings from team to team about quotas, access, prompting patterns, and other best practices, so that they could better help one another,” she added.
Running fast — but together
Xu also emphasized that LinkedIn realizes that there are areas in the generative AI process that need to be done centrally. While there is always a tension between running fast and running together, she explained, the company tries to keep those checks and balances, especially when it comes to responsible AI. “Even though this may slow down the team a little bit, we need to be very thoughtful,” she said.
For example, the company evaluates articles generated by AI and puts them through an evaluation pipeline. Human reviewers score the outputs, and the team iterates, changing its prompt engineering until it reaches a score it is happy with. LinkedIn is very deliberate, Xu explained, about what kind of risk is okay and what is not okay. They have a low tolerance for bad content but are willing to tolerate some gray-area content, and they rely on human contributors to flag those for them to take down.
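The iterate-until-acceptable loop described above can be sketched as a small function: generate content from a prompt, have a reviewer score it, and move to a revised prompt if the score is below the bar. The generation and scoring functions below are stubs for illustration; in the pipeline the article describes, the scores come from human reviewers and the outputs from production models:

```python
# Illustrative sketch of an evaluate-and-iterate prompt loop: try prompt
# variants in order and accept the first whose reviewed output clears a
# quality threshold. Not LinkedIn's actual pipeline; all stubs.
from typing import Callable, List, Optional, Tuple


def tune_prompt(
    prompt_variants: List[str],
    generate: Callable[[str], str],
    review_score: Callable[[str], float],
    threshold: float = 0.8,
) -> Optional[Tuple[str, str, float]]:
    """Return (prompt, output, score) for the first variant whose output
    clears the review threshold, or None if none does."""
    for prompt in prompt_variants:
        output = generate(prompt)
        score = review_score(output)
        if score >= threshold:
            return prompt, output, score
    return None


# Stub generator and reviewer for demonstration only.
generate = lambda p: p.upper()  # stands in for a model's output
review_score = lambda text: 0.9 if "ARTICLE" in text else 0.3

result = tune_prompt(
    ["write something", "write a collaborative article"],
    generate,
    review_score,
)
```

Returning `None` when every variant fails mirrors the low-tolerance policy Xu describes: content that never clears the bar simply doesn’t ship.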
LinkedIn wants to avoid any bad and disruptive information and only allow for content that is safe and informative, she added. For example, she pointed to Kevin Roose’s recent New York Times article that included a transcript of a chat with Microsoft’s Bing chatbot. LinkedIn would be worried if someone shared instructions on how to make a bomb, but a chat giving bad advice on how to complete a task — or in Roose’s case, commenting on his marriage — is less of a concern.
“The technology cannot just be living in a lab, we’ve got to put it in front of people,” Xu said. “Then people can make the best use of it and use it in ways that we never would have anticipated in the lab. But we needed to make sure we have the right process.”