Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.
As the whole world knows, the field of artificial intelligence (AI) is progressing at breakneck speed. Companies big and small are racing to harness the power of generative AI in new and useful ways.
I am a firm believer in the value of AI to advance human productivity and solve human problems, but I am also quite concerned about the unexpected consequences. As I told the San Francisco Examiner last week, I signed the controversial AI “Pause Letter” along with thousands of other researchers to draw attention to the risks associated with large-scale generative AI and help the public understand that the risks are currently evolving faster than the efforts to contain them.
It’s been less than two weeks since that letter went public, and Meta has already announced a planned use of generative AI that has me particularly worried. Before I get into this new risk, I want to say that I’m a fan of the AI work done at Meta and have been impressed by its progress on many fronts.
For example, just this week, Meta announced a new AI model called the Segment Anything Model (SAM), which I believe is profoundly useful and important. It allows any image or video frame to be processed in near real time, identifying each of the distinct objects in the scene. We take this capability for granted because the human brain is remarkably skilled at segmenting what we see, but with SAM, computing applications can now perform this function in real time.
Why is SAM important? As a researcher who began working on “mixed reality” systems back in 1991 before that phrase had even been coined, I can tell you that the ability to identify objects in a visual field in real time is a genuine milestone. It will enable magical user interfaces in augmented/mixed reality environments that were never before feasible.
For example, you will be able to simply look at a real object in your field of view, blink or nod or make some other distinct gesture, and immediately receive information about that object or remotely interact with it if it is electronically enabled. Such gaze-based interactions have been a goal of mixed reality systems for decades, and this new segmentation technology may allow them to work even if there are hundreds of objects in your field of view, and even if many of them are partially obscured. To me, this is an important and beneficial use of AI.
Potentially dangerous: AI-generated ads
On the other hand, Meta CTO Andrew Bosworth said last week that the company plans to start using generative AI technologies to create targeted advertisements that are customized for particular audiences. I know this sounds like a convenient and potentially harmless use of generative AI, but I need to point out why this is a dangerous direction.
Generative tools are now so powerful that if corporations are allowed to use them to customize advertising imagery for targeted “audiences,” we can expect those audiences to be narrowed down to individual users. In other words, advertisers will be able to generate custom ads (images or videos) produced on the fly by AI systems to optimize their effectiveness on you personally.
As an “audience of one,” you may soon discover that targeted ads are custom crafted based on data that has been collected about you over time. After all, the generative AI used to produce ads could have access to what colors and layouts are most effective at attracting your attention and what kinds of human faces you find the most trustworthy and engaging.
The AI may also have data indicating what types of promotional tactics have worked effectively on you in the past. With the scalable power of generative AI, advertisers could deploy images and videos that are customized to push your buttons with extreme precision. In addition, we must assume that similar techniques will be used by bad actors to spread propaganda or misinformation.
Persuasive impact on individual targets
Even more troubling is that researchers have already discovered techniques that can be used to make images and videos highly appealing to individual users. For example, studies have shown that blending aspects of a user’s own facial features into computer-generated faces could make that user more “favorably disposed” to the content conveyed.
Research at Stanford University, for example, shows that when a user’s own features are blended into the face of a politician, individuals are 20% more likely to vote for the candidate as a consequence of the image manipulation. Other research suggests that human faces that actively mimic a user’s own expressions or gestures may also be more influential.
Unless regulated by policymakers, we can expect that generative AI advertisements will likely be deployed using a variety of techniques that maximize their persuasive impact on individual targets.
As I said at the top, I firmly believe that AI technologies, including generative AI tools and techniques, will have remarkable benefits that enhance human productivity and solve human problems. Still, we need to put protections in place that prevent these technologies from being used in deceptive, coercive or manipulative ways that challenge human agency.
Louis Rosenberg is a pioneering researcher in the fields of VR, AR and AI, and the founder of Immersion Corporation, Microscribe 3D, Outland Research and Unanimous AI.