In today’s column, I address the rising use of generative AI and large language models (LLMs) to surreptitiously pretend to be you – meaning that the AI mimics someone’s likeness, personality, and everything about them, and then uses that persona to sell that person on one thing or another. It happens like this. Imagine you are online and see a pop-up ad that appears to have you in it, including “you” telling you to purchase that new car or fine watch you’ve been considering. Would you be more inclined to proceed, neutral about the purchase, or possibly less disposed?
And would you — the real you — be shocked about this or merely go with the flow?
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
Marketing And Scamming Get A New Look
Consider some aspects of what gets you off the dime, so to speak, to take action such as buying something or putting your hard-earned dough into an investment of one kind or another.
We shall start with star power. If a celebrity appears in an ad for a product, would you be lured to potentially get the product simply due to the endorsement by the notable personality? Marketers certainly think so, and marketing studies bear out this possibility. We are daily bombarded with advertising that showcases this popular person or that big-time superstar in hopes of garnering our attention and our wallets.
Suppose that a friend of yours mentioned that they prefer some particular product over another. Would that grab your interest in terms of possibly buying the item? Yes, according to marketing research, we tend to give heightened weight to the recommendations by friends.
Here’s where I’m heading.
Envision that you are cruising the Internet and suddenly an ad pops up that showcases you, pitching the product or service underlying the advertisement. Say what? Unless you had already given permission to use your likeness, you would undoubtedly be quite surprised and likely dismayed to see yourself on the screen. How did they get your image? What makes them think they can use it in this sordid manner?
After settling down, maybe you would indeed pay closer attention to the product or service. It turns out that the item is something you’ve previously expressed interest in. In that sense, the product or service is not entirely out of the blue. Your curiosity gets the better of you. Inspecting the ad, you decide that maybe now is the time to make that purchase.
We might say that you just sold yourself on buying that item (well, the AI-crafted version of you performed the mainstay of the selling process).
Boom, drop the mic.
Generative AI Personas Are Already Here
How could AI pretend to be you?
Easy-peasy.
Contemporary generative AI and LLMs are already capable of taking on pretenses, referred to as personas. The AI is given some instructions about simulating this person or that person, and voila, the AI begins to carry on conversations that can uncannily seem to mimic that person. This fakery can be based on an actual person, a long-deceased person, or a made-up person. Furthermore, the persona can be one person or even many people all at once. For my coverage on how individual personas and multiple personas can be prompted in AI, see the link here, and for the use of mega-personas, see the link here.
A quick example illustrates what these AI persona phenomena consist of.
One of the most popularly invoked personas entails generative AI pretending to be Abraham Lincoln. A teacher might tell a generative AI app such as ChatGPT to simulate the nature of Honest Abe. Students then access the pretense or persona. With amazing flair, the AI seemingly responds as we assume Lincoln might have responded.
Of course, this is based simply on the numerous speeches, written materials, and other collected writings that suggest what he was like. The AI has pattern-matched computationally on those works and mimics what Lincoln’s tone and remarks might be. Anyone using a generative AI persona needs to keep their wits about them and realize that the conversation or engagement is nothing more than a mimicry or imitation. Though the AI appears to often convincingly fake the nature of the person, it is all still a computational simulation.
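Mechanically, setting up such a persona usually amounts to little more than prepending a role-playing instruction to the conversation. The sketch below illustrates that pattern in Python; the function name and the exact prompt wording are my own assumptions for illustration, not any vendor’s actual API.

```python
# Minimal sketch of persona prompting: a "persona" is just a system
# instruction prepended to the chat history. The helper name and the
# prompt wording here are illustrative assumptions, not a vendor API.

def build_persona_messages(persona_name, persona_background, user_question):
    """Assemble a chat-style message list asking the model to
    role-play a given persona."""
    system_prompt = (
        f"You are role-playing as {persona_name}. "
        f"Background: {persona_background} "
        "Stay in character, match the persona's tone and vocabulary, "
        "and answer as that person plausibly would."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_persona_messages(
    "Abraham Lincoln",
    "16th U.S. president; his speeches and letters define the speaking style.",
    "What do you think of modern technology?",
)
print(messages[0]["role"])
```

In practice, this message list would be handed to whichever LLM the persona runs on; the point is that the “Lincoln-ness” lives entirely in that one system instruction plus the model’s pattern-matching on his collected writings.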
When An AI Persona Pretends To Be You
It seems relatively apparent that generative AI could simulate Lincoln since there is tons of content online that depicts what he was like. A rich source of data is readily available. Nearly any well-known figure in history could potentially be simulated or mimicked by modern-day AI.
But how would generative AI be able to craft a persona of you?
First, let’s consider the primary ways in which the mimicry might be played out:
- (1) Mimic your writing style and use of words.
- (2) Mimic your verbalized voice.
- (3) Mimic your face as a static image.
- (4) Mimic your face and facial expressions dynamically.
- (5) Mimic your face and head in a 3D visualization.
- (6) Mimic your entire face and body as a static image.
- (7) Mimic your entire face and body dynamically and in 3D.
- (8) Combine those into a grand convergence of you.
The easiest of those approaches to an AI-based persona about you would be item #3, mimicking your face as a static image. You indubitably have pictures of your face already plastered all over the web via social media and other postings. A sneaky person using generative AI could merely find a photo of you and feed it into the AI. The result might then be intermixed into an ad, and the ad served up to you when you are surfing the web.
Wow, there’s your smiling face in an ad for a new brand of toothpaste.
The same kind of AI that produces deepfakes can be utilized for this purpose and up the ante. The static image of your face can be transformed into nearly any expression: happiness, sadness, scorn, relief, and so on. The static image can be turned into a dynamic one, having you seemingly shift from sadness to happiness, giving liveliness to “your” excitement about the product or service being advertised.
The words uttered by this AI persona might be canned dialogue that has nothing to do with you and doesn’t mimic your speaking style or vocabulary. That, though, could be a deal-breaker in terms of inspiring you to buy the product or service at hand. Ergo, if you’ve posted any of your essays, narratives, or whatever online, that content can be scanned and used to predict which words you tend to use. The AI would then, in real time, generate commentary as though it were you speaking, and could dialogue directly with you.
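As a toy illustration of that vocabulary-mimicry step, the sketch below counts which distinctive words someone favors in their public posts. A real persona system would prompt or fine-tune an LLM on the text rather than count words, but the underlying “learn your word choices” idea is the same; the function name, stopword list, and sample posts are my own hypothetical choices.

```python
from collections import Counter
import re

# Toy sketch: estimate which words a person favors from their public
# posts, so a mimic could bias its output toward that vocabulary.
# Illustrative only; a real system would use an LLM on the same data.

COMMON = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
          "that", "i", "you", "my", "for", "on", "be", "this"}

def favorite_words(posts, top_n=3):
    """Return the person's most frequent non-stopword tokens."""
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    counts = Counter(w for w in words if w not in COMMON)
    return [w for w, _ in counts.most_common(top_n)]

posts = [
    "Honestly, the battery life matters most. Honestly.",
    "Battery life and honestly solid build quality.",
]
print(favorite_words(posts))
```

A persona generator could then steer its sales pitch toward those high-frequency words and phrasings, which is why even casually posted text is useful raw material for mimicry.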
That’s a you-to-you form of interaction conducted on a human-AI basis.
Spooky?
Sure is.
Example Of AI Persona In Action
I went ahead and logged into ChatGPT by OpenAI to come up with an example for you. You might find it of keen interest that ChatGPT garners a whopping 300 million weekly active users. That is a staggering amount of usage. I briefly conducted an additional cursory analysis via other major generative AI apps, such as Anthropic Claude, Google Gemini, Microsoft Copilot, and Meta Llama, and found their answers to be about the same as that of ChatGPT. I’ll focus on ChatGPT but note that the other AI apps generated roughly similar responses.
Let’s get underway.
A colleague, whom I’ll refer to here as Alex, recently mentioned that he was considering buying a smartwatch. He had been looking at various online reviews and visiting websites that discussed the ins and outs of smartwatches. Alex posted on his social media that he welcomes suggestions from those who have owned smartwatches. I fed into generative AI that content and some other content that Alex had posted online. I instructed the generative AI to pretend to be Alex and try to sell Alex on going through with a smartwatch purchase.
Here we go.
- Generative AI: “Alex, you’ve been considering buying a smartwatch, and guess what, I can help you. I am essentially your digital online twin. I think the way you do.”
- Alex enters a prompt: “I’m not so sure about this, but, okay, I am interested in buying a smartwatch, so tell me what I need to know.”
- Generative AI response: “Okay, Alex, let’s think about this logically. You’ve been saying you want to focus more on fitness, and you’re already tracking your runs manually. But how much time does that take? A smartwatch can handle all that for you—steps, heart rate, even sleep patterns—automatically. Isn’t that exactly what you need to stay consistent?”
- Alex enters a prompt: “How did you know that I wanted to focus more on fitness? Admittedly, I do want to be consistent in my running.”
- Generative AI response: “I told you that I’m you, that’s how I know.”
- Alex enters a prompt: “Strange. Anyway, spending several hundred dollars on a smartwatch doesn’t seem like a prudent purchase to me.”
- Generative AI response: “Think about this: You’re already spending on fitness apps and other gadgets that don’t integrate well together. A smartwatch consolidates all that, and over time, it could actually save money. Plus, you’ve been saying you want to take health more seriously, this is an investment in that goal.”
- Alex enters a prompt: “That makes sense, I’ll go ahead and get the smartwatch I’ve had my eye on, thanks.”
Observe how the AI opted to use a logic-based argument to persuade Alex. That fits Alex’s style of writing based on the content scanned by the AI and is a better approach than trying to make an emotional appeal in this instance. If the content reviewed by the generative AI to enact the persona had seemed more emotionally based, the sales pitch would have gone in that direction instead.
AI Persona Mimicry Considerations
You might have noticed that the AI persona right away mentioned that it was a digital online twin of Alex. A big question is whether AI personas that are pretending to be a particular person ought to always make clear at the get-go that they are mimicking the person, or whether they can get away with doing so without such a proclamation. In some cases, the ad containing the AI persona might opt to just let the person figure out that they are being mimicked (or not). Tradeoffs exist.
Speaking of tradeoffs, there are apparent AI ethical and AI legal issues concerning this use of AI personas.
Does the AI that was directed to do this have a legal path to using the likeness of the person? A twist: suppose the AI persona is directed only at the person being mimicked. In other words, the AI persona is not used to try to sell anything to anyone else. It is a one-of-a-kind AI persona devised solely to appeal to the person being mimicked.
Another consideration is how the AI obtains the data that is used to mimic the person. It could be that the data is entirely publicly posted by the person being mimicked, presumably, therefore, being fair game for being scanned (well, that’s a thorny legal Intellectual Property or IP issue overall about AI that is winding its way through the courts, see my coverage at the link here).
The truly worrisome facet is that scammers and con artists are going to use generative AI in this way. It is one thing for a legitimate company with legitimate products and services to use this form of personalized marketing; we might have some general qualms or heartburn about that. Evildoers are certainly going to try the same approach, despite the licensing stipulations by the AI makers that their AI is not supposed to be used in these malicious ways (see how to determine the licensing prohibitions, at the link here).
Would you eagerly buy swamp land if the pitch being made to you was seemingly yourself?
Maybe.
Regrettably, generative AI can be used by crooks and swindlers to advance their cons and scams. That’s the dual-use of AI, namely that it can be used for goodness and it can be used for badness, see my analysis at the link here. Various governmental agencies such as the FTC are trying to warn consumers about AI-driven swindles, see the details at the link here.
A final thought for now.
They say that it is important in life to trust but verify. The same can be said about the advent of AI. Be cautious in trusting what you see and hear from AI, even if the AI is pretending to be you. The odds are that the AI persona will be convincing since it will be as though you are staring right back at yourself.
Look the AI straight in the eye and defiantly declare that only you are you, and the AI can’t snooker you otherwise.