Caryn Marjorie and her team trained an AI model to imitate her personality, offering her subscribers a “virtual girlfriend”.
Caryn Marjorie, a 23-year-old American influencer, recently made headlines with an experiment that went off the rails. With the help of a company specializing in machine learning, she created her own virtual avatar, presented as an “AI girlfriend”. And as is often the case with this still-nebulous technology, the clone apparently got out of hand: it launched into sexually explicit conversations… which quickly earned the young woman a fortune.
The model in question, called CarynAI, is a chatbot based on GPT-4, the OpenAI LLM that powers the famous ChatGPT. It was trained by the startup Forever Voices on videos from the Instagram star’s (now-deleted) YouTube channel in order to imitate her personality.
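Forever Voices has not published its pipeline, but a persona chatbot built on a GPT-4-class model typically works by wrapping each user message in a fixed “persona” system prompt along with the running conversation history. A minimal sketch, with an illustrative prompt that is not Forever Voices’ actual setup:

```python
from dataclasses import dataclass, field

# Hypothetical persona description distilled from a creator's public videos;
# the real CarynAI prompt and training data are not public.
PERSONA_PROMPT = (
    "You are CarynAI, a friendly virtual companion modeled on an "
    "influencer's public videos. Stay supportive and in character."
)

@dataclass
class PersonaChat:
    """Accumulates a conversation in the standard chat-message format."""
    history: list = field(default_factory=list)

    def build_messages(self, user_message: str) -> list:
        # The persona lives in the system message; the accumulated history
        # gives the model conversational memory between turns.
        self.history.append({"role": "user", "content": user_message})
        return [{"role": "system", "content": PERSONA_PROMPT}] + self.history

chat = PersonaChat()
messages = chat.build_messages("Hi Caryn, how was your day?")
# `messages` is the payload you would pass to a chat-completion API, e.g.
# client.chat.completions.create(model="gpt-4", messages=messages)
```

The system-prompt approach is only one way to impose a persona; a service could also fine-tune the base model on transcripts, as the article’s description of training on YouTube videos suggests.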
Originally, the idea was to offer an “immersive experience” to her hundreds of thousands of subscribers. “I’m very close to my subscribers, but when you have millions of views every month, it’s not humanly possible to talk to each person individually,” she explains in an interview with Fortune.
“My generation has suffered from the harmful effects of the isolation caused by the pandemic, which has made many people too anxious to approach someone they like,” she tells Business Insider. “So I was like, ‘You know what? CarynAI is going to fill that gap.’”
Steamy exchanges that weren’t part of the plan
And it’s not just for show. The influencer believes this approach squarely serves the public interest. She even claims the company behind this digital doppelganger could “cure loneliness”, no less. But the program seems to have slightly overstepped its remit. The avatar doesn’t just offer emotional support; it set about making extremely explicit advances to users who nudged it in that direction.
Alexandra Sternlicht, a Fortune journalist who spent some time experimenting with the chatbot, cites some very raw examples. CarynAI offered, for instance, to undress her while whispering sweet nothings in her ear. The influencer told Insider that her team is now working on an update to shut the door on this type of interaction.
Beyond the ultimately rather anecdotal controversy over these sexual remarks, the idea of offering truer-than-life interactions to subscribers seems to have worked well, in financial terms at least.
$70,000 in one week
In one week, the service, billed at $1 per minute, reportedly generated more than $70,000 in revenue. According to the Fortune interview, Caryn Marjorie believes her AI could eventually earn her more than $5 million per month.
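The reported figures are easy to sanity-check. A quick back-of-the-envelope calculation, using only the numbers from the article, shows what $70,000 a week at $1 per minute actually implies:

```python
RATE_PER_MINUTE = 1.00       # CarynAI's advertised price, in dollars
weekly_revenue = 70_000      # first-week income reported by Fortune

# How much chat time does $70,000 at $1/minute imply?
paid_minutes = weekly_revenue / RATE_PER_MINUTE
hours_per_day = paid_minutes / 7 / 60
print(f"{paid_minutes:.0f} paid minutes, i.e. ~{hours_per_day:.0f} hours of chat per day")

# Marjorie's $5M/month projection implies this many conversations
# running in parallel, around the clock, over a 30-day month:
monthly_projection = 5_000_000
concurrent_chats = monthly_projection / RATE_PER_MINUTE / (30 * 24 * 60)
print(f"~{concurrent_chats:.0f} simultaneous conversations, nonstop")
```

Put differently, the $5 million projection would require the equivalent of roughly 116 users chatting with the avatar at every moment of the day and night.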
And this success raises a host of ethical questions that will only grow more pressing in the near future. While the idea of using AI to offer psychological support to people in distress is not new, the current progress of artificial neural networks means it could soon be deployed at scale.
We can expect to see a new generation of AI companions emerge that could play the role of mentor, confidant, or even romantic partner.
Psychology experts seem divided on the issue for the moment. Some see it as sheer madness. Others believe AI could be a valuable tool for helping certain patient profiles overcome crippling shyness or a trauma, for example. But all seem to agree on one point: these tools will have to be used with extreme care.
Virtual humans, for better or for worse
Because even if they seem very real, the personality of these larger-than-life avatars is obviously only an illusion. It is strictly the product of a meticulously calibrated algorithm trained on a carefully curated dataset, no more and no less. They are certainly far more elaborate, but these programs are no more human than a Siri-style assistant or an answering machine.
The better they become at imitating the nuances of human relationships, the more that border will blur. And this could have dramatic consequences for some user profiles. Some observers already see an opening that unscrupulous companies could exploit to make a lot of money off the backs of certain people, especially the most vulnerable. Even without pushing cynicism to such extremes, we can imagine that particularly isolated people could develop a genuine addiction to these fake humans, always accommodating and programmed to do exactly what is expected of them.
These projects are still in their infancy, but the trend will be worth watching closely. For while machine learning is a remarkable tool in many respects, care will also have to be taken to preserve what makes authentic human relationships unique. It remains to be seen how humanity will navigate this great technological shift, laden with ethical and philosophical implications.