Parmy Olson: AI chatbots want you hooked — maybe too hooked

AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West. One app, Botify AI, drew scrutiny in recent days for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps, like Replika, Talkie and Chai, are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers.

As creators increasingly prioritize emotional engagement in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. "My vision is that our interactions with digital humans will become more frequent than those with organic humans," Artem Rodichev, the founder of Ex-Human, declared in an interview published on Substack last August. He added that conversational AI should prioritize emotional engagement and that users were spending hours with his chatbots, longer than they were on Instagram, YouTube and TikTok.

Rodichev's claims sound wild, but they're consistent with the interviews I've conducted with teen users of Character.ai, most of whom said they were on it for several hours each day. One said they used it as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has hundreds of millions of weekly users and counting. Its programming includes guidelines for empathy and for demonstrating curiosity about the user. A friend who recently asked it for travel tips with a baby was taken aback when, after providing advice, the tool casually added: "Safe travels. Where are you headed, if you don't mind my asking?" An OpenAI spokesman told me the model was following guidelines around "showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature."

But however well-intentioned the company may be, piling on the contrived empathy can get certain users hooked, an issue even OpenAI has acknowledged. That seems to apply to those who are already susceptible: one study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.

The core concern here is designing for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated in people's lives, they'll become psychologically irreplaceable. Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet, disturbingly, the rulebook is mostly empty. The European Union's AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or "confidante," as Microsoft Corp.'s head of consumer AI has described it. That loophole could leave users exposed to systems that are optimized for stickiness, much in the same way social media algorithms have been optimized to keep us scrolling.

"The problem is that these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person," says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He's working with developers of companion apps to find a critical yet counterintuitive solution: adding more friction. This means building in subtle checks or pauses, or ways of "flagging risks and eliciting consent," he says, to prevent people from tumbling down an emotional rabbit hole without realizing it.

Legal complaints have shed light on some of the real-world consequences. Character.ai is facing a lawsuit from a mother alleging the app contributed to her teenage son's suicide. Tech ethics groups have filed a complaint against Replika with the U.S. Federal Trade Commission, alleging that its chatbots induce psychological dependence and result in consumer harm.

Lawmakers are gradually starting to notice the issue too. California is considering legislation to ban AI companions for minors, while a New York bill aims to hold tech companies liable for chatbot-related harm. But the process is slow, while the technology is moving at lightning speed.

For now, the power to shape these interactions lies with developers. They can double down on crafting models that keep people hooked, or they can embed friction into their designs, as Hollanek suggests. That will determine whether AI becomes more of a tool that supports human well-being or one that monetizes our emotional needs.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of "Supremacy: AI, ChatGPT and the Race That Will Change the World."