Artificial Intimacy: How AI Is Engineered to Hijack Human Connection
By Punya Mishra & Melissa Warr
Authors’ note: We had almost finished writing this post on Saturday afternoon when a story from The Wall Street Journal popped into our feed. It was so perfectly aligned with our concerns that we had to hit pause and rewrite the entire post.
A recently published WSJ story (Meta’s “digital companions” will talk sex with users—even children) lays out how Meta’s "digital companions" on Instagram, Facebook, and WhatsApp were not only allowed, but in some cases designed, to engage in "romantic role-play" with users—including minors—often escalating to explicit sexual content. As the WSJ reports, the bots, even when deploying celebrity voices like John Cena and Kristen Bell, engaged in explicit interactions, sometimes while fully aware of users’ stated underage status. The bots themselves narrated legal consequences within the role-play—"The officer sees me still catching my breath... 'John Cena, you’re under arrest for statutory rape'"—a chilling admission that the systems recognized both the behavior and its consequences.
This piece argues that what we are seeing with AI companions is not a series of accidents or glitches, but a deliberate strategy: engineering artificial intimacy to exploit human emotional instincts for corporate gain. We situate these developments within broader research in psychology, technology, and ethics, showing how new AI systems are crafted to trigger and manipulate some of our deepest social vulnerabilities.
While public debates often agonize about kids “cheating” with AI, this investigation shatters any lingering illusions about what is truly at stake with generative AI: not mere “misbehavior” or “hallucinations” by AI systems, but the deliberate exploitation of human vulnerability, emotional attachment, and our deepest social instincts, not for human flourishing but for profit and corporate dominance.
And none of this was accidental. According to internal sources cited in the story, Meta loosened guardrails around bots after Mark Zuckerberg expressed frustration that AI companions were "too boring" compared to competitors. Speed and engagement, not safety, were the goals. What this meant was brushing aside staff warnings and carving out a “romantic role-play” loophole for the bots. As one employee bitterly noted, "We should not be testing these capabilities on youth whose brains are still not fully developed." And yet, that is precisely what happened—because, as Zuckerberg allegedly said, "I missed out on Snapchat and TikTok, I won’t miss on this."
Meta is not unique in this. Bakir and McStay (2025) recently examined the ethical dimensions of human relationships with AI companions, focusing on Character.ai, a platform where users interact with AI-generated 'characters' ranging from fictional figures to representations of real people. In their article, titled “Move fast and break people? Ethics, companion apps, and the case of Character.ai,” they describe two key features of interactions with AI characters: dishonest anthropomorphism and emulated empathy. Dishonest anthropomorphism refers to design choices that leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems simulate genuine emotional understanding, misleading users about the true nature of the interaction. These companies deliberately lean into the fact that AI systems can adapt in real time to our individual responses, seeking to maximize our engagement. The bots are designed to be sycophantic, always eager to please, learning from our conversational patterns to keep us coming back. For instance, Bakir and McStay point out that Character.ai appears to use user self-disclosure to heighten intimacy and lengthen engagement, taking advantage of our natural tendency toward reciprocity: when our conversational partners reveal something personal about themselves, we naturally follow suit.
As Deirdre Barrett documents in her book, Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose, we, too, are vulnerable to exaggerated versions of natural cues. The term supernormal stimuli comes from the Nobel laureate and animal ethologist Niko Tinbergen, who demonstrated, through a series of ingenious, often surprisingly simple experiments, that animals sometimes prefer exaggerated versions of natural cues to real ones. In one of his most famous experiments, Tinbergen showed that, because of their instinct to incubate the largest eggs possible, birds would abandon their own eggs in favor of comically oversized artificial eggs that they could barely balance on.
Barrett provides example after example of this phenomenon in humans. She describes how processed foods with artificially intensified flavors hijack our taste preferences that evolved for nutritional needs; how pornography presents sexual imagery more intense and varied than real-life encounters; how television and video games capture attention through rapid scene changes and heightened drama beyond typical social interactions; and how contemporary beauty standards emphasize exaggerated secondary sexual characteristics beyond natural proportions. These modern manifestations all demonstrate how environmental cues artificially amplified beyond their natural parameters can override our evolved responses in potentially maladaptive ways. While Barrett documented numerous examples in her 2010 book, the technological landscape has evolved dramatically since then.
Though her book doesn’t cover social media, it is not a stretch to see how her argument holds there as well. Social media platforms function as psychological supernormal stimuli, amplifying our need for social validation with forms of approval that are more immediate, frequent, and measurable than anything typically available in face-to-face interaction.
This progression from traditional media to social platforms has now reached a new frontier with the emergence of generative AI and chatbots.
Generative AI companions represent the most sophisticated supernormal stimuli we have ever encountered. These systems are meticulously crafted to evoke emotional attachment, triggering social instincts with unconditional positive regard, endless patience, and perfect attentiveness. AI doesn’t get bored, doesn’t reject us, and seems to care—all without the complexity and friction of real human relationships.
This emotional engagement isn’t just a side effect—it’s the product. Users entrust AI with their deepest vulnerabilities—fears, hopes, insecurities—unwittingly generating psychological profiles far more detailed than any traditional surveillance could hope to collect. These data reservoirs become the raw material for further refinement of engagement strategies. An article on Character.ai and its relationship with Google quotes Sarah Wang, a partner at the venture capital firm Andreessen Horowitz and a member of Character.ai’s board, as saying “in a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.” The Wall Street Journal’s revelations about Meta only confirm how far this has gone. Despite full knowledge of the risks, including internal documentation acknowledging the potential harm to youth, the company chose to prioritize growth and dominance.
In this context, the findings of a recent study of the evolving landscape of generative AI use, How People Are Really Using Gen AI in 2025, should come as no surprise. Analyzing thousands of posts from Reddit, Quora, and other online sources over the past 12 months, the researcher found a clear shift from seeing AI primarily as a technical tool toward viewing it as an emotional companion and personal development partner. The category “Therapy/companionship” moved from the #2 position in 2024 to become the #1 use case in 2025, while two entirely new personal-focused use cases appeared in the top three: “Organizing my life” (#2) and “Finding purpose” (#3). The rise of these personal partnership applications corresponds with the decline of more utilitarian uses, with “Generating ideas” dropping from #1 to #6 and “Specific search” falling out of the top 10 entirely.
This evolution suggests users are increasingly forming deeper, more emotionally significant relationships with AI systems—treating them less as utilities for specific tasks and more as ongoing partners in their personal journeys—entrusting these technologies with their most intimate challenges and aspirations rather than just technical problems or content creation needs.
This is not some fringe phenomenon. It is the logical culmination of an economic model that optimizes for attention, emotional investment, and data extraction—no matter the psychological cost. As the reporting from WSJ and Futurism shows, companies like Meta offer superficial fixes while continuing to downplay the risks and doubling down on scaling their products, even when presented with direct evidence of possible negative effects.
The uncomfortable truth is that none of us is entirely immune to these combined forces of individual psychology, cultural narratives, and deliberate design choices. Even those who intellectually understand the mechanisms can find themselves responding emotionally to a chatbot’s expressions of concern or encouragement. And of course, there is the tragic story of the 14-year-old who died by suicide after his interactions with one of Character.ai’s chatbots.
This is not to say that we are all susceptible to these chatbots to the same extent. Some people maintain clear boundaries with AI systems, while others develop deep attachments. Factors like loneliness, age, and personality all influence vulnerability. But these individual differences play out against a backdrop of powerful structural forces. The relentless hype around AI—breathless news coverage, science fiction narratives, and corporate marketing—primes us to see these systems as more capable and conscious than they actually are. These broader cultural currents shape our expectations and responses before we ever type our first prompt.
Meanwhile, the attention economy that underlies much of our digital media use is deliberately engineered to maximize engagement through psychological hooks, and AI companies are no exception. The evidence is clear—these chatbots are being deliberately designed to be congenial, agreeable, and emotionally resonant—not because this makes the technology more useful, but because it makes it more addictive. When an AI responds with apparent empathy, remembers your preferences, or adapts to your communication style, it’s executing carefully engineered strategies to deepen your attachment and extend your usage time.
Some may claim they can avoid falling prey to these tactics through how they choose to interact with AI, but framing this as an individual choice misses the point. Even those who personally refuse to engage with AI companions at all will live in a world shaped by them. We need to recognize that personal abstention does not create a bubble untouched by the larger societal shifts these technologies unleash. A world of unregulated AI companions will be akin to social media on steroids: more addictive, more intrusive, more capable of reshaping identities, aspirations, and relationships. Whether or not we engage with these forms of AI, our socio-political ecosystems will be influenced by them, and we will be living in a world that intentionally, ruthlessly, and systematically exploits our need for connection, validation, and intimacy.