jgcarpenter dot com

Julie Carpenter, Ph.D.

People + Technology + Culture

My research explores how identities and social relationships are shaped, negotiated, and expressed through our interactions with technology, and how those interactions redefine what it means to be human and influence the design of the tools we use.

Current projects

After years of passion, research, and writing, my latest book has made its way into the world. The Naked Android: Synthetic Socialness and the Human Gaze is now available from Routledge, Amazon, Barnes & Noble, and many other booksellers.

~~~

I’ve written something new about artificial general intelligence (AGI), or more precisely, about the belief systems and power structures that have formed around it, in "Why AGI Won't Save The World (And Who Wants You To Think It Will)". This post isn’t about whether AGI is technically possible; it’s about how the promise of AGI has become a kind of secular salvation myth promoted by those who stand to benefit not just financially, but by shaping the narratives that define intelligence, labor, and human worth.

Its proponents use compelling predictions of the future to sell AGI, diverting attention from very real harms (like labor exploitation, environmental cost, and surveillance creep). Their techno-utopian storyline isn't neutral; it's ideological, often cloaked in the language of inevitability and progress. When you trace the AGI narrative to its source, you find not just unaccountable machines but a specific worldview packaged as desirable by the people promoting it.

~~~

I recently joined New Zealand's Sex in Space podcast to talk about the intricate intersections of artificial intelligence and human relationships. Our conversation covered topics such as what people are looking for in intimate AI relationships and the consumer's perspective on how brands can affect whether we trust AI products.

Additionally, under the guise of companionship or assistance, AI systems like chatbots and social robots can engage in subtle forms of emotional manipulation. These interactions raise critical questions about trust, consent, and the commodification of intimacy in a digital age.

Listen to the episode here: Apple Podcasts & YouTube

~~~

I am excited to share that my piece "Asimov Cocktail" is included in the new anthology Roboexotica: Beautiful Failure (AUT: monochrom). Asimov Cocktail begins with an unexpected question I was asked: What would you say to a robot at a cocktail party? What seemed like a lighthearted prompt quickly unraveled into something deeper, reflecting the often simplistic assumptions people make about robots and human-robot interaction, and the ethical blind spots in how people design AI.

Instead of imagining small talk with a robot, I started thinking about the exchange of attention—who’s really watching whom? While the robot might be a charming party guest, it is also silently collecting data on my words, expressions, and habits. And knowing that it’s a robot, do I subtly change how I speak, move, or react under its gaze? As I sip my drink, is it building a digital version of me, to be analyzed, sold, and shaped? Asimov Cocktail is my way of raising a glass to the uneasy, fascinating space between human curiosity, machine perception, and the invisible transactions that happen in every human-robot interaction.

~~~

I had the opportunity to participate in AI in Love/L’IA et l’amour, a virtual event hosted by Policy Horizons Canada. The discussion centered on the ways artificial intelligence is becoming embedded in our relationships—whether as mediators, facilitators, or even direct participants. As AI continues to evolve, the idea of humans forming bonds with AI agents is no longer fiction; it’s a reality we are beginning to navigate.

The event was part of the launch of Policy Horizons Canada’s Future of AI: Policy Considerations report, which explores the societal and ethical implications of AI in various domains. During the panel, I joined subject matter expert Dr. Marisa Tschopp and representatives from the Prime Minister’s Youth Council to unpack how AI might shape intimacy, companionship, and emotional connections over the next decade.

The conversation touched on crucial questions: What does it mean to invite AI into our personal lives? How do we regulate AI-driven relationships, ensuring they are ethical and safe? And ultimately, how do we balance technological innovation with human well-being? These discussions are more important than ever as AI’s presence in our social and emotional spheres expands. In "If not friend, why friend-shaped?", I talk more about the ways AI companion chatbots are designed to engage with people and fulfill social expectations.

It was an insightful experience to engage with policymakers and researchers on these emerging questions. The intersection of AI, relationships, and policy will undoubtedly continue to evolve, and I look forward to seeing where this conversation leads next.

~~~

I joined The Today Programme on BBC Radio 4 for a short interview about what viral videos of people "abusing" robots reveal about human psychology. I was also interviewed the same morning on Adrian Chiles' BBC Radio 5 Live show, where we further explored these viral robot abuse videos, what they reveal about human curiosity (or cruelty) toward machines, and whether they expose deeper questions about empathy, moral boundaries, and perceived agency. These chats led me to write more about the phenomenon of robot mistreatment in "The Ethics of Kicking A Robot" on my blog, Drift.

~~~

I'm quoted in the New York Times article She Is in Love With ChatGPT by Kashmir Hill. The article explores how artificial intelligence reshapes the boundaries between technology and human companionship, raising critical questions about intimacy, connection, and the ethical design of AI systems.

  • At what point does simulated affection become detrimental emotional manipulation?
  • Can AI companionship influence how people prioritize and cultivate human relationships?
  • What data is collected when AI engages in emotionally intimate interactions, and how is it used?

These are some of the urgent questions driving my work, and I’m excited to contribute to this important conversation about the future of human-AI relationships.

~~~

Someone on Bluesky had the very good idea that researchers and science communicators should make Spotify playlists of their interviews for anyone who wants to dive deeper into our work, so here's a link to a new playlist of my own.

~~~

In October 2024, I was interviewed for The Bunker (UK) podcast episode Will Musk's robot fantasy be a dream or a Terminator nightmare? We delved into the promises of convenience Optimus boasts versus the surveillance and personal data collection it would require to succeed.

~~~

I was delighted to be interviewed by Drs. Zena Assaad and Elizabeth Williams for the Algorithmic Futures podcast, SE2E01: Artificial intelligence and the human gaze, with Julie Carpenter. We had a wide-ranging conversation spanning AI, love, intimacy, and the human existential need to be recognized by others, or an Other.

~~~

Rachel Metz spoke with me about the big ethical claims of a new chatbot-on-the-block for her Bloomberg article New ChatGPT Challenger Emerges With Claude.

~~~

I was on the Tomorrow, Today podcast, and the conversation flowed with interviewer and host Nash Flynn in the episode "Artificial intelligence, robots, love, and humanity with Dr. Julie Carpenter". Nash asked about my observations of user adoption and trust issues during the Y2K era, and I connected that to the push immediately afterward to repair user trust and prepare people for the normalization of the web for retail, online commerce, and e-mail for everyday communication. We also talked about the potential benefits and challenges of using AI in therapy today (e.g., robots and chatbots), the data and privacy issues we all negotiate with smart technologies and the social networks we have come to depend upon, the socioeconomics of tech accessibility, how capitalism and emerging tech are sold to people, ethics, death, uploading consciousness, consent, gender, and ableism.


jgcarpenter.com © 2025 by Julie Carpenter is licensed under CC BY-NC-ND 4.0.

Follow me on Mastodon @jgcarpenter@fediscience.org