Julie Carpenter, Ph.D.

People + Technology + Culture

TL;DR

My work delves into how identities are shaped, negotiated, and expressed through our interactions with technology—and how these interactions redefine what it means to be human, influencing the very design of the tools we use. With over a decade of industry experience, my research portfolio includes leading UX studies for a major social media platform, pioneering work in human-robot interaction, and consulting on an award-winning digital voice innovation.

Current work & in the news

After years of passion, research, and writing, my latest book has made its way into the world. The Naked Android: Synthetic Socialness and the Human Gaze is now available from Routledge, Amazon, Barnes & Noble, and many other booksellers.

Someone on Bluesky had the very good idea that researchers and science communicators should make Spotify playlists of some of our interviews in case anyone wants to dive deeper into our work, so here's a link to a new playlist of my own.

I was interviewed for The Bunker (UK) podcast episode Will Musk's robot fantasy be a dream or a Terminator nightmare? We delved into the promise of convenience Optimus offers versus the surveillance and data collection it would require to succeed. How much personal information do you want your robot to know? I believe the brand of the robot will play a crucial role for consumers.

I am on Bluesky and no longer on Twitter/X, so I can now be found there occasionally posting and often lurking as @jgcarpenter.com.

I was delighted to be interviewed by Drs. Zena Assaad and Elizabeth Williams for the Algorithmic Futures podcast, SE2E01: Artificial intelligence and the human gaze, with Julie Carpenter. We had a wide-ranging conversation spanning AI, love, intimacy, and the human existential need to be recognized by others, or an Other.

Rachel Metz spoke with me about the big ethical claims of a new chatbot-on-the-block for her Bloomberg article New ChatGPT Challenger Emerges With Claude.

I was recently on the Tomorrow, Today podcast, and the conversation flowed with interviewer and host Nash Flynn in the episode "Artificial intelligence, robots, love, and humanity with Dr. Julie Carpenter". Nash asked about my observations of user adoption and trust issues during the Y2K era, and I connected that to the immediate push afterwards to repair user trust and prepare people for the normalization of the web for retail, online commerce, and e-mail for everyday communication. We also talked about the potential benefits and challenges of using AI in therapy today (e.g., robots and chatbots), the data and privacy issues we all negotiate with smart technologies and the social networks we have come to depend upon, the socioeconomics of tech accessibility, how capitalism and emerging tech are sold to people, ethics, death, uploading consciousness, consent, gender, and ableism.

This was a fun podcast to do: The Future of You: Robot rights and relationships. Tracey Follows, Dr. David Gunkel, and I talk about people, AI, robots, the Metaverse, and how these different mediums offer discovery, adventure, and many ethical questions.

The interview I did with Volume podcast takes a deep dive into the why questions of my research as well as the findings. Host Dr. Filipe Forattini asked about my own path to the research I do now, and I got a rare chance to talk about the journey that led me from art history to film theory to Web development to human-technology interaction. Streaming now, Episode 22.

Would you harm a robot on purpose? "Will the robot scream in pain? Will the robot cry? Will the robot laugh and mock me?" How will others see you? As a hero or as a villain? In S8E6, Humans as Robot Caretakers, of the Command Line Heroes podcast, I talked about the "murder" of hitchBOT in 2015 and what committing robot abuse says about us as humans. What I didn't have the chance to get into as much in this interview is that, of course, context matters. Whether robot "abuse" is perceived as such depends entirely on the people involved in, and the conditions of, a violent human-robot interaction.

I was interviewed for Caterina Fake's podcast, Should this exist? in the episode Grandma, here's your robot. This episode centers on older people and robots and the ethical concerns and challenges around designing robots that can act as social actors with older people. The link leads to the podcast audio as well as the show transcript.

Author Aifric Campbell's new book, The Love Makers, is available, and I am proud to be one of its essay authors with "Robots as Solace and the Valence of Loneliness." Campbell incorporated essays throughout the novel from scientists and pundits exploring how human–machine relationships are changing: "From robot nannies to generative art and our ancient dreams of intelligent machines, The Love Makers blends storytelling with science communication to investigate the challenges and opportunities of emergent technologies and how we want to live."

Really enjoyed my conversation with Johannes Grenzfurthner for the B3 Biennale of the Moving Image. We managed to discuss Blade Runner, AI, ethics, science fiction, storytelling, culture, war, robots, autonomous weapons, and flawed humans.

In May I was honored to be a keynote speaker for the Augmented Authorship – Digital Strategies for Artistic Collaboration symposium hosted by HeK (Haus der elektronischen Künste Basel / House of Electronic Arts Basel). The event was in English and organized by the Master of Fine Arts, Critical Image Practices major at the Lucerne University of Applied Sciences and Arts – Lucerne School of Art and Design, together with HeK. The digital version of the event should be posted online soon.

I am excited to announce that I am writing a new book for CRC Press, The Naked Android: Synthetic Socialness and the Human Gaze. It is about how humans position AI and robots conceptually in relation to ourselves; the (changing) cultural significance of robots as objects, experiences, tools, and artifacts; and how these factors impact everything from robotic concepts to research and development direction, design, advertising messages, and integration into culture.

In Engineering & Technology I spoke a little bit about emotional attachment to tech and also "emotion reading" technologies in: Could you love one of these?

In an interview with Nora Young of Spark CBC I had the opportunity to talk about robots, COVID-19, AI hype, and best practices for introducing and integrating new technologies into your workforce. Read the article (no paywall) and listen to episode 487, Rise of the robots.

Recently I had fun speaking with Should this exist? podcast host Caterina Fake about emotional attachment to robots in an episode exploring the ethics of robots performing eldercare.

In IEEE Spectrum's Why you should be very skeptical of Ring's indoor security drone I had the opportunity to say a few words about drones, surveillance, and the ethics of building consumer digital twins.

Just received a hard copy version of The Free Lunch magazine's first issue, and I am honored to be included in this fantastic new magazine with such great company. Get a copy of this gorgeous publication for yourself, which bills itself as "a magazine on what you can trust in a time when nothing seems to be what it looks like." You can read my interview "Gaze and Golems" online, no paywall. I also recently participated in another Robopsych podcast episode on robots in the time of COVID-19.

I had the pleasure of participating in the second episode of a new podcast, Work and The Future with Linda Nazareth. In Episode 2, How is the pandemic changing the race to automate?, I spoke briefly with Linda about how automation necessitates holistic organizational changes as it is introduced into a business, such as the appropriate training and ongoing support of employees as the nature of their jobs changes with increased integration of technology.

WIRED asked me to talk about robots, AI, and hype during the COVID-19 pandemic. I'm a fan of both AI and robots, but we have to recognize their limitations. During a time of crisis, robots and AI can be very useful for augmenting our own abilities and even helping us answer some questions. However, AI is unable to understand human-centered problems and cultural context through our lens, which is critical. More in If robots steal so many jobs, why aren't they saving us now? and Spot the coronavirus doctor robot dog will see you now.

Late announcement that Living with robots: Emerging issues on the psychological and social implications of robotics is out now, with my chapter Kill switch: The evolution of road rage in an increasingly AI car culture. Kill switch proposes a theoretical framework for exploring anger and frustration within driving experiences as our relationship with vehicles changes. Side note: Elsevier's website currently has my chapter titled incorrectly (hopefully fixed soon).

SXSW has been cancelled, which was the right thing to do. If my panel is rescheduled, I will post information here. I was supposed to speak as part of this panel on the Fantastic Futures track on March 19: Staying human after sex robots become 'perfect.'

For my German-reading friends, there are two new and very in-depth interviews with me about robots, AI, people, technology, human sexuality, ethics, bias, and gender out in the new year: Frankfurter Allgemeine Quarterly (FAZ Quarterly) Ist es falsch, den Robotern zu vertrauen? (01/2020, pp 38-40) and Julie Carpenter: Können wir Robotern vertrauen? Ein Hintergrundgespräch von Magdalena Kröner in the new volume of KunstForum, Bd 265.

I am honored to be included in the new Lighthouse3 100 Brilliant Women in AI Ethics to Follow in 2020 and Beyond list. I am in great company on this list, and I urge anyone putting together an academic conference, seeking speakers for an industry event, or needing expert sources for journalism to download the Diversity + Ethics in AI 2020 report as an excellent SME resource, including this 2020 list. I also suggest bookmarking the related open directory of Brilliant Women Working in AI Ethics.

ACM Interactions asked me to talk about a research project that I've been very close to the past year, so I wrote Why Project Q is more than the world's first nonbinary voice for technology, now available in the December 2019 issue.

I very much enjoyed speaking as part of a panel at the Philadelphia Museum of Art on Saturday, November 16. The theme was "Why is AI a woman?" and this talk was in conjunction with the exhibition Designs for Different Futures. This installation and speaker series explores how design enables planning and imagining our individual and shared futures. On a personal note, my family is from Philadelphia and it is always a pleasure to visit again.

In July, Dr. Heather Roff organized a panel on AI and ethics that I participated in with Drs. Cindy Grimm, Joanna Bryson, and Kate Darling at Johns Hopkins University Applied Physics Lab in Baltimore. It is truly an honor any time I get to share a stage with any of these brilliant women in science, psychology, technology, national security, ethics, and AI.

I am delighted to share excellent news about Q, a project I acted as research consultant on with VICE, their creative agency VIRTUE, and Copenhagen Pride. In June, Q won a bronze Glass Lion: Lion for Change in Cannes. Also in Cannes, Q won three bronze Radio & Audio awards in the following categories: Not-for-profit/Charity/Government; Sound Design; and Use of Technology/Voice-Activation. Project Q has also been shortlisted for the Beazley Design of the Year award via the Design Museum of London. Q is the world's first gender-neutral voice for use in AI voice-assistive technologies, and was developed to contribute to a global discussion about gender and technology, and eventually, to become an open source resource for everyone. Q can be viewed as an inclusive nonbinary gender option for AI and natural language interactivity that is embedded in the things we interact with in our everyday world, from GPS to phone and home assistants, or even robots or appliances. The research work of Anna K. Jørgensen scaffolded this project, so please check it out.

I spoke briefly with Dalia Mortada from NPR about Q in Meet Q: The gender-neutral voice assistant, which gave Q a great positive media boost. One of my favorite articles that has covered Q in some detail is the FastCompany piece by Mark Wilson, The world's first genderless AI voice is here. In my role as a research consultant for Q's development, I also spoke with Reuters and WIRED about Q in Alexa, are you male or female: Sexist virtual assistants go gender neutral and The genderless digital voice the world needs right now.

It was a fun surprise to see a bit about my work in Madrid's La Razón newspaper in the article ¿Salvaría antes a un humano o a un robot? in mid-February.

In January, I spoke with Brianna Booth about AI and sexuality as part of the Technology & Consciousness podcast series hosted by the California Institute of Integral Studies in San Francisco, and you can listen to it on Soundcloud here. I also had the fun opportunity to join my colleague and friend David Gunkel to discuss Alexa, home assistants, ethics, and culture on CBC Radio's The Current in the episode, Do you swear at Alexa?

CNN asked me to talk about people who report feeling humanlike love and affection for AI, such as Akihiko Kondo's feelings for hologram Hatsune Miku in The man who married a hologram. I chatted with Emily Dreyfuss about in-home AI assistants, gender, stereotypes, culture, AI personas, and how kids are relating to humanlike AI in The terrible joy of yelling at Alexa for WIRED.

In October, I was honored to be included in Mia Dand and Lighthouse3's 100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond list.

Alan Winfield has done some really interesting work around the idea of applying Theory of Mind to robots, and Scientific American asked me to comment about it in How to make a robot use Theory of Mind. I've also had the pleasure of speaking with Matt Simon from WIRED several times over the last couple of months. Matt wrote It's time to talk about robot gender stereotypes. In the article, speaking about robots as both design and user mediums, I said, "It'd be great if somehow we could use robots as a tool to better understand ourselves, and maybe even influence some positive change. Globally, the social movement we're moving towards is equality. So why go backwards? Why refer to gender norms from the 1960s?"

Also in WIRED, Matt Simon and I talked about how people regard highly humanlike AI in We need to talk about robots trying to pass as human. In August, we also had a conversation about robots as an emerging social category and whether that means they can create peer pressure, in his article How rude humanoid robots can mess with your head. Additionally, in August I spoke at UX Week 2018 in San Francisco about Dark Patterns and the ethics of robot design.

In June, The Washington Post included my thoughts in an article that discusses recent research about sex robots and their therapeutic potential. An interview I did with WIRED also came out this week, where I talked about highly humanlike robots and the idea of "deception" in robot and AI design.

Robopsych podcast invited me back as a guest in July, and in Episode 65 Dr. Tom Guarriello, Carla Diana, and I discussed AI, robots, and deception. We unpacked what the word "deception" often means in the context of human-AI interactions, and the factors intertwined with user trust of AI systems, such as user expectations of interactions based on anthropomorphic design cues and brand influences.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Follow me on Mastodon @jgcarpenter@fediscience.org