JULIE CARPENTER, PH.D.

AI+People+Culture

Human figures sharing chess board with chess pieces

Humans + AI

My research focuses on human behavior with emerging technologies, especially within vulnerable and marginalized populations. I situate experiences within their larger cultural contexts and social systems to offer a framework for describing what phenomena are occurring and explaining how people's expectations, behaviors, and ideas adapt and change over time as they work and live with new technologies. A great deal of my research and theory-building has been related to how AI in its many forms encourages or discourages trust, informs decision-making, and evokes human emotional attachment. The findings from this work can be applied practically to make human-AI outcomes safe and effective in collaborative and cooperative interactions.

Current work and in the news

For my German-reading friends, the linked issue of Frankfurter Allgemeine Quarterly (FAZ Quarterly) includes a new and very in-depth interview with me about robots, AI, people, technology, and gender, Ist es falsch, den Robotern zu vertrauen? ("Is it wrong to trust robots?").

ACM Interactions asked me to talk about a research project that I've been very close to the past year, so I wrote Why Project Q is more than the world's first nonbinary voice for technology, now available in the December 2019 issue.

I very much enjoyed speaking as part of a panel at the Philadelphia Museum of Art on Saturday, November 16. The theme was "Why is AI a woman?" and this talk was in conjunction with the exhibition Designs for Different Futures. This installation and speaker series explores how design enables planning and imagining our individual and shared futures. On a personal note, my family is from Philadelphia and it is always a pleasure to visit again.

In July, Dr. Heather Roff organized a panel on AI and ethics that I participated in with Drs. Cindy Grimm, Joanna Bryson, and Kate Darling at the Johns Hopkins University Applied Physics Lab in Baltimore. It is truly an honor any time I get to share a stage with any of these brilliant women in science, psychology, technology, national security, ethics, and AI.

I am delighted to share excellent news about Q, a project I acted as research consultant on with VICE, their creative agency VIRTUE, and Copenhagen Pride. In June, Q won a bronze Glass Lion: Lion for Change in Cannes. Also in Cannes, Q won three bronze Radio & Audio awards in the following categories: Not-for-profit/Charity/Government; Sound Design; and Use of Technology/Voice-Activation. Project Q has also been shortlisted for the Beazley Design of the Year award via the Design Museum of London. Q is the world's first gender-neutral voice for use in AI voice-assistive technologies, and was developed to contribute to a global discussion about gender and technology, and eventually, to become an open source resource for everyone. Q can be viewed as an inclusive nonbinary gender option for AI and natural language interactivity that is embedded in things we interact with in our everyday world, from GPS to phone and home assistants or even robots or appliances. The research work of Anna K. Jørgensen scaffolded this project, so please check it out.

I spoke briefly with Dalia Mortada from NPR about Q in Meet Q: The gender-neutral voice assistant, which gave Q a great positive media boost. One of my favorite articles that has covered Q in some detail is the FastCompany piece by Mark Wilson, The world's first genderless AI voice is here. In my role as a research consultant for Q's development, I also spoke with Reuters and WIRED about Q in Alexa, are you male or female: Sexist virtual assistants go gender neutral and The genderless digital voice the world needs right now.

It was a fun surprise to see a bit about my work in Madrid's La Razón newspaper in mid-February, in the article ¿Salvaría antes a un humano o a un robot? ("Would you save a human or a robot first?").

In January, I spoke with Brianna Booth about AI and sexuality as part of the Technology & Consciousness podcast series hosted by the California Institute of Integral Studies in San Francisco, and you can listen to it on Soundcloud here. I also had the fun opportunity to join my colleague and friend David Gunkel to discuss Alexa, home assistants, ethics, and culture on CBC Radio's The Current in the episode, Do you swear at Alexa?

CNN asked me to talk about people who report feeling humanlike love and affection for AI, such as Akihiko Kondo's feelings for hologram Hatsune Miku in The man who married a hologram. I chatted with Emily Dreyfuss about in-home AI assistants, gender, stereotypes, culture, AI personas, and how kids are relating to humanlike AI in The terrible joy of yelling at Alexa for WIRED.

In October, I was honored to be included in Mia Dand and Lighthouse3's 100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond list.

Alan Winfield has done some really interesting work around the idea of applying Theory of Mind to robots, and Scientific American asked me to comment about it in How to make a robot use Theory of Mind. I've also had the pleasure of speaking with Matt Simon from WIRED several times over the last couple of months. Matt wrote It's time to talk about robot gender stereotypes. In the article, I said about robots as (both) design and user mediums, "It'd be great if somehow we could use robots as a tool to better understand ourselves, and maybe even influence some positive change. Globally, the social movement we're moving towards is equality. So why go backwards? Why refer to gender norms from the 1960s?"

Also in WIRED and authored by Matt Simon, in We need to talk about robots trying to pass as human we talked about how people regard highly humanlike AI. In August, we also had a conversation about robots as an emerging social category and whether that means they can create peer pressure, in his article How rude humanoid robots can mess with your head. Additionally, in August I spoke at UX Week 2018 in San Francisco about dark patterns and the ethics of robot design.

In June, The Washington Post included my thoughts in an article that discusses recent research about sex robots and their therapeutic potential. An interview I did with WIRED also came out this week, where I talked about highly humanlike robots and the idea of "deception" in robot and AI design.

Robopsych podcast invited me back as a guest in July, and in Episode 65 Dr. Tom Guarriello, Carla Diana, and I discussed AI, robots, and deception. We unpacked what the word "deception" often means in the context of human-AI interactions, and the factors intertwined with user trust of AI systems, such as user expectations of interactions based on anthropomorphic design cues and brand influences.

The War Is Boring blog wrote a very interesting article about the cultural mythologies of war and how robots will become part of those narratives, What happens when robots fight our wars? Two hypotheses. The article also included a very nice mention of my book, saying:

"Ground-breaking research by Julie Carpenter offers an alternative vision for the impact that robot soldiers could have on the relationship between the military and the state. Her seminal book Culture and Human-Robot Interaction in Militarized Spaces: A War Story is an extensive account of the relationships that have developed between Explosive Ordnance Disposal teams in the U.S. military and their robot comrades in arms."

My thoughts on the topic of sentient AI are included in a March 2018 HP Labs article, The ethics of AI: Is it moral to imbue machines with consciousness?

I was also delighted to read a very kind review by Dr. Jordi Vallverdú, Robots sexuales: ¿Los límites de nuestra sexualidad... o de la de los robots? ("Sex robots: The limits of our sexuality... or of the robots'?"), of my chapter in Robot sex: Social and ethical implications.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.