My research examines how people interact with emerging technologies. In particular, I'm interested in understanding and predicting human behaviors with AI by deeply understanding the cultural contexts of the people designing and using it. A great deal of my research and theory-building has been related to how AI (usually embodied, like robots or autonomous vehicles) encourages or discourages trust and human emotional attachment. The findings from this work can be applied to the development of robots, autonomous vehicles, and other AI-based technologies that are effective in their roles in human-technology collaborative or cooperative situations.
In addition to looking at human emotion, trust, and decision-making, what sets my work apart is that I situate people's dynamic experiences with technology within larger social systems in order to identify changeable issues that influence how people work with technological systems. In short, as a consultant, I can also work with an organization or institution short-term (such as for a one-off workshop) or long-term (in-house education or research) to predict how people will work and live with complex emerging technologies, and to guide strategy for designing systems that earn people's trust.
I am delighted that, now that it has debuted, I can talk about my work with VICE's creative agency Virtue and Copenhagen Pride on Q, a gender-neutral voice for use in voice assistive technologies. Q was developed in part to contribute to a global discussion about gender and technology. Q can be viewed as an inclusive nonbinary gender option for AI and natural language interactivity that is embedded in things we interact with in our everyday world, from GPS to phone and home assistants, or even robots and appliances. The research work of Anna K. Jørgensen scaffolded this project, so please check it out. In my role as a consultant for Q's development, I spoke with Reuters and WIRED about Q in Alexa, are you male or female: Sexist virtual assistants go gender neutral and The genderless digital voice the world needs right now.
It was a fun surprise to see a bit about my work in Madrid's La Razón newspaper in the mid-February article ¿Salvaría antes a un humano o a un robot? ("Would you save a human or a robot first?").
On January 24, I spoke with Brianna Booth about AI and sexuality as part of the Technology & Consciousness podcast series hosted by the California Institute of Integral Studies in San Francisco, and you can listen to it on Soundcloud here. Also in January, I had fun speaking with my colleague and friend David Gunkel about Alexa, home assistants, ethics, and culture on CBC Radio's The Current in the episode, Do you swear at Alexa?
CNN asked me to talk about people who report feeling humanlike love and affection for AI, such as Akihiko Kondo's feelings for hologram Hatsune Miku in The man who married a hologram. I chatted with Emily Dreyfuss about in-home AI assistants, gender, stereotypes, culture, AI personas, and how kids are relating to humanlike AI in The terrible joy of yelling at Alexa for WIRED.
In October, I was honored to be included in Mia Dand and Lighthouse3's 100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond list.
Alan Winfield has done some really interesting work around the idea of applying Theory of Mind to robots, and Scientific American asked me to comment about it in How to make a robot use Theory of Mind. I've also had the pleasure of speaking with Matt Simon from WIRED several times over the last couple of months. Matt wrote It's time to talk about robot gender stereotypes. In the article, I said about robots as (both) design and user mediums, "It'd be great if somehow we could use robots as a tool to better understand ourselves, and maybe even influence some positive change. Globally, the social movement we're moving towards is equality. So why go backwards? Why refer to gender norms from the 1960s?"
Also in WIRED and authored by Matt Simon, in We need to talk about robots trying to pass as human we talked about how people regard highly humanlike AI. In August, we also had a conversation about robots as an emerging social category and whether that means they can create peer pressure, in his article How rude humanoid robots can mess with your head. Additionally, in August I spoke at UX Week 2018 in San Francisco about Dark Patterns and the ethics of robot design.
In June, The Washington Post included my thoughts in an article that discusses recent research about sex robots and their therapeutic potential. An interview I did with WIRED also came out this week, where I talked about highly humanlike robots and the idea of "deception" in robot and AI design.
Robopsych podcast invited me back as a guest in July, and in Episode 65 Dr. Tom Guarriello, Carla Diana, and I discussed AI, robots, and deception. We unpacked what the word "deception" often means in the context of human-AI interactions, and the factors intertwined with user trust of AI systems, such as user expectations of interactions based on anthropomorphic design cues and brand influences.
The War is Boring blog published a very interesting article about the cultural mythologies of war and how robots will become part of those narratives, What happens when robots fight our wars? Two hypotheses. The article also included a very nice mention of my book, saying:
"Ground-breaking research by Julie Carpenter offers an alternative vision for the impact that robot soldiers could have on the relationship between the military and the state. Her seminal book Culture and Human-Robot Interaction in Militarized Spaces: A War Story is an extensive account of the relationships that have developed between Explosive Ordnance Disposal teams in the U.S. military and their robot comrades in arms."
My thoughts on the topic of sentient AI are included in a March 2018 HP Labs article, The ethics of AI: Is it moral to imbue machines with consciousness?
I was also delighted to read a very kind review written by Dr. Jordi Vallverdú, Robots sexuales: ¿Los límites de nuestra sexualidad... o de la de los robots? ("Sex robots: The limits of our sexuality... or of that of robots?"), of my chapter that is included in Robot sex: Social and ethical implications.