My research examines how people interact with emerging technologies. In particular, I'm interested in understanding and predicting human behaviors with AI by deeply understanding the cultural contexts of the people designing and using it. A great deal of my research and theory-building concerns how AI (usually embodied, as in robots or autonomous vehicles) encourages or discourages trust and human emotional attachment. The findings from this work can be applied to developing robots, autonomous vehicles, and other AI-based technologies that are effective in their roles in human-technology collaborative or cooperative situations.
In addition to looking at human emotion, trust, and decision-making, what sets my work apart is that I situate people's dynamic experiences with technology within larger social systems in order to identify changeable issues that influence how people work with technological systems. In short, as a consultant, I can also work with an organization or institution short-term (such as a one-off workshop) or long-term (in-house education or research) to increase the effectiveness of a product and better understand how people work and live with complex emerging technologies.
In October, I was honored to be included in Mia Dand and Lighthouse3's 100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond list.
Alan Winfield has done some really interesting work around the idea of applying Theory of Mind to robots, and Scientific American asked me to comment about it in How to make a robot use Theory of Mind. I've also had the pleasure of speaking with Matt Simon from WIRED several times over the last couple of months. Matt wrote It's time to talk about robot gender stereotypes. In the article, I said about robots as (both) design and user mediums, "It'd be great if somehow we could use robots as a tool to better understand ourselves, and maybe even influence some positive change. Globally, the social movement we're moving towards is equality. So why go backwards? Why refer to gender norms from the 1960s?"
Also in WIRED, and again authored by Matt Simon, We need to talk about robots trying to pass as human covers our conversation about how people regard highly humanlike AI. In August, we also had a conversation about robots as an emerging social category and whether that means they can create peer pressure, in his article How rude humanoid robots can mess with your head. Additionally, in August I spoke at UX Week 2018 in San Francisco about Dark Patterns and the ethics of robot design.
In June, The Washington Post included my thoughts in an article that discusses recent research about sex robots and their therapeutic potential. An interview I did with WIRED also came out this week, where I talked about highly humanlike robots and the idea of "deception" in robot and AI design.
Robopsych podcast invited me back as a guest in July, and in Episode 65 Dr. Tom Guarriello, Carla Diana, and I discussed AI, robots, and deception. We unpacked what the word "deception" often means in the context of human-AI interactions, and the factors intertwined with user trust of AI systems, such as user expectations of interactions based on anthropomorphic design cues and brand influences.
The War is Boring blog wrote a very interesting article about the cultural mythologies of war and how robots will become part of those narratives, What happens when robots fight our wars? Two hypotheses. The article also included a very nice mention of my book, saying:
"Ground-breaking research by Julie Carpenter offers an alternative vision for the impact that robot soldiers could have on the relationship between the military and the state. Her seminal book Culture and Human-Robot Interaction in Militarized Spaces: A War Story is an extensive account of the relationships that have developed between Explosive Ordnance Disposal teams in the U.S. military and their robot comrades in arms."
My thoughts on the topic of sentient AI are included in a March 2018 HP Labs article, The ethics of AI: Is it moral to imbue machines with consciousness?.
I was also delighted to read a very kind review, written by Dr. Jordi Vallverdú in Robots sexuales: ¿Los límites de nuestra sexualidad... o de la de los robots?, of my chapter included in Robot sex: Social and ethical implications.
In February, Jefferson Graham interviewed me for the USA Today article, Erica, the humanoid robot, is chatty but still has a lot to learn. Additionally, I was honored to be interviewed by Spl/Sgt David Buck in EOD+AI: Working with robots, included in the Jan/Feb issue of the International Association of Bomb Technicians and Investigators (IABTI) magazine, The Detonator.
I had the opportunity to talk about science fiction's influence on real world user expectations of AI and some of the ethical dilemmas of human-robot/AI emotional attachment in an interview on HumanAutonomy blog.
Louisa Hall wrote an interesting article for MIT Technology Review about "How we feel about robots that feel", where she quotes my book extensively in her discussion about human attachment to robots. My point of view about awarding legal rights to robots is quoted briefly in "How soon before we're having sex with our smart homes?" on Intomore.com.