Julie Carpenter, Ph.D.



My work has taken a path from film and media theory to research on human behavior with emerging technologies. To me, the connection seems very clear because I see both paths of inquiry scaffolded by an interest in human communication through and with technologies. Münsterberg [1] said that the arrangement of events in film correlates with the way we think, and that idea, in part, ignited my interest in human communication via technologies years ago when I began looking at film critically as a cultural medium and an artifact of expression and interpretation.

In my earliest years in Web development, part of my job was working on Y2K preparation and response for an organization in the banking sector. The role included everything from coding the sites to acting as a Y2K expert and taking calls from bank owners and members of the general public who had concerns about the safety of their financial holdings. This era clearly demonstrated to me the user-centered design and development challenges connected to emotion-centered aspects of how people interacted with the information we presented to them or that they read from other sources on the Web. Certainly, I began to think about factors like the importance of building and maintaining (or repairing) user trust in their experiences online. Over the years my professional and academic interests grew to encompass people's interactions with a variety of technology artifacts, usually AI in some form, such as software, robots, voice user interfaces (VUI), or autonomous cars.

My background in film theory and the learning sciences influences my view of AI as a communication medium not dissimilar to other forms of storytelling in its theoretical and practical constructs. As a simplified example of this way of thinking, producing a technology using AI requires people to make a series of thoughtful, collaborative, creative, and cooperative decisions that produce something meant to convey specific messages to other people, or end users, via the AI artifact (e.g., a website, robot, or VUI). Realistically, these development decisions are also shaped by real-world project constraints, such as timeline, resources, and budget, and those constraints are likewise reflected in the medium. How people receive, interact with, and interpret (or ignore) these messages in the AI artifact is based on their own subjective mental models, is context-dependent, and is always situated within larger cultural systems of beliefs.

Thus, my work explores the cultural influences and narratives that shape people, people's reciprocal influence on technology design, and the cultural narratives that form around that technology. The methods and strategies I use to explore these topics are human-centered and rooted in social science ways of understanding individuals and their relationships to others and to the world around them. In other words, I am interested in who gets to make decisions about developing technologies, the narratives they tell, and the stories we individually have about living with technology.

AI development is moving quickly, yet sociocultural, human-centered research in the field is still nascent. This void in research will have an enormous impact on people's safety as technology is increasingly enmeshed in our everyday communication, social, medical, defense, space exploration, energy, agricultural, and humanitarian aid systems and infrastructures. Developing AI without attending to the human factors will carry a heavy ethical cost given the roles AI plays, and how it will continue to influence the human experience.

[1] Münsterberg, H. (1916). The photoplay: A psychological study. New York: D. Appleton & Company. (2005 ed., A. Longhurst & A. Feilbach, Eds., Project Gutenberg EBook #15383).

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.