The quest to discover how Artificial Intelligence may choose to represent itself in human form when interacting with people has taken a new, interesting twist.
A team of international researchers, including Dr Edmond Awad from the University of Exeter, has launched a new “citizen science” project to help learn how AI will choose to appear to humans.
Called The Face Game, the online project sees participants pit their wits against AI programs by rating whether profile pictures look like ‘team players’ or more ‘self-seeking’ individuals.
The game aims to understand how AI will learn to choose different faces for itself, depending on the impression it wants to make and the human it is interacting with.
The online experiment is led by researchers from the Max Planck Institute for Human Development, with fellow researchers from the Toulouse School of Economics, the University of Exeter, and the University of British Columbia, together with the Universidad Autonoma de Madrid and Université Paris Cité.
Dr Awad, a Senior Lecturer at the University of Exeter’s Department of Economics and Institute for Data Science and Artificial Intelligence, and part of the project, said: “This game provides a valuable tool for examining on a worldwide scale how individuals engage with each other and AI algorithms through the use of profile pictures.”
The Face Game is designed to help researchers navigate the future of digital interactions with AI. Profile pictures are in abundant use across social media and online platforms, and play a crucial role in shaping the first impression we make on others.
Currently, AI gives people the digital tools to transform their online appearance in any way they desire, such as making themselves look younger.
However, AI is not only helping us play this ‘face game’ amongst ourselves, but is also learning the game from us and quietly deciding which face it will showcase as itself when interacting with us.
“As we increasingly come across AI replicants with self-generated faces, we need to understand what they learn from observing us play the face game and ensure that we retain control over how we interact with these digital entities,” says Iyad Rahwan, Director at the Center for Humans and Machines at the Max Planck Institute for Human Development. His research center explores ethical questions concerning AI and the concept of Machine Behavior.
This project comes from the research team that developed the Moral Machine, a massive online experiment that went viral in 2016. It explored the ethical dilemmas faced by autonomous vehicles, highlighting universal principles as well as cross-cultural differences in how people want AI to behave. The results were published in leading journals, including Science and Nature.
Developed by researchers at the Universidad Autonoma de Madrid, The Face Game builds on multimodal AI methods, including discrimination-aware machine learning for analysing human behavior and the generation of realistic synthetic face images.
Publication: Jean-François Bonnefon, et al., The social dilemma of autonomous vehicles, Science (2016). DOI: 10.1126/science.aaf2654.
Original Story Source: University of Exeter