Erin Drake Kajioka, Head of Applied Game Design, Google Research
GAMES / INSTRUCTIONAL DESIGN
The multi-dimensional mathematical structures that AI creates to represent concepts in language, images, video, and music -- or any kind of information -- are commonly regarded as "black boxes" that humans cannot directly inspect. As a result, so-called AI representational alignment -- the degree to which an AI model's internal representations match those that humans would produce -- is a difficult problem, made significantly harder by the size of modern foundation models. In this session, we'll describe our experiments in leveraging the metaphorical representation affordances of video game interfaces to "open" these opaque boxes. We'll walk through our process of ingesting high-complexity machine learning embedding spaces (the representational structures that AI models use to produce answers) and translating them into 3D projections using the Unity game engine. We'll also discuss using conversation games to expand and evaluate these models, exploring other avenues of human control and influence over complex machine learning artifacts.
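To make the idea of "translating an embedding space into a 3D projection" concrete, here is a minimal sketch using a simple PCA projection via NumPy's SVD. This is an illustrative assumption, not the session's actual pipeline (which is not specified in the abstract); the function name `project_to_3d` and the synthetic data are hypothetical.

```python
import numpy as np

def project_to_3d(embeddings: np.ndarray) -> np.ndarray:
    """Project high-dimensional embedding vectors onto their top 3
    principal components, yielding (x, y, z) coordinates that a game
    engine such as Unity could place in a 3D scene.

    Illustrative only: real pipelines may use other reductions
    (e.g. UMAP or t-SNE) chosen for the visualization's goals.
    """
    # Center the embeddings so the projection is about the mean point.
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Keep the first 3 components as 3D coordinates.
    return centered @ vt[:3].T

# Hypothetical example: 100 synthetic 256-dimensional "embeddings".
rng = np.random.default_rng(0)
points = project_to_3d(rng.normal(size=(100, 256)))
print(points.shape)  # (100, 3)
```

Each row of `points` could then be exported (e.g. as JSON or a CSV) and read by a Unity scene that instantiates one object per embedding at those coordinates.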
Attendee Benefits
This session posits a new kind of "serious game" -- one directed at controlling and evaluating AI. Attendees should leave with a sense of how video games can be highly relevant to the problem of making AI trustworthy, understandable, and controllable. These are urgent issues for society at large, and we are in the earliest phases of understanding how to address them. This session is primarily intended to spark ideas and start a conversation about how video games can help build bridges between AI and people, continuing video games' long tradition of pioneering new uses for digital technology.