AIIDE 2020


Invited Speakers

Stephanie M. Lukin (Army Research Lab)

"How Many Different Ways Could You Tell a Story About a Sequence of Images? Procedural Expression for Narrative and Dialogue Generation and Visual Storytelling"


How many different ways could you tell a story about a sequence of images? What is important? How do you talk about it? How can information about the story world be computationally modeled to generate expressive and diverse story tellings? This talk examines how the combination of a computational fabula (story world) rich in semantics and a representation that preserves syntactics can generate diverse sujet (story tellings) to convey dynamic character voices, adaptive narrative styles, and multi-perspective plots. This talk further explores the new challenges that arise when the story is derived from a sequence of images, including how to recognize, interpret, and extrapolate visual events for telling an engaging story.

Stephanie Lukin is a Computer Scientist at the Army Research Laboratory West in Los Angeles. She received her Ph.D. in 2017 from UC Santa Cruz, where she worked on parameterizable natural language generation for domain-independent storytelling. She is currently exploring the emerging field of visual storytelling, in particular, how an agent decides to talk about the most important elements in a scene. Dr. Lukin is active in the areas of natural language processing, generation, storytelling, and human-robot interaction. She has organized the StoryNLP (2019) and GAMNLP (2019-2020) workshops and the Special Session on Situated Dialogue at SIGDIAL (2018, 2020), and served as publication chair at NAACL (2018-2019).

David Silver (DeepMind)

"Deep Reinforcement Learning from AlphaGo to AlphaStar"


Could the objective of maximising reward be enough to drive intelligent behaviour? Reward-maximising systems have achieved remarkable success in several challenging problems for artificial intelligence, by combining reinforcement learning with deep neural networks. In this talk I describe the ideas and algorithms that led to AlphaGo: the first program to defeat a human champion in the game of Go; AlphaZero: which learned, from scratch, to also defeat the world computer champions in chess and shogi; and AlphaStar: the first program to defeat a human champion in the real-time strategy game of StarCraft.

David Silver is a distinguished research scientist at DeepMind and a professor at University College London. David’s work focuses on artificially intelligent agents based on reinforcement learning. David co-led the project that combined deep learning and reinforcement learning to play Atari games directly from pixels (Nature 2015). He also led the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go (Nature 2016), and the AlphaZero project, which learned by itself to defeat the world’s strongest chess, shogi and Go programs (Nature 2017, Science 2018). More recently he co-led the AlphaStar project, which led to the world’s first grandmaster-level StarCraft player (Nature 2019). His work has been recognised by the ACM Prize in Computing, the Marvin Minsky Award, the Mensa Foundation Prize, and the Royal Academy of Engineering Silver Medal.

Accepted Papers

Oral Presentations (In alphabetical order by title)

Paper Title | Author(s)
"It’s Unwieldy and It Takes a Lot of Time." Challenges and Opportunities for Creating Agents in Commercial Games | Mikhail Jacob, Sam Devlin, and Katja Hofmann
A Declarative PCG Tool for Casual Users | Ian Horswill
A Formalization of Emotional Planning for Strong-Story Systems | Alireza Shirvani and Stephen Ware
A Good Story is One in a Million: Solution Density in Narrative Generation Problems | Cory Siler and Stephen G. Ware
Are Strong Policies also Good Playout Policies? Playout Policy Optimization for RTS Games | Zuozhi Yang and Santiago Ontañón
Behavioral Evaluation of Hanabi Rainbow DQN Agents and Rule-Based Agents | Rodrigo de Moura Canaan, Xianbo Gao, Youjin Chung, Julian Togelius, Andy Nealen, and Stefan Menzel
Bringing Stories Alive: Generating Interactive Fiction Worlds | Prithviraj Ammanabrolu, Wesley Cheung, Dan Tu, William Broniec, and Mark Riedl
Computer-Generated Music for Tabletop Role-Playing Games | Lucas N. Ferreira and Levi Lelis
Game Level Clustering and Generation using Gaussian Mixture VAEs | Zhihan Yang, Anurag Sarkar, and Seth Cooper
Generating Explorable Narrative Spaces with Answer Set Programming | Chinmaya Dabral and Chris Martens
Generating Game Levels for Multiple Distinct Games with a Common Latent Space | Vikram Kumaran, Bradford Mott, and James Lester
Germinate: A Mixed-Initiative Casual Creator for Rhetorical Games | Max Kreminski, Melanie Dickinson, Joseph C. Osborn, Adam Summerville, Michael Mateas, and Noah Wardrip-Fruin
Narrative Planning for Belief and Intention Recognition | Rachelyn Farrell and Stephen G. Ware
PCGRL: Procedural Content Generation via Reinforcement Learning | Ahmed Khalifa, Philip Bontrager, Sam Earle, and Julian Togelius
The Unexpected Consequence of Incremental Design Changes | Nathan Sturtevant, Nicolas Decroocq, Aaron Tripodi, and Matthew Guzdial
TOAD-GAN: Coherent Style Level Generation from a Single Example | Maren Awiszus, Frederik Schubert, and Bodo Rosenhahn
Tree Search vs Optimization Approaches for Map Generation | Debosmita Bhaumik, Ahmed Khalifa, Michael Green, and Julian Togelius
Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games | Carlos García Ling, Konrad Tollmar, and Linus Gisslén
Using Domain Compilation to add Belief to Narrative Planners | Matthew Christensen, Jennifer Nelson, and Rogelio E. Cardona-Rivera
Video Game Level Repair via Mixed Integer Linear Programming | Hejia Zhang, Matthew Fontaine, Amy Hoover, Julian Togelius, Bistra Dilkina, and Stefanos Nikolaidis
Watershed Graphs for Faster Pathfinding in Binary Occupancy Grids | Patrick Hew
Your Buddy, The Grandmaster: Repurposing the Gameplaying AI Surplus for Inclusivity | Batu Aytemiz, Xueer Zhu, Eric Hu, and Adam M. Smith

Poster Presentations (In alphabetical order by title)

Paper Title | Author(s)
Chaos Cards: Creating Novel Digital Card Games through Grammatical Content Generation and Meta-based Card Evaluation | Tiannan Chen and Stephen Guy
Coevolution of Game Levels and Gameplaying Agents | Aaron Dharna, Julian Togelius, and Lisa Soros
Combinatorial Q-Learning for Dou Di Zhu | Yang You, Liangwei Li, Baisong Guo, Weiming Wang, and Cewu Lu
Contrast Motif Discovery in Minecraft | Samaneh Saadat and Gita Sukthankar
Differentia: Visualizing Incremental Game Design Changes | Kenneth Chang and Adam Smith
Dynamic Guard Patrol in Games | Wael Al Enezi and Clark Verbrugge
Evaluating and Comparing Skill Chains and Rating Systems for Dynamic Difficulty Adjustment | Anurag Sarkar and Seth Cooper
Exploring Level Blending across Platformers via Paths and Affordances | Anurag Sarkar, Adam Summerville, Sam Snodgrass, Gerard Bentley, and Joseph Osborn
Exploring Livestream Chat Through Word Vectors | Charlie Ringer, Mihalis A. Nicolaou, and James Alfred Walker
Foundations of Computational Game Design via Artificial Intelligence: Abstractions and Tradeoffs | Rogelio E. Cardona-Rivera
Image-to-Level: Generation and Repair | Eugene Chen, Christoph Sydora, Brad Burega, Anmol Mahajan, Abdullah, Matthew Gallivan, and Matthew Guzdial
Learning to Reason in Round-based Games: Multi-task Sequence Generation for Purchasing Decision Making in First-person Shooters | Yilei Zeng, Deren Lei, Beichen Li, Gangrong Jiang, Emilio Ferrara, and Michael Zyda
Multimodal Player Affect Modeling with Auxiliary Classifier Generative Adversarial Networks | Nathan Henderson, Wookhee Min, Jonathan Rowe, and James Lester
Murder Mysteries: The White Whale of Narrative Generation? | Markus Eger
PAIndemic: A Planning Agent For Pandemic | Pablo Sauma Chacón and Markus Eger
Reinforcement Learning with Quantum Variational Circuits | Owen Lockwood and Mei Si
Say "Sul Sul!" to SimSim, A Sims-Inspired Platform for Sandbox Game AI | Megan Charity, Dipika Rajesh, Rachel Ombok, and Lisa Soros
Studying General Agents in Video Games from the Perspective of Player Experience | Cristina Guerrero-Romero, Shringi Kumari, Diego Perez-Liebana, and Sebastian Deterding
Towards Action Model Learning for Player Modeling | Abhijeet Krishnan, Aaron Williams, and Chris Martens
Tribes: A New Turn-Based Strategy Game for AI | Diego Perez Liebana, Yu-Jhen Hsu, Stavros Emmanouilidis, Bobby Khaleque, and Raluca Gaina
Trouncing in Dota 2: An Investigation of Blowout Matches | Markos Viggiato and Cor-Paul Bezemer
Word Autobots: Using Transformers for Word Association in the Game Codenames | Catalina M. Jaramillo G., Megan Charity, Rodrigo Canaan, and Julian Togelius


Demonstrations

(In alphabetical order by title)

Demo Title | Creators
A Demonstration of Anhinga: A Mixed-Initiative EPCG Tool for Snakebird | Nathan Sturtevant, Nicolas Decroocq, Aaron Tripodi, Carolyn Yang, and Matthew Guzdial
A Demonstration of Mechanic Maker: an AI for Mechanics Co-creation | Vardan Saini and Matthew Guzdial
A Sketch-based Tool for Authoring and Analysing Emergent Narratives | Ben Kybartas, Clark Verbrugge, and Jonathan Lessard
Deep Learning Bot for League of Legends | Aayush Shah, Aishwarya Lohokare, and Michael Zyda
Geometry Of Hiding: Generating Visibility Manifolds | Adrian Koretski and Clark Verbrugge

Playable Experiences

(In alphabetical order by title)

Playable Experience Title | Creator(s)
0DayDreams: What Do Machines Draw When They Daydream? | Sabine Wieluch
Why Are We Like This? | Melanie Dickinson, Max Kreminski, Michael Mateas, and Noah Wardrip-Fruin

Doctoral Consortium

(In alphabetical order by title)

Paper Title | Author
Artificial Intelligence as an Art Director | Adrian Gonzalez
Personalized Procedural Content Generation for Increased Replayability | Kristen Yu
Principles for AI Co-creative Game Design Assistants | Alex Elton-Pym
Towards Designing Out Helplessness: AI Interventions for Videogame Learnability | Batu Aytemiz
Towards Transferrable Affective Models for Play-based Learning | Samuel Spaulding

Conference Schedule

(Updated Live)