Improving Animatronic Interactivity – Weeks 1&2

Class | Independent Study, CMU, Spring 2019
Purpose | Prototype an animated figure that employs AI/machine learning software to improve guest-character interaction, viewed through a themed entertainment lens
Software/Hardware | SolidWorks, Maya, Python, Raspberry Pi

Introduction

Hello! This is a development blog for my independent study during my last semester as a graduate student at Carnegie Mellon! The purpose of my independent study is to explore how to make animated figures in the themed entertainment industry more interactive, improving the connection between the physical character and the guest. The software needed to achieve this has become readily available to novice users, which makes me wonder why it hasn't been implemented more widely in theme parks. The inspiration for my study came from learning about the Vyloo in the Guardians of the Galaxy – Mission: BREAKOUT! attraction at Disney California Adventure. The Vyloo have personalities, can interact with guests through non-verbal gestures, and are powered by an onboard system that renders them autonomous. I would like to see how far this area can be pushed by experimenting with machine learning and AI elements.

My secondary goal for this semester is to become more familiar with the entire pipeline for an animated figure. I aspire to be a mechanical designer for animated figures, but I want to become more well-rounded to further improve my design skills. I will be developing the software, hardware, character design, model, rig, animation, mechanical design, and installation for this figure, which will be a big challenge, but it is also really exciting that I will get to touch each piece of the figure!

Learning Goals

  • Explore control systems for a small-scale animatronic
    • Hardware to explore: the Raspberry Pi, BeagleBoard, and Arduino are all potential microcontrollers/single-board computers
    • Software to explore: facial tracking, facial detection, computer vision, voice detection, machine learning, personal agent software (e.g., Alexa, Google Home); a minimal face-detection sketch follows this list
    • How can this piece be integrated with the current animation pipeline for animatronics?
  • Design a simple physical animated figure
    • Specifically includes mouth and eye functions, as those are the most expressive features for creating a believable character
  • Learn the animation pipeline for the application of an animated figure
    • Learn how to use Maya/ZBrush and the basics of modeling, rigging, and animating, and apply them to this project
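
Since facial detection and computer vision sit at the top of the software list, here is a minimal Python sketch of the kind of face detection I plan to experiment with, using OpenCV's bundled Haar cascade. This is a starting point rather than a final approach: it assumes the opencv-python package is installed and that a USB or Pi camera is available at index 0.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Assumes the opencv-python package is installed and a camera is on index 0.
import cv2

# Pre-trained frontal-face detector that ships with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # open the default camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect faces in the grayscale frame; returns (x, y, w, h) boxes
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)  # requires a display attached to the Pi
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

Haar cascades are relatively lightweight, which is why they are a common starting point for face detection on single-board computers like the Pi.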

Weeks 1&2 Development

So far, I have begun taking Pluralsight tutorials to learn Maya and rigging. I have started to rig a humanoid figure to learn the basics and am hoping to complete my rig by the middle of next week. After that, I would like to move on to Maya modeling tutorials while I start to think about character design. In parallel, I have been researching control systems that would work best for computer vision applications. For the scale of my project, it seems that the Raspberry Pi 3 B+ would be the best fit given its extensive documentation, which will help a non-programmer like me! It has already been used for several computer vision applications, which suggests it has the performance to support the needs of this project, and it can also drive multiple servos with the help of a compatible servo driver board.
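
As a rough sketch of how the Pi could drive those servos, the snippet below assumes an Adafruit 16-channel PCA9685 servo driver board on the Pi's I2C pins and the adafruit-circuitpython-servokit library; the channel numbers and the eye-pan/mouth roles are placeholders for whatever the final mechanical design ends up needing.

```python
# Rough servo sketch, assuming an Adafruit 16-channel PCA9685 servo driver
# board on the Pi's I2C pins and the adafruit-circuitpython-servokit library.
# Channel numbers and the "eye pan" / "mouth" roles are placeholders.
import time

from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # 16-channel driver board
EYE_PAN = 0                  # hypothetical channel assignments
MOUTH = 1

def look_toward(face_x, frame_width):
    """Map a detected face's horizontal pixel position to an eye-pan angle,
    e.g. look_toward(x + w // 2, frame.shape[1]) from the detection loop above."""
    angle = (face_x / frame_width) * 180.0
    kit.servo[EYE_PAN].angle = max(0.0, min(180.0, angle))

# Quick test sweep: open and close the mouth a few times
for _ in range(3):
    kit.servo[MOUTH].angle = 30  # open
    time.sleep(0.4)
    kit.servo[MOUTH].angle = 0   # closed
    time.sleep(0.4)
```

Mapping a face's screen position to a servo angle like this is one simple way the vision and hardware pieces could eventually connect.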