In the World Engines Lab, we are currently working on Module I, which tests the Radical Motion live monocular motion-capture system. If this system works for us (it covers face, body, and hand mocap; hand support will be implemented soon), we may use its data to drive our interactions with agents. We will also see if we can build "a plugin around a plugin" to define some common interactivity features we'd like in our worlds.
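To make the "plugin around a plugin" idea concrete, here is a minimal sketch of what such a wrapper layer might look like: it sits on top of a mocap source and translates raw pose frames into named interaction events that agents can subscribe to. Everything here is hypothetical (the `PoseFrame` shape, joint names, and the single example rule are assumptions, not Radical's actual API), intended only to illustrate the wrapping pattern.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class PoseFrame:
    # Hypothetical minimal pose data: joint name -> (x, y, z) position
    joints: Dict[str, Tuple[float, float, float]]

class InteractionLayer:
    """A sketch of 'a plugin around a plugin': wraps a mocap feed and
    maps raw pose frames to named interaction events."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[PoseFrame], None]]] = {}

    def on(self, event: str, handler: Callable[[PoseFrame], None]) -> None:
        # Register a handler (e.g. an agent behavior) for a named event.
        self._handlers.setdefault(event, []).append(handler)

    def feed(self, frame: PoseFrame) -> List[str]:
        # Example rule (assumed): right hand above the head fires "hand_raised".
        fired: List[str] = []
        hand = frame.joints.get("right_hand")
        head = frame.joints.get("head")
        if hand and head and hand[1] > head[1]:
            fired.append("hand_raised")
        for event in fired:
            for handler in self._handlers.get(event, []):
                handler(frame)
        return fired

# Usage: an agent greets the user when they raise a hand.
layer = InteractionLayer()
greetings: List[str] = []
layer.on("hand_raised", lambda f: greetings.append("Hello!"))
frame = PoseFrame(joints={"right_hand": (0.0, 2.0, 0.0), "head": (0.0, 1.7, 0.0)})
layer.feed(frame)  # fires "hand_raised", so the agent greets
```

The point of the wrapper is that interactivity rules live in one shared layer rather than being re-implemented per world, whatever mocap plugin ultimately sits underneath.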