complex ai in large worlds
every world, more alive
AI & Asset Design: (1.) AI/ML for generalist agents (LLM to speech/action), (2.) AI/ML for terraforming swarms (procgen/DRL + agents), (3.) megastructure environment design, (4.) cyborg character/creature design and animation
Our foundry focuses on AI design on the one hand and asset design on the other. We posit that the newest and most advanced science fiction worlds also require the newest and most advanced AI implementations. The AI we design is for (1.) generalist agents that can extemporize in dialogue and action and (2.) swarms of agents that can transform worlds in real time (constructing buildings, modifying terrain, etc.). The assets we design are (3.) science fiction megastructures and environments and (4.) cyborg humanoid and non-humanoid characters (and animations). We are often brought in on game studio and game tech projects to contribute design pieces or to expand and implement new AI systems. We are known for our meticulous attention to detail, our unique style, and our embrace of experimental techniques. Every world we touch feels more alive than it was before.
work history
Current & Previous Clients
Agile Lens, Australian National University, Goethe Institute, New Art City, Anti-Eviction Mapping Project, Wild, Gray Area, Buzzfeed, Mary Mattingly, Richard Rendering
select CASE STUDIES
[ML, Dialogue] LLM Dialogue via Inworld for Richard Rendering, 2024
In this demo world, you can see a short example of talking to one of the characters, Akira. We did not design the aesthetics or the characters; we implemented the Inworld dialogue system. We also gave the character waypoint navigation, which worked smoothly alongside the dialogue system, and we integrated Inworld with GPT-4o so that the character can look through her camera and describe what she sees (in this instance, the painting).
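A rough sketch of how such a camera-to-description bridge can be wired: the in-game camera frame is encoded and packaged into a vision request for GPT-4o, and the reply is routed back into the dialogue session. The helper name and routing below are illustrative, not the production integration.

```python
import base64

def build_vision_prompt(image_bytes: bytes, question: str) -> list:
    """Build a chat-completion message list that pairs a text question
    with an in-game camera frame (hypothetical helper)."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64}"},
                },
            ],
        }
    ]

# The resulting message list would be sent to the OpenAI chat API,
# and the model's description fed back into the character's dialogue.
messages = build_vision_prompt(b"\x89PNG...", "What do you see in this painting?")
```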
[AI, Behavior, Robotics] Robotic Simulator for Grizzly, 2024
We created a physically detailed simulation of an excavator. The excavator uses ISO controls and can simulate digging into the ground (via the Voxel Plugin). It also logs both articulation and camera data, which we used to train an imitation learning model to dig a trench autonomously.
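A minimal sketch of what per-timestep demonstration logging for imitation learning can look like; the field names and schema here are illustrative assumptions, not the simulator's actual data format.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ExcavatorSample:
    """One timestep of demonstration data (hypothetical schema)."""
    t: float
    joint_angles: list   # e.g. boom, arm, bucket, swing
    camera_frame: str    # path to the saved frame, not raw pixels

@dataclass
class DemoLog:
    samples: list = field(default_factory=list)

    def record(self, joint_angles, frame_path):
        # Append one synchronized articulation + camera sample.
        self.samples.append(
            ExcavatorSample(time.monotonic(), list(joint_angles), frame_path))

    def save(self, path):
        # Serialize the whole trajectory for offline training.
        with open(path, "w") as f:
            json.dump([asdict(s) for s in self.samples], f)

log = DemoLog()
log.record([0.1, -0.4, 0.8, 0.0], "frames/000001.png")
```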
[Music Video, Level Design, Reinforcement Learning, AI Texturing] Music Video for Artist RTyler, 2023
We created a music video for RTyler by combining reinforcement learning agents with footage reskinned using Stable Diffusion. The video depicts an advanced civilization that has radically fused biology and computation and can generate all manner of creatures and agents.
[VR, Data Visualization] Climate Data Visualization VR for Australian National University, 2024
We created a VR data visualization for the Australian National University. The program visualizes Gaussian splats and allows the user-scientist to scale, rotate, translate, and explore each scan. These scans are being used to train a machine learning model to predict how much carbon different terrain types sequester.
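At its core, manipulating a splat scan means applying the user's gesture as a rigid transform plus a scale to the splat centers. A simplified NumPy sketch (yaw rotation only; a full implementation would also update each splat's covariance/orientation, and this is not the project's actual code):

```python
import numpy as np

def transform_splat_centers(centers, scale=1.0, yaw_deg=0.0, offset=(0, 0, 0)):
    """Apply a scale, a yaw rotation about the z axis, and a translation
    to an Nx3 array of Gaussian splat centers."""
    theta = np.deg2rad(yaw_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])  # rotation about z
    return centers @ rot.T * scale + np.asarray(offset)

pts = np.array([[1.0, 0.0, 0.0]])
out = transform_splat_centers(pts, scale=2.0, yaw_deg=90.0)  # -> [[0., 2., 0.]]
```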
[VR, Level Design] Christmas Carol VR for Agile Lens, Level Studies, 2023
We created a stage set for Agile Lens's VR production of A Christmas Carol by Charles Dickens. We built a Victorian London "level" and a variety of "sets" (like the Scrooge bedroom above), and experimented with complex volumetric fog transitions.
[Level Design] Megastructure Studies for New Art City, 2020
We created a series of "megastructure studies" for New Art City, helping them determine which scales of architectural space work best on their platform. The studies also served as a survey of importing Grasshopper-generated forms into the platform.