The following is a rundown of a new demo added to the OgmaNeoDemos repository, Ball Physics Prediction. We plan to release new demos regularly!
Ball Physics Prediction/Simulation
An important aspect of world modeling in humans is the ability to simulate physical interactions ahead of time. Let’s try to make a simple demo of this using OgmaNeo!
In this demo, we will try to emulate 2D physics by showing an OgmaNeo hierarchy example videos of a ball bouncing around in a box. Then, once it has seen enough, we will run the hierarchy off of its own predictions, leading to something like a neural physics engine. We will do this directly from pixels!
In this demo, a ball starts near the center of the box and is given a random initial velocity. The ball then bounces around the box for a bit, after which it is reset to the center of the box with a new random velocity. This process repeats many times, until the network can approximately model the video it is seeing. We render the video frames in greyscale, and each “episode” of ball bouncing lasts around 60 frames.
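The episode generator can be sketched in a few lines. This is a minimal illustration of the setup described above, not the demo's actual code; the frame size, ball radius, and velocity range are assumptions chosen for the example.

```python
import random

FRAME_SIZE = 96      # assumed to match the 96x96 input layer described below
BALL_RADIUS = 4      # hypothetical value for illustration
EPISODE_FRAMES = 60  # each episode lasts around 60 frames

def render_frame(x, y):
    """Rasterize the ball as a greyscale frame (values in 0.0-1.0)."""
    frame = [[0.0] * FRAME_SIZE for _ in range(FRAME_SIZE)]
    for py in range(FRAME_SIZE):
        for px in range(FRAME_SIZE):
            if (px - x) ** 2 + (py - y) ** 2 <= BALL_RADIUS ** 2:
                frame[py][px] = 1.0
    return frame

def run_episode():
    """One episode: the ball starts near the center with a random
    velocity, bounces off the walls, and resets after ~60 frames."""
    x = y = FRAME_SIZE / 2.0
    vx = random.uniform(-2.0, 2.0)
    vy = random.uniform(-2.0, 2.0)
    frames = []
    for _ in range(EPISODE_FRAMES):
        x += vx
        y += vy
        # Reflect the velocity when the ball hits a wall
        if x < BALL_RADIUS or x > FRAME_SIZE - BALL_RADIUS:
            vx = -vx
            x = min(max(x, BALL_RADIUS), FRAME_SIZE - BALL_RADIUS)
        if y < BALL_RADIUS or y > FRAME_SIZE - BALL_RADIUS:
            vy = -vy
            y = min(max(y, BALL_RADIUS), FRAME_SIZE - BALL_RADIUS)
        frames.append(render_frame(x, y))
    return frames
```

During training, episodes like this are fed to the hierarchy frame by frame, so it only ever sees raw pixels.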
When in simulation mode, the hierarchy receives its own predictions as input, resulting in a sort of simulation loop. The hierarchy receives the first 5 frames of the simulation from the video render (as a starting point to derive the appropriate heading of the ball), after which the hierarchy cycles off of its own predictions to produce the rest of the physical interactions.
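The simulation loop amounts to priming the hierarchy with a few real frames, then feeding each prediction back in as the next input. Here is a sketch of that loop; `hierarchy_step` is a hypothetical stand-in for the hierarchy's step/predict call, not the actual OgmaNeo API.

```python
PRIME_FRAMES = 5  # real frames shown before the loop closes

def simulate(hierarchy_step, real_frames, total_frames):
    """Run the hierarchy closed-loop.

    hierarchy_step(frame) -> predicted next frame.
    Returns the frames generated purely from the hierarchy's own
    predictions (everything after the priming phase).
    """
    # Seed with real frames so the hierarchy can infer the ball's heading
    prediction = None
    for frame in real_frames[:PRIME_FRAMES]:
        prediction = hierarchy_step(frame)

    # Closed loop: each prediction becomes the next input
    generated = []
    for _ in range(total_frames - PRIME_FRAMES):
        generated.append(prediction)
        prediction = hierarchy_step(prediction)
    return generated
```

Because the loop runs entirely on predictions after the first 5 frames, any drift compounds over time, which is why the priming frames matter: they pin down the ball's position and heading before the hierarchy takes over.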
The network used in this demo consists of 6 layers of 96×96 units. Each layer is a Chunk encoder with a chunk size of 6×6, giving 16×16 chunks per layer.
Each layer is staggered temporally to run at half the rate of the previous layer, allowing us to capture longer term dependencies more easily.
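The staggering works out to layer k updating every 2^k ticks, so each layer runs at half the rate of the one below it. A toy scheduler makes this concrete; this is purely illustrative, since the demo's actual scheduling is handled inside OgmaNeo.

```python
NUM_LAYERS = 6  # matches the 6-layer hierarchy described above

def layers_updating_at(tick):
    """Return the indices of the layers that update on a given tick.

    Layer 0 updates every tick, layer 1 every 2 ticks, layer 2 every
    4 ticks, and so on: layer k fires when tick % 2**k == 0.
    """
    return [k for k in range(NUM_LAYERS) if tick % (2 ** k) == 0]
```

For example, on tick 4 layers 0, 1, and 2 all update, while layers 3 through 5 hold their state. The higher layers therefore see a coarser time scale, which is what lets them capture the longer-term dependencies.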
Below is a video showing the results! In the video, we switch between training mode (to show the underlying sequence) and simulation mode (to show the network simulating the physics).
Until next time!