It’s been a while since we posted here, but research on SPH (Sparse Predictive Hierarchies) has not slowed down! (For those unfamiliar with SPH, check out the AOgmaNeo user guide; AOgmaNeo is an implementation of SPH.)
We recently presented SPH at a local AI conference (FLAIRS-35), where we explained our technology and showed off some of our robots. Among them was our latest, fastest iteration of the Lorcan Mini quadruped. We have recorded a YouTube version of this demo for you to enjoy!
We have previously shown some videos of Lorcan Mini, but none that ran as fast as this one. The AOgmaNeo reinforcement learning (RL) agent on the robot ran on a Raspberry Pi Zero 2 and was trained for roughly 30 minutes in the real world. This took multiple battery charge cycles, as the robot only runs for about 10 minutes per charge. Its motors are also prone to thermal throttling, but we still managed to get it running quite fast. Unlike many other robots that use RL to run, ours initializes to a hand-made base policy and then improves entirely in the real world – no simulators needed!
Aside from continued work on AOgmaNeo, we decided to finally make a GPU version of SPH again. This time we wrote it in Python using PyOpenCL. The resulting library is called CLOgmaNeo, and it can run some very large SPHs. So far we have tested models that run in real time with around a billion parameters (with learning enabled). As usual, these learn fully online without any replay buffer or other i.i.d. mechanisms. We also ran a test on the Moving MNIST digits dataset, where the larger-scale hierarchies seem to have improved the results (we ran through the dataset just once).

For those unfamiliar with the dataset, the goal is to predict the second 10 frames of each sequence from the first 10 (the predictions are the blurry part). Note that, unlike how AOgmaNeo replaced OgmaNeo2, CLOgmaNeo will likely not replace AOgmaNeo: AOgmaNeo is still better for CPU-based experiments. However, it is nice to have the option to run larger-scale experiments on a GPU again! We will release the source code soon; it just needs some additional work before then. This post will be updated with a link once it’s ready. (EDIT: Here it is!)
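To make the setup concrete, here is a minimal sketch of the Moving MNIST prediction task described above: condition on the first 10 frames, then predict the last 10, with learning happening fully online (one pass, no replay). The predictor below is a deliberately trivial stand-in, not the CLOgmaNeo API, and we substitute random arrays in the commonly distributed layout of `mnist_test_seq.npy` so the example is self-contained.

```python
import numpy as np

# The common Moving MNIST file stores sequences as
# (num_frames, num_sequences, height, width); random data of the
# same shape stands in here so the sketch runs on its own.
num_frames, num_seqs, height, width = 20, 4, 64, 64
rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(num_frames, num_seqs, height, width),
                    dtype=np.uint8)

# Task: condition on the first 10 frames, predict the last 10.
context = data[:10]  # frames fed to the model one at a time
target = data[10:]   # frames the model must predict

# An online predictor (like an SPH) steps through the context,
# updating from each frame as it arrives, then rolls out predictions
# by feeding its own outputs back in. This baseline just repeats the
# last observed frame -- purely illustrative.
class LastFramePredictor:
    def __init__(self):
        self.last = None

    def step(self, frame):
        self.last = frame  # online update: observe, adapt, move on

    def predict(self):
        return self.last

model = LastFramePredictor()
for frame in context:
    model.step(frame)

# Naive 10-step rollout (a real model would re-feed its predictions).
predictions = [model.predict() for _ in range(10)]
print(len(predictions), predictions[0].shape)  # 10 (4, 64, 64)
```

A real hierarchy would of course produce moving, gradually blurrier digits rather than a frozen frame; the point is only the data layout and the single-pass, no-replay training loop.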
Until next time!