Hello,
In our previous post, we showed that we can now play Atari games from pixels on low-power hardware such as the Raspberry Pi, in an online, continually-learning fashion.
However, the version of OgmaNeo2 used in that post still relied on backpropagation for one part of the algorithm: reinforcement learning. It used a “routing” method to perform backpropagation despite the heavy sparsity in order to approximate a value function. This works reasonably well, but has some drawbacks:
- Sacrifices biological plausibility
- Can have exploding/vanishing gradients
- Runs slower (backwards pass is slow)
- Limits the hierarchy to reinforcement learning only (inelegant integration with time series prediction/world model building)
We have now completely removed backpropagation from our algorithm, and the resulting algorithm performs better than before (and runs faster)!
The new algorithm relies entirely on the bidirectional temporal nature of the hierarchy to perform credit assignment. The reinforcement learning occurs only at the “bottom” (input/output) layer of the hierarchy. All layers above learn to predict the representation of the layer directly below, one timestep ahead. The reinforcement learning layer simply selects actions based on the state of the first layer and the feedback from the layers above. For more information on our technology, see our whitepaper (DRAFT).
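To make the division of labour concrete, here is a minimal sketch in plain NumPy. It is not the OgmaNeo2 API; the class names, sizes, and learning rules are illustrative assumptions. Upper layers learn with a purely local rule to predict the layer below one timestep ahead, and only the bottom layer performs reinforcement learning, acting on its input plus the feedback prediction arriving from above.

```python
import numpy as np

rng = np.random.default_rng(0)

class PredictorLayer:
    """Hypothetical upper layer: predicts the lower layer's state at t+1."""
    def __init__(self, lower_size, feedback_size, lr=0.05):
        self.w = rng.normal(0.0, 0.01, (lower_size, lower_size + feedback_size))
        self.lr = lr
        self.prev_x = None
        self.prev_pred = None

    def step(self, lower_state, feedback):
        x = np.concatenate([lower_state, feedback])
        if self.prev_x is not None:
            # Purely local learning: nudge last step's prediction toward what the
            # lower layer actually produced now. No gradients cross layers.
            self.w += self.lr * np.outer(lower_state - self.prev_pred, self.prev_x)
        self.prev_pred = self.w @ x
        self.prev_x = x
        return self.prev_pred  # sent back down as feedback for the layer below

class ActionLayer:
    """Hypothetical bottom layer: epsilon-greedy linear Q-learning on
    (input state, feedback from above)."""
    def __init__(self, state_size, feedback_size, num_actions,
                 lr=0.05, eps=0.1, gamma=0.99):
        self.q = rng.normal(0.0, 0.01, (num_actions, state_size + feedback_size))
        self.lr, self.eps, self.gamma = lr, eps, gamma
        self.prev = None  # (features, action) from the previous timestep

    def act(self, state, feedback):
        s = np.concatenate([state, feedback])
        if rng.random() < self.eps:
            a = int(rng.integers(len(self.q)))
        else:
            a = int(np.argmax(self.q @ s))
        self.prev = (s, a)
        return a

    def learn(self, reward, state, feedback):
        # One-step Q-learning update toward the reward plus the discounted best
        # value of the new (state, feedback) features.
        if self.prev is None:
            return
        s, a = self.prev
        ns = np.concatenate([state, feedback])
        target = reward + self.gamma * float(np.max(self.q @ ns))
        self.q[a] += self.lr * (target - float(self.q[a] @ s)) * s
```

In OgmaNeo2 the representations are sparse CSDRs and the learning rules differ, but the split is the same: prediction above, action selection below.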
Here we have a video of the agent playing Atari Pong on a Raspberry Pi 4. It found an exploitable position, although it sometimes misses at random and has to play “normally” as well. Training is actually ongoing in this video, since training and inference run at about the same speed in OgmaNeo2. It is not shown in this video, but the agent has managed to get a perfect game several times.
Our agent is composed of only 2 layers in our “exponential memory” structure, plus an additional third layer for the image encoder. Our CSDRs are all of size 4x4x32 (width x height x column size), including the image encoder. The rough architecture of the Pong agent is shown below.

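For reference, the CSDR shape above works out as follows. This is just illustrative shape bookkeeping with our own variable names (not an OgmaNeo2 API), assuming one active cell per column as in Ogma's CSDRs:

```python
# One image-encoder layer and two "exponential memory" layers, each a 4x4x32 CSDR.
width, height, column_size = 4, 4, 32

columns = width * height                 # 16 columns
cells_per_layer = columns * column_size  # 512 cells total
active_cells = columns                   # one active cell per column -> 16 active

layer_shapes = {
    "image_encoder": (width, height, column_size),
    "memory_1": (width, height, column_size),
    "memory_2": (width, height, column_size),
}

print(cells_per_layer, active_cells)  # 512 16
```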
We have gone ahead and released the version of OgmaNeo2 used in the video (master branch). As mentioned previously, a handy feature of this newest, backprop-free version is that one can perform both time series prediction and reinforcement learning with the same hierarchy.
Finally, here is a peek at what will hopefully become our next demo.

Until next time!
Glad to see you guys still at it! You seem to be making some real progress. Have you guys looked into augmenting your networks with external memory? You could probably just add it as some neurons on your IO layer which take feedback from the higher levels, use that as the content-based address, and respond with the nearest neighbour in the next forward pass.
Not yet, but it’s definitely a direction we are interested in!
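Roughly, the external memory described in the comment above could look like this. It is purely an illustrative sketch (nothing like this exists in OgmaNeo2, and the names are hypothetical): feedback acts as a content-based address, and a read returns the value stored under the nearest stored key.

```python
import numpy as np

class ContentAddressableMemory:
    def __init__(self):
        self.keys = []    # content-based addresses (vectors)
        self.values = []  # stored patterns

    def write(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(np.asarray(value, dtype=float))

    def read(self, query):
        # Nearest-neighbour lookup: return the value whose key is closest to the query.
        if not self.keys:
            return None
        dists = [np.linalg.norm(k - np.asarray(query, dtype=float)) for k in self.keys]
        return self.values[int(np.argmin(dists))]

# Usage: feedback from higher layers addresses the memory, and the recalled value
# is fed in alongside the regular inputs on the next forward pass.
mem = ContentAddressableMemory()
mem.write(key=[0.1, 0.9], value=[1.0, 0.0, 0.0])
recalled = mem.read([0.2, 0.8])  # -> [1.0, 0.0, 0.0]
```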
Eric – Where can I find your company’s contact info?
Thank you,
Joe Yacura
Here: https://ogma.ai/contact-ogma-ai/