Note: The interactive demo may take a bit to load!
Use W/S to move the arm up/down, and A/D to move the arm left/right. Try moving the marble in the track!
A brief summary for those who don’t know what AOgmaNeo is: it is a biologically-inspired online learning system that runs very fast. It consumes data in a streaming fashion (non-i.i.d.) and learns without replay — each sample is seen once, in arrival order.
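To make the "streaming, no replay" point concrete, here is a minimal sketch in plain Python/NumPy. This is not AOgmaNeo's actual API or algorithm — the `OnlineLearner` class and its delta-rule update are illustrative stand-ins — but it shows the key constraint: every sample updates the model immediately and is then discarded, with no replay buffer.

```python
import numpy as np

class OnlineLearner:
    """Toy linear predictor updated one sample at a time (no replay buffer)."""
    def __init__(self, n_in, n_out, lr=0.05):
        self.w = np.zeros((n_out, n_in))
        self.lr = lr

    def step(self, x, y):
        # Predict, then immediately update on this single sample.
        pred = self.w @ x
        err = y - pred
        self.w += self.lr * np.outer(err, x)  # per-sample delta rule
        return pred

# Samples arrive one at a time and are never revisited.
rng = np.random.default_rng(0)
true_w = rng.normal(size=(2, 4))
model = OnlineLearner(4, 2)
for t in range(2000):
    x = rng.normal(size=4)
    model.step(x, true_w @ x)

print(np.allclose(model.w, true_w, atol=1e-2))
```

The point of the sketch is what is absent: there is no stored dataset and no second pass over the data, which is the regime AOgmaNeo is built for.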
AOgmaNeo was originally just for Arduino devices, but has since become the go-to version of OgmaNeo for desktop as well (replacing OgmaNeo2). While we have some videos of AOgmaNeo in action on our YouTube channel, we thought it would be interesting to have a more interactive demo. So, we came up with “Real2Sim”, an experiment in world-model learning.
Real2Sim goes in the opposite direction of many robotics projects. Instead of training a simulated robot and then attempting to transfer it to the real world, we take a real robot and try to learn a simulator for it. In our case, AOgmaNeo observes a video feed of a simple robotic environment along with the control signals. To avoid illegal actions, we simply controlled the robot ourselves for ~5 minutes and let AOgmaNeo associate our movement commands with the visual feed. We then took the world model it learned and embedded it in a web interface (compiled to WebAssembly), where the user can issue the robot’s control commands themselves and play a simulated version of the original real-world environment.
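The pipeline above can be sketched in miniature. This is a hedged illustration in Python, not AOgmaNeo's actual mechanism: the 1-D environment, the count-based transition model, and the fallback for unseen inputs are all hypothetical stand-ins. Phase 1 mimics the ~5 minutes of teleoperation (observations and commands are associated online); phase 2 rolls the learned model forward with no real robot in the loop.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy stand-in for the Real2Sim setup: a 1-D "arm" position is the
# observation, and actions nudge it left/right.
def env_step(pos, action):
    # The real environment, only available during teleoperation.
    return max(0, min(9, pos + action))

# Phase 1: teleoperate, recording (obs, action) -> next_obs transitions.
model = defaultdict(lambda: defaultdict(int))  # transition counts
pos = 5
for _ in range(5000):
    action = random.choice([-1, 0, 1])          # our "gamepad" input
    nxt = env_step(pos, action)
    model[(pos, action)][nxt] += 1              # online association
    pos = nxt

# Phase 2: roll out the learned model as a simulator.
def sim_step(pos, action):
    counts = model.get((pos, action))
    if counts is None:                          # never-seen input:
        return pos                              # fall back to staying put
    return max(counts, key=counts.get)          # most likely next observation

pos = 5
for action in [1, 1, 1, -1]:
    pos = sim_step(pos, action)
print(pos)
```

The real system predicts video frames rather than a single coordinate, but the loop structure is the same: learn transitions from a human-driven stream, then replace the environment with the model's own predictions.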
It’s far from perfect (e.g. command latency is a big issue), but all legal actions are available to the user — if an illegal action is attempted, AOgmaNeo generally just snaps to the closest legal action instead — and you can try it here yourself! The current control command is visualized in the top left, mimicking a joystick (the model was originally trained with a gamepad).
Also, here is an image of the setup! The arm was 3D printed and uses two micro-sized servos.