This video is a demo of the current project I am working on for Olympus AI. The number in the upper left shows what stage of the exercise the user is in.
How does this work?
It uses a pose estimation model from Google's MediaPipe package, then runs the coordinates of the user's body parts through a neural network. This network was trained on more than 6,000 exercise photos that I generated. If you have Google Chrome and a webcam, you can try this technology out yourself: the demo classifier is trained to recognise the "ok" hand gesture. It is not perfect, but it is still in development.
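The landmarks-to-classifier pipeline can be sketched roughly as below. This is a minimal illustration, not the actual model: the weights are random placeholders, the hidden-layer size and the number of exercise stages (`NUM_STAGES`) are hypothetical, and the dummy landmark array stands in for real MediaPipe Pose output (which provides 33 body landmarks per frame).

```python
import numpy as np

NUM_LANDMARKS = 33   # MediaPipe Pose returns 33 body landmarks
NUM_STAGES = 4       # hypothetical number of exercise stages

def landmarks_to_features(landmarks):
    """Flatten (x, y) landmark coordinates, centred on the hip midpoint
    and scaled, so the features are translation- and scale-invariant."""
    pts = np.asarray(landmarks, dtype=np.float64)      # shape (33, 2)
    centre = (pts[23] + pts[24]) / 2.0                 # left/right hip midpoint
    pts = pts - centre
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return (pts / scale).ravel()                       # shape (66,)

def classify(features, w1, b1, w2, b2):
    """One hidden layer + softmax: returns a probability per exercise stage."""
    h = np.maximum(features @ w1 + b1, 0.0)            # ReLU hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())                  # stable softmax
    return e / e.sum()

# Random placeholder weights; a real model would load trained parameters.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(2 * NUM_LANDMARKS, 16)); b1 = np.zeros(16)
w2 = rng.normal(size=(16, NUM_STAGES)); b2 = np.zeros(NUM_STAGES)

dummy = rng.random((NUM_LANDMARKS, 2))  # stand-in for real pose landmarks
probs = classify(landmarks_to_features(dummy), w1, b1, w2, b2)
```

In the browser demo the same idea applies frame by frame: each webcam frame yields a fresh set of landmarks, and the stage shown in the upper left is the class with the highest probability.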