This project detects three signs, 'hello', 'thanks', and 'I love you', using MediaPipe for keypoint and landmark detection. An LSTM model is trained on the extracted keypoint sequences, and logic is provided to detect the user's signs in real time.
You can run this project cell by cell and train the model on your own videos depicting the above actions. A rough sketch of the keypoint extraction and model definition is shown below.
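As an illustration of the pipeline, the sketch below flattens MediaPipe Holistic landmarks for one frame into a feature vector and defines a small Keras LSTM classifier over fixed-length frame sequences. The sequence length (30 frames), layer sizes, and label names are assumptions for illustration only; the notebook's actual hyperparameters may differ.

```python
# Minimal sketch, not the notebook's exact code: sequence length, layer sizes
# and action labels below are illustrative assumptions.
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten MediaPipe Holistic landmarks into one vector (zeros when a part is missing)."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z] for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame

actions = ['hello', 'thanks', 'iloveyou']

# LSTM classifier over sequences of 30 frames, each frame a 1662-value keypoint vector.
model = Sequential([
    LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),
])
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
```

At inference time, the same `extract_keypoints` function would be applied to each webcam frame, the last 30 frames stacked into a `(1, 30, 1662)` array, and `model.predict` used to pick the most probable action.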