✅ Day 5 – From Paper to Practice: My First Experiment

After reviewing the HDVR (Hierarchical Dance Video Recognition) paper in depth,
I wanted to start building a simplified version of the pipeline using real pose data and my own code.

The original paper relies on 3D lifting and motion segmentation, but I decided to begin with the 2D keypoint side: extracting features like joint distances, angles, and velocities from pose sequences and visualizing motion.
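
To make that concrete, here is a minimal sketch of the kind of feature extraction I mean, assuming the keypoints arrive as a NumPy array of shape (frames, joints, 2) with COCO-style joint ordering – the specific joint indices below are just placeholders, not the notebook's actual configuration:

```python
import numpy as np

def joint_distance(kps, a, b):
    """Per-frame Euclidean distance between joints a and b."""
    return np.linalg.norm(kps[:, a] - kps[:, b], axis=-1)

def joint_angle(kps, a, b, c):
    """Per-frame angle (radians) at joint b, formed by the segments b->a and b->c."""
    v1 = kps[:, a] - kps[:, b]
    v2 = kps[:, c] - kps[:, b]
    cos = np.sum(v1 * v2, axis=-1) / (
        np.linalg.norm(v1, axis=-1) * np.linalg.norm(v2, axis=-1) + 1e-8
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

def joint_velocity(kps, fps=30.0):
    """Frame-to-frame velocity of every joint, in pixels per second."""
    return np.diff(kps, axis=0) * fps

# Random data standing in for a real pose sequence: 4 s at 30 fps, 17 COCO joints.
T, J = 120, 17
kps = np.random.rand(T, J, 2) * 640

elbow_angle = joint_angle(kps, a=5, b=7, c=9)   # shoulder-elbow-wrist (assumed ids)
hand_dist   = joint_distance(kps, 9, 10)        # left wrist to right wrist
vel         = joint_velocity(kps)               # shape (T-1, J, 2)
print(elbow_angle.shape, hand_dist.shape, vel.shape)
```

Plotting curves like elbow_angle or hand_dist over time is usually enough to eyeball whether they track the motion in the clip – that's the "visualizing motion" part.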


🛠️ What I Built – 2D_Pose_Feature_Builder.ipynb

🎯 Purpose

Turn raw 2D pose sequences into interpretable motion features – joint distances, joint angles, and per-frame velocities – and visualize the resulting motion, without relying on 3D lifting.

📂 Functionality

- Loads 2D keypoint sequences for a clip
- Computes pairwise joint distances and joint angles per frame
- Computes per-joint velocities across frames
- Plots the resulting feature curves to inspect the motion


💡 Why This Matters

This experiment helped me validate, in particular, that a full 3D lifting module isn't strictly necessary for building a useful system, especially in constrained or real-time environments.


📊 My Model Setup (So Far)

No learning model yet – this was a feature-engineering stage only.
But these outputs will soon be fed into a temporal sequence model – see the sketch below for one candidate.
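
The sketch is only a placeholder, not a decision: a small LSTM classifier in PyTorch, where the framework choice, feature dimension, hidden size, and number of classes are all assumptions on my end rather than anything from the paper or the notebook.

```python
import torch
import torch.nn as nn

class PoseSequenceClassifier(nn.Module):
    """Toy temporal model: per-frame feature vectors in, one move label out."""
    def __init__(self, feat_dim=32, hidden_dim=64, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])          # logits: (batch, num_classes)

model = PoseSequenceClassifier()
dummy = torch.randn(4, 120, 32)            # 4 clips, 120 frames, 32 features each
print(model(dummy).shape)                  # torch.Size([4, 8])
```

A temporal CNN or a small Transformer encoder would slot into the same interface, which is part of why keeping the feature builder as a separate stage feels like the right call.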


🔭 Next Steps (aka Day 6 Plan)


📝 Reflection

Implementing this part manually gave me a better grasp of why the HDVR paper’s hierarchical structure makes sense.
Rather than depending on raw pixel data or supervised 3D annotation, I can now construct explainable, modular pipelines based entirely on pose motion.

This also lays the foundation for a real-time dance feedback system, as well as downstream applications in fitness, rehabilitation, or choreography assistance.