[Figure: BlazePose.png]

Deep Learning-Based Knee Joint Analysis While Performing Adho Mukha Svanasana to Utkatasana

Published in: Journal of Complementary & Alternative Healthcare

Co-authors: Naimisha Sanjay, Yuktha Jayagopal, Dhruv Shindhe S, Omkar SN

Project Overview

This study examined whether a single camera combined with deep learning-based pose estimation could accurately analyze knee joint kinematics during dynamic yoga transitions.

The transition from Adho Mukha Svanasana to Utkatasana was selected for its controlled but continuous lower-body movement. The objective was to determine whether BlazePose could produce motion measurements comparable to manually annotated ground truth data from Kinovea.

The broader goal was to test whether low-cost computer vision systems can approximate biomechanical analysis typically performed with expensive motion capture infrastructure.

Research Question

Can a single-camera deep learning pipeline reliably extract knee joint displacement and higher-order motion parameters during dynamic yoga transitions?

Methodology

Video footage of the transition was recorded using a single-camera setup. BlazePose was used to extract right knee joint coordinates across frames.
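The extraction step can be sketched as follows, assuming the MediaPipe `solutions.pose` Python API (the standard BlazePose interface); the actual video filename and camera settings are placeholders, since the source does not specify them:

```python
def extract_right_knee_track(video_path):
    """Return per-frame (x, y) coordinates of the right knee,
    normalized to [0, 1], using MediaPipe's BlazePose model."""
    # Imported inside the function so the sketch can be loaded
    # even where OpenCV / MediaPipe are not installed.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    track = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # BlazePose expects RGB input; OpenCV decodes frames as BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                knee = results.pose_landmarks.landmark[
                    mp_pose.PoseLandmark.RIGHT_KNEE]
                track.append((knee.x, knee.y))
    cap.release()
    return track
```

MediaPipe returns landmarks normalized to the frame size, so coordinates would be rescaled to pixels before comparison with Kinovea annotations.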

For validation, the same transition was manually annotated using Kinovea to establish ground truth coordinate data.

From the extracted coordinate sequences, the following kinematic parameters were computed in Python:

  • Displacement

  • Velocity

  • Acceleration

  • Jerk

These were derived through temporal differentiation of joint coordinate values.
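The differentiation step can be sketched in NumPy; the paper's exact scheme, frame rate, and any smoothing are not specified, so `np.gradient` with a uniform frame interval is an assumption:

```python
import numpy as np

def kinematics_from_coords(coords, fps):
    """Derive displacement, velocity, acceleration, and jerk from a
    1-D sequence of joint coordinates (in pixels) sampled at `fps`."""
    coords = np.asarray(coords, dtype=float)
    dt = 1.0 / fps                              # time between frames (s)
    displacement = coords - coords[0]           # pixels, relative to frame 0
    velocity = np.gradient(displacement, dt)    # pixels per second
    acceleration = np.gradient(velocity, dt)    # pixels per second squared
    jerk = np.gradient(acceleration, dt)        # pixels per second cubed
    return displacement, velocity, acceleration, jerk
```

Each successive derivative amplifies noise in the coordinate signal, which is why jerk is the most demanding of the four parameters to estimate accurately.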

Model performance was evaluated by calculating the mean squared error between BlazePose-derived signals and manually annotated ground truth data.
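The evaluation metric itself is straightforward; a minimal sketch, assuming both signals have already been aligned frame-by-frame:

```python
import numpy as np

def mean_squared_error(estimated, ground_truth):
    """MSE between an estimated signal (e.g. BlazePose-derived) and
    its manually annotated counterpart (e.g. from Kinovea)."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.mean((estimated - ground_truth) ** 2))
```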

Results

The system achieved low mean squared error across all motion parameters:

  • Displacement: 0.000306 pixels

  • Velocity: 0.000220 pixels per second

  • Acceleration: 0.002897 pixels per second squared

  • Jerk: 0.000103 pixels per second cubed

These values indicate sub-pixel deviation relative to manual annotation in a controlled setting, demonstrating close agreement between automated estimation and ground truth measurements.

The automated pipeline significantly reduced analysis time compared to frame-by-frame manual processing.

Technical Contributions

I contributed to the experimental setup, data collection, computational analysis, and manuscript writing.

This required:

  • Learning and applying Python for signal processing

  • Computing derivative-based motion parameters

  • Validating model outputs against ground truth

  • Interpreting error metrics within a biomechanical context

The project demanded integration of deep learning, biomechanics, and kinematic analysis.

[Figure: Kinovea transition.png]

Limitations

The study was conducted in a controlled environment with a single subject and specific pose transition.

Future work should include:

  • Multiple participants

  • Different movement patterns

  • Lighting variation testing

  • Robustness evaluation across body types

Impact

This research demonstrates the feasibility of using a single-camera deep learning system for biomechanical motion analysis.

Potential applications include:

  • Remote rehabilitation monitoring

  • Accessible sports performance analysis

  • Low-cost movement tracking in resource-constrained settings

  • Yoga biomechanics research

By reducing reliance on expensive motion capture systems, this approach increases accessibility to kinematic analysis tools.

Reflection

This project reinforced a core research principle: validation matters more than novelty.

Computer vision outputs are only meaningful when benchmarked against reliable ground truth.

Working across biomechanics and deep learning strengthened my ability to evaluate model performance critically rather than accepting algorithmic output at face value.

Read the Paper here
