The Data Behind Better Movement

Why we collect climbing data, how we use it, and the ML pipeline turning movement into injury prevention.

Why Data Collection Matters

Most climbing injuries come from accumulated stress and poor movement patterns that go undetected until pain shows up. Yet there is almost no large-scale biomechanical data on climbers: sports like running and football have decades of motion-capture research, while climbing has almost none.

Dynalytix is building the first open dataset of climbing biomechanics. Every video generates 40+ data points per frame (joint angles, velocities, body positions), all labeled with move types, sensation data, and quality ratings.

This data does two things:

Immediately

Gives you a detailed breakdown of your own movement patterns — where you're compensating, where you're strong, where you're at risk.

Over Time

Trains ML models that detect dangerous patterns before they cause injury — not just for you, but for all climbers.

What We Collect

Automatic (extracted by computer vision)

12 joint angles per frame: left/right elbow, shoulder, hip, knee, and ankle, plus upper and lower back angles.

15 landmark positions: full-body tracking with x, y, z coordinates and visibility confidence.

Speed and velocity: center of mass, individual landmarks, and wrist/hip velocity vectors.

40+ data points per frame in total, all exported as CSV with frame number and timestamp.
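Two of the automatically extracted quantities, joint angles and landmark velocities, can be sketched in a few lines. This is a minimal illustration of the underlying geometry; `joint_angle` and `velocity` are illustrative helpers, not the actual pipeline code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist gives the elbow angle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def velocity(positions, fps):
    """Per-frame velocity vectors of a landmark via finite differences."""
    p = np.asarray(positions, dtype=float)
    return np.diff(p, axis=0) * fps

# Toy landmarks as (x, y, z) coordinates: a bent arm at a right angle.
shoulder, elbow, wrist = (0.0, 0.0, 0.0), (0.0, -0.3, 0.0), (0.3, -0.3, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

The same vertex-angle formula works for every joint in the list above, given the three landmarks that define it.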

You Label

1. Move type: 12 supported (Static, Deadpoint, Dyno, Lock-off, Gaston, Undercling, Drop Knee, Heel Hook, Toe Hook, Flag, Mantle, Campus).

2. Quality rating and effort level: a 1–5 form-quality score and a 0–10 perceived-effort rating per move.

3. Sensation tags: 9 types (Sharp Pain, Dull Pain, Pop, Unstable, Stretch/Awkward, Strong/Controlled, Weak, Pumped, Fatigue), each with body part and intensity.

4. Move boundaries: frame-accurate start/end markers for each move.
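Taken together, a single labeled move might be represented as a record along these lines. The field names and structure here are illustrative assumptions, not Dynalytix's actual export schema.

```python
# Hypothetical shape of one labeled move; field names are illustrative.
move_label = {
    "move_type": "Deadpoint",      # one of the 12 supported move types
    "quality": 4,                  # 1-5 form quality score
    "effort": 7,                   # 0-10 perceived effort
    "sensations": [                # sensation tags with body part + intensity
        {"tag": "Pumped", "body_part": "left forearm", "intensity": 6},
    ],
    "start_frame": 312,            # frame-accurate move boundaries
    "end_frame": 401,
}

assert 1 <= move_label["quality"] <= 5
assert move_label["start_frame"] < move_label["end_frame"]
```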

Video never leaves your browser. MediaPipe JS runs pose extraction entirely client-side — we only store the extracted pose data and your labels, never the video.

The ML Pipeline — From Your Video to Personalized Insights

Two-stage model architecture: supervised learning on labeled data combined with unsupervised pattern detection.

1. Base Model (Supervised Learning)

Trains on the collective dataset from all contributors. Using labeled data (move types, quality ratings, sensation tags), it learns:

  • What good form looks like across different move types
  • Which patterns correlate with pain
  • Common compensation patterns that precede injury
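One way such a supervised stage could look, using scikit-learn as a stand-in (the post does not specify the actual model architecture), is a classifier trained on per-move pose features against a "pain reported" label. The features and the toy risk rule below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the collective dataset: one row per move,
# features summarising joint angles/velocities, label = pain reported.
n = 1000
X = rng.normal(size=(n, 8))                      # 8 pooled pose features
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # toy "risk" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real data, the labels would come from the contributor-supplied sensation tags and quality ratings rather than a synthetic rule.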
2. Personalized Model (Transfer Learning + Unsupervised)

Once you've uploaded enough sessions, the base model fine-tunes on YOUR data:

  • Adapts to your body proportions, flexibility, and style
  • Discovers patterns you didn't explicitly label
  • Detects technique drift as you tire
Base Model (trained on all climbers)
├── Supervised: learns from labeled move quality + sensation data
└── Recognizes general "safe" vs "risky" movement patterns
        ↓ generic movement knowledge
You upload YOUR climbing sessions
        ↓
Personalized Model (fine-tuned on you)
├── Transfer learning: adapts base model to your body
└── Unsupervised: discovers YOUR hidden patterns
        ↓
Personalized Feedback
├── "Your left shoulder drops 15° more on dynos vs deadpoints"
├── "Your knee valgus increases after minute 40 — fatigue signal"
└── "Your lock-off form degrades when effort > 7"
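Feedback like the minute-40 fatigue signal could come from a simple per-minute trend check on a joint angle. This sketch uses an illustrative 5° drift threshold, not the product's actual logic.

```python
import numpy as np

def drift_minutes(angle_per_frame, fps=30, threshold_deg=5.0):
    """Return minute indices whose mean angle drifts more than
    threshold_deg from the first minute's baseline (illustrative)."""
    a = np.asarray(angle_per_frame, dtype=float)
    frames_per_min = fps * 60
    n_min = len(a) // frames_per_min
    means = a[: n_min * frames_per_min].reshape(n_min, frames_per_min).mean(axis=1)
    baseline = means[0]
    return [m for m in range(1, n_min) if abs(means[m] - baseline) > threshold_deg]

# Toy session: a stable knee angle for 40 minutes, then an 8-degree drift.
session = np.concatenate([np.full(30 * 60 * 40, 170.0),
                          np.full(30 * 60 * 10, 162.0)])
print(drift_minutes(session))  # minutes 40-49 flagged
```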

Model Training Approaches

Approach          | Use Case                                                 | Data Needed
Rule-based engine | Movement assessments with explicit criteria (deep squat) | None (works immediately)
Supervised ML     | Complex movements like climbing                          | 500–2,000+ labeled examples
Unsupervised ML   | Hidden pattern detection, anomaly flagging               | Accumulates over time per user
Hybrid            | Rules + supervised + unsupervised                        | Builds progressively

We're currently in the data collection phase. The rule-based scoring engine (used in the clinical assessment tool) works today. The ML models are next.
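To illustrate the rule-based approach, here is a minimal explicit-criteria check in the deep-squat spirit. The thresholds are illustrative, not the clinical assessment tool's actual criteria.

```python
def score_deep_squat(knee_angle, hip_angle, ankle_angle):
    """Toy rule-based score: one point per criterion met.
    Thresholds are illustrative, not clinical values."""
    rules = [
        knee_angle <= 90,   # thighs at or below parallel
        hip_angle <= 95,    # adequate hip flexion
        ankle_angle >= 25,  # sufficient ankle dorsiflexion
    ]
    return sum(rules), len(rules)

passed, total = score_deep_squat(knee_angle=85, hip_angle=90, ankle_angle=30)
print(f"{passed}/{total} criteria met")  # 3/3 criteria met
```

Because every criterion is explicit, this style of engine needs no training data, which is why it works today while the ML models wait on the dataset.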

The Data Flywheel

Every contribution makes the system better for everyone.

1. Contributors upload climbing videos

2. Pose extraction generates biomechanical data

3. Contributors label moves + tag sensations

4. Labeled data trains the base model

5. Better model = more accurate feedback

6. Better feedback attracts more contributors

7. More contributors = more diverse data, and the cycle begins again

Key Points

More climbers = better models

Diversity of body types, climbing styles, and skill levels makes the model more robust.

Your data helps others

A pattern in your data might prevent someone else's injury.

The dataset is open

The full dataset syncs to a public GitHub repo (dynalytix-data) that researchers can access.

Early contributors get the most value

Free access to personalized insights when ML models go live.

Start contributing data

Help build the future of climbing injury prevention. Every video you label makes the models smarter for everyone.
