Doubles support in AI tennis apps: where most fail

Most AI tennis apps were trained on singles and silently break on doubles. Here's why partner-swap is the universal failure mode — and how AceSense handles it (with caveats).

If you're a doubles player evaluating tennis-AI apps, you've probably noticed something: most of them talk about singles. The marketing screenshots are singles. The feature pages are singles. And when you finally upload a doubles match, the report has a specific kind of weirdness — your shot count looks off, the heatmap puts a forehand in your partner's territory, the stroke quality scores don't match the rallies you remember.

This is a real, structural problem in the category, and it has the same root cause across vendors. This post explains the cause, the failure modes, where AceSense's doubles support stands today, and what to do about it.

I'm the founder of AceSense, so I'll be honest: doubles is harder for us than singles, and we have a documented caveat. We're not unique in this — the whole category has the same issue.

TL;DR

  • Most tennis-AI models were trained on singles. Doubles is the under-served case.
  • The universal failure mode is partner-swap: the model loses track of which player is which during long rallies and assigns shots to the wrong person.
  • AceSense supports doubles with one documented caveat: occasional partner swap on long crosscourt rallies. Match-level stats (heatmap, total shots, court coverage) remain accurate.
  • For doubles drills (cross-courts, poach reps), the data is reliable. For per-player attribution in a heated match, hand-verify.
  • The fix is more doubles training data and better re-identification models. Solvable, not solved.

The question that prompted this post

The Google "People also ask" box for "swingvision doubles" and adjacent queries surfaces the same question over and over: Does SwingVision work for doubles? The honest answer, for SwingVision and for most of the category, is: yes, but with caveats most apps don't volunteer.

We're writing this because doubles players keep emailing us asking what's actually different on a doubles video, and the answer deserves more than a footnote. The Talk Tennis thread "Any Free Video Analysis Apps?" carries the same question across products, and the App Store reviews of SwingVision (source) hint at doubles-specific issues without naming them directly.

Why doubles is harder than singles for vision AI

Three reasons, in order of severity.

1. Training data imbalance

The public datasets used to train tennis vision models — academic ones like the various TrackNet datasets, plus the user-uploaded video that vendors collect — skew heavily singles. There are several reasons. Singles is what gets televised most. Solo players film themselves more than doubles players film their teams. And the academic groups that built the foundational datasets focused on singles because the problem was simpler.

The result: a model trained on a 90/10 singles/doubles split sees fewer doubles examples and learns singles-specific assumptions. It's not that the model can't learn doubles; it's that it didn't see enough of them.
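To make the imbalance concrete, here's a minimal sketch of one common mitigation: weight each training clip inversely to its class frequency so a 90/10 singles/doubles corpus is sampled closer to 50/50. The corpus, labels, and split below are illustrative, not our actual training setup.

```python
import random
from collections import Counter

def balanced_weights(labels):
    """Weight each clip inversely to its class frequency, so the
    minority class (doubles) is drawn as often as the majority."""
    counts = Counter(labels)
    return [1.0 / counts[lab] for lab in labels]

# Illustrative corpus: nine singles clips for every doubles clip.
corpus = ["singles"] * 9 + ["doubles"]
weights = balanced_weights(corpus)

random.seed(0)
sample = random.choices(corpus, weights=weights, k=10_000)
doubles_share = sample.count("doubles") / len(sample)
```

With uniform sampling the model would see doubles 10% of the time; with these weights the expected share is 50%. Real pipelines usually cap the upweighting so the minority class isn't memorized.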

2. Occlusion is worse

In singles, the players are typically on opposite ends of the court. They occlude each other rarely (only on a few approach shots and put-aways at net). In doubles, all four players are sometimes within 4 meters of each other — net exchanges, switch volleys, formation changes. Pose-estimation models like MediaPipe can confuse who's who when bodies overlap, and that confusion cascades through the rest of the pipeline.
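For intuition, the overlap check a pipeline might run is simple: compute pairwise intersection-over-union (IoU) of player bounding boxes and flag frames where any pair exceeds a threshold. A sketch, with made-up box coordinates and an illustrative 0.3 threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def frame_is_occluded(boxes, threshold=0.3):
    """True if any two player boxes overlap enough that identity
    tracking in this frame is suspect. Threshold is illustrative."""
    return any(
        iou(boxes[i], boxes[j]) > threshold
        for i in range(len(boxes))
        for j in range(i + 1, len(boxes))
    )

# Two net players crossing paths vs. the same players well apart.
crossing = [(100, 50, 180, 200), (130, 60, 210, 210)]
apart = [(100, 50, 180, 200), (400, 60, 480, 210)]
```

Frames flagged this way are exactly where per-player attribution gets risky; in singles the flag almost never fires, in doubles it fires constantly around the net.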

3. Partner swap

This is the universal failure mode. Vision models that track players across frames assign each player an identity (player 1, player 2, etc.) and try to keep the identity consistent across the video. When two players on the same side cross paths — say you switch sides on a poach, or your partner moves up to net — the model can swap the identities. From that moment forward, every shot you hit gets attributed to your partner, and vice versa, until the next swap (which sometimes corrects, sometimes doesn't).

Partner swap is solvable. It requires a re-identification model — a separate vision component that compares player appearance (clothing, body shape, gait) across frames and re-anchors identities when they get confused. Re-identification is well-studied in surveillance vision; it's just not standard in tennis-AI pipelines yet, because the singles use case didn't need it.
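A toy version of that re-anchoring step: compare the appearance embedding of each uncertain track against a stored anchor embedding per player and reassign by cosine similarity. This is a greedy illustration under invented embeddings and names, not our production pipeline; real re-ID uses learned appearance embeddings and Hungarian-style assignment over a full cost matrix.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reanchor(anchors, detections):
    """Greedily assign each detected track to the known player
    whose appearance anchor it most resembles."""
    assignment, taken = {}, set()
    for det_id, det_emb in detections.items():
        best, best_sim = None, -2.0
        for pid, anc_emb in anchors.items():
            if pid in taken:
                continue
            sim = cosine(anc_emb, det_emb)
            if sim > best_sim:
                best, best_sim = pid, sim
        assignment[det_id] = best
        taken.add(best)
    return assignment

# Toy embeddings: player A wears a dark kit, player B a light one.
anchors = {"A": [0.9, 0.1, 0.2], "B": [0.1, 0.9, 0.3]}
# After a crossing the tracker's labels are suspect; re-match by appearance.
detections = {"track_1": [0.12, 0.88, 0.31], "track_2": [0.88, 0.12, 0.19]}
```

The key property is that the anchor is tied to appearance, not to frame-to-frame position, so a crossing can't silently carry the wrong label forward.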

What partner swap looks like on your report

Three telltale signs:

  • Asymmetric shot counts. You hit roughly the same number of shots as your partner, but the report shows 80% on one side. Almost always partner swap.
  • Heatmap clusters in the wrong half. Your forehand cluster shows up on your partner's side of the court. Almost always partner swap.
  • Stroke quality scores that contradict your memory. The report says your forehand technique improved dramatically mid-match — but actually it just started attributing your partner's shots to you, and your partner has a different forehand.
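The first of those signs is easy to check mechanically. A sketch of the asymmetry heuristic, where the 70% threshold is an illustrative guess rather than a calibrated cutoff:

```python
def swap_suspect(my_shots, partner_shots, threshold=0.7):
    """Flag a report where one partner is credited with an
    implausibly large share of the team's shots, which usually
    signals a mid-match identity swap rather than real play."""
    total = my_shots + partner_shots
    if total == 0:
        return False
    return max(my_shots, partner_shots) / total >= threshold
```

An 80/20 split on a team that rallied evenly trips the flag; a 52/48 split doesn't.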

The fix on the user side is hand-verification, which we don't pretend is acceptable for a paying customer. The fix on the vendor side is re-ID and more doubles training data. We're working on both.

Where AceSense's doubles support stands today

Honest summary, as of the current build:

What works well:

  • Match-level statistics (total shots, total bounces, total winners/errors). These are computed from the ball trajectory, which doesn't depend on which player hit the ball.
  • Court coverage heatmap (where the ball landed, regardless of who hit it).
  • Shot type distribution (overall percentage of forehands vs backhands vs serves vs volleys).
  • Per-rally analysis (rally length, rally end-shot type, who served).
  • Stroke quality scoring on close-up clips, where occlusion is minimal.

What has the documented caveat:

  • Per-player shot attribution on long crosscourt rallies with multiple net exchanges. Partner swap risk increases with rally length.
  • Per-player heatmaps when a swap has occurred mid-match.
  • Stroke-quality scores assigned to a specific partner during a swap-affected segment.

What we're working on:

  • Re-identification across player crossings.
  • More doubles training data (if you want to share doubles videos for training, please email [email protected] — we credit and respect privacy).
  • A "doubles mode" toggle that runs a tighter re-ID pass at higher GPU cost.

What to do as a doubles player

Three practical takeaways:

  1. Trust the match-level data. Total shot counts, the court-coverage heatmap, rally lengths — these are fine. The aggregate picture of your team's match is reliable.
  2. Hand-verify per-player attribution before sharing. Open the per-shot view on the report, scroll through, and confirm the labels for the rallies that matter. Five-minute task; eliminates the swap risk for the rallies you actually care about.
  3. Use it for drills, not just matches. Doubles drills (cross-court patterns, poach reps, serve+1, formation work) have shorter exchanges and fewer crossings. Drill-mode data is rock-solid.

A note on competitor doubles support

I won't pretend to have run a structured comparison across SwingVision, BaselineTennisAI, OnForm, and the rest on a curated doubles test set. We haven't done that, and anyone who tells you "we tested every vendor on the same doubles match" is probably exaggerating.

What I can tell you from forum threads (example) and App Store reviews (source) is that doubles glitches are reported across products. SwingVision's doubles experience is generally more mature than a smaller vendor's, mostly because more doubles videos have been uploaded to it and more bug-fix cycles have been run. But the underlying partner-swap problem is the same.

Why we publish this rather than hide it

A lot of vendors don't talk about doubles weak spots because the marketing case for "AI tennis analysis" is cleaner if you assume singles. We talk about it for two reasons:

  1. Doubles is the majority of US club tennis. USTA league play is overwhelmingly doubles. If our product silently fails on the format most of our users actually play, we're failing them.
  2. The accuracy page argument only works if it's honest. We publish per-surface and per-shot-type accuracy on /accuracy. Hiding doubles-specific weak spots would undermine the whole credibility argument.

When doubles AI will be solved

It will get fixed. The required training data is growing every month (every doubles video uploaded teaches the next model). Re-identification models are improving in adjacent computer-vision fields (sports broadcast, surveillance, retail). The vendors that prioritize doubles will close the gap first.

In the meantime, the honest story is: doubles works, with documented limits, and the caveat lives in /accuracy, not in fine print at the bottom of a marketing page.

FAQ

Does AceSense support doubles? Yes — with one documented caveat (partner swap on long crosscourt rallies). Match-level data is reliable.

Does SwingVision work for doubles? Yes, with similar caveats — the partner-swap problem affects the whole category. SwingVision's doubles support is more mature than smaller vendors', but the underlying problem persists.

What works perfectly for doubles? Total shot counts, ball-trajectory-based court coverage, rally-level analysis. Anything that doesn't depend on per-player attribution.

What should I hand-verify? Per-player shot attribution on long crosscourt rallies. Open the report, scroll through, confirm labels.

Will doubles get better? Yes. The fix is more doubles training data and re-identification models. We're working on both.


Try AceSense free — match-level doubles data is reliable, per-player attribution has documented caveats. Start free · How AceSense works · Tennis ball tracking accuracy explained · AceSense vs SwingVision