In 2025, Wimbledon retired its line judges. The 147 white-clad humans who'd called lines on the Championships' courts since the 1870s were replaced by Hawk-Eye Live, an electronic line calling (ELC) system that's been on the men's tour for years and was already universal at the US Open and the Australian Open. The change made every front page.
Two questions dominate Google's "People Also Ask" box on this topic right now:
- "Are the line calls at Wimbledon AI generated?"
- "How does AI line calling work in tennis?"
Both deserve real answers, not headline answers. And both are worth understanding if you're using a phone-based tennis AI app — because the gap between what Wimbledon has and what a phone can do is the gap that defines what an amateur tool can honestly claim.
TL;DR
- Wimbledon uses Hawk-Eye Live — 10+ high-frame-rate calibrated cameras per court, 3D triangulation, deterministic geometric line-call decision.
- It's "AI" only in the loose sense. There's no neural network deciding the call. It's computer vision plus geometry.
- Published accuracy: roughly 3.6 mm mean error. That's a stadium-grade number that no single-camera phone app can match — or should claim.
- Phone-based AI line calling for amateurs is useful (15–30 cm tolerance is enough to settle most disputes) but is not on the same scale.
What Hawk-Eye Live actually does
Hawk-Eye, the company, has been building ball-tracking systems for cricket and tennis since 2001. The "Live" variant — used for real-time, in-game line calls without challenge protocols — has a specific architecture:
Cameras. 10–12 high-frame-rate cameras (typically 340+ fps) mounted around the court at fixed, calibrated positions. The Australian Open and US Open use Hawk-Eye Live across all courts; Wimbledon adopted it across all courts in 2025.
Triangulation. Each camera sees the ball as a 2D pixel. With 10+ cameras at known positions, the system triangulates the ball's 3D position to a measured precision. The published accuracy is around 3.6 mm.
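Hawk-Eye's actual solver isn't public, but the core idea of multi-view triangulation is standard: each camera detection defines a ray from the camera centre toward the ball, and the 3D position is the point that minimises the squared distance to all rays. A minimal sketch (the `triangulate` helper and the camera positions are illustrative, not Hawk-Eye's implementation):

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares 3D point closest to a set of camera rays.

    centers: (n, 3) camera positions; directions: (n, 3) rays toward
    the ball, one per camera, derived from each camera's 2D detection.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane perpendicular to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two cameras at known positions, both sighting the same point (0, 0, 1)
centers = np.array([[5.0, 0.0, 2.0],
                    [-5.0, 0.0, 2.0]])
target = np.array([0.0, 0.0, 1.0])
directions = target - centers
print(triangulate(centers, directions))  # ≈ [0, 0, 1]
```

With ten or more rays instead of two, the same least-squares system simply accumulates more constraints, which is why residual error shrinks as cameras are added.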
Geometric decision. Once you have the 3D ball position over time, and the 3D court geometry (which is known and measured to the millimetre), an in/out call is just trigonometry. No machine-learning judgement. The ball is either inside the line plane or outside.
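Once the bounce position is known in court coordinates, the call itself reduces to a couple of comparisons. A toy sketch, using the ITF singles dimensions (8.23 m wide, 23.77 m long) and a hypothetical `mark_radius` standing in for the ball's contact patch; the line counts as part of the court:

```python
# Singles court half-dimensions in metres, measured to the outer edge of the lines
HALF_WIDTH = 8.23 / 2    # singles sideline
HALF_LENGTH = 23.77 / 2  # baseline

def line_call(x, y, mark_radius=0.04):
    """In/out from a bounce centre (x, y) relative to the court centre.

    The ball is IN if any part of the contact mark touches the line,
    since the line is part of the court.
    """
    in_x = abs(x) - mark_radius <= HALF_WIDTH
    in_y = abs(y) - mark_radius <= HALF_LENGTH
    return "in" if (in_x and in_y) else "out"

print(line_call(4.10, 10.0))  # mark overlaps the sideline -> "in"
print(line_call(4.20, 10.0))  # clear of the line edge -> "out"
```

No learned model appears anywhere in this step; the hard part is getting the 3D position, not making the call.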
Real-time audio. An automated voice calls "out" or "fault" within a fraction of a second of the bounce. The chair umpire still presides; ELC replaces the linesperson role.
This is not "AI" in the way a Google product manager might use the word. There's no transformer. There's no LLM. There's not even a deep neural net doing the line call itself. The ball-detection step within each camera frame is computer vision (which has had ML components since the 2010s), but the actual line call is pure geometry.
The "AI generated" framing in headlines reflects how loosely the public uses the term. Worth understanding the distinction if you care about the tech.
Why this isn't reproducible on a phone
A phone has one camera. Hawk-Eye has ten. Multi-camera triangulation is fundamentally a different problem from monocular inference. You can do a lot with one camera (we've built a whole product on it), but you cannot do Hawk-Eye accuracy with one camera, and any phone app that implies otherwise is selling a vibe, not a measurement.
The math is straightforward:
- Stereo / multi-view triangulation: depth error scales inversely with the camera baseline (and grows with the square of the distance to the ball). More baseline, more cameras, less error. Hawk-Eye's setup gets to millimetres.
- Monocular + scene geometry: depth is inferred from the known geometry of the court, not measured. Inference adds a structural error that no software improvement collapses to zero.
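The stereo half of that comparison can be made concrete with the standard first-order error model, sigma_z ≈ z² · sigma_d / (f · b). The numbers below are illustrative round figures, not measured values from any real rig:

```python
def depth_error(z, baseline, focal_px, disparity_err_px):
    """First-order stereo depth error: sigma_z ~ z^2 * sigma_d / (f * b).

    z: distance to the ball (m); baseline: camera separation (m);
    focal_px: focal length in pixels; disparity_err_px: pixel-level
    matching error.
    """
    return z**2 * disparity_err_px / (focal_px * baseline)

# Illustrative: a wide stadium baseline vs a hand-width one, same optics.
print(depth_error(z=20, baseline=15.0, focal_px=3000, disparity_err_px=0.5))  # ~0.004 m
print(depth_error(z=20, baseline=0.1, focal_px=3000, disparity_err_px=0.5))   # ~0.67 m
```

Same sub-pixel detection quality, two orders of magnitude apart, purely from geometry. That is the structural gap no software update closes.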
We talked about this exact mechanic in the context of serve speed estimation — same root cause. Single-camera setups have a structural error band that's bigger than Hawk-Eye's by an order of magnitude, and the honest move is to be explicit about it.
What phone-based AI line calling can do (honestly)
In our pipeline (explained here), bounce detection runs on a single phone camera with court keypoint detection providing the geometric scaffold. Under good filming conditions — phone on the centre line, behind the baseline, 30 fps minimum — bounce position error is typically under 30 cm.
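The "geometric scaffold" step can be sketched generically: four detected court keypoints give a homography from image pixels to court metres, and the bounce pixel is pushed through it. This is a minimal DLT sketch with made-up pixel coordinates, not the production pipeline:

```python
import numpy as np

def homography(src, dst):
    """Homography from four point correspondences (DLT, with h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1).reshape(3, 3)

def pixel_to_court(H, px, py):
    """Map an image pixel to court coordinates in metres."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

# Hypothetical keypoints: four line intersections detected in the frame
# (pixels), matched to their known court positions (metres).
pixels = [(100, 700), (1180, 700), (420, 250), (860, 250)]
court = [(-4.115, 0.0), (4.115, 0.0), (-4.115, 11.885), (4.115, 11.885)]
H = homography(pixels, court)
print(pixel_to_court(H, 640, 475))  # a mid-court bounce pixel, in metres
```

Note what this buys and what it doesn't: the mapping is exact for points on the court plane, so bounce position is recoverable from one camera, but anything off the plane (ball in flight) brings back the monocular depth problem.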
Thirty centimetres is bad for officiating a Slam. It's also fine for resolving most amateur arguments. A bounce 5 cm in or 5 cm out is something humans get wrong all the time too — that's why Hawk-Eye exists at the pro level. A bounce 50 cm in is something even your skeptical doubles partner will accept.
So phone AI line calling for amateurs has a real use case — settling close calls in social play, where the alternative is "we both think it was in / out and we replay the point." It does not have a use case as an officiating system, and any app implying otherwise is overselling.
Why Wimbledon held out
Until 2025, Wimbledon kept its line judges while every other Slam moved to ELC. The reasons were partly tradition (Wimbledon does tradition) and partly that Hawk-Eye had long been used only as a supplement to humans, for reviewing challenged calls; the full-replacement Live variant was newer.
Once the US Open's 2020 deployment held up across multiple tournaments, and the Australian Open's 2021 follow-up confirmed the operational stability, the case for Wimbledon's hold-out got thinner each year. The 2025 announcement was the inevitable closure of that gap. The era of the human linesperson is over at the top of the sport.
What this means for the future of amateur AI line calling
A few things follow from where the technology is now:
- Hawk-Eye-grade ELC won't come to amateur courts soon. The hardware cost runs to six figures per court. Public parks will not get this in our lifetimes.
- Phone-based amateur line calling will keep improving as court-keypoint models get better and as more amateurs film at 60 fps (frame rate is a structural lever on accuracy that doesn't depend on hardware cost).
- The sweet spot for phone tools is post-match analysis, not real-time officiating. Looking at where bounces clustered over a match is way more useful at the amateur level than calling individual lines, and the accuracy needed for clusters is much more forgiving than the accuracy needed for individual calls.
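The frame-rate lever in the list above is simple arithmetic worth seeing once: the ball's position is only sampled once per frame, so the gap between samples bounds how precisely the bounce moment can be pinned down. A quick back-of-envelope:

```python
def interframe_gap(speed_kmh, fps):
    """Distance the ball travels between consecutive frames, in metres."""
    return (speed_kmh / 3.6) / fps

# A 72 km/h groundstroke (20 m/s):
print(interframe_gap(72, 30))   # 0.667 m between frames at 30 fps
print(interframe_gap(72, 60))   # 0.333 m at 60 fps
print(interframe_gap(72, 340))  # ~0.06 m at Hawk-Eye-grade 340 fps
```

Interpolation between frames recovers some of that gap, but doubling the frame rate halves it for free, which is why 60 fps filming is one of the cheapest accuracy upgrades an amateur can make.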
This is why AceSense's product surface emphasises court heatmaps, shot patterns, and stroke quality over real-time line calling. We can do the post-match analysis well; we can't do Hawk-Eye-grade live calling and we won't pretend we can.
A footnote on terminology
Watch the language. "AI line calling" is sloppy shorthand for what's actually happening at Wimbledon — which is computer vision plus geometric inference plus calibrated multi-camera hardware. The phrase "AI" is doing a lot of work, most of it inaccurate.
The same is true in the amateur space. A tennis app that says it does "AI line calls" is either using the phrase very loosely or claiming something it can't deliver. Ask: how many cameras, what frame rate, what court-keypoint detection, what published accuracy? If a vendor can't answer, the system isn't measured the way Hawk-Eye is measured.
That's a useful filter as the AI-tennis space gets noisier through 2026.
Related reading: How AI tennis shot detection actually works explains the AceSense pipeline (which is not the Wimbledon pipeline). How accurate is AceSense? covers our methodology — and where it sits on the accuracy spectrum vs Hawk-Eye. Or read /how-it-works for the visual product version.