When no model exists for the problem, you build one.
How on-screen human representation was transformed from invisible to measurable — across hundreds to tens of thousands of video ads per client per year.
| Sector | Ad Tech / Media Intelligence |
| Service | Video processing pipeline for facial & body analysis |
| Scale | Hundreds to tens of thousands of videos per client per year |
| Outcome | Novel CV pipeline → production-grade D&I intelligence product |
Key Highlights
The ask was deceptively simple: help brands understand who is represented in their ads — across skin tone, body type, age, gender, and overall demographic diversity. The reality was technically brutal. No off-the-shelf model existed for this. Academic computer vision research didn't map cleanly to broadcast ad footage. And the system needed to scale — processing video volumes that ranged from hundreds to tens of thousands per client per year, without manual intervention.
Before this work, representation monitoring was either manual, incomplete, anecdotal, or simply not happening. The data didn't exist. The tooling didn't exist. The definitions didn't fully exist either — which turned out to be the first real problem to solve.
Define. Build. Scale.
Build the definition before the model.
The first challenge wasn't technical — it was conceptual. What does "skin tone" mean consistently enough to train a detector on? We worked to establish precise, reproducible definitions for each attribute — definitions that could survive annotation at scale, edge cases in lighting and camera angle, and scrutiny from product and commercial stakeholders. The definition and the detector were built simultaneously.
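One way to test whether an attribute definition "survives annotation at scale" is to measure inter-annotator agreement on the same frames. A minimal sketch using Cohen's kappa, which corrects raw agreement for chance; the coarse label scale here is illustrative, not the production taxonomy:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators on the same items, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling the same 8 frames on an illustrative coarse scale.
a = ["light", "light", "medium", "dark", "medium", "dark", "light", "medium"]
b = ["light", "medium", "medium", "dark", "medium", "dark", "light", "light"]
print(round(cohens_kappa(a, b), 3))  # 0.619 — moderate agreement
```

A definition that produces low kappa across trained annotators is not yet precise enough to train a detector on — which is exactly why the definition and the detector had to be built together.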
Custom models, built from the ground up.
With no usable off-the-shelf solution, we built several AI models including skin tone and body type detection models entirely in-house — covering data collection, annotation pipeline design, model training, and iterative refinement. Each model had to perform reliably across diverse lighting conditions, video quality levels, and camera angles typical of broadcast ad production.
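Detectors in this setting typically run per frame, so frame-level outputs have to be collapsed into one stable video-level attribute. A minimal sketch of one common aggregation strategy — majority vote over a confidence floor; the threshold values and label names are illustrative assumptions, not the production logic:

```python
from collections import Counter

def aggregate_frames(frame_preds, min_conf=0.6, min_votes=3):
    """Collapse per-frame (label, confidence) pairs into one video-level label.

    Frames below the confidence floor are discarded; surviving labels are
    majority-voted, and the winner is kept only if enough frames agree.
    """
    confident = [label for label, conf in frame_preds if conf >= min_conf]
    if not confident:
        return None
    label, votes = Counter(confident).most_common(1)[0]
    return label if votes >= min_votes else None

preds = [("medium", 0.91), ("medium", 0.88), ("light", 0.52),
         ("medium", 0.74), ("dark", 0.61)]
print(aggregate_frames(preds))  # medium — three confident frames agree
```

Aggregation like this is one reason per-frame robustness to lighting and camera angle matters: a few low-confidence frames should degrade gracefully rather than flip the video-level result.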
A pipeline that could handle broadcast volumes.
Models alone weren't enough. The system needed to process video at scale without manual triggering or oversight. We designed and deployed a GPU-optimised pipeline capable of handling tens of thousands of videos per batch trigger — integrated directly into ExtremeReach's existing infrastructure and outputting structured results into their client-facing insights product, XR IQ.
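At batch scale, orchestration matters as much as the models. A minimal sketch of the fan-out/fan-in shape such a pipeline takes — function names, worker counts, and the result schema are illustrative assumptions; the real system runs GPU inference inside ExtremeReach's infrastructure and feeds XR IQ:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def analyse_video(video_id):
    """Placeholder for the real GPU inference step (decode, detect, classify)."""
    return {"video_id": video_id, "attributes": {"skin_tone": "medium"}}

def process_batch(video_ids, max_workers=8):
    """Fan a batch of videos out to workers and collect structured results."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(analyse_video, vid): vid for vid in video_ids}
        for future in as_completed(futures):
            results.append(future.result())
    return results

batch = [f"vid-{i:05d}" for i in range(100)]
records = process_batch(batch)
print(len(records))  # 100 — one structured record per video, no manual steps
```

The key property is that a single batch trigger drives the whole run: ingest, inference, and structured output happen with zero per-video intervention.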
What changed.
| Before Ferrous Labs | After Ferrous Labs |
|---|---|
| No model existed for skin tone or body type detection at broadcast fidelity | Novel CV capabilities — not available from any third-party provider |
| Representation monitoring was manual, anecdotal, or absent | Production-ready D&I intelligence pipeline deployed at broadcast scale |
| No scalable way to process client video volumes | Automated, scalable pipeline — zero manual intervention per video batch |
| Brands had no data on who appeared in their own advertising | Brands gain quantified, time-series insight into representation across their ad portfolios |
"The hardest part wasn't the model — it was defining what 'skin tone' means consistently enough to train one. We built the definition and the detector simultaneously."

Ferrous Labs engineering note
Stack
Talk to engineering.
If you're working at the edge of what production AI can do, talk to a team that has delivered novel CV capabilities from scratch. Book a discovery call.