May 06, 2026 · 15 min read

Drive time, honestly: a transparent tachograph layer for mixed fleets


Why drive time is the bottleneck nobody talks about

For a freight forwarder with own assets, chartered trucks, and the occasional spot booking, the question "can this driver legally do this load?" sits underneath almost every commercial decision: which truck do I assign, can we still make the slot, do I need a relay, do I call the customer now or in three hours.

Today, in most ops floors we've sat in, that question is answered three times a day, by hand, per truck. Dispatchers we interviewed described phoning carriers, eyeballing telematics, and doing the 9h-vs-11h rest math in their head. They also described the cost when the math is wrong:


"If the truck is not able to bring windshields on time, there is a risk the whole factory will stop. You can imagine what kind of losses we're talking about."
— Vladislavs Aleksejevs, sennder chartering


"One late truck can force me to replan five loads. Each change ripples through the day, and the admin is overwhelming."
— Bob Snijder, FleetPartner


The cost isn't just compliance. It's utilization (the #1 KPI for chartering), it's empty kilometres on the replan (typically 50–100% more on a forced reassignment), and it's the OTP threshold above which shippers keep paying you. Drive time data is the input that decides whether ops get to fix things proactively or apologetically.

So when we set out to rebuild the drive time surface in the CO3 platform, we didn't try to make a prettier dashboard. We tried to make the underlying data trustworthy enough that a planner could act on it without double-checking.


Three commitments

Before we get into how we built it, here are the three commitments the new layer makes; they are what the rest of the post is about:

  1. Real-time, by construction. Whenever a planner asks — 09:14, 14:02, in the middle of a phone call with a customer — the answer reflects what the driver is doing right now, not what the carrier's system happened to record at the last convenient moment. There is no "wait for the next snapshot" failure mode.
  2. Harmonized, regardless of source. The shape of the answer is the same whether the truck is on a top-tier OEM telematics feed, a third-party tracker, a DDD upload, or a hard-to-track subcontractor. A planner UI, an API client, or a partner TMS can render the same fields in the same way for every truck in a mixed fleet. The complexity of "which provider?" is covered by the CO3 platform, not the consumer.
  3. Transparent about provenance. Every field in the response carries the answer to "how do you know?". Which sources contributed which slice of the timeline, when they were recorded, where the platform had to fall back, and how confident we are. We think hiding this is a category default that's wrong.


The context gap: why drive time data is harder than it looks

A driver's "remaining legal budget" is not a single number. Under EU Regulation 561/2006 it is a small system of state machines: daily and weekly driving caps, a working-time cap, three reduction budgets, two extension budgets, a 13-or-15-hour shift window, a compensation debt counter, and a split-rest pattern detector. Each resets on a different boundary.
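To make that "small system of state machines" concrete, here are the headline figures from EU 561/2006 expressed as data. A minimal sketch in minutes, for orientation only; the platform's real model also tracks the different reset boundary each budget has:

```python
# Headline limits from EU Regulation 561/2006, in minutes.
# Illustrative only: the real model is stateful, because each of
# these resets on a different boundary (day, week, two weeks).
EU_561_LIMITS = {
    "drive_before_break": 4 * 60 + 30,   # then a 45-minute break is due
    "daily_drive": 9 * 60,               # extendable to 10h, at most twice a week
    "daily_drive_extended": 10 * 60,
    "weekly_drive": 56 * 60,
    "biweekly_drive": 90 * 60,
    "daily_rest": 11 * 60,               # reducible to 9h up to 3x between weekly rests
    "daily_rest_reduced": 9 * 60,
    "weekly_rest": 45 * 60,              # reducible to 24h, with compensation owed
    "weekly_rest_reduced": 24 * 60,
    "shift_window": 13 * 60,             # 15h in the split-rest case
    "shift_window_split_rest": 15 * 60,
}
```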

Now layer reality on top:

  • Fleets are mixed. Own trucks, chartered carriers, spot bookings — each with a different telematics provider, a different OEM, different data sets available and different transmission frequencies.
  • Snapshots and files go stale. A tachograph snapshot or DDD file from a provider is a counter at an instant. The driver kept driving after that instant. By the time the planner looks, the snapshot is already 15 min old; if it's based on DDD file downloads, much older.
  • Sources disagree. A snapshot says the driver has 4h 15m of daily drive left; the activity history feed disagrees by 12 minutes because of how a 3-minute gap was classified.
  • The "fallback" question is unavoidable. What do you show when the snapshot is missing the shift window field but you have the activity history? What about when you have neither and only telematics pings to infer driving from?
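The "snapshots go stale" point is worth one concrete sketch. If the snapshot says 4h 15m remained 15 minutes ago and the driver is still driving, the honest answer now is 4h 00m. Function and field names here are invented for illustration, not the platform's internals:

```python
from datetime import datetime, timedelta

def extrapolate_remaining(snapshot_remaining_min: float,
                          snapshot_recorded_at: datetime,
                          current_activity: str,
                          now: datetime) -> float:
    """Project a stale snapshot counter forward to 'now'.

    Every minute the driver kept driving after the snapshot was
    recorded reduces the remaining budget. Deliberately conservative:
    an Unknown activity is treated as driving.
    """
    elapsed = (now - snapshot_recorded_at) / timedelta(minutes=1)
    if current_activity in ("Driving", "Unknown"):
        return max(0.0, snapshot_remaining_min - elapsed)
    # Breaks and rest do not consume drive budget.
    return snapshot_remaining_min
```

A snapshot reading 255 minutes, recorded 15 minutes ago, with the driver still driving, extrapolates to 240 minutes.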

The instinct in this category is to abstract all of that away and surface a single number. We tried that — the v1 API exposed roughly a third of the domain model, no extension or reduction budgets, no shift window, and a flat source string instead of structured provenance. The result was a bottleneck of its own: planners couldn't trust the data, so they kept double-checking it.

The thing customers kept telling us — across every discovery call — was a variant of:

"I need to trust the data and recommendations, so I can act faster and with more confidence."

You don't earn that trust by hiding the seams. You earn it by showing them.


What the new API exposes

The first move was unglamorous: show the fields. What a planner (or a partner platform) now gets, in the vocabulary the regulation actually uses:

  • Five capacities, all in the same shape: next drive, daily drive, weekly drive, bi-weekly drive, weekly work — each with remaining time, the regulatory limit, and a percentage. Packaged for UIs to iterate over and render gauge cards programmatically.
  • A shift window object. When the current work period started, when it must end (with the 13h-vs-15h split-rest case computed inside the platform), and how many minutes are left. This is the answer to "when must this driver stop?" — the question that actually triggers a re-plan.
  • Counter-style budgets. Driving extensions, daily rest reductions, weekly rest reductions — exposed as "X of Y remaining" with an active flag where it matters.
  • A current activity object. What the driver is doing right now (Driving / Break / Available / Other Work / Unknown), when they entered that state, and how long they've been there.
  • Compensation debt as a number. When a driver has taken a reduced weekly rest, they owe time. We expose the debt in minutes so a planner sees it before promising the driver a 24-hour rest that isn't legal.
  • A structured provenance record. Per response, which sources contributed which slice of the timeline, where the platform fell back, and a confidence read at the field level.
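Put together, a response with the fields above might look like this. The field names are invented for this sketch, not the actual v2.0 schema, but the shape is the one described:

```python
# Illustrative response shape; field names are made up for this sketch.
response = {
    "capacities": [
        {"kind": "next_drive",     "remaining_min": 165,  "limit_min": 270,  "pct_used": 39},
        {"kind": "daily_drive",    "remaining_min": 255,  "limit_min": 540,  "pct_used": 53},
        {"kind": "weekly_drive",   "remaining_min": 1010, "limit_min": 3360, "pct_used": 70},
        {"kind": "biweekly_drive", "remaining_min": 2300, "limit_min": 5400, "pct_used": 57},
        {"kind": "weekly_work",    "remaining_min": 1500, "limit_min": 3600, "pct_used": 58},
    ],
    "shift_window": {
        "started_at": "2026-05-06T05:02:00Z",
        "must_end_by": "2026-05-06T18:02:00Z",   # 13h window; 15h in the split-rest case
        "minutes_left": 528,
    },
    "budgets": {
        "driving_extensions":    {"remaining": 1, "of": 2, "active": False},
        "daily_rest_reductions": {"remaining": 2, "of": 3},
        "weekly_rest_reductions": {"remaining": 0, "of": 1},
    },
    "current_activity": {"state": "Driving", "since": "2026-05-06T08:40:00Z",
                         "duration_min": 34},
    "compensation_debt_min": 1260,   # a reduced weekly rest left 21h owed
    "provenance": {},                # structured lineage, covered below
}
```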


Real-time, always: the best-available answer, every time you ask

The non-obvious property of the new layer is the one that took longest to get right: a planner can ask at any moment, and the platform produces the best estimate it can make as of that moment. There is no "wait for the next snapshot" failure mode.

The platform maintains several internal computation paths, ranked from highest fidelity (a fresh, complete provider snapshot) to lowest (a conservative reconstruction from movement events alone). When you ask, it picks the highest-fidelity path the available data supports for that driver at that moment — and if the primary path is missing a field, it backfills from a lower-fidelity path rather than discarding the whole answer.
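The "backfill from a lower-fidelity path rather than discarding the whole answer" rule is the crux, and it fits in a few lines. A minimal sketch, assuming each candidate path is a dict of field name to value, ordered best-first, with None for fields that path cannot compute:

```python
def resolve(paths: list[dict]) -> dict:
    """Merge candidate answers ranked from highest fidelity to lowest.

    The best path wins per field; a field the best path is missing is
    backfilled from the next path down. The response shape therefore
    never changes, whichever path happened to be available.
    """
    merged: dict = {}
    for path in paths:
        for field, value in path.items():
            if field not in merged or merged[field] is None:
                merged[field] = value
    return merged

# A fresh snapshot missing the shift window field, plus an activity
# history that disagrees slightly on daily drive but has the window:
snapshot = {"daily_drive_left": 255, "shift_window_left": None}
history  = {"daily_drive_left": 243, "shift_window_left": 528}
best = resolve([snapshot, history])
# The snapshot wins where it has data; the shift window is backfilled.
```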

The user-visible consequences:

  • The freshest answer always wins. Output reflects state up to the request time, not the last convenient input.
  • Partial data is still useful. Missing a piece of the truth doesn't return null; it returns a conservative best estimate, marked accordingly.
  • The shape never changes. A planner's UI doesn't need to special-case "I have a snapshot today, only history tomorrow" — it gets the same fields regardless of which path was taken.
  • Conservatism is a deliberate property of the lowest-fidelity paths. A planner told "30 minutes" who actually has 50 will not be upset. The reverse might end a contract.


Harmonization: one response shape, every fleet, every provider

For a forwarder running a mixed fleet, the surface area of "different providers behave differently" is where most of the integration cost has historically lived. The new layer absorbs that surface area into the platform.

  • A planner sees the same fields for the truck on a top-tier OEM feed and the truck on a third-party tracker. No two-pane UI, no "this carrier doesn't support X".
  • Subcontracted vehicles look the same as own vehicles. Same shift window, same capacities, same lineage record. The chartering teams we work with described this as "pure gold".
  • A logtech partner integrates once. Their TMS or planning tool doesn't ship a "Volvo adapter" and a "FleetBoard adapter". They ship one consumer of the CO3 surface.
  • Edge cases are computed inside the platform, not by every consumer. Split daily rest, reduced weekly rests, compensation debt — consumers don't reimplement EU 561, they read its outputs.


Showing our work: the audit lens

The thing we are proudest of — and the thing we did not see in any competing API surface — is the auditability of every drive time response. Internally we run a tool, the Tacho Audit Analyzer, that takes any vehicle and any "as-of" timestamp and produces a complete, human-readable account of how the platform arrived at its answer. The same provenance ships in the API response, structured.

What it shows per request:

  • A plain-English decision summary. Which computation path was selected and why, how many input rows informed it, and what time window the platform had contiguous coverage over.
  • Field-level confidence. Each capacity, budget, and shift field carries a fidelity tier (HIGH / MEDIUM / LOW) and a numeric confidence score.
  • Resolved regulatory limits. The exact maxima the platform applied so an auditor can see we're computing against the right rule set.
  • The lineage table. Every contiguous slice of timeline that contributed to the final answer, with data source type, provider name, recorded-at timestamp, and the time window it covers.
  • What was considered but not chosen. When the platform had multiple candidate inputs, we record which was selected, why, and what was dropped.
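As a sketch of what a lineage table might look like as data — source types, provider names, and field names are all invented here — together with the gap check that the "contiguous coverage" claim in the decision summary rests on:

```python
# Illustrative lineage for one response. Each entry is a contiguous
# slice of timeline that contributed to the final answer.
lineage = [
    {"source_type": "tacho_snapshot",    "provider": "oem_feed",
     "recorded_at": "2026-05-06T08:45Z",
     "window": ("2026-05-05T19:00Z", "2026-05-06T08:45Z")},
    {"source_type": "telematics_replay", "provider": "gps_tracker",
     "recorded_at": "2026-05-06T09:13Z",
     "window": ("2026-05-06T08:45Z", "2026-05-06T09:14Z")},
]

def contiguous(entries: list[dict]) -> bool:
    """True when consecutive slices share a boundary, i.e. the platform
    had gap-free coverage over the whole reported window."""
    return all(a["window"][1] == b["window"][0]
               for a, b in zip(entries, entries[1:]))
```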

A few use-cases this unlocks beyond a planner's confidence in a single number:

  • Provider scorecards. Which TSP delivers complete snapshots vs. which ones force backfill? You can't run that scorecard if you don't write down what each provider gave you.
  • Auditor / compliance defensibility. If a violation question lands on a forwarder's desk, the lineage record is a contemporaneous account of what the platform knew and when.
  • Partner debugging without a screen-share. When a logtech partner sees an unexpected number, they look at the lineage themselves — no ticket needed.

We think hiding lineage is a category default that's wrong. A drive time number without lineage is a number we cannot defend.


The compliance edge cases we actually compute

Every drive time vendor demo goes well until someone asks about split rest. The cases the platform handles today, so consumers don't reimplement them:

  • Split daily rest (3h + 9h). Recognised as a valid full daily rest, triggers the 15-hour spread-over on the shift window.
  • Daily rest reductions (11h → 9h). Counted in the daily rest reduction budget. Reset on every qualifying weekly rest.
  • Weekly rest reductions (45h → 24h). Compensation debt computed and surfaced as minutes owed.
  • Driving extensions (9h → 10h). Counted with an active flag for whether the current shift is using one.
  • Night shift trigger (00:00–04:00 work → 10h work cap). Adapted for country-specific time periods.
  • Cabotage and cross-trade support (bilateral use cases not yet covered).
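The compensation-debt case from the list above is simple enough to show in full. A sketch under the EU 561/2006 figures (45h full weekly rest, 24h floor for a reduced one); the function name is ours, not the platform's:

```python
FULL_WEEKLY_REST_MIN = 45 * 60
REDUCED_FLOOR_MIN = 24 * 60

def compensation_debt(actual_rest_min: int) -> int:
    """Minutes owed after a reduced weekly rest (45h reduced to as low as 24h).

    Under EU 561/2006 the shortfall must be compensated en bloc, attached
    to another rest of at least 9h, by the end of the third following
    week. This sketch computes only the shortfall itself.
    """
    if actual_rest_min >= FULL_WEEKLY_REST_MIN:
        return 0
    return FULL_WEEKLY_REST_MIN - max(actual_rest_min, REDUCED_FLOOR_MIN)
```

A 24-hour weekly rest leaves 21 hours (1,260 minutes) owed — the number a planner needs to see before promising the driver a 24-hour rest that isn't legal.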

If you're a logtech platform building on top: what the response contains is computed inside the domain. There is no "client must combine field A and field B" gotcha. If the platform can't compute a field correctly, it returns null and the lineage record explains why.

What we learned

A few principles that fell out of building this:

  • Hide complexity, but not provenance. Customers can absorb "this answer used a fresh snapshot plus five minutes of replay". They cannot absorb "the answer is wrong, sorry". Provenance is not a feature for power users; it is the precondition for trust.
  • The right unit of fallback is the field, not the response. When a primary input is missing one piece, fill that piece from a secondary source — don't discard the rest.
  • Measure yourself before customers do. Running parallel paths and comparing is cheap relative to the cost of being wrong.
  • Conservatism is a feature in the lowest-fidelity path. The lower on the fidelity ladder, the more the answer should under-estimate what's left.

Try it

If you're a freight forwarder running a mixed fleet, the API is live in v2.0 today. If you're a logtech platform — TMS, planning tool, dispatcher app — and want to build on top of an EU-561-faithful drive time model, talk to us.