
Pharmaceutical Track and Trace: An Infrastructure Problem Disguised as a Compliance Problem

The WHO's 10% falsification estimate is a supply chain architecture failure, not a regulatory shortfall.

December 18, 2025 · Ryan Pehrson

Key Takeaways

  • Ten percent of medicines in low- and middle-income countries may be substandard or falsified. That statistic is not a regulatory failure — it is an infrastructure architecture failure
  • Centralized versus distributed serialization architecture is not a technical preference — it is a sovereignty question that determines which countries can participate in the system
  • EPCIS 2.0 and GS1 standards are necessary but not sufficient. The last-mile verification problem in resource-constrained environments requires design choices that standards bodies don't specify

The World Health Organization estimates that up to 10% of medicines circulating in low- and middle-income countries are substandard or falsified. That number gets cited often in pharmaceutical policy discussions, usually as evidence that stricter regulation is needed. But the framing is wrong, and the misframing matters. Countries with the highest falsification rates don’t lack regulation. They lack the infrastructure to enforce it. The gap is architectural, not legislative — and closing it requires making different infrastructure choices than the ones that have dominated the past decade of track-and-trace investment.

This matters because pharmaceutical serialization is one of the few supply chain domains where a technical architecture decision is simultaneously a public health decision and a geopolitical decision. Getting the architecture wrong doesn’t just produce bad software. It produces systems that work in Hamburg and fail in Nairobi, and then policy frameworks that treat the Nairobi failure as a governance problem rather than a design problem.

Why the Compliance Frame Fails

The standard narrative treats pharmaceutical track and trace as a compliance challenge. Regulators set serialization mandates. Manufacturers build systems to meet them. Distributors integrate. Pharmacies scan. The chain closes. The narrative is linear and easy to follow, and it describes what actually happened in the United States with DSCSA and in the EU with the Falsified Medicines Directive. Both are reasonably successful by the metrics they were designed to optimize for: manufacturer compliance rates, serialized units in circulation, verified dispense events at licensed pharmacies.

But those metrics measure the easy part. They measure the behavior of large, well-resourced actors operating inside regulated environments with reliable connectivity and economic incentives aligned toward compliance. They say nothing about what happens at the edge of the supply chain — the informal market, the cross-border trade route, the rural pharmacy with a 2G connection and a generator that runs four hours a day. That edge is where falsification actually happens, and it is where compliance-frame systems break down.

The infrastructure problem is more fundamental: track-and-trace works as a verification system only if the verification infrastructure is present at the point of dispensing. You cannot verify a serialized unit without a working scan device, a live or recently synced database, and a process for acting on a negative verification result. Remove any one of those three, and the system quietly degrades into a label: a barcode that signals intended authenticity without providing any mechanism to test it.

The Architecture Decision That Determines Everything Else

Every pharmaceutical track-and-trace system makes a foundational choice that shapes everything downstream: where does the serialization data live, and who controls it? The answer falls somewhere on a spectrum from fully centralized to fully distributed, and the position on that spectrum is not a technical preference. It is a sovereignty question.

Centralized architectures concentrate verification logic and transaction history in a single repository, typically operated by a national medicines authority or a contracted third-party platform. The operational advantages are real: consistency is straightforward, deduplication is tractable, regulatory reporting is a query rather than an aggregation exercise. The EU’s EMVS repository architecture and the U.S. DSCSA verification router model both lean centralized, and both function well within their designed operating contexts. The problem emerges when you ask who can participate in a centralized system at the level of sovereignty that cross-border pharmaceutical trade requires. If the repository is operated by a foreign vendor, or if verification requires a live API call to a data center located outside the country, then the country’s ability to enforce its own medicines regulations is structurally contingent on a third-party relationship it may not be able to sustain.

Distributed architectures push serialization records closer to the actors who generate them, using shared protocols rather than shared infrastructure to achieve interoperability. The transaction history lives with manufacturers, importers, and distributors as a graph of custody transfers, with verification happening by querying the relevant node rather than a central repository. This model is technically harder — eventual consistency, deduplication across jurisdictions, and reconciliation logic are genuinely complex problems. But it is the architecture that allows a country to participate in global pharmaceutical trade without ceding operational control of its verification infrastructure.

The practical answer for most markets is neither extreme. Hybrid models — centralized verification indexes with distributed transaction records — have shown the most resilience in practice. The index enables fast verification against a known-good database; the distributed records provide the audit trail and survive repository failures. The constraint that should govern every architecture decision: the verification path for a medicine dispensed in Kampala should not require a live connection to a server in Amsterdam. Whatever architecture is chosen, it must be operable at the country level, by the country’s own institutions, with the country’s own technical capacity.
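To make the hybrid model concrete, the sketch below shows a verification path that consults a locally hosted index first and falls back to the distributed custody record only when connectivity allows. The function names, the staleness window, and the node client are illustrative assumptions, not any existing national system's API.

```python
# Minimal sketch of a hybrid verification path: a locally hosted verification
# index backed by distributed custody records. All names (verify_pack,
# node_client.lookup) and the staleness window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

MAX_INDEX_AGE = timedelta(hours=72)  # acceptable staleness for the local index

def verify_pack(serial: str, local_index: dict, node_client=None) -> dict:
    """Verify a serialized pack against the national index first, falling back
    to the distributed custody record only when a node client is reachable."""
    entry = local_index.get(serial)
    now = datetime.now(timezone.utc)

    if entry is not None:
        age = now - entry["last_synced"]
        return {
            "status": entry["status"],        # e.g. "active", "decommissioned"
            "source": "local_index",
            "stale": age > MAX_INDEX_AGE,     # flag stale data, never hide it
            "index_age_hours": round(age.total_seconds() / 3600, 1),
        }

    # Unknown serial: consult the custody graph if connectivity allows,
    # otherwise return an explicit "unverifiable" result for the dispenser to act on.
    if node_client is not None:
        return {"status": node_client.lookup(serial), "source": "custody_node", "stale": False}
    return {"status": "unverifiable_offline", "source": None, "stale": True}
```

The design point is the explicit "unverifiable" outcome: a dispenser should always be able to distinguish "this pack is bad" from "the system could not answer."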

Sub-Saharan Africa as Architectural Stress Test

Sub-Saharan Africa’s pharmaceutical supply chain challenges show exactly what the architecture choices that look neutral in Hamburg cost in Kampala. The region accounts for a disproportionate share of the global falsified medicine problem, not because enforcement is uniquely weak, but because the structural preconditions for verification-based enforcement are frequently absent.

Consider the distribution chain for a typical medicine dispensed at a facility in rural Kenya or Uganda. It may have crossed three international borders and passed through an importer, a national distributor, a regional wholesaler, and a last-mile distributor before reaching the dispensing point. Each custody transfer is a potential entry point for falsification, and each transfer happens under a different regulatory jurisdiction with its own serialization requirements, if any. Jomo Kenyatta International Airport in Nairobi processes pharmaceutical imports from forty-plus countries of origin. The East African Medicines Regulatory Harmonization initiative has made progress toward regional alignment, but national agencies still operate largely independently, with different serialization timelines and different technical requirements.

The practical consequence is a verification gap at exactly the point where verification matters most. Large private-sector pharmacies in Nairobi or Dar es Salaam have the connectivity, the device infrastructure, and the trained staff to operate verification at the point of dispense. Mission hospitals in rural Uganda often have none of these. A track-and-trace system designed with the Nairobi pharmacy as the model user will produce adequate verification rates in aggregate while systematically failing the populations with the highest exposure to falsified medicines.

This is not an argument against deploying pharmaceutical track and trace in sub-Saharan Africa. It is an argument that the architecture must be designed for the rural dispensing point as the primary constraint, not as an edge case to be handled later. Systems designed for the hardest operating environment degrade gracefully when deployed in easier ones. Systems designed for the easy environment fail silently in the hard one.

Where Standards End and Architecture Begins

GS1 and EPCIS 2.0 solved real problems. A common serialization vocabulary, a shared event model for custody transfers, and interoperable identifiers that work across manufacturer, distributor, and pharmacy systems are what make global interoperability possible at all. The standards are not the problem.
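For orientation, a custody-transfer event in the shape of an EPCIS 2.0 ObjectEvent might look roughly like the sketch below, expressed here as a Python dictionary. The company prefix, serial number, and location identifiers are invented for illustration; the normative field set and vocabulary live in the GS1 EPCIS 2.0 and CBV specifications.

```python
# Illustrative custody-transfer event shaped like an EPCIS 2.0 ObjectEvent
# (JSON binding). The GS1 company prefix, serial, and location identifiers
# below are fictitious; consult the GS1 EPCIS 2.0 spec for the normative fields.
shipping_event = {
    "type": "ObjectEvent",
    "action": "OBSERVE",
    "bizStep": "shipping",                    # CBV business step
    "disposition": "in_transit",
    "eventTime": "2025-11-03T08:15:00Z",
    "eventTimeZoneOffset": "+03:00",
    "epcList": [
        # SGTIN URN: company prefix, item reference, serial number (all fictitious)
        "urn:epc:id:sgtin:0614141.107346.2018"
    ],
    "readPoint": {"id": "urn:epc:id:sgln:0614141.00777.0"},    # where the event was captured
    "bizLocation": {"id": "urn:epc:id:sgln:0614141.00888.0"},  # where the goods are afterwards
}
```

Notice what the event does not say: nothing here specifies how the record is verified at dispense, what happens when the verifier is offline, or how a mismatch should be handled. That is where architecture begins.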

The problem is the widespread assumption that adopting the standards is sufficient — that an EPCIS-compliant system is, by definition, a system that will work. Standards specify what data looks like and how it is structured. They do not specify what happens when the internet is unavailable, how verification degrades in low-bandwidth environments, what the acceptable false-positive rate is when flagging a medicine as potentially falsified, or how a pharmacy with no technical staff is supposed to act on an ambiguous verification result.

Offline-first verification is the design requirement standards bodies avoid because it forces them to specify behavior their members don’t want to implement. A system that requires live connectivity to verify a medicine is not a verification system in environments where live connectivity is intermittent. The architecture must support a locally-cached verification database that can operate disconnected, sync when connectivity is available, and flag stale data with a timestamp rather than silently returning false positives. This is a solvable engineering problem. It is solved regularly in other domains — point-of-sale systems, mobile health applications, election management software. It requires someone to choose it explicitly — and compliance-framing organizations reliably choose not to.
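A minimal sketch of that offline-first pattern follows, assuming a SQLite-backed local cache that merges batches pulled during connectivity windows and always reports how old its answer is. The table layout and the sync payload shape are illustrative assumptions, not a reference implementation.

```python
# Sketch of an offline-first verification cache with periodic sync against a
# national repository. Persistence via SQLite; the sync payload shape is a
# hypothetical placeholder.
import sqlite3
from datetime import datetime, timezone

def open_cache(path: str = "verification_cache.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS packs (
        serial TEXT PRIMARY KEY, status TEXT, last_synced TEXT)""")
    return conn

def apply_sync_batch(conn, records, synced_at=None):
    """Merge a batch of records pulled during a connectivity window."""
    synced_at = synced_at or datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT OR REPLACE INTO packs (serial, status, last_synced) VALUES (?, ?, ?)",
        [(r["serial"], r["status"], synced_at) for r in records])
    conn.commit()

def verify_offline(conn, serial):
    """Answer from the local cache only; always report how old the answer is."""
    row = conn.execute(
        "SELECT status, last_synced FROM packs WHERE serial = ?", (serial,)).fetchone()
    if row is None:
        return {"status": "unknown_to_cache", "last_synced": None}
    return {"status": row[0], "last_synced": row[1]}  # caller applies the staleness policy
```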

The false-positive problem is where the tension between counterfeit interdiction and patient access becomes sharpest, and it is, as anyone who has sat in a regulatory working group knows, the hardest argument to resolve. A verification system that flags too many legitimate medicines as potentially falsified creates access barriers: pharmacists who turn patients away, medicines that sit in supply chains pending resolution, populations that stop trusting the verification process entirely. A system that is too permissive allows falsified medicines to pass. The right threshold is context-specific: it requires empirical data about the actual falsification rate in a given market, and it should be revisited as the system matures. That this calibration problem exists is not a reason to avoid deploying track and trace. It is a reason to deploy it with explicit threshold governance built in from the start.
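A quick back-of-envelope calculation shows why the threshold matters. The numbers below are illustrative assumptions, not measured rates for any market; the point is the base-rate effect, which holds regardless of the exact figures.

```python
# Base-rate check: with low falsification prevalence, even a small
# false-positive rate means most flags are false alarms.
prevalence = 0.01            # assumed share of packs that are actually falsified
sensitivity = 0.95           # assumed chance a falsified pack is flagged
false_positive_rate = 0.02   # assumed chance a legitimate pack is flagged

flagged_true = prevalence * sensitivity
flagged_false = (1 - prevalence) * false_positive_rate
precision = flagged_true / (flagged_true + flagged_false)

print(f"Share of flags that are genuine falsifications: {precision:.0%}")
# With these assumptions, roughly one flag in three is genuine, which is why the
# threshold, and the process for resolving flags, needs explicit governance.
```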

Data Sovereignty and the Governance Layer

Track-and-trace transaction records are a byproduct of the verification process, but they are not a negligible byproduct. A complete record of custody transfers for all medicines dispensed in a country is, in aggregate, a map of that country’s pharmaceutical supply chain, its prescribing patterns, its disease burden, and its healthcare utilization. The question of who has access to that data, under what conditions, and for what purposes is not a technical detail. It is a governance decision with significant national security, public health, and commercial implications.

Proprietary platforms operated by foreign vendors have a structural interest in aggregating this data, and the contractual protections against that aggregation are frequently inadequate. National medicines authorities that have deployed track-and-trace on vendor-hosted infrastructure often discover, several years in, that the data they need to do pharmacovigilance — to detect emerging quality problems, to identify distribution anomalies, to understand their own medicine supply — is held by a vendor whose cooperation they depend on.

The architecture decision and the data governance decision are not separable. A distributed architecture with national node operators provides a structural basis for data sovereignty that a centralized vendor-hosted architecture cannot replicate by contract alone. This does not mean national infrastructure is always the right choice — the operational capacity required to sustain national infrastructure is itself a constraint. But it means the data governance question must be part of the architecture decision, not a downstream policy question addressed after the infrastructure is deployed.

What Actually Works

Programs that have actually closed the falsification gap share design choices that don’t appear in standards documents. They’re built for the rural dispensing point, not the urban pharmacy. Offline operation is first-class, not an exception mode — cached data, sync windows, explicit staleness timestamps. False-positive thresholds are set before deployment with a governance process attached, not treated as a property the system will have by default. And they run on open standards with national operational control, not vendor-hosted infrastructure that turns data sovereignty into a contract negotiation.

But the most underrated input is stakeholder engagement that reaches past the large manufacturers and national medicines authorities to the distributors who actually move medicines, the pharmacists who actually dispense them, and the community health workers who are often the last point of contact before a medicine reaches a patient. Treat that engagement as a design input, not a consultation exercise. The people who know where the supply chain actually breaks are not in the room where systems are designed. Getting them into that room is not a process requirement. It is an accuracy requirement.

Synthesis

The 10% falsification estimate is not inevitable. It is a consequence of architecture decisions — about where data lives, who controls it, how verification degrades when infrastructure is unreliable, and whose operating environment is treated as the design baseline. Regulatory mandates can accelerate deployment of the wrong architecture as efficiently as they can accelerate deployment of the right one.

The countries that solve this problem will not do so by adopting stricter regulation. They will do so by making infrastructure choices that place verification capability at the last mile, that preserve national operational sovereignty, and that treat offline-first design and false-positive governance as engineering requirements rather than policy aspirations. None of this shows up in a serialization compliance audit. It is also the only set of choices that will actually close the gap.
