Beyond the Rocket: What Our Innovation Metrics Miss
- Paul Egglestone

- Dec 16, 2025
- 7 min read

If the first rocket in this series was about the myth of innovation, this one is about the metrics.
In the new image, the rocket looks ready to launch, but it never quite makes contact with the world beneath it.
That’s more or less how we measure innovation.
We track the moment of lift-off: R&D spend, patent counts, start-up formation, licences signed. But our numbers get fuzzy right at the point where innovation is supposed to matter: when it changes how care is delivered, how power flows, how classrooms work, how people’s lives actually go.
This post is about that gap between rocket and ground line—and how our metrics got stuck there.
What we count because it’s easy
If you look at most national innovation dashboards, annual reports and strategy documents, the same indicators show up again and again:
R&D expenditure
Patents filed or granted
Number of spin-outs and start-ups
Licensing income and IP deals
All of these do tell us something. R&D spend and patents can indicate knowledge creation and the intensity of inventive activity. Patent and IP statistics are an entire sub-field for a reason.
They’re also wonderfully neat:
they’re countable (how many patents, how much R&D),
they’re attributable (to a university, company, or agency),
they fit tidily into existing reporting systems.
That neatness has consequences. Over time, the metrics we can easily count start to define what innovation is. If it doesn’t look like a patent, a research project or a spin-out, it risks disappearing from the story.
The trouble is that even the official guidance we rely on for measurement tells a more complicated story.
What the Oslo Manual actually says (if you read past the tables)
The OECD’s Oslo Manual is the global reference book for innovation statistics. Governments quietly use it to design surveys, frame policies and decide what counts as “innovation” in official data.
Two bits are worth pulling out.
First, the Manual’s core definition is already much broader than the way it’s usually discussed. It defines a business innovation as a new or improved product or business process that differs significantly from what a firm did before and has been introduced on the market or brought into use by the company.
That last phrase matters: brought into use.

Innovation isn’t just an idea or an invention. It only exists, in the OECD’s own words, once it’s being used.
Second, the earlier versions of the Manual explicitly expanded the field beyond “high tech” gadgets.
The 2005 edition recognises four types of innovation: product, process, marketing and organisational. It notes that this was the first time non-technological innovation (things like new organisational methods or service models) had been systematically included in the framework.
So on paper:
Innovation includes new business models, workflows, organisational changes and service designs.
Innovation only really “counts” once it’s adopted—when someone actually uses it.
Yet in practice, our indicator sets still cling to the rocket on the launchpad: R&D, IP, firm formation. We take a measurement at the moment of lift-off and then look away. Even when we stay with patents, the evidence is more awkward than the charts suggest.
Patents and productivity: a messy relationship
One reason patents became such a dominant proxy for innovation is that they’re visible. We can count them, classify them, map them onto sectors and regions. That has made them attractive to economists and policymakers looking for something that feels “objective.”
But there’s a long literature warning against treating patent counts as straightforward indicators of innovation performance or economic impact.
Reviews for the European Commission have highlighted the methodological difficulties in using patents as innovation indicators, and the fact that the link between patenting and productivity growth is complex and context-dependent.
Empirical studies show that many innovations are never patented, and that the propensity to patent varies hugely between industries and firm sizes. Patents capture some kinds of knowledge (codified, protectable, suited to certain legal regimes) and miss others entirely.
When Australia’s Productivity Commission reviewed the national IP system, it concluded that the arrangements had “swung too far in favour of rights holders,” and that the innovation system had, in effect, become over-reliant on formal IP even when it was not the most efficient route to diffusion or productivity gains.
From a measurement point of view, that means two things:
High patent activity doesn’t guarantee real-world improvements. You can have patent thickets, defensive filings and rent-seeking behaviour that actually inhibit diffusion and follow-on innovation.
Low patent activity doesn’t mean nothing is happening. In services, care, education and many digital domains, innovation is often embedded in practice, not IP.
So when we put patents on the y-axis and call it an “innovation score,” we’re often measuring what’s easy to see, not what’s actually changing.
The same critique applies, in different ways, to R&D inputs and spin-out counts. They tell us who is investing in potential rockets. They tell us almost nothing about whether those rockets ever land as functioning services.
To see that, we have to look at implementation.
The expensive, invisible part: implementation
In fields like healthcare, implementation science has grown up as its own discipline because people realised something awkward: getting from protocol to practice is hard, slow and expensive.
Recent reviews of implementation costs show that:
implementation strategies draw on multiple cost components (staff time, training, workflow redesign, data systems, supervision) that sit outside the neat budget lines of “buy tech / install tech”; these costs are often poorly measured or omitted altogether (see the sketch after this list);
systematic reviews of hospital decision-support systems find that implementation activities (integration with existing IT, change management, training) are a major share of total costs, and that failures are often due not to the underlying technology but to under-resourced implementation;
commentaries in the healthcare innovation literature explicitly describe implementation science as a “critical but undervalued” part of the ecosystem: essential to realising benefits, but marginal in funding and prestige.
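To make the cost-composition point concrete, here is a toy costing sketch in Python. The figures are made up for illustration and are not drawn from the cited reviews; the point is how much of the total can sit outside the “buy tech / install tech” lines.

```python
# Illustrative only: hypothetical cost lines for rolling out a clinical decision-support tool.
# None of these figures come from the cited reviews; only the composition matters here.
acquisition = {
    "software licence": 120_000,
    "hardware and hosting": 30_000,
}

implementation = {
    "staff time (clinical champions, super-users)": 90_000,
    "training and backfill": 60_000,
    "workflow and pathway redesign": 45_000,
    "integration with existing IT": 70_000,
    "monitoring, supervision and adjustment": 35_000,
}

total = sum(acquisition.values()) + sum(implementation.values())
implementation_share = sum(implementation.values()) / total

print(f"Total cost: ${total:,}")
print(f"Implementation share of total: {implementation_share:.0%}")
# With these made-up numbers, implementation is roughly two-thirds of the total,
# yet it is the part that rarely appears in headline innovation statistics.
```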

If you transpose that back to our rocket image, implementation is everything that happens between the rocket clearing the gantry and a reliable service being delivered on the ground:
redesigning shifts and roles so new tools actually fit,
rewriting protocols and pathways,
changing incentives and contracts,
monitoring and adjusting when reality doesn’t match the slide deck.
Almost none of that shows up in the headline innovation stats.
The same pattern shows up in energy and climate work, where the International Energy Agency and others increasingly stress that the binding constraints on transition are:
grid integration,
regulatory frameworks,
consumer adoption and behaviour,
financing and risk-sharing arrangements,
... not the existence of more experimental hardware in labs.
Yet the things we celebrate and measure are still skewed towards the lab side: technology announcements, pilot projects, gigawatts “in the pipeline.”
In other words: our metrics are hovering just above the implementation zone. They follow the rocket into the air and then stop. Before the hard part.
What’s missing: adoption, workflow, outcomes
If we took the Oslo definition seriously—innovation as something brought into use—our indicators would look very different:
We would care less about:
how much R&D was done on a digital tool,
how many patents were filed on a new battery chemistry,
how many start-ups were formed around a care-tech idea.
We would care more about:
how many wards, clinics, councils or neighbourhoods actually use the new approach;
whether frontline staff have changed their workflows;
how long it takes for a new practice to move from pilot to the “boring” business-as-usual budget line;
whether there are measurable changes in outcomes for people and places.
To make that concrete (a small measurement sketch follows these two examples):
In dementia care, innovation might look like a new way of organising shifts, environments and routines so people experience fewer distressing transitions—not a new app as such. The metrics would track fall rates, use of restraints, staff turnover and family-reported quality of life.
In community energy, it might look like a mutual model that changes how households share and govern surplus power across a feeder, not a new panel or battery. The metrics would track bill stability, participation rates, resilience during outages and local reinvestment.
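As a rough illustration of how a couple of these adoption-oriented measures could be tracked, here is a minimal sketch in Python. Everything in it is hypothetical: the site names, field names and dates are invented for illustration, and it is not a real dataset or an existing FASTlab tool.

```python
from datetime import date
from statistics import median

# Hypothetical rollout records for a new care practice across wards.
# Site names, field names and dates are invented for illustration only.
rollouts = [
    {"site": "Ward A", "pilot_start": date(2023, 3, 1), "routine_use": date(2024, 6, 1)},
    {"site": "Ward B", "pilot_start": date(2023, 5, 1), "routine_use": None},  # stalled at pilot stage
    {"site": "Ward C", "pilot_start": date(2023, 9, 1), "routine_use": date(2024, 11, 1)},
]

adopted = [r for r in rollouts if r["routine_use"] is not None]

# Adoption: the share of sites where the practice reached business-as-usual use.
adoption_rate = len(adopted) / len(rollouts)

# Time from pilot start to routine use, for the sites that got there.
days_to_routine = [(r["routine_use"] - r["pilot_start"]).days for r in adopted]

print(f"Adoption rate: {adoption_rate:.0%}")
print(f"Median pilot-to-routine-use time: {median(days_to_routine):.0f} days")
```

The point is not the code itself but the shape of the question: the unit of observation is a site that did or did not reach routine use, not a patent or a grant.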
Some of these things are harder to measure. They don’t reduce neatly to a single number. They force us into conversations about equity, place, institutional design and long timeframes.
But they are closer to what the Oslo Manual already says innovation is: new or improved products and processes that are brought into use to create value.
Right now, we often treat those as “implementation issues” and push them off the edge of the chart.
Why this matters for public money (and for FASTlab)
None of this is an academic quibble. The way we measure innovation:
shapes where public funding flows,
sets the KPIs for agencies and universities,
drives what gets written into grant applications and annual reports.
If success is defined as “more rockets launched”—more R&D, more patents, more spin-outs—then that is what systems will optimise for.
We’ll keep investing most heavily in the first 10–20% of the innovation journey, then be surprised when so many promising pilots never turn into reliable, equitable services.
For organisations like FASTlab, which spend most of their time in the “boring” middle (co-designing new practices, working through governance, connecting communities, clinicians, engineers and councils), this shows up as a structural mismatch:
The work we do is central to whether innovations land,
but it sits in the part of the process that current metrics don’t see.
That’s one reason we care so much about reframing the story—and why this second rocket in the series hovers where it does.
Bringing the rocket down

If Rocket 1 was about the myth (the idea that innovation mainly happens in universities and spin-outs), Rocket 2 is about the measurement system that keeps that myth alive.
We’ve built dashboards that are great at capturing:
money spent on research,
knowledge captured in patents,
ventures launched from campuses and labs,
deals signed over intellectual property.
We’ve invested far less in understanding:
whether those things are being adopted in practice,
how they are reshaping workflows, roles and relationships,
what they do to outcomes for people and places.
Our indicators hover just above the line where innovation actually meets reality.
If we want an innovation system that is more than a sequence of launches, we have to change what we count. That doesn’t mean throwing away patents or R&D statistics; it means putting them in their place—as early-stage inputs and partial proxies, not as the main scoreboard.
The next posts in this series will look at what’s currently out of frame: the innovations that never appear in our metrics at all, and the systemic work required to coordinate and ground them.
For now, Rocket 2 leaves us with a simple thought:
We don’t just miss outcomes. We miss entire categories of innovation because we stop measuring just before they land.




