When the Numbers Serve Power
Economic statistics present themselves as if they were neutral instruments, an unfiltered record of the world. They arrive as precise numbers — growth rates, employment figures, inflation percentages — and their authority depends on the assumption that they are produced without interference, according to consistent methods, insulated from the pressure of immediate political advantage.
They are designed to provide a shared frame of reference for policymakers, markets, and citizens, a way of seeing the same underlying conditions even when interpretation differs. That shared frame is not inevitable. It rests on a specific arrangement of institutions, incentives, and norms, and it can be dismantled.
The structure of this arrangement is deliberate. Statistical agencies operate within government but apart from its partisan machinery. Their work is carried out by career specialists who apply established definitions and techniques to data collected through surveys, administrative records, and other channels. Leadership, though appointed by political authorities, is expected to act as a custodian rather than a political agent.
The purpose of this design is not only to protect the accuracy of the numbers but to preserve their legitimacy as a common reference. The numbers may be imperfect; the process must be beyond suspicion.
This separation has deep roots. The creation of economic statistics has often been tied to moments when governments needed new ways to see and manage their economies. In the seventeenth century, William Petty’s “Political Arithmetick” measured England’s resources for war. During the Great Depression, Simon Kuznets’s national income accounts helped direct federal policy against collapse.
The invention of GDP during the Second World War enabled the Allies to coordinate and manage resources for the conflict. In each case, a statistical lens was built to fit the structure and demands of the moment, and in each, the legitimacy of the figures was crucial to their purpose.
The statistical frameworks in use today were shaped in the mid-twentieth century, when economies were dominated by manufacturing, wage labour, and market transactions. GDP measures output at market prices; productivity measures the value produced per hour worked; price indices track changes in a fixed basket of goods.
These measures were well suited to the economies that created them, but their fit is less certain in a digital economy where much value creation occurs outside formal markets, where unpaid labour and free services replace priced ones, and where corporate profits are tied to control over data and service ecosystems rather than physical production.
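To make the fixed-basket idea concrete, here is a minimal sketch of a Laspeyres-style price index. Every item, quantity, and price is hypothetical, and real price indices rest on expenditure surveys, thousands of items, and adjustments this toy omits; it is an illustration of the logic, not a description of any agency’s method.

```python
# Toy illustration of a fixed-basket (Laspeyres-style) price index.
# All quantities and prices are hypothetical; real indices use thousands
# of items, survey-based weights, and adjustments this sketch omits.

# Fixed basket: item -> quantity bought in the base period
basket = {"bread": 52, "petrol": 600, "rent": 12, "broadband": 12}

# Prices per unit in the base period and in the comparison period
base_prices = {"bread": 1.20, "petrol": 1.50, "rent": 900.0, "broadband": 30.0}
new_prices = {"bread": 1.35, "petrol": 1.65, "rent": 960.0, "broadband": 30.0}

def basket_cost(prices: dict[str, float], quantities: dict[str, int]) -> float:
    """Cost of buying the fixed basket at the given prices."""
    return sum(prices[item] * qty for item, qty in quantities.items())

# The index is the cost of the fixed basket at new prices relative to its
# cost at base prices, scaled so the base period equals 100.
index = 100 * basket_cost(new_prices, basket) / basket_cost(base_prices, basket)
print(f"Price index: {index:.1f} (base period = 100)")
```

The fixed basket is the crux: a free service that replaces a priced one simply drops out of the calculation, which is one way the lens described above loses resolution in a digital economy.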
This misalignment is more than a technical problem. When the categories no longer match the structure of the economy, the picture they produce becomes less comprehensible. Growth may appear steady even as many households experience stagnation or decline. Employment may be counted as strong even if the quality of work is worsening.
In such conditions, the distance between statistical record and lived experience grows, eroding trust. That erosion is amplified when political leaders, rather than working to improve the measures, question their validity or seek to alter the institutions that produce them.
The temptation to do so is built into the structure of political incentives. Economic indicators are also performance indicators; they can be used to claim success or assign blame. A favourable number can be amplified as evidence of competence; an unfavourable one can be challenged as misleading or wrong. In stable periods, the insulation of statistical agencies holds, preventing such pressures from distorting the process. In more polarised environments, that insulation becomes a target.
One of the most direct ways to break it is to remove the leadership of a statistical agency in response to data that reflects poorly on those in power. Even if the professional staff remain in place and continue their work according to established methods, the act signals that measurement itself is politically contingent. The lens through which the economy is seen can be changed by changing the person responsible for it.
Once this precedent is set, the authority of the numbers is diminished. The next release will be read not only for what it says about the economy but for what it suggests about the political loyalties of those producing it.
The normal processes of statistical revision illustrate how fragile this trust can be. Initial figures are based on incomplete data; as additional information arrives and seasonal adjustments are refined, the numbers are revised. This is a feature of the system, a means of balancing timeliness with accuracy. But the technical nature of these changes makes them easy to recast as signs of bias or deliberate manipulation.
Downward revisions to earlier months’ job growth, for example, can be portrayed as politically motivated even when the same processes have produced similar revisions under different governments. The complexity of the methods becomes a liability in the face of a simpler, more politically useful story.
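The mechanics behind such revisions are easy to show in miniature. The sketch below, with entirely invented figures, estimates a total payroll change from the firms that have responded by each release date and scales it up to the full population; actual programmes also weight respondents, adjust for seasonality, and benchmark against administrative records, so this is an assumption-laden toy, not any agency’s procedure.

```python
# Toy illustration of why preliminary estimates get revised: early releases
# rest on partial response, later releases on fuller data. Figures invented.
import random

random.seed(2)

# Imagine 1,000 firms whose true combined payroll change is the target.
true_changes = [random.gauss(50, 400) for _ in range(1_000)]
true_total = sum(true_changes)

def estimate(responses_received: int) -> float:
    """Scale the firms that have reported so far up to the full population."""
    reported = true_changes[:responses_received]
    return sum(reported) * len(true_changes) / responses_received

# First release: only 60% of firms have responded by the deadline.
preliminary = estimate(600)
# Later releases: late responses arrive and the estimate is revised.
second = estimate(850)
final = estimate(1_000)

print(f"Preliminary: {preliminary:,.0f}")
print(f"Revised:     {second:,.0f}")
print(f"Final:       {final:,.0f}  (true total: {true_total:,.0f})")
```

Run with different seeds, the gap between preliminary and final figures moves in both directions, which is the point: revision is a property of the collection process, not a verdict on the people running it.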
When this kind of politicisation takes hold, the consequences extend beyond a single figure or agency. Markets that doubt the reliability of official data price in greater uncertainty, raising costs for governments and businesses alike.
Policymakers who question their instruments risk making poorer decisions. Citizens who no longer trust the numbers lose one of the few shared reference points available for judging policy and understanding collective conditions. Without that shared ground, measurement becomes another site of partisan struggle, and the possibility of coordinated action diminishes.
In this vacuum, private data sources gain prominence. Corporations already collect vast, detailed information about economic activity, often more quickly than public agencies can. But their incentives differ: the data is gathered for commercial purposes, not as a public good. Access may be restricted, coverage uneven, and methods opaque. The shift from public to private measurement fragments the informational basis of economic governance, replacing a common lens with multiple, proprietary ones.
The danger when politicians make the statistics fit their purposes is therefore not only that the numbers will be wrong in the short term. It is that the underlying system for producing credible, shared measures of the economy will weaken, and with it, the capacity for society to act on a common understanding of its condition. The more the statistical lens is treated as an instrument of political advantage, the more it ceases to function as a lens at all. What remains is not a dispute over the meaning of the figures, but a deeper contest over the authority to decide which figures are recognised and which are discarded.
The history of economic statistics shows that measurement systems can and do change to reflect new realities. But it also shows that such changes require institutional stability, political will, and public trust. Without these, the shift is unlikely to be deliberate or coherent. Instead, the space left by a discredited public framework will be filled by competing claims, each backed by selective evidence, each aligned with a different political or commercial interest.
At that point, the nature of the economy itself becomes secondary to the struggle over its representation. The numbers cease to be a shared description of reality and become weapons in a contest to define it. And once reality is contested at that level, the question is no longer what the economy is doing, but who has the power to say.