This is a draft script of a talk I gave at the Cambridge Union as part of a debate on the motion: "this house believes we can put a value on human life". I had to abridge it quite a bit for the actual event, but I can't be bothered editing it here, and some people may be interested in a fuller articulation.
In Steven Spielberg’s cinematic masterpiece Jurassic Park, scientists clone dinosaurs from prehistoric DNA. It’s all wonderful until the dinos get free and start eating everyone. Jeff Goldblum has a lot of iconic lines in Jurassic Park, but the one that made him meme famous was: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”. I put it to you that the allegory of Jurassic Park and the logic of Goldblum’s dialogue apply when it comes to putting a numerical value on human life. We can, but we shouldn’t.
There are a few ways analysts approach determining the value of a life in economic analysis.
First, there are willingness to pay methods. These involve
eliciting, either through direct questions or some sort of ingenious survey,
how much money someone will pay to achieve an appreciable reduction in their risk of dying.
There are two related methods. In the labour market
approach, we look at the differences in pay between relatively more or less
risky jobs. How much ‘danger money’ do you get for working in a warzone, that
sort of thing.
And in the consumer preference approach, we look at how much
people pay in real market transactions for safety features like airbags. We can
then back out how much safety is worth to them.
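To make the arithmetic behind these approaches concrete, here is a minimal sketch of the 'value of a statistical life' calculation that analysts typically run. The implied_vsl helper and every figure in it are my own invented assumptions, purely for illustration, not real estimates.

```python
# Minimal sketch of the 'value of a statistical life' (VSL) arithmetic.
# Every number below is an invented assumption, not a real estimate.

def implied_vsl(willingness_to_pay: float, risk_reduction: float) -> float:
    """VSL = money paid divided by the reduction in the probability of dying."""
    return willingness_to_pay / risk_reduction

# Stated preference: someone says they would pay £500 to cut their annual
# risk of death by 1 in 10,000.
print(implied_vsl(500, 1 / 10_000))    # roughly £5 million

# Labour market: a job pays £2,000 a year in 'danger money' for an extra
# 2-in-10,000 annual fatality risk.
print(implied_vsl(2_000, 2 / 10_000))  # roughly £10 million

# Consumer preference: a £400 safety feature is judged to cut the buyer's
# lifetime risk of dying by 1 in 20,000.
print(implied_vsl(400, 1 / 20_000))    # roughly £8 million
```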
Second, there is the human capital method, where we try to
estimate the earning potential of an individual. We think about how long a
person is likely to live, how much time they will spend working in that life,
and their typical hourly wage. Then we multiply it all up to get their value as
a human being.
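Roughly speaking, and with entirely invented figures, the human capital calculation looks something like the sketch below (analysts usually also discount future earnings back to today, which I've included; the wage, hours, and discount rate are assumptions of mine).

```python
# Human capital method: discounted lifetime earnings as the 'value' of a person.
# Every figure here is an illustrative assumption.

hourly_wage = 25.0        # £ per hour
hours_per_year = 1_650    # roughly full-time work
years_remaining = 28      # until retirement
discount_rate = 0.03      # annual rate used to discount future earnings

value = sum(
    hourly_wage * hours_per_year / (1 + discount_rate) ** year
    for year in range(1, years_remaining + 1)
)
print(f"£{value:,.0f}")   # roughly £770,000 of discounted future earnings
```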
So we can and do put a value on human life. Or so analysts
will tell you. But it’s not clear to me that any of these techniques produce
meaningful values. These numbers are actually illusory.
One set of reasons why these estimates are meaningless pertains to technical issues. Let me list some of these off quickly. First, it is well
established that humans are terrible intuitive statisticians. We badly misjudge
the base rate with which some events occur, we’re highly susceptible to the
gambler’s fallacy, and our attitude to various risks depends substantially on
how those risks are framed, rather than on the underlying maths. Pointing out all these errors in our common statistical reasoning was one way that behavioural economics got started.
Why then would we trust people’s stated or revealed preferences
for risk?
This is especially true with respect to events that rarely
occur. Vilfredo Pareto, one of the main intellectual progenitors of the
rational choice theory that underpins this sort of economic analysis, stressed
that people’s behaviour could only be taken to reveal preference in cases where
they have repeated the choice many times. Otherwise they simply didn’t have
enough information.
Now with respect to risky things, like car crashes and
airbags, very few of us have any opportunity to learn what it’s like for those
things to happen. That’s great. But it does mean that willingness to pay
judgements are not rational, informed, or wise in such contexts.
A second technical issue relates to long-term prediction in complex systems, which is basically impossible. The butterfly effect is a
catchy name for the idea that a small change in a deterministic nonlinear
system can result in very large changes over time in that system.
This is a big problem for the human capital approach. I’m 37. Let’s assume, generously, that I’ll work until the classic academic retirement age of 65. So I’ve got 28 more years to work. What are the odds that I will be promoted to Professor in that time? Or that my next popular book will be a smashing success? What are the odds that my trade union will successfully lobby for higher wages? What about the odds that the government will improve funding for universities? Or that university management will finally address the colossal administrative waste that drives much of our cost base? What are the odds that there will be an apocalypse in that time from AI, climate change, some rogue dictator, or whatever? All these probabilities are swirling, but we just take my wage and multiply it by my life expectancy.
These problems of projection and prediction bedevil nearly
all cost-benefit analysis. Take a very simple case – whether to expand a bus network in a medium-sized city, or build a tram system. The tram system completely
changes the long term structural development of that city by encouraging
densification along a narrow corridor. The bus system instead encourages a
proliferation of hubs. Neither of these long term developmental pathways can be
predicted today, but they matter immensely for the wellbeing of the people living in that city.
So we get some numbers out of these methods that seem ‘reasonable’,
but it’s not clear to me that they are meaningful. They rely on assumptions
stacked upon assumptions, many of which are little more than sticking a finger
to the wind. And these assumptions are often immensely decisive. Setting an
interest rate high or low can easily turn a long-lived infrastructure project
from a bargain into a white elephant, and we’ve seen in the past year how unexpectedly
and unpredictably interest rates can fluctuate.
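To illustrate how decisive that single assumption is, here is a minimal sketch with invented numbers (a £1bn up-front project returning £60m of benefits a year for 60 years). The figures and the npv helper are mine, for illustration only, not a real appraisal.

```python
# How decisive the discount rate is for a long-lived project.
# Invented numbers: £1bn up-front cost, £60m of benefits a year for 60 years.

def npv(cost_now: float, annual_benefit: float, years: int, rate: float) -> float:
    """Net present value: discounted benefits minus the up-front cost (in £m)."""
    benefits = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
    return benefits - cost_now

for rate in (0.02, 0.035, 0.07):
    print(f"{rate:.1%}: NPV = {npv(1_000, 60, 60, rate):,.0f}m")
# At 2.0% the project looks like a bargain (NPV roughly +£1.1bn);
# at 3.5% it is still comfortably positive (roughly +£500m);
# at 7.0% it is a white elephant (roughly -£160m).
```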
But the bigger issue around meaning is really the normative
assumptions – the ethical judgements – that
go into putting a value on a human life. The most obvious one being that
everything is denominated in money. My willingness to pay. The market value of
my labour.
I’m reminded of one of my favourite Saturday Morning Breakfast Cereal comics, about the Ethical Fourier Transform. This is a joke method where, when you’re confronted with an ethical conundrum, you convert it to the realm of economics. Having solved it there, you declare it a solved ethical problem.
In the comic, an economist advocating for the Ethical Fourier Transform is challenged to explain how it can solve even basic ethical dilemmas, like whether it is ethical to steal bread to feed your family. The economist checks grain prices and declares that it is ethical until the projected price dip in mid-November.
The interlocutor replies that this is a stupid method – it should
always be ethical to steal to eat.
But the economist is crafty and asks “is it ethical to steal
truffle mushrooms and champagne to feed your family?”
I guess not. The interlocutor reflects on this awkwardly for
a while before saying that the conversation is weirding them out.
The economist shakes their fist: “come to the dark side!”
I put it to you that this is in fact the dark side. That it
drives us towards dystopia. And that there’s plenty of evidence for that
already.
A central reason why we use money in cost-benefit analysis
and the other methods that utilise the numerical value of a human life is that
money is unidimensional and cardinal. This makes mathematical comparisons
across options relatively straightforward.
But there are obviously very many things that we care dearly
about that are not traded in markets, nor are they straightforwardly
describable numerically in terms of magnitude or intensity. Things like love,
freedom, agency, social harmony, neighbourliness, democracy, prestige, etc. Human
wellbeing, especially at the social level, is made up of incommensurate,
multidimensional factors that are often indivisible.
The consequences of brushing these complexities out of
economic analysis are voluminously documented, especially in the policy space. For example, in her book Radical Help, which concerns how to fix the British welfare state, Hilary Cottam explains how aged care services are overwhelmingly commissioned on the basis of efficiency. Standard economic
logic. We need value for the taxpayer dollar. We need to get aged care done at
rock bottom prices so that we have more money for homelessness, or tax breaks,
depending on your politics. Certainly we need ‘rigorous’ evaluation, and that
requires measurement.
What’s measurable? The number of old people washed, wages,
time.
What can’t we measure? Human care. Sensitivity. Being personable.
Relationships.
So old people are washed brusquely by underpaid workers
whose every incentive encourages them to rush. The old and vulnerable are left
feeling violated and discarded by society, and the aged care workers much the same.
What is the value of these feelings?
Have we commissioned good ‘aged care services’ in this case?
I think not. But economic analysis is frankly unable to commission good aged-care
services because precisely what makes these services good, makes them humane,
is something qualitative.
More broadly, the fundamental weakness of economic modelling
is that it must assume sociology, psychology, and political science away for
tractability. This can often result in powerful policy insights that improve
human lives. An example is quota-trading systems for fisheries management, which
have resulted in highly profitable and biologically sustainable fisheries in
Australia and New Zealand.
But there are also many negative results, including some of
today’s most pressing social problems, and most of these have some relationship
to the value of human life, and certainly to economics’ impoverished
understanding of human wellbeing as mere desire fulfilment.
In the US rust belt and to a lesser but still significant
extent in the UK’s deindustrialised North, deaths of despair from opioids are
accelerating at a frightening rate. Much of the academic literature on this
topic is converging on the disintegration of social networks as a key driver of
this hopelessness. Community pubs, scout halls, sports clubs and other sources
of social cohesion were systematically defunded from the 1980s onwards because
they contributed insufficiently to productivity.
Meanwhile, the prevailing economic orthodoxy was that if
economic opportunity was leaving a place then we shouldn’t fight those market
forces. It would be inefficient. Dying regions didn’t need help; people in them
needed luggage so that they could move to opportunity. This again overlooks the
centrality to human wellbeing of roots, connections, familiarity, identity, networks,
history, tradition, culture, on and on. All that is considered is where wages
are higher – that’s all that people can possibly want. So the brightest are
incentivised to leave while the rest stay behind in increasing misery.
The
youth mental health epidemic, which any of my academic colleagues can tell you
about, is a function of encouraging teens to see their value in terms of their
personal ‘achievement’, whether financial or in terms of how hot and cool they
are, rather than their self-actualisation or their contribution to a
collective. Education is aggressively oriented towards preparing people for
work, not life, and then students’ sense of self-worth comes to be dominated by
their grades, internships, graduate positions, not their interpersonal skills, relationships,
wisdom, or happiness.
I’ll
give you one story, again from Cottam’s Radical Help. After months of intensive
social work, two children from a ‘difficult’ home were finally prepared for school. Their household had been calmed down, they were being fed and chaperoned, they had uniforms. They were blocked at the school gates by the principal, who was worried about the effect on this already marginalised and penalised school’s league table results if these two problem children were readmitted. They returned home, their hope and trust shattered. Funding models premised on literacy and numeracy results are the product of analysing human life in terms of the economic factors we can measure instead of giving our full richness its due.
This is an example of New Public Management. The site of the greatest human carnage caused by economic logic is New Public Management – the application of economic logic to how government goes about its business, especially the commissioning of public services like education, health, social care, and job centres. Again, there are many bright spots, like garbage collection. But there are also so many spots of the darkest blight.
Mental
health and social workers spending most of their time filling out forms to meet
diagnostic criteria and justify funding. Pissing millions up the wall on
spurious business cases in the name of not wasting taxpayer dollars.
Metrics
and no-strings philanthropy.
Maybe
you think this isn’t about the value of a human life. But it’s all the same
central conceit – that the full richness of a complex puzzle is a distraction,
an inconvenience that prevents us from rationally analysing a problem.
So we have to flatten that complex problem into a few key variables that we can
stick into a spreadsheet and run some maths on. Let’s just make some
simplifying assumptions. That’s fine until you assume away the point of
the exercise.
I’m
starting to drift into public policy, and it is here where I think faith in
numerical, material models of human life has its most pernicious influence,
which is to abrogate the moral responsibility of politicians and citizens and
simultaneously debase the role and value of science in democracy.
Remember
COVID? Remember how the politicians kept saying ‘we’re following the science’.
The ‘science’ they were referring to was epidemiological models that make
ethical assumptions about the value of life expectancy and cost-benefit
analyses of the economic burden of lockdown that can’t handle non-market goods.
These value judgements aren’t a matter of science. They’re ethical. They’re
political. They are precisely the sorts of thing we want politicians to take
responsibility for. Instead they abrogate that responsibility to science.
They do this precisely when science is least capable of making the associated judgements – in high-stakes, unpredictable, urgent situations involving complex systems, like pandemics, financial crises, climate change, refugee crises, and wars.
Scientists try their best in these situations, no shade on SAGE, but no matter how many caveats they pump into their reports about confidence intervals and ethical complexity, politicians are going to ignore all of that and just ask for a number. Politicians do not want to take responsibility for hard decisions, and scientists are too willing to let them abrogate responsibility, because it gives them power.
Beatrice Cherrier, a historian of economics, has investigated the slow disappearance of ‘welfare economics’ from the field. Welfare economics is essentially the exploration of normative issues in economics. Her conclusion was that economists realised that they obtained more policy influence the more they could pretend that their work was pure science. So the value judgements get buried ever further.
When economists do cost-benefit analysis they want a dispassionate procedure that can be applied mechanically to spit out a number. Ethics, which is ultimately vibes, contaminates the desired sterility. But politicians and citizens don’t need to be dispassionate. They can make the value judgements. We should let them, not sweep all these value judgements under the rug.
Some economists will say that this is precisely what cost-benefit analysis is – a contribution to public discourse merely meant to stimulate debate. If that were true, then economists wouldn’t be so outraged when policymakers disagree with their analysis, and they certainly wouldn’t accuse policymakers of being irrational.
What this ends up doing is debasing science and corrupting public policy. We undermine democratic processes, reduce faith in technical analysis, and feed populist concerns that scientists sneak values into their technical analysis to arrive at conclusions that suit ‘elites’. So we should instead be much more reluctant to use models that put statistical values on human life in public policy.