21 Jul 2016 – 22 Jul 2016, all day. CHANGE OF VENUE: now in SG1, Alison Richard Building, Sidgwick Site


Registration for this event is now closed. If you are interested in attending, please email Michelle Maciejewska so that your name can be put on the waiting list.

This is a conference convened by the Limits of the Numerical Project at CRASSH.

Anna Alexandrova
Gabriele Badano
Stephen John
Trenholme Junghans

Numbers and quantifying operations guide the social, ethical, and financial valuation of medical interventions at all levels of healthcare policy.  How have particular measures and models of evidence currently prevalent in healthcare developed?  What are the practical effects of the use of such measures and models, and what philosophical issues do they raise?  Convened by the Limits of the Numerical research group at CRASSH, this two-day conference brings together scholars working across history, literature, philosophy, political science, sociology and STS to engage these and related questions.

The conference will open and close with keynote addresses by Professors Havi Carel (University of Bristol) and Ted Porter (University of California, Los Angeles). The draft programme appears below.

For administrative enquiries please contact Michelle Maciejewska.

The Limits of the Numerical project is funded jointly by the ISRF and the Isaac Newton Trust.



Thursday 21 July



Introduction and welcome

Keynote Speaker

Havi Carel (University of Bristol)


Tea/Coffee break



Quantifying life: Understanding the historical emergence of Quality-Adjusted Life-Years (QALYs)
Eleanor MacKillop (University of Liverpool)

How far can quantified measures of quality of life be generalised?
Polly Mitchell (UCL)





Practices of valuation: lessons from epidemics
Victor Roy (University of Cambridge)

The molecular diagnostics market, 'surplus-value', and the cost per QALY ratio 'taboo' in the UK/USA – a psychoanalytic discourse analysis
Owen Dempsey (University of Manchester)


Tea/coffee break



Statactivism: How to fight with numbers
Isabelle Bruno (Université Lille 2)


Wine Reception in the atrium


Friday 22 July


GRADE-ing evidence and the 'true treatment effect'
Chris Blunt (LSE)

Conceptual engineering and the science of well-being
Elina Vessonen (University of Cambridge)


Tea/coffee break



Re-describing measurement:  understanding the evolving diversity of self-rated health
Tiago Moreira (University of Durham)





Why do countries make different drug coverage recommendations?  Are qualitative findings similar, complementary, or contradictory to quantitative ones?
Elena Nicod (LSE)

The concept of ‘patient relevance’ in pharmaceutical benefit assessments in Germany
Katharina Kieslich (KCL)


Tea/coffee break



Politics by the Numbers
Ted Porter (University of California, Los Angeles)


Chris Blunt: “GRADE-ing Evidence and the 'True Treatment Effect'.”

GRADE is the most influential and most widely used hierarchy of evidence, a tool for the appraisal of the quality of evidence and the strength of recommendations in healthcare. It is used by organizations including the WHO, the Cochrane Collaboration, the British Medical Journal and NICE to evaluate the strength and quality of clinical evidence. The system embeds a fundamental assumption—that there is a 'true treatment effect' for any given treatment which can be measured with some degree of accuracy. Drawing upon work on heterogeneous responses to treatment, I argue that the 'true treatment effect' is usually a fiction. Predicting individual treatment effects is the most significant task in therapeutics. The notion of the 'true treatment effect' obscures both the importance of this task, and the work that can be done towards providing a solid evidence base for individualized predictions. There are methodological limitations which prevent us directly measuring individual treatment effects. As such, work to measure and predict individual effects will need to integrate a number of approaches and sources, many of which are not, and cannot be, prioritized by hierarchies such as GRADE.

Isabelle Bruno: “Statactivism: How to Fight with Numbers.”

Nowadays statistics are often contested. Certain movements denounce them, accusing quantification of freezing human relations; of conveying a cold image of society; of constantly evaluating human beings, citizens, workers. Yet there are also emerging forms of collective action that use numbers, measurements and indicators as means of denunciation and criticism. Hence our coining of the portmanteau word: statactivism. Formed by the contraction of statistics and activism, it refers to a particular form of action within the repertoire used by contemporary social movements: the mobilization of statistics. It may be understood as a slogan to be brandished in battle, but it is also a term to be employed in describing those experiments aimed at reappropriating statistics’ power of denunciation and emancipation. Traditionally, statistics were used by the workers’ movement in class conflicts. But in the current configuration of state restructuring, new accumulation regimes, and changes in work organization in capitalist societies, the activist use of statistics is shifting. Based on the results of a research project (Bruno and Didier, 2013) and of an international conference bringing together activists, artists and social scientists (Paris, May 2012), this talk will seek to shed light on these developments.

Owen Dempsey: “The molecular diagnostics market, 'surplus-value', and the cost per QALY ratio 'taboo' in the UK/USA – a psychoanalytic discourse analysis.”

I have re-framed our commonly accepted views of how medicine makes progress through biotechnological innovations in terms of a circulation of capital, and of the citizen transformed into both labour-power and raw material. The 'object' of Kordela's theory of a biopolitics of capitalism is 'surplus-value', in terms of surplus economic value (profits for industry) and the fantasy of immortality (faith for the patient), where what is at stake is the exploitation of 'life itself'. War-like discourses of the creation and marketization of a prognostic genetic signature to predict cancer recurrence are analysed. I will describe the potential effects of (a) the test result on the transformation of patient subjectivities, and (b) the use of a relative cost-effectiveness ratio threshold, in the marketisation process, on the effectiveness of the UK National Health Service.

Katharina Kieslich: “The concept of ‘patient relevance’ in pharmaceutical benefit assessments in Germany.”

Pharmaceutical benefit assessments of new medicines were introduced in Germany in 2011. As in other countries, the underlying rationale was the desire to conduct evidence-based assessments that inform health care coverage decisions. A close look at the German pharmaceutical benefit assessment system shows that its methodological design deviates from other, better-known examples of health technology assessment (HTA) systems, such as the National Institute for Health and Care Excellence (NICE) in England, in three important ways. First, the Institute for Quality and Efficiency in Health Care (IQWiG) conducts assessments within, not across, disease categories. Second, the outcomes of benefit assessments form the basis for price negotiations between sickness insurance funds and pharmaceutical manufacturers, not the basis for making coverage decisions. Third, the key criterion for assessing the benefit of a new product is the concept of ‘patient relevant’ endpoints. This talk explores how IQWiG and the Federal Joint Committee (FJC) have operationalised and quantified this concept of patient relevance. It concludes that ‘patient relevance’ remains a poorly defined, and difficult to operationalise, concept that gives rise to controversy.

Eleanor MacKillop: “Quantifying life: Understanding the historical emergence of Quality-Adjusted Life-Years (QALYs)”

Quality-Adjusted Life-Year (QALY) measurements are central to health care decision-making in Britain and abroad (Weinstein et al. 2009). Yet their history remains obscured, and often simplified, in current research. Echoing the CRASSH research agenda, I argue that what is needed is a more in-depth and political history of QALYs, allowing us to better understand, and perhaps critically evaluate, their current dominance. In doing so, I mobilise Multiple Streams Analysis (MSA). Initially developed by Kingdon (1984; 1993), this policy framework builds a complex and dynamic picture of how policy ideas ‘catch on’, bringing together notions of policy entrepreneurs, streams, ideas, negotiation/bargaining and structures alongside chance and key events. Rather than recounting historical ‘truths’, I focus on problematizing the emergence of QALYs in light of their current dominance in British health care policy and decision-making. Why were QALYs preferred over other quality-of-life (QoL) measurements? Who and what were the key players – or policy entrepreneurs – ideas, discourses, negotiations and events leading to the adoption and growth of QALYs from the 1970s? In starting to address these questions, and to set the context, I begin by reviewing the available literature on the emergence of QALYs, highlighting its tendency to streamline this development and overlook its complexity. Second, I outline the Multiple Streams Analysis approach and its advantages for examining the historical emergence of QALYs in Britain. In this section, I also discuss the type of data mobilised – namely different archives and oral history interviews with key protagonists. Third, I examine past attempts at quantifying life, notably in operational research, economics and psychology. Fourth, I problematize the emergence of QALYs in particular, characterising the three streams, policy entrepreneurs and windows of opportunity in the inception of QALYs. Finally, the conclusion offers some leads for future research within this project.

Polly Mitchell: “How far can quantified measures of quality of life be generalised?”

Numerical measures of quality of life are widely used to compare and evaluate health states. Measuring quality of life usually involves assigning quality weightings or utility values to health states, using methods such as preference elicitation in structured interviews and ‘direct’ measurement of experienced utility. This paper will argue that the instruments used to assign quality or utility values to health states are sensitive to framing biases and changes in the measurement context. Resulting quality weightings and health state orderings are thus inextricable from the descriptions under which they were measured. I will question the assumption that health states can be assumed to constitute a complete and transitive set of prospects over which preferences can be elicited, and argue that competing accounts of health state utility cannot be compared without employing external evaluative criteria.

There is good reason to think, then, that the resulting measurements of health state utility and quality of life should not be treated as broadly generalisable beyond the specific conditions of measurement. While this should not necessarily lead to an outright rejection of the possibility of quantifying quality of life, it does indicate that numerical measures of quality of life should be used with caution and with an awareness of their limitations, particularly with respect to their commensurability and generalisability.

Tiago Moreira: “Re-describing measurement:  understanding the evolving diversity of self-rated health.” 

Health measurement and monitoring have been a central concern for researchers, governments, insurance companies and employers for over seven decades. Social science engagement with this process has mainly taken two forms: studying the attitudes, beliefs or behaviours related to health, or critically relating health measurement to specific modes of social organisation in late modernity. In this paper, I draw broadly on the latter to explore the genesis and development of ‘self-rated health’ measurement from the 1950s to the present. In doing so, I identify three overlapping, sedimenting repertoires: one concerned with the regulation of help-seeking behaviour, the second aiming to monitor ‘patient experience’ and the third focusing on the ‘embodied mind’. I argue that this evolving diversity is best understood as resulting from fluid – rather than network – relations: a process whereby the deployment of a standardising measurement is seen as partially generating or accelerating the proliferation of ‘local’ – yet to be ‘socialised’ – singularities.

Elena Nicod: “Why do countries make different drug coverage recommendations?  Are qualitative findings similar, complementary, or contradictory to quantitative ones?”

Health technology assessment (HTA) is an evidence-based tool used to inform drug coverage decisions. Despite the systematic nature of this approach, the application and outcomes of HTA vary extensively across countries. Two studies, using qualitative and quantitative research designs respectively, investigated the reasons for differences in the drug coverage decisions issued in four countries using HTA (England, Scotland, Sweden and France). This session will start by describing the approach developed within the qualitative stream, which allows these drug coverage decisions to be systematically analysed and compared across countries. In the second part of the session, the findings from the qualitative study will be compared with those derived quantitatively in a separate study, which will also be briefly introduced. Integrating the findings from these two studies, a conceptual framework describing the relationships was built. The relationships within this framework were explored in order to assess whether quantitative and qualitative findings are similar, complementary or contradictory. We conclude that although these processes are systematic and evidence-based, a component of these decisions relies on judgments made during the deliberative process, which cannot be captured quantitatively.

Victor Roy: “Practices of valuation: lessons from epidemics”

Epidemics, like 'natural' disasters, are “extreme events” that stress human capabilities and social systems. As social processes, epidemics provide a lens into how people, both as individuals and as parts of systems, value human health. On the one hand is the ideal of care in biomedicine – that taking care of the weak and vulnerable preserves intrinsic value, restores bodily function and social personhood, and enables new pathways to opportunity. Yet embedded in a nexus of states, governments, global agencies, and financial institutions is an epistemology of risk and efficiency amidst uncertainty, which promises to use economic and financial modeling to allocate and attract capital to maximize public health and financial returns for multiple public and private investors. This research explores the consequences of this epistemological and strategic turn in response to epidemic disease, drawing on lessons from three case studies: drug-resistant tuberculosis, Hepatitis C, and Ebola.

Elina Vessonen: “Conceptual engineering and the science of well-being”

Concept formation, i.e. the activity of characterizing a target concept and providing arguments in favor of the chosen characterization, is a research phase that rarely gets explicit attention in the social sciences. Such neglect is problematic in light of recent studies in the history and philosophy of measurement, which show that careful, iterative re-characterizations of the target concept tend to be crucial to the improvement of measurement practices. In this talk I show how appropriate attention to methods of concept formation can help improve measurement practices in the science of well-being.

Conceptual engineering is an approach to concept formation that identifies the primary purpose the target concept is supposed to serve, e.g. measurement, and provides desiderata in light of which the target concept can be shaped for this purpose. I argue that when the aim is well-being measurement, the two most important criteria for concept formation are relational exactness and similarity. When the target concept is relationally exact, we know that entities (e.g. people) that are compared in terms of the target concept (e.g. well-being) relate to each other in ways that allow us to meaningfully represent those relations numerically. If such relations do not (or cannot be shown to) exist, the concept is not (demonstrably) quantifiable. The similarity criterion requires that the definition of the target concept matches pre-scientific uses of the same concept. I argue that there are moral and epistemic reasons for requiring that the similarity criterion be fulfilled when the aim is well-being measurement. I finish with thoughts on the extent to which current well-being measures fulfill these criteria.


Tel: +44 1223 766886
Email: enquiries@crassh.cam.ac.uk