|23 Jul 2015 - 24 Jul 2015||All day||CRASSH, Alison Richard Building, 7 West Road, CB3 9DT - SG1&2|
Please register online for this event.
– Conference fee: £40 (full), £20 (students) – includes lunch, tea/coffee (Deadline: Sunday 19 July 2015)
– Conference fee including the conference dinner at Clare Hall on Thursday 23 July: £80 (full), £50 (students) – includes lunch, tea/coffee (NOW CLOSED).
Twitter Hashtag: #CRASSHmeasure
Daniel Jon Mitchell (History and Philosophy of Science)
Hasok Chang (History and Philosophy of Science)
Eran Tal (History and Philosophy of Science)
The Making of Measurement is an interdisciplinary conference that seeks to consolidate an emerging international community of scholars interested in the history and/or philosophy of measurement. This new wave of scholarship is still in an embryonic stage and no general conceptual frameworks or schools of thought have yet emerged. Inevitably, tensions exist between methodologically diverse approaches across the fields of philosophy, history, and sociology of science, particularly with respect to whether measurement outcomes reflect facts about nature, or about human tools and concepts. Hence the goal of this conference is to bring together scholars to review recent advances and to identify key issues for further development. This decade is also seeing dramatic changes in the metric system because four scientific units are being redefined in terms of fundamental constants; the contemporary relevance of a systematic approach in the humanities to the study of measurement is therefore particularly strong. Contributors might choose to address one or more of the questions listed under the following themes:
- Philosophies of Measurement: Under what conditions is the world justifiably deemed quantifiable? How do existing philosophies of measurement, for example operationalism, fit specific historical cases? Can measurements of the properties of macroscopic bodies and microscopic entities be analysed in the same way? When measuring instruments disagree, is it always possible to ascertain which one is in error? Do the relationships between measurement and theory in the natural sciences hold true for the social and human sciences? How does measurement function in areas of scientific enquiry, for example the psychological and psychical, where the entities under study have a dubious ontological grounding?
- Units, Standards, and Metrology: Are measurement standards accurate by virtue of fact or convention? What are the social, political, and scientific aims for which, and means through which, units and standards are established? What impact have specialized metrological institutions had on processes of standardization?
- Practices of Measurement: What kinds of conceptual approaches, methodological and mathematical tools, and practical steps have measurers taken to ensure sufficient reliability and precision for their measurements and instruments? How have these varied across sites ranging from the elite laboratory to the workshop, factory, and home, and what kinds of exchange (of personnel, instruments, apparatus, techniques, and so on) take place between these sites? What determines judgements of the level of acceptable error, and how do these relate to the purpose of the measurement, and to economic and technological development?
- Nancy Cartwright, Durham University
- Graeme Gooday, University of Leeds
- Terry Quinn, International Bureau of Weights and Measures
Supported by the Centre for Research in the Arts, Humanities and Social Sciences (CRASSH) and the School of Arts and Humanities at the University of Cambridge, the 7th European Community Framework Programme, the Leverhulme Trust and the British Society for the Philosophy of Science.
Accommodation for speakers selected through the call for papers and non-paper-giving delegates
We are unable to arrange or book accommodation; however, the following websites may be of help.
University of Cambridge accommodation webpage
Administrative assistance: firstname.lastname@example.org
|DAY 1 - Thursday 23 July 2015|
PARALLEL SESSIONS (A)
PANEL 1: Sensory Measurements
Chair: Annette Mülberger
PANEL 2: Case Studies in the Physical Sciences
Chair: Richard Staley
PANEL 3: General Methodology
Chair: Eran Tal
PARALLEL SESSIONS (B)
PANEL 1: Points of Conversion: Quantification and Measurement in Germany and Mexico (1800–1940)
Chair: Daniel Mitchell
PANEL 2: Measurement in Practical Realms of Life
Chair: Leah McClimans
PANEL 3: Measurement and Philosophy of Science
Chair: Hasok Chang
PARALLEL SESSIONS (C)
PANEL 1: Physics and Astronomy
Chair: Brian Pitts
PANEL 2: Economics
Chair: Anna Alexandrova
PANEL 3: Specific Methodology
Chair: Alfred Nordmann
Drinks at Clare Hall followed by dinner.
|DAY 2 - Friday 24 July 2015|
PARALLEL SESSIONS (D)
PANEL 1: Units and Dimensions
Chair: Daniel Mitchell
PANEL 2: Improving Person-Centered Health Measuring Instruments: What is Needed?
Chair: Anna Alexandrova
PARALLEL SESSIONS (E)
PANEL 1: General Philosophy of Measurement
Chair: Eran Tal
PANEL 2: Charles Sanders Peirce on the Limits of Measurement and the Measurement of Limits
Chair: Alistair Isaac
Response: Daniel Jon Mitchell (University of Cambridge)
Drinks reception at CRASSH
A complete list of abstracts can be downloaded here.
Nancy Cartwright: The Theory of Measurement
From work with Norman Bradburn and Rosa Runhardt.
Measurement, I shall argue, requires: 1) a characterization of the quantity or category: we have to be able to identify its boundaries and know what belongs to it and what does not (characterization); 2) a metrical system that represents the quantity or category (representation); and 3) rules for intervening in the world to produce measurement results (procedures). It is essential that we can defend the claim that these three mesh together properly. Representation theorems are crucial for the link between 1 and 2, and a great deal of empirical knowledge for linking 3 with the other two.
I shall next distinguish between concepts that refer to a single quantity or category that can be precisely defined and those that refer to things that are loosely related and for which the boundaries of the concept are not clear (Ballung concepts). When we make these precise for purposes of exact science, we generally leave behind a good deal of the original meaning. This suggests representing these with tables of indicators, but at the cost of comparability.
Finally I shall consider advantages of purpose-built versus common metrics.
Terry Quinn: From Artefacts to Atoms: The Basis of Reliable Measurements
At the 26th General Conference on Weights and Measures, due to take place in 2018, it is planned to adopt a new definition of the International System of Units (SI) based on a set of fixed numerical values of constants of nature. This will be the culmination of more than two hundred years of metrology, finally putting into practice the original ideas of those who created the metric system. The key is the replacement of the kilogram artefact of Pt-Ir by a definition based on a fixed numerical value for the Planck constant. In this lecture I shall outline how this has come about and link it to the need for a system of measurement that is uniform and accessible world-wide for international trade, high-technology manufacturing, human health and safety, protection of the environment, global climate studies and the basic science that underpins all these.
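The logic of such a definition can be sketched as follows. Fixing the numerical value of the Planck constant h, whose unit (the joule second) already contains the kilogram, turns the kilogram from a measured quantity into a derived one. (The numerical value shown below is the one ultimately adopted for the revised SI; it had not yet been settled when this abstract was written.)

```latex
% Fixing h exactly, with J s = kg m^2 s^{-1}:
\[
  h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}
  \quad\Longrightarrow\quad
  1\ \mathrm{kg} = \frac{h}{6.626\,070\,15 \times 10^{-34}}\ \mathrm{m^{-2}\,s}.
\]
% The metre and second are themselves already defined via two other
% fixed constants: the speed of light c and the caesium hyperfine
% transition frequency \Delta\nu_{Cs}.
```

On this scheme the kilogram no longer depends on any single artefact: any laboratory able to realise h (e.g. via a Kibble balance) can realise the unit of mass.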
Terry J. Quinn, CBE FRS was the Director of the International Bureau of Weights and Measures (BIPM) between 1988 and 2003. BIPM is an international standards organisation, one of three such organisations established to maintain the International System of Units (SI) under the terms of the Metre Convention (Convention du Mètre).
Graeme Gooday: A Measured Hearing
What does it mean to ‘measure’ hearing? For what purposes would it matter that hearing could or should be measured – by whatever understanding of measurement might be involved? A conventional answer is that the clinical sub-discipline of audiology developed the relevant form of expertise as the distinctive intervention to quantify human hearing capacities. It did so by combining into a hybrid technical field insights from the physics of acoustics, the technology of telecommunications testing, and the physiological expertise in hearing pathologies hitherto the province of the otologist. Put in historical context, it is uncontroversial that the profession of audiology emerged in the mid-twentieth century, with audiologists using the audiometer as their principal instrument to quantify the extent and nature of hearing loss, doing so with a view to its amelioration by the prescription of appropriately configured hearing aids.
But what exactly did the audiometer measure? Certainly not the capacity to hear and understand human speech. So then what was signified by the audiometer’s readings and how did that come to serve any useful clinical purpose? This paper sets out to answer that question, relating it chronologically to the massive increase of hearing loss in Second World War combatants, especially in the USA. To understand this story there is in turn a much longer term panorama to be explored of how the audiometer emerged in the late 19th century as a correlate of the telephone, and yet was not adopted for formal hearing testing until half a century later. This links in turn to the complex politics of the history of hearing loss that has so often been tangled up with the history of deafness in ways that have long obscured the huge variety of human capacities for hearing, and the multiple aetiologies for hearing loss.
Historical research in telecommunication history and disability history has recently started to uncover the diversity of human hearing capacities throughout the industrial era. These differential capacities were brought into focus by first encounters with the telephone in the late 1870s. This device only allowed for aural communication (stripping out all visual elements) and revealed a diversity of facility that had hitherto been masked by the starkly dichotomous language of deaf vs. hearing. So amongst all this diversity, how was a notion of ‘normal hearing’ then created against which hearing loss could be diagnosed by relative comparisons? This initially contested notion of normalization was eventually canonized in the regular use of the audiometer in the 1920s, not for clinical purposes, but instead to deal with the civic problem of public ‘noise’, especially in controversial campaigns to reduce alleged increases in civic street noise that have been studied by Karen Bijsterveld and Emily Thompson.
My paper will start with David Edward Hughes’ proposal to the Royal Society in 1878 for devices to amplify human speech (the microphone) and an as yet unnamed device to measure human capacity to hear individual pure tones. This latter technique was soon taken up in the physician Benjamin Richardson’s analysis in The Lancet, 1879, in which he christened it the audiometer (initially just ‘audimeter’), proposing its adaptation to determine the hearing capacities required for certain key professions, and attempting to define relative deviation from ‘perfect’ hearing. However, I show instead how other methods of comparing hearing capacities were long preferred by physicians (tuning forks) and telephonic engineers (telephone-based devices). I then look at C.M.R. Balbi’s attempt in The Lancet, 1925, to reinvent the audiometer to map in two-dimensional form the (drug-induced) variability of hearing loss independently of patients’ own testimony. It was through such instrumentalisation of hearing tests that the audiometer could be used both in the clinic and the laboratory to pathologize certain forms of hearing capacity as somehow less or greater than a newly stipulated norm.
Nevertheless, clinicians remained concerned that the audiometer only served to quantify human capacity to hear individual pitched sounds, not the much more significant human capacity to understand highly modulated multi-frequency speech. Hence my paper concludes by showing how the registrations of the audiometer came to serve as the surrogate for human hearing – and hearing loss – when the vast scale of combat-induced deafness in the Second World War obliged the American military services to adopt the audiometer in new specialist clinics for large-scale and high-speed evaluation and ameliorative prescription of reduced hearing capacities. Even so, the newly specialised body of audiologists that emerged to handle the long-term care of such deafened veterans did not lightly treat the audiometer as a simple measuring device: it was clear to them that interpreting the results of audiometric testing required a large amount of training, as well as special low-noise testing laboratories and a substantial division of labour in handling aftercare. Although this use of the audiometer was not then a full reductive ‘industrialisation’ of measurement (as discussed in Gooday, 2004), it is pertinent to note that 21st-century audiology has moved away from intensively audiometric measurement regimes, returning to the more conversational bespoke evaluations of hearing loss that pre-dated the introduction of the audiometer.