Better safe than sorry? How much caution is too much in thinking about animal sentience and animal welfare?
I had a really enjoyable discussion with Simon Lauder about this topic on ABC South East NSW earlier this week (Wednesday 19th June 2020). The audio is available here.
The question of animal sentience (i.e. the capacity to experience pain) is fundamental to discussions of animal welfare: undue animal suffering is immoral and must be avoided. Whilst this much is agreed upon, applying the principle requires an understanding of animal sentience. Which animals are sentient? Which are not? When is a given animal in pain or suffering? When is it not?
Answering these questions is often less than straightforward. Behaviours we associate with pain in ourselves are not always present in animals in similar situations (e.g. an animal may conceal pain rather than yelp or cry out). In the face of this uncertain evidence, policy makers typically adopt a policy of “erring on the side of caution” or “giving the animal the benefit of the doubt”. This results in animal welfare policies which treat an animal as though it is capable of experiencing pain despite uncertain evidence (see discussion by Jonathan Birch here).
Whilst this approach has the great benefit of minimising unnecessary suffering in the world, recent research on pain perception in bees and fish challenges the practicality of the principle. For example, worldwide some 970 to 2700 billion fish are wild caught annually. If fish are sentient, then the number of sentient beings in the form of fish that are slaughtered for food annually is well over a hundred times the current human population (see Bob Jones on this here). This offers a strong case for ceasing fishing on the grounds of being better safe than sorry morally. Such a policy would, however, come at huge human cost. Some 3.2 billion of the world’s population rely on fish for a significant proportion of their daily food intake, not to mention the many people who rely economically on fisheries. This creates a dilemma for the welfare policy maker. Perhaps even more stark is the challenge posed by invertebrates like bees and flies (see discussion of bee sentience by Colin Klein and Andrew Barron here). If the evidence of their sentience is correct, then the use of pesticides has serious moral implications. Again, however, “erring on the side of caution” and avoiding invertebrate deaths and suffering would have massive human costs. How should we weigh up the evidence in such a situation? Should we abandon the principle of erring on the side of caution? Or should we bite the bullet and treat even the tiniest fly as sentient? Is there a middle ground?
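To put that scale in perspective, here is a quick back-of-the-envelope calculation. The catch estimates come from the figures above; the world population of roughly 7.8 billion (circa 2020) is my own assumption for illustration.

```python
# Back-of-the-envelope scale comparison. Catch estimates are from the text;
# the world population figure (~7.8 billion, circa 2020) is an assumption.
fish_caught_low = 970e9       # lower estimate of fish wild-caught per year
fish_caught_high = 2700e9     # upper estimate
world_population = 7.8e9      # approximate human population, 2020

ratio_low = fish_caught_low / world_population
ratio_high = fish_caught_high / world_population
print(f"Annual wild catch equals {ratio_low:.0f} to {ratio_high:.0f} "
      f"times the human population")
```

Even on the lowest catch estimate, the annual number of fish caught exceeds the human population by two orders of magnitude.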
A version of this blog post was published on The Conversation on 8th April 2020, discussed with Amanda Vanstone on Counterpoint (ABC Radio National) on 11 May 2020 (audio available here), and discussed with Simon Lauder (on ABC Radio South East NSW) on 10 May 2020.
From bushfire prediction to climate change, scientific modeling has been getting a fair bit of press lately, and no more so than with the current COVID-19 crisis. As countries battle the pandemic, scientific modelers are playing a central role in predicting how the virus will spread, what impact it will have, and what sorts of interventions might halt it.
Whilst the public profile of modeling has perhaps never been greater, the broader understanding of what scientific models are (and what we can and can’t expect of them) remains poor. As a philosopher of science, there is little I can do to contribute to the current effort against COVID-19 apart from stay home.
One thing I do know a little about, however, is the nature of scientific models, their strengths and limitations. Given that, here I offer a brief and accessible guide to scientific modeling for the uninitiated in the hope that a broader knowledge of their power and pitfalls will be of value in public debate.
What is a scientific model?
Scientific models are representations of parts of the real world. They can take many forms. They range from small-scale physical models of phenomena, such as the San Francisco Bay Model (a hydraulic model used to investigate water flow in San Francisco Bay), to the type of mathematical models used to understand the spread of COVID-19.
Like any replica or microcosm of the real world, models can be used to indirectly explore the nature of the real world. They can tell us what the important features of real-world systems are, how those features interact, how they are likely to change in the future, and how we can successfully alter those systems.
Why are models so valuable?
Scientific models make it possible for us to explore features of the real world that it is impossible, or impractical, to investigate directly. For example, it would be impractical to do direct experiments on what proportion of the population of Australia needs to engage in social distancing to make “flattening of the curve” likely.
Even if we could devise good experiments, the time it takes for people to become sick and transmit COVID-19 means any experimental results would arrive too slowly to be of practical use. Scientific models offer us a way to use data from other countries, along with theory and other information, to make a reasonable estimate of what impact particular interventions would have. Models are thus invaluable in a situation like the COVID-19 pandemic, where time is of the essence and we are interested in effects at a large scale.
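As a toy illustration of this kind of model-based exploration, here is a minimal discrete-time SIR-style simulation. All the parameters are invented for illustration, not calibrated COVID-19 values; the point is simply that we can compare epidemic peaks with and without a reduction in contact rates without running any real-world experiment.

```python
# Minimal discrete-time SIR sketch: how reducing the contact rate
# "flattens the curve". All parameters are illustrative, not calibrated.

def peak_infected(beta, gamma=0.1, population=1_000_000, days=365):
    """Run a simple SIR model and return the peak number infected at once."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts drive spread
        new_recoveries = gamma * i                  # recovery removes cases
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

baseline = peak_infected(beta=0.3)    # no distancing
distanced = peak_infected(beta=0.15)  # contact rate halved by distancing
print(f"Peak infections: {baseline:.0f} without distancing, "
      f"{distanced:.0f} with distancing")
```

Halving the contact rate dramatically lowers the epidemic peak in this sketch, which is exactly the kind of "what if" question models let us ask in advance.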
What are the limits of scientific models?
The usefulness of a model is limited by its accuracy—how well it represents the real world. For example, a model of the spread of COVID-19 based on data from a densely populated part of urban Europe may not work for suburban Sydney or Melbourne because it lacks the relevant similarity to those locations.
There is a well-known trade-off between generality and specificity in model-building. Really detailed modeling, such as this work coming out of Imperial College London, can be used to make very accurate, relatively narrow-range predictions about specific cases in the US and Great Britain. Simpler, general models, such as these ones by Ben Phillips at the University of Melbourne, on the other hand, offer valuable large-scale insights but far less local precision. Such general models have been particularly useful early in the pandemic, when localised information is scarce. However, as we gather more information about local circumstances, modeling will become more specific and more accurate, and these general models will become less important.
One challenge for modeling in a real-world context like COVID-19 is that our modeling may not get it right every time. This is partly because we lack enough fine-grained information about the real-world situation. It is also because individual actions and sheer bad luck in the short term can make big differences in the longer term. Like a stone thrown into a lake, the failure of one individual to self-isolate or quarantine can produce a much larger ripple of downstream effects. The massive impact of individual actions in generating some of the clusters of cases (such as this case in South Korea) is a testament to this.
What does this all mean?
Despite the uncertainty inherent in the COVID-19 pandemic, we should be optimistic about the science. The general principles behind the models we are basing our public policy on are the product of decades of testing and research, and we are learning more and more specific information about COVID-19 every day. In terms of the history of humanity, this scientific progress means we are in a far better place than any generation before us to deal successfully and efficiently with a pandemic of this scale. This is in no small part thanks to the power of model-based science.
(Thanks to Carl Brusse, Simon Greenhill, Adrian Currie, Ross Pain, Rob Lanfear and Stephen Mann for their comments and suggestions)
(This piece is aimed at scientists that are not already engaged with philosophers of science and originally appeared in The Biologist in April 2020).
Philosophy of science is frequently described by those outside the discipline as obscure, technical and irrelevant to scientific practice. While this picture of my own field is accurate enough to cause my ears to burn a little, I want to tell you why it is still a caricature and a harmful one at that.
Philosophy of science is alive and kicking and, in many respects, of huge relevance to science today. The poor perception of the discipline means, however, that the potential benefits of collaboration between philosophers and scientists are largely being squandered. Looking at the two broad areas of work in the philosophy of science, the possibility of fruitful engagement between philosophy and science is clear.
First, there is the philosophy of nature. Along with scientists, philosophers of nature are concerned with answering big-picture questions about our world such as ‘what is cognition?’, ‘what is life?’ and ‘are humans unique?’. This integrative and somewhat speculative research requires us to step back from the nitty-gritty of particular disciplines and contexts, and focus on what science is saying as a whole.
Of course, it requires a great deal of scientific literacy and understanding – many philosophers of science have tertiary training in science – but it also helps to have tertiary training in logic, reasoning, analysis and argumentation, the type of training study in philosophy provides.
Philosophers of science in this context see their work as continuous and overlapping with that of scientists, and any division between the two as arbitrary. Good science requires clear concepts and reasoning, along with carefully constructed, large-scale theories, but the daily burden of scientific practice imposed by the laboratory or field makes finding the time for such theoretical work difficult.
Philosophers are specifically trained to do this particular work, and have the time and inclination to focus on it, so can contribute to the scientific enterprise. Importantly, philosophers of science are not denying that scientists too can engage in this practice – they are merely putting their hands up to contribute.
Some critics chastise philosophers of nature for being concerned with ‘non-empirical’ issues, such as what concepts mean or how arguments fit together. This is a mistake: these issues are important when developing a big picture of what science is saying. Scientific disagreement is all too often a product of talking past each other or a failure to recognise where the language used in related disciplines diverges. Take the concept of a ‘gene’, for example. Not only has our understanding of the gene changed over time, but also different areas of science have adopted different definitions of it. This is a natural product of scientific progress, but also a source of avoidable confusion and disagreement.
Second, there is ‘traditional’ philosophy of science. Like their famous forebears, Popper, Lakatos and Kuhn, philosophers engaged in this project seek to understand what science is and what accounts for the success of the scientific method. Some key contemporary questions include ‘what makes for a good scientific model?’, ‘is science value-free?’ and ‘how do generality and specificity trade off in scientific theories?’.
This work has direct relevance to contemporary debates in science, especially those concerning the replication crisis, publication bias and open science. However, there is still infrequent collaboration between philosophers and scientists. Very little of the debate regarding the replication crisis, for example, has occurred in philosophy journals, despite its obvious philosophical relevance.
To my mind the reasons for this are twofold. First, while there is the potential for philosophy of science to influence the sciences, understanding science is the sole aim for many traditional philosophers of science.
The other reason for a lack of collaboration could be the sociological and institutional barriers. The central roadblocks I see between philosophical and scientific engagement lie in, for a start, antiquated teaching: if I had a pound for every scientist who presents Popper’s outdated 1960s falsificationist view of science as the cutting edge of philosophy of science, I’d be a millionaire.
Second, there is a lack of infrastructure and support for interdisciplinarity within universities – for example, to co-locate cognate research across the humanities and sciences, funding and time for training and education in cross-disciplines, and recognition for interdisciplinary work in research assessments. Finally, there is the poor perception of philosophy of science from outside the discipline, something I have attempted to counter here.
In denying the current Australian bushfire crisis is directly caused by climate change, Prime Minister Scott Morrison (and others) take the hair-splitting that has become the norm in Australian political rhetoric to a new level. And it all rests on what is meant when we say that one thing is “a cause” of something else. Causation is, you see, a complicated beast to get your head around. Even though quite small babies have a grasp of the idea that one event can generate another—crying brings a caregiver, kicking moves toys in my baby-gym, and so on—causation in a broader sense is rarely that simple. Whilst the cynic in me is pretty sure that the PM knows this and is simply exploiting the vagaries of language for his own political ends, here is a lesson on the nuances of causation and explanation for ScoMo to mull over on his pre-Christmas Hawaiian holiday.
What causes any particular bushfire? Anyone who has tried to start a campfire knows that a spark alone is rarely enough to start a small fire, let alone a big one. What is required is fuel of the right type; the fuel has to be dry; the air can’t be too wet, and so on. Without these “background conditions” in place, one can go through whole boxes of matches without generating so much as a whiff of smoke or roasting a single marshmallow. Extending this conclusion to our current predicament, it is trivially true that every bushfire is directly caused by a spark of some sort, whether it be an arsonist’s match or a lightning strike (a fact that I do not wish in any way to downplay). If we want to know, however, why any given spark comes to result in a large-scale or out-of-control bushfire, we must look more broadly, think more carefully about what is meant by an explanation for an event, and go beyond the immediate cause.
Good explanations of events point to those things in the world which philosophers call robust difference makers for those events we want to explain. This is just a fancy philosopher’s way of pointing to those things that together make the outcome of interest highly likely. In the case of a campfire, for example, the robust difference makers are a spark of some sort, dry fuel and low humidity. Whilst the spark is necessary for the fire, it is not by itself usually sufficient for it. Other conditions must hold. In the case of a bushfire, we are (simply speaking) looking at something akin to the campfire. So, whilst the PM is right that it is only the spark that directly causes any one fire, adequately explaining a bushfire requires reference to the other conditions that robustly contribute to fire, such as hot, dry conditions, and the quality and quantity of available fuel. Now, whilst climate change may not be a cause of the spark that starts a fire (though that in itself is debatable), it is definitely a cause of hot, dry conditions and increased fuel quality and quantity. In this sense it is, at a minimum, a key explanation for our current bushfire emergency, and on all but the most restrictive accounts of causation, a cause (albeit, yes, an indirect one) of the fires.
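The point can be made concrete with a toy model. The probabilities below are invented purely for illustration; what matters is the structure: the spark is necessary but not by itself sufficient, and fire becomes likely only when the background conditions also hold.

```python
# Toy illustration of robust difference-making. The probabilities are
# invented for illustration only; the structure is what matters.

def fire_probability(spark: bool, dry_fuel: bool, low_humidity: bool) -> float:
    """Chance a fire starts, given a spark and the background conditions."""
    if not spark:
        return 0.0    # the spark is necessary: no spark, no fire
    if dry_fuel and low_humidity:
        return 0.9    # jointly, the background conditions make fire likely
    return 0.02       # a spark alone rarely achieves much

# The spark "directly causes" any fire that occurs, but the robust
# difference makers are the joint conditions that make fire highly likely.
print(fire_probability(spark=True, dry_fuel=False, low_humidity=False))  # rarely fire
print(fire_probability(spark=True, dry_fuel=True, low_humidity=True))    # likely fire
```

In this sketch, hot, dry conditions play the role climate change plays in the bushfire case: not the spark itself, but a robust difference maker for the outcome.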
As I said at the outset, the cynic in me suspects our PM is aware of all this and is just splitting hairs for political gain, but maybe I am wrong. In which case he really does need a lesson on causation for Christmas.