📢 8th Philosophy of Medicine Roundtable, 20-21 June 2019 – Paris, Université Paris 1, Centre Malher

The International Philosophy of Medicine Roundtable is the leading international institution in philosophy of medicine. Created 16 years ago to organise a first gathering of researchers in this fast-growing field, it has since expanded considerably and now counts more than 400 members. Its main event is a meeting held every two years. Following a call for applications, Paris and the IHPST were selected in 2017 to host the 2019 roundtable. The event will consist of two days of presentations selected through a call for papers, along with two invited lectures (Phyllis Illari, University College London, and Thomas Pradeu, Université de Bordeaux). Selected contributions will then appear in a special issue of Theoretical Medicine and Bioethics, edited by Jeremy Simon (Columbia University) and Maël Lemoine (Université de Bordeaux).

 

Thursday 20th

9h Introduction

9h15 Antoine C. Dussault. Naturalism without part-functionalism: Towards a holistic-naturalistic account of health

9h45 Alexander Geddes. Pregnancy, Parthood and Proper Overlap

10h15 Jonathan Grose. Disease, Sex, Senescence and Pregnancy. Who’s normal?

10h45 Coffee Break

11h Bengt Autzen. Is the Replication Crisis a Base-Rate Fallacy?

11h30 Plenary lecture: Thomas Pradeu (TBA)

12h30-14h Lunch Break

14h Oliver Galgut and Elselijn Kingma. Better than Randomisation? A philosophical defence of ‘dynamically allocated controlled trials’.

14h30 Adam La Caze. Randomized trials are not black boxes

15h Insa Lawler and Georg Zimmermann. Misalignment between research hypotheses and statistical hypotheses – A threat to evidence-based medicine?

15h30 Coffee break

15h45 Michael Wilde. Evidential pluralism in cancer epidemiology

16h15 Lynette Reid. The semantic content of “cancer”

16h45 Lauren Ross. The distinctiveness of disease explanation

17h15 Conclusion (First day).

 

Friday 21st

9h15 Carlo Martini and Mattia Andreoletti. Progressive Science Or Pseudoscience: The Case Of Medical Controversies

9h45 Bennett Holman. Medical Knowledge is What Doctors Know

10h15 Mark Tonelli. Skeptical Practice

10h45 Coffee Break

11h Thomas Grote and Philipp Berens. Evidence, Authority, and Accountability – On Algorithmic Decision Making in Medicine

11h30 Plenary lecture: Phyllis Illari (TBA)

12h30-14h Lunch Break

14h Virginia Ghiara and Federica Russo. Reconstructing the mixed mechanisms of health: the role of bio- and socio-markers

14h30 Kathryn Tabb. The Prospects of Precision Psychiatry

15h Talk 7

15h30 Coffee break

15h45 Leen De Vreese. Risk factors and prevention

16h15 Adrian Erasmus. Inductive Risk, Expected Utility, and the Consequences of P-Hacking in Medical Research

16h45 Mattia Andreoletti. What Are Drugs? Towards A More Rational Regulation Of Medical Interventions

17h15 Conclusion (Second day).

 

 

Ghiara & Russo, Reconstructing the mixed mechanisms of health: the role of bio- and socio-markers

It is widely agreed that social factors are related to health outcomes: much research has served to establish correlations between classes of social factors on the one hand and classes of disease on the other. However, our increased understanding of the biological factors involved in the mechanisms of disease formation has not been accompanied by a comparable understanding of the active role played by social factors. As a result, mechanistic understanding is generally obtained only at the biological level, through the use of biomarkers, while the social level is still studied through correlations between population-level variables.
In the past few years, the notion of “sociomarkers”, defined as measurable indicators of the social conditions in which a person is embedded, has started to be discussed in the health sciences (Barboza Solís et al., 2016; Shin et al., 2018). While the idea of sociomarkers has been developed in analogy to biomarkers, the question of whether they can be more than just “indicators” of broader social or health conditions is still relatively unexplored.
In this paper, we focus on one specific conceptual issue, namely whether, in analogy to biomarkers, which help trace the trajectory from exposure to disease at the biological level, we can use sociomarkers to trace the trajectory from exposure to disease development at the social level. We claim that, in order to properly understand how sociomarkers differ from social indicators, we need to endorse a philosophical approach to causation in terms of information transmission, which allows for the conceptualization of (bio- and socio-) markers as signals of the transmission of information. We show that, like biomarkers, sociomarkers are identified at the individual level, but their study in large data sets allows scientists to make inferences about the population rather than single individuals. To clarify our claim, we describe how a set of social measures called Adverse Childhood Experiences can be used as sociomarkers to explore the causal process from social exposure to disease.
We argue that the introduction of sociomarkers contributes to avoiding the risk of reductionism in at least two senses. Firstly, health and disease are not reduced to the biological realm; instead, social factors are active causes in disease aetiology. Secondly, while the study of sociomarkers is at the individual level, sociomarkers help understand how population-level social risks act by measuring both individual behaviours and individual-level effects that depend on structural causes active at the population level.
We conclude by claiming that the combination of sociomarkers and biomarkers helps to develop a general epistemological framework for thinking about causality across different types of factors (social, biological) and across different levels (individual, population).
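To make the individual-versus-population point concrete, here is a minimal Python sketch (not from the abstract; the cohort size, ACE distribution and dose-response are synthetic assumptions chosen purely for illustration): each row records a sociomarker for one individual, while the quantity estimated from the whole dataset is a population-level prevalence.

# Illustrative sketch with synthetic data and hypothetical variable names.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
ace = rng.integers(0, 9, size=n)        # individual-level sociomarker: ACE count 0-8
p = 0.05 + 0.02 * ace                   # assumed dose-response, illustration only
outcome = rng.random(n) < p             # synthetic binary health outcome

for k in range(9):
    mask = ace == k
    print(f"ACE={k}: prevalence {outcome[mask].mean():.3f} (n={mask.sum()})")
# The measurement is made per individual; the inference (prevalence by exposure
# level) is about the population, not about any single person.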

 

Stefano Canali, The Exposome as a Postgenomic Repertoire: Exploring Scientific Change in Contemporary Epidemiology

In the last decade, a new notion has emerged in epidemiology: the ‘exposome’. The exposome is a way to describe and characterise the totality of exposures experienced by individuals, distinguishing between generic external (e.g. social capital), specific external (e.g. environmental pollutants) and internal exposures (e.g. oxidative stress). The exposome is considered and presented as highly innovative, a new paradigm for epidemiological research.

However, I argue that the innovation of the exposome is better captured by the notion of repertoire (Ankeny & Leonelli, 2016). In this framework, scientific innovation is connected to conceptual, institutional, material, technological, organisational and economic elements of scientific research. I use the framework to argue that the exposome is based on the alignment of conceptual, material and social components. At the conceptual level, the repertoire is built on a commitment to understanding exposure as a dynamic and multi-layered issue, which implies an expansion of the notion and a broad characterisation of the environment. At the methodological, technological and material level, the repertoire employs omic technologies developed in the genomic context and the study of biomarkers, which have significant influences on the size of datasets and on interdisciplinarity. At the social and institutional level, the repertoire is organised in short-term projects, with funding framed in terms of public and environmental health and disease risk.

The specification of the components of the exposome allows me to show how many of these components have been transferred from other lines of research, including: the sequencing repertoire, which emerged in the genomic context and has since increasingly spread through the life and health sciences; exposure science, i.e. the discipline that studies human contact with external agents; and the biomarkers approach, which studies elements or characteristics that can be precisely measured and used as indicators of various processes. I show that the exposome repertoire is thus the result of the repurposing of these approaches for new audiences.

The analysis of the conceptual and material background of the exposome leads me to engage with discussions of innovation in the life and health sciences. I specify my claim and argue that the exposome can be considered a ‘postgenomic’ repertoire. I use the term in a historical sense, to describe research that employs genomic-based technologies, is increasingly aware of the complexity of interpreting genomic results, and engages critically with gene-centric approaches. On this basis, I discuss the conceptual implications for notions of exposure and environment and the epistemic impact of large omic datasets, thus connecting my account to discussions of the innovative character of postgenomics at the conceptual and methodological levels (Stevens & Richardson, 2015).

 

Thomas Grote and Philipp Berens. Evidence, Authority, and Accountability – On Algorithmic Decision Making in Medicine

In this paper we aim to scrutinize epistemological and ethical issues arising from the deployment of machine learning (ML) algorithms within the context of medical decision making. In medicine, making ‘good decisions’ constitutes much of the daily work of healthcare professionals. They need to accurately diagnose diseases based on limited evidence and determine the best treatment strategy among different possibilities for the patient at hand. Recently, there has been a surge of interest in ML for medical decision making (reviewed by Esteva et al. 2019), fueled by two studies demonstrating ‘expert-level’ accuracy of ML algorithms in diagnosing diabetic retinopathy from fundus images (Gulshan et al. 2016) and different types of skin cancer from images of skin lesions (Esteva et al. 2017). Such algorithms are assumed to augment the decision-making capacities of healthcare professionals by providing an additional evidential source. As healthcare professionals often rely on their instincts when making medical diagnoses (Stanley & Campos 2013), the promise of ML algorithms is that they might make medical diagnosis more reliable.
However, when making an informed decision, how much weight should a healthcare professional assign to the algorithm’s diagnosis? Due to the architecture of current ML algorithms, their deployment forces the healthcare professional to make trade-offs at the epistemic level. For one, the algorithm in question might make a more accurate diagnosis than the healthcare professional herself. Yet it does not supply her with additional information about the certainty of its diagnosis or with an explanation of the (causal) factors contributing to the diagnosis in question. Hence, when making her diagnosis or treatment decision, the healthcare professional is epistemically dependent (Pritchard 2015) on the algorithm, as she is unable to properly interpret its output. If we imagine a case where the initial judgement of the healthcare professional and the algorithm’s diagnosis differ, not much can be done to resolve the disagreement. Thus, she might either defer to the algorithm or place trust in her own abilities. An even stronger case regarding the normative force of ML algorithms can be made once we consider larger clinical settings. If we accept the proposition that the classification of diseases depends – at least partly – on evaluative judgments made by healthcare professionals (Kingma 2014), then ML algorithms might set the bar for how a certain disease is defined. This makes it necessary to consider how certain conceptions of disease enter the algorithm, e.g. through labeled data. However, one route also worth exploring is whether ML algorithms allow us to get a clearer picture of the functions or reference classes of diseases – which in the end would promote more reliable decision making in clinical settings.
The deployment of ML algorithms also raises problems with respect to the attribution of responsibility and accountability. In the literature on moral responsibility, it is widely agreed that what an agent is responsible for is intimately linked to what she is capable of knowing (Robichaud & Wieland 2017). As ML algorithms do not provide explanations for their diagnoses, the healthcare professional is (to some extent) ignorant about the accuracy of the relevant diagnosis. Admittedly, at the current state of affairs, this will not save her from blame. Nevertheless, things potentially become more complicated once deferring to the algorithm becomes the norm in clinical settings. Thus, deploying ML algorithms for medical decision making potentially diffuses the attribution of accountability. Further ethical complications arise with respect to patient autonomy. For instance, it has been argued that the deployment of ML algorithms might reintroduce a paternalistic model of medical decision making – in the guise of a ‘computer knows best’ attitude (McDougall 2018). While the paper will not give a conclusive answer to all the problems highlighted, we hope that it will lay the ground for further debate on the ethics of algorithmic decision making in medicine.
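As an illustrative aside (not part of the abstract), the following Python sketch uses a toy stand-in for a trained classifier and a hypothetical label set to show the kind of output at issue: the clinician receives a predicted label and a raw softmax score, with no calibrated measure of certainty and no indication of which factors drove the verdict.

# Illustrative sketch only: hypothetical labels and a placeholder "network",
# not the systems cited in the abstract.
import numpy as np

CLASSES = ["no retinopathy", "mild", "moderate", "severe"]  # hypothetical label set

def toy_classifier(image: np.ndarray) -> tuple[str, float]:
    """Placeholder for a trained deep network: returns (label, softmax score)."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    logits = rng.normal(size=len(CLASSES))         # stands in for network logits
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    k = int(np.argmax(probs))
    return CLASSES[k], float(probs[k])

label, score = toy_classifier(np.zeros((224, 224, 3)))
print(label, round(score, 2))
# All the clinician sees is `label` and `score`: the score need not be a
# calibrated probability, and nothing in the output indicates which image
# features (let alone causal factors) drove the prediction.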

 

Mattia Andreoletti, What Are Drugs? Towards a More Rational Regulation of Medical Interventions

The huge progress of biomedical research in recent decades has not been matched by corresponding conceptual developments. In the era of personalized and precision medicine, many fundamental medical categories, such as health and disease, are being wrecked (Manrai, Patel, and Ioannidis 2018). The concept of “drug” is no exception. Nonetheless, over the years, neither medical scientists nor philosophers of medicine have paid much attention to what counts as a medicine.
Understandings and definitions of “drug” have a significant impact on the social acceptability of new biomedical products. In fact, our regulatory schemes depend on a prior classification of treatments. The US Food and Drug Administration (FDA) classifies the medical products it regulates under three main categories according to their “nature”, each with its own testing standards. If the product is a technological object, it is considered a medical device; if the product is composed of biological compounds (e.g. sugars, proteins, nucleic acids, etc.), it is a biologic; finally, if the product is chemically synthesized, it falls into the category of drugs. Regulatory guidelines vary between categories: pharmacological interventions usually require full-fledged randomized controlled trials (RCTs), whereas medical devices and biologics can be approved on the basis of observational studies, case reports, or laboratory tests. Although there is an extensive philosophical literature on the validity of medical testing standards (e.g. Cartwright 2010; Clarke et al. 2013; Osimani 2014; Stegenga 2011), there is almost no discussion of the foundations of treatment classifications. This is not only interesting per se but also politically and socially relevant. On the one hand, treatments tested with different methods may enter the market with potentially different thresholds of safety and efficacy, eventually exposing patients to unnecessary harms (Kesselheim and Avorn 2017). On the other hand, flawed regulatory schemes might slow down the pace of innovation (Tabarrok 2017).
Currently, the definition of ‘drug’ is captured by (i) its intended use and (ii) its chemical composition. However, I claim that the increasing complexity of biomedical treatments makes this account obsolete. I therefore propose to drop part (ii) of the current definition and to incorporate the concept of risk. Moreover, I suggest adopting a concept of risk that can capture both the hazards and the exposure of medical interventions. The major consequence of this proposal is that the distinction between categories completely dissolves. I will look at personalized drugs (PDs) and microbiota-directed therapies (MDTs) as cases in point. At the moment, the FDA requires RCTs to grant approval to PDs, while MDTs are left unregulated. Instead, if we take into account the risks they harbor, it would be better to test MDTs in clinical trials while accepting different sources of evidence (e.g. mechanisms) for marketing PDs. If biomedicine delivers on its promises, future treatments will be very different from the traditional drugs we are used to. Regulators should be prepared to rise to the new challenge.
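A toy Python rendering of the contrast drawn above may help (the names, the multiplicative risk score and the threshold are my assumptions, not the author’s proposal or FDA policy): the current scheme keys the evidence standard to the product’s nature, whereas the risk-based alternative keys it to hazard and exposure.

# Illustrative sketch only; hypothetical values and a made-up risk rule.
from dataclasses import dataclass

def current_category(is_device: bool, is_biological: bool) -> str:
    # Current scheme: classification by the product's "nature"
    if is_device:
        return "medical device"
    if is_biological:
        return "biologic"
    return "drug"  # chemically synthesized

@dataclass
class Intervention:
    name: str
    hazard: float    # assumed 0-1 severity of potential harm
    exposure: float  # assumed 0-1 extent of exposure in the target population

def proposed_evidence_standard(x: Intervention, threshold: float = 0.25) -> str:
    # Risk-based triage: higher hazard*exposure calls for RCTs, lower risk for other evidence
    risk = x.hazard * x.exposure
    if risk >= threshold:
        return "randomized controlled trials"
    return "other evidence (e.g. mechanistic or observational studies)"

print(current_category(is_device=False, is_biological=False))
print(proposed_evidence_standard(Intervention("hypothetical MDT", hazard=0.7, exposure=0.6)))
print(proposed_evidence_standard(Intervention("hypothetical PD", hazard=0.4, exposure=0.05)))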

 

Kathryn Tabb. The Prospects of Precision Psychiatry

Since the turn of the twenty-first century, biomedical psychiatry around the globe has embraced the so-called “precision medicine paradigm,” a model for medical research that uses innovative techniques of data collection and analysis to reevaluate traditional theories of disease. The goal of precision medicine is to improve diagnostics by restratifying the patient population on the basis of a deeper understanding of disease processes. In the United States, the Precision Medicine Initiative has granted substantial funds to the project of gathering a million-person representative cohort to supply the big data that precision medicine needs to get off the ground. Meanwhile, basic methodological, financial, and rhetorical shifts by the National Institute of Mental Health (NIMH) reveal the government’s growing interest in funding basic science research with downstream applications to psychiatry, instead of clinical studies. Spokespeople for the NIMH have argued that psychiatry should be reconceived as “clinical neuroscience,” and have explicitly appealed to the paradigm of precision to make their case.

In this talk I argue that precision is ill-fitting for psychiatry for two reasons, illustrating each through an analogy with oncology, where precision has already proved promising. First, I show that in psychiatry, unlike in oncology, precision medicine has been understood as an attempt to improve medicine by casting out, rather than merely revising, traditional taxonomic tools. This radical approach to nosology is epistemically suspect, I show, because under it the demarcation of psychiatrically-relevant and psychiatrically-irrelevant basic science research becomes impossible. As such, there is no principled way for the NIMH to evaluate proposals in terms of their potential to produce advances in psychiatric treatment and care.

Second, I show that in psychiatry the term “biomarker” is often used for those signs or symptoms that allow patients to be classified and then matched with treatments. In oncology, on the other hand, a “biomarker” is often understood as a disease mechanism that is useful not only for diagnostics, but also for discovering causal pathways that drug therapies can target. This ambiguity of the term means that many of the claims made for “precision psychiatry” amount to very different sorts of advances than those seen in other fields where precision medicine is applied. I show that because most biomarkers in other fields are genetic, and because psychiatric genetics holds little promise for revolutionizing patient care, the role that biomarkers will play in translational and clinical research in psychiatry must be different.

I conclude that, given these differences between how the precision medicine paradigm operates in psychiatry and in other medical fields like oncology, the NIMH’s appeal to precision to justify its enthusiasm for “clinical neuroscience” over psychiatry’s traditional projects is deeply problematic. Psychiatry cannot progress through the simple application of neuroscientific conclusions to the clinic, unmediated by psychopathology. In addition, psychiatry cannot hope – any time soon at least – to discover the sort of causal biomarkers that have been transformative in oncology and other fields. While “precision psychiatry” may be successful rhetoric, it is not promising as a paradigm.

 

 

