Download the program and the abstracts (1.8 MB)



To the abstracts of:

Donato Bergandi  |  Dominic J. Berry  |  Cécilia Bognon-Kuss  |  Ute Deichmann

Barbara Demeneix  |  Antonino Drago  |  Franck Dumeignil  |  Miguel Escribano Cabeza

Martin Feelisch  |  Jean-Baptiste Fini  |  Grant Fisher  |  Sara Franceschelli

Michele Friend  |  Gregor Greslehner  |  Quentin Hiernaux  |  Sarah Hijmans

Jean-Pierre Llored  |  Petra R. M. Maitz  |  Benjamin J. McFarland  |  Guglielmo Militello

Daniel J. Nicholson  |  Klaus Ruthenberg  |  Jérôme Santolini  |  Stéphane Sarrade

Olivier Sartenaer  |  Joachim Schummer  |  Massimiliano Simons  |  Michael Tobin



To Wednesday 26 / Thursday 27



Session I [10:45-12:15]
New Scientific Works & Ethical Implications

10:45. Stéphane Sarrade:

Green Chemistry and ecology

Chemistry, as we know it, is based on a paradox: it must continuously produce ever greater quantities while taking care to limit its impact on our environment. Industrial chemistry has therefore become aware of its role in protecting the planet and has entered a new era whose challenge is to produce more while consuming and rejecting less. This goal is the basis of the concept of green chemistry, which aims at a lower environmental impact.

Chemistry is ubiquitous in our daily lives. Baking a cake in the oven at home is the result of a chemical action. Toothpaste, plastics and sunscreens are all products developed through chemical processes. Green chemistry is an approach to chemistry inspired by the concept of sustainable development. It incorporates the optimization of process efficiency, the economy and recycling of raw materials and by-products of chemical reactions, the reduction of ultimate waste, and the impact on human health and the environment. Modern chemistry is indeed based on a paradox: the need to produce ever greater quantities (because of a growing population, and for our development and comfort) while reducing the impact of chemistry on our environment and our health, and ensuring sanitary safety.

The challenge of green chemistry is to produce more while consuming and rejecting less.

It is about producing more to meet the needs of a growing humanity, expected to reach 9 billion people by 2030-2050, and consuming less to protect the planet (by extracting less raw material in the first place and thus rejecting less waste).

Thus, at the dawn of the twenty-first century, the concept of green chemistry has emerged to help chemistry meet five major challenges:

- Produce food (agriculture, livestock, ...);

- Produce drinking water;

- Produce drugs;

- Produce energy;

- Protect the environment.

Green chemistry represents a real break between the chemistry of the twentieth century and that of the twenty-first. In the twentieth century, the design and development of chemical processes were essentially driven by the goal of optimizing reactions. Today, chemical processes stem from a global conception that takes into account the nature and quantity of the matter involved (raw materials and solvents), the required energy expenditure (the notion of soft chemistry), the reduction of waste through recycling, and the ability to analyze the materials involved at every stage on reduced sample quantities. In green chemistry, ecology is embedded in the overall concept.



11:15. Barbara Demeneix and Jean-Baptiste Fini:

Importance of fetal thyroid hormone signaling for brain development: a window of vulnerability for endocrine disruption

Brain development requires thyroid hormone. Different processes, from proliferation and neuronal-versus-glial fate determination to differentiation and myelination, are all modulated by thyroid hormone. The last two decades have witnessed major insights into the molecular controls determining thyroid hormone availability and action in the brain, notably the requirement for thyroid hormone from the earliest stages of neurogenesis, the first trimester of pregnancy in humans. Recent data show that maternal levels of thyroid hormone during early pregnancy are associated with offspring IQ and brain structure [1]. Another revolution in thinking comes from the importance of tight control of tissue and cell levels of thyroid hormone by deiodinases and membrane-located thyroid hormone transporters (THTs).

In parallel with increased understanding of the mechanisms holding tissue availability of thyroid hormone within strict physiological limits, other research has shown how vulnerable each level of thyroid hormone signalling and action is to endocrine disrupting chemicals (EDCs), particularly during early pregnancy. Both experimental and epidemiological data suggest that interference with thyroid hormone orchestration of human brain development could be implicated in the increased incidence of neurodevelopmental disease, such as autism spectrum disorder, as well as in significant IQ loss at the population level [2]. Our own data show that EDC mixtures present in human amniotic fluid interfere with thyroid hormone signalling and brain development in Xenopus embryos [3]. Our more recent results, obtained in collaboration with the EDC-MixRisk consortium (http://edcmixrisk.ki.se), allow epidemiological data on prenatal exposure to different mixtures to be tested on multiple models (zebrafish, Xenopus, human brain organoids). The results underline how exposure to EDCs, including thyroid disrupting chemicals, during this prenatal period can be associated with both low birth weight and language delay. The socio-economic and ethical implications of exposure will also be briefly discussed.


1. Korevaar, T.I., et al., Association of maternal thyroid function during early pregnancy with offspring IQ and brain morphology in childhood: a population-based prospective cohort study. Lancet Diabetes Endocrinol, 2016. 4(1): p. 35-43.

2. Demeneix, B., Toxic Cocktail: how chemical pollution is poisoning our brains. Oxford University Press., 2017.

3. Fini, J.B., et al., Human amniotic fluid contaminants alter thyroid hormone signalling and early brain development in Xenopus embryos. Sci Rep, 2017. 7: 43786.



11:45. Grant Fisher:

The ethics and epistemology of stem cell toxicity models

Human pluripotent stem cells (stem cells capable of differentiating into all three germ layers) have long been regarded as providing the potential for the development of therapies to treat degenerative diseases. But they also provide novel platforms to model disease in vitro, for drug discovery, development, and toxicity testing (McGivern & Ebert 2014). Induced pluripotent stem (iPS) cells have become especially important in this respect. They are thought to offer ethical advantages by not directly contributing to the destruction of human embryos and by providing patient-specific therapies that avoid immunological rejection. Moreover, pluripotent stem cells are increasingly seen as providing novel in vitro models to study chemical toxicity without animal experimentation. Stem cell-based toxicity assays are seen as offering potentially more epistemically reliable, as well as more ethical, models to study the effects of chemical pollutants on human development and reproduction (Faiola et al 2015). This paper explores the ethical, epistemic, and regulatory implications of stem cell in vitro models. For example, iPS cells have been criticized on ethical grounds for failing to avoid embryo destruction (Devolder 2015), because human embryonic stem cells are epistemically indispensable to stem cell biology to ensure their pluripotent potential (Fagan 2013). In the context of stem cell drug and pollution toxicity studies, similar ethical concerns arise, but with potentially additional considerations. For example, what are the epistemic and ethical implications of novel forms of drug testing using stem cell models when human tissues can be used in ways not permissible for human subjects? What are the ethics of biomimetic systems? What impact does the ethics and epistemology of animal experimentation have on determining the viability of iPS toxicity models? What consequences follow from how we conceive the technoscience of stem cell toxicity modelling? For example, drawing parallels between cultured stem cell lines and model organisms in the philosophy of biology (Fagan 2013) may have important implications for the regulation and funding of stem cell research.


Devolder, K. The Ethics of Embryonic Stem Cell Research (Oxford: Oxford University Press, 2015).

Fagan, M.B. The Philosophy of Stem Cell Biology: Knowledge in Flesh and Blood (Basingstoke: Palgrave MacMillan, 2013).

Faiola, F., Yin, N., Yao, X., & Jiang, G. “The Rise of Stem Cell Toxicity”, Environmental Science & Technology, 49 (2015): 5847-5848.

McGivern, J.V. & Ebert, A. D. “Exploiting pluripotent stem cell technology for drug discovery, screening, safety, and toxicology assessments”, Advanced Drug Delivery Reviews 69-70 (2014): 170-178


Session II [13:30-17:00]
Historical Studies


13:30. Antonino Drago:

A common foundation of biology and chemistry according to a Leibniz’s suggestion

It is scarcely remarked that two celebrated historians of biology reach the same conclusion about the foundations of their science. Ernst Mayr states that “there exist two Biologies, of the functional causes and the evolutive causes”. In fact, a harsh debate between Darwinians and Mendelians began at the turn of the 20th century. According to Stephen J. Gould, Darwin’s Origin presented “a dichotomy” between “functional adaptation and structural constraints”, on which the whole historical development of biology depends. “Th[is] dichotomy persists in present times. I believe in this persistence as a fundamental issue of Biology.”

I characterize the two biological theories as follows. Being derived from a principle (natural selection), Darwin’s theory is organised as a deductive, axiomatic theory (AO). Moreover, by dealing with all ages (its time runs from −∞ to +∞), it is based on actual infinity (AI). Mendel’s theory, instead, is organized around solving a problem (how the genetic patrimony is transmitted; PO). Moreover, it is based on potential infinity (PI), since its time is discrete, one generation after another. I characterise the dichotomy within the foundations of modern biology as follows: functionalism means being subjected to a function, i.e. an authoritarian law (AO) which, coming from outside an organism, is experienced by it as AI; structuralism means that an organism both aims at solving the problem of the unity of its structure through intrinsic connections (PO) and undergoes changes through discrete steps of its components (PI). One may define as a “paradigm” a theory whose couple of choices is so influential as to obscure the other choices and thus to depreciate the corresponding theories. Inside biology, the AI&AO choices of Darwin’s evolutionism generated a paradigm which depreciated Mendel’s PI&PO theory. Thus, from its beginning, theoretical biology split into two incommensurable theoretical models of a theory. They represented an incommensurability phenomenon for historical reasons. Incommensurability between two theories for philosophical reasons is caused by a difference in their couples of choices. It is manifested by radical changes in the meaning of basic notions, e.g. the notion of “species” (either phenotype or genotype), “teleology”, “genetic variation”, etc. Yet incommensurability does not imply untranslatability; Ernst Mayr reconciled Darwin’s theory with Mendel’s through radical changes in the meanings of some basic notions, e.g. the notion of species (concerning the kind of organization) and the notion of gene in the genome (concerning the kind of infinity).

Owing to last century’s studies, the foundations of mathematics split into classical mathematics (AI) and constructive mathematics (PI), while the foundations of logic split into classical logic and intuitionist logic; the latter pair respectively govern AO and PO. Newton founded mechanics by choosing AO (his three principles) and AI (infinitesimals). Over three centuries these choices constituted a paradigm which depreciated all other theories, in particular chemistry, whose choices are the alternative ones, i.e. PI (mathematics at the lowest possible level) and PO (chemistry solves the problem of how many elements compose matter). Remarkably, just as Newton’s mechanics dominated the whole of theoretical physics, so Darwin’s theory, by sharing the same couple of choices AI&AO, dominated the whole of biology.

The two dichotomies are recognised as Leibniz’s “two labyrinths of human reason”: the “infinite” (either actual or potential) and “either law or freedom”; the latter subjectively represents the two kinds of (theoretical) organization, i.e. either an organization in which all consequences are deduced from fixed premises (principles-axioms) (for both social behaviour and one’s mind), or the organization of a free search aimed at solving a crucial problem.


Drago A. (1990), “History of the relationships Chemistry-Mathematics”, Fres. J. Anal. Chem. 337, 220-224; 340, 787.

Gould S.J. (2002), The Structure of Evolutionary Theory, Cambridge, MA: Harvard U.P., 161, 251-252, 261, 278-279, 505-506.

Howard J. (2009), "Why didn't Darwin discover Mendel's laws?", Journal of Biology, 8, 15, 1-8.

Mayr E. (1982), The Growth of Biological Thought, Cambridge, MA: Belknap Press (pp. 10 and 58 of the It. tr.).

Mayr E. (1970), Populations, Species, and Evolution. Cambridge, MA: Harvard University Press.



14:00. Miguel Escribano Cabeza:

G.W. Leibniz and the problem of constitution and genesis of the organic bodies. Eduction and preformationism

Leibniz understands his Dynamics as a 'general ontology' capable of offering a foundation to the continuity and coordination of the processes characteristic of the three natural kingdoms (mineral, vegetal and animal). However, in the case of the constitution and generation of organic bodies there seems to be a discontinuity between two perspectives of analysis that we can call 'chemical' and 'biological':

(1) On the one hand, according to Leibniz organic bodies are part of the masses and, therefore, the understanding of their complexity is approachable by the 'eductive model' that is developed by chemistry.

(2) On the other hand, in the context of the problem of embryogenesis, Leibniz's preformationist position entails that any organic form or structure proceeds (through metamorphosis) from another organic form in which it is preformed.

Does this chemical-eductive model of the composition of bodies have a genetic dimension?

Does it contradict the preformationist explanation? In the background of this problem lies the ambiguity with which Leibniz defines both the nature of organic bodies (entities halfway between substances and aggregates) and the boundary between organisms and other forms of non-organic organization (such as chemical species or minerals). The aim of this paper is to explore the tension between the principle of continuity and the problem of the difference between living bodies and inert bodies in Leibniz’s thought. This tension is central to the question of the origin of life, even though in Leibniz we do not find an explicit formulation of this question.


Blank, A. (2017) “Protestant Natural Philosophy and the Question of Emergence, 1540–1615”, in: Magyar Filozófiai Szemle 2017/2. Philosophy and Science: Unity and Plurality in the Early Modern Age, pp. 7-22.

Duchesneau, François. 2010. Leibniz le vivant et l'organisme. Paris: Vrin.

Duchesneau, François. & Smith, Justin E.H. 2016. The Leibniz-Stahl Controversy. New Haven and London: Yale University Press.

Leibniz, Gottfried W. 1923-. Sämtliche Schriften und Briefe. Ed. Deutsche Akademie der Wissenschaften. Darmstadt/Leipzig/Berlin: Akademie Verlag.

----------- 1978. Die philosophische Schriften von Gottfried Wilhelm Leibniz. 7 vols. Ed. C.I. Gerhardt. Hildesheim: Georg Olms Verlag.

----------- 1989. G. G. Leibnitii Opera Omnia. 6 vols. Ed. L. Dutens. Hildesheim: Georg Olms Verlag.


Smith, Justin E. H. (ed.). 2006. The Problem of Animal Generation in Early Modern Philosophy. Cambridge: Cambridge University Press.

----------- 2011. Divine machines: Leibniz and the sciences of life. New Jersey: Princeton University Press.



14:30. Sarah Hijmans:

Oxygen and the reactions of life in Lavoisier’s chemistry

Ever since the late 18th century, biological processes have been explained in the same way as chemical processes. For Lavoisier, the study of different types of oxidation allowed for an extension of his research program to include the chemistry of life. By analyzing the research on oxygen in (post-)Lavoisierian chemistry, this paper will explore one of the ways in which biology and chemistry interacted in the late 18th century.

During the period known as the chemical revolution, a new understanding of a number of chemical transformations enabled Lavoisier (1743-1794) to develop a coherent theoretical structure which came to replace the theory of phlogiston. Lavoisier’s research program of the 1770s and 1780s showed that the ancient elements of air and water were actually compounds, and that acids and calxes were compounds of oxygen. Lavoisier became extensively involved in plant and animal chemistry from 1785 onwards, not because of his interest in these subjects themselves but because they helped him in the formulation of general chemical theories: the composition of plant and animal substances matched his description of the simple principles underlying all matter (Holmes 1985, p. 408). All of these questions were embedded in the same general research program, with oxygen as a key element, which Lavoisier pursued throughout most of his career.

Besides the understanding of composition, this extensive oxygen-centered research program also led to a new understanding of chemical processes: combustion and calcination were identified as reactions involving oxygen, and this view was extended to respiration in 1790 (Crosland 1994). Thus, Lavoisier identified respiration as a type of slow combustion, and this constituted one of the first satisfactory explanations of a biological process in purely chemical terms, which marked a major step in the history of science (Crosland 1994). Whereas previous attempts to explain biological operations reduced them to mechanical processes, the new system of chemistry explained life as a series of chemical transformations, many of which were a type of oxidation.

In the 19th century, even though some of Lavoisier’s theories - such as the identification of oxygen as the universal principle of acidity - were proven wrong (Siegfried 1988), chemistry continued along Lavoisierian lines. Works on different kinds of oxidation continued to be published in chemistry journals well into the 19th century, and chemists continued to study the composition and the reaction of plant and animal substances in order to formulate chemical theories. This reflects the lack of an ontological separation between biological and chemical processes in the eyes of the chemists of the time: for Lavoisier and his successors, processes inside living bodies operated according to the same principles as those outside of them (Nickelsen 2015, p. 43). Thus, the study of biological processes and composition was complementary to their work on chemistry, rather than separate from it. Likewise, Lavoisier’s investigations into fermentation, respiration, and plant and animal chemistry were intimately connected to his research into combustion, and they cannot be understood separately from one another (Holmes 1985, p. xv; Holmes 1988). An analysis of the role of oxygen in Lavoisier’s chemistry will therefore show how studies of biological processes helped formulate his revolutionary system of chemistry.


Maurice Crosland, 1994, In the Shadow of Lavoisier: The Annales de Chimie and the establishment of a new science, BSHS Monographs 9, Oxford: The Alden Press, pp. 181-192.

Frederic Holmes (1988), Lavoisier’s conceptual passage, Osiris – The Chemical Revolution: Essays in Reinterpretation, 4: 82-92.

Frederic Holmes (1985), Lavoisier and the Chemistry of Life: An Exploration of Scientific Creativity, Madison: The University of Wisconsin Press.

Kärin Nickelsen (2015), Explaining Photosynthesis, Models of Biochemical Mechanisms 1840-1960, History, Philosophy and Theory of the Life Sciences vol. 8, Dordrecht: Springer.

Siegfried, Robert (1988), “The Chemical Revolution in the History of Chemistry”, Osiris – The Chemical Revolution: Essays in Reinterpretation, 4: 34-50.



15:30. Cécilia Bognon-Kuss:

From Stoffwechsel to metabolism: The rise of a chemical biology

In this paper I investigate the epistemic conditions of the emergence and development of the concept of metabolism throughout the 19th century, starting from the German notion of “Stoffwechsel” (the metamorphosis of matter in living tissues) and contrasting it with Claude Bernard’s theory of “indirect nutrition”. I will focus on the conceptual shifts that enabled an understanding of living organisms as self-regulating machines, able to convert the substance of foodstuffs into their own matter and to synthesize genuine organic compounds. On this basis, I will argue that the study of nutrition and the correlative development of a chemically informed metabolic model of life challenged a widespread commitment to certain forms of vitalism shared by many researchers in the life sciences, a commitment expressed by a realistic or constitutive use of vital forces in physiology and organic chemistry. I will show that the development of a chemical understanding of vital processes and laws of organization strove to overcome the antinomy between vital organization and the molecular level. Finally, against some overemphasis on the role various vitalisms played in the emergence of modern biology, I will draw conclusions on the role played by this conception of metabolism in the emergence of the modern notion of organisms as individuals, i.e. able to remain themselves as organic wholes in virtue of their constant relation with their environment.



16:00. Sara Franceschelli:

Instabilities first! A relational interpretation of Turing’s approach to morphogenesis

Turing’s “The Chemical Basis of Morphogenesis” can be viewed as a mathematician’s proposal to bridge biology (the problem of cellular differentiation in the growing embryo) and chemistry (the diffusion of two interacting chemicals on a surface)1. In this article Turing explores the mathematical properties of a system of two partial differential equations, known as reaction-diffusion equations, to understand spontaneous pattern formation in nature.

He shows mathematically how instabilities in a process of reaction-diffusion of two interacting chemical substances (which Turing calls “morphogens”) can produce spatial patterns, starting from a homogeneous state, through symmetry breaking. As Turing states, the “investigation is chiefly concerned with the onset of instabilities.” Although he is one of the founders of computer science, Turing does not connect his developmental mathematics to the then-emerging sciences of information and molecular biology. In 1944, Erwin Schrödinger had introduced a powerful informational metaphor in What is Life?, describing the gene early on as a code-script2. Yet Turing makes no mention of it, nor of the incipient understanding of the genetic program in the context of the development of molecular biology. In Turing’s mathematical model, even if Turing does not exclude that genes may be a particular kind of morphogen, they play only a catalyzing role, defining the reaction rates.
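For reference, the two-morphogen system Turing analyses has the general reaction-diffusion form (the notation below is the modern shorthand, not Turing’s original symbols):

```latex
\frac{\partial u}{\partial t} = f(u,v) + D_u \nabla^2 u, \qquad
\frac{\partial v}{\partial t} = g(u,v) + D_v \nabla^2 v
```

Here u and v are morphogen concentrations, f and g their reaction kinetics, and D_u, D_v their diffusion coefficients. Turing’s key observation is that a homogeneous steady state that is stable in the absence of diffusion can become unstable when the two diffusion coefficients differ sufficiently, so that diffusion, ordinarily a homogenizing force, is precisely what drives pattern formation.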

Quite reasonably, Turing’s approach to morphogenesis is often described as “physicalist” since, as Turing asserts, “the theory [of morphogenesis] does not make any new hypothesis; it merely suggests that certain well-known physical laws are sufficient to account for many of the facts.” Nevertheless, on the basis of the definition of “morphogen”, Turing’s neologism for the substances involved in his equations, I will defend the thesis that it would be misleading to class Turing’s approach as materialist reductionism. I will instead defend a relationalist interpretation of Turing’s approach to morphogenesis: the morphogen has a purely relational definition, not a materialist one; it is defined by its actions rather than by its substance.

I will then argue that this interpretation of Turing’s theoretical stance offers a perspective from which to shed light on his theoretical ambitions in biology, ambitions fully in resonance with the theoretical biology research program of Conrad Hal Waddington, with Needham’s biochemical approach, and with their notions of “evocator” and “competence”3.

I will thus criticize the received view of Turing as the typical mathematician or physicist interested only in applying (too) abstract models to biology without a pertinent and solid biological agenda, and defend instead the idea of the bridging role played in Turing’s approach, between biology and chemistry, by the (mathematical) study of instabilities.


1 Alan M. Turing, “The Chemical Basis of Morphogenesis”, Philos. Trans. Roy. Soc. B237 (1952), 37-72.

2 Erwin Schrödinger, What is life?, Cambridge University Press, 1944.

3 Conrad Hal Waddington, Organisers and genes, Cambridge University Press, 1940.



16:30. Ute Deichmann:

Data, theory, and scientific belief in early molecular biology: Pauling's and Crick's conflicting notions about the genetic determination of protein synthesis and the solution to the 'secret of life'

Opinions on the relationship between data and theory (here broadly understood as any universal statement that purports to describe and explain phenomena of the natural world) vary greatly and fall between the two poles of empiricism and anti-empiricism. On the one hand, the Duhem-Quine thesis holds that theories are so radically underdetermined by data that data are insufficient to determine what scientific beliefs a scientist should hold. On the other, proponents of big-data science have declared a new era of empiricism in which the volume of data accompanied by computational tools enables data to speak for themselves, free from theory.

Motivated by these contradictory arguments, I analyse, both historically and conceptually, the generation of two highly important conflicting theories in early molecular biology, namely Linus Pauling's structural theory of protein synthesis and Francis Crick's informational theory of protein synthesis, both generated in the 1950s. My goals are:

  • To explore the relationship between experimental data and theory in Pauling's structural and Crick’s informational theories
  • To show that both Pauling and Crick based their views on only a few, and nearly the same, experimental data
  • To make clear that despite this apparent 'underdetermination', scientific theory choice was possible at the time, though not on the basis of data alone
  • To argue that:
    • Under certain circumstances, highly probable, testable, and fruitful theories can be based on only a few data, and that data in biology do not speak for themselves.
    • A reliance on data alone would have prevented the generation of Pauling's concept of molecular specificity and molecular disease, as well as Crick's concept of the 'sequence hypothesis' (sequence principle) (that also became the basis of 'big data science' today).


Crick F.H.C. 1958. On protein synthesis, Symposia of the Society for Experimental Biology 12: 138-163.

Crick, F.H.C. 1988. What mad pursuit: A personal view of scientific discovery. New York: Basic Books.

Duhem, Pierre. [1906] 1954. Physical theory and experiment. in: Philip P. Wiener (trans.). The aim and structure of physical theory. Princeton University Press.

Kay, Lily E. 1993. The molecular vision of life. Oxford Univ. Press.

Kitchin, R. 2014. Big Data, epistemologies and paradigm shifts. Big Data and Society April-June 2014: 1-12.

Pauling L., Niemann C. 1939. The structure of proteins. Journal of the American Chemical Society 61: 1860-1867.

Pauling L., Itano H.A., Singer S.J., Wells I.C. 1949. Sickle cell anemia, a molecular disease. Science 110: 543-548.

Polanyi, M. [1958] 1962. Personal knowledge: Towards a post-critical philosophy. University of Chicago Press.





Session IIIa [10:00-12:45]
Ontological Issues


10:00. Keynote speaker: Stephan Guttinger:

Understanding the nature of molecules: Process ontology and scientific practice

In recent years philosophers have made renewed efforts to develop and defend a process ontology. Here I will argue that this project suffers from a key weakness, namely its inability to account for the level of molecules. The molecular realm therefore remains a bedrock for substance ontology. I will claim that process philosophers can overcome this roadblock by pursuing a practice-informed scientific metaphysics. I will use a case study from protein biology to further develop this point and to highlight the importance of a particular type of practice in this context, namely what I will call ‘energy-level management’ (ELM) practices.



11:15. Donato Bergandi:

Rethinking emergence

Emergence is an elusive phenomenon. Currently, emergent properties are often confused with collective properties. In particular, what are the emergents: properties, relations, entities, or laws? The time has come to transform “magic emergence” into a philosophically sound scientific emergence. To achieve this, we take inspiration from the transactional ontology of John Dewey and Arthur F. Bentley. We make a kind of ontological bid on reality: we maintain that evolution and history are processes intrinsically transactional in nature. And, emblematically, in ecology, where evolution and history play a very important and constitutive role, we see the transactional discipline par excellence.



11:45. Jean-Pierre Llored:

Relational ontologies in chemistry, biology, and ecology: An investigation

The starting point of this paper is the scrutiny of the ways relational ontologies are used in current chemistry, ecology, and biology. Following this line of enquiry, we will point out: (1) the co-definition of chemical relations (transformations) and chemical relata (bodies) within chemical activities; (2) the constitutive role of the modes of intervention in the definition, always open and provisional, of ‘active’ chemical bodies; and (3) the mutual dependence of the levels of organization in chemistry. We will then focus our attention on the ways philosophers of immunology investigate biological identity and redefine organisms. We will also highlight ecological development, niche construction, and the constitutive role of contexts in the expression of genes in epigenetics. In doing so, we will propose new developments about the ways of conceiving: (1) individuals, organisms and subjecthood; (2) relational and intrinsic properties; (3) emergence and reduction; and (4) new concepts from which we could develop a new kind of environmental ethics in connection with a new kind of humanism, that is to say, not an ‘abstract humanism’ cut off from the whole Earth and the other forms of life, but, by contrast, a humanism thought in mutual dependence with other ‘milieux,’ be they human or not.


Boi, L. (2008). “Epigenetic Phenomena, Chromatin Dynamics, and Gene Expression. New Theoretical Approaches in the Study of Living Systems”, Rivista di Biologia/Biology Forum, 101 (3), 405-442.

Harré, R. & Llored, J.-P. (2013). “Mereologies and molecules”, Foundations of Chemistry, volume 15, n° 2, 127-144.

Hull, D. & Ruse, M., (Eds.). (2007). The Cambridge Companion to the Philosophy of Biology, Cambridge: Cambridge University Press.

Merleau-Ponty, M. (1963). The Structure of Behavior, A.L. Fisher (trans.), Boston: Beacon Press.

Morange, M. (2006). “Post-genomics, between reduction and emergence”, Synthese, 151, 355-360.

Newman, S. A. & Müller, G. B. (2000). “Epigenetic mechanisms of character origination”, Journal of Experimental Zoology, 288, 304-317.

Pradeu, T. (2012). The Limits of the Self: Immunology and Biological Identity, Oxford: Oxford University Press.



12:15. Quentin Hiernaux:

Do chemical substances have ecological existence conditions?

Chemical compounds, and even more so chemical elements, are often considered a priori as substances that exist in themselves, independently of any level of biological or ecological organization. Isn't biological organization an emergent phenomenon of physico-chemical organization, and therefore a secondary reality? Aren't the chemical elements and many chemical compounds generated in space, in the very heart of stars, thereby underscoring the autonomy of a degree of physico-chemical organization independent of life?

A first difficulty with this intuitive thesis is that the variety of chemical compounds seems to increase with the diversity of life forms. Most chemical compounds found on Earth are likely to be intimately linked to life and ecological processes. Indeed, many organisms produce unique chemical compounds that serve their physiology, defence or communication. Wouldn't proteins, vitamins and toxins then be restricted to the terrestrial environment? What is the probability that vitamin A or hemoglobin would appear on the uninhabited fringes of the galaxy?

The chemical compounds synthesized by organisms therefore indicate a very strong dependence between a high degree of chemical organization and a certain ecological context (life on Earth in the broadest sense). But aren't the assemblies of atoms that constitute the elements, metals and minerals ontologically independent of biological activity? Indeed, they are found in places other than Earth (think of iron, carbon, sulphur, etc.), and some elements are even more abundant in space than on Earth. Should we not therefore distinguish the treatment of the basic chemical elements from that of some of their assemblies?

The question then is: “what criteria can be used to determine whether or not a chemical compound has ecological conditions of existence?” Can we consider that the basic chemical elements of the periodic table have no ecological conditions of existence? In reality, the existence of certain elements is extremely abstract if we cut it off from its conditions of emergence or preservation on Earth. Moreover, all elements with an atomic number above 94 are known only through human synthesis; no known natural (astro)physical process leads to their existence elsewhere in the Universe. And what about the transactinides, or the superactinides, superheavy elements whose existence has remained hypothetical to this day? Couldn't these elements, which have a very strong ecological dependence on intelligent biological activity, lead us to reinterpret the existence of all the chemical elements if not in an ecological sense, in a much more




Session III b [14:00-16:30]
Ontological Issues


14:00. Guglielmo Militello:

Structural and organisational conditions for being a machine

The concept of ‘machine-like system’ has been extensively employed in the conceptual framework of neo-mechanistic accounts, since many biological mechanisms have been regarded as the functional components of a system that behaves like a machine (Bechtel and Richardson 1993 (2010); Glennan 1996). Moreover, the analogy between machines and certain biological macromolecular structures plays a pivotal role in nanotechnology, as some kinds of macromolecules are artificially reproduced by treating them as machine-like systems. Yet it has recently been argued (Moore 2012; Skillings 2015) that molecular devices (biological as well as synthetic) are not machines, since they are subject to physicochemical forces different from those acting on macroscopic machines. Nicholson (2013) has stressed that, since the concept of ‘machine’ implies that of a designer, biological macromolecules cannot be considered machines.

Despite this, the structural and physicochemical conditions that allow both macroscopic machines and microscopic devices to work and perform new functions, through a combination of elemental functional parts, have not yet been examined. In order to fill this void, this talk has a threefold aim: first, to clarify the structural and organisational conditions of macroscopic machines and microscopic devices; second, to determine whether the machine-like analogy fits nanoscale devices; and third, to assess whether the machine-like analogy is appropriate for describing the behaviour of some biological macromolecules.

In order to address these issues, the presentation will be divided into three parts. Firstly, the criticisms (Moore 2012; Nicholson 2013; Skillings 2015) levelled at the machine-likeness of molecular devices will be discussed. Particular attention will be devoted to the question of whether or not the different structures and the different physicochemical behaviour of macroscopic and molecular machines prevent us from employing the term ʻmachineʼ at the molecular level. Secondly, the structural and physicochemical conditions underlying both macroscopic (e.g. mechanical machines) and microscopic (biological as well as synthetic) devices will be examined. Finally, a comparative analysis of synthetic (e.g. artificial DNA architectures) and biological (e.g. myosin, dynein, and F0F1ATPase) machines will be carried out so as to appreciate their differences and the distinctive character of biological molecular devices.

In line with Militello and Moreno (2018), this study suggests that, even though macroscopic and molecular machines exhibit different component parts, a distinct design, and obey different physical laws (Newtonian mechanics underlying macroscopic machines and quantum mechanics governing microscopic ones), both kinds of devices share a common organisation which is the ontological basis for being a machine: they are sets of functional components that harness a flow of energy so as to do work and perform function(s). This essential characteristic stresses that, contrary to what has been claimed by Moore (2012), Nicholson (2013) and Skillings (2015), a number of microscopic devices can rightly be regarded as machines.

It will be argued that the machine-likeness of certain biological macromolecules was of paramount importance in prebiotic and biological evolution, because it opened up a new domain of functional diversification: new forms of mechanistically-complex functions could be achieved through different combinations of parts.


Bechtel W, Richardson RC (1993 (2010)) Discovering Complexity: Decomposition and Localisation as Strategies in Scientific Research. MIT Press, Cambridge (Mass)

Glennan S (1996) Mechanisms and the Nature of Causation. Erkenntnis 44: 50-71

Militello G, Moreno Á (2018) Structural and organisational conditions for being a machine. Biol Phil 33:35 https://doi.org/10.1007/s10539-018-9645-z

Moore PB (2012) How Should We Think About the Ribosome? Ann Rev Biophys 41: 1-19

Nicholson DJ (2013) Organisms ≠ Machines. Stud Hist Phil Biol Biomed Sci 44: 669-678

Skillings DJ (2015) Mechanistic Explanation of Biological Processes. Philos Sci 82: 1139-1151



14:30. Daniel J. Nicholson:

Is it appropriate to conceptualize biological macromolecules as molecular machines?

Biology has a long tradition of importing ideas from engineering in its conceptualization of the entities it investigates, a tradition that stretches back to the rise of modern science in the seventeenth century and its mechanization of the world picture. Organisms, from Descartes on, have been ontologically understood in analogy with machines. And with the advent of molecular biology in the mid-twentieth century, proteins and other macromolecules also came to be regarded as machines of microscopic size. In his Chance and Necessity, Monod described the protein as ‘a veritable machine’, but it wasn’t until the publication of Alberts’ 1998 manifesto ‘The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists’ that this way of thinking about subcellular assemblies became pervasive.

I believe that the success of the ‘molecular machine’ concept has to do with the fact that it brings together the two traditional concerns of molecular biology: structure and specificity. To say that a particular macromolecule is a machine is to draw attention to both the importance of its structural configuration in determining its activity and the specificity and regularity of its mode of operation. In recent years, however, new empirical findings made possible by the introduction of novel methodologies have begun to cast serious doubts on the theoretical adequacy of the ‘molecular machine’ model. Some of its problems pertain to its undue emphasis on structure, and others to its undue emphasis on specificity.

With regards to structure, it has become clear that proteins in their native environments actually behave more like liquids than like solids. Their structure is soft and fluid, not hard and rigid like that of a machine. Moreover, many proteins do not have an ordered three-dimensional conformation, but instead roam the cell as unfolded polypeptide chains. These intrinsically disordered proteins (IDPs) challenge the old idea that proteins bind to their substrates because their shapes match like a lock and key. IDPs only acquire stable functional conformations when they bind to appropriate targets, and some remain disordered even after binding. This is very different from how we think about structure and its relation to function in a machine.

With regards to specificity, it is becoming apparent that functional promiscuity is the rule rather than the exception for proteins, even enzymes—a phenomenon dubbed ‘moonlighting’. What a protein does is largely determined by the cellular milieu in which it is embedded. The molecular machine concept justifies ignoring this context, allowing researchers to focus their attention on the structure of the mechanical device and the ‘mechanism’ of its operation, and this inevitably leads to a misleading understanding of how proteins act. Additionally, the ambiguity, contingency, and context-dependence of protein-protein interactions are hard to reconcile with the exquisite specificity and tightly constrained operation that we would expect from a molecular machine composed of various proteins. Similarly, the transient nature of protein associations conflicts with the fixity and durability that we intuitively associate with the arrangement of parts in a machine.

In the final part of the paper, I explain how these problems are currently prompting a shift away from the molecular machine model towards the view that biological macromolecules are best described as pleomorphic ensembles.



15:30. Klaus Ruthenberg:

Acids: Chemical or biological kinds?

In philosophical reflections and discussions about so-called natural kinds (which, according to many analytically oriented authors, correspond to groupings or orderings that do not depend on humans), chemical entities hold an outstanding position because, for example, chemical elements (like those represented by the periodic table) and pure substances (like water) are often considered paradigmatic examples. Presumably, many philosophers and historians – and without any doubt most chemists – hold that acids are natural kinds, too. Many cultural and natural scientists would subscribe to the claim that acidity is a real and natural, rather than a conceptual, description. Given the historical record, however, a “realistic” point of view with respect to acids (and any other chemical species) – that is, the independence of stuff descriptions from human activities – is not easy to maintain, if not fundamentally wrong.

Another important aspect, supported by historical investigations, is that among the earliest experiences with sour substances was contact with vinegar. In addition, other vegetable and animal materials served to provide substantial knowledge by distillation, among it the knowledge of the chemical individual now called formic acid, which was distilled from ants. Hence, a good part of the early descriptions of acidity comes from manipulations of, and experiments with, living material. Although there was practically no elaborated theory of acidity until the late 19th century, some acidic substances slowly became individualized long before, and in some important cases those individualizations were not performed purely chemically.

Apart from the making and characterizing of substances like formic, acetic, and citric acid in a “biological” framework, two other important points deserve mention in the discussion of bridging the philosophies of biology and chemistry. Both the pH concept and the glass electrode are results of bioanalytical research; that is, even here there is some overlap between the science of substances and the science of life – and perhaps between their philosophies as well.

In the present contribution, I shall apply a pronounced practical (or empirical) point of view and address (chemical and biological) materiality rather than (physical) matter.


Berthollet, C. L. (1801). Untersuchungen über die Gesetze der Verwandtschaft (Ernst Gottfried Fischer, Trans.). Leipzig: Engelmann (First published as Recherches sur les lois de l’affinité, 1801).

Chang, H. (2012). Acidity: The Persistence of the Everyday in the Scientific. Philosophy of Science, 79, 690-700.

Cremer, M. (1906). Über die Ursache der elektromotorischen Eigenschaften der Gewebe, zugleich ein Beitrag zur Lehre von den polyphasischen Elektrolytketten. Zeitschrift für Biologie, 47, 562-607.

Hooykaas, R. (1958). The concepts of “individual” and “species” in chemistry. Centaurus, 5, 307-322.

Lowry, T. M. (1915). Historical introduction to chemistry. London: Macmillan & Co.

Michaelis, L. (1922). Die Wasserstoffionenkonzentration. Teil I (2nd ed.). Berlin: Verlag von Julius Springer (First published 1914).

Moran, B. (2005). Distilling knowledge. Alchemy, Chemistry, and the Scientific Revolution. Cambridge, Mass: Harvard University Press.

Plato. (1888). The Timaeus of Plato. Edited, with introduction and notes, by R.D. Archer-Hind. London & New York: Macmillan & Co.

Ruthenberg, K. & Chang, H. (2017). Acidity: Modes of characterization and quantification. Studies in History and Philosophy of Science, 65-66, 121-131.

Sørensen, S. (1909). Ergänzung zu der Abhandlung: Enzymstudien. II. Mitteilung. Über die Messung und die Bedeutung der Wasserstoffionenkonzentration bei enzymatischen Prozessen. Biochemische Zeitschrift, 22, 352-356.


16:00. Gregor Greslehner:

What is the explanatory role of the structure-function relationship in immunology?

According to a common slogan in the life sciences, “structure determines function”. While both ‘structure’ and ‘function’ are ambiguous terms that denote conceptually different things at various levels of organization (Godfrey-Smith, 1993), the scientific focus has traditionally been on the three-dimensional shapes of individual molecules, including molecular patterns that play a central role in immunology. The specificity of binding sites of antibodies and pattern recognition receptors has given rise to the so-called “Janeway paradigm”, according to which the three-dimensional shape of molecules is the key to understanding immunological function: microbial pathogen-associated molecular patterns (PAMPs) bind specifically to pattern recognition receptors (PRRs) of the host, and by “recognizing” signature molecular motifs of pathogens an immunological reaction is triggered (Medzhitov and Janeway, 1997). If correct, these molecular structures would be crucial for solving the riddle of how the immune system is able to distinguish between self and non-self – and between harmful and beneficial commensals.

However, this narrative faces a major challenge, as molecular motifs are shared among pathogens and symbiotic commensals alike. Both express a similar set of molecular patterns that are specific for prokaryotes. Other instances are known in which one and the same molecular motif can trigger opposing immune reactions, depending on the presence or absence of additional signals in the cellular context (Sansonetti, 2011). It is speculated that a second “danger” signal might be needed in order to trigger an immune response. Whatever the nature of this second signal might be, it will require stepping away from the fixation on molecular patterns. I argue that it is rather structural motifs of networks which carry the explanatory weight in these immunological processes. I suggest distinguishing between different meanings of ‘structure’ and ‘function’, to which separate explanatory roles can be attributed. While the three-dimensional shape of signature molecules (structure1) can be used to explain their function1 – understood as biochemical properties and activities – their immunological function2 – biological roles, like immunogenicity – can only be explained with respect to higher-level structures2, i.e. the interaction network of molecules and cells. These different explanatory roles also imply different explanatory accounts. The former remains within a physico-chemical framework, whereas the latter rather calls for mechanistic and topological explanations (Machamer et al., 2000; Huneman, 2010).

Studying the interaction topology and dynamics of structures2 with mathematical tools, modeled as signaling games, promises to shed new light on these interaction processes, which immunologists increasingly describe as equilibrium states between multiple interaction partners (Muraille, 2013). Rather than focusing only on the presence or absence of molecular signatures, topological properties explain the features of these networks and their activities beyond the molecular interactions between PAMPs and PRRs. This way, opposing effects resulting from the same kind of molecular structure1 can be explained by differences in their “downstream” organizational structure2. While still preserving the centrality of structure-function relationships, I suggest keeping these conceptually different notions of ‘structure’ and ‘function’ and their respective explanatory roles apart.


Godfrey-Smith, P. (1993). Functions: Consensus without unity. Pacific Philosophical Quarterly, 74(3):196–208. doi:10.1111/j.1468-0114.1993.tb00358.x.

Huneman, P. (2010). Topological explanations and robustness in biological sciences. Synthese, 177(2):213–245. doi:10.1007/s11229-010-9842-z.

Machamer, P., Darden, L., and Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1):1–25. doi:10.1086/392759.

Medzhitov, R. and Janeway, Jr., C. A. (1997). Innate immunity: The virtues of a nonclonal system of recognition. Cell, 91(3):295–298. doi:10.1016/S0092-8674(00)80412-2.

Muraille, E. (2013). Redefining the immune system as a social interface for cooperative processes. PLoS Pathogens, 9(3):e1003203. doi:10.1371/journal.ppat.1003203.

Sansonetti, P. J. (2011). To be or not to be a pathogen: that is the mucosally relevant question. Mucosal Immunology, 4:8–14. doi:10.1038/mi.2010.77.




Session IV [9:30-15:00]
 Bridging Concepts & Approaches


9:30. Keynote speaker: Samir Okasha:

Can the philosophy of biology provide any lessons for the philosophy of chemistry?

The philosophy of biology came to fruition in the 1970s, and is now an established academic discipline and a major component of the contemporary philosophy of science syllabus. In this talk, I reflect on the motivation and impetus behind the rise of philosophy of biology, with an eye to possible lessons for the philosophy of chemistry. I distinguish three sorts of enquiry that all fall under the "philosophy of biology" rubric as usually understood. These are: (i) the use of biology as a touchstone to assess particular positions in the general philosophy of science; (ii) addressing conceptual and methodological issues that arise within biology itself; (iii) drawing on biological ideas to help tackle traditional philosophical problems. I give examples of all three sorts of enquiry, and ask whether parallels can be found in the philosophy of chemistry. I close by highlighting what I regard as some of the wrong turns that philosophy of biology has taken over the years, and again consider whether the philosophy of chemistry can learn anything from this.



10:45. Olivier Sartenaer:

A place of its own? On the role of chemistry in the dispute over the reducibility of biology to physics

Although it is always tricky – if not misguided – to speak of a consensus in philosophy, I think it is fair to say that, as of today, there is a widespread and relatively “consensual” view in philosophy of science as to the relationship between the biological and the physical sciences. In a nutshell, despite the fact that biological objects are commonly seen as complex interactive webs of physical objects – so “physicalism” is taken to prevail –, biological objects are often considered not fully and adequately explainable on the sole basis of the epistemic resources of physics – to the effect that “(epistemological) reductionism” fails (see e.g. Mayr 2004). Such a stance, referred to as “non-reductive physicalism”, typically comes hand in hand with a rather unapologetic prejudice with respect to chemistry. As things stand, biology would have a unique and idiosyncratic status that underlies its irreducibility to physics, a status that a science like chemistry would fall short of having, to the effect that it could be fully amalgamated with physics without further consideration (a practice often implicitly conveyed through the use of the umbrella term of “physico-chemistry”; see e.g. Weber 2005).

In a series of publications (e.g. in Rosenberg 2006), Alex Rosenberg recently challenged the alleged “received view” of non-reductive physicalism in philosophy of biology, by arguing in favor of a full-fledged reductionist stance (dubbed “Darwinian reductionism”). The gist of his argument to this effect lies in the (plausibly controversial) idea according to which the “Principle of Natural Selection”, which Rosenberg takes to be the only law of biology underlying all biological explanations – be they proximate or ultimate –, is actually an underived, fundamental law of chemistry. Accordingly, all biological explanations turn out to be, at their core, chemical explanations and, ultimately – given the prejudice introduced above –, physical explanations.

In the proposed talk, I’ll first reconstruct the full logical structure of Rosenberg’s argument, supplementing such a reconstruction with a concrete illustration coming from a case study located at the crossroads between evolutionary and developmental biology, namely the formation of eyespots on the wings of certain species of butterflies. I’ll then turn to a critical examination of the argument’s (explicit and implicit) premises. In particular, I’ll stress an important limit of the argument that, to the best of my knowledge, has surprisingly gone unnoticed in the literature. In line with the main theme of the conference, my overall goal will be to discuss the possible implications that some of Rosenberg’s claims have for the way in which one should construe the role and place of chemistry in the dispute over the potential reducibility of biology to physics. Between a forced alliance with physics (as the sciences of inert matter) and a forced alliance with biology (as the sciences of natural selection), one may wonder whether chemistry could finally have a place of its own, coherently with recent works in philosophy of chemistry (e.g. Hendry 2011).


Hendry, R. F. (2011). Philosophy of Chemistry. In S. French & J. Saatsi (Eds.), The Continuum Companion to the Philosophy of Science (pp. 293-313). London: Bloomsbury.

Mayr, E. (2004). What Makes Biology Unique? Considerations on the Autonomy of a Scientific Discipline. Cambridge: Cambridge University Press.

Rosenberg, A. (2006). Darwinian Reductionism. Or, How to Stop Worrying and Love Molecular Biology. Chicago: The University of Chicago Press.

Weber, M. (2005). Philosophy of Experimental Biology. Cambridge: Cambridge University Press.



11:15. Jérôme Santolini & Martin Feelisch:

Redox chemistry as the principal biological language

“The representation of the world is an intrinsic and structuring feature of the dynamics of Life; all life is semiosis” [1]

The ability of living entities, from organelles to ecosystems, to represent the living processes and structures that enable and support their function appears to be an obvious requirement for maintaining, extending and perpetuating Life. Although biologists have been aware of this for some time[2], the necessity for a ‘representation capacity’ can today be conceptualised more accurately at three different levels: (i) the organism’s implicit order: organisms represent protected spaces in which numerous distinct biostructures are co-embedded to support a dynamic network of interactions comprising multiple superimposed and co-evolving bioprocesses that enable specific biological functions in an efficient manner; the regulation, coordination and multi-level integration of these various processes rely on the ability of the organism to monitor and control them adequately; (ii) resilience to stress and metabolic adjustment: all living entities face physiological challenges as a result of major variations in physico-chemical factors or bioenergetic requirements related to environmental stresses or metabolic challenges; in order to survive, an organism must sense these variations, translate them into physiologically meaningful information and adjust its metabolic functioning to match the altered conditions; (iii) ecological and evolutionary interaction: beyond gene-centered replicative processes, the interaction between organisms and their environments is a continuous co-evolutionary process that leads to the reciprocal modification of physico-chemical parameters.

The obligate representation of one’s inner and outer environment must rely on an efficient operative language. Until now, attempts to define this “representation capacity” have appealed to concepts such as awareness, sentience, and consciousness, most of which originate from the social/psychological fields and are burdened by a negative anthropocentric and/or logocentric bias. There have been a few attempts to address the mechanism of representation in biophysics or in genetics[3]. We propose here that chemistry, and above all redox chemistry, constitutes the universal language of living entities, supporting the representation ability – as defined above – of all living species.

In this presentation we will summarize our current knowledge on the various manifestations of Redox Chemistry as a Biological Language.

  • We will first describe how Redox Biochemistry, by bridging all living processes and the major chemical cycles of the lithosphere, hydrosphere and atmosphere, provides a universal gateway connecting the inanimate and the living world. By engaging both inorganic and complex organic molecules, redox chemistry acts as an interpreter between the outside and the inner milieu, the Self and the Environment, translating the physico-chemical properties of the internal and external milieu into the reflection of a ‘status’ that the organism can interpret and adjust to accordingly.
  • We will develop the concept of “Redox landscape” that provides a conceptual framework to apprehend the way an organism interacts with its environment and converts environmental cues into physiological actions. By “understanding” (sensing/filtering/integrating) changes in its redox environment an organism will be able to modify its own redox biology through a cascade of redox reactions that connect the external changes with the most appropriate physiological answer (‘best match’).
  • This “Redoxome” constitutes a redox-connected system that is continuously synchronized and locally stabilized to allow the integration and differentiation of major functions and functionalities. Over time, new reactions eventually emerge along with new structures, associated with new physiological processes but always embedded within a coherent and synchronized system. Thereby, the redox language becomes more and more sophisticated: new words, sentences and grammar emerge into new meanings, shaping an increasingly complex Redox Biology.
  • This richness of the redox language is crucial in allowing an organism to adjust its physiology to the ever-changing environment. We will illustrate how Redox language, by enabling the integration and coordination of major physiological processes, constitutes a unique strategy to adjust, endure, survive, adapt and evolve.

As redox chemistry lies at the core of all living processes, pathological conditions could be envisioned as a difficulty in “speaking the language”: the language itself becomes unintelligible to the organism, and the words and sentences either no longer make sense or cannot find an adequate match among the biological responses on offer. If drastic inner/outer environmental changes are only transient, the organism may yet become able to interpret those changes and cope, translating into the emergence of new sensing, signalling or detoxifying strategies.

Conclusion: The translation of environmental cues and physiological processes into a “chemical language” challenges the concepts we currently use to describe and interpret biological phenomena. In particular, the continuous and integrative concept of the redox landscape questions processes that rely on quantitative limits (Stress/Homeostasis) or on “ontological” borders, such as Symbiosis/Immunity. A global approach based on Redox Systems Chemistry might assemble all these apparently contradictory processes into various declensions of distinct Redox Communication.



11:45. Michael Tobin:

Species as Quanta? State discreteness in biology

It has long been noted that life forms are discretized into well-defined states (species) along phenetic, biological, and/or phylogenetic lines. Observed gaps in the fossil record have often raised questions concerning the paucity of transitional forms. Inferring from these discontinuities, this paper examines the concept of speciation from the modeling standpoint of biological quantization as it has manifested itself from Schrödinger (1944) and Simpson (1944) onward. The soft implication of species quantization hinted at in Gould and Stanley's punctuated equilibrium hypothesis is then critically sifted and quantified in light of population genetics and stochastic fusion dynamics at the macromolecular level. Fisher's fundamental theorem of natural selection and the quasispecies model are applied in these regards. An abductive analogy is then drawn between the reductive physical chemical (quantum) notions of isotope and/or element and the phenotypic observation of distinct instances of life forms: Can a biocommensurate metric analog for "energy level" be conceptualized? Contra Abbott et al. (2008), this paper focuses on the etiological application of the physical science quantum concept to speciation itself and not the proposed quantum mechanisms implicated in the origin of life.
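For orientation, the two formal tools invoked in the abstract have standard textbook statements; the rendering below uses generic symbols and is not taken from the talk itself.

```latex
% Fisher's fundamental theorem of natural selection
% (discrete-generation form): the change in mean fitness
% attributable to selection equals the additive genetic
% variance in fitness divided by the mean fitness.
\Delta \bar{w} \;=\; \frac{\operatorname{Var}_{A}(w)}{\bar{w}}

% Eigen's quasispecies equation: x_i is the relative frequency
% of sequence i, f_j the replication rate of sequence j,
% q_{ji} the probability that replication of j yields i, and
% \bar{f} the mean fitness, which keeps frequencies normalised.
\dot{x}_i \;=\; \sum_{j} f_j \, q_{ji} \, x_j \;-\; \bar{f}(t)\, x_i,
\qquad \bar{f}(t) \;=\; \sum_{j} f_j \, x_j .
```

On one reading of the abstract's analogy, the "quantized" species would correspond to stable peaks (quasispecies) of this frequency distribution in sequence space.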


Abbott, D., Davies, P.C.W., Pati, A.K., (eds): Quantum Aspects of Life. Imperial College Press, London (2008)

Schrödinger, E.: What is Life? Cambridge University Press, Cambridge (1944)

Simpson, G.G., Tempo and Mode in Evolution. Columbia University Press, New York, (1944)



13:30. Michele Friend:

Conceptual analysis in chemistry and biology using formal languages

I present an original formal language for conceptual analysis in macro-chemistry. The language is designed to accommodate the idea that chemistry is about stuff, conceived of as a collection of macro-stable co-located properties. Operations, or events, occur involving the collections of co-located properties. After an operation, macro-stability is reached to yield (sometimes) another set of co-located properties. The co-located properties, the operations and the whole transformation of one set of co-located properties through an operation to another set of co-located properties, are each qualified by an open operator combining a ceteris paribus clause and a notion of milieu.

The purpose of developing the language is to help with conceptual analysis in macro-chemistry. In particular, the language is developed to help analyse what ceteris paribus means in chemistry, what aspects of milieu matter, and the notions of emergent properties, affordances, thermodynamics, time sensitivity and proportion sensitivity. All of these are present in reasoning in macro-chemistry. To accommodate macro-biology, we would have to add at least the notions of life, death, individual, health, reproduction, growth and evolution. We add these to the formal language used for macro-chemistry, and see how they help with conceptual analysis in macro-biology.

The formal language makes plain that a living individual is a pocket of negative entropy. A dead individual re-enters the entropic process. The ecosystem, family or community is part of the milieu within which macro-biological pockets of negative entropy live and die. Health is characterised in terms of ceteris paribus and affordances; these characterise “normal” functioning. Reproduction can be analysed in terms of pockets of negative entropy and individuation, and therefore numbers of individuals. Growth is change within a pocket of negative entropy that usually increases in volume, and evolution is change in the properties of a species: new types of pockets of negative entropy arise that reproduce.

Formal languages are a filter. They represent certain aspects we are concerned about in reasoning and obliterate what is unimportant in reasoning. For this reason, a formal language that accommodates these many features in biology is helpful for conceptual analysis.



14:00. Benjamin J. McFarland:

The power of dissipative systems for chemical and biological explanation

Philosophers and scientists have long wondered about the emergence of mind from matter and of culture from nature. Two thinkers have proposed explanations for each of these puzzles, which exhibit interesting parallels despite coming from different scientific disciplines. Terrence Deacon has proposed a theory based on chemical explanation for the emergence of mind from matter, and Rene Girard has proposed an anthropological theory based on biological explanation for the emergence of culture from nature. Both of these theories build complex phenomena from contingent, natural events without foresight and follow the pattern of dissipative cycles.

Deacon, in Incomplete Nature, suggests that dynamic arrangements of matter (such as metabolic cycles, self, and ultimately mind itself) develop from constrained, dissipative cycles, which he classifies as thermodynamic, morphodynamic, and teleodynamic. In Deacon’s view, self-agency emerges from progressive constraints (such as those that focus attention) on randomly firing neurons. Memory mirrors the outside world and forms internal maps for gathering resources and increasing survival.

Girard, in his corpus of work, suggests that cultural arrangements such as rituals and taboos developed to constrain the violence that results when the minds in a society mirror and mimic each other’s intentions and desires. When the object of desire cannot be shared, violence inevitably results, which Girard argues is best calmed by a system of scapegoating and sacrifice. Girard’s theory is similar to Richard Dawkins’s unit of cultural transmission (the meme), but it better accounts for the origin of the symbolic, mimetic system, because scapegoating restrains violence and allows the community to survive. Girard proposes that the practice of expelling scapegoats decreases intraspecific violence and increases evolutionary fitness, which accounts for the system’s selection and survival.

Girard compared his system of cultural scapegoating to biological observations from the animal kingdom: for example, geese form relational bonds by redirecting violence onto a third party, and crowded colonies of cats exhibit mob-like behavior (like Girard’s violent “mimetic crisis” that provokes the need for a scapegoat). These species gather and expel aggression in a pattern that matches that of a dissipative biochemical cycle. Humans follow the same pattern, but our minds can better mirror the outside world, leading to increased memory, violence, and mimetic crisis, and requiring the scapegoat resolution. This dissipative, cultural mechanism can be discovered repeatedly without extended foresight or extreme fortune.

If self-organizing, dissipative cycles can unite the thoughts of these two disparate thinkers, then Ilya Prigogine’s Nobel-winning chemical concept may help explain both the emergence of human minds and of human culture from the natural world.


Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter. W. W. Norton & Company, 2011.

René Girard, Evolution and Conversion: Dialogues on the Origins of Culture. Bloomsbury Academic, 2008.



14:30. Petra R. M. Maitz:

‘Équilibre vivant’. RNA folding creates new insights into self-organising systems: a metamodernism?

We know that the genetic determinism of the last century is anything but true. Instead, the RNA world hypothesis is based on a much more complex interpretation. The method of allowing many voices to talk about the subject is intended here. Although RNA, a preliminary stage of DNA, is self-replicating and self-organising yet considered very unstable, it has changed the whole concept of natural science; above all, it has changed evolutionary theory due to its dual function (enzyme and information memory). What is the meaning of the metamodernism brought about by this new understanding of chemical configuration changes?

Philosophising on the course of chemical, biochemical and laboratory-driven change in the world is reflected in the cooperation between the sciences. To collaborate and to make a critical philosophical contribution is almost an artistic perspective.

Pictures of evolutionary matters are pictures of science, which we create on the basis of our knowledge. Currently, most biochemists and evolutionary researchers believe in a chemical starting point.

Scientific images are forms of non-written knowledge transfer. The context of knowledge transfer in which they are read has to be determined a priori. The predominant claim regarding scientific images is that they are not purely creations of the human mind, but refer to knowledge of nature as part of observing nature.

The ribosome, above all, has come to the centre of attention in all recent visualisation trials, because it is there that protein synthesis takes place and RNA interference exerts its effects.

Do genes provide the blueprint for life, or do they rather, as a “spirit of the cell”, trigger the process of life? Is the RNA world hypothesis to be seen as such a spirit, because RNA is dually present, continually changing identity, once as an enzyme and then as an information memory, able to let both functions happen in a possible self-fertilisation?

Scientists around the world are working together on the visualisation of the RNA interference technique. Because RNA regulates the DNA strand, we can say that the genetic material in our cells is subject to the principle of self-organisation.

Given today’s knowledge, epigenetics, as Waddington understood it, can be seen as the transfer of certain features to descendants which are not, or not completely, due to the DNA sequence itself, but to changes of gene regulation and gene expression in development.

One can also say that we are still involved in permanently recording algorithms and are constantly exposed to a flow of information. RNA is both question and solution. There is a spatial quality between nucleic acids and proteins which decides on the function as a nucleic acid or as an enzymatic catalyst. The experiments suggest that the RNA world hypothesis is correct. From a philosophical point of view, it is difficult to find an approach to the problem of materiality in biochemistry and chemistry. One would first have to clarify what is meant by analysing materiality and space.

We are both a part of nature and a part of spirit. That is human, and this humanity allows non-human participants, such as images, to serve as proof of changes in meaning.

Do the RNA world hypothesis and folding theory carry a kind of metamodernism or posthumanism?


Peter Schuster: “Chemolution: From Chemistry to Evolution”, Symposium, Institute for Theoretical Chemistry, University of Vienna, 2016;

Renée Schroeder / Ursel Nendzig: Die Henne und das Ei, Residenz, 2012; Petra Maitz: Visualisation of Evolution, de Gruyter, 2014;

Massimo Pigliucci: Phenotypic Plasticity; Massimo Pigliucci & Gerd Müller (eds.): Evolution – The Extended Synthesis, MIT Press, 2010;

Joachim Schummer: Die künstliche Herstellung von Leben im Labor, edition unseld 2011;

Bruno Latour: Die Hoffnung der Pandora, Suhrkamp, 2002;

Donna Haraway: When Species Meet, University of Minnesota Press 2008 & Staying with the Trouble: Duke University Press 2016;

Humberto Maturana and Francisco J. Varela: Der Baum der Erkenntnis, Fischer, 2010;

Isabelle Stengers: Spekulativer Konstruktivismus, Merve 2008;

Timotheus Vermeulen, Robin van den Akker, Alison Gibbons, Metamodernism: Historicity, Affect and Depth after Postmodernism, Rowman & Littlefield Int. 2017;

Michael Yarus: Life from an RNA World: The Ancestor Within. Harvard University Press, Cambridge 2010.



Session V [15:00-17:30]
Synthetic Biology


15:30. Franck Dumeignil:

Hybrid catalysis as a new topical concept bridging biotechs and chemistry

To date, catalysis has strongly supported the development of the fossil-resource-based chemical industry, e.g., for upgrading oil cuts (hydrotreatments, cracking reactions, etc.) and efficiently valorising molecules into chemicals. Nowadays, 95% of all products (by volume) of the chemical industry are synthesised using at least one catalytic formulation. More than 70% of all industrial chemical processes involve catalysts, which can be heterogeneous (80% of catalytic processes), homogeneous (15%), or enzymatic (5%). More than 80% of the added value of the chemical industry is generated by catalysis. The associated catalytic processes are now mature, and most of the work on the chemical valorisation of petro-resources consists of the optimisation of existing processes and catalytic formulations.

Since the beginning of this century, catalyst development has experienced a new boom, with exponential growth in scientific articles dealing with the catalytic conversion of biomass and the upgrading of biomass-derived platform molecules. With the progressive rarefaction of fossil resources and stringent environmental pressure, biomass is indeed attracting much interest when considering the sustainability of our civilisation [1]. Catalysis is a key technology at the core of biorefinery development [2], in which catalytic reactions have historically been envisaged as sequential processes combining homogeneous, heterogeneous and enzymatic catalysis, optionally interwoven with thermochemical reactions.

In this context, catalysis faces new challenges. For example, biomass-derived molecules are highly functionalised, and thus very selective catalysts must be developed. Further, biomass-derived molecules are much more reactive than petroleum-derived ones, which could be seen as an advantage, but which leads to much faster coking issues that need to be properly addressed. In addition, the presence of water in most of the streams influences the surface properties of the catalysts: their in-situ functions are most probably different from those that can be deduced from conventional ex-situ analysis. Accordingly, new specific characterisation technologies must be developed that take into account the presence of water in the working atmosphere for gas-phase reactions; operando methodologies for the liquid aqueous phase can also be envisioned. Finally, the presence in real feeds of various impurities inherent to upstream biomass treatments makes it necessary to rethink catalytic formulations in order to ensure sufficient tolerance to these new kinds of performance-killing compounds. Thus, a large number of strategies must be applied to design a new generation of smart heterogeneous catalysts for biomass processing technologies.

The next step will undoubtedly be to combine biotech and heterogeneous catalysts in a one-pot process. These two worlds already coexist as sequential reactions in some processes, but full integration is now of topical interest. They can be combined only within overlapping windows of conditions, e.g., temperature, and the window of common temperatures has strongly expanded thanks to progress in both fields. On the one hand, enzymes can now work over a larger temperature range, from low temperatures (e.g., psychrophilic enzymes, discovered in the Antarctic, work at temperatures as low as 0 °C and are currently used in washing powders/liquids so that they are efficient even in cold water) to high temperatures (thermophilic and hyperthermophilic enzymes, present in deep-sea hydrothermal vents, can still work at temperatures over 110 °C). On the other hand, some heterogeneous catalysts are adapted to this range of temperatures: for example, gold-based catalysts can, in certain cases, develop strong activity at temperatures well below 0 °C. Therefore, the interval between 0 °C and 110 °C can be used. Temperature is only one example, and a set of other common processing windows can be identified.

The smart integration of these technologies requires the development of new concepts intimately merging biotechnologies and chemistry. ‘Hybrid’ catalytic systems are thus under development [3], which are creating new scientific interactions between fields that were previously not much connected.


[1] Franck Dumeignil, Thomas Dutoit, Martine Benoit, Jean-Pierre Llored, Philippe Sabot."Biosourced Chemistry, Biorefineries: The Origin, the Status and the Fate of Bioeconomy in our Civilization". Submitted to Foundations of Chemistry.

Biorefineries – An Introduction, edited by Michele Aresta, Angela Dibenedetto and Franck Dumeignil, de Gruyter, ISBN 978-3-11-033153-0 (2015).

[3] Franck Dumeignil, Marie Guehl, Alexandra Gimbernat, Mickaël Capron, Nicolas Lopes Ferreira, Renato Froidevaux, Jean-Sébastien Girardon, Robert Wojcieszak, Pascal Dhulster, Damien Delcroix. "From Sequential Chemoenzymatic Synthesis to Integrated Hybrid Catalysis: Taking the Best of Both Worlds to Open up the Scope of Possibilities for a Sustainable Future". Catalysis Science & Technology, Vol.8, 5708–5734 (2018).



16:00. Joachim Schummer:

Knowing through making: A comparison between synthetic chemistry and synthetic biology

The recently emerged synthetic biology differs from the received biotechnologies, such as metabolic engineering, by emphasizing the creation of living beings rather than their mere modification. Apart from the technological use of the products, it is promised that the creations will also improve our basic understanding of life. Various epistemic claims have been made that revive the classical verum factum principle (“Knowing Through Making”): from bold claims such as “What I cannot build, I cannot understand” to more modest statements according to which the creation of living beings brings about some important understanding. Such claims are frequently justified by the analogy between today’s synthetic biology and 19th-century synthetic chemistry.

In this paper I scrutinize the epistemic claim of synthetic biology and the analogy to synthetic chemistry (Schummer 2011). From the mid-19th to the mid-20th century, the elucidation of chemical or molecular structures was achieved through the total synthesis of substances. Thus making indeed enabled knowing, in a way that was unique within all of science. I will argue that this uniqueness derives from certain logical characteristics of chemistry which are largely missing in biology. Instead, much of synthetic biology follows earlier approaches in genomics, which might rather be called “Knowing Through Modifying”.


Schummer, Joachim: Das Gotteshandwerk: Die künstliche Herstellung von Leben im Labor, Suhrkamp, Berlin, 2011, chap. 11.



16:30. Massimiliano Simons:

Dreaming of a universal biology

There is a long discussion about what is specifically shifting in postgenomic life sciences such as systems biology or synthetic biology. To explain the shift, a number of plausible candidates exist. For instance, one could point at a shift away from reductionism towards a form of holism. Alternatively, one could claim that different explanation strategies are followed, situated on the system instead of the molecular level. A third possibility is to stress the prominence of engineering and simulation in the life sciences, leading to a picture where biology no longer observes, but constructs nature.

Although these accounts are insightful, they also have their problems. This paper will therefore propose to identify an additional shift that might shed new light on these developments. Postgenomic life sciences are characterized by a different way of articulating biological nature. One can see a shift away from ‘terrestrial biology’, with all its particularities and contingencies, towards what the actors in the field call a ‘universal biology’: life as it could be rather than as it exists on Earth. We are thus confronted not only with a range of novel answers, but also with radically different ways of posing questions. In that sense, it could be argued that the object of study has shifted away from existing biological beings towards what one could call biological possibilia.

To understand this shift, biology should be brought into conversation with chemistry. For what is precisely happening in fields like synthetic biology is that specific approaches and mindsets, typical of chemists, are becoming relevant and feasible for biological projects. By translating biological problems into problems of synthesis, (bio)chemists can mobilize their approaches to solve problems typically associated with the life sciences. This paper will analyze this in particular through a historical case study of research into the origins of life. By contrasting early approaches, associated with authors such as Stanley Miller or J. D. Bernal, with contemporary biochemists such as Pier Luigi Luisi or Stephen Mann, this shift towards universal biology can be fleshed out.

Finally, and perhaps most importantly, this novel understanding can shed light on the increasing affinity of fundamental research and applications in postgenomic life sciences. In other words, it can offer an explanation of why engineering and design have become so central to these disciplines, and of how the distinction between basic research and applications has been reconfigured in its contemporary form in these life sciences.


Bensaude-Vincent, B., & Benoit-Browaeys, D. (2011). Fabriquer la vie: Où va la biologie de synthèse? Paris: Seuil.

Bernal, J. (1967). The Origin of Life. London: Weidenfeld and Nicolson.

Keller, E. F. (2002). Making sense of life: Explaining biological development with models, metaphors, and machines. Cambridge: Harvard University Press.

Luisi, P. (2006). The Emergence of Life. Cambridge: Cambridge University Press.

Mann, S. (2013). The Origins of Life: Old Problems, New Chemistries. Angewandte Chemie International Edition, 52(1): 155-162.

Miller, S. (1953). A Production of Amino Acids under Possible Primitive Earth Conditions. Science, 117(3046): 528-529.

Rheinberger, H. (1997). Toward a history of epistemic things: Synthesizing proteins in the test tube. Stanford (Calif.): Stanford University Press.



17:00. Dominic J. Berry:

The synthesis of DNA: Chemical subunits making a narrative of biological engineering

Historians and philosophers of science are used to unpacking the often overblown claims associated with ‘biotech’ as it emerged in the late twentieth century. More recently they have also become used to unpacking the often overblown claims made on behalf of synthetic biology. In my paper I argue that we can more effectively analyse and navigate our way between biotech and synthetic biology by giving explicit attention to the history of DNA synthesis. A focus on the various different methods for making DNA gives us new insights into the history of biology in the twentieth century and its direct relations with, and dependence on, chemistry. A focus on DNA synthesis also allows us to reach the heart of the matter when it comes to making as knowing, and what it means to make biology into an engineering science. We cannot make sense of the history of biology in the second half of the twentieth century without also attending to the history of chemistry, technology, and engineering.

How to draw all these elements together? A recent programme of work emerging in the philosophy of science asks us to consider how narrative matters for the history and philosophy of science (for some introductions to this approach, please visit www.narrative-science.org). Drawing on these foundations, I argue that the making of DNA in ever larger amounts, with increasing speed and accuracy, has helped cement notions of control, feedback, standardisation, and ‘making to order’ (Curry, 2016) within biology, notions that were becoming prevalent immediately before and after the Second World War. A new narrative of biology could begin, thanks to the presence of new material origins for DNA. In this instance narrative matters because it gives us cause to attend to epistemic and material starting points. In many scientific debates and developments, it is really alternative starting points that are either the subject of or the motivation for debate. Attention to narrative helps us address starting points more systematically. The actual significance of synthetic DNA was, of course, and remains, up for debate, but the terms of that debate only become clear once we know more about the chemical synthesis of DNA and its biological work.


Curry, Helen Anne. Evolution Made to Order. The University of Chicago Press (2016).

Morgan, Mary S. and M. Norton Wise. ‘Narrative science and narrative knowing. Introduction to special issue on narrative science’, Studies in History and Philosophy of Science, vol. 62 (2017), pp. 1-5.

