Lecture One: Enaction 1.2

1.2 The science of groundlessness

 

Varela, Thompson and Rosch begin their discussion with an account of the origin and development of cognitive science. Cognitive science is a multidisciplinary field of research, which includes (but is not restricted to) the computational sciences, linguistics, psychology, neuroscience, engineering, and philosophy. The purpose of cognitive science is to offer a viable and workable account of cognition, and possibly to use this account to design effective technology. This field is still relatively young (its origins can be traced back to the 1940s). The authors identify two major trends that led the evolving views in cognitive science to diverge from the approaches that initially dominated the field. The first trend concerns the problems connected with modelling cognition in terms of representational states. The second trend concerns the problems connected with modelling the external world as a given background, which simply provides objective stimuli to which cognitive systems need to react.

The idea that cognition is some sort of representation of an external world is not new. It was not new when Kant tried to improve on Descartes. It was not new when Descartes himself reworked medieval scholastic models.[1] The novelty introduced in (especially early) cognitive science is a particular model of representationalism based on computation. Discussing the emergence of cognitivism in the mid-twentieth century, the authors write:

The central intuition behind cognitivism is that intelligence—human intelligence included—so resembles computation in its essential characteristics that cognition can actually be defined as computations of symbolic representations. […] The cognitivist claim [is] that the only way we can account for intelligence and intentionality is to hypothesize that cognition consists of acting on the basis of representations that are physically realized in the form of a symbolic code in the brain or a machine. […] Here is where the notion of symbolic computation comes in. Symbols are both physical and have semantic values. Computations are operations on symbols that respect or are constrained by those semantic values. […] A digital computer, however, operates only on the physical form of the symbols it computes; it has no access to their semantic value. Its operations are nonetheless semantically constrained because every semantic distinction relevant to its program has been encoded in the syntax of its symbolic language by the programmers. (Varela, Thompson and Rosch 2016, 40-41)

The cognitivist hypothesis posits three distinct layers: the physical instantiation of symbols, the syntactic rules that regulate how these symbols can be related, and the semantic domain to which symbols refer. Despite distinguishing between these three layers, in practice cognitive systems modelled in computational terms dispense with meaning or take it for granted as something given (by the programmers, for instance). A computer does not see or deal with this semantic layer; it can only manipulate the physical instantiation of symbols according to their syntactic rules.
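To make this three-layered picture more concrete, here is a minimal illustrative sketch in Python (not drawn from the book; the tokens, rules, and meanings are invented for illustration). The rewrite procedure operates only on the shape of the tokens, while their intended meanings sit in a separate table that the procedure never consults: semantics is fixed by whoever wrote the rules, not by the machine that applies them.

# Layer 1: physical instantiation of symbols (here, plain strings).
# Layer 2: syntactic rules, stated purely over the form of the symbols.
# Layer 3: semantics, held separately by the "programmer" and never read
#          by the rewriting procedure itself.

SYNTACTIC_RULES = {
    ("WET", "COLD"): "RAIN",   # purely formal: if these two tokens co-occur, add this one
    ("HOT", "DRY"): "SUN",
}

# The semantic layer: what the symbols are taken to mean. Nothing below ever
# consults this dictionary; meaning enters only through the programmer's choices.
INTENDED_MEANINGS = {
    "WET": "the ground is wet",
    "COLD": "the air is cold",
    "RAIN": "it is raining",
    "HOT": "the air is hot",
    "DRY": "the ground is dry",
    "SUN": "the sun is shining",
}

def rewrite(tokens):
    """Apply the syntactic rules to a set of tokens, looking only at their form."""
    output = set(tokens)
    for (a, b), result in SYNTACTIC_RULES.items():
        if a in tokens and b in tokens:
            output.add(result)
    return output

print(rewrite({"WET", "COLD"}))   # {'WET', 'COLD', 'RAIN'} (order may vary)

The point of the sketch is simply that the machine's manipulations are sensitive only to the first two layers; the third layer exists for the programmer and for us, not for the system.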

A major consequence of cognitivism is the discrepancy it creates between the first-person experience of cognition, and the actual process that is supposed to underpin that same experience. When I look at a table or sit on a chair, I do not have the experience of computing a vast amount of symbolic information according to given syntactic rules in order to form a representation of ‘table’ and ‘chair.’ I simply (this is how it feels to me) look at the table or sit on a chair. However, if the cognitivist hypothesis is correct, then behind my seemingly naïve first-person experience, there is a complex array of computational processes that allows me to have these apparently simple and unproblematic experiences. Even more importantly, these computational processes are by definition outside my own first-person domain of experience and cannot ever enter it, because first-person experience and the computational process run at different levels.

It could be added that this is the way in which cognitivism gives a new flavor to the old representationalist tenet according to which a subject cannot know the world in itself, but only a representation of it. Cognitivism adds that the subject cannot directly know even the process through which representations themselves are computed. Since this computing process is the fundamental engine that makes cognition possible, this conclusion has important implications for how I understand my own experience. I might think (naively) that my experience is about looking at the table or sitting on a chair, and that these are the basic and most fundamental experiences I could have. It turns out that these are in fact just the result of computational processes that are not accessible in any direct way to my first-person experience. This first-person experience, thus, is not the ground of experience (pace Descartes), but its result. I do not make up my representations. At best, I enjoy them as I would when watching a show.

Varela, Thompson and Rosch stress that cognitivism fundamentally challenges the ordinary sense of self as being the agent and ground of lived experience. It turns out that such a subject does not have any significant role to play in the process that makes its own cognition possible. As they write:

According to cognitivism, cognition can proceed without consciousness, for there is no essential or necessary connection between them. Now whatever else we suppose the self to be, we typically suppose that consciousness is its central feature. It follows, then, that cognitivism challenges our conviction that the most central feature of the self is needed for cognition. In other words, the cognitivist challenge does not consist simply in asserting that we cannot find the self; it consists, rather, in the further implication that the self is not even needed for cognition. (Varela, Thompson and Rosch 2016, 51)

In this remark, the tension between scientific views and the first-person perspective that substantiates the second core claim of enactivism begins to emerge. Before focusing on these implications, though, it is important to follow the authors’ discussion through some further steps. As they explain, cognitivism itself is not without problems. Cognitivism tends to work in a top-down fashion. The programmer is a deus ex machina who encodes meanings in syntactic rules, thus enabling computers to manipulate symbols. The computer does not even understand what a symbol is; it can only understand what a syntactic rule is and how to implement it on a given array of material elements. There are two major difficulties in this cognitivist and computational approach. First, an enormous amount of knowledge needs to be encoded in a system in order for it to work effectively and show sophisticated cognitive faculties (and it is not always clear how to encode all this knowledge or how to define what sort of knowledge will be relevant). Second, if one examines biological cognitive systems (like the human brain, but also simpler life-forms like a fly or even a bacterium), it is extremely hard to discover symbols and syntactic rules, or anything that would make their working analogous to computers. Biological systems seem to be based on distributed and non-hierarchical structures, and cognition does not result as the output of some linear process but rather arises as an emergent phenomenon.

Emergentism can be broadly defined as the view according to which systems can operate in such a way as to self-organize, and through this self-organization they acquire new properties, or rather these properties emerge in the system.[2] Global cooperation between the different parts of a system leads to self-organization and to the instantiation of higher functions and ways of operating, without any need for centralization or top-down control. Emergentism names a diverse family of approaches and research programs rather than a unified field. Nonetheless, it is possible to highlight the crucial advance it introduces. As Varela, Thompson and Rosch remark:

One of the most interesting aspects of this alternative approach in cognitive science is that symbols, in their conventional sense, play no role. […] This nonsymbolic approach involves a radical departure from the basic cognitivist assumption that there must be a distinct symbolic level in the explanation of cognition. […] How do the symbols acquire their meaning? In situations where the universe of possible items to be represented is constrained and clear-cut (for example, when a computer is programmed or when an experiment is conducted with a set of predefined visual stimuli), the assignment of meaning is clear. Each discrete physical or functional item is made to correspond to an external item (its referential meaning), a mapping operation that the observer easily provides. Remove these constraints, and the form of the symbols is all that is left, and meaning becomes a ghost, as it would if we were to contemplate the bit patterns in a computer whose operating manual had been lost. In the connectionist approach, however, meaning is not located in particular symbols; it is a function of the global state of the system and is linked to the overall performance in some domain, such as recognition or learning. (Varela, Thompson and Rosch 2016, 99)
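A tiny Hopfield-style network, a classic connectionist example, can serve as a purely illustrative sketch of this point (it is not taken from the book, and the patterns and parameters below are invented). No single unit stands for a symbol; a stored pattern is recovered only as a global state into which the whole network settles when started from a partial or noisy cue.

import numpy as np

# Two stored patterns; each unit is just +1 or -1, and none of them "means" anything on its own.
patterns = np.array([
    [1, -1, 1, -1, 1, -1],   # pattern A
    [1, 1, -1, -1, 1, 1],    # pattern B
])

# Hebbian learning: the weights encode correlations across all units at once.
n_units = patterns.shape[1]
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Let the network settle from a cue; the recovered memory is a global state."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1   # break ties arbitrarily
    return state

noisy_A = np.array([1, -1, 1, -1, -1, -1])   # pattern A with one unit flipped
print(recall(noisy_A))                        # settles back onto pattern A

The ‘memory’ of pattern A is nowhere in particular: it is distributed over the whole weight matrix and shows up only in the overall behaviour of the system, which is what the quoted passage means by meaning being a function of the global state.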

While this emergentist view moves further away from the computational paradigm, its implications for the ordinary sense of self that usually informs daily life are significant. Discussing some developments of this view, for instance, the authors present how the mind can be modelled as a society of agents, each one specialized in performing a certain function. Different agents can be composed or decomposed into more or less articulated structures, whose performance emerges from their interaction and mutual adaptation. From the point of view of first-person experience, once again this is not how I look at the table or sit on a chair. I have no clue about all these agents cooperating in order for this state (‘I see the table’) to emerge; I just look at the table. As far as I am concerned, it is a simple experience. But in reality it isn’t. Emergentism further sharpens the challenge to the ordinary sense of self already advanced by cognitivism. But it also suggests a way in which one might become aware of how the self emerges, namely, by directly looking at the various processes in which the self is actively engaged. This means that when I look at the table, I do not see the various agents that make this global state emerge because I simply do not pay attention to the various reasons and motives that underpin my intention of looking at the table. Why do I look at the table? When? Where? How? Behind each of these questions there are various ways in which I engage with the environment in which I find myself, and through these ways I become an actor in my own experience, or rather I enact my world.

In order to see the relevant processes that bring forth cognition, one cannot simply look ‘inside’ (the brain, the mind, or any other ‘inside’ point one wishes to take), because cognition is not just the result of inner actions, but more properly of inter-actions. I will not see how I bring forth my vision of the table by simply looking inside of me (whatever this means); I need to pay attention to the complex way in which I relate to the environment in which I happen to encounter the table. This does not mean that the ‘inside’ should be dismissed or treated as a black box, but simply that it cannot be considered a wholly sufficient condition for cognition. Not only is there no inside without an outside, but the very distinction between the two is something that has to be constantly created.

By no longer assuming an intermediary symbolic layer between first-person experience and cognition, emergentism is not forced to postulate that cognitive processes are doomed to remain opaque to their subjects (in the active and passive sense of the term). If cognitive processes are essentially interactions within cognitive systems, then making these processes transparent for the system itself might become a matter of attentiveness and disciplined observation aimed in the right direction. The importance of this point will become fully apparent shortly. For now, it is important to mention how developing an emergentist approach leads to a rethinking not only of the unfolding of cognitive processes (steering away from representationalism) but also of the relation between cognition and environment (undermining the idea of an objectively pregiven external world).

Giving their own twist to the topic of emergentism, Varela, Thompson and Rosch define their particular approach as an attempt at studying cognition as embodied action, or enaction:

The enactive approach consists of two points: (1) perception consists in perceptually guided action and (2) cognitive structures emerge from the recurrent sensorimotor patterns that enable action to be perceptually guided. […] The point of departure for the enactive approach is the study of how the perceiver can guide his actions in his local situation. Since these local situations constantly change as a result of the perceiver’s activity, the reference point for understanding perception is no longer a pregiven, perceiver-independent world but rather the sensorimotor structure of the perceiver (the way in which the nervous system links sensory and motor surfaces). This structure—the manner in which the perceiver is embodied—rather than some pregiven world determines how the perceiver can act and be modulated by environmental events. Thus the overall concern of an enactive approach to perception is not to determine how some perceiver-independent world is to be recovered; it is, rather, to determine the common principles or lawful linkages between sensory and motor systems that explain how action can be perceptually guided in a perceiver-dependent world. (Varela, Thompson and Rosch 2016, 173)

If cognition depends on interaction with the environment, then the environment too must depend on the cognitive process. There is no pregiven environment or world, since a world can be encountered only in the interplay between the living organism and its effort to perceive what is around it. The very distinction between living organism and environment has to arise through this process of interaction, and it cannot be posited before it. Without organisms there is no environment; and without an environment there are no organisms. The point is not to establish which is first, but rather to assert that neither is first; that they co-occur, or co-determine each other, that they are dependently co-originating, and that neither is more fundamental.

The authors support this view with a detailed analysis of the process of vision, which is taken as a case study to illustrate the advantages of the enactive approach. However, they also acknowledge one possible way for a realist critic of their view to ‘save’ the objectivity of the external world. The realist could claim that the self-organization of cognitive systems is the result of a process of natural selection, through which organisms evolve in order to adapt in the best possible way to the conditions of their (pregiven) environment and adjust to changes occurring there. Although notions of adaptation and fitness are widespread in discussions of neo-Darwinian evolutionary theory, the authors argue that these notions cannot fully account for the actual ways in which living beings are observed to evolve. The key problem is that natural selection, understood as a process of adaptation, is taken to work in a prescriptive way, as indicating what features should be preserved by evolving organisms. The authors suggest that natural selection is better understood instead in a proscriptive way, as indicating what evolving organisms should avoid. As they write:

Cognition is no longer seen as problem solving on the basis of representations; instead, cognition in its most encompassing sense consists in the enactment or bringing forth of a world by a viable history of structural coupling. It should be noted that such histories of coupling are not optimal; they are, rather, simply viable. This difference implies a corresponding difference in what is required of a cognitive system in its structural coupling. If this coupling were to be optimal, the interactions of the system would have to be (more or less) prescribed. For coupling to be viable, however, the perceptually guided action of the system must simply facilitate the continuing integrity of the system (ontogeny) and/or its lineage (phylogeny). Thus once again we have a logic that is proscriptive rather than prescriptive: any action undertaken by the system is permitted as long as it does not violate the constraint of having to maintain the integrity of the system and/or its lineage. (Varela, Thompson and Rosch 2016, 205)
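The difference between the two logics can be put in quasi-algorithmic terms with a minimal sketch (not from the book; the actions, their costs and gains, and the viability floor below are all invented for illustration). A prescriptive rule selects the single optimal action; a proscriptive rule admits any action that does not push the system below its viability floor.

import random

random.seed(0)

def viable(energy):
    """Integrity constraint: the system must keep its energy above zero."""
    return energy > 0.0

def prescriptive_choice(actions):
    """Prescriptive logic: only the single optimal action is allowed."""
    return max(actions, key=lambda a: a[1] - a[0])   # maximize gain minus cost

def proscriptive_choice(actions, energy):
    """Proscriptive logic: any action is permitted unless it breaks viability."""
    admissible = [a for a in actions if viable(energy - a[0] + a[1])]
    return random.choice(admissible) if admissible else None

# Each action is a (cost, gain) pair; the system starts with one unit of energy.
actions = [(0.5, 0.6), (0.2, 0.9), (0.8, 0.3)]
energy = 1.0

print(prescriptive_choice(actions))           # always (0.2, 0.9): the optimum
print(proscriptive_choice(actions, energy))   # any one of the admissible options

The proscriptive rule leaves a whole space of admissible trajectories open, which is exactly the point of the quoted passage: viability constrains without prescribing.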

This account of natural selection includes the co-evolution of individuals and their environment (their ‘coupling’) within the enactive framework. Evolution does not offer a realist argument against enaction. Duly understood, evolution actually strengthens the case for enaction. And once again, this makes the ordinary first-person view even weirder. I thought I could look at a table and sit on a chair; now I discover that there is no table and no chair out there existing independently of my sensorimotor system and of how it co-evolved with this environment. In a sense, the chair is there only because my body is such that it can sit on it. However, I still look at the table and sit on the chair without usually being aware of this co-evolution and of how it affects both my cognition and the presence of the world. The world seems to be just there, ready at hand. But this view must be naive at best, or false at worst, if enaction (and the scientific models that underpin it) is to be taken seriously.

The discussion so far has illustrated that, in Varela, Thompson and Rosch’s reading, recent trends in cognitive science significantly challenge the ordinary way experience is understood and conceived from a first-person perspective. This ordinary view is based on a sense of self (‘me’) as being at the center of the scene, knower and agent in a pregiven world. As it turns out, there is neither this unified and fundamental self, nor a pregiven world. The enactive view instead entails an original and mutual co-dependence of self and world, individual and environment, in which neither is more fundamental and both are co-originated in the process of their mutual interaction. What is the ground for this interaction? In fact, there is no ground; the interaction does not need to be grounded in something else, nor could it be (since anything else depends on it).

Some might already take this plea for groundlessness as a signal that the whole enactivist project must be wrong or suffer from some serious theoretical flaw. However, contemporary debates in metaphysics show that groundless or non-foundational theories (theories of reality in which no ultimate foundation or ground is provided) are a serious and viable option. Jan Westerhoff, in his The Non-Existence of the Real World (2020), has offered a comprehensive discussion of various debates that point in different ways to the fact that (i) foundationalist approaches are fraught with serious difficulties, which it is not entirely clear how to solve; and (ii) the main objections raised against non-foundationalist approaches (enactivism included) can be successfully addressed or are not as serious as foundationalist opponents assume. While we cannot get into the details of this discussion here, Westerhoff’s work provides a wealth of other arguments (besides enactivism) in support of the general claim that both an objectively pregiven external world and an ontologically substantial and independent self are philosophical constructions that do not necessarily stand up to scrutiny.

From a historical point of view, we can still observe that this non-foundationalist view finds relatively scant support in the canon of Western philosophy. And yet, its emergence is not without reason. On the one hand, it is possible to observe a progressive erosion, in Western metaphysics, of ontological notions and views that require a strong foundation. Historically, the paradigmatic case of grounding would have been the appeal to a Supreme Being, or God, as the ultimate foundation of reality. But as mentioned in Lecture Zero, and as we are going to discuss further in Lectures Nine and Ten, this paradigm was already under serious threat by the end of the nineteenth century. On the other hand, some non-Western traditions have sometimes been much keener on articulating what a non-foundationalist view of reality might look like. Westerhoff himself is a remarkable scholar of classical Indian Buddhist philosophy, and especially of Nāgārjuna, the second-century Buddhist authority who also plays a pivotal role in Varela, Thompson and Rosch’s discussion. In his 2020 book, Westerhoff (deliberately and strategically) refrains from explicitly drawing on Buddhist texts and arguments, but Varela, Thompson and Rosch widely and emphatically do so. It is time now to understand why.


  1. For some of the historical background of Descartes’s views, see Han Thomas Adriaenssen, Representation and Scepticism from Aquinas to Descartes (2017).
  2. For further discussion of emergentism, see in particular Jonardon Ganeri, The Self: Naturalism, Consciousness, and the First-Person Stance (2012), Part II, 69-126. Mark Johnson, The Meaning of the Body: Aesthetics of Human Understanding (2007), explores a parallel emergentist account, inspired by American pragmatists like James and Dewey, that emphasises how meaning emerges from bodily and somatic patterns.

License


The Tragedy of the Self Copyright © 2023 by Andrea Sangiacomo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.