Obstacles to Functionalism: Chalmers’ Paradox of Phenomenal Judgment


Functionalism is, roughly, the view that characterizes mental states in terms of the functional role they play in a cognitive system.  Functionalism seems to be the emerging paradigm of cognitive science, but there are epistemic difficulties in accepting it.  One of these difficulties is instantiated in David Chalmers’ notion of the paradox of phenomenal judgment.  If Chalmers is correct, or at least possibly correct, then functionalism, at least as Hilary Putnam describes it, is in serious trouble.  The aim of this paper is first to arrive at some understanding of Putnam’s construal of functionalism, since, in the history of the philosophy of mind and cognitive science, Putnam is viewed as one of the father figures of functionalism (indeed, he was among the first to use the term).  Chalmers’ arguments will then be set against Putnam’s, and it will be shown that there are good reasons to suppose that Chalmers’ arguments unveil some potentially disastrous tensions within the functionalist paradigm.

In order to see how Chalmers’ argument works, it is necessary first to establish an understanding of functionalism.  Functionalism understands mental states according to the functional role they play in a cognitive system.  Hilary Putnam, in “The Nature of Mental States” (1973), describes mental states in terms of the functional role they play in a computational system.  A computational system is characterized by two things: what it is attuned to (its inputs) and what it yields, or produces (its outputs), and these input/output relations are both necessary and sufficient for the system’s functional identity.  Bearing all of this in mind, Putnam spells out his version of functionalism:

  1. “All organisms capable of feeling pain are Probabilistic Automata.
  2. Every organism capable of feeling pain possesses at least one Description of a certain kind (i.e., being capable of feeling pain is possessing an appropriate kind of Functional Organization).
  3. No organism capable of feeling pain possesses a decomposition into parts which separately possess Descriptions of the kind referred to in (2).
  4. For every Description of the kind referred to in (2), there exists a subset of the sensory inputs such that an organism with that Description is in pain when and only when some of its sensory inputs are in that subset.”[1]

The first condition is essentially redundant, since “[mostly anything can be a] Probabilistic Automaton under some Description.”  What is most relevant here is the second condition, for it specifies that the identification of a given mental state, for instance a pain state or a state of conscious awareness, has to do with the functional organization that the state manifests within the system.  Condition (3) is important because it posits that the parts (or processes, or mechanics) responsible for the realization of a functional state are not themselves functional states in the sense described by the second condition.  Thus, for instance, if processes a, b, and c are necessary for the realization of functional state Y, then it is not the case that any of these processes alone exhibits the relevant functional organization.  Functional organization is thus a sort of emergent property: the conditions which stipulate the occurrence of a certain functional organization do not, taken singly, exhibit that organization.

So one can go about defining a given mental state in some way resembling the following: ‘S is in mental state x if S possesses the kind of functional organization appropriate to mental state x.’  Certainly, discovering what it means to possess the right kind of functional organization is as much an empirical question as it is a semantic question.  Putnam concedes that “this hypothesis is admittedly vague.”  The upshot is that the identification of a mental state is not reduced to the material nature in which the state is realized.  Computers, aliens, humans, and bacteria can have mental states so long as the appropriate functional organization is realized; the physical-chemical properties are secondary to the functional properties.  This is evident when Putnam remarks,

“…if the program of finding psychological theories of different species—ever succeeds, then it will bring in its wake a delineation of the kind of functional organization that is necessary and sufficient for a given psychological state, as well as a precise definition of the notion ‘psychological state.’”[2]

Essentially, this sort of delineation increases the applicability of psychological laws, since such laws are not restricted by the physical makeup of an entity; computers and people alike can exhibit states of memory or states of anger, for instance.
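
To make this concrete, the sketch below models a Putnam-style probabilistic automaton in Python.  Everything in it is invented for the sake of illustration: the state names, inputs, and transition probabilities correspond to nothing in Putnam’s paper.  It captures only the structural point that a mental-state type is identified with a position in an input/output transition table, so that two physically different systems sharing that table count, on this view, as being in the same state.

```python
import random

# Toy transition table, invented for illustration. A "Description" in
# Putnam's sense pairs states with probabilistic input/output relations.
TRANSITIONS = {
    # (current state, input) -> [(next state, probability), ...]
    ("neutral", "tissue_damage"): [("pain", 0.9), ("neutral", 0.1)],
    ("pain", "analgesic"):        [("neutral", 0.8), ("pain", 0.2)],
}

OUTPUTS = {
    "pain": "withdrawal_and_report",  # what the state disposes the system to do
    "neutral": "carry_on",
}

class ProbabilisticAutomaton:
    """A system individuated by its transition table, not by its matter."""

    def __init__(self, substrate):
        # The substrate ("neurons", "silicon", ...) is irrelevant to state identity.
        self.substrate = substrate
        self.state = "neutral"

    def step(self, stimulus):
        options = TRANSITIONS.get((self.state, stimulus))
        if options:
            states, weights = zip(*options)
            self.state = random.choices(states, weights=weights)[0]
        return OUTPUTS[self.state]

# Multiple realizability: different substrates, one functional organization,
# hence (on this view) the same mental-state type.
human = ProbabilisticAutomaton("neurons")
robot = ProbabilisticAutomaton("silicon")
for system in (human, robot):
    system.step("tissue_damage")  # both may now occupy the same "pain" state
```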

What is problematic for this program is that it is unable to explain, under its own terms, what Chalmers calls second-order phenomenal judgments.  A judgment can be thought of as an assertion or proposition related to some kind of input, whether that input is simply sensory input, or perceptual input, or perhaps input that has to do with a specific set of memories (or past recorded information) the system has.  First-order phenomenal judgments may be distinguished from second-order phenomenal judgments in that first-order judgments are concerned with a perceptual object, while second-order judgments are related to an awareness of the way in which a specific judgment is distinctly meaningful.

Roughly speaking, first-order phenomenal judgments have perceptual or sensory objects; that is, what they purport has to do with sensory or perceptual data.  An example would be the judgment “the sky is blue,” or “the room is dark.”  The fact that these judgments are phenomenal just means that, along with the functional state in which the judgment occurs, there is an inherent and distinct sense attributed to the meaning of the judgment for the subject making it.  The subject may be thought of as a system, but this kind of system is attuned to the way in which particular judgments differ from one another according to the likeness of one state to another.  Consciousness is at the heart of the matter, since two semantically identical judgments, for instance “that ball is red” and “that ball is red,” can present different meanings for the one making them, depending on the circumstances the individual is in.  If the subject makes that judgment while reading a book, then the qualia associated with the judgment are markedly different from the qualia associated with the same judgment when the subject makes it while actually seeing a red ball.  The awareness of the particular way in which a given judgment is meaningful is what Chalmers means by a second-order judgment.  If the objects of first-order judgments are sensory objects, then the objects of second-order judgments can be said to be the phenomenological distinctness of a given first-order judgment.
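
The distinction can also be pictured with a toy representation; the structures below are my own invention (Chalmers offers no such formalism).  A first-order judgment records a content concerning a perceptual object, while a second-order judgment takes the experiential character accompanying a first-order judgment as its own object.

```python
from dataclasses import dataclass

@dataclass
class FirstOrderJudgment:
    """About a perceptual object, e.g. 'the ball is red'."""
    content: str   # what is judged
    source: str    # invented field: "vision", "reading", ...

@dataclass
class SecondOrderJudgment:
    """About the experience itself, e.g. 'I am having a red sensation'."""
    target: FirstOrderJudgment  # the judgment whose felt character is noticed
    content: str                # how that judgment is experienced

seen = FirstOrderJudgment(content="that ball is red", source="vision")
read = FirstOrderJudgment(content="that ball is red", source="reading")

# Semantically identical first-order contents...
assert seen.content == read.content

# ...yet the second-order judgments about them differ:
about_seen = SecondOrderJudgment(seen, "accompanied by vivid visual qualia")
about_read = SecondOrderJudgment(read, "accompanied by merely verbal imagery")
assert about_seen.content != about_read.content
```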

Chalmers defends this distinction by appeal to the role of consciousness:

“First-order judgments are the judgments that go along with conscious experiences, concerning not the experience itself but the object of the experience…It seems fair to say that any object that is consciously experienced is also cognitively represented.  Alongside every conscious experience there is a content-bearing cognitive state.  This cognitive state is what I am calling a first-order judgment.”[3]

Thus, consciousness can parallel first-order judgments but is not strictly necessary for them, while in second-order judgments, consciousness itself is what is of concern in the making of the judgment:

“[Second-order judgments] are more straightforwardly judgments about conscious experiences.  When I have a red sensation, I sometimes notice that I am having a red sensation.  I judge that I have a pain, that I experience certain emotional qualities, and so on.  In general, it seems that for any conscious experience, if one possesses the relevant conceptual resources, then one at least has the capacity to judge that one is having that experience.”[4]

Chalmers uses the distinction to uncover a tension in cognitive science.  Judgments, being cognitive acts, can be explained in terms of the functional organization a given judgment realizes, whether conscious or unconscious.  Thus, all judgments should, in principle, be explainable functionally, since they are cognitive acts, and cognition is individuated according to functional organization.  Accordingly, judgments which have ‘chairs’ as their objects are just as amenable to this kind of explanation as judgments which have ‘conscious awareness’ as their objects.  Chalmers writes, “there should be a physical or functional explanation of why we are disposed to make the claims about consciousness that we do, for instance, and of how we make the judgments we do about conscious experience.”[5]  If that is the case, then the way in which we explain our judgments pertaining to conscious awareness is irrelevant to the occurrence of consciousness as a psychological phenomenon, one which should be empirically verifiable, according to the functionalist program.  Chalmers drives the point home when he posits,

“When I comment on some particularly intense purple qualia that I am experiencing, that is a behavioral act.  Like all behavioral acts, these are in principle explainable in terms of the internal causal organization of my cognitive system…at a higher level, there is probably a story about cognitive representations and their high-level relations that will do the relevant explanatory work…[and] if the physical domain is causally closed, then there will be some reductive explanation in physical or functional terms.”[6]

The problem is that the behavioural act of making judgments about consciousness can be explained, functionally, without any reference to what those judgments are about, namely consciousness itself.  What does this imply?  It implies that machines, or any other systems which were not conscious, could make claims about consciousness, and those claims could be explained according to the functional organization they instantiate, despite the fact that, in reality, the system would have no inner awareness relating at all to the matter judged!

“In giving this explanation of my claims in physical or functional terms, we will never have to invoke the existence of conscious experience itself.  The physical or functional explanation will be given independently, applying equally well to a zombie as to an honest-to-goodness conscious experiencer.”[7]

The significant message to come away with is that consciousness appears to be functionally irrelevant to the explanation of a particular class of cognitive acts, namely our judgments about consciousness itself.  Evidently, then, Chalmers’ argument leads to a form of epiphenomenalism, since consciousness appears to lack functional and causal import.
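
The point can be dramatized with a deliberately trivial program; the example is mine, not Chalmers’.  Nothing in it is conscious, yet it issues judgments about conscious experience, and the explanation of why it does so is exhausted by its causal organization.  Actual experience never enters the story.

```python
# A trivially non-conscious system that nonetheless issues judgments
# about consciousness. Its outputs are fully accounted for by its
# input/output organization; no inner awareness figures anywhere.

def zombie_report(stimulus: str) -> str:
    if stimulus == "purple_light":
        # The disposition to make this claim is a plain functional fact.
        return "I am experiencing particularly intense purple qualia."
    return "Nothing to report."

# The functional explanation of this 'behavioural act' is complete
# whether or not anything is actually experienced:
print(zombie_report("purple_light"))
```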

Going back to Putnam’s conditions listed above, it appears that condition (2) is the one rejected.  Condition (2) fails if it is possible for an organism to be conscious without a corresponding functional organization that the state of consciousness realizes.  Of course, Chalmers’ argument hinges on the assumption that consciousness is a brain state.  There is not much reason to think that a conscious state is not a brain state, since affecting one’s brain seems to affect one’s phenomenal consciousness; that is, it seems to affect how one takes things to be, or how things seem to be presented to the individual.

However, to be fair, perhaps consciousness is simply irrelevant to functionalism, in that it is difficult, if not seemingly impossible, to determine the input/output relationships a system requires in order to be in a state of consciousness.  But if one assumes that consciousness is not apt for functionalist treatment, how does that resolve, or dissolve, the question of why things should be meaningful in different ways according to the way in which one consciously attends to them?  That is, on some intuitive level, it seems that the cup in my field of vision is meaningful in a different way than the cup I am reading about; the cup I experience myself and consciously attune myself to is phenomenally distinct from the cup I read about, although I may assent equally in both cases to the cognitive judgment that “the cup is black.”  Either way one looks at it, the failure of functionalism to accommodate consciousness, or the failure of consciousness to be relevant to the kinds of acts functionalism can explain, does little to account for the dynamic character of meaningful judgments.

Works Cited

Chalmers, David.  The Conscious Mind: In Search of a Fundamental Theory.  New York: Oxford University Press, 1996. 

Putnam, Hilary.  “The Nature of Mental States.”  Philosophy of Mind: Classical and Contemporary Readings.  Ed. David J. Chalmers.  New York: Oxford University Press, 2002.  pp. 73–79.

[1] Hilary Putnam, 76

[2] Hilary Putnam, 77

[3] David Chalmers, 175

[4] David Chalmers, 176

[5] David Chalmers, 176

[6] David Chalmers, 178

[7] David Chalmers, 182

