Tuesday, 10 February 2015

Refuting the computer theory of mind - and why it matters to Buddhists.

The Computational Model of the Mind


Computers can simulate all physical processes.
There are some mental processes that computers cannot simulate.
Therefore, some aspects of the mind are non-physical.

Materialism, spirituality and art
Materialism is the belief that matter is the only reality, and that everything else, such as mind, feelings, emotions and beauty, is just a by-product of the brain's physical and chemical activity, with no independent existence of its own.  Once their material basis is gone, mind and consciousness just disappear without trace.   Needless to say, materialism denies the validity of all religions and spiritual paths, not just Buddhism.

In addition, the growth of the materialist worldview within the Buddhist community is having and will continue to have a harmful effect on Buddhism, especially in the form of materialistic Buddhism aka 'Secular Buddhism'.

The debilitating effect of materialism doesn't just affect religions; it despiritualises everything in its path, degrading art and encouraging brutalism.

Philosopher Roger Scruton believes that all great art has a 'spiritual' dimension, even if it is not overtly religious. It is this transcendence of the mundane that we recognise as 'beauty'.

Materialism as Pseudoscience
Materialism is gaining ground due to an incorrect and scientifically unsupportable interpretation of neuroscience, which claims that neurological mechanisms are sufficient to explain thought-processes, emotions, consciousness and mind. 

This misinterpretation of neuroscience, together with all the other varieties of materialism, is included in, or equivalent to, the Computational Theory of Mind (CTM).  And because the Computational Theory of Mind encompasses and subsumes every form of materialism (according to the Church-Turing-Deutsch Principle), if we can refute the CTM, we have also refuted materialism in general.

Pre-history of the Computational Theory of Mind
Although this article is primarily concerned with computers, its basic argument was stated over 140 years ago by the Victorian physicist John Tyndall:

  [The] passage from the physics of the brain to the corresponding facts of consciousness is unthinkable. Granted that a definite thought, and a definite molecular action in the brain occur simultaneously; we do not possess the intellectual organ, nor apparently any rudiment of the organ, which would enable us to pass, by a process of reasoning, from the one to the other. They appear together, but we do not know why.
  Were our minds and senses so expanded, strengthened, and illuminated, as to enable us to see and feel the very molecules of the brain; were we capable of following all their motions, all their groupings, all their electric discharges, if such there be; and were we intimately acquainted with the corresponding states of thought and feeling, we should be as far as ever from the solution of the problem, "How are these physical processes connected with the facts of consciousness?" The chasm between the two classes of phenomena would still remain intellectually impassable.
  Let the consciousness of love, for example, be associated with a right-handed spiral motion of the molecules of the brain, and the consciousness of hate with a left-handed spiral motion. We should then know, when we love, that the motion is in one direction, and, when we hate, that the motion is in the other; but the "Why?" would remain as unanswerable as before.

John Tyndall (1871), Fragments of Science

To put this in modern terms, there is no conceivable mechanism by which any form of  physical structure, either static or dynamic, can give rise to ‘intentionality’ (I feel love/hate about this person) or ‘qualia’ (I have the subjective experience of loving/hating). Both intentionality and qualia are non-algorithmic phenomena.

The activities and arrangements of the molecules and biological structures  associated with mental events are nowadays known as ‘neural correlates’. 

Of themselves, neural correlates have no known causative mechanism for producing thoughts. There is an explanatory gap between matter and mind, and an additional factor must be at work.  The Buddhist would claim that this explanatory gap cannot be bridged by building any further out from the physical side of things, as no further structural additions will make any difference.    Neuroscience may tell us in ever more complex detail how sense impressions are processed and structured by the brain, but it’s just more of the same.  We are no further on than we were in Tyndall’s time.

The Buddhist would say that the explanatory gap can only be bridged by building out from the side of the mind. The mind ‘goes to’ or ‘reaches out to’ the datastructures/neural correlates and gives them meaning.  The mind is not explainable in material terms, but is a fundamental aspect of reality, like time, that is irreducible to any other phenomena.

So what’s the significance of the Computer Theory of Mind for Buddhists?

In contrast to the Buddhist view, the computational theory of mind holds that the mind is a computation that arises from the brain acting as a computing machine. The theory can be elaborated in many ways, the most popular of which is that the brain is a computer and the mind is the result of the program that the brain runs. The CTM was very popular from the 1960s to the late 1990s, with the prospect of artificial intelligence promised as being just around the corner, available as soon as we had constructed the algorithms, datastructures and electronic neural nets that could emulate the mind. Of course it never happened, and computers are still just as dumb as ever.

This isn’t to deny that there are datastructures and algorithms operating within the brain, indeed the ‘neural correlates’ could be regarded as datastructures, and their dynamic changes could be regarded as being brought about by algorithms. But even if these could be emulated exactly in a computer, they are of themselves incapable of explaining the mind any more than could Tyndall’s conjectures about spiralling molecules. 

Nevertheless, if we accept this model as being valid as a partial explanation of the mind, we can see how and why it fails to be a complete explanation.

The significance of the CTM is that because we have an exact definition of a universal computing machine, in the form of a Turing Machine, we can explore why computers of all varieties cannot emulate the mind.   Every computer, no matter how powerful, is functionally equivalent to a Turing Machine.

And furthermore and most importantly, if we can show that a Turing Machine cannot emulate all the functions of the mind, then we have also shown beyond reasonable doubt that no physical system of any kind can emulate all the functions of the mind.  The justification for this far-reaching and rather surprising conclusion is provided by the Church-Turing-Deutsch Principle, which states that a universal computing device can simulate every physical process.   If we discover any processes in the real world that cannot be thus simulated, then we have discovered processes that are fundamentally and irreducibly non-physical.

Physical processes include chemical, biochemical, neurochemical and physiological processes, plus the operations of all mechanisms and electronic systems - no matter how complex.  Nothing within the remit of neuroscience, or indeed materialism in general, can escape the constraints of physicalism.   The failure of the CTM thus inevitably pulls all materialist, physicalist and mechanistic explanations down with it.

Understanding why the Computer Theory of Mind Fails.
I’ll discuss four levels of computer systems as examples, demonstrating at each level how the system fails to cope with meaning and ‘aboutness’.  I'll start with the familiar spreadsheet, then go deeper into computer languages, instruction sets, and finally the Turing Machine itself.

This failure to integrate mechanistic functionality (syntax and quantity) with semantics  (meaning and qualitative thought)  is characteristic of all mechanistic systems from the most sophisticated to the most primitive, and goes all the way down, as we shall see.  Humans superimpose a layer of meaning on the underlying mechanisms, and like a layer of oil on water, it never mixes.  

- The ‘Aboutness’ of Spreadsheets
Most of us are familiar with spreadsheets. They consist of a table of cells which are organized as rows and columns and identified by their row/column location (C4, F12 etc).  These cells may contain simple data or formulae, where each formula is drawn from a small repertoire of operations - add, divide, multiply, etc.   Each of these operations can be thought of as a dedicated Turing Machine.  These little Turing Machines can be chained and networked together to produce annual accounts or construct complex models for financial ‘what if?’ predictions etc.

Text labels are usually put alongside cells to identify what they are:  ‘Profit’, ‘Loss’, ‘Tax’, 'Depreciation', ‘Slush fund’, ‘Embezzlement Allowance’ etc.

Prudence may dictate that ‘Slush fund’ and ‘Embezzlement allowance’ should be renamed ‘Contingencies’ and ‘Sundries’ before submitting the accounts to the auditors.  But it doesn’t matter what you call these cash flows, their name has no effect on the underlying functionality.

Taken to extremes of caution, if you’re doing accounts for the mob, and remembering how Eliot Ness nailed Al Capone, it might be best to remove all text from the spreadsheet altogether and keep it on a separate piece of paper concealed in your moll’s undergarments.   The spreadsheet will still function perfectly well with all meaning removed, and the Feds can't get you for a meaningless arithmetical structure, or a list of words with no figures.
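The point can be sketched in a few lines of Python. This is a hypothetical toy engine invented for illustration, not how real spreadsheet software works: the formulae reference cells by coordinate alone, while the labels live in a separate table that the calculation never consults.

```python
# Toy spreadsheet: cells hold figures, formulae reference cells by
# coordinate only. Labels are kept in a separate dict and play no
# part whatsoever in the calculation.

cells = {"C2": 50000.0, "C3": 30000.0}          # raw figures
formulas = {"C4": lambda c: c["C2"] - c["C3"]}  # C4 = C2 - C3

labels = {"C2": "Revenue", "C3": "Costs", "C4": "Profit"}

def evaluate(cells, formulas):
    result = dict(cells)
    for ref, formula in formulas.items():
        result[ref] = formula(result)
    return result

before = evaluate(cells, formulas)["C4"]

# Rename the labels (or throw them away entirely) -- the arithmetic
# is completely unaffected.
labels = {"C2": "Takings", "C3": "Sundries", "C4": "Contingencies"}
after = evaluate(cells, formulas)["C4"]

print(before, after)   # 20000.0 20000.0
```

Deleting the `labels` dict altogether leaves `evaluate` working exactly as before, which is the spreadsheet equivalent of keeping the meaning on a separate piece of paper.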

- The ‘Aboutness’ of High Level Languages
Young and old are likely to be familiar with high level computer languages such as BASIC and Python.   Those of intermediate age are less likely to be familiar with them due to dumbing-down of computer education in the intervening years.

High level languages are used for writing mathematical, scientific and financial formulae as statements that are both readily understandable by humans, and easily translatable into the instruction-set (machine level operations) of the computer.  One of the first such languages was FORTRAN - short for ‘formula translation’.   

However, in translating a program from human-readable to machine-readable form, the translation software strips out and discards all meaning from the original source formulae.    Thus the following two statements are ‘about’ very different subject matter, but they are translated into exactly the same machine level operations:

(i) IF RoomLength * RoomWidth > CarpetArea THEN NeedMoreCarpet = TRUE

(ii) IF Audience * TicketPrice > HireOfVenue THEN AvoidedBankruptcy = TRUE
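A rough modern analogue can be demonstrated with Python's dis module, which exposes the bytecode a function compiles to. The two functions below carry over the carpet and concert examples above (the function bodies are my own transcription); they are 'about' different subjects, yet compile to an identical sequence of operations:

```python
import dis

def carpet(RoomLength, RoomWidth, CarpetArea):
    NeedMoreCarpet = False
    if RoomLength * RoomWidth > CarpetArea:
        NeedMoreCarpet = True
    return NeedMoreCarpet

def concert(Audience, TicketPrice, HireOfVenue):
    AvoidedBankruptcy = False
    if Audience * TicketPrice > HireOfVenue:
        AvoidedBankruptcy = True
    return AvoidedBankruptcy

# The variable names differ, but the operation sequences are identical.
ops1 = [ins.opname for ins in dis.get_instructions(carpet)]
ops2 = [ins.opname for ins in dis.get_instructions(concert)]
print(ops1 == ops2)  # True
```

The names survive only as entries in a lookup table for the programmer's benefit; the machine-level operations themselves are indistinguishable.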

- The ‘Aboutness’ of Instruction Sets
Every computer has a surprisingly small repertoire of operations, usually numbering around twenty, which allow it to carry out all its calculation, simulation and modelling programs. 

Each instruction can be thought of as a dedicated Turing Machine (a low-level calculation or logical operation).  These operations are chained together to implement programs.  The full repertoire of operations ('opcodes') is known as the instruction set, and would typically consist of SET, MOVE, READ, WRITE, ADD, SUBTRACT, MULTIPLY, DIVIDE, AND, OR, XOR, NOT,  SHIFT, ROTATE, COMPARE, JUMP, JUMP-CONDITIONALLY, RETURN

Examination of each of these operations shows that none of them have the capacity to be ‘about’ anything qualitative.  None of them can process ‘meaning’ or ‘intentionality’, neither individually nor in combination.   No artificial intelligence is ever going to ‘emerge’ from operations of such limited scope, no matter how many we chain or network together.
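To make the mechanical character of these operations concrete, here is a toy interpreter for a small fragment of such an instruction set. It is an illustrative sketch, not any real machine's opcode format; the point is that the machine shuffles numbers between registers with no notion of what the numbers stand for.

```python
# A toy interpreter for a fragment of an instruction set.
# Each instruction is a tuple: (opcode, *operands).

def run(program):
    regs = {}
    pc = 0                      # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":
            regs[args[0]] = args[1]
        elif op == "ADD":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "MULTIPLY":
            regs[args[0]] = regs[args[1]] * regs[args[2]]
        elif op == "JUMP-IF-ZERO":
            if regs[args[0]] == 0:
                pc = args[1]    # conditional jump
                continue
        pc += 1
    return regs

# Whether r0 and r1 hold ticket sales, carpet dimensions or spiralling
# molecules makes no difference to what the machine does.
program = [
    ("SET", "r0", 6),
    ("SET", "r1", 7),
    ("MULTIPLY", "r2", "r0", "r1"),
]
print(run(program)["r2"])   # 42
```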

So is there something omitted from the instruction sets of all computers which could be put right by devising a computational operation that could deal with meaning?   For instance, can we devise an operation code such as  UNDERSTAND? 

The answer is no.  This is an omission that cannot be filled by any form of Turing Machine, and since the Turing Machine is the basis of all computation, it cannot be filled at all.    To see why this is, we need to know a little more about the lowest level of all computation - the  Turing Machine.

- The ‘Aboutness’ of the Turing Machine
A Turing Machine is not primarily a physical device (although physical demonstrations have been constructed). Its primary purpose is as a thought-experiment: a precisely defined, simple mathematical object whose precision and simplicity produce a rigorous definition of the fundamental behaviour of all mechanical devices and physical systems.

A Turing machine consists of just two main components: 
(i) A tape of characters, which may be limited to just 1s and 0s.
(ii)  A table of actions, which instructs the machine what to do with each character.

There are also two minor components:
(iii) A read/write head, which simply transfers characters between the tape and the table.
(iv)  A register that holds the numeric identifier for the machine’s current state.

The tape consists of a string of characters. These are sometimes imprecisely described as 'symbols', but this is rather confusing, in that symbols often make reference to something beyond themselves (they exhibit 'derived intentionality', or evoke a qualitative state of mind). It is important to remember that the characters on the tape carry no intrinsic meaning.

The precise definition of the marks on the tape is that they are characters drawn from a defined alphabet, where the term ‘alphabet’ is used in the rather technical sense of a restricted set of characters, such as the 26 characters of the Latin alphabet, the 33 characters of the Russian alphabet, the four characters of the DNA alphabet, or the two characters of the binary alphabet.   The size of the alphabet makes no difference to the capabilities of the Turing Machine, since all characters are capable of being encoded as binary.
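For illustration, the four-character DNA alphabet can be packed into binary at two bits per character (the particular bit assignments here are a hypothetical encoding chosen for the example):

```python
# Re-encode a four-character alphabet as binary: two bits per character,
# since 2**2 = 4 covers the whole alphabet.
code = {"A": "00", "C": "01", "G": "10", "T": "11"}
encoded = "".join(code[c] for c in "GATTACA")
print(encoded)  # 10001111000100
```

The same trick works for any finite alphabet, which is why alphabet size adds no computational power.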

The table consists of five columns, with as many rows of instructions as are needed to do the job.  The columns are:

1  The row's machine state identifier, to be tested against the actual machine state.
2  The row's character, to be tested against the current character as read from the tape.
3  The identifier of the new state to which the machine will change.
4  The new character to be written to the tape.
5  An instruction to move the head one character right or left along the tape.

The machine works by going down the table checking each row until it finds a row where the state identifier corresponds to the machine’s current state as held in the register, and the character corresponds to the character under the head.
In accordance with the three remaining columns in that row, the machine then:
(i) writes the new character to the tape
(ii) moves the head
(iii) changes the state of the register
It then restarts the checking procedure from the top of the table.
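The description above can be transcribed almost line-for-line into Python. This is a minimal sketch; the bit-inverting action table is invented for the example.

```python
# A direct transcription of the machine described above: a tape of
# characters, a five-column action table, a head and a state register.
# Each cycle, the machine scans the table for the row matching its
# current state and the character under the head.

def run_turing(tape, table, state, halt_state="HALT", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        char = tape[head]
        for row_state, row_char, new_state, write_char, move in table:
            if row_state == state and row_char == char:
                tape[head] = write_char           # write the new character
                head += 1 if move == "R" else -1  # move the head
                state = new_state                 # change the state register
                break
    return "".join(tape)

# A one-state table that inverts a binary string, halting on the blank '_'.
table = [
    ("S0", "0", "S0",   "1", "R"),
    ("S0", "1", "S0",   "0", "R"),
    ("S0", "_", "HALT", "_", "R"),
]
print(run_turing("1011_", table, "S0"))  # 0100_
```

Note that nothing in the machine touches anything but characters and state numbers, which is exactly the limitation discussed in the sections below.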

- Computer equivalence of the Turing Machine
So it’s apparent why the Turing Machine isn’t a practical proposition for doing any useful task: the number of rows in the action table would become huge.   Real computers condense the action table into a small set of instructions or ‘opcodes’.    Nevertheless, the simple architecture of the Turing Machine can be mathematically proved to be completely functionally equivalent to any real-world computer.

Computer geeks will have spotted that the tape corresponds to the memory of a computer and the table to its program.  The correspondence between tape and memory is direct and one-to-one, but the correspondence between the action table and a practical computer program is less direct and requires a different kind of architecture to keep the table in a manageable form.

- Physical equivalence of the Turing Machine
Not only can the  Turing machine simulate any other kind of computer, it can simulate and predict the behaviour of any physical system, including any other type of machine.

The tape corresponds to datastructures (including two and three dimensional structures which can be represented by the linear memory array of any computer.)

The table corresponds to causal relationships, including formulae for physical and chemical laws.

So Alan Turing had well and truly defined ‘mechanism’, including biophysical mechanisms such as the body.  We now turn our attention to the Buddhist understanding of mind.

Why Buddhist Philosophy goes beyond the Computer Theory of Mind

- The inability of the Turing machine to emulate mental designation
Buddhist philosophy states that the phenomena we experience depend upon  three modes of ‘existential dependence’:

Causes  - which correspond to the table
Structure - which corresponds to the tape
Mental designation or ‘aboutness’  - for which there is no equivalent structure in the Turing Machine!  As mentioned earlier, the tape consists only of character strings, which in themselves are not ‘about’ anything.

Since mental designation is a fundamental and axiomatic aspect of reality, and cannot be reduced to either structure or causality, it follows that there are aspects of our experience of phenomena that are non-mechanistic and non-physical.

- The inability of the Turing Machine to hold and manipulate qualitative states.
The Buddhist practice of Lamrim meditation uses mental procedures to generate qualitative states of mind.  These qualitative mental feelings are known as 'qualia'.   They are internal subjective mental states which are produced by guided thought procedures.   However, the Turing machine does not possess any structure that could hold or experience such states, nor could any combination of instructions within the table generate such states even if there were something that could hold them.

The inability of the Turing machine to hold internal qualitative mental states is obvious.  The only internal state it can have is the number in its register.    Even if additional registers were added, they could only contain ‘alphabetic’ characters or state numbers, for there is nothing else in the machine and nothing else can get into the machine. For more on this topic, see
Mind and Mechanism – Buddhism and the Turing Machine

Also AI Winter

For a general background see Buddhist Philosophy


Unknown said...

Maybe aboutness isn't something that can be simulated and mental states are not reducible to physical states. Isn't it possible though that physical states might be reducible to mental states? Or that the dichotomy between mental states and physical states is not a true description of reality? It seems to me that it is a sort of dualism, and if mental states can affect physical states and vice versa then there must be some medium through which this interaction takes place. That medium through which cause is propagated would be a more fundamental layer of reality than either mental states or physical states. Both interpretations I think are a sort of calculus we perform on our experience so that we can render it in language. I've always thought that Buddha's ideas implied that reality is psycho-physical and that attempts to explain it in terms of mind or matter are misguided. Just my thoughts as a lay person. I really enjoyed reading the post.
