- Causality
- Structure
- Mental Designation or Meaning
(1) Causal dependency.
Functioning objects exist in dependence on the causes and conditions that brought about their existence in the first place and that continue to sustain it (e.g. acorn, soil, rain, air and sunlight for an oak tree). In particular, causal dependencies show a high degree of regularity: oak trees aren't produced from chestnuts, and the planets don't wander around the solar system at random, but are constrained by Newton's laws.
(2) Compositional and structural dependency (sometimes known as 'mereological' dependency).
Functioning phenomena exist dependently upon their parts, and upon the way that those parts are arranged (structural features such as aspects, divisions, directions etc).
The parts of a functioning phenomenon are known as the 'basis of designation', which, when arranged in an appropriate manner, prompt the observer to designate the entire structure as a single entity. Thus the correct arrangement of pistons, cylinders, crankshaft, spark plugs etc is designated 'engine', and the correct arrangement of engine, wheels, chassis etc is designated 'car'.
But neither engine nor car can exist as independent entities, apart from their bases of designation. See Mereological Dependence in Buddhist Philosophy for a detailed discussion.
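In programming terms, compositional dependency looks rather like a composite data structure. The following sketch (the class and attribute names are invented purely for illustration) shows 'engine' and 'car' as labels designated over arrangements of parts, with nothing left over once the parts are enumerated.
# A minimal sketch of compositional dependency: 'Engine' and 'Car' are
# designations over suitably arranged parts, not extra ingredients.
from dataclasses import dataclass, field

@dataclass
class Engine:
    # The 'engine' is nothing over and above these parts, suitably arranged.
    pistons: int = 4
    cylinders: int = 4
    crankshaft: str = "steel"
    spark_plugs: int = 4

@dataclass
class Car:
    # Likewise the 'car' is a designation over engine, wheels and chassis.
    engine: Engine = field(default_factory=Engine)
    wheels: int = 4
    chassis: str = "monocoque"

car = Car()
# Enumerating the parts exhausts the object: no further 'car' component
# remains once engine, wheels and chassis have been listed.
print(vars(car))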
(3) Conceptual dependency
This is the most subtle mode of existential dependency, and concerns the way that things exist in dependence on our minds designating them by concept and name.
For example, what is a box? Is there some kind of ideal prototype box existing in the Platonic realm of ideal forms, or does a box exist only by arbitrary convention in the mind of the box-user, or from the collective minds of box-users?
If I say "I'll get a box to put this stuff in", then most people will understand that I'm going to fetch a container which performs the conventional function of a box, i.e. holds things. To do this it must have a bottom and at least three sides (like some chocolate boxes), though usually four. A lid is optional.
But if we were to cut the sides of a box down, it would perform the functions of a tray.
The box exists from causes and conditions (the box-maker, the wood from which it is made, the trees, sunlight, soil, rain, lumberjacks etc.)
The box exists in dependence upon its parts (bottom and three or more sides).
The box also exists because I and others decide to call it a box, not because of some inherent 'boxiness' that all boxes have as a defining essence.
If it were a big cardboard box, and I cut a large L-shaped flap out of one side so it hinged like a door, then I could turn it upside down and it would be a child's play-house.
If I cut the sides of a wooden box down a centimetre at a time, then the box would get shallower and shallower. At some point the box would cease to exist and a tray would have begun to exist. So at some arbitrary point did the essence of 'boxiness' miraculously disappear, and 'trayfulness' jump into the undefined structure?
Where does box end and tray start?
I don't know. Maybe there's an EU directive forbidding the construction of boxes with insufficiently high sides, or specifying that all boxes must have lids permanently attached to avoid any possible confusion with trays.
[Image: EU standard box]
Or perhaps there's a Tray Descriptions Act enforcing a maximum height for trays.
But either way, as well as existing in dependence on its parts and on its causes and conditions, the box exists in dependence upon our minds (or the collective minds of the EU Box-Standards Inspectorate).
Our minds project 'box' onto a certain collection of parts, and those same parts can be the common basis of designation of both a box and a tray.
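The arbitrariness of the box/tray boundary can be sketched in code. In the toy example below (the 10 cm cut-off, the class and the function names are all invented for illustration, not any actual standard), one and the same data structure is designated 'box' or 'tray' only by a convention supplied from outside the parts themselves.
# Sketch of conceptual designation: the same collection of parts (a bottom
# plus some sides) is labelled 'box' or 'tray' purely by convention.
from dataclasses import dataclass

@dataclass
class Container:
    sides: int              # number of sides, at least three
    side_height_cm: float   # height of the sides
    has_lid: bool = False

def designate(c: Container, box_threshold_cm: float = 10.0) -> str:
    # Apply a purely conventional cut-off; nothing in the parts themselves
    # fixes where 'boxiness' ends and 'trayfulness' begins.
    return "box" if c.side_height_cm >= box_threshold_cm else "tray"

thing = Container(sides=4, side_height_cm=12.0)
print(designate(thing))                        # 'box'
thing.side_height_cm -= 5.0                    # cut the sides down a little
print(designate(thing))                        # 'tray' - same basis, new label
print(designate(thing, box_threshold_cm=5.0))  # 'box' again, under a different convention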
Mental designation goes all the way up, and all the way down
Developments in twentieth-century physics have shown that the observer cannot be left out of the picture, both at the very smallest scales of reality (quantum physics) and at the very largest (relativity). These findings are consistent with what Buddhists have been saying for over two thousand years: that the observer is part of the system at all levels of reality, not just in our everyday world of domestic storage containers.
Causal regularities in Buddhist philosophy
Unlike occasionalist theologies (such as the Ash'arite school within Islam), which deny natural law and hold that everything happens moment to moment by God's direct and arbitrary will, Buddhism has always viewed regularities in the workings of the universe as axiomatic.
As Jay L. Garfield states in 'The Fundamental Wisdom of the Middle Way' (footnote 29, p. 116):
'The Madhyamika position implies that we should seek to explain regularities by reference to their embeddedness in other regularities, and so on. To ask why there are regularities at all, on such a view, would be to ask an incoherent question. The fact of explanatorily useful regularities in nature is what makes explanation and investigation possible in the first place and is not something itself that can be explained.'
[Image: The mathematical laws governing the motion of the planets can be simulated by clockwork]
The mathematical and algorithmic nature of regularities
Although it may be ultimately incoherent to ask why there are explanatorily useful regularities in nature at all, asking why those regularities take a mathematical form is a valid subject for enquiry.
The standard computer analogy for causality is to regard the laws of physics as being analogous ('isomorphic') to algorithms, with the physical objects being analogous to the datastructures the algorithms act upon.
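As a toy illustration of that analogy (a sketch only, with made-up starting values, not a claim about how physicists actually model the solar system), the 'object' is a data structure holding a planet's position and velocity, and the 'law' is an algorithm that repeatedly updates it:
# Sketch of the analogy: the planet is a data structure (its state);
# Newtonian gravitation is an algorithm repeatedly applied to that state.
from dataclasses import dataclass

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # mass of the Sun, kg

@dataclass
class Planet:
    x: float        # position in metres, with the Sun at the origin
    y: float
    vx: float       # velocity in metres per second
    vy: float

def step(planet: Planet, dt: float) -> None:
    # One semi-implicit Euler step of Newton's law of gravitation.
    r2 = planet.x ** 2 + planet.y ** 2
    r = r2 ** 0.5
    a = -G * M_SUN / r2                  # acceleration, directed towards the Sun
    planet.vx += a * (planet.x / r) * dt
    planet.vy += a * (planet.y / r) * dt
    planet.x += planet.vx * dt
    planet.y += planet.vy * dt

# Roughly Earth-like starting conditions: 1 AU out, ~30 km/s tangential velocity.
earth = Planet(x=1.496e11, y=0.0, vx=0.0, vy=2.98e4)
for _ in range(365 * 24):                # simulate one year in hourly steps
    step(earth, dt=3600.0)
print(earth.x, earth.y)                  # back near the starting point after a year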
From an article by Gregory Chaitin...
'My story begins in 1686 with Gottfried W. Leibniz's philosophical essay Discours de métaphysique (Discourse on Metaphysics), in which he discusses how one can distinguish between facts that can be described by some law and those that are lawless, irregular facts. Leibniz's very simple and profound idea appears in section VI of the Discours, in which he essentially states that a theory has to be simpler than the data it explains, otherwise it does not explain anything. The concept of a law becomes vacuous if arbitrarily high mathematical complexity is permitted, because then one can always construct a law no matter how random and patternless the data really are. Conversely, if the only law that describes some data is an extremely complicated one, then the data are actually lawless.
Today the notions of complexity and simplicity are put in precise quantitative terms by a modern branch of mathematics called algorithmic information theory. Ordinary information theory quantifies information by asking how many bits are needed to encode the information. For example, it takes one bit to encode a single yes/no answer. Algorithmic information, in contrast, is defined by asking what size computer program is necessary to generate the data. The minimum number of bits---what size string of zeros and ones---needed to store the program is called the algorithmic information content of the data. Thus, the infinite sequence of numbers 1, 2, 3, ... has very little algorithmic information; a very short computer program can generate all those numbers. It does not matter how long the program must take to do the computation or how much memory it must use---just the length of the program in bits counts...
...How do such ideas relate to scientific laws and facts? The basic insight is a software view of science: a scientific theory is like a computer program that predicts our observations, the experimental data. Two fundamental principles inform this viewpoint. First, as William of Occam noted, given two theories that explain the data, the simpler theory is to be preferred (Occam's razor). That is, the smallest program that calculates the observations is the best theory. Second is Leibniz's insight, cast in modern terms---if a theory is the same size in bits as the data it explains, then it is worthless, because even the most random of data has a theory of that size. A useful theory is a compression of the data; comprehension is compression. You compress things into computer programs, into concise algorithmic descriptions. The simpler the theory, the better you understand something'
In summary: If a computer program or algorithm is simpler than the system it describes, or the data set that it generates, then the system or data set is said to be 'algorithmically compressible'.
This concept of algorithmic simplicity/complexity can be extended from the realms of mathematics into physical systems. The complexity of a physical system is the length of the minimal algorithm that can simulate or describe it. Thus the orbits of the planets, which seemed so complex to the ancients, were shown by Newton to be algorithmically compressible into a few short equations.
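A crude way to see the idea in code (a sketch only; true algorithmic information content is the length of the shortest possible generating program, which is uncomputable, so any short program merely gives an upper bound) is to compare the size of a data set written out literally with the size of a program that regenerates it:
# Crude sketch of algorithmic compressibility: the first million integers
# take megabytes to write out, but a one-line program regenerates them all.
data = list(range(1, 1_000_001))               # the 'observations': 1, 2, 3, ...
literal_size = len(repr(data))                 # characters needed to write them out

program = "print(list(range(1, 1_000_001)))"   # a program that generates the same data
program_size = len(program)

print(f"data written out literally: {literal_size:,} characters")
print(f"generating program:         {program_size} characters")
# The 'theory' (the program) is vastly smaller than the 'facts' (the data),
# so the data are algorithmically compressible.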
[Image: Visually complex but algorithmically simple]
The computer model of the three levels of dependency
So causal dependency can be modelled as algorithms, and compositional/structural dependency can be modelled as datastructures, but where does that leave conceptual dependency?
According to Buddhist philosophy, the function of the mind cannot be reduced to physical or quasi-physical processes.
The mind is clear, formless, and knows its object. Its knowing the object constitutes the conceptual dependency, which is fundamental, axiomatic and cannot be explained in terms of other phenomena, including algorithms and datastructures.
Buddhism versus Materialism
The question that separates the Materialist from the Buddhist is whether there is anything left to explain about reality once algorithms and datastructures have been factored out.
The Materialist would answer that algorithms and datastructures offer a complete explanation of the universe, without any remainder. The Buddhist would claim that a third factor, mind, is also required.
The Mother of all Algorithms
The mind itself is not algorithmically compressible, but is responsible for carrying out algorithmic compression.
Algorithms, as executed, do not contain within themselves any meaning. For example, the following two statements reduce to exactly the same algorithm within the memory of a computer:
(i) IF RoomLength * RoomWidth > CarpetArea THEN NeedMoreCarpet = TRUE
(ii) IF Audience * TicketPrice > HireOfVenue THEN AvoidedBankruptcy = TRUE
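This is literally true at the machine level. The following sketch recasts the two statements as Python functions (the names are just the examples above; any language with a compiler or bytecode would show the same thing): the compiled operations are identical, and the variable names that carried the 'meaning' play no part in the computation.
# Sketch: two 'meaningful' statements reduce to the same compiled operations.
def need_more_carpet(room_length, room_width, carpet_area):
    return room_length * room_width > carpet_area

def avoided_bankruptcy(audience, ticket_price, hire_of_venue):
    return audience * ticket_price > hire_of_venue

# The raw bytecode, which is what actually runs, is identical for both;
# only the human-readable names differ, and the machine never consults
# those names for 'meaning'.
print(need_more_carpet.__code__.co_code == avoided_bankruptcy.__code__.co_code)   # True
print(need_more_carpet.__code__.co_varnames)    # ('room_length', 'room_width', 'carpet_area')
print(avoided_bankruptcy.__code__.co_varnames)  # ('audience', 'ticket_price', 'hire_of_venue')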
Such considerations have led critics of philosophical computationalism to claim that algorithms can only contain syntax, not semantics. Hence computers can never understand their subject matter. All assignments of meaning to their inputs, internal states and outputs have to be defined from outside the system.
This may explain why the process of writing algorithms does not in itself appear to be algorithmic. The real test of computationalism would be to produce a general purpose algorithm-writing algorithm. A convincing example would be an algorithm that could simulate the mind of a programmer sufficiently to be able to write algorithms to perform such disparate activities as controlling an automatic train, regulating a distillation column, and optimising traffic flows through interlinked sets of lights.
According to the computationalist view this 'Mother of all Algorithms' must exist as an algorithm in the programmer's brain, though why and how such a thing evolved is rather difficult to imagine. It would certainly have conferred no selective advantage on our ancestors until the present generation (even so, do programmers outreproduce normal people?).
The proof of computationalism would be to program the Mother of all Algorithms on a computer. At present no one has the slightest clue of how to even start to go about producing such a thing.
According to Buddhist philosophy this is hardly surprising, as the Mother of all Algorithms is itself NOT an algorithm and never could be programmed. The Mother of all Algorithms is the formless mind projecting meaning onto its objects (i.e. conceptually designating meaning onto the sequential and structural components of the algorithm as it is being written).
The non-algorithmic dimension of mind, the understanding of meaning, is needed to turn the user's (semantically expressed) requirements into the purely syntactic structural and causal relationships of the algorithmic flowchart or code.
Minds, machines and meaning
The closest computer analogy to conceptual dependency, as far as one is possible, is the 'meaning' of symbolic variables, which gets stripped out of high-level languages during compilation to machine code. This removal of meaning is inevitable because a machine cannot understand, interpret, use or manipulate meaning. Only minds can grasp meaning, hence the programmer's lament:
I'm sick and tired of this machine
I think I'm going to sell it
It never does do what I mean
But only what I tell it
Neuroenvy
"...So just what can be proved about people by the close observation of
their brains? We can be conceptualised in two ways: as organisms and as
objects of personal interaction. The first way employs the concept
‘human being’, and derives our behaviour from a biological science of
man. The second way employs the concept ‘person’, which is not the
concept of a natural kind, but of an entity that relates to others in a
familiar but complex way that we know intuitively but find hard to
describe. Through the concept of the person, and the associated notions
of freedom, responsibility, reason for action, right, duty, justice and
guilt, we gain the description under which human beings are seen, by
those who respond to them as they truly are. When we endeavour to
understand persons through the half-formed theories of neuroscience we
are tempted to pass over their distinctive features in silence, or else
to attribute them to some brain-shaped homunculus inside. For we
understand people by facing them, by arguing with them, by understanding
their reasons, aspirations and plans. All of that involves another
language, and another conceptual scheme, from those deployed in the
biological sciences. We do not understand brains by facing them, for
they have no face.
We should recognise that not all coherent questions about human nature
and conduct are scientific questions, concerning the laws governing
cause and effect. Most of our questions about persons and their doings
are about interpretation: what did he mean by that? What did her words
imply? What is signified by the hand of Michelangelo’s David? Those are
real questions, which invite disciplined answers. And there are
disciplines that attempt to answer them. The law is one such. It
involves making reasoned attributions of liability and responsibility,
using methods that are not reducible to any explanatory science, and not
replaceable by neuroscience, however many advances that science might
make. The invention of ‘neurolaw’ is, it seems to me, profoundly
dangerous, since it cannot fail to abolish freedom and accountability —
not because those things don’t exist, but because they will never crop
up in a brain scan.
Suppose a computer is programmed to ‘read’, as we say, a digitally
encoded input, which it translates into pixels, causing it to display
the picture of a woman on its screen. In order to describe this process
we do not need to refer to the woman in the picture. The entire process
can be completely described in terms of the hardware that translates
digital data into pixels, and the software, or algorithm, which contains
the instructions for doing this. There is neither the need nor the
right, in this case, to use concepts like those of seeing, thinking,
observing, in describing what the computer is doing; nor do we have
either the need or the right to describe the thing observed in the
picture, as playing any causal role, or any role at all, in the
operation of the computer. Of course, we see the woman in the picture.
And to us the picture contains information of quite another kind from
that encoded in the digitalised instructions for producing it. It
conveys information about a woman and how she looks. To describe this
kind of information is impossible without describing the content of
certain thoughts — thoughts that arise in people when they look at each
other face to face.
But how do we move from the one concept of information to the other? How
do we explain the emergence of thoughts about something from processes
that reside in the transformation of visually encoded data? Cognitive
science doesn’t tell us. And computer models of the brain won’t tell us
either. They might show how images get encoded in digitalised format and
transmitted in that format by neural pathways to the centre where they
are ‘interpreted’. But that centre does not in fact interpret –
interpreting is a process that we do, in seeing what is there before us.
When it comes to the subtle features of the human condition, to the
byways of culpability and the secrets of happiness and grief, we need
guidance and study if we are to interpret things correctly. That is what
the humanities provide, and that is why, when scholars who purport to
practise them, add the prefix ‘neuro’ to their studies, we should expect
their researches to be nonsense."
- Sean Robsville