Tuesday, 17 January 2012

Metameme


Could the metameme have prevented this?


Here's a classic Richard Dawkins text from the archives. It's worth revisiting now and then to see how religions can so easily take on the form of predatory mind-viruses. Fortunately Buddhism, being a rational philosophy, is immune to this problem. 

In Geek-week I've been looking at the use of computer analogies to illuminate some of the more obscure aspects of Buddhist philosophy. This post is a golden oldie on memes as the cultural analogues of computer viruses: specifically those malignant religious memes which take over believers' minds and, in extreme cases, turn them into zombie killing machines.

From the Buddhist point of view, memes are contagious intellectually-formed delusions, which harness and synergize the three poisons of the mind to enable their infectious spread.



The metameme
One interesting question is whether meme theory is itself a meme ('The Metameme'), and whether its spread could block, and give immunity against, more pernicious memes, much as the harmless cowpox virus confers immunity against the lethal smallpox virus.

If you are a sexually repressed teenager who suddenly realises that the promise of 72 virgins for killing kuffars is nothing more than a mechanism by which a mind-virus ensures its dominance over competing memes (by eliminating their carriers), then you may be less enthusiastic about blowing yourself and your fellow passengers to pieces on a train or bus.

In fact, maybe most of the top Death Cult franchisees have sussed out that their murderous mumbo-jumbo is in fact a mind-parasite, because they never blow themselves up or get their own kids to blow themselves up. It's always some poor gullible adolescent.


Here's the original article, slightly reformatted for ease of reading in blogger layout...


Viruses of the Mind
Richard Dawkins

1991

The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds. What memes provide in return to the organisms in which they reside is an incalculable store of advantages --- with some Trojan horses thrown in for good measure... --- Daniel Dennett, Consciousness Explained

1 Duplication Fodder

A beautiful child close to me, six and the apple of her father's eye, believes that Thomas the Tank Engine really exists. She believes in Father Christmas, and when she grows up her ambition is to be a tooth fairy. She and her school-friends believe the solemn word of respected adults that tooth fairies and Father Christmas really exist. This little girl is of an age to believe whatever you tell her. If you tell her about witches changing princes into frogs she will believe you. If you tell her that bad children roast forever in hell she will have nightmares.

The nun's story

I have just discovered that without her father's consent this sweet, trusting, gullible six-year-old is being sent, for weekly instruction, to a Roman Catholic nun. What chance has she?

A human child is shaped by evolution to soak up the culture of her people. Most obviously, she learns the essentials of their language in a matter of months. A large dictionary of words to speak, an encyclopedia of information to speak about, complicated syntactic and semantic rules to order the speaking, are all transferred from older brains into hers well before she reaches half her adult size. When you are pre-programmed to absorb useful information at a high rate, it is hard to shut out pernicious or damaging information at the same time.

With so many mindbytes to be downloaded, so many mental codons to be replicated, it is no wonder that child brains are gullible, open to almost any suggestion, vulnerable to subversion, easy prey to Moonies, Scientologists and nuns. Like immune-deficient patients, children are wide open to mental infections that adults might brush off without effort.


DNA, too, includes parasitic code. Cellular machinery is extremely good at copying DNA. Where DNA is concerned, it seems to have an eagerness to copy, seems eager to be copied. The cell nucleus is a paradise for DNA, humming with sophisticated, fast, and accurate duplicating machinery.

Cellular machinery is so friendly towards DNA duplication that it is small wonder cells play host to DNA parasites --- viruses, viroids, plasmids and a riff-raff of other genetic fellow travelers. Parasitic DNA even gets itself spliced seamlessly into the chromosomes themselves. ``Jumping genes'' and stretches of ``selfish DNA'' cut or copy themselves out of chromosomes and paste themselves in elsewhere. Deadly oncogenes are almost impossible to distinguish from the legitimate genes between which they are spliced. In evolutionary time, there is probably a continual traffic from ``straight'' genes to ``outlaw,'' and back again (Dawkins, 1982).

DNA is just DNA. The only thing that distinguishes viral DNA from host DNA is its expected method of passing into future generations. ``Legitimate'' host DNA is just DNA that aspires to pass into the next generation via the orthodox route of sperm or egg. ``Outlaw'' or parasitic DNA is just DNA that looks to a quicker, less cooperative route to the future, via a squeezed droplet or a smear of blood, rather than via a sperm or egg.

For data on a floppy disc, a computer is a humming paradise just as cell nuclei hum with eagerness to duplicate DNA. Computers and their associated disc and tape readers are designed with high fidelity in mind. As with DNA molecules, magnetized bytes don't literally ``want'' to be faithfully copied. Nevertheless, you can write a computer program that takes steps to duplicate itself. Not just duplicate itself within one computer but spread itself to other computers. Computers are so good at copying bytes, and so good at faithfully obeying the instructions contained in those bytes, that they are sitting ducks to self-replicating programs: wide open to subversion by software parasites.

Any cynic familiar with the theory of selfish genes and memes would have known that modern personal computers, with their promiscuous traffic of floppy discs and e-mail links, were just asking for trouble. The only surprising thing about the current epidemic of computer viruses is that it has been so long in coming.

2 Computer Viruses: a Model for an Informational Epidemiology

Computer viruses are pieces of code that graft themselves into existing, legitimate programs and subvert the normal actions of those programs. They may travel on exchanged floppy disks, or over networks. They are technically distinguished from ``worms'' which are whole programs in their own right, usually traveling over networks. Rather different are ``Trojan horses,'' a third category of destructive programs, which are not in themselves self-replicating but rely on humans to replicate them because of their pornographic or otherwise appealing content. Both viruses and worms are programs that actually say, in computer language, ``Duplicate me.'' Both may do other things that make their presence felt and perhaps satisfy the hole-in-corner vanity of their authors. These side-effects may be ``humorous'' (like the virus that makes the Macintosh's built-in loudspeaker enunciate the words ``Don't panic,'' with predictably opposite effect); malicious (like the numerous IBM viruses that erase the hard disk after a sniggering screen-announcement of the impending disaster); political (like the Spanish Telecom and Beijing viruses that protest about telephone costs and massacred students respectively); or simply inadvertent (the programmer is incompetent to handle the low-level system calls required to write an effective virus or worm).

The famous Internet Worm, which paralyzed much of the computing power of the United States on November 2, 1988, was not intended (very) maliciously but got out of control and, within 24 hours, had clogged around 6,000 computer memories with exponentially multiplying copies of itself.

``Memes now spread around the world at the speed of light, and replicate at rates that make even fruit flies and yeast cells look glacial in comparison. They leap promiscuously from vehicle to vehicle, and from medium to medium, and are proving to be virtually unquarantinable'' (Dennett 1990, p.131). Viruses aren't limited to electronic media such as disks and data lines. On its way from one computer to another, a virus may pass through printing ink, light rays in a human lens, optic nerve impulses and finger muscle contractions.

A computer fanciers' magazine that printed the text of a virus program for the interest of its readers has been widely condemned. Indeed, such is the appeal of the virus idea to a certain kind of puerile mentality (the masculine gender is used advisedly), that publication of any kind of ``how to'' information on designing virus programs is rightly seen as an irresponsible act.

I am not going to publish any virus code. But there are certain tricks of effective virus design that are sufficiently well known, even obvious, that it will do no harm to mention them, as I need to do to develop my theme. They all stem from the virus's need to evade detection while it is spreading.

A virus that clones itself too prolifically within one computer will soon be detected because the symptoms of clogging will become too obvious to ignore. For this reason many virus programs check, before infecting a system, to make sure that they are not already on that system. Incidentally, this opens the way for a defense against viruses that is analogous to immunization. In the days before a specific anti-virus program was available, I myself responded to an early infection of my own hard disk by means of a crude ``vaccination.''

Instead of deleting the virus that I had detected, I simply disabled its coded instructions, leaving the ``shell'' of the virus with its characteristic external ``signature'' intact. In theory, subsequent members of the same virus species that arrived in my system should have recognized the signature of their own kind and refrained from trying to double-infect. I don't know whether this immunization really worked, but in those days it probably was worth while ``gutting'' a virus and leaving a shell like this, rather than simply removing it lock, stock and barrel. Nowadays it is better to hand the problem over to one of the professionally written anti-virus programs.
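
[Blog note: Dawkins wisely publishes no virus code, and neither will I, but the population dynamics of his ``gutted shell'' vaccination are easy and harmless to simulate. In the Python toy model below everything is invented for illustration: the signature string, the fleet size and the vaccination rate. The point is simply that a replicator which checks for its own signature before infecting spares any machine already carrying the bare signature.]

import random

# Toy model of the ``gutted shell'' vaccination: the virus refrains from
# infecting any machine that already carries its signature, so a machine
# seeded with the disabled shell (signature only, no payload) escapes.
SIGNATURE = "VIRUS-SIG-42"  # hypothetical marker; purely illustrative

class Machine:
    def __init__(self, vaccinated=False):
        self.markers = {SIGNATURE} if vaccinated else set()
        self.infected = False

    def expose(self):
        # Self-check before infection, as described in the text above.
        if SIGNATURE not in self.markers:
            self.markers.add(SIGNATURE)
            self.infected = True

random.seed(1)
fleet = [Machine(vaccinated=random.random() < 0.5) for _ in range(1000)]
for machine in fleet:
    machine.expose()
print(sum(m.infected for m in fleet), "of", len(fleet), "machines infected")
# Roughly the unvaccinated half is infected; the vaccinated half escapes.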

A virus that is too virulent will be rapidly detected and scotched. A virus that instantly and catastrophically sabotages every computer in which it finds itself will not find itself in many computers. It may have a most amusing effect on one computer --- erase an entire doctoral thesis or something equally side-splitting --- but it won't spread as an epidemic.

Some viruses, therefore, are designed to have an effect that is small enough to be difficult to detect, but which may nevertheless be extremely damaging. There is one type, which, instead of erasing disk sectors wholesale, attacks only spreadsheets, making a few random changes in the (usually financial) quantities entered in the rows and columns. Other viruses evade detection by being triggered probabilistically, for example erasing only one in 16 of the hard disks infected. Yet other viruses employ the time-bomb principle. Most modern computers are ``aware'' of the date, and viruses have been triggered to manifest themselves all around the world, on a particular date such as Friday 13th or April Fool's Day. From the parasitic point of view, it doesn't matter how catastrophic the eventual attack is, provided the virus has had plenty of opportunity to spread first (a disturbing analogy to the Medawar/Williams theory of ageing: we are the victims of lethal and sub-lethal genes that mature only after we have had plenty of time to reproduce (Williams, 1957)). In defense, some large companies go so far as to set aside one ``miner's canary'' among their fleet of computers, and advance its internal calendar a week so that any time-bomb viruses will reveal themselves prematurely before the big day.
 
Again predictably, the epidemic of computer viruses has triggered an arms race. Anti-viral software is doing a roaring trade. These antidote programs --- ``Interferon,'' ``Vaccine,'' ``Gatekeeper'' and others --- employ a diverse armory of tricks. Some are written with specific, known and named viruses in mind. Others intercept any attempt to meddle with sensitive system areas of memory and warn the user.

The virus principle could, in theory, be used for non-malicious, even beneficial purposes. Thimbleby (1991) coins the phrase ``liveware'' for his already-implemented use of the infection principle for keeping multiple copies of databases up to date. Every time a disk containing the database is plugged into a computer, it looks to see whether there is already another copy present on the local hard disk. If there is, each copy is updated in the light of the other. So, with a bit of luck, it doesn't matter which member of a circle of colleagues enters, say, a new bibliographical citation on his personal disk. His newly entered information will readily infect the disks of his colleagues (because the colleagues promiscuously insert their disks into one another's computers) and will spread like an epidemic around the circle. Thimbleby's liveware is not entirely virus-like: it could not spread to just anybody's computer and do damage. It spreads data only to already-existing copies of its own database; and you will not be infected by liveware unless you positively opt for infection.
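
[Blog note: a minimal Python sketch of the liveware idea, under the assumption that every record carries a timestamp; Thimbleby's actual implementation is not shown here and surely differs. When two copies of the database meet, each adopts the other's newer entries, which is how a new citation spreads copy-to-copy around a circle of colleagues.]

from datetime import datetime

# Liveware-style mutual update: records map key -> (timestamp, value),
# and when two copies meet, the newer entry for each key wins in both.
def synchronise(copy_a: dict, copy_b: dict) -> None:
    for key in set(copy_a) | set(copy_b):
        entries = [e for e in (copy_a.get(key), copy_b.get(key)) if e]
        newest = max(entries, key=lambda entry: entry[0])
        copy_a[key] = copy_b[key] = newest

alice = {"dawkins82": (datetime(1991, 5, 1), "The Extended Phenotype")}
bob = {"zahavi75": (datetime(1991, 6, 2), "Mate selection")}
synchronise(alice, bob)  # both copies now hold both citations
print(alice.keys() == bob.keys())  # True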
 
Incidentally, Thimbleby, who is much concerned with the virus menace, points out that you can gain some protection by using computer systems that other people don't use. The usual justification for purchasing today's numerically dominant computer is simply and solely that it is numerically dominant. Almost every knowledgeable person agrees that, in terms of quality and especially user-friendliness, the rival, minority system is superior. Nevertheless, ubiquity is held to be good in itself, sufficient to outweigh sheer quality. Buy the same (albeit inferior) computer as your colleagues, the argument goes, and you'll be able to benefit from shared software, and from a generally large circulation of available software. The irony is that, with the advent of the virus plague, ``benefit'' is not all that you are likely to get. Not only should we all be very hesitant before we accept a disk from a colleague. We should also be aware that, if we join a large community of users of a particular make of computer, we are also joining a large community of viruses --- even, it turns out, disproportionately larger.

Returning to possible uses of viruses for positive purposes, there are proposals to exploit the ``poacher turned gamekeeper'' principle, and ``set a thief to catch a thief.'' A simple way would be to take any of the existing anti-viral programs and load it, as a ``warhead,'' into a harmless self-replicating virus. From a ``public health'' point of view, a spreading epidemic of anti-viral software could be especially beneficial because the computers most vulnerable to malicious viruses --- those whose owners are promiscuous in the exchange of pirated programs --- will also be most vulnerable to infection by the healing anti-virus. A more penetrating anti-virus might --- as in the immune system --- ``learn'' or ``evolve'' an improved capacity to attack whatever viruses it encountered.

I can imagine other uses of the computer virus principle which, if not exactly altruistic, are at least constructive enough to escape the charge of pure vandalism. A computer company might wish to do market research on the habits of its customers, with a view to improving the design of future products. Do users like to choose files by pictorial icon, or do they opt to display them by textual name only? How deeply do people nest folders (directories) within one another? Do people settle down for a long session with only one program, say a word processor, or are they constantly switching back and forth, say between writing and drawing programs? Do people succeed in moving the mouse pointer straight to the target, or do they meander around in time-wasting hunting movements that could be rectified by a change in design?

The company could send out a questionnaire asking all these questions, but the customers that replied would be a biased sample and, in any case, their own assessment of their computer-using behavior might be inaccurate. A better solution would be a market-research computer program. Customers would be asked to load this program into their system where it would unobtrusively sit, quietly monitoring and tallying key-presses and mouse movements. At the end of a year, the customer would be asked to send in the disk file containing all the tallyings of the market-research program. But again, most people would not bother to cooperate and some might see it as an invasion of privacy and of their disk space.

The perfect solution, from the company's point of view, would be a virus. Like any other virus, it would be self-replicating and secretive. But it would not be destructive or facetious like an ordinary virus. Along with its self-replicating booster it would contain a market-research warhead. The virus would be released surreptitiously into the community of computer users. Just like an ordinary virus it would spread around, as people passed floppy disks and e-mail around the community. As the virus spread from computer to computer, it would build up statistics on users' behavior, monitored secretly from deep within a succession of systems. Every now and again, a copy of the virus would happen to find its way, by normal epidemic traffic, back into one of the company's own computers. There it would be debriefed and its data collated with data from other copies of the virus that had come ``home.''

Looking into the future, it is not fanciful to imagine a time when viruses, both bad and good, have become so ubiquitous that we could speak of an ecological community of viruses and legitimate programs coexisting in the silicosphere. At present, software is advertised as, say, ``Compatible with System 7.'' In the future, products may be advertised as ``Compatible with all viruses registered in the 1998 World Virus Census; immune to all listed virulent viruses; takes full advantage of the facilities offered by the following benign viruses if present...'' Word-processing software, say, may hand over particular functions, such as word-counting and string-searches, to friendly viruses burrowing autonomously through the text.

Looking even further into the future, whole integrated software systems might grow, not by design, but by something like the growth of an ecological community such as a tropical rain-forest. Gangs of mutually compatible viruses might grow up, in the same way as genomes can be regarded as gangs of mutually compatible genes (Dawkins, 1982). Indeed, I have even suggested that our genomes should be regarded as gigantic colonies of viruses (Dawkins, 1976). Genes cooperate with one another in genomes because natural selection has favored those genes that prosper in the presence of the other genes that happen to be common in the gene pool. Different gene pools may evolve towards different combinations of mutually compatible genes. I envisage a time when, in the same kind of way, computer viruses may evolve towards compatibility with other viruses, to form communities or gangs. But then again, perhaps not! At any rate, I find the speculation more alarming than exciting.


At present, computer viruses don't strictly evolve. They are invented by human programmers, and if they evolve they do so in the same weak sense as cars or aeroplanes evolve. Designers derive this year's car as a slight modification of last year's car, and then may, more or less consciously, continue a trend of the last few years --- further flattening of the radiator grill or whatever it may be. Computer virus designers dream up ever more devious tricks for outwitting the programmers of anti-virus software. But computer viruses don't --- so far --- mutate and evolve by true natural selection. They may do so in the future. Whether they evolve by natural selection, or whether their evolution is steered by human designers, may not make much difference to their eventual performance. By either kind of evolution, we expect them to become better at concealment, and we expect them to become subtly compatible with other viruses that are at the same time prospering in the computer community.

DNA viruses and computer viruses spread for the same reason: an environment exists in which there is machinery well set up to duplicate and spread them around and to obey the instructions that the viruses embody. These two environments are, respectively, the environment of cellular physiology and the environment provided by a large community of computers and data-handling machinery. Are there any other environments like these, any other humming paradises of replication?



3 The Infected Mind

I have already alluded to the programmed-in gullibility of a child, so useful for learning language and traditional wisdom, and so easily subverted by nuns, Moonies and their ilk. More generally, we all exchange information with one another. We don't exactly plug floppy disks into slots in one another's skulls, but we exchange sentences, both through our ears and through our eyes. We notice each other's styles of moving and dressing and are influenced. We take in advertising jingles, and are presumably persuaded by them, otherwise hard-headed businessmen would not spend so much money polluting the air with them.

Think about the two qualities that a virus, or any sort of parasitic replicator, demands of a friendly medium: the two qualities that make cellular machinery so friendly towards parasitic DNA, and that make computers so friendly towards computer viruses. These qualities are, firstly, a readiness to replicate information accurately, perhaps with some mistakes that are subsequently reproduced accurately; and, secondly, a readiness to obey instructions encoded in the information so replicated.

Cellular machinery and electronic computers excel in both these virus-friendly qualities. How do human brains match up? As faithful duplicators, they are certainly less perfect than either cells or electronic computers. Nevertheless, they are still pretty good, perhaps about as faithful as an RNA virus, though not as good as DNA with all its elaborate proofreading measures against textual degradation.

Evidence of the fidelity of brains, especially child brains, as data duplicators is provided by language itself. Shaw's Professor Higgins was able by ear alone to place Londoners in the street where they grew up. Fiction is not evidence for anything, but everyone knows that Higgins's fictional skill is only an exaggeration of something we can all do. Any American can tell Deep South from Mid West, New England from Hillbilly. Any New Yorker can tell Bronx from Brooklyn. Equivalent claims could be substantiated for any country. What this phenomenon means is that human brains are capable of pretty accurate copying (otherwise the accents of, say, Newcastle would not be stable enough to be recognized) but with some mistakes (otherwise pronunciation would not evolve, and all speakers of a language would inherit identically the same accents from their remote ancestors). Language evolves, because it has both the great stability and the slight changeability that are prerequisites for any evolving system.
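
[Blog note: the accent argument, in Python. The alphabet, message and error rate below are arbitrary illustrations; the point is that high-fidelity copying with rare mistakes yields exactly the stability-plus-slight-changeability that an evolving system needs.]

import random

# Copy a string down the generations, mutating each character with a
# small probability. The text stays recognisable for many generations
# (stability) yet slowly drifts (changeability).
random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
ERROR_RATE = 0.002  # per-character chance of a copying mistake

def copy_with_errors(text):
    return "".join(random.choice(ALPHABET) if random.random() < ERROR_RATE
                   else char for char in text)

text = "the rain in spain stays mainly in the plain"
for generation in range(1, 501):
    text = copy_with_errors(text)
    if generation % 100 == 0:
        print("generation", generation, ":", text)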



The second requirement of a virus-friendly environment --- that it should obey a program of coded instructions --- is again only quantitatively less true for brains than for cells or computers. We sometimes obey orders from one another, but also we sometimes don't. Nevertheless, it is a telling fact that, the world over, the vast majority of children follow the religion of their parents rather than any of the other available religions. Instructions to genuflect, to bow towards Mecca, to nod one's head rhythmically towards the wall, to shake like a maniac, to ``speak in tongues'' --- the list of such arbitrary and pointless motor patterns offered by religion alone is extensive --- are obeyed, if not slavishly, at least with some reasonably high statistical probability.



Less portentously, and again especially prominent in children, the ``craze'' is a striking example of behavior that owes more to epidemiology than to rational choice. Yo-yos, hula hoops and pogo sticks, with their associated behavioral fixed actions, sweep through schools, and more sporadically leap from school to school, in patterns that differ from a measles epidemic in no serious particular. Ten years ago, you could have traveled thousands of miles through the United States and never seen a baseball cap turned back to front. Today, the reverse baseball cap is ubiquitous. I do not know what the pattern of geographical spread of the reverse baseball cap precisely was, but epidemiology is certainly among the professions primarily qualified to study it. We don't have to get into arguments about ``determinism''; we don't have to claim that children are compelled to imitate their fellows' hat fashions. It is enough that their hat-wearing behavior, as a matter of fact, is statistically affected by the hat-wearing behavior of their fellows.

Trivial though they are, crazes provide us with yet more circumstantial evidence that human minds, especially perhaps juvenile ones, have the qualities that we have singled out as desirable for an informational parasite. At the very least the mind is a plausible candidate for infection by something like a computer virus, even if it is not quite such a parasite's dream-environment as a cell nucleus or an electronic computer.

It is intriguing to wonder what it might feel like, from the inside, if one's mind were the victim of a ``virus.'' This might be a deliberately designed parasite, like a present-day computer virus. Or it might be an inadvertently mutated and unconsciously evolved parasite. Either way, especially if the evolved parasite was the memic descendant of a long line of successful ancestors, we are entitled to expect the typical ``mind virus'' to be pretty good at its job of getting itself successfully replicated.

Progressive evolution of more effective mind-parasites will have two aspects. New ``mutants'' (either random or designed by humans) that are better at spreading will become more numerous. And there will be a ganging up of ideas that flourish in one another's presence, ideas that mutually support one another just as genes do and as I have speculated computer viruses may one day do. We expect that replicators will go around together from brain to brain in mutually compatible gangs. These gangs will come to constitute a package, which may be sufficiently stable to deserve a collective name such as Roman Catholicism or Voodoo. It doesn't too much matter whether we analogize the whole package to a single virus, or each one of the component parts to a single virus. The analogy is not that precise anyway, just as the distinction between a computer virus and a computer worm is nothing to get worked up about. What matters is that minds are friendly environments to parasitic, self-replicating ideas or information, and that minds are typically massively infected.



Like computer viruses, successful mind viruses will tend to be hard for their victims to detect. If you are the victim of one, the chances are that you won't know it, and may even vigorously deny it. Accepting that a virus might be difficult to detect in your own mind, what tell-tale signs might you look out for? I shall answer by imagining how a medical textbook might describe the typical symptoms of a sufferer (arbitrarily assumed to be male).

1. The patient typically finds himself impelled by some deep, inner conviction that something is true, or right, or virtuous: a conviction that doesn't seem to owe anything to evidence or reason, but which, nevertheless, he feels as totally compelling and convincing. We doctors refer to such a belief as ``faith.''

2. Patients typically make a positive virtue of faith's being strong and unshakable, in spite of not being based upon evidence. Indeed, they may feel that the less evidence there is, the more virtuous the belief (see below).
This paradoxical idea that lack of evidence is a positive virtue where faith is concerned has something of the quality of a program that is self-sustaining, because it is self-referential (see the chapter ``On Viral Sentences and Self-Replicating Structures'' in Hofstadter, 1985). Once the proposition is believed, it automatically undermines opposition to itself. The ``lack of evidence is a virtue'' idea could be an admirable sidekick, ganging up with faith itself in a clique of mutually supportive viral programs.

3. A related symptom, which a faith-sufferer may also present, is the conviction that ``mystery,'' per se, is a good thing. It is not a virtue to solve mysteries. Rather we should enjoy them, even revel in their insolubility.
Any impulse to solve mysteries could be seriously inimical to the spread of a mind virus. It would not, therefore, be surprising if the idea that ``mysteries are better not solved'' was a favored member of a mutually supporting gang of viruses.



Take the ``Mystery of Transubstantiation.'' It is easy and non-mysterious to believe that in some symbolic or metaphorical sense the eucharistic wine turns into the blood of Christ. The Roman Catholic doctrine of transubstantiation, however, claims far more. The ``whole substance'' of the wine is converted into the blood of Christ; the appearance of wine that remains is ``merely accidental,'' ``inhering in no substance'' (Kenny, 1986, p. 72). Transubstantiation is colloquially taught as meaning that the wine ``literally'' turns into the blood of Christ. Whether in its obfuscatory Aristotelian or its franker colloquial form, the claim of transubstantiation can be made only if we do serious violence to the normal meanings of words like ``substance'' and ``literally.'' Redefining words is not a sin, but, if we use words like ``whole substance'' and ``literally'' for this case, what word are we going to use when we really and truly want to say that something did actually happen? As Anthony Kenny observed of his own puzzlement as a young seminarian, ``For all I could tell, my typewriter might be Benjamin Disraeli transubstantiated....''


Roman Catholics, whose belief in infallible authority compels them to accept that wine becomes physically transformed into blood despite all appearances, refer to the ``mystery'' of transubstantiation. Calling it a mystery makes everything OK, you see. At least, it works for a mind well prepared by background infection. Exactly the same trick is performed in the ``mystery'' of the Trinity. Mysteries are not meant to be solved, they are meant to strike awe. The ``mystery is a virtue'' idea comes to the aid of the Catholic, who would otherwise find intolerable the obligation to believe the obvious nonsense of the transubstantiation and the ``three-in-one.'' Again, the belief that ``mystery is a virtue'' has a self-referential ring. As Hofstadter might put it, the very mysteriousness of the belief moves the believer to perpetuate the mystery.


An extreme symptom of ``mystery is a virtue'' infection is Tertullian's ``Certum est quia impossibile est'' (``It is certain because it is impossible''). That way madness lies. One is tempted to quote Lewis Carroll's White Queen, who, in response to Alice's ``One can't believe impossible things'' retorted ``I daresay you haven't had much practice... When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast.'' Or Douglas Adams' Electric Monk, a labor-saving device programmed to do your believing for you, which was capable of ``believing things they'd have difficulty believing in Salt Lake City'' and which, at the moment of being introduced to the reader, believed, contrary to all the evidence, that everything in the world was a uniform shade of pink. But White Queens and Electric Monks become less funny when you realize that these virtuoso believers are indistinguishable from revered theologians in real life. ``It is by all means to be believed, because it is absurd'' (Tertullian again). Sir Thomas Browne (1635) quotes Tertullian with approval, and goes further: ``Methinks there be not impossibilities enough in religion for an active faith.'' And ``I desire to exercise my faith in the difficultest point; for to credit ordinary and visible objects is not faith, but perswasion [sic].''


I have the feeling that something more interesting is going on here than just plain insanity or surrealist nonsense, something akin to the admiration we feel when we watch a ten-ball juggler on a tightrope. It is as though the faithful gain prestige through managing to believe even more impossible things than their rivals succeed in believing. Are these people testing --- exercising --- their believing muscles, training themselves to believe impossible things so that they can take in their stride the merely improbable things that they are ordinarily called upon to believe?

While I was writing this, the Guardian (July 29, 1991) fortuitously carried a beautiful example. It came in an interview with a rabbi undertaking the bizarre task of vetting the kosher-purity of food products right back to the ultimate origins of their minutest ingredients. He was currently agonizing over whether to go all the way to China to scrutinize the menthol that goes into cough sweets. ``Have you ever tried checking Chinese menthol... it was extremely difficult, especially since the first letter we sent received the reply in best Chinese English, `The product contains no kosher'... China has only recently started opening up to kosher investigators. The menthol should be OK, but you can never be absolutely sure unless you visit.'' These kosher investigators run a telephone hot-line on which up-to-the-minute red-alerts of suspicion are recorded against chocolate bars and cod-liver oil. The rabbi sighs that the green-inspired trend away from artificial colors and flavors ``makes life miserable in the kosher field because you have to follow all these things back.'' When the interviewer asks him why he bothers with this obviously pointless exercise, he makes it very clear that the point is precisely that there is no point:
That most of the Kashrut laws are divine ordinances without reason given is 100 per cent the point. It is very easy not to murder people. Very easy. It is a little bit harder not to steal because one is tempted occasionally. So that is no great proof that I believe in God or am fulfilling His will. But, if He tells me not to have a cup of coffee with milk in it with my mincemeat and peas at lunchtime, that is a test. The only reason I am doing that is because I have been told to so do. It is something difficult.
Helena Cronin has suggested to me that there may be an analogy here to Zahavi's handicap theory of sexual selection and the evolution of signals (Zahavi, 1975). Long unfashionable, even ridiculed (Dawkins, 1976), Zahavi's theory has recently been cleverly rehabilitated (Grafen, 1990a, b) and is now taken seriously by evolutionary biologists (Dawkins, 1989). Zahavi suggests that peacocks, for instance, evolve their absurdly burdensome fans with their ridiculously conspicuous (to predators) colors, precisely because they are burdensome and dangerous, and therefore impressive to females. The peacock is, in effect, saying: ``Look how fit and strong I must be, since I can afford to carry around this preposterous tail.''

To avoid misunderstanding of the subjective language in which Zahavi likes to make his points, I should add that the biologist's convention of personifying the unconscious actions of natural selection is taken for granted here. Grafen has translated the argument into an orthodox Darwinian mathematical model, and it works. No claim is here being made about the intentionality or awareness of peacocks and peahens. They can be as sphexish or as intentional as you please (Dennett, 1983, 1984). Moreover, Zahavi's theory is general enough not to depend upon a Darwinian underpinning. A flower advertising its nectar to a ``skeptical'' bee could benefit from the Zahavi principle. But so could a human salesman seeking to impress a client.

The premise of Zahavi's idea is that natural selection will favor skepticism among females (or among recipients of advertising messages generally). The only way for a male (or any advertiser) to authenticate his boast of strength (quality, or whatever it is) is to prove that it is true by shouldering a truly costly handicap --- a handicap that only a genuinely strong (high quality, etc.) male could bear. It may be called the principle of costly authentication. And now to the point. Is it possible that some religious doctrines are favored not in spite of being ridiculous but precisely because they are ridiculous? Any wimp in religion could believe that bread symbolically represents the body of Christ, but it takes a real, red-blooded Catholic to believe something as daft as the transubstantiation. If you believe that you can believe anything, and (witness the story of Doubting Thomas) these people are trained to see that as a virtue.

Let us return to our list of symptoms that someone afflicted with the mental virus of faith, and its accompanying gang of secondary infections, may expect to experience.


4. The sufferer may find himself behaving intolerantly towards vectors of rival faiths, in extreme cases even killing them or advocating their deaths. He may be similarly violent in his disposition towards apostates (people who once held the faith but have renounced it); or towards heretics (people who espouse a different --- often, perhaps significantly, only very slightly different --- version of the faith). He may also feel hostile towards other modes of thought that are potentially inimical to his faith, such as the method of scientific reason which may function rather like a piece of anti-viral software.

The threat to kill the distinguished novelist Salman Rushdie is only the latest in a long line of sad examples. On the very day that I wrote this, the Japanese translator of The Satanic Verses was found murdered, a week after a near-fatal attack on the Italian translator of the same book. By the way, the apparently opposite symptom of ``sympathy'' for Muslim ``hurt,'' voiced by the Archbishop of Canterbury and other Christian leaders (verging, in the case of the Vatican, on outright criminal complicity) is, of course, a manifestation of the symptom we discussed earlier: the delusion that faith, however obnoxious its results, has to be respected simply because it is faith.



Murder is an extreme, of course. But there is an even more extreme symptom, and that is suicide in the militant service of a faith. Like a soldier ant programmed to sacrifice her life for germ-line copies of the genes that did the programming, a young Arab or Japanese [??!] is taught that to die in a holy war is the quickest way to heaven. Whether the leaders who exploit him really believe this does not diminish the brutal power that the ``suicide mission virus'' wields on behalf of the faith. Of course suicide, like murder, is a mixed blessing: would-be converts may be repelled, or may treat with contempt a faith that is perceived as insecure enough to need such tactics.

More obviously, if too many individuals sacrifice themselves the supply of believers could run low. This was true of a notorious example of faith-inspired suicide, though in this case it was not ``kamikaze'' death in battle. The Peoples' Temple sect became extinct when its leader, the Reverend Jim Jones, led the bulk of his followers from the United States to the Promised Land of ``Jonestown'' in the Guyanan jungle where he persuaded more than 900 of them, children first, to drink cyanide. The macabre affair was fully investigated by a team from the San Francisco Chronicle (Kilduff and Javers, 1978).
Jones, ``the Father,'' had called his flock together and told them it was time to depart for heaven.
``We're going to meet,'' he promised, ``in another place.''
The words kept coming over the camp's loudspeakers.
``There is great dignity in dying. It is a great demonstration for everyone to die.''
Incidentally, it does not escape the trained mind of the alert sociobiologist that Jones, within his sect in earlier days, ``proclaimed himself the only person permitted to have sex'' (presumably his partners were also permitted). ``A secretary would arrange for Jones's liaisons. She would call up and say, `Father hates to do this, but he has this tremendous urge and could you please...?' '' His victims were not only female. One 17-year-old male follower, from the days when Jones's community was still in San Francisco, told how he was taken for dirty weekends to a hotel where Jones received a ``minister's discount for Rev. Jim Jones and son.'' The same boy said: ``I was really in awe of him. He was more than a father. I would have killed my parents for him.'' What is remarkable about the Reverend Jim Jones is not his own self-serving behavior but the almost superhuman gullibility of his followers. Given such prodigious credulity, can anyone doubt that human minds are ripe for malignant infection?

Admittedly, the Reverend Jones conned only a few thousand people. But his case is an extreme, the tip of an iceberg. The same eagerness to be conned by religious leaders is widespread. Most of us would have been prepared to bet that nobody could get away with going on television and saying, in all but so many words, ``Send me your money, so that I can use it to persuade other suckers to send me their money too.'' Yet today, in every major conurbation in the United States, you can find at least one television evangelist channel entirely devoted to this transparent confidence trick. And they get away with it in sackfuls. Faced with suckerdom on this awesome scale, it is hard not to feel a grudging sympathy with the shiny-suited conmen.

Until you realize that not all the suckers are rich, and that it is often widows' mites on which the evangelists are growing fat. I have even heard one of them explicitly invoking the principle that I now identify with Zahavi's principle of costly authentication. God really appreciates a donation, he said with passionate sincerity, only when that donation is so large that it hurts. Elderly paupers were wheeled on to testify how much happier they felt since they had made over their little all to the Reverend whoever it was.

5. The patient may notice that the particular convictions that he holds, while having nothing to do with evidence, do seem to owe a great deal to epidemiology. Why, he may wonder, do I hold this set of convictions rather than that set? Is it because I surveyed all the world's faiths and chose the one whose claims seemed most convincing? Almost certainly not. If you have a faith, it is statistically overwhelmingly likely that it is the same faith as your parents and grandparents had. No doubt soaring cathedrals, stirring music, moving stories and parables, help a bit. But by far the most important variable determining your religion is the accident of birth. The convictions that you so passionately believe would have been a completely different, and largely contradictory, set of convictions, if only you had happened to be born in a different place. Epidemiology, not evidence.

6. If the patient is one of the rare exceptions who follows a different religion from his parents, the explanation may still be epidemiological. To be sure, it is possible that he dispassionately surveyed the world's faiths and chose the most convincing one. But it is statistically more probable that he has been exposed to a particularly potent infective agent --- a John Wesley, a Jim Jones or a St. Paul. Here we are talking about horizontal transmission, as in measles. Before, the epidemiology was that of vertical transmission, as in Huntington's Chorea.


7. The internal sensations of the patient may be startlingly reminiscent of those more ordinarily associated with sexual love. This is an extremely potent force in the brain, and it is not surprising that some viruses have evolved to exploit it. St. Teresa of Avila's famously orgasmic vision is too notorious to need quoting again. More seriously, and on a less crudely sensual plane, the philosopher Anthony Kenny provides moving testimony to the pure delight that awaits those that manage to believe in the mystery of transubstantiation. After describing his ordination as a Roman Catholic priest, empowered by laying on of hands to celebrate Mass, he goes on that he vividly recalls
the exaltation of the first months during which I had the power to say Mass. Normally a slow and sluggish riser, I would leap early out of bed, fully awake and full of excitement at the thought of the momentous act I was privileged to perform. I rarely said the public Community Mass: most days I celebrated alone at a side altar with a junior member of the College to serve as acolyte and congregation. But that made no difference to the solemnity of the sacrifice or the validity of the consecration.  
It was touching the body of Christ, the closeness of the priest to Jesus, which most enthralled me. I would gaze on the Host after the words of consecration, soft-eyed like a lover looking into the eyes of his beloved... Those early days as a priest remain in my memory as days of fulfilment and tremulous happiness; something precious, and yet too fragile to last, like a romantic love-affair brought up short by the reality of an ill-assorted marriage. (Kenny, 1986, pp. 101-2)
Dr. Kenny is affectingly believable that it felt to him, as a young priest, as though he was in love with the consecrated host. What a brilliantly successful virus! On the same page, incidentally, Kenny also shows us that the virus is transmitted contagiously --- if not literally then at least in some sense --- from the palm of the infecting bishop's hand through the top of the new priest's head:
If Catholic doctrine is true, every priest validly ordained derives his orders in an unbroken line of laying on of hands, through the bishop who ordains him, back to one of the twelve Apostles... there must be centuries-long, recorded chains of layings on of hands. It surprises me that priests never seem to trouble to trace their spiritual ancestry in this way, finding out who ordained their bishop, and who ordained him, and so on to Julius II or Celestine V or Hildebrand, or Gregory the Great, perhaps. (Kenny, 1986, p. 101)
It surprises me, too.


4 Is Science a Virus?

No. Not unless all computer programs are viruses. Good, useful programs spread because people evaluate them, recommend them and pass them on.

Computer viruses spread solely because they embody the coded instructions: ``Spread me.'' Scientific ideas, like all memes, are subject to a kind of natural selection, and this might look superficially virus-like. But the selective forces that scrutinize scientific ideas are not arbitrary and capricious. They are exacting, well-honed rules, and they do not favor pointless self-serving behavior. They favor all the virtues laid out in textbooks of standard methodology: testability, evidential support, precision, quantifiability, consistency, intersubjectivity, repeatability, universality, progressiveness, independence of cultural milieu, and so on. Faith spreads despite a total lack of every single one of these virtues. 

You may find elements of epidemiology in the spread of scientific ideas, but it will be largely descriptive epidemiology. The rapid spread of a good idea through the scientific community may even look like a description of a measles epidemic. But when you examine the underlying reasons you find that they are good ones, satisfying the demanding standards of scientific method. In the history of the spread of faith you will find little else but epidemiology, and causal epidemiology at that. The reason why person A believes one thing and B believes another is simply and solely that A was born on one continent and B on another. Testability, evidential support and the rest aren't even remotely considered. For scientific belief, epidemiology merely comes along afterwards and describes the history of its acceptance. For religious belief, epidemiology is the root cause.


5 Epilogue

Happily, viruses don't win every time. Many children emerge unscathed from the worst that nuns and mullahs can throw at them. Anthony Kenny's own story has a happy ending. He eventually renounced his orders because he could no longer tolerate the obvious contradictions within Catholic belief, and he is now a highly respected scholar. But one cannot help remarking that it must be a powerful infection indeed that took a man of his wisdom and intelligence --- President of the British Academy, no less --- three decades to fight off. Am I unduly alarmist to fear for the soul of my six-year-old innocent?

Acknowledgement

With thanks to Helena Cronin for detailed suggestions on content and style on every page.

References

Browne, Sir T. (1635) Religio Medici, I, 9
Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
Dawkins, R. (1982) The Extended Phenotype. Oxford: W. H. Freeman.
Dawkins, R. (1989) The Selfish Gene, 2nd edn. Oxford: Oxford University Press.
Dennett, D. C. (1983) Intentional systems in cognitive ethology: the ``Panglossian paradigm'' defended. Behavioral and Brain Sciences, 6, 343--90.
Dennett, D. C. (1984) Elbow Room: The Varieties of Free Will Worth Wanting. Oxford: Oxford University Press.
Dennett, D. C. (1990) Memes and the exploitation of imagination. The Journal of Aesthetics and Art Criticism, 48, 127--35.
Grafen, A. (1990a) Sexual selection unhandicapped by the Fisher process. Journal of Theoretical Biology, 144, 473--516.
Grafen, A. (1990b) Biological signals as handicaps. Journal of Theoretical Biology, 144, 517--46.
Hofstadter, D. R. (1985) Metamagical Themas. Harmondsworth: Penguin.
Kenny, A. (1986) A Path from Rome. Oxford: Oxford University Press.
Kilduff, M. and Javers, R. (1978) The Suicide Cult. New York: Bantam.
Thimbleby, H. (1991) Can viruses ever be useful? Computers and Security, 10, 111--14.
Williams, G. C. (1957) Pleiotropy, natural selection, and the evolution of senescence. Evolution, 11, 398--411.
Zahavi, A. (1975) Mate selection --- a selection for a handicap. Journal of Theoretical Biology, 53, 205--14.

Text taken from Dennett and His Critics: Demystifying Mind, ed. Bo Dahlbom (Cambridge, Mass.: Blackwell, 1993).

Typed 9 March 1995 | Last changed 2 September 2001 (thanks to Mitch Porter, Steve Bliss, Richard Smith, Brendan Lalor and Eric Meyer for typo warnings) [CRS]

Original Page hosted by the Center for the Study of Complex Systems


RELATED POSTS

BUDDHIST PHILOSOPHY AND RELIGIOUS DELUSIONS

Were the London Beheaders and Boston Bombers mentally ill?

Islam will Dominate

Saturday, 14 January 2012

Algorithmic compression and the three modes of existential dependence in Buddhism

In Buddhist philosophy, all functioning phenomena are said to exist in three ways, known as the three modes of existential dependence:
  • Causality
  • Structure
  • Mental Designation or Meaning




(1) Causal dependency.
Functioning objects exist in dependence on the causes and conditions that brought about their existence in the first place, and which continue to maintain their existence (e.g. acorn, soil, rain, air and sunlight for an oak tree). In particular, causal dependencies show a high degree of regularity (oak trees aren't produced from chestnuts, and the planets don't wander around the solar system randomly, but are constrained by Newton's laws).



(2) Compositional and structural dependency (sometimes known as 'mereological dependency').

Functioning phenomena exist dependently upon their parts, and upon the way that those parts are arranged (structural features such as  aspects, divisions, directions etc). 

The parts of a functioning phenomenon are known as the 'basis of designation', which, when arranged in an appropriate manner, prompt the observer to designate the entire structure as a single entity.    Thus the correct arrangement of pistons, cylinders, crankshaft, spark plugs etc is designated 'engine', and the correct arrangement of engine, wheels, chassis etc is designated 'car'.   



But neither engine nor car can exist as independent entities, apart from their bases of designation.   See Mereological Dependence in Buddhist Philosophy for a detailed discussion.
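
In Geek-week terms, mereological dependence looks like composition in a programming language. In the Python sketch below (field names mine, purely illustrative), 'Engine' and 'Car' are designations over correctly arranged parts: nowhere in the data is any extra 'car-ness' stored, and if you delete the parts, the car is gone.

from dataclasses import dataclass

# 'Engine' and 'Car' are labels projected onto arrangements of parts:
# the composite holds nothing over and above its components.
@dataclass
class Engine:
    pistons: int
    cylinders: int
    spark_plugs: int

@dataclass
class Car:
    engine: Engine  # part of the 'basis of designation' for 'car'
    wheels: int
    chassis: str

car = Car(engine=Engine(pistons=4, cylinders=4, spark_plugs=4),
          wheels=4, chassis="monocoque")
print(car)  # nothing here but parts, suitably arranged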



(3) Conceptual dependency
This is the most subtle mode of existential dependency, and concerns the way that things exist in dependence on our minds designating them by concept and name.


For example, what is a box? Is there some kind of ideal prototype box existing in the Platonic realm of ideal forms, or does a box exist only by arbitrary convention in the mind of the box-user, or in the collective minds of box-users?

If I say "I'll get a box to put this stuff in", then most people will understand that I'm going to fetch a container which performs the conventional function of a box, i.e. holds things. To do this it must have a bottom and at least three sides (like some chocolate boxes), though usually four. A lid is optional.



But if we were to cut the sides of a box down, it would perform the functions of a tray.



The box exists from causes and conditions (the box-maker, the wood from which it is made, the trees, sunlight, soil, rain, lumberjacks etc.)

The box exists in dependence upon its parts (bottom and three or more sides).


The box also exists because I and others decide to call it a box, not because of some inherent `boxiness' that all boxes have as a defining essence.

If it were a big cardboard box, and I cut a large L-shaped flap out of one side so it hinged like a door, then I could turn it upside down and it would be a child's play-house.

If I cut the sides of a wooden box down a centimetre at a time, then the box would get shallower and shallower. At some point the box would cease to exist and a tray would have begun to exist. So at what arbitrary point did the essence of `boxiness' miraculously disappear, and `trayfulness' jump into the undefined structure?



Where does box end and tray start? 
I don't know. Maybe there's an EU directive forbidding the construction of boxes with insufficiently high sides, or specifying that all boxes must have lids permanently attached to avoid any possible confusion with trays.

EU standard box
 

Or perhaps there's a Tray Descriptions Act enforcing a maximum height for trays.

But whichever way, as well as existing in dependence on its parts, and on its causes and conditions, the box exists in dependence upon our minds (or the collective minds of the EU Box-Standards Inspectorate). 

The mind projects 'box' onto a certain collection of parts. And those parts can be the common basis of designation of both a box and a tray.
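
The same sketch extends to the box. In the illustrative Python below, the very same collection of parts is designated 'box' by one observer and 'tray' by another, purely because they choose different cut-offs; the 10 cm threshold is invented, not a real EU standard.

from dataclasses import dataclass

# One basis of designation, two possible designations: which label
# applies depends on an observer-chosen threshold, not on the parts.
@dataclass
class Container:
    sides: int
    side_height_cm: float

def designate(c, box_threshold_cm=10.0):
    if c.sides < 3:
        return "neither box nor tray"
    return "box" if c.side_height_cm >= box_threshold_cm else "tray"

thing = Container(sides=4, side_height_cm=11.0)
for cut_off in (10.0, 12.0):  # two observers, two conventions
    print("threshold", cut_off, "cm:", designate(thing, cut_off))
# The same object is a 'box' under one convention and a 'tray' under the other.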


Mental designation goes all the way up, and all the way down
Developments in 20th century physics have shown that the observer is part of the system, both at the very smallest levels of reality (quantum physics) and at the very largest (relativity). These findings confirm what Buddhists have been saying for thousands of years: that the observer is part of the system at all levels of reality, not just in our everyday world of domestic storage containers.



Causal regularities in Buddhist philosophy
Unlike Islam, which completely rejects the laws of science and insists that everything happens moment-to-moment because of God's arbitrary will, Buddhism has always viewed regularities in the working of the universe as axiomatic.




As Jay L. Garfield states in 'The Fundamental Wisdom of the Middle Way' (footnote 29, p. 116):

'The Madhyamika position implies that we should seek to explain regularities by reference to their embeddedness in other regularities, and so on. To ask why there are regularities at all, on such a view, would be to ask an incoherent question.  The fact of explanatorily useful regularities in nature is what makes explanation and investigation possible in the first place and is not something itself that can be explained.'


The mathematical laws governing the motion of the planets can be simulated by clockwork



The mathematical and algorithmic nature of regularities
Although asking why there are explanatorily useful regularities in nature may be ultimately incoherent, asking why those regularities take a mathematical form is a valid subject of enquiry.

The standard computer analogy for causality is to regard the laws of physics as being analogous ('isomorphic') to algorithms, with the physical objects being analogous to the datastructures the algorithms act upon.  
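As a rough sketch in Python (with units and constants reduced to arbitrary values for illustration), the 'law' is a short update rule and the 'planet' is just the datastructure that the rule repeatedly acts upon:

state = {"x": 1.0, "y": 0.0, "vx": 0.0, "vy": 1.0}  # datastructure: position and velocity
GM = 1.0                                            # gravitational parameter (arbitrary units)

def step(s: dict, dt: float = 0.001) -> None:
    """One tick of the inverse-square 'law of gravity' (Euler integration)."""
    r3 = (s["x"] ** 2 + s["y"] ** 2) ** 1.5
    s["vx"] -= GM * s["x"] / r3 * dt
    s["vy"] -= GM * s["y"] / r3 * dt
    s["x"] += s["vx"] * dt
    s["y"] += s["vy"] * dt

for _ in range(10_000):  # the same short rule, applied over and over,
    step(state)          # traces out the whole 'clockwork' orbit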

From Gregory Chaitin's article 'The Limits of Reason' (Scientific American, March 2006)...

'My story begins in 1686 with Gottfried W. Leibniz's philosophical essay Discours de métaphysique (Discourse on Metaphysics), in which he discusses how one can distinguish between facts that can be described by some law and those that are lawless, irregular facts. Leibniz's very simple and profound idea appears in section VI of the Discours, in which he essentially states that a theory has to be simpler than the data it explains, otherwise it does not explain anything. The concept of a law becomes vacuous if arbitrarily high mathematical complexity is permitted, because then one can always construct a law no matter how random and patternless the data really are. Conversely, if the only law that describes some data is an extremely complicated one, then the data are actually lawless.

Today the notions of complexity and simplicity are put in precise quantitative terms by a modern branch of mathematics called algorithmic information theory. Ordinary information theory quantifies information by asking how many bits are needed to encode the information. For example, it takes one bit to encode a single yes/no answer. Algorithmic information, in contrast, is defined by asking what size computer program is necessary to generate the data. The minimum number of bits---what size string of zeros and ones---needed to store the program is called the algorithmic information content of the data. Thus, the infinite sequence of numbers 1, 2, 3, ... has very little algorithmic information; a very short computer program can generate all those numbers. It does not matter how long the program must take to do the computation or how much memory it must use---just the length of the program in bits counts...

...How do such ideas relate to scientific laws and facts? The basic insight is a software view of science: a scientific theory is like a computer program that predicts our observations, the experimental data. Two fundamental principles inform this viewpoint. First, as William of Occam noted, given two theories that explain the data, the simpler theory is to be preferred (Occam's razor). That is, the smallest program that calculates the observations is the best theory. Second is Leibniz's insight, cast in modern terms---if a theory is the same size in bits as the data it explains, then it is worthless, because even the most random of data has a theory of that size. A useful theory is a compression of the data; comprehension is compression. You compress things into computer programs, into concise algorithmic descriptions. The simpler the theory, the better you understand something'


In summary: if a computer program or algorithm is simpler than the system it describes, or the data set that it generates, then the system or data set is said to be 'algorithmically compressible'.

This concept of algorithmic simplicity/complexity can be extended from the realm of mathematics into physical systems. The complexity of a physical system is the length of the minimal algorithm that can simulate or describe it. Thus the orbits of the planets, which seemed so complex to the ancients, were shown by Newton to be algorithmically compressible into a few short equations.
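You can get a crude feel for this at the keyboard. Off-the-shelf compression is only a stand-in for the true (and uncomputable) shortest program, but the contrast between regular and patternless data is instructive:

import os
import zlib

regular = bytes(range(256)) * 4096   # a megabyte of highly regular data
random_ = os.urandom(len(regular))   # a megabyte of patternless data

print(len(zlib.compress(regular)))   # collapses to a few kilobytes
print(len(zlib.compress(random_)))   # barely shrinks at all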



Visually complex but algorithmically simple



The computer model of the three levels of dependency
So causal dependency can be modelled as algorithms, and compositional/structural dependency can be modelled as datastructures, but where does that leave conceptual dependency?

According to Buddhist philosophy, the function of the mind cannot be reduced to physical or quasi-physical processes. 

The mind is clear, formless, and knows its object. Its knowing the object constitutes the conceptual dependency, which is fundamental, axiomatic, and cannot be explained in terms of other phenomena, including algorithms and datastructures.

Buddhism versus Materialism
The question that separates the Materialist from the Buddhist is whether there is anything left to explain about reality once algorithms and datastructures have been factored out.

The Materialist would answer that algorithms and datastructures offer a complete explanation of the universe, without any remainder.  The Buddhist would claim that a third factor, mind, is also required.


The Mother of all Algorithms
The mind itself is not algorithmically compressible, but is responsible for carrying out algorithmic compression.

Algorithms, as executed, do not contain within themselves any meaning. For example, the following two statements reduce to exactly the same algorithm within the memory of a computer:

(i) IF RoomLength * RoomWidth > CarpetArea THEN NeedMoreCarpet = TRUE

(ii) IF Audience * TicketPrice > HireOfVenue THEN AvoidedBankruptcy = TRUE

Such considerations have led critics of philosophical computationalism to claim that algorithms can only contain syntax, not semantics. Hence computers can never understand their subject matter. All assignments of meaning to their inputs, internal states and outputs have to be defined from outside the system.
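This can be checked directly in Python (used here as a convenient stand-in for compilation in general). Rewriting statements (i) and (ii) as functions, the compiled instruction streams come out byte-for-byte identical in CPython; all the 'meaning' lives in the names, which are labels the machine never consults:

def carpet(room_length, room_width, carpet_area):
    return room_length * room_width > carpet_area

def bankruptcy(audience, ticket_price, hire_of_venue):
    return audience * ticket_price > hire_of_venue

# Identical bytecode: only the human-readable names differ.
print(carpet.__code__.co_code == bankruptcy.__code__.co_code)  # True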

This may explain why the process of writing algorithms does not in itself appear to be algorithmic. The real test of computationalism would be to produce a general-purpose algorithm-writing algorithm. A convincing example would be an algorithm that could simulate the mind of a programmer sufficiently well to write algorithms for such disparate activities as controlling an automatic train, regulating a distillation column, and optimising traffic flows through interlinked sets of lights.

According to the computationalist view, this 'Mother of all Algorithms' must exist as an algorithm in the programmer's brain, though why and how such a thing evolved is rather difficult to imagine. It would certainly have conferred no selective advantage on our ancestors until the present generation (and even now, do programmers outreproduce normal people?).

The proof of computationalism would be to program the Mother of all Algorithms on a computer. At present no one has the slightest clue how even to begin producing such a thing.

According to Buddhist philosophy this is hardly surprising, as the Mother of all Algorithms is itself NOT an algorithm and never could be programmed. The Mother of all Algorithms is the formless mind projecting meaning onto its objects (i.e. conceptually designating meaning onto the sequential and structural components of the algorithm as it is being written).

The non-algorithmic dimension of mind, the understanding of meaning, is needed to turn the user's semantically expressed requirements into the purely syntactic structural and causal relationships of the algorithmic flowchart or code.



Minds, machines and meaning
The computer analogy of conceptual dependency, as far as one is possible, would be the 'meaning' of symbolic variables, which gets stripped out of high-level languages during compilation to machine code. This removal of meaning is inevitable because a machine cannot understand, interpret, use or manipulate meaning; a small illustration follows the lament below. Only minds can grasp meaning, hence the programmer's lament:


I'm sick and tired of this machine
I think I'm going to sell it
It never does do what I mean
But only what I tell it
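
Here's the promised illustration, using Python as a stand-in for a real compiler. (Python keeps variable names around as metadata for human readers and debuggers; a compiler to machine code would discard them altogether.)

import dis

# Disassemble statement (i) from earlier: the operations themselves
# (LOAD, multiply, compare, STORE) are pure syntax, indifferent to
# whether they are 'about' carpets, bankruptcy or anything else.
src = "NeedMoreCarpet = RoomLength * RoomWidth > CarpetArea"
dis.dis(compile(src, "<example>", "exec"))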



Neuroenvy

"...So just what can be proved about people by the close observation of their brains? We can be conceptualised in two ways: as organisms and as objects of personal interaction. The first way employs the concept ‘human being’, and derives our behaviour from a biological science of man. The second way employs the concept ‘person’, which is not the concept of a natural kind, but of an entity that relates to others in a familiar but complex way that we know intuitively but find hard to describe. Through the concept of the person, and the associated notions of freedom, responsibility, reason for action, right, duty, justice and guilt, we gain the description under which human beings are seen, by those who respond to them as they truly are. When we endeavour to understand persons through the half-formed theories of neuroscience we are tempted to pass over their distinctive features in silence, or else to attribute them to some brain-shaped homunculus inside. For we understand people by facing them, by arguing with them, by understanding their reasons, aspirations and plans. All of that involves another language, and another conceptual scheme, from those deployed in the biological sciences. We do not understand brains by facing them, for they have no face. 

We should recognise that not all coherent questions about human nature and conduct are scientific questions, concerning the laws governing cause and effect. Most of our questions about persons and their doings are about interpretation: what did he mean by that? What did her words imply? What is signified by the hand of Michelangelo’s David? Those are real questions, which invite disciplined answers. And there are disciplines that attempt to answer them. The law is one such. It involves making reasoned attributions of liability and responsibility, using methods that are not reducible to any explanatory science, and not replaceable by neuroscience, however many advances that science might make. The invention of ‘neurolaw’ is, it seems to me, profoundly dangerous, since it cannot fail to abolish freedom and accountability — not because those things don’t exist, but because they will never crop up in a brain scan.

Suppose a computer is programmed to ‘read’, as we say, a digitally encoded input, which it translates into pixels, causing it to display the picture of a woman on its screen. In order to describe this process we do not need to refer to the woman in the picture. The entire process can be completely described in terms of the hardware that translates digital data into pixels, and the software, or algorithm, which contains the instructions for doing this. There is neither the need nor the right, in this case, to use concepts like those of seeing, thinking, observing, in describing what the computer is doing; nor do we have either the need or the right to describe the thing observed in the picture, as playing any causal role, or any role at all, in the operation of the computer. Of course, we see the woman in the picture. And to us the picture contains information of quite another kind from that encoded in the digitalised instructions for producing it. It conveys information about a woman and how she looks. To describe this kind of information is impossible without describing the content of certain thoughts — thoughts that arise in people when they look at each other face to face.

But how do we move from the one concept of information to the other? How do we explain the emergence of thoughts about something from processes that reside in the transformation of visually encoded data? Cognitive science doesn’t tell us. And computer models of the brain won’t tell us either. They might show how images get encoded in digitalised format and transmitted in that format by neural pathways to the centre where they are ‘interpreted’. But that centre does not in fact interpret – interpreting is a process that we do, in seeing what is there before us. When it comes to the subtle features of the human condition, to the byways of culpability and the secrets of happiness and grief, we need guidance and study if we are to interpret things correctly. That is what the humanities provide, and that is why, when scholars who purport to practise them, add the prefix ‘neuro’ to their studies, we should expect their researches to be nonsense."


- Sean Robsville