The death of obedience

Andrew Sullivan considers the iconic role of traditional marriage for many conservatives in a very nice piece entitled Modernity, Faith, And Marriage. He writes of Rod Dreher, who…

… longs, as many do, for a return to the days when civil marriage brought with it a whole bundle of collectively-shared, unchallenged, teleological, and largely Judeo-Christian, attributes. Civil marriage once reflected a great deal of cultural and religious assumptions: that women’s role was in the household, deferring to men; that marriage was about procreation, which could not be contracepted; that marriage was always and everywhere for life; that marriage was a central way of celebrating the primacy of male heterosexuality, in which women were deferent, non-heterosexuals rendered invisible and unmentionable, and thus the vexing questions of sexual identity and orientation banished to the catch-all category of sin and otherness, rather than universal human nature.

To tell Rod something he already knows: Modernity has ended that dream. Permanently.

And he continues:

If conservatism is to recover as a force in the modern world, the theocons and Christianists have to understand that their concept of a unified polis with a telos guiding all of us to a theologically-understood social good is a non-starter. Modernity has smashed it into a million little pieces. […] The only way to force all these genies back into the bottle would require the kind of oppressive police state Rod would not want to live under.

Naturally, Sully has the answer: his Oakeshottian attempt to infuse faith with doubt, and thus accept…

… that our civil order will mean less; that it will be a weaker set of more procedural agreements that try to avoid as much as possible deep statements about human nature.

But to achieve this, we must confront the institutions of reaction; the ones that demand “obedience”. Andrew and I both grew up in a country in which Queen Elizabeth the Second was “Fidei Defensor” (Defender of the Faith), and Andrew still professes membership in a religious institution of which the head, Pope Benedict XVI, frequently exhorts his followers to “obedience” and bemoans the evil influence of education.
Obedience is dead. Ditto deference, and all forms of argument from authority. The Enlightenment made them absurd; pluralism makes them unworkable.
Here’s an amusing bit of cognitive dissonance: imagine a Pope such as John Paul II or Benedict claiming to “derive his just powers from the consent of the governed”. Silly, isn’t it?

Humanism and religion

Mark Rowlands posted a piece at Secular Philosophy in which he argued that humanism is…

… in essence, a secular form of Christianity. The idea that humans are the most valuable animals in the world makes no sense when it is removed from a theological context in which that animal is created in the image of God.

I think that this is bullshit, frankly. While both humanism and Christianity are socially-constructed systems based on admixtures of faith, idealism, and pragmatism, the “image of God” idea is not something that they have in common. What follows is a slightly edited version of my comment on that thread.

It is reasonable to suppose that every creature has an innate bias in favour of the survival of its kin, simply because such a preference has strong survival value for its genes… [It] makes perfect sense for me to regard my child as the most important creature on the planet.

For simple creatures, this innate bias is expressed in a variety of behaviours: mating patterns, competition with non-kin, protective and even self-sacrificial ways of defending offspring. For thinking, social creatures one would expect a variety of cognitive and social mechanisms to emerge that would reinforce this same bias. Unlike instinctive behaviours, such mechanisms are more plastic, since they interact with and are affected by a wide range of other social forces.

How big is the circle of “kin”? This is a critical factor that we see in many social creatures, from termites to chimpanzees. Over human history the original kin circle of the nomadic group has been broadened under a variety of pressures: tribal, racial, national, and so forth.

So what of humanism and Christianity? They are similar in that both seek to exploit the “kin bias” instinct to advance a particular idea. However their objectives seem to be diametrically opposed. Christianity, like all religions, seeks to inject a supernatural authority into a worldview, and to declare that only those who accept this authority are “kin”. Humanism proposes that there is no supernatural authority, and that the only reasonable way of defining “kin” is in terms of the species Homo sapiens. All other definitions have historically been exploited for sectarian, tribal, or racial purposes, and should be rejected because of this.

Common sense and nonsense about taking offense

The Barefoot Bum takes a look at the notion of taking offense. First, it’s OK to feel offended; in a society of diverse views, beliefs and cultures it’s inevitable.

If you’re offended, you’re free to say so. It might be important that you’re offended. Sometimes I offend people unintentionally, and a civilized person should never give offense unintentionally. A civilized person also never gives offense gratuitously, for no other reason than to make someone else feel bad. If you’re going to give offense, you should do so intentionally and with reason and purpose.

(And that, I think, should be PZ Myers’ retort to Andrew Sullivan.) The point is not whether we feel offended, but what we do about it. Civilization is about talking, not rioting.
So far, so good. But then The Bum makes an interesting leap:

Fundamentally, all ethical beliefs are about being offended; without the concept of taking offense, each person would object only to physical harm he or she personally suffered. It is taking offense when we care about harm caused to others and condemn acts that harm others.

I don’t think that this quite works. First, there’s a large class of ethical beliefs which have deeply non-rational roots. (Yes, I’m thinking of Fischer and Ravizza’s famous “trolley” problem.) In such cases, expressions of being “offended” are almost certainly no more than social convention. (One is expected to express ethical conflicts in this kind of language.) There are other ethical dilemmas which don’t seem to fit The Bum’s broad brush. Consider what we might call the “ACLU problem”, Evelyn Hall‘s “I disapprove of what you say, but I will defend to the death your right to say it.” The “disapprove” bit fits The Bum’s model, but “defend” is also viewed as an ethical stance.
To get a full picture of this, I think that we need to go beyond “offended”, and introduce another one of today’s “fighting words”, “disrespect”. “Offense” is generally a reaction to an action by another person, while “disrespect” speaks to the other’s attitude. With “offense”, the situation is clear: X did Y, Z was offended by Y, and we can debate about the ethics of Y. The problem with “disrespect” is that it is… well, “inchoate” feels close, though it’s not quite right. X did Y, Z felt that this showed that X disrespected him, and even if we resolve the question of Y we can’t clear the air about Z’s perception of X. Typically Z demands a compensating action from X to demonstrate X’s respect.
So I think that The Bum’s attempt to define ethics in terms of taking offense is backwards. We take offense over questions of an ethical nature which also arouse feelings of disrespect. (Not all ethical questions do.) And the way in which we act when we’ve taken offense is strongly (completely?) determined by the feelings of disrespect that are triggered, and may have little to do with the “Y” of the matter.

Rewriting history (as we all do, all the time)

Further confirmation (and expansion) of Libet‘s classic work demonstrating that we start to act before we are conscious of our decision to do so, and rewrite our subjective experience so that we feel that we’re in control:

Dutch researchers led by psychologist Ap Dijksterhuis at the University of Amsterdam recently found that people struggling to make relatively complicated consumer choices — which car to buy, apartment to rent or vacation to take — appeared to make sounder decisions when they were distracted and unable to focus consciously on the problem.
Moreover, the more factors to be considered in a decision, the more likely the unconscious brain handled it all better, they reported in the peer-reviewed journal Science in 2006. “The idea that conscious deliberation before making a decision is always good is simply one of those illusions consciousness creates for us,” Dr. Dijksterhuis said.

(From the Science Journal at WSJ.com)

Philosophers and evolution

Yesterday evening, PZ Myers shared with us some of his email (2,000 messages a day, after excluding spam) to illustrate the variety of personae who feel compelled to rant against evolution (sorry, “Darwinism”). As he pointed out, some of them are clearly bright, thinking individuals who are spectacularly ill-informed, having been fed a diet of religious nonsense in their homes, churches, and (illegally but inevitably) schools. The rest are just plain whacko, some of them dangerously so.
But there is another group of antievolutionists that it’s worth noting. It may be numerically tiny, but they tend to punch above their weight. I’m referring to a small group of academic philosophers, of whom the most vocal is undoubtedly Jerry Fodor. 3 quarks daily just reported on an exchange in the latest issue of Mind & Language in which Fodor makes a fool of himself and Dan Dennett and others pile on to show him up. Dan’s piece is quite devastating: Fodor’s argument, according to Dan…

… has the startling conclusion:

Contrary to Darwinism, the theory of natural selection can’t explain the distribution of phenotypic traits in biological populations.

Now this really is absurd. Silly absurd. Preposterous. It is conclusions like this, built upon such comically slender stilts, that give philosophy a bad name among many scientists. Fodor’s argument really does follow from his premises, though, so far as I can see, so I am prepared to treat it as a classic reductio. A useful reductio, as we all learned in our first logic course, has just one bad premise that eventually sticks out like a sore thumb, but in this case we have an embarrassment of riches: four premises, all of them false. I will leave as an exercise for the reader the task of seeing if any presentable variation of Fodor’s argument can be constructed in which some or all of these are replaced by truths.

Dan concludes by pointing out the damage that Jerry’s kind of nonsense can do:

I cannot forebear noting, on a rather more serious note, that such ostentatiously unresearched ridicule as Fodor heaps on Darwinians here is both very rude and very risky to one’s reputation. (Remember Mary Midgley’s notoriously ignorant and arrogant review of The Selfish Gene? Fodor is vying to supplant her as World Champion in the Philosophers’ Self-inflicted Wound Competition.) Before other philosophers countenance it they might want to bear in mind that the reaction of most biologists to this sort of performance is apt to be—at best: ‘Well, we needn’t bother paying any attention to him. He’s just one of those philosophers playing games with words’. It may be fun, but it contributes to the disrespect that many non-philosophers have for our so-called discipline.

And he’s right. Science needs the philosophers of science to remind it of the epistemological underpinnings of the discipline, and to police the boundaries between science and metaphysics. Of course there are some philosophers who misread the zeitgeist and try to maintain a philosophical stake in a scientific debate. ((For a good example of this, I recommend a brief dip into the recent collection “Contemporary Debates in Philosophy of Mind”. But please keep it brief.)) But I think they are in a minority.
In addition to Dan’s piece, there are useful perspectives from Peter Godfrey-Smith and Elliott Sober. (The other article cited, by Kirk and Susan Schneider, addresses a completely different aspect of Fodor’s work.)
P.S. Iain commented that the links I provided don’t work for him. If you run into problems, I suggest that you click through to the 3quarks piece and link from there.

Darwin's uncomfortable truths

Derb has a fabulous piece over at Taki’s Magazine about Darwin, evolution, and the uncomfortable consequences of this simple but revolutionary idea.

It cannot be denied, though, that Darwinism’s metaphysical implications are hard to square with any view of human nature not flatly biological; and this applies as much to the “blank slate” egalitarianism of the irreligious Left as to the soul-based universalism of the religious Right. This is inevitable. As an empirical view of living matter, chasing down its truths one by one through thickets of patient observation, Darwinism is bound to offend systems derived from introspection, revelation, or social approval.

(Channeling Pinker, of course.)

Only one view of human nature can be correct. Either we are the ensouled favorites of an omniscient deity; or we are biology and nothing else; or we are biological vehicles for a perfectly plastic uniform essence whose every trait is a consequence of the world immediately around us. The first option, in current American society, is largely the property of the political Right; the third, of the political Left. The middle option has no true political home, any more than Pythagoras’ Theorem has. Like Pythagoras’ Theorem, it is much the most useful of the three, and very likely true. Unlike the theorem, though, it tells us things about ourselves we cannot bear to hear. For that reason, it will probably never have wide acceptance.

The Consciousness of John Derbyshire

When I first read about this year’s Towards A Science Of Consciousness conference, I really wanted to attend it. Good intentions were overtaken by other plans, and I wasn’t able to fit it into my schedule. Fortunately, NRO’s John Derbyshire was more persistent, and his excellent account of the conference was almost as good as being there. At least I didn’t have to sit through all that nonsense about quantum consciousness. Derb captured the contradictions of this pseudo-argument. First:

It’s possible to explain [presentiment] via known quantum effects. You just have to drop some common-sense assumptions about time and causation! Sheehan argued that the explanatory power you get by bringing quantum weirdness into biology makes it worthwhile.

Well, yes. Bringing in poltergeists would explain a lot of things as well, wouldn’t it? But at what cost? And then:

Stuart [Hameroff] worked up a plausible model of the brain as a quantum computer, with the tubulin protein molecules of those neuron microtubules as the qubits — “Schrödinger’s protein”. There’s a slight drawback here: Far as we know, quantum computing can only work at temperatures near absolute zero, i.e. 590 degrees Fahrenheit colder than a working brain. Stuart phrased this objection as: “The brain is too warm and wet for delicate quantum-mechanical effects.”

Indeed. Not to mention the problem of scale: quantum effects get averaged out into the non-quantum models of classical physics anywhere above the molecular scale; where’s the causal mechanism?
Anyway, thanks to Derb for the blow-by-blow. Maybe next time.

"Mindfucking"

The philosopher Colin McGinn ((I always enjoy McGinn’s work, although I frequently disagree with him. However I think his CD-based course of lectures “Eternal Questions, Timeless Approaches” is the best introduction to philosophy that’s available today.)) has a new book coming out on the subject of psychological manipulation. I’m going to be interested to see how he distinguishes between teaching and what he’s calling “Mindfucking”. In his blog, he talks about “rationality” as a way of distinguishing the two, but I’m not sure that this stands up to scrutiny….

Two excellent contributions to the "pseudoscience FAQ"

Well, it’s not a real “FAQ” – but perhaps it would make sense to create a page which links some of the more egregious questions to the blog postings which best demolish them. Here are two examples.
First, Phil (the Bad Astronomer) addresses the old canard “Is science faith-based?”

Science is not based on faith. Science is based on evidence. We have evidence it works, vast amounts of it, billions of individual pieces that fit together into a tapestry of reality. That is the critical difference. Faith, as it is interpreted by most religions, is not evidence-based, and is generally held tightly even despite evidence against it.

And then Sean nails telekinesis, and the rest of parapsychology, in a piece called “Telekinesis and Quantum Field Theory”. This is long, but well worth your time. He considers claims about spoon-bending, and points out:

* Spoons are made of ordinary matter.
This sounds uncontroversial, but is worth explaining. Spoons are made of atoms, and we know what atoms are made of — electrons bound by photons to an atomic nucleus, which in turn consists of protons and neutrons, which in turn are made of quarks held together by gluons. Five species of particles total: up and down quarks, gluons, photons, electrons. That’s it.
There is no room for extra kinds of mysterious particles clinging, aura-like, to the matter in a spoon. That’s because we know how particles behave. If there were some other kind of particle in the spoon, it would have to interact with the ordinary matter we know is there…

Of course what applies to spoons also applies to brains. Sean’s arguments against spoon-bending also work against telepathy:

It’s a little bit less cut and dried, because in the case of telepathy the influence is supposedly traveling between two human brains, rather than between a brain and a spoon. The argument is exactly the same, but there are those who like to pretend that we don’t understand how the laws of physics work inside a human brain. It’s certainly true that there is much we don’t know about thought and consciousness and neuroscience, but the fact remains that we understand the laws of physics in the brain regime perfectly well. To believe otherwise, you would have to imagine that individual electrons obey different laws of physics because they are located in a human brain, rather than in a block of granite.

(My emphasis.)
Indeed. And yet supposedly eminent scientists and philosophers like Roger Penrose argue for some mysterious form of “quantum mind”. It seems as if some people just need a certain amount of mystery in their lives, and for them subatomic physics provides a satisfactory alternative to supernatural forces. In spite of this:

The philosopher David Chalmers half-jokingly claims that the motivation for Quantum Mind theories is: “a Law of Minimization of Mystery: consciousness is mysterious and quantum mechanics is mysterious, so maybe the two mysteries have a common source.”

I think it’s more to do with conservation of mystery than minimization: when one kind of mystery looks like it’s evaporating, exploit another, unrelated one.

The brain causes the mind. What's so hard about that?

Steven Novella dispatches dualism – whether it be the religious “woo” of Chopra and Egnor, or the philosophical zombie-inspired speculations of Chalmers:

The “easy problem” and “hard problem” of consciousness are more meaningfully described as the scientific questions and philosophical questions of consciousness. The context of my prior article was the scientific question – what causes consciousness. The materialist hypothesis – that the brain causes consciousness – has made a number of predictions, and every single prediction has been validated. Every single question that can be answered scientifically – with observation and evidence – that takes the form: “If the brain causes the mind then…” has been resolved in favor of that hypothesis.
For example, if the brain causes the mind then: there will be no documented mental function in the absence of brain function; altering the brain biologically will alter the mind functionally; mental development will correlate with brain development; and mental activity will correlate with brain activity (this holds up no matter what method we use to look at brain activity – EEG to look at electrical activity, PET scanning to look at metabolic activity, SPECT scanning to look at blood flow, and functional MRI to look at metabolic and neuronal activity).

The whole subjectivity and qualia stuff just annoys me. How do you build a brain? The easiest way (though it’s a bit time-consuming) is to evolve one. Start with simple sensor-effector nets. Add in some feedback loops to allow for memory, comparison, simulation, and learning. Why does pain “feel” the way it does? Because of the way sensory nerves are wired up to the pattern recognizers in the brain. Why does redness “feel” the way it does? Because of the way the various processing feedback loops are wired to pattern recognizers in the brain. Could it be different? Perhaps – but there’s definitely some efficiency associated with making these systems stable and predictable, so some of the low level stuff is likely to get hard-wired. And that’s about it. No qualia. It has to “feel” like something, because of the way it’s wired up, and the way it feels is simply the result of the way it’s wired, and the way we interpret the resulting pattern. No “what is it like to be an X?”: it is what it is, the result of brute contingency. Nothing ineffable. There are no philosophical zombies. We have no need of that hypothesis.
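The recipe above (sensor-effector nets plus feedback loops for memory) can be sketched as toy code. This is a deliberately trivial illustration with invented names and parameters, not a model of any real nervous system: a single recurrent “memory” term is enough to make the same stimulus produce different responses depending on history.

```python
# A minimal sensor-effector net with one feedback loop. The class name and
# parameters are illustrative inventions, not from any real library.

class SensorEffectorNet:
    def __init__(self, gain=1.0, memory_decay=0.5):
        self.gain = gain                  # sensor-to-effector coupling
        self.memory_decay = memory_decay  # how fast the past fades
        self.memory = 0.0                 # feedback state: a trace of past inputs

    def step(self, stimulus):
        # Fold the current stimulus into a decaying trace of past inputs.
        self.memory = (self.memory_decay * self.memory
                       + (1 - self.memory_decay) * stimulus)
        # The effector output depends on both present input and past history.
        return self.gain * (stimulus + self.memory)

net = SensorEffectorNet()
first = net.step(1.0)   # 1.5 — no prior history
second = net.step(1.0)  # larger — the memory trace now biases the response
```

The point of the sketch is the feedback loop: without `self.memory` the net would be a pure stimulus-response mapping, and identical stimuli would always produce identical outputs.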
The fall-out from this is significant, of course. No souls. No life after death. (What could survive?) Ergo no ultimate reward or punishment, so no system of morality based on cosmic threats and promises. Ultimately, no philosophy of mind. (Sorry, Dan.) No transubstantiation, demonic possession, or metempsychosis. No reincarnation. No out-of-body experiences. And as Pinker and others have noted, a bunch of folk-psychology notions bite the dust.
But we get to grow up.