Archive for the “Philosophy” Category

Over at Camels with Hammers, Daniel demonstrated that the “Argument from Personal Incredulity” is not restricted to mysterians or theists. I tried to post this as a comment, but for some reason it wasn’t approved, so I’ll repeat it here:

Daniel wrote:

I share the suspicion that robots could not have an internal, subjective side of experience.

To which I responded:

Sounds like an argument from personal incredulity to me. Let’s unpack it a bit. Are you a dualist? If so, then you can certainly argue that there is some essential difference between a human brain and a robot brain; you just have the thorny question of explaining how it is causally efficacious. If not, then you have a different kind of hard problem: to explain why it is impossible to create a synthetic substrate which can function in all the functionally relevant ways that the spongy grey stuff in your skull does.

Now perhaps you are arguing that, in practice, based on currently available technology, such a thing is impossible. Even there, I suspect that you are indulging in a little species-chauvinism. We humans have a natural tendency to exaggerate the competence of our brain function. We actually observe, reason, remember, pattern-match, infer and deduce far less than we imagine we do, with much less accuracy or consistency, and we confabulate like crazy to fill in the gaps when this becomes apparent.

Introspection is simply a form of feedback loop: a process in which a part of a complex system monitors and reasons about the operations of the system itself. Such feedback loops occur all over nature – and engineering. (Check under the hood of your car.) They are fundamental to basic processes like homeostasis, learning, and troubleshooting. It is highly implausible that a sophisticated robot would not incorporate such design patterns.
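
To make the point concrete, here is a minimal sketch of such a loop in Python: a simple proportional controller, the pattern behind a thermostat or cruise control. All names and numbers are purely illustrative, not drawn from any particular robot or library.

    # A toy feedback loop: the controller monitors one variable of the
    # "system" and nudges it back toward a set-point, correcting its own
    # error on every pass. Values and names are illustrative only.
    def regulate(reading, set_point, gain=0.1):
        """Return a corrective action proportional to the current error."""
        return gain * (set_point - reading)

    temperature = 25.0                       # the system's current state
    for _ in range(50):
        temperature += regulate(temperature, set_point=37.0)
    print(round(temperature, 2))             # ~36.94: close to the set-point

Homeostasis, learning, and troubleshooting are elaborations of this same monitor-and-correct cycle, applied to richer variables than a single temperature.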


I just finished reading the slim account of a debate between Dan Dennett and Alvin Plantinga, entitled “Science and Religion: Are They Compatible?”.

Dennett’s conclusion is straightforward:

Plantinga wanted to show… that science and religion are not just compatible: Science depends on theism to underwrite its epistemic self-confidence. […] [But] Our capacity to discover the facts, and to have good reasons for believing that we have done so, is explicable without appeal to inexplicable or irreducible genius, immaterial minds, or a divine helping hand. […] Let Plantinga, like Behe, try to show us the irreducible complexity in our minds that could not possibly have evolved (by genetic and cultural evolution). He will find, as Behe has, that his inability to imagine how this is possible is not the same as a proof that it is impossible. Richard Dawkins calls this the Argument from Personal Incredulity, and it is an obvious fallacy.

This fallacy — the Argument from Personal Incredulity – is one that has always fascinated me. Of course many (?most) people who say that they “can’t imagine how X could have happened” have probably never actually tried to imagine it: they have a prior commitment to a position that holds that it could not have done so, and that’s good enough for them. And in some cases they may lack the background to actually reason about the subject. But how about the others? What factors might underlie an honest, good-faith Argument from Personal Incredulity concerning the reality of evolution, from the origins of life to human consciousness?

It seems to me that there are a number of failures of imagination which, while individually innocuous, could have a cumulative effect. Let me list a few of them; I’m sure that you can think of others. Each of these items warrants a lengthy exposition, but for now let me simply summarize:

  • Lack of appreciation of very big (and very small) numbers. The enormous age of the planet (143 petaseconds – see the quick check after this list). The vast number of cells in an organism – or on the planet. The short time needed for a chemical reaction to play out, or for a mutated gene to thrive or be extinguished.
  • A tendency to think backwards, deterministically, all-or-nothing rather than forwards, incrementally, in parallel, with contingency. This builds on the large numbers involved: the entire planet (and indeed the universe) is full of experiments in selection, all proceeding in parallel, all interacting, and all subject to myriad contingencies.
  • Lack of appreciation of how much can be accomplished by so little. To understand the power of complex systems, start with simple ones. Look at the range of capabilities of single-celled organisms, or artificial life systems. Now apply scale and parallelism to these simple components.
  • Hubristic over-confidence in the capacity and ineffability of the human mind. Our brains are good at handling fragments of analysis, recognition, inference, and recollection; they’re also quite good at weaving together a story to fill in the gaps and fix up the mistakes. We “remember” things we haven’t seen, and “decide” to do things that have already started. (Philosophical arguments about mental capability – things like Searle’s “Chinese Room” and Jackson’s “Mary” – assume a total competence which is demonstrably at odds with the way the brain actually works.)
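
As a quick check on the "petaseconds" figure in the first bullet above, assuming the standard estimate of roughly 4.54 billion years for the age of the Earth (the snippet is illustrative Python, nothing more):

    # Age of the Earth expressed in petaseconds (10^15 seconds),
    # assuming the usual estimate of about 4.54 billion years.
    AGE_YEARS = 4.54e9
    SECONDS_PER_YEAR = 365.25 * 24 * 3600          # about 3.16e7 seconds
    print(AGE_YEARS * SECONDS_PER_YEAR / 1e15)     # about 143 petaseconds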


Over the last few days I’ve been reading (and occasionally contributing to) a lengthy blog thread entitled A Central “Argument” in Feser’s Final Chapter, “Aristotle’s Revenge” « Choice in Dying. The starting point was a back-and-forth between Eric MacDonald, the thoughtful author of the Choice in Dying blog, and Edward Feser, an intemperate advocate of Aristotelianism and Roman Catholic “natural law”. The comments provide an excellent contrast between those who believe that teleology of some kind is inescapable, and those who feel that at best it’s a consequence of the way that our language reflects our intentional stance (cf. Dan Dennett), and at worst it’s just a crude attempt to smuggle in a purposive deity. Good clean philosophical fun. Recommended.


I have a confession to make. I don’t understand Quantum Mechanics.

Now there’s no shame in that; Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” But the reason I mention this is that I’m pretty sure that I misunderstand it less than I used to. This is progress.

I’ve always known that I didn’t understand QM, because my common-sense interpretation of the words that physicists used to describe QM violated … well, common-sense. So I thought to myself, “Which is more likely? (a) My understanding is correct, and it’s OK that it seems absurd, because it’s supposed to seem absurd. Or (b) my understanding is wrong, and the real explanation is quite different. (And still possibly absurd from a common-sense point of view.)” Obviously(?!) (b) seems much more likely, so I put QM aside and tried to make sense of scientific discourse without looking too closely at it.

People are quite good at faking stuff like that – almost as good as they are at holding mutually contradictory beliefs without their heads exploding.

Recently I read The Grand Design by Stephen Hawking and Leonard Mlodinow. In general, I quite liked it, although I thought it a bit repetitious, and less well organized than it could have been. However in one chapter the authors take a crack at explaining the basics of QM, and it was a revelation to me. Specifically, I realized how my common-sense interpretation of the language of QM had led me to the particularly absurd conclusion which I’d correctly rejected. And I understood how the authors’ explanations of the ideas of QM made sense – though not common-sense.

I’m not going to try to reproduce my understanding here – at least, not yet. I’ve taken QM off the shelf, so to speak, and I’m looking forward to reading (and hopefully understanding) more, without those earlier misunderstandings getting in the way. I know that to really make progress I’m going to have to understand at least some of the mathematics, which will be a challenge. I think it will be a worthwhile one.

However the most important thing about this episode for me was that it reinforced something I believe very strongly, and wish that others did too. It’s simply this: common-sense, intuition, instinct, call it what you will, is a function of our evolved human brains. It was selected for, along with other skills that were adaptive for our survival. It applies to the world we experience, and interact with, at our scale: medium size objects, medium sized environment, medium periods of time. It works pretty well for rocks, and foodstuffs, and small groups of people and other animals, and actions like running, catching and throwing. But outside that range, there’s no reason to expect it to be reliable – and it isn’t.

From 1 mm to 1 km, 1 second to 60 years, 1 gram to 1 tonne, 1 kph to 100 kph, and -20°C to 100°C, we’re pretty good. But the subatomic world doesn’t behave the same as the rocks or the trees, any more than the larger universe does. The regularity, and even causality, that we build our common-sense view of the world on simply don’t work at radically different sizes or times. And this isn’t simply a matter of faith: we can measure it, and we’ve learned to rely upon what we measure. Every time you use your computer, or consult a sat-nav, or take a modern drug, you are relying on the fact that a bunch of scientists and engineers looked at the data, did the math, created explanatory models, tested them, and relied upon the evidence rather than “common-sense”.

I could add a couple of paragraphs about the relationship between this big idea and religion, particularly the arguments that are offered for the existence of god, but if you’re smart you’ll already understand them, and if not you’ve probably given up by now.


David Chalmers just blogged that his collection of papers, The Character of Consciousness, has finally been published. It first showed up on Amazon back in 2007, and my email inbox includes a slightly testy exchange with David about the ever-changing publication date. Never mind. My copy should be here on Wednesday, and I’m looking forward to reading and reviewing it. I don’t agree with his somewhat “mysterian” views, but I’ve always felt that the best way to understand one’s own position is to read the best of the opposition, and David certainly represents this.

While I was ordering this book, I checked to see if Chalmers showed up anywhere else. He did: as an author of Mind and Consciousness: 5 Questions. This is a collection of essays by many leading lights in the philosophy of mind, edited by Patrick Grim. I hadn’t heard of it before, but ordered it immediately. Even if one has read some of the pieces before, a well-edited anthology can be an invaluable way of capturing the state of an academic debate.


Over at Common Sense Atheism, Luke has posted an excellent commentary on the recent decision by several well-known philosophers (Keith Parsons and John Beversluis) to give up on the philosophy of religion:

The problem is not that philosophy of religion has lower standards than other areas of philosophy do. The problem is that standards in analytic philosophy in general are (compared to those in science) relatively low.
[...]
We need not look very far for examples. Consider the mainstream arguments in philosophy of mind about the possibility of zombies. David Chalmers argues that because he can imagine a world with all the same physical facts but no qualia, therefore physicalism is false. And this argument is highly respected and hotly debated in philosophy of mind, where many of the smartest people in philosophy do their work.

Such an argument from “what I can imagine” would be laughed out of a scientific conference with jeers of “Come back when you have evidence you idiot!” But standards are considerably lower in analytic philosophy, and such arguments are taken seriously and widely debated.

However Luke suggests that there is reason to hope. He points out,

In fact, one way to see the naturalistic project in philosophy since Quine is that naturalists want to raise the standards of argument and evidence in philosophy. We’ve noticed that the high standards in the physical sciences help make them so productive, and so we want to raise the standards in philosophy so that they are as close to the standards of science as possible. Thus, strict naturalists pay close attention to arguments that are roughly scientific in structure and rise close to the same standards of argumentation and evidence, and we pay less attention to arguments with lower standards, such as those that typify, say, theistic philosophy of religion or moral realism.


I forgot to mention that yesterday’s piece on truth came about because one of the temporary bloggers at Andrew Sullivan’s Dish, Zoe Pollock, saw fit to link to the original piece by Bill Vallicella. Apparently I wasn’t the only person who took exception to Vallicella’s nonsense: in today’s Dish, Pollock quotes from three critics of the piece. This was particularly evocative:

Mr. Vallicella, in the greatest traditions of Monotheist sophistry, asks, “What does Hitch lose by believing?” and he answers, showing his own nihilistic disdain for truth and faith, “Nothing.” Such is how he sees it. To an existentialist, however, you are your morality and your philosophy; what you think and do IS who you are; in other words, the truth of your existence is everything. To believe now, to run fearfully to a god he has never considered feasible out of some coward’s hope that a last minute plea would postpone oblivion, to lie to himself so grandly, would be for Mr. Hitchens to lose everything.

I wanted to see what Dr. Vallicella himself might have to say about all this, so I visited his blog again. Unfortunately, he doesn’t allow comments (and his trackbacks don’t work), so I have no idea if he realized how thoroughly he’s been fisked. Never mind. I browsed some more of his contributions, and came across another piece of pure nonsense which was crying out for demolition.

In “‘Suicide Bomber’ or ‘Homicide Bomber’?”, Vallicella castigates Bill Keller, the Executive Editor of the New York Times, for using the term “suicide bomber”. This, to Vallicella, is simply wrong:

Keller took exception to the practice of some conservatives who label what are more commonly known as suicide bombers as ‘homicide bombers,’ claiming that ‘suicide bombers’ is the correct term. Keller claimed in effect that a person who blows himself up is a suicide bomber, not a homicide bomber.

This is a clear example of muddled thinking. Note first that anyone who commits suicide ipso facto commits homicide.* If memory serves, St. Augustine somewhere argues against suicide using this very point. The argument goes something like this: (1) Homicide is wrong; (2) Suicide is a case of homicide; ergo, (3) Suicide is wrong. One can easily see from this that every suicide bomber is a homicide bomber. Indeed, this is an analytic proposition, and so necessarily true.

More importantly, the suicide bombers with whom we are primarily concerned murder not only themselves but other people as well. As a matter of fact, almost every suicide bomber is a homicide bomber not just in the sense that he kills himself, but also in the sense that he kills others. There are two points here. As a matter of conceptual necessity, every suicide bomber is a homicide bomber. And as a matter of contingent fact, every suicide bomber, with the exception of a few solitary individuals, is a homicide bomber.

If anyone is guilty of muddled thinking, it’s Vallicella. Let’s approach this systematically.


                               What the bomber destroys:
                               Property        Lives
Does the bomber survive?  Yes      ?              ?
                          No       ?              ?

There are four possible types of (successful) bombers, as shown in this table. The question is, what labels are useful for the various types? We could distinguish between those who (?merely) destroy property (the first column) and those who kill people (the second). We could call the second column “homicide bombers”, to distinguish them from those who seek to destroy property. Or we could focus on the distinction between those who survive their attack (row one), and those who die (row two). It makes sense to call the second row “suicide bombers”; this adjectival use of “suicide” goes back many years.

It’s important to note that there are examples of bombers in all four quadrants. During the protracted campaign by the Provisional IRA and its splinter groups in Northern Ireland and England, there were many attacks against property (with warnings to try to avoid loss of life) and against people (with no such warning), and in almost all of them the bombers survived. Attacks against property in which the perpetrator dies are less common, but not unknown. And in recent years we’ve seen many examples of attacks that were intended to kill others and in which the bomber intended (or at least expected) to die.

But Vallicella isn’t really interested in this degree of subtlety. He’s “primarily concerned” with bombers who kill other people: those falling into the second column. That’s fine: Western society places a high value on life (well, some life). So all of the bombers that Vallicella cares about are “homicide bombers”. We could drop the word “homicide”, and we still know what we mean. But there is a difference between the Real IRA bomber in Omagh and the bomber from Al-Qaeda in Basra. One walks away unscathed, the other dies. Most people feel that this is a distinction worth observing in our use of language, one which is captured by the term “suicide bomber”.

Of course Vallicella’s parenthetical observation about “suicide being a form of homicide” is irrelevant. Everyday language adapts to meet the needs of real people (and advertisers, and politicians), and is not dependent on theological taxonomies.

Vallicella claims that his arguments are “simple and luminous”. Simple they certainly are – though perhaps “simplistic” would be closer to the mark. Never mind; I’ve seen enough of this “Maverick”.


Bill Vallicella describes himself as a “Maverick Philosopher” and a “recovering academician”. Perhaps if he sought to recover the rigor and discipline of academic philosophy we would be spared nonsense like his latest piece on Christopher Hitchens and death. Here’s his conclusion:

What would Hitch lose by believing? Of course, he can’t bring himself to believe, it is not a Jamesian live option, but suppose he could. Would he lose ‘the truth’? But nobody knows what the truth is about death and the hereafter. People only think they do. Well, suppose ‘the truth’ is that we are nothing but complex physical systems slated for annihilation. Why would knowing this ‘truth’ be a value? Even if one is facing reality by believing that death is the utter end of the self, what is the good of facing reality in a situation in which one is but a material system?

If materialism is true, then I think Nietzsche is right: truth is not a value; life-enhancing illusions are to be preferred. If truth is out of all relation to human flourishing, why should we value it?

The argument here seems to be: in matters that lie beyond our epistemic horizon, why should we not prefer “life-enhancing illusions” to “truth”? As a general rule, when a writer ends a piece with a question, and offers no answer to the question, you should be suspicious. In most cases, there are perfectly obvious answers to the question, but to actually trot them out would undermine the rhetorical flourish that the author is seeking. Or it’s a way of disguising an Argument From Personal Incredulity. Either way, it’s a cheap trick.

It is clear that we do – and should – value truth in matters of fact and evidence. (If Dr. Vallicella disagrees, I would like to see how he deals with everyday life.) We don’t need to invoke any heavy-duty metaphysical notion of Truth; ordinary, everyday, consensus-based, empirically testable varieties of truth are good enough. Evolution has endowed many lifeforms, including humans, with a variety of powerful tools for detecting truth and falsehood and for storing the outputs of these tools. At the same time, evolution has exploited these capabilities by enabling other lifeforms to trick and defeat these tools. We have plenty of examples, from flowers that mimic insects to pool sharks in Atlantic City.

Truth-detection, truth-deception: it’s an evolutionary arms race. And in humans this competition has spread from the purely biological to the cultural. We can see this in the value placed on skepticism about poorly-supported truth claims, and the adoption of various mechanisms – jury trials, double-blind tests, peer-reviewed papers – to try to minimize the likelihood of subjective bias and self-delusion. And societies that emphasize these values – in law, medicine, science, technology, commerce, and so forth – tend to flourish.

So the proposition that “truth is out of all relation to human flourishing” seems groundless. And when Dr. Vallicella asks, “what is the good of facing reality?”, the answer is pretty clear: because that’s what humans do. It’s not a question of “what good is it” – you might as well ask “what is the good of living?” It’s a brute fact. We face reality, and try to establish truths about it. Our ability to do so affects our success in surviving and passing on our genes (and culture) to the next generation. We each encounter different elements and aspects of reality, but we have no choice about facing the reality which we encounter.

Presumably the self-styled “maverick” (a word that has forever been tainted by McCain and Palin) is referring to hypotheses which lie beyond the epistemic horizon: matters about which, as he says, “nobody knows what the truth is”. If we don’t know what the truth is, what is the harm of adopting “life-enhancing illusions”? There are three obvious retorts.

The first is that, empirically, our truth-judgement capabilities aren’t wired to detect which questions fall into which category. We don’t simply turn those brain centers off when a transcendental topic pops up. This means that we cannot avoid bringing our usual arsenal of critical tools to these subjects. And, historically, we have done so, and it has kept armies of theologians and apologists in business.

Secondly, suspending notions of truth in these areas doesn’t really help, because there seems to be a vast range of alternative “life-enhancing illusions” on offer. Which should we choose? Perhaps Dr. Vallicella feels that it doesn’t matter: any comforting story is better than the stark reality of an indifferent universe. But these are not unencumbered choices: each is embedded in a rich network of cultural, social, and dogmatic propositions and norms, many of which definitely DON’T fall into the “nobody knows what the truth is” category. How does one choose? Are truth and reason irrelevant? Such considerations make a nonsense of the purported dichotomy of “life-enhancing illusions” vs. “truth”.

And finally, there is the inconvenient truth that the epistemic horizon keeps on moving. Five hundred years ago, witchcraft, fairies, ghosts and demonic possession were things that “everybody” knew to be true. Perhaps a Victorian ancestor of Dr. Vallicella would have described seances and spirit communications as “life-enhancing illusions”. Today, I assume that if Dr. Vallicella developed symptoms of “possession”, he would expect to be treated medically for schizophrenia.

We have always warped our everyday notions of truth and evidence to accommodate irrational “life-enhancing illusions”. As the epistemic horizon expands, it takes time and effort to roll back these distortions. Today we treat parents who rely on prayer to heal their sick children as criminals, rather than respecting their antiquated “life-enhancing illusions”. Yet many religious believers (Roman Catholics and Moslems) still hold that they should not be accountable to civil courts, or to the evidentiary rules that have been adopted to make the search for truth more reliable. Such ideas are intrinsically divisive, and have no place in a heterogeneous society.

Dr. Vallicella’s “life-enhancing illusions” are not free. They have baggage. And they are incompatible with a commitment to reason. They do not “enhance” my life, nor that of countless others.


Colin McGinn has written a marvelous essay on “Why I am an Atheist”. He begins by pointing out that the atheist cannot rationally limit his stance to a simple assertion of non-belief: he…

… doesn’t just find himself with a belief that there is no God; he comes to that belief by what he takes to be rational means—that is, he takes his belief to be justified. He may not regard his atheistic belief as certain, but he certainly takes it to be reasonable—as reasonable as any belief he holds. Just by holding the belief he regards himself as rationally entitled to it (or else he wouldn’t, as a responsible believer, believe it—that being the nature of belief). Also, given the nature of belief, he takes himself to know that there is no God: for to believe that p is to take oneself to know that p. The atheist, like any believer in a proposition, regards his belief as an instance of knowledge (of course, it may not be, but he necessarily takes it to be so). So an atheist is someone who thinks he knows there is no God. Thus he is prepared responsibly to assert that there is no God. The atheist regards himself as knowing there is no God in just the sense that he regards himself as knowing, say, that the earth is round. He claims to know the objective truth about the universe in respect of a divinity—that the universe contains no such entity.

Many theists (and agnostics) protest loudly that such a position is unwarranted, arrogant, and epistemically unreasonable: an example of the fundamentalism which many atheists criticize in theists. But McGinn will have none of this: theists have exactly the same confident disbelief in many things – other gods, for example – that atheists do. They have no basis for insisting that the atheist should adopt a selective agnosticism:

My state of belief mirrors theirs, except that I affirm zero gods instead of one. (In fact, the idea of many gods has its advantages over the one-god theory: it comports with the complexity of the world and it promotes tolerance.) Yahweh, Baal, Hadad, and Yam: which of these ancient gods do you believe in and which do you think fictitious? I believe in none of them, nor in any others that might be mentioned; if you believe in one of them and disbelieve in the others, then you are just like me with respect to those others. Atheism is not confined to atheists, and the epistemology is the same no matter which gods you disbelieve in.

Having made his case, McGinn confesses that he finds the label of atheist a rather misleading one:

So my state of belief is not that of one continuously denying the existence of God, with an active belief that there is no such entity (though it is true that I am more often in this state than I would be if the issue were not constantly debated around me). I am, dispositionally at any rate, in a state of implicit disbelief with respect to God—as I am in a state of implicit disbelief about ghosts, goblins and Santa. I simply take it for granted that there is no God, instead of constantly asserting it to myself. The state of mind I am in while composing this essay is not then my habitual state of mind, and even to be explicitly denying the existence of God strikes me as taking the issue a little too seriously—as it would be to write an essay making explicit my negative implicit beliefs about Santa Claus. So I am really as much post-atheist as post-theist, when it comes to my natural state of mind—just as I suppose most people are post-a-polytheist as well as post-polytheist. Polytheism, for most people, is simply a dead issue, not a subject of active concern. Theism for me is a dead issue, which is why it is misleading to call me an atheist–though it is of course strictly true that I am. It is misleading in just the way it is misleading to speak of a traditional Christian as an a-polytheist or a normal adult as an a-Santa-ist, since it suggests a far more active engagement with the issue than is the case. Many other difficult issues engage my mind and remain unresolved or at least open to serious question, but not my disbelief in God.

He closes with some thoughts about what it might mean for God-talk to remain with us in a purely fictional mode. I’m not holding my breath. All the same, it’s a wonderful essay. (Of course I would say that, wouldn’t I?) If you want to know what I believe, you could do worse than read it.


Over at Sentient Developments, Russell Blackford takes on the philosopher Massimo Pigliucci and his recent piece on the limits of skeptical inquiry. Russell’s comments in general are quite convincing, but one passage particularly caught my attention.

I’ve been hanging out at various Christian apologist websites recently, contributing the odd comment here and there and scratching my head over some of the crazier assertions that people make. And one of the common moves that apologists make, when a unique and supposedly miraculous claim is challenged, is to say that science is unqualified to judge such things because “with God, all things are possible”. Of course, this is really no different from Last-Thursdayism: we can’t trust the evidence for anything, because the universe might have been arranged to create that illusion. So I particularly liked Russell’s robust rejection of such moves:

However, what if somebody replies that God arranged for the Earth to look far older than it really is, in order to test our faith? Here, Pigliucci thinks that science and hence skeptical inquiry reaches a limit. He claims, in effect, that philosophers have a reply, whereas scientists must stand mute.

I disagree with this. The scientist is quite entitled to reject the claim, not because it makes falsified predictions or conflicts directly with observations – it doesn’t – but because it is ad hoc. It is perfectly legitimate for scientists working in the relevant fields to make the judgment that a particular hypothesis is not worth pursuing, and should be treated as false, because it has been introduced merely to avoid falsification of a position that is contrary to the evidence.

Scientists might take some interest in claims about a pre-aged Earth if they were framed in such a way as to make novel and testable predictions, but as long as all such claims are presented as mere ad hoc manoeuvres to avoid falsification of the claim that the universe is really 6,000 years old, a scientist is quite entitled to reject it. A philosopher should reject it for exactly the same reason. Philosophers don’t have any advantage over scientists at this point.

Thus, Pigliucci is unnecessarily limiting the kinds of arguments that are available to scientists. He writes as if they are incapable of using arguments grounded in commonsense reasoning, such as arguments that propose we reject ad hoc thesis-saving hypotheses.

