Category Archives: Science!

The Barbarian Bumblebee


I went on a road trip the other day, and while stretching my legs at a rest stop I noticed a pair of bumblebees buzzing dutifully among a clump of fireweed. I recalled reading somewhere that bumblebees don’t produce honey, and that they live in underground nests. That had struck me as strange at the time (the honey part, anyway; I know full well from personal experience that they live underground), especially as I had spent most of my childhood with the impression that all bees were bumblebees (as far as I can tell I’ve never seen a wild honey bee in western Washington State, and as a child I didn’t make a habit of visiting beekeepers). After all, isn’t making honey what bees are supposed to do? Isn’t that why they go around collecting nectar in the first place? Do bumblebees make honey after all? And if they don’t, then what in the world are they doing with all that nectar?

So I checked in on bumblebees, and here’s what I found.

Bumblebees do, indeed, refrain from making honey on the whole. They make some, but even the largest nests never hold more than about 4 ounces of the sticky stuff. The reason is that they simply don’t need it. Honey bees make honey so that they will have food to eat through the long winter months. Bumblebees don’t make honey because they don’t plan on surviving the winter. Every year, when winter comes, young and recently fertilized queen bumblebees find nooks and crannies to hide in and remain dormant during the cold months. The rest of the bumblebees freeze to death or starve. When spring comes the young queens awake and start brand new nests from scratch. The queen builds the first cells of the new nest, lays her first eggs, and collects nectar and pollen to feed her larvae. It is only after four or five weeks that her children have grown enough to take over the menial labor of collecting food so that she can focus on laying eggs. Because they start over from scratch each year, bumblebee nests even at their height typically contain only about 50 bees or so. What’s especially interesting is that female worker bumblebees are capable of reproduction, unlike honey bee workers. The queen dominates the early workers and prevents them from becoming fertile, but by the end of the season many of her children start having kids of their own. All of these children are male (bizarrely, bumblebees can produce male eggs without mating, and can only produce female bees from eggs fertilized by a male), and they flee the nest to find roving young queens to mate with.

Since bumblebees don’t survive the winter they don’t need to bother with honey. They eat pollen and fresh nectar, straight from the flower. From a human perspective this almost seems like a waste: they collect all that nectar and don’t produce a drop of honey for us to eat! Yet bumblebees are vitally important for agriculture. They are hardworking pollinators, and there are several species of plant that can only be effectively pollinated by bumblebees. Some companies cultivate bumblebees for commercial pollination services, and such bees are used in greenhouses and fields across the globe. Have you ever enjoyed hothouse tomatoes? Chances are good they were pollinated by bumblebees.

As soon as I learned all this I was struck by the romantic notion that honey bees, if they could think and talk, would likely look down on bumblebees as uncivilized barbarians. While honey bees make great city-like hives that contain thousands of individuals, bumblebees make do with small “tribes” of 50 or so that come and go with the seasons. I could well imagine some scandalized honey bee relating to her friends, over a civilized lunch of honey, that “Those barbarians are so underdeveloped that their workers lay eggs!” And now the romance grows in my mind: a group of hardworking honey bees, cautious and wary as they collect nectar in the wilderness far from their grand city home, encounter a wild and savage bumblebee, hairy, large, uncouth, and uncivilized. One thing I forgot to mention is that bumblebees, unlike honey bees, can sting multiple times without dying. How frightening then must a bumblebee seem to a honey bee; perhaps as frightening and unpredictable as a wild mountain man seems to the modern city dweller, or an African bushman to an African businessman. The bumblebee seems a bushy and sizable creature, with strong limbs and a thick fur coat that would no doubt intimidate the more effete and clean-shaven honey bees. To be sure the honey bees have numbers on their side, but it must give them pause to know that this barbarian, obviously their inferior in culture and science, could kill any one of them and walk away from it unharmed. How wild and free must bumblebees seem to a honey bee. They go where they will, they don’t plan for the future, and even their workers can become mothers. Would a honey bee, in a burst of whimsy, almost envy the bumblebee the way that a modern cubicle dweller might envy for a moment the rugged life of the mountain man?
Of course the bee, just like the cubicle dweller, would turn back to its work in the end, reminding itself that the grass is always greener on the other side and knowing in its heart that it wouldn’t stand a chance out on its own anyway and that at least it won’t starve come winter.

Science Fiction, Naturalism, and the Singularity


I love science fiction.

Though I haven’t read all that much of it recently.

The problem, I think, is that as I have grown older I have learned too much philosophy and metaphysics to really sit down and enjoy a meaty piece of speculative fiction. To be more accurate, I’ve learned too much of the wrong philosophy. Almost every really serious and thoughtful piece of science fiction I’ve read is heavily based in a naturalistic metaphysic, which is something I reject. This difference of opinion is particularly noticeable in science fiction as opposed to other genres. In many ways metaphysics and philosophy are about models of reality, and different models will predict different things about the future.

For example, your average naturalistic model says that man is a kind of very, very complicated machine. Using that aspect of the model we can predict that someday we will build machines that are as sentient as ourselves. The naturalistic model also holds that the complexity of our bodies and brains is solely based in the natural process of evolution. If this is true then we can also predict that it is very likely that someday the sentient machines that we build will be superior to ourselves. This leads us to the whole concept of the “singularity,” the point at which computers will be smarter than humans and will be capable of designing even smarter computers which design even smarter computers and so on and so on for the foreseeable future. Once this singularity has been reached almost anything will be possible.

Of course it all depends on a purely naturalistic metaphysic.

If you’re like me, then you do not believe that the human mind is the product of a complicated machine. Though I do not fully understand what the mind is, I understand enough to have confidence that it is not merely a machine. A machine is incapable of producing free will or reason, for example, and I have far more confidence in the existence of free will and reason than I have in the statement “the mind is what the brain does.” If we take this metaphysical position as our starting point the future looks very different. Computers may increase in processing power by wide margins, but they will never be capable of reason or intelligence. Though some programs may be able to mimic human behavior, they will only be able to do so by following the instructions of human programmers. Computers will never reach the lowest levels of actual intelligence, much less become our intellectual superiors. They will remain what they are: powerful processing tools. The computer on your desk is the equivalent of an army of accountants working at incredible speed, able to complete complex calculations and follow the commands of the most byzantine flowcharts imaginable, with only one major difference: an army of human accountants can think, while the computer can only obey. It is imaginable that an accountant working in a sea of other accountants could have an idea about a better way to solve the problem at hand than the instructions they’ve been given. The accountant might be completely wrong, of course, but a computer can never be wrong for the same reason that a computer can never be right. It doesn’t even have the capability to make an error without a human accidentally programming that error into it. How can a computer ever become a genius if it is not even capable of becoming stupid?

The only reason to believe that a computer could ever become intelligent is if you begin with the idea that the human mind is itself the product of a computer. Surely computers will produce intelligence if we can only make them complicated enough! It is a statement taken on faith, and faith alone. The computers we have now are as incapable of intelligence as a pencil and a piece of paper. It is only philosophy that makes them appear to be something more.

And that’s part of the reason why I have trouble getting into hard science fiction these days. The authors take so many things for granted that I simply don’t find plausible. It’s not like fantasy, either, where you can set your preconceptions aside. J.K. Rowling does not expect us to believe that there is an actual hidden society of witches and wizards living in Britain, and thus we can enjoy Harry Potter; but the writers of many science fiction works do expect us to believe that the mind is actually a computer. No wonder I find one delightful and the other slightly insufferable.


Bury the Suicide at the Crossroads: Why the Medieval Church Was More Enlightened Than It Appeared



I recently discovered the popular podcast Freakonomics. It’s a kind of educational program in the same vein as Radiolab or This American Life where interesting stories and ideas are shared in an entertaining format. One of the first episodes I listened to was called “The Suicide Paradox” and what I heard there inspired a line of thought that spread through my mind until it was filled to the point where I knew I would have to make a post out of it.

The episode dealt, as the name suggests, with suicide, and they used a fascinating hook in order to draw the listener in. They interviewed a professor who had spent much of his life living with and studying an Amazonian tribe. Early on, when he first came to the tribe, he shared with them a story that had affected his life deeply: the story of his stepmother’s suicide. However, when he was finished, he found that the entire tribe was laughing. When he asked them why they were laughing at such a tragic story they replied that it was because she had killed herself. They found the idea of someone killing themselves ridiculous. Who had ever heard of such a crazy thing? People kill animals and sometimes kill each other, but what kind of clown would try to kill themselves? No one in the tribe had ever committed suicide. To this day none of them have. I was, of course, very interested in discovering why this tribe was in such a favorable position. Unfortunately this opening story was merely a hook; they never came back to give a satisfying answer to that question. However, what they did have to say has given me a theory.

You see, the episode also featured interviews with experts on suicide, and those experts, among other things, talked about the Werther Effect. The Werther Effect is named after the protagonist of a popular 18th-century novel who committed suicide after the woman he loved became engaged to another. The book supposedly started a rash of copycat suicides, with young men all over Europe offing themselves in a manner similar to the book. The Werther Effect was shown to be a real phenomenon by a series of sociological studies showing that the suicide rate goes up after a famous suicide occurs. For example, it is estimated that Marilyn Monroe’s suicide may have led to 200 more suicides in the following months than would normally have occurred. Because of this many media outlets have a policy of not reporting on suicides, or of doing their best not to glorify the act if they do report on it.

The podcast ended by discussing Hungary, a country with one of the highest suicide rates in the world. They interviewed an old Hungarian man who has spent most of his life fighting against suicide, setting up hotlines and doing research and trying to bring the rate of suicides down to a normal level. One thing he mentioned stuck with me: he said that in Hungary suicide is considered a brave act, something courageous even.

All of this led me to think about a subject that used to trouble me about the history of Christianity: the treatment of suicides by the Catholic Church during the Middle Ages. In medieval times suicide was not just frowned upon: suicide was a crime. Medieval suicides typically could not be buried in the church graveyard, or in any other consecrated ground. Sometimes their bodies were flung ignobly into a ditch. Others were decapitated before burial, or their bodies were staked to the ground. On occasion suicides were buried under a crossroads so that they would be symbolically stepped on by all who passed by. These punishments were even harsher then than they would be if instituted today: in a medieval village, where practically everyone knew each other and where weddings and funerals were truly communal events, the lack of a proper funeral and the public shame that came with it would be powerful punishments. Everyone in town would know about the suicide, and everyone would know that it brought shame and disgrace.

When I first learned of these practices they struck me as very barbaric and cruel. After all, the person who commits suicide is typically someone in deep depression or sorrow. To take a person who was so troubled and sad that they took their own life and then cast such shame and disgrace on them for doing so seemed uncompassionate, like kicking them while they’re down.

But while reflecting on the Werther Effect, it occurred to me that the Church did everything in their power to make suicide appear as unattractive as possible. To a medieval peasant suicide was never brave, courageous, or glamorous. Though suicides occurred they were not performed for the public eye. Suicide was an act that was best kept secret. Those who committed suicide typically did what they could to make it look like an accident. There was a kind of social taboo against discussing suicide, and if a great or powerful individual committed suicide it was usually glossed over in historical accounts. It was only with men who were considered evil or disgraced that suicide was stated clearly as the probable reason for their deaths.

All of this is to say that what seemed at first to me to be cruel and unenlightened practices by the church now seem to me to be terribly enlightened. We know now that suicide is a dangerous idea, and that it can spread from community to community. Every suicide that is publicized strengthens the idea that suicide is an acceptable option for those who are desperate. Even worse are the suicides that are romanticized, where those who slay themselves are considered brave or tragic or poetic. Study after study has shown that communities that romanticize suicide have far greater rates of suicide than normal, and almost every movie star or hit musician who kills themselves has the potential to inspire hundreds to follow their example. And they do not have to be famous: in countries such as Micronesia or Hungary, where suicide is epidemic, things have reached the point where almost everyone knows someone who has committed suicide, and because of that suicide has become more and more accepted as a common occurrence. The Amazonian tribesman who finds suicide such a novel and unheard-of idea that it inspires laughter is capable of the same depression, frustration, and despair as the man from Hungary for whom suicide has become a reality of day-to-day life; and yet the Hungarian is far more likely to do the deed. Once suicide is considered normal, far more people will take their lives into their own hands and end them.

So what did the Church do in medieval times? They made it clear that suicide was an aberration and a crime against God and man alike. They gave the suicide nothing but shame and disgrace. If their actions seem cruel to us then it is only because we do not understand the danger suicide represents to the entire community. Superstitious peasants in Eastern Europe occasionally staked a corpse to his coffin in order to prevent him from rising from his grave to slay the living: the Catholic Church staked the bodies of suicides to the crossroads for the same reason. They needed to prevent this person from killing others with his action. Today we look on suicide as a very personal decision: but if a despairing person decided to kill themselves with a bomb, while sitting in a public square, we would be less sympathetic to their plight. To the church, and to the modern sociologist, every suicide is a suicide bomber, casting shrapnel at all who hear of their death. A suicide does not simply take their own life into their hands, but also the lives of those around them. In that light the actions of the Church seem perfectly justified.

We have no solid statistics for the suicide rate during the Middle Ages, as record keeping was not always accurate and most records that were kept have not survived. However, Dr. Alexander Murray of Oxford found, from his study of official records from the time, that the recorded suicide rate in Essex in the 13th century was 0.88 occurrences per 100,000 people (though he admits that this number assumes that all suicides were discovered and recorded, and is almost certainly low in that respect). In comparison, the suicide rate in the United States today is around 10 per 100,000, and in countries such as Hungary it is more than 20 per 100,000. Even if the medieval record keepers of Essex caught only 1/5 of all suicides, the true rate would still be less than half the suicide rate of the US. I do not think it is controversial to claim that the Middle Ages had a lower suicide rate than the modern world. I believe we have the Church to thank for that.
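The comparison above is easy to check directly. A minimal sketch (the five-fold undercount factor is the post's own hypothetical, not a measured figure):

```python
# Sanity-checking the medieval vs. modern suicide-rate comparison.
# All rates are per 100,000 people per year.
recorded_medieval_rate = 0.88   # Murray's figure for 13th-century Essex
undercount_factor = 5           # hypothetical: only 1 in 5 suicides recorded
implied_true_rate = recorded_medieval_rate * undercount_factor  # 4.4

modern_us_rate = 10             # approximate modern US rate

# Even with the generous undercount assumption, the implied medieval
# rate is still under half the modern US rate.
assert implied_true_rate < modern_us_rate / 2
```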

You can’t always tell a book by its cover.

Swiftocracy!: Attack of the Toads

For the last two weeks each of my posts has been based on a request. For more information about how that happened, look here.

“Write about Horny Toads and their ability to defend themselves from predators.”


Horny Toads are not toads.

But they do have an abundance of horns.

Horny Toad is a colloquial term for several species of Horned Lizard, all of which are native to North America. They are not amphibians. Many of them live in the desert, though some can be found in the forests of Idaho and southern Oregon, and in Colorado. They’re small, and fat, and covered in spikes that make them more charismatic than your average fat lizard. They got the name Horny Toad because they look a lot like toads. They have a short, wide snout and a short, fat body. In addition they will inflate their bodies when threatened, which certainly seems like toadlike behavior. I mean, look at this thing:



I wouldn’t blame you for calling it a toad.

Horny Toads, despite their fearsome appearance, are actually quite small and probably quite tasty. They also don’t move too fast. The Horny Toad hunts by sitting very still and eating any ants that walk by its mouth. In order to survive it relies on four distinct lines of defense.

1. Camouflage. Their dull earth tone scales and bumpy exterior means that this lizard blends right into the landscape while it waits for ants. If you can’t see him then you can’t eat him. Of course camouflage isn’t perfect. When a hungry coyote or bobcat spies him sitting on a rock he’ll have to try…

2. Inflation. The Horny Toad will inflate its body somewhat when threatened. This makes him look bigger, spikier, and can be a little surprising. The hope is that whoever is bothering him will get scared off. If this fails to impress he can always rely on…

3. His spikes. He’s a prickly little critter who may hurt going down. Some predators will be put off by this. If one tries to snatch him up he’ll usually lean down on one side to keep their jaws from getting a grip on his scaly hide. If this doesn’t work he has only one more trick up his sleeve, and it’s a real doozy.

4. Blood shooting eyeballs.

That’s a pretty strange defensive ability, I must admit.

Though it seems like something out of a prospector’s tall tale, the fact is that Horny Toads really can shoot blood out of their eyeballs. Well, out of ducts close to the eyes, anyway. They can shoot blood up to five feet, which is pretty frightening to a hungry coyote. It’s just plain surprising. Predators are no strangers to blood, but they don’t expect it to start spraying until after they start biting. In nature you don’t get second chances, so most animals are wary of anything too surprising. As an added bonus there’s something in the blood that stinks to high heaven if you’re a canine or feline. It makes a predator wonder: why bother trying to eat this freaky stink-blood-shooting spiky thing when there are plenty of perfectly normal groundhogs around to munch on?

What is most surprising to me is how they shoot the blood out. I was ready to believe that they just have some natural little blood-cannon ducts near their eyeballs that fill up with blood and squeeze it out like a Super Soaker. The truth is that they somehow restrict the flow of blood leaving the head. This builds up pressure until the ducts around their eyes literally burst, the blood vessels rupturing outward in a spray of blood. That sounds painful! Imagine if you could do that.

Now stop before you give yourselves nightmares.


Living Below 0 Degrees (Fahrenheit)


As long as I can remember I’ve had a fascination with the cold. When I was 10 or 11 I decided to see how long I could stay outside in a t-shirt on a chilly fall evening. It was only about 40° F outside, typical for a Washington fall. I stood in the gravel road just a few steps from my front door and set my will against the temperature. The light was dimming. I was cold; but I found that I could take the cold. I could make it a part of me. I could set it aside and withstand it. I was just a shivering skinny little nerd with a wild head of hair and round glasses on the outside, but inside I felt like a conqueror. As years went by I tried to increase my tolerance. I learned that the key was to accept the cold. If you fought the cold, if you tried to stay warm mentally, then you would be miserable. You had to make the cold an extension of yourself. I imagined that I was a man made of ice, that my skin was blue as a crevasse and that ice water flowed in my veins. Then I could welcome the cold like a friend. I could pretend that I was in my own element. The cold wasn’t something to escape but to embrace. If I concentrated on these ideas then the cold became bearable. Sometimes it even became enjoyable.

Eventually I became satisfied with mental experiments. I stopped deliberately exposing myself to the cold and only used my mental techniques when I had to (i.e., I forgot my coat and dang if it ain’t chilly outside). Then the strange wheels of fate turned and I found myself here in Anchorage, Alaska. After an unusually long and warm fall (for Anchorage, anyway) winter has finally arrived, and with a vengeance. I got used to the temperature being in the 20s (-6 to -1° C). Then two days ago it dropped down to 8° (-13° C). Yesterday I woke up and it was -7° (-22° C). And I know that soon enough we’ll be reaching temperatures in the -20° range (-29° C). This is cold like I had never known before.
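For readers who think in Celsius, all the parenthetical conversions above come from the standard formula C = (F − 32) × 5/9; a quick sketch:

```python
# Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9.
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9
```

For example, `round(f_to_c(8))` comes out to -13, matching the conversion above.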

Naturally I was curious to see what temperatures in the negative degrees felt like. As a child I’d read Jack London and wonder what such extreme cold would actually feel like to be in. It seems possible to me that others may be curious too. So let me tell you what -7° feels like.

Oddly enough it feels mostly the same as 25°.

The thing about mildly extreme cold is that it doesn’t affect your body differently than regular cold does. You’re still losing heat to the air around you. The only difference is the rate. At 25° I can go about in my coat, gloves, and hat for hours if I have to and still be fairly warm under the layers. At 0° and below I lose my heat much more rapidly. I took my gloves off to scrape my car’s windshield the other day, just as I would do back home in Washington. I was surprised to find that my hands were painfully cold after only a minute or so of exposure. That’s the thing about this cold. It can deceive you into thinking that you’re safe while it steals away your body heat.

And don’t even think about going outside with wet hair. My head was slightly damp from showering yesterday and in the ten seconds or so it took to walk to the warm car my hair went from “warm and wet” to “dunked in a frozen pond.” I took a ten minute walk at noon yesterday, when the temp was about 8 or 9°, and my face was feeling the pain by the end of it. The rest of my body was fine, covered with my heavy winter coat, gloves, and wooly hat.

I can still remember that Jack London’s story “To Build a Fire” had a man traveling in temperatures of -70° (-57° C). At that point if you spit, it’ll be frozen long before it hits the ground. I still have no idea what such extreme temperature feels like. I probably never will (Anchorage, being warmed somewhat by the ocean, only gets down to about -30° at worst, from what I hear). Still, I have an idea what it would feel like. For a brief amount of time (maybe just before your first breath) it will feel like any other cold. And it’ll keep feeling like that until it’s stolen almost all your heat away.

Which will happen very, very fast.

Why Computers Will Never Be Better Than Humans

I heard recently that they’re getting closer to building a computer that can beat master Go players (if you don’t know what Go is, you can learn about it here). I couldn’t find any articles out there confirming that rumor. But it got me thinking about the “conflict” between computers and human beings. I’ve heard people say (usually jokingly) that it won’t be long before computers are better than us at everything. Others more seriously believe that someday humans will become obsolete, made completely inferior to computers. Still others retort that a computer will never be capable of creating art, or something similar. I’ve been thinking about it and I’ve come to an interesting conclusion. I don’t believe that computers will ever become “superior” to humans. Why? Because there is nothing a computer can do that a human can’t, given enough time.

Take Deep Blue, for example. Deep Blue was the supercomputer that made headlines by beating chess grandmaster Garry Kasparov. By all apparent accounts, computers are now better than humans at chess. But when Garry Kasparov faced off with Deep Blue he wasn’t really playing against a soulless machine. He was playing against the entire team of programmers who created Deep Blue, programmers armed with immense processing power. To understand this better, let’s take a cursory look at how Deep Blue works. Deep Blue, like all computers, follows a huge and complicated set of rules. Deep Blue looks at the position of the pieces on the board and then begins calculating the possible moves it could make. After calculating a possible move (let’s say moving its queen) it then consults a long and complex set of rules that eventually gives Deep Blue a “score” for that position. High-scoring positions are better than low-scoring ones. After calculating every possible move it can make, it then calculates every possible move its opponent could make. The opponent’s possible moves then affect the score of each of Deep Blue’s own moves. After calculating all that, it chooses the move that has the highest final score. By always making the “best move” Deep Blue is capable of winning a high percentage of the time.

Now granted, all that I just said is an incredible simplification of what actually happens, and some of the exact details differ from the description I just laid out. Still, it is essentially accurate. The important thing to note is that a human, given enough time, can do exactly what Deep Blue does. If you give a human the entire long, complicated list of rules that makes up Deep Blue’s programming, then that human can play just as well as Deep Blue.

Imagine the scenario now: on one side of the chess board is chess grandmaster Garry Kasparov. On the other side is a short, balding accountant who has never played chess in his life. However the accountant has with him several filing cabinets filled with rules, as well as reams of notebooks and a barrel of pens. Garry makes his move. The accountant, who we’ll call Phil, opens up the first file folder and begins to read. He follows the rules the file lays down exactly. He writes down every possible move, one at a time, to the rules’ exact specification. Then he calculates (using the rules, of course) every possible move that Garry might make in return. He continues his calculations until he reaches the final file folder, performing the last step. He moves one of his pieces according to the result he reached by following the rules. Garry considers the move, then makes one in return. Phil wipes his brow, grabs a fresh pen, and opens up the first file folder again.

That is essentially what happened when Deep Blue beat Kasparov all those years ago. The only difference is that Deep Blue can do calculations faster than Phil can. Deep Blue can process 200,000,000 possible moves per second. Phil is much slower. But they’ll both come to the same answer in the end.
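The score-every-move, score-every-reply procedure that both Deep Blue and Phil follow is what programmers call minimax search. Here is a toy sketch of it; the game, moves, and scoring function below are made-up stand-ins, not Deep Blue's actual rules:

```python
# A toy sketch of the "score every move, then score every reply" search
# described above (minimax). The game, moves, and scoring function here
# are made-up stand-ins, not Deep Blue's actual rules.

def minimax(position, depth, maximizing, legal_moves, apply_move, score):
    """Return the best score reachable from `position`, looking `depth`
    moves ahead. `score` plays the role of Deep Blue's filing cabinets
    full of scoring rules."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return score(position)
    child_scores = [
        minimax(apply_move(position, move), depth - 1, not maximizing,
                legal_moves, apply_move, score)
        for move in moves
    ]
    return max(child_scores) if maximizing else min(child_scores)

# Demo on a trivial "number game": a move adds or subtracts 1, and the
# score of a position is simply its value.
legal_moves = lambda p: [p + 1, p - 1]
apply_move = lambda p, move: move
score = lambda p: p

best = minimax(0, 2, True, legal_moves, apply_move, score)
# best == 0: the maximizer moves to 1, and the minimizer answers with 0.
```

Nothing in that loop requires a computer; it only requires patience, which is exactly Phil's situation.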

Kasparov was a man with a deep understanding of chess, combined with intellect, wisdom, and experience. Deep Blue was a mindless computer that followed, exactly, the rules it was given, rules developed by a team of human programmers working over several years. When Deep Blue beat Kasparov it didn’t prove that computers were superior to humans. It only proved that it is possible to make a set of complicated rules that all but guarantees victory at chess.

And that’s why I don’t worry about humans becoming obsolete.

A Chat With Mr. Enlightenment

I’ve been off here and there on the internet, as I’m wont to do. Recently I ran across a little mess that I had to poke my nose into. I ended up getting into a discussion with a certain atheist (I almost hesitate to call him that: not that he isn’t an atheist, but his behavior is so regrettable that I don’t want to insult the many articulate, thoughtful, and reasonable atheists I know by putting him and them in the same category). To make a long story short, the discussion came down to me asking him for evidence that naturalism is true. He responded with something along the lines of “300 years of scientific progress.” I kindly asked him to explain what he meant by that, and what exactly scientific progress had to do with philosophical naturalism, and he merely rattled off as many scientific fields as he could think of: “biology, geology, chemistry, physics,” etc. When I asked him, again, for a specific argument he merely replied with “E = mc².”

I never did get a straight answer out of him, but it reminded me of a passage from C.S. Lewis’s first published novel, The Pilgrim’s Regress. The book is purely allegorical, following the example of The Pilgrim’s Progress by describing the journey of a man named John from his home in the land of Puritania through the wild lands of various human philosophies, customs, and fads before finally returning home again. The particular passage I’m thinking of comes soon after John leaves Puritania, when he is picked up in a cart by a genial old fat man named Mr. Enlightenment. John left Puritania in search of a beautiful island he had seen in visions back home. All his life he has been taught about the Landlord (who represents God) by Stewards (who are essentially pastors and priests). Mr. Enlightenment soon strikes up a conversation with John.

“‘And where might you come from, my fine lad?’ said Mr. Enlightenment.

‘From Puritania, sir,’ said John.

‘A good place to leave, eh?’

‘I am so glad you think that,’ cried John. ‘I was afraid—‘

‘I hope I am a man of the world,’ said Mr. Enlightenment. ‘Any young fellow who is anxious to better himself may depend on finding sympathy and support in me. Puritania! Why, I suppose you have been brought up to be afraid of the Landlord.’

‘Well, I must admit I sometimes do feel rather nervous.’

‘You may make your mind easy, my boy. There is no such person.’

‘There is no Landlord?’

‘There is absolutely no such thing–I might even say no such entity–in existence. There never has been and never will be.’

‘And this is absolutely certain?’ cried John; for a great hope was rising in his heart.

‘Absolutely certain. Look at me, young man. I ask you–do I look as if I was easily taken in?’

‘Oh, no,’ said John hastily. ‘I was just wondering, though. I mean–how did they all come to think there was such a person?’

‘The Landlord is an invention of those Stewards. All made up to keep the rest of us under their thumb: and of course the Stewards are hand in glove with the police. They are a shrewd lot, those Stewards. They know which side their bread is buttered on, all right. Clever fellows. Damn me, I can’t help admiring them.’

‘But do you mean that the Stewards don’t believe it themselves?’

‘I dare say they do. It is just the sort of cock and bull story they would believe. They are simple old souls most of them–just like children. They have no knowledge of modern science and they would believe anything they were told.’

 John was silent for a few minutes. Then he began again:

‘But how do you know there is no Landlord?’

‘Christopher Columbus, Galileo, the earth is round, invention of printing, gunpowder!’ exclaimed Mr. Enlightenment in such a loud voice that the pony shied.

‘I beg your pardon,’ said John.

‘Eh?’ said Mr. Enlightenment.

‘I didn’t quite understand,’ said John.”

Mr. Enlightenment’s “answer” to John’s question was so ridiculous that I never imagined I’d find an actual human being making it. Rattling off a series of unrelated scientific achievements tells us nothing about the existence of God or the veracity of philosophical naturalism. Yet here it was, thrown at me in an actual discussion.

As I’ve said before, science and Christianity (theism in general, actually) get along perfectly well philosophically. I will never understand why science is used as an argument against it. It brings to my mind Mr. Enlightenment’s closing words to John on the subject:

“When you have had a scientific training you will find that you can be quite certain about all sorts of things which now seem to you only probable.”

Empirical and “Evidence”


Can you prove that I have a liver?

I mean yes, obviously, if we wanted to we could see whether I have a liver or not. You could cut me open and take a look (or, less barbarously, put me through an MRI). That would tell us pretty reliably whether I do indeed have a liver. But nobody has ever cut me open, and I’ve never had a full body MRI. Can you find evidence that I have a liver?

Well that depends on what you will accept as evidence.

Fide Dubitandum (the blog I highlighted on Monday) dealt with this issue a few days ago. That post, and the discussion that followed in the comments, got me thinking about evidence. What kind of evidence do we find acceptable when talking about God? For many the only kind of evidence they will accept is empirical evidence. Empirical means that something can be observed and tested. A fish is empirical because I can touch it, weigh it, see it, smell it, and experiment on it. If anyone asked me to prove that fish exist, I could show them a fish. They could touch it, weigh it, see it, etc., for themselves. It would be empirical evidence for the existence of fish (or at least for that fish, anyway).

For many people this is the kind of evidence they want when asking “Is there a God?” They want something they can see and smell and experiment on. When theists are unable to produce empirical evidence, such people proclaim that God must not exist. They often imply that if you still believe in God despite the lack of empirical evidence then you must be an anti-intellectual who merely takes it on faith that God exists. And it’s true, I do take it on faith that God exists. I don’t have empirical evidence for God. I also don’t have empirical evidence for the existence of my liver.

Nobody has ever seen, smelled, weighed, or experimented on my liver. It has never been directly observed by anyone. Yet I believe it exists all the same. I have faith that my liver exists. Why? Because every (healthy) dead person we have cut open has had a liver. Doctors have seen, smelled, touched, weighed, etc., livers inside of every normal person they’ve cut open. What’s more, everyone who has had their liver removed (or whose liver has ceased to function due to disease) soon dies. These two observations are empirical.

From these two observations I make a crude logical proof:

1. Every (healthy) human being who has been cut open has been found to have a liver.

2. Every human being found to have no functioning liver has fallen sick and died.

3. I am a healthy, living human being.

4. Therefore, I must have a liver.
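For readers who like their logic made fully explicit, the deduction above can be sketched in the proof language Lean (the predicate and theorem names here are my own illustration, not part of the original argument):

```lean
-- A sketch of the liver argument in deductive form.
axiom Person : Type
axiom me : Person
axiom HealthyHuman : Person → Prop
axiom HasLiver : Person → Prop

-- Premise, generalized from repeated observation:
-- every healthy human being has a liver.
axiom every_healthy_human_has_liver : ∀ p, HealthyHuman p → HasLiver p

-- Premise: I am a healthy, living human being.
axiom me_healthy : HealthyHuman me

-- Conclusion, by universal instantiation and modus ponens.
theorem me_has_liver : HasLiver me :=
  every_healthy_human_has_liver me me_healthy
```

The deductive step itself is airtight; what faith supplies is the first premise, the leap from “every liver-less human we have examined died” to “every healthy human has a liver.”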

For that reason I have faith that if you cut me open tomorrow you would find a liver inside of me. What is important to realize, however, is that I don’t know empirically that I have a liver. I have faith that I have a liver due to deductive reasoning. I have never seen my liver, but nobody would call me unreasonable for believing that it exists. Similarly, I have never seen God, but I have good reason to believe that he exists as well.

To use one example (out of many), here is one bit of deductive reasoning that leads me to believe in God. It is self-evident from our observations and experiences that some things are contingent in their existence on other things. “Contingent” in this context means that we can imagine such a thing not existing. The computer you are reading this blog on is contingent because it could conceivably have not existed. The computer has not always existed; once it was merely a collection of parts scattered around a factory, and before that it was raw elements taken from the Earth. The computer had to have been created by something. But that leads to a problem: what created the computer’s creator? And what created that creator? So on and so on, in an infinite regression. But an infinite line of creators is logically impossible. From this, we can make another (crude) proof:

1. Everything that comes into existence has a creator.

2. An infinite regress of creators is logically impossible.

3. Things exist.

4. Therefore, something must exist that never came into existence, something that has always existed.

Now this does not prove the existence of God. But it does show that somewhere there must be an eternal and uncreated Something on which everything else depends. For naturalists this Something is Nature. For theists this Something is God. Now I have other good reasons for believing that the Something is God and not Nature, and I’ve talked briefly about some of them in previous posts. But my overall point remains. Nobody has ever observed, weighed, measured, or tested something that by necessity has always existed. It would be impossible to observe that something has always existed unless the observer had also always existed. In this way there is no empirical evidence that such an entity exists. However, we can still reasonably believe in its existence despite the impossibility of ever finding empirical evidence for it. I have faith in God’s existence the same way I have faith in my liver’s existence: confidently and reasonably, without need of empirical evidence.


Blog Spotlight: Fide Dubitandum

I’d like to take today’s post and dedicate it to highlighting a fellow blogger. I may or may not do this in the future. Fide Dubitandum first caught my eye after its author commented on my first blog post on science and naturalism. I decided to check out his blog and I haven’t regretted it. I follow several WordPress blogs, but in all honesty his is the only one I actually read. When I get an email saying that he’s put up another post I click on the link with glee. He writes clearly, is very intelligent, and his comment threads are filled with civil and well-managed debate. Most of his posts are critiques of the so-called New Atheists’ philosophy. If you are a fan of reason then you should enjoy his ability to cut through the cobwebs of muddled thinking and self-contradictory assertions that make up much of popular atheism these days.

On top of all that he updates regularly, something I can’t even claim these days (to my constant shame). Check him out! You won’t regret it. Be warned though: better click on that link when you have plenty of free time. Once you start reading it can be hard to stop.

Does Naturalism Hurt Science? Another Voice Speaks Out

Recently a friend of mine shared an article with me from the Huffington Post. Now, I’m not a big fan of the Huffington Post. Philosophically, its writers and readership tend to be very anti-religion, anti-Bible, and committed to naturalistic humanism. But when I read the article I was pleasantly surprised at what I found. Here a scientist (and, as far as I know, a non-Christian) dares to point out the inconsistency between science as a learning tool and “science” as naturalism. Here’s a small sample:

Science has been successful because it has been open to new discoveries. By contrast, committed materialists have made science into a kind of religion. They believe that there is no reality but material or physical reality. Consciousness is a by-product of the physical activity of the brain. Matter is unconscious. Nature is mechanical. Evolution is purposeless. God exists only as an idea in human minds, and hence in human heads.
These materialist beliefs are often taken for granted by scientists, not because they have thought about them critically, but because they haven’t. To deviate from them is heresy, and heresy harms careers.

You can find the rest of the article here; it’s well worth a read. I truly hope that someday we will be able to separate science as a tool from naturalism as a philosophy. Both science and philosophy will be better for it.