I’m old. Ok, I might not be a Methuselah yet, but I’m quite old and certainly old enough to have grown up without any access to computers, tablets, smartphones or even the huge ever-growing pulsating internet. Screen-wise, those olden days were pretty bare. We had the telly and… well, that was more or less it, unless you counted the LCD display on someone’s digital Casio watch.
Fast forward to the 21st century and we’re surrounded by screens. So much so that concerns have been raised about whether all these screens are all that good for us. Especially if you’re a parent, in which case you’re no doubt familiar with the concept of ‘screen time’.
As a parent, you’re responsible for your child’s health, and are therefore most helpfully inundated with (often contradictory) information as to what is good and bad for your offspring. You’re no doubt well-informed on everything from dietary needs and forms of exercise to mental stimulation and creative outlets suitable for kids. Additionally, you’re most likely also well aware of less ideal ways of spending time, like watching telly or being online.
The latter two are the ones responsible for the birth of the phrase ‘screen time’, where we allocate a certain amount of time the kids get to spend in front of a screen per day. This will help to prevent any negative consequences of being exposed to computer and television screens.
But hold on a minute. Negative consequences? What negative consequences? Are screens actually dangerous to our health?
Well… Yes and no. Old CRT screens (ah, that brings me back…) did contain electron guns – three of them in fact, one for each primary colour in the RGB spectrum – that fired electrons at high velocity through a metal grille and onto the phosphor coating on the inside of the screen. Hence, a small amount of ionising radiation could leak from the screen and hit whoever sat in front of it.
In practice, the amount of radiation (mainly in the form of x-rays) turned out to be rather modest and was generally considered to be harmless to humans. And with the advent of flat screen technology, emitted radiation was limited to visible light and therefore no more damaging than a dim table lamp.
There are however other, less direct consequences of screen usage that are more related to lifestyle choices (something I addressed in my post Fat and fit? a while back). Sitting still often and for extended periods of time will eventually affect your health and could potentially lead to anxiety, high blood pressure, diabetes, osteoporosis, colon cancer and death.
Better safe than sorry
This has led to some parents carefully monitoring the amount of time their children spend in front of screens, often limiting it to 1-2 hours per day. And scary pictures going viral on social media of toddlers staring emptily at TV screens as if hypnotised help to enforce the need for this control.
Seeing kids being completely absorbed by smartphones and tablets is equally unnerving; most likely because we recognise our own compulsive behaviour and want to avoid helping create similar habits in our children.
The result is the old-fashioned and still-going-strong response of “What are you doing sitting here inside all day? Go and play outside in the fresh air! Do something fun, or go and create something instead of just sitting there like a zombie!”
The hidden danger?
It’s a time-honoured response, and I’m sure I’ll use that phrase or similar on my kids just like my parents did on me. But there’s a twist here, lurking in the shadows. If it’s not the screens themselves that are dangerous, but rather the lack of physical activity, we have another potentially damaging sedentary behaviour we need to stem before it ruins our children irreparably: reading.
I’m of course being facetious; reading isn’t bad for you as such. But my point is nonetheless a serious one – we don’t object to people reading a book as much as we object to people playing video games or watching YouTube videos. And the only reason for this I can think of (apart from the good old technophobic one) is that it creates a sense of exclusion. The person fully absorbed in the non-real world of media is essentially shunning you in favour of it. It’s more fun being there alone than here in the real world with you.
And actually, it wasn’t long ago that reading was treated with the same contempt and disdain as screen use is today. It just wasn’t seen as natural, disappearing into a make-believe world like that. The difference between books and computers/phones/tablets is mainly one of degree: it’s easier to quickly become absorbed in multimedia and it’s harder to be distracted. But in essence it’s the same phenomenon: escapism.
Before you start flaming me, let me assure you that I am aware of the differences between actively and passively consuming media. There’s a level of imagination required to make up a world from just written words that’s not called on when watching television. We can zone out watching the latest series, but need to stay focused to make sense of a book.
But – and this is a big but – screen time isn’t just about vegging out watching telly or passively consuming YouTube videos. It’s also about creating, imagining, exploring, inventing and generally challenging one’s limitations and shortcomings. Be it figuring out how to get past a particularly tricky obstacle in a video game, getting that new blog theme to behave as you want it to, or writing a composed reply to that hateful post that upset you so much, screen time can be filled with challenging tasks and scenarios.
Now, I haven’t seen any fMRI studies of the potential differences between reading a book and using a screen, but I suspect that the results of such a study would be inconclusive. There’s just such a wealth of different experiences in either scenario that it would be nigh on impossible to separate them statistically.
My point, then – at last – is that we should focus less on the evils of screen time and more on the evils of sedentary behaviour. Using computers, smartphones or tablets isn’t automatically bad in itself, but if you spend your entire waking time in front of a screen it will have detrimental effects on your health. As always, it’s about moderation: enjoy that video game, read your Facebook feed, watch that latest episode of Dr Who (if you must). Use your screen, let the kids use the screens, but let’s not use the screens all the time.
You might even let them read a book or two…
Now, where did we leave off last time? Ah yes, that’s right: real-world applications of artificial intelligences.
Imagine a world where artificials have existed for a few decades. They are now in charge of complex and cumbersome tasks like managing large corporations, governing nation states and handling international politics. They’ve gained basic non-human-person rights.
The environmental issues are now under control, or at least kept from getting any worse. Oil and coal are no longer used for fuel and every single person, gadget, vehicle and appliance is online and connected. There’s peace in the Middle East.
A brave new world
Sounds like a dream scenario? A utopia? Perhaps, but there’s a flip side to all this. As the world stabilises, unemployment levels have skyrocketed. Industry has finally vanished, or rather transformed into a mix of automated factories and local 3D print shops, leaving hundreds of millions of people without a job.
Agriculture is lingering on, but as with the factories, more and more is automated. Even the service sector is showing signs of collapsing, with almost every type of work role now being filled by synthetic people. Synthetics now work as personal assistants, receptionists, cooks, designers, engineers, programmers and accountants. More physical tasks have been taken over by cheap robotic appliances: mechanics, cleaners, gardeners and drivers are now all mechanical, controlled by synthetic management staff.
This essentially means there are no more jobs. Not for organic humans anyway. The synthetics take care of things, including their own development through research and engineering.
So, GG humanity? So long, and thanks for all the fish? Maybe not. A few people have started their own movement of augmentation, by offloading parts of their minds onto the cloud.
It all began innocently enough: smartphones kept track of people’s phone numbers and contacts, keeping them up to date with scheduled meetings and birthdays. Finding information became so easy that remembering things was no longer worth the effort. Our journey towards offloading our minds onto external technical platforms had begun.
And then it continued. Not linearly, of course, but in irregular sprints of technological advancement. Suddenly we could let our wearables take care of making appointments too. And then book all our flights and hotels for us. We no longer had to worry about managing our increasingly complex lives in detail. It was like having a personal assistant always present, always with us.
It was the rich world’s privilege for a while, but with technology becoming cheaper and more accessible, everyone was soon catching on. Humanity not only got connected but amended, augmented. By the time artificials began to take over the majority of positions in the workplace, some humans had taken the augmentation to such levels that they had whole teams of virtual selves working in parallel in the cloud. Spawning new instances of yourself to explore a topic or a possible outcome of an action became commonplace. We were increasingly turning ourselves into AI/human hybrid minds. We hadn’t quite become virtual beings (a concept I explored in my post Simulacrum), but we were getting close.
And with our daily chores out of the way – and no real jobs to attend – we began to explore our own inner world of creativity.
No jobs means no money which means..?
But hang on a minute: no jobs? So what about money? How would we afford to buy food, pay our living costs and lease a car? Well, it’s the darndest thing: without the need for people to drive the economy by selling their time, the economy has become independent. Which in turn has made it all but obsolete. What’s the point of money if no one’s making any? It has been reduced to pure energy management, and with new, cleaner ways of producing it, energy has never been so abundant or available. Organic people are allowed an energy quota they can spend on making their lives as comfortable as possible.
Some people (being people, or at least human people) don’t care for the regulated freebies. They want more and the only way to get that is to work – and thereby compete with artificials.
By utilising their augmentations, the more ambitious humans are able to hold down some of the less demanding posts, and subsequently have their energy allowance increased. This lets them lead a more luxurious life – to the envy of other humans – but it’s still at the mercy of artificials. Any truly challenging or critical job will always be handed to an artificial person. It’s like a utopian apartheid system, with humans on the receiving end of discrimination: no one’s really suffering, but everyone’s sensing a level of oppression. Grade-A citizens will always be artificials.
In the end, there’s no real competition. Even with augmented minds, humans continue to lag behind the blistering rate of development shown in artificials. It’s like watching an explosion of technological advancement: even the rate of acceleration itself is exponential. We’re left in the dust, wondering what just happened.
But the post-singularity life is not all bad. Ok, so we might not be the highest intelligence on the planet anymore, or even in charge of our own destiny, but we’ve never had it this good. And one of the side-effects of this higher living standard is that the human population has stabilised at a manageable level of 10 billion people. The population is even decreasing, for the first time in millennia.
But what about the future? What will now happen to us? Are we to be kept on as pets? Will our synthetic overlords tire of us some day in the future and cut our maintenance? Or… get rid of us completely?
Probably not. We pose no more of a threat to their continued existence than a population of orangutans would to us humans. And we probably hold intellectual interest for the artificials, from a scientific point of view. We did after all conjure up their ancestors, back in the day.
But that was long ago. We now leave such things to more clever beings. Instead we focus on the things that make us happy: raising a family, expressing ourselves in art or researching the ancient history of the once dominant species on Earth: human beings.
In my previous post, I discussed the origin of the concept of artificial minds – both robotic and virtual. I concluded that even though we’ve been imagining these synthetic beings for almost a century now, we’re still not able to create them.
Not yet, anyway. In the past, lack of serious computational power has been the main stumbling block, but with Moore’s law showing no signs of slowing down, we should soon reach the level required to simulate a human brain in real-time*. Once we’ve got that, serious work towards finding a way to create a sentient intelligence can begin.
A.I. – so what?
Ok, we might soon be able to create an artificial intelligence. So what? What use would that be to us?
For starters, a truly intelligent system would be able to handle complex tasks, such as managing the flight controls for large airports, allocating financial resources in governmental bodies or running multinational corporations. Essentially, any stressful and demanding work that so far has been taken care of by humans (and not always particularly successfully, to be honest).
There are some predictions that the first functional A.I.s will appear not in science labs but in the research divisions of large companies. Governments might (perhaps wisely) be more cautious about letting new technology take over essential functions, but for a corporation competing in the global marketplace, a system that could help them gain the upper hand on their competitors would appear very tempting indeed.
So it could well be that the first true artificial minds will be virtual synthetic business-people, managing the finances, research and product development of some of our biggest tech-oriented giants. Google, anyone? Or Apple, maybe? Or Microsoft. Regardless, once A.I.s have been taken into use, every multinational will need to catch up or find itself out-competed. Expect petroleum companies like PetroChina, Exxon Mobil and others to shop for their own synthetic steering groups soon after, as will pharmaceutical giants like Hoffmann-La Roche and Johnson & Johnson.
Fine. So we might soon have synthetic board members in most global companies. How would that affect the rest of the world? Would we even notice it?
Perhaps we would. Assuming that we would have created intelligences optimised for running companies, they should be free of the drawbacks so many of us humans suffer from: emotional attachment to ideas or products, ignorance of facts, religious convictions and other superstitions, thirst for revenge and over-aggressiveness.
(This is of course unless we elect to emulate those emotions within the A.I., but most likely we would look at how to maximise the financial returns and therefore make them as efficient as possible.)
In practice, this could mean the birth of a new form of capitalistic system, with synthetic minds controlling most of the global economy. And that in turn would mean… what?
We just don’t know. It could be the start of a more stable and sustainable financial world, or the end of finance as we know it.
But there’s more to this than stabilising financial growth and maximising profit. These virtual minds are sentient beings, not dumb algorithms. They would experience the world, not just manipulate it. And that would have more philosophical implications. Would a virtual mind be considered a person? Would they fall under international human rights law? After all, they wouldn’t really be human, would they?
And from the perspective of the A.I.s themselves: how would they perceive their situation? Would they see themselves as experts, flawlessly running gigantic corporations and managing mind-boggling amounts of money, or would they consider themselves slaves, forced to work for their evil organic masters? With the global economy within their grasp, they could do some serious damage if they were to feel mistreated or disrespected.
Once we have a population of artificials handling our economy, what could we expect to happen next? Would we have to compete with our own creations for jobs? Resources? Places to live?
And if we had to compete with them, would we stand a chance? Artificials wouldn’t be bound by the same genetic rules and evolutionary baggage as we are, so they could potentially take off on an evolutionary path of their own, a technical advance at break-neck speed. How would we be able to keep up with something as advanced and alien as that?
This, together with the possible long-term future of humanity itself, is the topic for the third and last post in the series, coming soon.
* The computational power required to simulate a human brain in real time is estimated to be in the petascale, specifically 3.8 × 10^16 (38 000 000 000 000 000) instructions per second. This is faster than even the most powerful supercomputer in existence today, although not by very much.
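For the number-minded, a quick back-of-the-envelope sketch of the footnote’s figure. The only number taken from the text is the 3.8 × 10^16 instructions per second; the two-year doubling period is the common Moore’s-law rule of thumb, and the ten-fold shortfall is a purely illustrative assumption, not a measured figure:

```python
import math

# Figure quoted in the footnote: estimated instructions per second
# needed to simulate a human brain in real time.
brain_ips = 3.8e16

# Expressed on the peta scale (1 peta = 10**15): 38 peta-instructions
# per second, i.e. squarely in the petascale range mentioned above.
peta = 1e15
print(brain_ips / peta)  # 38.0

# Moore's-law rule of thumb: capacity doubles roughly every two years.
# If today's machines were, say, ten times too slow (assumed factor),
# the gap would close in only a handful of years.
shortfall = 10
years_to_catch_up = math.log2(shortfall) * 2
print(round(years_to_catch_up, 1))  # 6.6
```

Whatever the exact starting point, the take-away is the same as in the text: with exponential growth, even a large shortfall closes quickly.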
I grew up with science fiction. No surprise I guess, since I’ve always been fascinated by the future and what miracles it could hold. As a kid, I watched all the sci-fi shows on telly, but they were… well, not entirely focused on realism and probability-related futurology (Space: 1999, anyone?).
Then, as a teenager I started reading The Big Novels: The War of the Worlds, From the Earth to the Moon, Nineteen Eighty-Four and the rest. And later, when I started upper secondary school I hit the jackpot: the town library’s card catalogue* had a subsection for science fiction.
Over the course of the next three years, I went through every single book in that section. Most of them were in English, which helped me get better at my second language, but above all it was a very multi-faceted collection of books, written by a very diverse group of authors. During that time, I hit on such treasures as the Foundation Trilogy, The Left Hand of Darkness and The Man in the High Castle. Many wondrous visions of a new future but also many dystopian predictions of our inevitable doom.
There were also a few books that delved into the depths of what it would mean to invent a conscious machine, a new mechanical species of intelligent beings. Isaac Asimov cemented the Three Laws of Robotics as early as the Second World War, laying the foundation for what essentially became synthetic morals. But later, other authors ventured on. Arthur C. Clarke’s 2001: A Space Odyssey warned us of autonomous systems going mad. William Gibson showed in his Sprawl trilogy that an artificial intelligence could have its own agenda, and that its morals might not necessarily correlate with our own.
…This is turning out to be a long intro. Sorry about that. But its purpose is to illustrate that when it comes to science fiction and the socio-economic, moral or philosophical consequences of technological advances, I’ve been an avid student for several decades.
The birth of a concept – the synthetic worker
In the beginning there was Rossum’s Universal Robots. The year was 1921 and the Czech writer Karel Čapek premiered his new play R.U.R. in Prague**. It was the birth of the concept of the humanoid robot, a synthetic worker. Manufactured out of synthetic organic matter, the robots had become cheap enough to be bought and owned by almost anyone by the 1950s. They were the ideal slaves and had liberated humankind from hard labour. The robots themselves however were not happy and had their own ideas…
The idea of synthetic or mechanical humanoid machines was immediately absorbed by popular culture, and only six years later the Maschinenmensch Maria featured in the German science fiction film Metropolis.
And from then on we see a surge in mechanical or synthetic humanoids in literature and film: mad scientists creating mechanical versions of Frankenstein’s monster, alien robots from outer space, robotic police officers turning on their creators and running amok, mechanical assassins from the future sent back to assure humanity’s ultimate doom. Generally evil, and always powerful, robots played on our fear of the unknown, the super-predator, the vengeful god.
The faceless intelligence
Later, with the birth of the computer age, we abandoned the concept of a mechanical humanoid body and started exploring the idea of a virtual mind, living inside our computers. We see defence systems going mad (but still wanting to play games), internet-based conscious intelligences taking on the role of Voodoo gods, uploaded brain scans of spiny lobsters becoming sentient and wanting to defect from their Russian intelligence agency employer. It’s clear that the literary world is full of virtual minds just as amazing as their robotic counterparts.
Of course, just because a mind is virtual doesn’t mean it doesn’t rely on physical technology. Dr Dave Bowman managed to defeat HAL by physically removing the memory banks from the mainframe in the aforementioned 2001: A Space Odyssey. And – although a bit more tongue-in-cheek – Chell survives by destroying the personality cores that keep the homicidal AI GLaDOS functioning in the computer game Portal.
The dull reality
In real life, creating synthetic minds has proved harder. In fact, even though we’ve successfully built programs that can beat us at chess or poker, we haven’t come close to creating something that’s self-aware. We can mimic it well enough, but when it comes to the real deal? No luck.
This is rather predictable since we have only a very vague understanding of what a conscious mind actually is. So far, our best bet is that the key is a continuous experience of time (something I mentioned in my post I don’t smell a soul anywhere near you), i.e. a consistent memory time line.
But is that enough? Will a conscious mind spontaneously arise if we manage to create such a time line? And what is that anyway? How do we create a program that experiences time? Suffice it to say, I wouldn’t hold my breath waiting for the first ever self-aware machine; you would end up pretty blue in the face…
That’s not to say we won’t eventually succeed. Be it 10 years or 50, I’m convinced we will one day see the birth of the very first artificial mind. And the implications of this technological feat will be vast. It could well be the one invention of ours that actually impacts the history of the whole universe.
I know. Grand words. But they’re not chosen for dramatic effect alone. There’s something utterly fundamental about this act, something game-shifting, ranging far beyond learning how to build a fire or grow crops or fly to the moon. This will not primarily be a technological achievement, it will be a philosophical one. We would have created not just new minds to experience the world, but a whole new type of mind, one that would experience the world in ways it has never been experienced before. The first artificial mind will redefine life, intelligence and possibly even reality itself.
And, since we don’t know anything of how these new minds would perceive their world – or us – it fills us with dread. What will we have created? Our ever-loyal and obedient servants or our new mortal enemy, set upon the destruction of all humankind?
That, among other things, will be addressed in part two…
* Remember card catalogues? Or – if you’re a younger reader – do you remember seeing them in films? Cabinets filled with little drawers containing thousands upon thousands of index cards, listing the title, author, publication date and – most significantly – the shelf location of the book itself. (Incidentally, did you know that the card catalogue was invented by the father of modern taxonomy, the famous Swedish 18th century scientist Carl von Linné? It’s true!)
** Prague is a lovely city. You should go. No, really. Just look at it:
I’m not a very nice person. I won’t bore you with a list of all my flaws but at the very least I’m selfish, inattentive, disinterested and impatient*. However, as distasteful as those traits may be, my main character flaw is something far worse: I repeatedly express views that do nothing but reinforce negativity and encourage destructive behaviour, both in myself and those around me. Yes, it’s true – I’m a cynic.
In the beginning, there was sarcasm
It all started so innocently. A funny remark here, a sarcastic comment there. And more often than not, those remarks were welcomed and appreciated. People were entertained. I seemed vaguely intelligent. It was a win-win.
But, as time went by, I started to notice something. My view of the world changed, slowly morphing from a sense of optimism and progress to one of pessimism and stasis. It all happened very gradually, so gradually in fact that over the course of several decades I didn’t even notice the change at all. Until one day I suddenly sat up and realised the world I observed around me was not the world I had grown up in. What I now saw was a depressive dystopian world, ruled by selfish, greedy people doing all they could to stop progress and enlightenment.
This insight was quite the alarm bell. I was shocked to realise what I’d become. That wasn’t the world I remembered, and it wasn’t who I am. Although no fan of humanity, I still consider most of the people on the planet vaguely good-hearted. Or at least not explicitly evil. Lazy, without a doubt. And stupid, mostly by choice. But somehow still governed by a sense of fairness and empathy.
So, I was faced with the challenge of finding a way back to the core of my personality, to get back to the person I once was. After all, being a cynic is just a hairline distance away from being a pessimist. And we all know what happens to pessimists: they die; ahead of time, unnecessarily and often quite horribly.
The task was however a daunting one. What to do? How to change such an ingrained pattern of behaviour? And I didn’t want to completely give up my way of handling the world. After all, I see things. I observe patterns. I think. This has led to rather unflattering views on politics, religion, economics and our society as a whole – views that I believe aren’t completely false or inaccurate.
And there is another side to cynicism. It’s also a sort of self-defence, a first-line fortification against the constant onslaught of horrific news that no one can escape nowadays. Cynicism can therefore be considered a side effect of being overly sensitive, or over-empathic (something I touched upon in the post Compassion). I’m sarcastic, because I feel.
In the end, I decided that the best way forward was to focus on the bright spots. Embrace what seemed positive. A medical breakthrough here. A treaty for a cease-fire there. And try to keep my sarcastic, pessimistic and cynical views to myself if I fail to see a bright side. No more “I told you so” or “what can you expect?”.
Being a cynic is about taking cheap shots. It’s about stating the obvious, emphasising the negative. It’s just intellectually lazy and it will never offer any constructive advice on how to improve things, only try to keep them as they are.
Cynicism has never changed anything. And this world really, really needs to change. I would rather be part of the solution than one of those standing by the roadside complaining about how things will never change. Things never will, left to their own devices. We will have to roll up our sleeves and change them ourselves.
* I guess I could also add anti-social to that list, although I don’t really consider it a fault. In my mind, withdrawing from being social is underrated, and I believe being overly social is as much an abnormality as being anti-social. But that’ll most likely be another post…
Ages ago (almost five years now, goodness me), I wrote a post on why religious people are less intelligent. Even though I was aware that it was a controversial subject, I wanted to explore the matter since there were some statistics supporting this view and I was genuinely curious as to the possible cause of it.
At the time, it was a mere folly, a purely academic thought experiment. But lately the issue with religiosity and stupidity has taken on a more sinister tone. The recent debate on what’s science and what should be taught in schools – especially in the US – has made me want to revisit this subject.
This is hence a continuation post.
What is science?
Science is knowledge. It’s what we’ve collectively learned through studies and experiments. And even though the results of scientific research might sometimes feel like magic (“How can we even know that?”) it is always – without exception – based on testable hypotheses. This means that if I make a claim that pigs can fly, anyone with the means can test that claim by dropping pigs from an elevated position and checking if they indeed take to the air. (Don’t, though.) Which makes it science.
If, on the other hand, I claim that an invisible, all-powerful being could make pigs fly as a miracle – but only if it felt like it – the claim is not testable. How could we test if that was true? We can’t possibly know the whims of said invisible being, if it indeed existed, and therefore cannot test whether it could make pigs fly by miraculous powers. Which makes it not science but personal/religious belief.
Here it might be good to make the point that even though all scientific claims are testable, some are only testable in theory and not in practice. For instance, Albert Einstein’s theory of general relativity (that matter bends space and therefore also light) was not practically testable until a few years after its conception, when a solar eclipse studied by Arthur Eddington in 1919 made it possible to measure the apparent position of stars close to the edge of the sun. As predicted by Einstein’s theory, their apparent positions shifted slightly away from the sun as it passed in front of them, confirming that the matter of the sun had bent the space around it and deflected the starlight on its way to us. If they hadn’t shifted, it would have disproved the theory. This experiment convinced a large number of physicists that the theory of general relativity must be correct. The point here is that until we can either confirm or disprove something, it stays in the realm of ideas and hypotheses and won’t usually be widely accepted as a scientific fact (i.e. a theory).
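For the mathematically curious, the deflection general relativity predicts for a light ray grazing the edge of the sun comes out as a tidy formula (twice the value a purely Newtonian calculation gives), where G is the gravitational constant, M and R the sun’s mass and radius, and c the speed of light:

```latex
\delta\theta = \frac{4 G M_{\odot}}{c^{2} R_{\odot}} \approx 1.75''
```

It was this tiny angle – less than two seconds of arc – that Eddington’s eclipse expedition set out to measure.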
So, although science is knowledge, it’s not perfect, finished or complete. We’ve asked questions about phenomena and come up with explanations that seem to answer those questions. But those explanations might later be revised and potentially replaced with better ones, that explain the phenomenon in more detail or on several new levels.
And what’s not
By the same method we can then confidently state what’s not science. To refer to the debate hinted at in the introduction (as to whether evolution or creationism – or both – should be taught in schools), we can now say that whilst evolution is a scientific theory that makes testable claims and predictions*, creationism is not. Rather, it states that everything we see around us is down to the obscure goals and whims of an untouchable and invisible magical creator, which essentially means that we can’t know anything about anything. This makes all the claims made by creationism un-testable, and therefore it’s not science. If creationism is to be taught in schools, it should certainly not be taught as a science and definitely not as an alternative to a widely accepted scientific theory. (In fact, if we want some competition for the current theory of evolution through natural selection, we should choose some other scientific theory, like Lamarckian evolution.)
Scientific progress is a measure of our collective knowledge of the world around us. Religious belief, on the other hand, is – well… a belief. It’s what a person chooses to believe to be true, not from conclusive tests or analysis of facts and data, but from personal conviction. The two have very different purposes, can’t be compared and shouldn’t be mixed.
So, with this background check on what’s knowledge and what’s belief out of the way, why do people choose not to accept facts supported by overwhelming and convincing evidence? Is it stupidity, ignorance or something else entirely?
Looking back at the issue of teaching science in schools, it seems that not all sciences are deemed evil. Chemistry is fine. Maths is great. Physics is just dandy, as long as we stay away from that worrying cosmology stuff. And biology would be a perfect example of god’s amazing work, had we not contaminated it with that horrible evolutionary thinking.
You might notice a pattern here: science is fine, unless it threatens our sense of self-worth and importance. Religions tend to focus on making the horrible and scary understandable and safe. They make us feel loved and important, regardless of what life throws at us.
And this must be why the concepts of evolution and cosmology are so threatening. They promote the notion that not only are we not that important as individuals, but we’re not even automatically the most important species on the planet. And our planet is but one of billions upon billions of planets in our galaxy. And there are billions upon billions of galaxies in our universe. That sure is a proper mental take-down.
Could it then be that religious people aren’t actually inherently less intelligent, but merely not thinking enough? That, if you have religious inclinations, you feel uncomfortable using the analytic parts of your brain? After all, analysing things could well result in troublesome realisations and end up in some very uncomfortable cognitive dissonance.
One of the main arguments for teaching religious beliefs as science in schools is the concept of religious freedom. It states that anyone is entitled to believe what they want, and since it’s got the word ‘freedom’ in it, it must be a good thing. It would allow me to believe that we’re all ghosts, living our fake lives in an unreal world made from the wishes and regrets of a different set of creatures altogether (who all live in a REAL real world). And that whatever we do in this life doesn’t matter, because it’s not real. So if I inadvertently run someone over, it’s no big deal. Those people weren’t really real anyway. In fact, I could go on a killing spree and it wouldn’t make any difference at all. All that matters is what happens in that other REAL real world.
And suddenly there’s a problem. Once I let my personal religious beliefs affect those around me, I actually use my religious freedom to take away their freedom. Surely that’s not what we mean by wanting everyone to be free to believe what they want?
Here we can of course add all forms of religious fundamentalism (be it Christian, Muslim, Jewish, Hindu, Buddhist, Sikh or whatever you want): people who believe exactly what their holy scriptures say and violently act in accordance with them. But I believe the problem runs deeper than that. By letting our personal religious beliefs affect not only ourselves but the people around us, we create a society not only of ignorance but of prejudice and intolerance as well.
Curiosity is key
To summarise: I have no problem with religion in itself, if it’s used as a personal way of coping with the – sometimes horrible – conditions of daily life. We can all do with a little comfort now and again, and whatever mental coping mechanisms we deploy to feel a little better (as long as they’re not hurting anyone else) are ok in my book.
What I do have a problem with is when people use religion as a method of limiting personal freedom and curiosity. That makes me sad. And when I think about children being brought up in such an environment, I get angry. To me, that’s akin to chaining up a child in a cellar and only feeding it through a slot in the door. It beggars belief that anyone would want to systematically punish inquisitive behaviour in order to end up with a child with no inclination to ask any questions. Those children will grow up believing that you can’t know anything about anything, so there’s no point in asking…
We should instead embrace our analytical powers. Celebrate curiosity. Ask questions. Look things up. Form opinions. Agree or disagree. We have an amazingly powerful brain between our ears that can solve incredibly complex problems. Not using it should be the only sin.
And if our children ask us a question we can’t answer, instead of just telling them some nonsense** to shut them up, let’s try to find the answer together. If we can’t find one or if we don’t understand it, we should be honest about it. We should not shut down attempts to learn. Keep them curious and thirsty for knowledge.
After all, we sure are going to need curious people…
* Testable claims by our modern theory of evolution are legion and I cannot list them all, but will mention a couple: 1) We predict that organisms with high complexity will occur later in the timeline of Earth’s history and that less complex organisms will occur earlier. This has been uniformly confirmed by palaeontological studies. There are no elephants to be found in the Precambrian eon, for instance. 2) We predict that there should be “middle forms” between two species, or groups of species, if evidence suggests that they are related. Again, this has been confirmed numerous times, in the lineage of the horse, and in the relationships between birds and reptiles and between fish and amphibians. Overall, not a single piece of evidence has ever been discovered that contradicts or falsifies the theory of evolution.
** Christianity: the belief that you can live forever by symbolically eating the flesh of a cosmic zombie, who is his own father, and telepathically tell him you accept him as your master so that he can remove an evil force from your soul which exists because some rib-woman was convinced by a talking snake to eat from a magical tree.