Ages ago (almost five years now, goodness me), I wrote a post on why religious people are less intelligent. Even though aware that it was a controversial subject, I wanted to explore the matter since there were some statistics supporting this view and I was genuinely curious as to the possible cause of it.
At the time, it was a mere folly, a purely academic thought experiment. But lately the issue with religiosity and stupidity has taken on a more sinister tone. The recent debate on what’s science and what should be taught in schools – especially in the US – has made me want to revisit this subject.
This is hence a continuation post.
What is science?
Science is knowledge. It’s what we’ve collectively learned through studies and experiments. And even though the results of scientific research might sometimes feel like magic (“How can we even know that?”), it is always – without exception – based on testable hypotheses. This means that if I claim that pigs can fly, anyone with the means can test that claim by dropping pigs from an elevated position and checking whether they indeed take to the air. (Don’t, though.) Which makes it science.
If, on the other hand, I claim that an invisible, all-powerful being could make pigs fly as a miracle – but only if it felt like it – the claim is not testable. How could we test whether it was true? We can’t possibly know the whims of said invisible being, if it indeed existed, and therefore cannot test whether it could make pigs fly by miraculous powers. Which makes it not science but personal/religious belief.
Here it might be good to make the point that even though all scientific claims are testable, some are only testable in theory and not in practice. For instance, Albert Einstein’s theory of general relativity (that matter bends space and therefore also light) was not practically testable until a few years after its conception, when a solar eclipse studied by Arthur Eddington in 1919 made it possible to measure the apparent positions of stars close to the edge of the sun. As predicted by Einstein’s theory, their apparent positions shifted slightly as the sun passed in front of them, confirming that the sun’s mass had bent the space around it and distorted the starlight on its way to us. If they hadn’t moved, the theory would have been disproved. This experiment convinced a large number of physicists that the theory of general relativity must be correct. The point here is that until we can either confirm or disprove something, it stays in the realm of ideas and hypotheses and won’t usually be widely accepted as a scientific fact (i.e. a theory).
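Einstein’s prediction can even be put in numbers. General relativity gives the deflection of light grazing the sun’s edge as 4GM/(c²R) – roughly 1.75 arcseconds, twice the Newtonian value, which is what made the 1919 measurement decisive. A quick back-of-the-envelope check (the constants are the standard solar values, rounded):

```python
import math

# Deflection of light grazing the sun's limb, per general relativity:
# delta = 4 G M / (c^2 R)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.957e8    # solar radius, m

deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600  # radians -> arcseconds

print(f"{deflection_arcsec:.2f} arcseconds")  # roughly 1.75
```

An arcsecond is 1/3600 of a degree, so this is a tiny shift – which is why it took a total eclipse and careful photographic plates to measure it.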
So, although science is knowledge, it’s not perfect, finished or complete. We’ve asked questions about phenomena and come up with explanations that seem to answer those questions. But those explanations might later be revised and potentially replaced with better ones that explain the phenomenon in more detail or on several new levels.
And what’s not
By the same method we can then confidently state what’s not science. To return to the debate hinted at in the introduction (as to whether evolution or creationism – or both – should be taught in schools), we can now say that whilst evolution is a scientific theory that makes testable claims and predictions*, creationism is not. Rather, it states that everything we see around us is down to the obscure goals and whims of an untouchable and invisible magical creator, which essentially means that we can’t know anything about anything. This makes all the claims made by creationism untestable, and therefore it’s not science. If creationism is to be taught in schools, it should certainly not be taught as science, and definitely not as an alternative to a widely accepted scientific theory. (In fact, if we want some competition for the current theory of evolution through natural selection, we should choose some other scientific theory, like Lamarckian evolution.)
Scientific progress is a measure of our collective level of knowledge of the world around us. Religious belief, on the other hand, is – well… a belief. It’s what a person chooses to believe to be true, not from any conclusive tests or analysis of facts and data, but from personal conviction. The two have very different purposes, can’t be compared and shouldn’t be mixed.
So, with this background check on what’s knowledge and what’s belief out of the way, why do people choose not to accept facts supported by overwhelming and convincing evidence? Is it stupidity, ignorance or something else entirely?
Looking back at the issue of teaching science in schools, it seems that not all sciences are deemed evil. Chemistry is fine. Maths is great. Physics is just dandy, as long as we stay away from that worrying cosmology stuff. And biology would be a perfect example of god’s amazing work, had we not contaminated it with that horrible evolutionary thinking.
You might notice a pattern here: science is fine, unless it threatens our sense of self-worth and importance. Religions tend to focus on making the horrible and scary understandable and safe. It makes us feel loved and important, regardless of what life throws at us.
And this must be why the concepts of evolution and cosmology are so threatening. They promote the notion that not only are we not that important as individuals, we’re not even automatically the most important species on the planet. And our planet is but one of billions upon billions of planets in our galaxy. And there are billions upon billions of galaxies in our universe. That sure is a proper mental take-down.
Could it then be that religious people aren’t actually inherently less intelligent, but merely not thinking enough? That, if you have religious inclinations, you feel uncomfortable using the analytic parts of your brain? After all, analysing things could well result in troublesome realisations and end in some very uncomfortable cognitive dissonance.
One of the main arguments for teaching religious beliefs as science in schools is the concept of religious freedom. It states that anyone is entitled to believe what they want, and since it’s got the word ‘freedom’ in it, it must be a good thing. It would allow me to believe that we’re all ghosts, living our fake lives in an unreal world made from the wishes and regrets of a different set of creatures altogether (who all live in a REAL real world). And that whatever we do in this life doesn’t matter, because it’s not real. So if I inadvertently run someone over, it’s no big deal. Those people weren’t really real anyway. In fact, I could go on a killing spree and it wouldn’t make any difference at all. All that matters is what happens in that other REAL real world.
And suddenly there’s a problem. Once I let my personal religious beliefs affect those around me, I actually use my religious freedom to take away their freedom. Surely that’s not what we mean by wanting everyone to be free to believe what they want?
Here we can of course add all forms of religious fundamentalism (be it Christian, Muslim, Jewish, Hindu, Buddhist, Sikh or whatever you want): people who believe exactly what their holy scriptures say and act violently in accordance with them. But I believe the problem is deeper than that. By letting our personal religious beliefs affect not only ourselves but the people around us, we create a society not only of ignorance but of prejudice and non-acceptance as well.
Curiosity is key
To summarise: I have no problem with religion in itself, if it’s used as a personal way of coping with the – sometimes horrible – conditions of daily life. We can all do with a little comfort now and again, and whatever mental coping mechanisms we deploy to feel a little better (as long as they’re not hurting anyone else) are ok in my book.
What I do have a problem with is when people use religion as a method of limiting personal freedom and curiosity. That makes me sad. And when I think about children being brought up in such an environment, I get angry. To me, that’s the equivalent of chaining up a child in a cellar and only feeding it through a slot in the door. It beggars belief that anyone would want to systematically punish inquisitive behaviour in order to end up with a child with no inclination to ask questions. Those children will grow up believing that you can’t know anything about anything, so there’s no point in asking…
We should instead embrace our analytical powers. Celebrate curiosity. Ask questions. Look things up. Form opinions. Agree or disagree. We have an amazingly powerful brain between our ears that can solve incredibly complex problems. Not using it should be the only sin.
And if our children ask us a question we can’t answer, instead of just telling them some nonsense** to shut them up, let’s try to find the answer together. If we can’t find one or if we don’t understand it, we should be honest about it. We should not shut down attempts to learn. Keep them curious and thirsty for knowledge.
After all, we sure are going to need curious people…
* The testable claims made by our modern theory of evolution are legion and I cannot list them all, but here are a couple: 1) We predict that organisms of high complexity will occur later in the timeline of Earth’s history and that less complex organisms will occur earlier. This has been uniformly confirmed by palaeontological studies; there are no elephants to be found in the Precambrian, for instance. 2) We predict that there should be intermediate forms between two species, or groups of species, if evidence suggests that they are related. Again, this has been confirmed numerous times: in the lineage of the horse, in the relationship between birds and reptiles, and between fish and amphibians. Overall, not a single piece of evidence has ever been discovered that contradicts or falsifies the theory of evolution.
** Christianity: the belief that you can live forever by symbolically eating the flesh of a cosmic zombie, who is his own father, and telepathically tell him you accept him as your master so that he can remove an evil force from your soul which exists because some rib-woman was convinced by a talking snake to eat from a magical tree.
I flatter myself to be rather a mild-mannered and tolerant fellow. Only rarely do I get angry and shout at people in public. Most of the time I manage to mind my own business and try to remember that there’s most likely a perfectly good reason for that particular person’s weird and annoying behaviour.
But: when seating myself behind the wheel of my auto-mobile, things change. Any unfortunate passenger will be forced to witness a most distasteful Dr. Jekyll to Mr. Hyde transformation.
I shout and gesticulate emphatically. I insult and curse. I make rude gestures. But most of all, I can feel this fiery rage filling me up – consuming me – and making me forget all that is rational and good.
“Don’t make me angry – you wouldn’t like me when I’m angry”
It seems this is not an isolated phenomenon. I’m not the only person going Hulk in the car. We have coined the term road rage for just this type of behaviour. And it doesn’t seem to be linked to any particular demographics like age, gender or social status. It’s a global thing.
You know me by now: any global human behaviour triggers my curiosity. I immediately start to wonder how this can be, what its origin is and whether there’s an evolutionary advantage or explanation for it.
So what is this thing called road rage? Well, as those of you obsessive-compulsive enough to follow the link to Wikipedia above can confirm, it’s a display of excessive driving behaviour including erratic and/or threatening manoeuvring, verbal abuse and insulting gestures aimed at the drivers of other vehicles. It seems to be exceedingly common in situations of traffic congestion.
Now. We all know that we put on a pretence of civilisation and good behaviour to cover up our more base instincts. Behind that thin lacquer of logic and forethought lies an ocean of emotional turmoil and flash rage. It’s our primal core – directly inherited from our ape ancestors.
Imagine if you will a bus full of chimpanzees, going on a long trip. The group consists of a mix of different ages, genders and social statuses and most of them don’t know each other. We can easily predict what would happen: within minutes the bus would erupt in chaos with howling and screaming and flinging of distasteful substances.
By contrast, a bus full of human strangers, with the passengers forced into each others’ personal spaces for an extended period of time, fails to erupt into a full-on riot. We all pretend everything is fine and keep our emotions in check, regardless* of the level of stress we feel.
So what’s different with driving a car? Why can’t we control our primal anger in traffic when we’re so good at it elsewhere?
The sense of driving
Let us analyse the sense of driving a car for a moment. As individuals, we’re rather small and powerless. We might be the current top predator on Earth, but we’re still rather weak and slow. We’re not able to soar through the skies at great speed like a bird of prey, leap over tall fences like a kangaroo or lift huge tree trunks like an elephant. Ok, we’re pretty good long-distance runners, but on average we’re rather – well: average.
And in our modern societies, that feeling of powerlessness is reinforced by being part of huge organisations and members of giant nation states. Whatever we do, it most likely won’t affect a huge number of people.
But it’s all good. We repress and control our sense of meaninglessness and lack of power. We go about our daily lives, doing whatever we do for a living. We politely converse with neighbours and colleagues. Society prevails. All is under control.
Then, taking the wheel of our car and getting on the road, we suddenly feel a surge of power. Here we are, tiny little monkeys, controlling and manoeuvring a tonne or more of metal and glass. At the push of a pedal, the engine roars. At a twist of the wheel, the metal beast turns. We can go anywhere, at any speed. We have limitless power. We’re gods.
Until that pillock chooses to cut in right in front of you, blocking the road ahead. In an instant, all the power’s gone, all the freedom has evaporated. You’re back to being a cog in the machinery, a mindless drone, forced by others to behave.
Take my power, take my pride – take my sense of control
This sudden loss of power – a power that was given to you only moments earlier – is too much for our fragile minds to handle. We can’t abide this take-back of the rarest of gifts, this sense of being in control of great power for once, and we snap, reverting to our more primitive selves. How dare that low-life take away my all-too-limited moment of elation? I only just got it, for crying out loud!
And that is why road rage is so often associated with traffic congestion. After all, what other scenario symbolises the loss of power and control better than sitting in a machine capable of travelling at a hundred miles per hour while being forced to quietly queue, waiting to move but a yard or two, at no more than a snail’s pace?
The weakest link
So there you have it: we find the power of piloting heavy machinery exhilarating. And since we so seldom get to experience that kind of power, we’re extremely jealous of it and don’t want to lose it. But – traffic being what it is (mainly because it consists of illogical human drivers, all wanting to maintain that rare sense of control) – we will inevitably lose that power, mostly by being stuck in traffic jams. And, just like pushing a button, cue the road rage.
But perhaps things can be better? Perhaps being aware of the reason for our irrational behaviour could change things? Perhaps we, by seeing things logically, could become better at controlling ourselves and finally rid our society of this ugly phenomenon?
I somehow doubt it. History has shown that knowledge has very little effect on how we drive. We’re all slaves to our pent-up emotions, and I see no improvement until we get rid of the weak link in this scenario: human drivers. There’s a potentially glorious future ahead, free not only of traffic-related rage but of congestion altogether. But that’s another post…
* Add a little alcohol to the mix and the situation will be completely different. A small dose of mental inhibitor and we’re right back where we started: full of uncontrolled anger and rage.
There’s something unpleasant going on. All across Europe, far-right parties are popping up, spreading a message eerily similar to one you could have heard in Italy and Germany in the early 1930s. Different parties in different countries have their own individual policies, but the common denominator is an anger at the current state of affairs. They demand an immediate stop to spending money on welfare for people of “foreign origin”. Borders should be closed, immigrants should be expelled or at least closely supervised, and all international aid should be stopped at once. Basically, it’s a message of intolerance.
On a smaller scale, people in all walks of life seem to feel unjustly treated, as if they have somehow missed out on something. There’s a prevalent sense of envy and jaundice, of seeing oneself as the victim of some kind of conspiracy. Everyone else seems to be much better off, and they most surely don’t deserve to be. It can be as simple as someone having a later model of smartphone than you do, or what seems like a better job. Or perhaps a newer car or a bigger house. Regardless of the details, it comes down to haves and have-nots. And if we lived in a society where that could mean the difference between life and death, those things would matter. But in the modern post-industrial countries of Western Europe, we don’t. We have what we need. Our children get fed and educated, we all have clothes to keep us warm and places to live to keep us dry. And the few unlucky ones who don’t should be able to rely on a well-developed welfare system to help them out.
I know, it’s not a perfect system. People who need help sometimes don’t get it. And sometimes people who don’t need help get it anyway. But on the whole, it shouldn’t impact your life or hinder you from making it a good one. So that sense of envy – most likely a left-over function from past times, when life was truly rough – is now more or less obsolete.
But that doesn’t really matter. The green-eyed envy and the delusion of being subjected to government-made plots and conspiracies manifest themselves as anger and frustration, which in turn easily and quickly escalate into a white-hot hate, blinding us to any kind of logical argument and reasoning. And suddenly we start to long for simpler times, when all was good and people worked for a living (and had job titles we actually understood the meaning of). This in turn promotes conservative values, like the importance of traditional family structures (ideally banning all those newfangled PC alternative lifestyles altogether) and stopping immigration to finally get rid of all those multicultural influences.
And before we know it, the only party we can vote for is one of the new ultra-right single-issue ones. And that’s exactly what we see across Europe at the moment. From UKIP in Britain, SD in Sweden and NPD in Germany to Fiamma Tricolore in Italy, Front National in France and Svoboda in Ukraine, far-right extremist parties have gained momentum in the last few years. Even though they are still mostly in the minority, they are loud and attention-hogging, and their presence highlights that people increasingly let their anger decide where to put their votes.
The missing piece
What most traditionalists seem to miss, however, is that things weren’t all that different in the past. We’ve never had that traditional and simple society that everyone seems to remember so fondly, with a single culture consisting of a single, racially coherent people. We’ve always had immigration and we’ve always been multicultural. And thank goodness for that, or we wouldn’t be able to enjoy the richness of culinary delicacies that are almost without exception imported from abroad – even (or perhaps particularly) our treasured national dishes. We would not be speaking the languages we speak without the countless influences of dozens of foreign tongues. Our multicultural heritage is what makes us us.
As a species, we have always been a melting pot of ideas, cultural memes and linguistic influences. That is what has ensured that we survived where the other species of humans did not. We absorb new trends and phenomena at a speed and with a delight that is unprecedented in the history of evolution. We are multicultural masters, picking and choosing the best from every new society we encounter. It’s our speciality and the key to our success. But I’m aware that I’m preaching to deaf ears. If anyone who feels threatened by the current state of affairs would end up on this blog and read this post – a rather unlikely scenario, I admit – they would no doubt dismiss it all as stupid socialistic propaganda at worst or as well-meaning but horribly misguided advice at best. The blind hate mentioned above has… well, blinded them.
The stench of intolerance
For me personally there’s an additional aspect to this topic, however. I find intolerant people truly unappealing – revolting, even. Intolerance disgusts me. The selfishness of the intolerant makes them somehow look smaller, like shrivelled-up remnants of something that was once human but has now been reduced to something much less. Something sub-human. Not from belonging to the “wrong” class or race or culture, but from having willingly rid themselves of the most human quality of all: empathy. And in the wake of a lack of empathy, a range of unsavoury concepts eagerly awaits to fill the space: victim blaming, stereotyping and scapegoating, all fed by the celebration of ignorance, disrespect and contempt.
To be clear: we all have the full range of human emotions. We all sometimes feel envy, contempt or hate. Sometimes. And that’s the key. Without wanting to sound like a Star Wars fan, if we give in to the dark side it will consume us. All that will be left will be the sad remnant of a once proud, thinking, feeling and empathic human being: an empty husk, filled only with the smelly vapours of mistrust, hate and intolerance. The choice is ours and it’s a continuous one. Every day, every time we interact with another human being, we have to decide: “shall I be a real human being, making use of my full range of emotions and mental faculties, or shall I be an intolerant asshole?” Don’t be an asshole.
Contemplate, if you will, the picture above. It’s most likely a familiar sight, if you have ever caught a glimpse of a nature documentary on the telly: the African savanna. Zebra, wildebeest and buffalo graze away in the hot sun, and we sense the presence of lions, poised to attack them at any moment.
Imagine then, if such is your pleasure, the same scene with just one component missing: grass. Without the tiny green leaves of the plants from the Poaceae family, there would be no savanna but rather a thick forest of thorny acacia trees or perhaps a hot and arid desert. No giant herds of zebra and wildebeest. No lions to stalk them.
A world before grass
It might seem like grasses have always been around, that they’re somehow a very ancient group of plants, but in fact they’re quite a recent* addition to the planet’s collection of biomes. Even though the earliest species of grass appeared some 60–70 million years ago – around the end of the era of the non-avian dinosaurs – they were just a scarce and limited selection of plants growing near rivers and lakes.
In those pre-grass days, the world was a different place, covered by thick forests and deserts. No open plains full of grazing animals, no evolutionary race between swift herbivores and even swifter predators. It was at once a slower world, with most animals walking leisurely about and a more violent one with ambush predators lying in wait in the lush undergrowth.
And this was the state of the world for hundreds of millions of years. From the ancient fern forests of the late Devonian, 360 million years ago, through the swampy, oxygen-rich forests of the Carboniferous, the vast deserts of the Permian and Triassic and the lush conifer forests of the Jurassic, to the first flowering plants of the late Cretaceous, forests gave way to deserts and deserts in turn were overgrown by forests. Not until the end of the Oligocene, some 20–25 million years ago, did grasses start to spread to more arid plains and form the steppes, savannas and prairies we see today. And it took another 15 million years before modern C4 grasses like maize, millet and sugarcane started to make an appearance.
A dangerous opportunity
This might all be fascinating on a theoretical level, but there’s more to the story of grasses than a study in biome evolution. The spread of grasses and the formation of wide, open grasslands changed the adaptive path for many animals, from antelopes and horses to carnivores and birds. But there was another group of animals that was also affected: a small, insignificant family of primates suddenly finding itself exposed in the open – the hominids.
Our ancestors had it rough. Not only were they small and defenceless, they had lost their natural habitat and had to make do in a very competitive landscape, filled with powerful and dangerous herbivores like buffalo, rhinoceros and elephants, and hunted by fast and furious predators like lions, leopards and hyenas. This, in combination with a constantly changing climate, forced us to develop tools and weapons and drove us to rise up on our hind legs and become bipedal.
But as harsh as it was, the new grasslands also promised something new: a huge hunting ground full of game and lots of grass seeds and roots to eat. And as the savanna expanded north and met the Mediterranean Sea, so did our human ancestors, spreading on to the Middle East and then east into Asia and north into Europe.
Our green little friends
The story doesn’t end there, though. Not only have grasses changed the face of the Earth and facilitated the evolution of our own species, they have also been instrumental in taking us from a few thinly spread hunter-gatherer tribes to being the most widespread mammal in the world, sporting the most advanced societies the planet has ever seen.
Some 10,000–15,000 years ago, just at the end of the last glacial period, things were getting a bit crowded. There were tribes of humans all over Europe, the Middle East, Asia and Africa, and we were running out of places to gather food. Something had to be done.
Cue the agricultural revolution. Instead of walking miles and miles to find the herbs, roots and seeds we needed to stay alive, we started growing them around where we lived. And the main thing we grew was different types of grasses: wheat, rye, barley, millet, rice, and maize.
It wasn’t a revolution without casualties, however. Rather than providing us with a reliable source of food, the first cultivated grasses were prone to bad harvests, which made starvation and malnutrition a regular occurrence in human societies. In fact, it got so bad that average life expectancy dropped dramatically, and the people who survived to adulthood grew up significantly shorter and weaker than our hunter-gatherer ancestors. Essentially, agriculture was making us frail and sickly.
But we persevered – most likely because there was no real alternative; we’d effectively run out of space and had to make our own food from then on – and fast-forwarding a few thousand years to the present day, we’re much more accomplished farmers. We now rely on a range of grass seeds for our daily food, and as a result they make up the absolute majority of what we consume, be it in the form of noodles, rice, porridge, bread or pasta. As chance would have it, we turned out to be the only grass-eating ape** on the planet – and a very successful one at that. Grasses surely are our green little friends.
* As always, ‘recent’ is a relative term; in the context of speciation and evolution it usually refers to some millions of years, but fewer than a hundred million.
** We might be the only grass-eating ape, but there is another primate that’s also a graminivore: the Gelada, an East African highland baboon. They eat their grasses raw, however. As we cook our grass seeds, we get more nutrition out of them and are hence better than them and can feel appropriately smug and superior.
You’re at the grocery store, doing the weekly shopping, when you come over a little peckish. Considering something for the road, you spot a packet of Twinkies:
It seems an innocent enough treat, if a bit calorie rich. But on a hunch you turn the packet over, start reading the label and find the following:
Enriched Wheat Flour (ferrous sulfate, niacin, thiamine mononitrate, riboflavin and folic acid), sugar, corn syrup, water, high fructose corn syrup, vegetable and/or animal shortening (containing one or more of partially hydrogenated soybean, cottonseed or canola oil, and beef fat), dextrose, whole eggs, 2% or less of: modified corn starch, cellulose gum, whey, sodium acid pyrophosphate, baking soda, monocalcium phosphate, salt, cornstarch, corn flour, corn syrup solids, mono and diglycerides, soy lecithin, polysorbate 60, dextrin, calcium caseinate, sodium stearoyl lactylate, wheat gluten, calcium sulfate, natural and artificial flavors, caramel color, sorbic acid, E102 (Yellow 5), E129 (Red 40)
“Whoa. That’s a lot of ingredients.” you say to yourself. “And how do you even pronounce ‘sodium acid pyrophosphate’? Or ‘calcium caseinate’? And that ‘thiamine mononitrate’ sounds really nasty. Better off picking up some fruit and have a healthy natural snack. Bananas are good and filling, I’ll get some bananas instead.”
All natural banana
Bananas are a good, tasty snack. And I won’t mention the highly unnatural selection that has been performed upon it to end up with the cultivated version of the wild banana berry*.
But consider the ingredients list for a natural banana, if such a list was legally required:
Water (75%), sugars (12%) (glucose (48%), fructose (40%), sucrose (2%), maltose (<1%)), starch (5%), E460 (3%), amino acids (<1%) (glutamic acid (19%), aspartic acid (16%), histidine (11%), leucine (7%), lysine (5%), phenylalanine (4%), arginine (4%), valine (4%), alanine (4%), serine (4%), glycine (3%), threonine (3%), isoleucine (3%), proline (3%), tryptophan (1%), cystine (1%), tyrosine (1%), methionine (1%)), fatty acids (1%) (palmitic acid (30%), omega-6 fatty acid: linoleic acid (14%), omega-3 fatty acid: linolenic acid (8%), oleic acid (7%), palmitoleic acid (3%), stearic acid (2%), lauric acid (1%), myristic acid (1%), capric acid (<1%)), ash (<1%), phytosterols, E515, oxalic acid, E300, E306 (tocopherol), phylloquinone, thiamin, E101, E160a, 3-methylbut-1-yl ethanoate, 2-methylbutyl ethanoate, 2-methylpropan-1-ol, 3-methylbutan-1-ol, 2-hydroxy-3-methylethyl butanoate, 3-methylbutanal, ethyl hexanoate, ethyl butanoate, pentyl acetate, E1510, natural ripening agent (ethylene gas)
That list is even longer than the Twinkie one. And just as full of scary-sounding things like ‘tocopherol’ and ‘2-hydroxy-3-methylethyl butanoate’. Also, ‘ash’? Really? And ‘ethylene gas’ – isn’t that what they run those welding machines on?
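Comparing the two lists fairly is itself a small exercise: the sub-ingredients sit inside brackets, so counting top-level ingredients means splitting on commas only at parenthesis depth zero. A minimal sketch of such a splitter (the function name and sample string are mine, purely for illustration – no labelling standard is implied):

```python
def top_level_items(label: str) -> list[str]:
    """Split an ingredients label on commas, ignoring commas inside parentheses."""
    items, current, depth = [], [], 0
    for ch in label:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:
            # A comma at depth zero separates two top-level ingredients.
            items.append("".join(current).strip())
            current = []
        else:
            current.append(ch)
    if current:
        items.append("".join(current).strip())
    return items

sample = "water (75%), sugars (12%) (glucose (48%), fructose (40%)), starch (5%)"
print(len(top_level_items(sample)))  # 3 top-level ingredients
```

Run over the two labels above, a counter like this shows the banana holding its own against the Twinkie – which is rather the point.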
Now, I don’t want to scare you off ever eating bananas again – that’d be silly. Rather, the point I’m somewhat laboriously making is that chemistry is complicated. And organic chemistry even more so. Our naturally occurring foods often contain more weird-sounding chemicals than our man-made varieties.
The Frankenstein syndrome
There seems to be a prevalent mistrust of anything synthetic. A fear of plastics, chemical additives, man-made fibres, metallic alloys and other manufactured compounds. I’d like to label it a phobia (chemophobia would be the correct term), but perhaps it’s not a completely groundless fear? We’ve after all heard about countless incidents involving man-made chemicals and substances: factory emissions, illnesses from synthetic materials used in buildings and fabrics, allergic reactions to newly discovered chemical compounds and so on. It really does seem like whenever we invent something, something bad follows.
It’s called the Frankenstein syndrome – a fear of our own creations – and it seems to be widespread in modern society. But why are we so ready to mistrust new inventions? Are we really such bad inventors/scientists/chemists that we release monsters into the wild over and over again?
It could be argued that this phenomenon is just a variant of the old Luddite hatred of everything new, but I think there’s more to it than that. In addition to the mistrust of the unknown – which, to be fair, is pretty reasonable – there is a fear of loss of control. Once we’ve created something new, we effectively let go of it and let it run rampant. And even though the creators usually swear by their new product and assure us it’s benevolent, observers outside the lab are less convinced.
And so the mistrust and fear spark rumours and urban myths. Like the ones about the Brilliant Blue FCF colouring agent used in certain sweets. Rumour had it that the – rather unnatural-looking and therefore surely harmful – food colouring induced hyperactivity in kids who had enjoyed some blue M&M’s. And even though numerous studies showed that this was not in fact the case, the rumours refused to die, and parents kept going through the bags of M&M’s to rid them of the blue ones before their kids got their hands on them.
There is also an element of denialism at work here. Healthy living is now a fairly common lifestyle choice, and people strive to eat organic food without artificial chemicals. This is all a Good Thing, but the notion that we could somehow live chemical-free lives is a false one – even with the rather incorrect definition of ‘chemical’ as ‘synthetic’. There is no place on Earth today that is not already affected by man-made chemicals. The water, air and soil everywhere carry a cocktail of assorted compounds, ranging from the beneficial to the more harmful.
Of course, the level of artificial contamination will vary somewhat, and the choice to add your own in the form of synthetic fertilisers and herbicides/insecticides/fungicides will affect this level further. But be in no doubt – there is no produce to be had on this planet that is free from artificial chemicals.
Ok, so what is my point? Should we panic, or should we resign ourselves and give up? Is it the end of the world, or is it nothing to worry about?
Well, neither, really. Artificial chemicals might not be the end of the world as we know it, and they are certainly not automatically evil, but it would make sense to keep an eye out. Some of the chemicals we’ve released have been rather nasty indeed, not least the heavy metals. We still remember the effects of DDT and mercury compounds, and even today PCBs persist in the environment. Clearly, a more stringent view on how we test man-made chemicals before releasing them into nature or putting them in our food has been needed.
But let’s not go overboard. A lot of what we see in the ingredients lists for our groceries are compounds readily found in nature. As always, more knowledge about what’s what – and what could potentially be harmful – is required. Combined, of course, with a will to actually learn and accept new findings. So we need a little less of the “I don’t care what they say; I’ll never trust these additives!” or “All newly invented things are great and should be welcomed with open arms” and more of “Ok, let’s see if there are any good studies on this.” Less dogma and more research.
After all, knowledge is a good thing. Make sure you get yours; that’s an additive we cannot do without.
I’d like to thank James Kennedy for the inspiration for this post – and for painstakingly listing the ingredients for a bunch of natural products.
* Just for the record – this is what a natural wild banana berry looks like:
Every morning I get out of bed and stumble into the kitchen. There I make myself a mug of hot dark Kazaar lungo. The first sip tastes almost like hot smoke, the second is already softer, smoother. Almost instantly I can feel the buzz: my brain switches into gear and I start to function on a higher level. I’m awake.
1. Caffeine uptake and half-life
Unfortunately, the instant wake-up effect of coffee seems to be a myth. Depending on your metabolism, it takes up to 45 minutes for the caffeine in coffee to enter your bloodstream. So the effect I’m experiencing is either a placebo, or it’s down to the bitterness of the coffee itself. This also means there should be no issue with a cup of coffee right before bedtime, as long as you fall asleep before the caffeine kicks in. In fact, I often take a strong coffee before I go to bed to help me sleep. As for how long it takes caffeine to leave the system, its half-life is between 5 and 8 hours. So if you’re not used to coffee, the effects of a single cup could stay with you for most of a working day.
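To get a feel for what a 5–8 hour half-life means in practice, here’s a minimal sketch of first-order (exponential) elimination. The 100 mg per cup and the 6-hour half-life are illustrative assumptions, not measured values:

```python
def caffeine_remaining(dose_mg: float, hours: float, half_life_h: float = 6.0) -> float:
    """Caffeine (mg) left in the body after `hours`, assuming simple
    first-order (exponential) elimination with the given half-life."""
    return dose_mg * 0.5 ** (hours / half_life_h)

# An illustrative 100 mg cup at 9 a.m., checked 12 hours later:
print(caffeine_remaining(100, 12))  # two half-lives -> 25.0 mg still on board
```

So even half a day after a morning cup, a quarter of the dose can still be in circulation – which is why that pre-bedtime coffee is a bolder move than it feels.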
2. Origin and evolution
Caffeine can be found in a number of plants, either in the seeds (like coffee seeds or ‘beans’, the berries of guarana or the kola nut) or in the leaves (as in the tea plant). Evolutionarily, caffeine acts as an insecticide, paralysing any larvae that feed off the plant. It also seems to work as a growth inhibitor for seedlings of the same species, making sure no other plants grow too close and compete for resources. Additionally, caffeine seems to trigger a reward behaviour in honey bees pollinating the plant, encouraging them to revisit similar plants, which increases the probability of successful reproduction.
3. The colour of caffeine is – clear?
Rather counter-intuitively, caffeine is actually clear when dissolved in water, so the colour intensity of coffee or tea is a poor indicator of how much caffeine it contains. For instance, dark-roasted coffee contains less caffeine than light-roasted, because the roasting process removes some of the caffeine. And pale tea often contains as much caffeine as black tea – if not more.
4. Tea vs coffee
Speaking of tea: you’ve probably heard that tea contains more caffeine than coffee. This is strictly true if measured by dry weight, but as a prepared beverage, coffee contains many times more caffeine than tea. Which – if you think about it – you already knew from the effect of coffee compared with the much weaker effect of tea.
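The dry-weight versus per-cup distinction comes down to simple arithmetic. The percentages and gram amounts below are rough, illustrative figures (real values vary widely by variety and brew), but they show how a leaf richer in caffeine can still make the weaker drink:

```python
# Rough, illustrative figures only – actual values vary widely by variety and brew.
tea_caffeine_by_weight = 0.03      # ~3% caffeine in dry tea leaves
coffee_caffeine_by_weight = 0.012  # ~1.2% caffeine in roasted coffee beans

tea_leaf_per_cup_g = 2.0           # a cup of tea uses only a couple of grams of leaf
coffee_grounds_per_cup_g = 10.0    # a cup of coffee uses far more ground bean

# Upper bounds per cup, assuming all the caffeine is extracted:
tea_mg_per_cup = tea_caffeine_by_weight * tea_leaf_per_cup_g * 1000            # ~60 mg
coffee_mg_per_cup = coffee_caffeine_by_weight * coffee_grounds_per_cup_g * 1000  # ~120 mg
```

The leaf wins by concentration, but the cup of coffee wins by sheer mass of raw material.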
5. Kola, cola or coke?
Just like coffee, the kola tree is originally from Africa. The nuts of the tree contain caffeine and have traditionally been chewed for their stimulant effect.
More recently, extracts from the kola nut have been used in certain soft drinks to give them a similar effect (and possibly also to create an addiction to the product in the consumer). However, coca leaves are no longer used, so perhaps one of the better-known brands should consider changing its name?
6. Chocolate

I’ve written a post on chocolate before, where I explained all its amazing benefits. In addition to all its other qualities, dark chocolate also contains quite an amount of caffeine – as much as coffee, in fact.
The effect is, however, reduced by the theobromine and theophylline that are present at relatively high levels. This is why you don’t get the same buzz from chocolate. (What? No, sorry. That buzz is the effect of all the sugar found in chocolate. I’ll save the pros and cons of sugar for another post.)
7. Side effects
There are several misconceptions about the effect of caffeine (and coffee in particular) on our health. One is that drinking too much coffee will cause gastric ulcers, and that we should limit our intake to only one or two cups a day.
This is false. Gastric ulcers are caused not by coffee but by the bacterium Helicobacter pylori, something the Australian doctors Barry Marshall and Robin Warren proved in the 1980s: Dr Marshall deliberately drank a concoction containing the microbe and within days had developed severe gastritis – without drinking copious amounts of coffee.
8. Health benefits
In fact, rather than being detrimental to our health, caffeine seems to offer some protection against a range of diseases, including Parkinson’s, type 2 diabetes, liver cirrhosis and certain types of cancer. You would need to consume a lot of coffee to get these effects, though: more than 4–5 cups per day.
9. Loo breaks
Another common misconception is that caffeine is strongly diuretic, and that you will have to run to the loo all the time if you drink tea, coffee or cola drinks. The truth is that caffeine is only mildly diuretic and only in people who are not used to it. One of the amazing things about caffeine is that all the side effects (sleep disturbances, nervousness, minor muscle tremors etc) wear off as you get used to the drug. The benefits however (alertness, increased concentration capabilities, reduction of physical fatigue etc.) all stay with you, regardless of how long you’ve been using caffeinated drinks.
10. Toxicity

Even so, as with most alkaloids, caffeine is toxic to human beings in high enough concentrations. It would, however, take 80–100 cups of coffee to reach a lethal dose. The risk of overdosing on coffee is therefore rather remote, but there has still been at least one reported death attributed to caffeine: a man suffering from liver cirrhosis overdosed on caffeinated mints.
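The ‘80–100 cups’ figure is easy to sanity-check. A lethal dose is often put at around 10 g of caffeine; the 100 mg per cup below is an assumed value for a typical cup of filter coffee:

```python
lethal_dose_mg = 10_000    # ~10 g of caffeine is often cited as a lethal dose
caffeine_per_cup_mg = 100  # assumed for a typical cup of filter coffee

cups_to_lethal = lethal_dose_mg / caffeine_per_cup_mg
print(cups_to_lethal)  # 100.0 – consistent with the 80-100 cup estimate
```

Stronger brews would lower that count, weaker ones raise it, which is where the 80–100 range comes from. Either way, you’d never physically get through that much coffee before the first cups had worn off.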
11. Memory enhancement
But I’ll end this list with a new discovery: it would seem that caffeine helps with memory consolidation, i.e. the process of converting short-term memories into long-term memories. In a recent study, people who consumed the equivalent of two cups of espresso just after a memory test outperformed the placebo group. There seems to be a sweet spot at 300 mg, though, so don’t overdo it. Two cups of espresso is just enough – no more, no less.
The dark mistress
Overall, caffeine seems to be quite a drug. No real side effects (and the minor ones that do exist fade away over time), and several mental and physical benefits. But drinking tea or coffee isn’t down to logic; it’s a lifestyle – a passion, even. Once we get over the bitterness of the dark mistress, we just can’t get enough of her. Which, in the cold light of logical thinking, might be a drawback, but I really don’t care. Give me another mug of that strong dark hot stuff.
A while ago – several years ago, actually – I wrote a post on electric cars, bemoaning the lack of market penetration, even in the 21st century. It was sort of a prequel to the post The future isn’t what it used to be, where I commented on the lack of technical advancement. There’s another side to these two stories and that’s the one about autonomous vehicles, a.k.a. self-steering cars.
Just like electric cars, self-steering cars have been around for a long time, albeit in a limited sense. As early as the 1920s, there were successful experiments with remote-controlled cars driving in heavy city traffic. But of course the computing power needed for fully autonomous cars didn’t exist back then, so research instead focused on getting autonomous cars to follow magnetic or electric rails hidden in the streets. This railroad-car technology never took off, due to the potentially astronomical costs of amending all the roads in the world with guide rails.
Even when the guide rails were replaced with electronic devices that detected road and lane edges in the 1960s, the cost was still too high for anything but limited field trials, and it wasn’t until 20 years later that we got the first hints of cars being able to detect roads and lanes all by themselves.
And now, 30 years on, we have fully autonomous cars driving in real-life city traffic on a daily basis. Companies like Mercedes-Benz, Volvo, Volkswagen, Ford, Toyota, Audi, BMW, Nissan and GM are all currently testing driverless cars. And of course we have the famous Google cars.
And gone is any need for roads amended with sensors and guide rails. Modern cars know what a road is by looking at it, and know how to stay on it. They also know how to keep their distance from surrounding traffic and how to navigate intersections and multi-lane highways. They do this with higher precision than human drivers, and at higher speeds.
In fact, as the technology has matured so quickly (relatively speaking), governments around the world are finding themselves with outdated traffic legislation and are scrambling to catch up. Germany and the UK have already passed laws that allow driverless cars to operate in traffic, with the owner of the car responsible for any accidents – even if he or she wasn’t driving at the time.
So what will the future bring? When will we be able to buy our first driverless car? And will we want to?
Well, the benefits of autonomous cars are numerous. First and foremost, they will almost certainly cut road traffic accidents by at least 95%. With high-precision driving systems (that never get tired, lost, frustrated, drunk or sick), aided by radar and infrared sensors (allowing them to see in the dark or in fog), we would soon enter a period where people getting hurt or killed in traffic would be major news. It would be more common to be hit by lightning or to win the lottery than to be in a car crash.
Secondly, it would make traffic much smoother. Human beings aren’t exactly renowned for their logical thinking – especially when driving – so much of the traffic congestion we experience in cities today is down to irrational driver behaviour. Not so with autonomous cars. They will let other vehicles in, they will keep safe distances and reasonable speeds, they will know which routes to avoid at certain times of day, and they will communicate with each other in a polite and relaxed manner.
Thirdly, there is convenience. Apart from being relieved of the rather stressful task of keeping a metric tonne of heavy machinery on the road at high speed, we would be able to have our cars pick us up at home, drop us off at work and then go somewhere else to park for the day. No more looking for parking spaces or waiting for the car to heat up and defrost on cold winter mornings. And we wouldn’t have to worry about having had a drink with dinner, or being too young or too old to drive, or suffering from a disability of some sort. The car would take us where we need to go.
And lastly, there is the financial aspect. Even though self-steering cars will no doubt be prohibitively expensive at first, prices will soon drop, and we can expect autonomous cars to become cheaper than manual cars at some point in the near future. Add to this the reduced need for insurance and the optimal fuel economy of autonomous cars, and you’re sitting on a winner. In the bigger picture, society at large will also benefit, since the costs of road traffic accidents and their related human traumas add up to astronomical amounts every year.
There are, however, a few obstacles on the road (no pun intended) to that bright new future. And they’re not technical limitations – the technology needed for autonomous cars will undoubtedly become better, cheaper and smaller over time, but even what we have today is perfectly adequate.
No, the problem is more one of human nature. It’s our own inbuilt fears and hang-ups that will prove the most difficult obstacle. And for once I’m not talking about the Luddite syndrome of hating and fearing everything new (that’s another post). No, this time it’s more about loss of control.
Humans are an industrious bunch of monkeys. We keep inventing more and more advanced ways of staying ahead of the game, of keeping ourselves safe and alive. Fire, stone tools, fur clothes, huts and canoes. And lately multi-lane interstate highways, high-rise buildings, water closets and streamed high-definition IP-based television.
But the downside of all our inventions is that they make us think we’re in control; that we can somehow control life. And that feeling of control is something we don’t want to give up. We assume we always know best. We really are the most arrogant primates on the planet.
This could affect the uptake of self-steering vehicles. Even if autonomous cars turn out to be better drivers than the most seasoned and experienced rally driver, we will harbour an inbuilt mistrust towards them. A machine could never really drive a car, surely? How would it know what to do if something happens? It would never be as good a driver as I am. Or – what if something goes wrong? What if it malfunctions? Then we’ll be stuck in an out-of-control car, barrelling down the streets at rush hour at 90 mph. It’ll be a nightmare!
Yes. It certainly would. But let’s think about it factually: how many airplane crashes have you heard or read about that were caused by autopilot failure? And how many that were attributed to human error? Granted, driving a car is more difficult than flying an airplane, but even so: people being distracted, reacting too slowly or being just plain drunk is the main cause of road traffic accidents. Not the cruise control running amok or the automatic brake system failing.
Like it or not…
In the end it won’t really matter. Technology has a tendency to march on regardless of any concerns about safety or loss of freedom. Already this year, Mercedes will be selling its S-Class with autonomous steering, braking and lane-control systems. Volvo and Ford will follow with their semi-autonomous cars, and next year both Audi and Nissan will join the ranks, closely followed by Toyota and Cadillac. And within six years, we will see the first fully autonomous vehicles on the market, with Mercedes-Benz, Volvo, BMW and Nissan selling completely self-steering cars around the world. Within five more years, they will be joined by Ford and Daimler.
So it looks like there will be a dozen or so models of autonomous cars in our everyday traffic within a few years. We will no doubt hate them at first, as they will drive carefully and keep to the speed limits. We will also hate them because they will be very expensive cars and we would like to be able to afford one too. But after a few years, these feelings will most likely fade, and it’s not unlikely that if you buy a brand-new car in 10 years’ time you will opt for the more convenient self-driving one – if nothing else, because of the huge savings on the insurance premium.
And fast-forward another 10 years, and we can expect to find old-fashioned manually steered vehicles only at the bottom of the range. Instead, all mid-range vehicles will be autonomous, and some will boast trendy new features like downloadable driver profiles, so that you can be driven around by famous rally or racing drivers. Or – if you prefer – perhaps a boy-racer profile? Or a senior citizen one? Or a distracted parent one? Sky’s the limit…
Either way, we will have become so used to the convenience and safety of autonomous traffic that we will start lobbying for a more extensive and thorough driver’s-test programme for those who still choose to drive manually. Within a few more years, a driver’s licence will be as costly and rare as a pilot’s licence.
The future is bright, the future is now
No doubt my predictions in this post will be wrong. Predictions about the future always are. Mostly because they’re too conservative or too linear. In the 1980s, no one could even imagine the socio-economic impact of the internet. Just like no one in the early 1900s would have been able to predict the meteoric rise in motorised traffic.
But one thing is pretty clear: some years from now, when we’ve gotten old(er), we will most certainly be able to rant on about the good old days to our grandchildren. The good old days when we were still allowed to drive, and cars would still run on highly carcinogenic fossil-based fuels, like petrol. Or diesel, even.
And our grandkids will no doubt roll their eyes, stop pretending to listen or care, get into their chic fuel-cell-powered autonomous personal vehicles and drive off, somewhere far, far away from our grumpy old selves.
* Might not be factually true, although I sure hope it will be.