I watched a Swedish television commercial the other day. It depicted a slightly portly middle-aged woman winning a range of Olympic events. The point of it – I think – was to show that middle-aged women are better than you’d expect at things you didn’t think they could do; an idea that the company behind the commercial – an internet service provider focusing on online gaming – was eager to reinforce.
But regardless of the message or the motive behind the commercial, it got me thinking: does being overweight stop you from being a healthy human being? Is what we’ve been told actually true – that being overweight is a sure ticket to heart conditions, diabetes, circulatory problems and all the rest? Or is there something else hidden here? Could we have oversimplified the issue?
There is such a mountain of statistical data linking numerous diseases to being overweight that the link has become the conventional wisdom of the medical profession. And not just the medical profession either; the same statistics are used in the insurance business and the fitness industry as well as generic healthcare services. There is a lot of money to be made here, and in reinforcing the idea that being overweight is the same as being unhealthy. No one wants to be unhealthy, after all, and being told that you are will most likely trigger a behavioural change, in order to fight this excess weight and become healthy again. It’s then a piece of cake (mmm, cake…) to sell services that cater to that need to lose weight.
Case in point: when I had my yearly medical a while back, it was pointed out that I could do with losing some excess weight and that additional exercise would be a good idea. My first reaction was to immediately start planning how to lose this excess weight to ensure my good health. But then I had second thoughts. OK, so I don’t participate in any team sports or spend my free time at the gym, but I eat (more or less) healthily and I walk 6–7 km at a brisk pace daily. That should at least make sure I’m not exceptionally unfit, shouldn’t it? I mean, one doesn’t want to be manic about fitness, does one?
But, on the other hand, those statistics are a frightening read…
Lies, damned lies and statistics
It turns out that the truth is a little more complex than the statistics indicate. What we think we see in the statistics could be common symptoms rather than cause and effect. Even though it is indeed true that people who are overweight are more likely to also suffer from diabetes, heart conditions and circulatory problems, it doesn’t automatically follow that the former is the cause of the latter. An equally valid answer would be that an unhealthy lifestyle is the cause of both. This would mean that you could get all the health problems listed above without being overweight, and that you could be overweight without developing a single one of them.
Now, obviously I can’t deny that certain conditions are linked directly to being overweight. If you happen to be very overweight, you might start suffering from pains in your joints, and your heart would have to work harder to power a bigger and heavier body. But that’s not my point. My point is that in the current culture of manic weight loss, even healthy young people who are just above the ‘normal’ weight (or indeed at or under it) are desperately trying to lose weight by dieting and excessive training and exercise. That is not healthy behaviour, and is just a few tiny steps away from turning into full-blown eating disorders like anorexia or bulimia.
The tide is turning
Lately, reports have started to challenge the old wisdom. Studies have been carried out to more closely investigate the relative effects of obesity and fitness on health. They all show that keeping fit is much more important than losing weight: obese people who exercise regularly and lead a healthy life have a much lower risk of morbidity than people who are of ideal weight but unfit.
But don’t take my word for it. Here are a few quotes, starting with the Harvard Health Policy Review:
“A fit man carrying 50 pounds of body fat had a death rate less than one-half that of an unfit man with only 25 pounds of body fat.”
And the Annals of Epidemiology:
“Consistently, physical inactivity was a better predictor of all-cause mortality than being overweight or obese.”
And from The President’s Council on Physical Fitness and Sports:
“Active obese individuals actually have lower morbidity and mortality than normal weight individuals who are sedentary … the health risks of obesity are largely controlled if a person is physically active and physically fit.”
And finally the International Journal of Obesity and Related Metabolic Disorders:
“An interesting finding of this study is that overweight, but fit men were at low risk of all-cause mortality.”
To summarise: being unhealthy is much more dangerous than being overweight. And being overweight is not the same as being unhealthy. Although they can sometimes correlate, they are independent, and should be viewed as such.
So, if you’re feeling guilty lusting after that cheeseburger or pizza – don’t worry. As long as you have a reasonably healthy lifestyle, and it’s not in direct conflict with any pre-existing medical conditions, go ahead. Enjoy. Life is short enough; live it a little.
P.S. Just after finishing this post I read in New Scientist that not only does being overweight have no negative effects on our health and longevity, but it could actually be beneficial. Indeed, carrying a few extra pounds seems to make you live longer than if you’re at your ‘ideal’ weight. There you go: yet another reason not to forgo that dessert.
P.P.S. I’ve added a motivational “Fat and fit” pullover inspired by this post to my Zazzle store.
Whenever I finish a blog post I say to myself: “There. I’m done. This will be my last post. I’ll never blog again.” It feels like I’m empty. Done. Finished. And if I go against my better judgement and try to force myself to open WordPress and click Add New Post I end up staring at the dreaded blank page.
But after a while (days or sometimes weeks) I get this itch, this urge to write. An idea has formed, or a need to explore a topic in more detail. It connects with other ideas and factoids I’ve collected over the years and before I know it I once more find myself sitting in front of my computer and starting on another post.
This seems to be my process. I need these periods of downtime in order to be creative. And, being aware of this, I don’t really mind. It is as it is. It’s a small price to pay to be able to express myself in text.
But this got me thinking: what does it really mean to be able to write? Is it important? And I don’t mean being able to write your name to sign for that delivery, but actually putting your thoughts down in writing in a way that’s understandable to others. Is that in any way essential? Or is it like being able to sculpt or play the sitar – nice if you know how, but not really important for your everyday life?
There’s a form of illiteracy spreading that takes the form of not being able to express one’s ideas and thoughts clearly enough in text for someone else to understand them. People suffering from this new illiteracy know how to write, but not how to write understandably. Their writing reveals a severe lack of understanding of basic grammar and spelling, and only rudimentary knowledge of sentence structure.
This form of illiteracy has in fact spread all the way up to the higher levels of the education system. Uppsala University is Sweden’s oldest and most prestigious university, and has traditionally ranked well both in Sweden and internationally. But lately the professors teaching courses there have noticed a significant drop in the students’ ability to write. The students don’t seem to understand that changes in word order change the meaning of a sentence, they have only a very limited vocabulary and they suffer from a severe lack of grammatical knowledge in general. They no longer use capital letters at the beginning of sentences or full stops at the end. It’s come to the point where they can’t write reports or read and understand academic texts.
Does it matter?
But does it really matter? If everyone is on the same – albeit less than ideal – level of understanding, wouldn’t the language simply adjust and become simplified in itself? Why do we need this advanced linguistic knowledge anyway? What does it matter if students are on the literacy level of a 13-year-old? Aren’t they still smart enough? Don’t they still think unique thoughts and come up with new ideas?
Perhaps they do. Perhaps language doesn’t affect the way we think. And with the advent of new technologies we might never have to write things again. Voice-to-text solutions are limited today, but they show encouraging signs of maturing into usable tools for everyday situations. And with text-to-speech algorithms reading out loud for us, we could perhaps bypass written language altogether, or at least banish it to our computers and make it into a machine language. In the near future, we could have devices interpreting the nerve signals we send to our larynx and tongue as we subvocalise our thoughts, then store those thoughts digitally in the cloud, send them to our friends or publish them to a wider audience. All of it without ever touching a keyboard or picking up a pen.
But hang on. If we’re no longer able to write comprehensible sentences, what would those subvocalised thoughts really look like? If we lack the ability to put our thoughts together according to strict grammatical rules, how would we be able to communicate them to other people? If we don’t all follow the same rules, wouldn’t we simply drift apart and end up being utterly incapable of understanding each other? We would be split up and isolated, just like in a modern version of the tower of Babel*.
Language and thought
I’ve asked a lot of questions in this post, but the central one would have to be ‘Does language affect the way we think?’. And to answer that question I’d like to return to my favourite subject: human evolution.
In the beginning there was no language. Humans – or pre-humans, I guess – made do without ever uttering a single word. Sure, we had different calls and gestures for different things – ‘words’, if you like, for things like ‘leopard’, ‘water’ and ‘crocodile’ (just like a lot of other animals) – but no language as such. That lack of linguistic capability could be seen not just in the physical structure of our bodies (lack of space for a lowered and elongated larynx, the diminutive size of the hypoglossal nerve canal), but in our culture and tool industry as well. As our linguistic prowess increased, so did our sophistication in tool making and arts and crafts. There seems to be a direct correlation between inventions and the use of language.
This interesting connection could well be evidence of us humans having to be able to think things through in words and sentences in order to make sense of them. Until we can put an idea into words we only perceive it as a hunch, something just beyond the grasp of our minds. So in that sense, being able to form coherent sentences is an essential requirement for constructive thoughts and ideas. Without language our minds are blind, fumbling around without a chance of ever coming up with any original thoughts of their own.
So, yes: a proper understanding of language is essential for our capability of thinking original thoughts. We need a language with a fixed set of grammatical rules in order to make sense of the confusing and ever-changing collection of ideas we have inside our minds. And if we want to communicate those ideas to others – the basis for human culture – we need everyone else to use the same grammatical rules in order for them to understand what we’re saying.
Evolution or degeneration?
Language isn’t a fixed thing. It is constantly changing and evolving. New words and grammatical rules are adopted regularly and old ones disappear and are left by the roadside of the history of language like old fast food wrappers and discarded empty cans of soda pop.
But, whatever changes a language goes through it has to be a global change, a change everyone (at least eventually) is onboard with. Otherwise the language will start to degenerate and become a blunter and blunter tool. And our thoughts and minds will become blunter with it. So let us keep our language and our minds as sharp as possible. We are going to need them. Badly.
* For the record, the tale of the tower of Babel has always confounded me. What is the moral supposed to be? “Don’t try to do great things”? “Be wary of God, for he is a mean bastard and will mess you up good”? Honestly, does anyone have any ideas?
It has come to my attention that people are sometimes confused about the theory* of evolution. This might take several forms, from mixing up facts or misunderstanding processes to an outright denial of the whole thing. This bothers me. No one should have to go through life without a basic understanding of the most revolutionary theory of science ever conceived.
I’ve therefore written this blog post in order to try to explain the theory of evolution in simple, non-sciency, everyday terms.
But before I begin, I have to mention the 154-year-old book On The Origin Of Species By Means Of Natural Selection by Charles Darwin. It’s arguably the most important book ever written and it has influenced not only biologists and other scientists but also popular culture and society as a whole. It has made Charles Darwin the only scientist ever to be immortalised by having an ism named after him (by contrast, there is no Newtonism or Einsteinism). It’s also a very well-written book, easy to read (even though it can sometimes feel like it is mostly about pigeons), and I recommend that you download your free copy right now.
All done and ready to go? Good. OK, here we go.
The theory of evolution is based on three components: heredity, variation and selection. I’ll go through these one by one before summarising the modern theory of evolution.
The first part of the theory of evolution is the concept of heredity: that character traits are passed on from parents to offspring. We are all familiar with this phenomenon and it’s been a well-known fact for millennia. What hasn’t been known is the nature of heredity – how it actually works. Until quite recently, we believed it worked like mixing paint, with the offspring becoming a blend of its parents’ traits.
Thanks to a 19th-century Austrian monk called Gregor Mendel, we now know this is not the case, and that traits are in fact passed on as discrete units – genes. We’ve also figured out that genes are made up of a special kind of molecule called DNA and that they are collected in their thousands into chromosomes.
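Mendel’s ‘discrete units’ are easy to demonstrate with a little simulation (a sketch for illustration only – the allele names and the 10,000-offspring cross are made up, not taken from any real data set):

```python
import random

def cross(p1, p2, n=10000):
    """Each offspring inherits ONE randomly chosen allele from each
    parent -- discrete units being passed on, not blended 'paint'."""
    return ["".join(sorted(random.choice(p1) + random.choice(p2)))
            for _ in range(n)]

random.seed(1)
# Two heterozygous parents; 'A' is dominant over 'a'
kids = cross("Aa", "Aa")
dominant = sum("A" in k for k in kids)   # offspring showing the dominant trait
recessive = len(kids) - dominant         # offspring showing the recessive trait
print(dominant, recessive)               # roughly a 3:1 ratio
```

Because inheritance is discrete rather than blended, the hidden ‘a’ allele resurfaces unchanged in later generations, giving the classic 3:1 ratio instead of a uniform mix.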
The second part of the theory of evolution is the observation that offspring seem to vary. Within a litter of animals or a collection of seedlings, there will be slight (or sometimes not so slight) variations in appearance, strength, endurance and so on. We see this in our own children, in our pets and in our garden plants, but it’s equally true for wild animals, plants, fungi and all other life-forms that reproduce sexually. In addition to visible traits, there are also hidden or more subtle variations in traits like capability to process certain types of food, resistance to pathogens and so on.
This variation comes down to how the genes of the parents are combined. With sexual reproduction (as opposed to cloning), the chromosomes are recombined within the offspring to form a random mix of the traits the parents possess. It works sort of like a tumbler, where the chromosomes from the parents are shuffled together, with different bits of the offspring’s chromosomes coming from either the mother or the father.
There is also another form of variation, which occurs at a slow but more or less regular rate: mutations. A mutation is a change in a gene that results in a different function. There is a range of causes for mutations: certain chemicals, radiation or even infections. Most of the time, the mutation takes place in a somatic cell (i.e. a regular cell that makes up the body of the organism**, like a skin cell or a muscle cell), but sometimes it happens in an egg or a sperm. The mutation then becomes hereditary, and the offspring could potentially display a character trait completely different from that of either parent.
The final part of the puzzle is the concept of natural selection. This is the key to understanding how evolution works, and the most often misunderstood part of the theory. In essence it’s a variation of the artificial selection we’ve been practising for centuries on our livestock, pets and plants. We can easily see how our careful selection of desired traits has resulted in the multitude of breeds and stocks we have in agriculture today. Natural selection works in a similar way, in that traits that benefit the organism will help it stay healthy and produce more offspring. Over generations, that particular trait will become more and more common in a population, until the population is different enough from other populations that it no longer reproduces with them and becomes a new species.
Now, all of this takes time. For any kind of visible change to take place, hundreds if not thousands of generations will have to pass. In larger organisms this could take many thousands or even millions of years, but in smaller ones – like bacteria – the changes can be seen within weeks or even days.
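To get a feel for those timescales, here’s a toy calculation (my own illustrative numbers, not real biology: a variant giving its carriers a 1% reproductive advantage, starting at a frequency of 1% in the population):

```python
def spread(p=0.01, advantage=1.01, threshold=0.99):
    """Deterministic toy model of selection: each generation, carriers
    of a beneficial variant leave `advantage` times as many offspring
    as non-carriers. Returns generations until near-fixation."""
    generations = 0
    while p < threshold:
        carriers = p * advantage        # fitter variant over-reproduces
        others = 1 - p                  # everyone else reproduces as before
        p = carriers / (carriers + others)
        generations += 1
    return generations, p

gens, freq = spread()
print(f"Variant at {freq:.1%} of the population after {gens} generations")
```

Even with these made-up numbers the variant needs on the order of a thousand generations to dominate – which is why visible change is slow in long-lived species but can happen within weeks in bacteria.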
Evolution in a nutshell
And that’s the theory of evolution in a nutshell. It’s heavily based on Darwin’s idea of natural selection, but with the deeper understanding of heredity and variation that modern genetics has given us, and it can be summarised in one paragraph:
The theory of evolution states that all life is related and that traits within each organism are hereditary, albeit with some variation. This variation is played out in competition, where those most fit for their environment produce more offspring, and their variants become more numerous as a result.
It’s a very neat theory that clearly explains how life has managed to become so diverse and how it all links back to one single organism (called LUCA – the Last Universal Common Ancestor), way back in the mists of time some 3.8 billion years ago, when life first appeared on the planet. In the 150-odd years since the theory was first conceived, it has withstood countless attempts to disprove it and has only grown stronger as more and more facts pile up in its favour.
It’s all pretty amazing and in my opinion truly awe-inspiring in its breath-taking simplicity.
* A quick note on the use of the word ‘theory’: rather than representing a vague idea or hunch, as is often the case in civilian use, a theory is the highest order of empirical knowledge in the world of science. Once a set of hypotheses has been combined and has survived extensive testing and multiple attempts at disproof, it can be elevated to the final stage: a scientific theory. For instance, we have come up with the theory of gravity in order to explain the everyday phenomenon of not being flung into outer space. It doesn’t mean that we don’t know whether gravity exists, just that we have constructed a theory for how it works. (Coincidentally, the theory of gravity is on much shakier ground than the theory of evolution.)
** Mutations that occur in regular somatic cells sometimes cause cancer. In fact, we all get multiple new cancerous cells every day, but our immune system is very good at tracking them down and destroying them. It’s only when that protective system fails that we develop cancer.
I’ve spent a lot of the last few weeks on the road. Well, that’s not strictly true: up in the air would be more accurate, since I’ve mainly been traveling by plane. Come to think of it, it almost feels like I’ve seen more of the airports than the actual destinations.
But either way, I’ve been visiting both Helsinki and Stockholm repeatedly lately, giving me the opportunity to enjoy some of the cultivated pleasures of city-life: cappuccinos in big paper mugs, delicious Indian food and gigantic tanks full of stingrays. There are certainly some perks to being in a big city, something I perhaps notice more now that I live in the countryside on Åland.
Life in the city
As I’ve mentioned before, I grew up in Stockholm. Or rather in a suburb of Stockholm. I enjoyed it, what with the tarmacked bicycle paths, street lights and playgrounds. My friends and I played in the yards between the concrete apartment buildings and we could go wherever we wanted on our bikes: school, the beach, the grocery store.
Years later, when moving back to Stockholm in my 20s, I still enjoyed it. I studied evolutionary biology at Stockholm University, located in perhaps the most beautiful part of Stockholm: Norra Djurgården. And after finishing my studies, I started working as an entomologist at the Swedish Museum of Natural History. Those were good times. It was also the time when me and Fiancée (then Girlfriend) met and got together.
But life in the city as a student (or research assistant) was not always fun. Lack of money and uncertain living prospects hung like dark clouds on the horizon. And research funding wasn’t going all that great either. To appreciate city-life you definitely need to have some money.
We moved away for a while and I worked as a science teacher, but returned a second time for another couple of years when I studied multimedia and web design before moving on again.
Life in the sticks
During my childhood we travelled yearly to Åland to spend the summer holidays in the countryside in our summer home. That might sound fancy, but I can assure you it was anything but.
We didn’t have any running water, electricity, showers or indoor bathrooms. Everything was back to basics: when it got dark, we lit candles. If it got cold, we made a fire in the fireplace. We fished redfin perch that we smoked in our homemade wood-burning smoker that doubled as the laundry cooker. Rainwater was collected to add to the small rations of freshwater we’d brought with us.
I really enjoyed the seclusion of our country visits for its simple back-to-nature qualities. I played by the water or in the forest that surrounded the cottage. I explored nature and discovered frogs, salamanders, snakes, bats and many fascinating insects. Days or even weeks could pass without us seeing another living soul. It was a great experience, and it no doubt helped me develop my imagination and ability to entertain myself in my mind.
I wouldn’t call myself a nature romantic, but growing up in such close proximity to nature has really affected me, and to this day the smell of sun-warmed moss and salty seawater still brings back memories of my childhood wherever I am.
So. Which is better? City or countryside? On what side of this dualistic exercise do I find myself? Having recently moved back to the Åland countryside, I might be considered biased, but the truth is that I really enjoy living here. We’ve got nature on our doorstep, decent access to shops and stores and – thanks to the marvels of technology – I’m still connected to all my people across the globe. And if we really want or need to go away, Stockholm is just a boat ride and car trip away.
I really love Stockholm. It’s a beautiful city with a great atmosphere. In a way it reminds me of Manchester, with its mix of history and modern architecture. But I don’t want to live there. It’s great visiting from time to time (if nothing else to fuel up on cappuccinos), but the high tempo gets old after a few days and starts to wear me down. And the traffic is a nightmare. By contrast, life in the countryside is less stressful even though it’s also less exciting. And there’s a distinct lack of coffee places and Indian restaurants.
But in the end it comes down to this: there are a lot of pillocks in this world**. Living in the countryside you won’t be able to avoid them, but – by means of pure mathematics – there will be fewer of them around. And that, to me, is as good a reason as any. I’m staying here.
* With no conscious references to Kehlsteinhaus, the Nazi WWII retreat at the German border to Austria.
** Present company excluded. Obviously.
You find yourself on a mountain slope, overlooking a wide green valley. The sun is beating down, but is not all that hot yet; after all, it’s still only mid-morning. You can smell the wet dirt from last night’s rainfall and the air is dense with the noise of insects. In the distance, big herds of wildebeest and zebra are slowly moving across the plain, eagerly feeding off the fresh green grass. A small group of elephants are drinking by the shallow river, two of the smaller calves are playing in the water, spraying muddy water from their trunks into the air. White-backed vultures are circling high above, riding the thermals, looking for carrion. This is Ethiopia. Or rather, it will be – in another 1.7 million years.
Turning east, you can make out a thin pillar of smoke rising from the mountain slope. Moving closer, you realise it’s smoke from a campfire; there are humans around. Soon you can hear them, and carefully looking through some dense shrubs you can see them too: a family group of some 15-20 people, collected around the fire getting ready to have breakfast. They talk and laugh, and the children are running around, chasing each other with small twigs in their hands. It’s all very idyllic and familiar, but these aren’t modern humans. They don’t even belong to our species. What you’re looking at are the evolutionary grandparents of modern humans – this is Homo ergaster.
The campfire in the middle of the clearing is more than just a fire. It’s more than the latest advance in technology. It’s more than a heat source or a means of protection. It’s more than a tool for hardening wooden spearheads and curing animal skin. It’s more fundamental than that. Fire is intrinsically linked with the evolution of humans; it’s the reason we’ve evolved to the modern form, abandoning the ‘chimpanzee-on-two-legs’-look that we’ve favoured for millions of years. Fire made us human. It shaped our evolution and kick-started the rapid increase in brain size.
What is so special about fire, then? Sure, it’s a convenient tool and useful in many different ways, but claiming it to be the origin of man is a bit preposterous, isn’t it? Well, no, actually, it’s a plain scientific observation. Fire allowed us to do something no other human species – or indeed any other primate – had been able to do: cook our food.
Life is a constant struggle. Perhaps not so much today, but 1.7 million years ago it certainly was. And for our great-great-great-uncle Homo habilis – the chimpanzee-on-two-legs I mentioned above – this was particularly true. They were small creatures, standing only 1.3 m (4′ 3″) tall, and they had none of the advanced tool-making that Homo ergaster had to help them. They really weren’t much more than bipedal apes.
Homo habilis inhabited the same grassy plains in Africa as Homo ergaster. In fact, they co-inhabited, and would no doubt have met on numerous occasions. But even though they were from the same era of human evolution they couldn’t have been more different. Homo habilis had a very flat nose and a protruding jawbone. Big jaw muscles flanked its face, allowing it to chew on tough roots and nuts. By comparison, Homo ergaster could easily have passed for a modern human if dressed in the right clothes. Sure, they still had pronounced eyebrow arches and a sloping forehead but the body was more or less indistinguishable from ours – tall and muscular and made for running.
So what had happened? What had transformed the tiny bipedal chimp into an Olympian athlete? Food. Cooked food. And not even a change in diet; Homo ergaster was eating more or less the same things Homo habilis was: roots, nuts, fruits and the occasional small animal. The difference was that ergaster was cooking its food whilst habilis was eating it raw. And what a difference it made. By heating up the vegetables and meat, fibres were softened and broken up and proteins were denatured. This not only made nutrients more readily available, but it also made the food easier to chew. As a result, ergaster could lose those big jaws and jaw muscles, and the extra easy-access nutrients freed up time from foraging; time that could be put to use experimenting with tool-making and developing cultural behaviours. The combination of all this led to ergaster growing its brain, starting with a 50% increase (compared with habilis) and ending up even bigger some 500,000 years later.
Good food, big brain
That big brain was put to good use. The environment was becoming more and more volatile, with long droughts replaced by lush forests and lakes. The climate was turning unpredictable and only the most adaptable of creatures could keep up.
The increased brain also allowed Homo ergaster to evolve a complex spoken language, which helped them communicate effectively and allowed them to live in bigger social groups, cooperating and helping each other.
It also made us the most deadly assassins ever to have walked the earth. No animal, big or small, was safe, and we had soon developed methods and weapons that allowed us to hunt prey so effectively that we could count on a regular diet of meat.
The energy-rich diet also allowed us to grow tall, reaching 1.9 m (almost 6′ 3″). In addition, it would help in maintaining an even bigger and even more expensive brain, starting an evolution of brain size growth that only ended with the Neanderthals reaching the absolute maximum* possible. Anything bigger, and it would be impossible for a human female to give birth to the baby.
And finally, the new cooked diet made it possible for us to venture out of Africa, exploring Asia and Europe. This gave rise to a whole new species of humans which, if longevity is the criterion, was the most successful of them all: Homo erectus. Meanwhile, back in Africa, ergaster turned into Homo heidelbergensis, which emigrated again to Asia (evolving into the Denisovans) and to Europe, turning into the Neanderthals. Later still, heidelbergensis became Homo sapiens, who emigrated from Africa (again), colonising Europe (again) and Asia (again) and eventually – for the first time ever – reaching the New World. We had finally colonised the whole world**.
‘Natural’? Don’t talk to me about ‘natural’!
It’s amazing to think that all this, all that is human, is the result of the simple use of fire to cook our food. If we hadn’t discovered fire, we would still be the same bipedal apes as Homo habilis. Or rather, we’d have gone extinct. Homo habilis didn’t make it. They just weren’t flexible enough, inventive enough or curious enough.
So don’t come and talk to me about ‘natural’ and how good everything natural is. Homo habilis were natural, and now they’re dead. By contrast, Homo ergaster used technology to change what they ate and to alter themselves physically – both their bodies and their brains. This was far from natural, but it allowed them to evolve and adapt. And their descendants are still here today, populating the planet – one of whom is currently writing this blog post. Being natural is for fossils; embracing technology is the future. And that’s as true today as it was 1.7 million years ago.
* Neanderthals had the biggest brain any human species ever had, with females having an average brain capacity of 1,300 cc and males 1,600 cc. Compare this with our own average brain capacity of 1,100 cc for women and 1,350 cc for men.
** Except Antarctica, which we only started to colonise some hundred years ago.
It’s been a long cold snowy winter. It started in late September last year and it only just ended. That’s almost seven months, more than twice as long as a regular winter.
And this is not the first extreme winter we’ve had in Europe lately. The winter of 2010/2011 was also very cold, with plenty of snow, as was the winter of 2011/2012.
So what’s this all about? Aren’t we supposed to be suffering from the greenhouse effect? Isn’t the polar ice melting away from global warming, keeping all the polar bears on shore, rooting through our rubbish bins?*
There’s very little doubt that we’re indeed experiencing global warming. Temperatures have been rising drastically for the last hundred years or so. There’s also very little doubt that human-produced emissions are behind this rise. The increased temperatures have already had an effect on the weather systems on the planet. Heavy rainfalls have become more common, as have floods and tropical typhoons. And – rather counter-intuitively – the number of very cold nights has increased as well.
The reasons for these meteorological anomalies aren’t completely clear, but it’s pretty certain that we’re pushing our climate out of its point of equilibrium and into a state of chaos. The expected short-term results are extreme weather, failed crops and increased deforestation.
It has also affected the world’s glaciers. The global glacier mass balance has shown negative values for 19 consecutive years now. This means that on average, we’re losing more glacier ice each summer than what builds up during winter.
We’ve already seen the effect of this on the Arctic ice sheet. The amount of summer ice is continuously diminishing and it’s impacting the Arctic wildlife in a drastic way. Polar bears are indeed struggling to find food when they can’t hunt from the ice, and the amount of sunlight hitting the exposed ocean surface increases the amount of algae and might even affect the ocean currents.
“But it’s getting colder, not warmer”
I know. This still doesn’t explain why the winters should suddenly have become so much harsher. If the planet is warming up, why are the winters now so cold and snowy?
Well, perhaps global warming could explain the snowy part. The increased precipitation caused by global warming would also result in more snow in the winters.
For the low temperatures we have to look elsewhere, and the obvious culprit is changes in wind patterns. We’ve had a lot of cold Arctic winds the last few winters, and it’s lowered the mean temperatures by several degrees. But that doesn’t really explain the phenomenon in full. The question then would be: why have the wind patterns changed?
Before trying to answer that question, let’s call the phenomenon by its proper name (nothing can be investigated thoroughly without having a proper name for it, surely?); it’s known as the Arctic Dipole Anomaly. It’s a high-pressure pattern over the North American parts of the Arctic that’s accompanied by a low-pressure zone over Europe. Since air tends to flow from high pressure to low, cold Arctic winds have replaced the otherwise milder Atlantic winds typically dominating European winters. The origin of this Arctic Dipole Anomaly is not known, but it’s likely that it too is linked to global warming.
The end of something, or the beginning of something else?
As I’ve mentioned before, we’re currently living in an ice age. It might not feel like it, since we’re enjoying a temporary interglacial thaw, but we are. The presence of polar glaciers – although diminishing – is a clear indicator that this is the case. The current ice age, known as the Pleistocene glaciation, is already some 2.58 million years old and shows no signs of ending anytime soon. Every 100,000 years or so, a new glacial period starts, lasting 70,000–80,000 years, followed by an interglacial period of 15,000–25,000 years.
So what am I saying? Are we on our way into another glacial period? Is this the end of the Holocene epoch? Well, we don’t know. The glacial-interglacial cycles aren’t exactly regular and seem to be partly governed by chaotic climatological events. But there is this theory of an ice-free Arctic ocean acting as a trigger for glacial periods: that the lack of ice cover would allow moist air to move in over land and result in more snow, adding to the glacial mass balance.
If this is indeed the case, we could be in big trouble, as the Arctic ice sheet is expected to be more or less gone in 5-20 years. And once we’re past the tipping point, the Gulf Stream would shut down, further reducing the temperature in Europe. It would be like a failing chain of dependent power generators running out of fuel one after another. The temperature would plummet and land-based glaciers would start to form, first in Scandinavia and northern Russia, and then in Western and Central Europe, quickly followed by North America. Instant ice age.**
In Norse mythology, we have the concept of Ragnarök – the end of the world, where gods and humanity will perish and all the land will be washed away by a great sea. It begins with a series of particularly harsh winters, where the snow will stay on through the summer, crops will fail and society will fall into chaos. We call this Fimbulwinter, the Great Winter.
Perhaps the old Vikings knew something we’re only now starting to figure out?
* I’ve yet to see any polar bears near my rubbish bin, but I’m certain it’s just a matter of time.
** I say instant, but that’s in geological measures. Even if the changes in temperature and the resulting meteorological effects could be felt within years, we would expect the forming of glaciers to take centuries or even millennia.
I’ve been reading a book. Ok, I’ve been reading several books, as is my habit, and switching between them as I please. But for the moment, I’m mainly reading Present Shock by Douglas Rushkoff. I was lucky enough to see him speak in New York the other month, and he made some very interesting points regarding the history of mass media. As it happens, they coincided with an idea for a blog post I had a while ago, so I thought I might as well use the new info from Rushkoff’s book and get this post done already.
In the old old days, back in the 1980s, they still played music on MTV. You might not think it now, but it was actually a 24 hour channel showing nothing but music videos, with the odd interview thrown in here and there. It was the real Music Television channel.
We all know that’s not been the case for quite some time. Nowadays it’s mainly showing 16 and Pregnant, Teen Mom and Jersey Shore. Reality shows. No scripting – well, not much, anyway – and heavy editing to capture the drama.
Obviously, reality shows are not an invention of MTV. We see them everywhere nowadays, and there are in fact so many of them that there are dozens of shows for each and every letter of the alphabet. We have America’s Next Top Model, Big Brother, Celebrity Circus, Dad Camp, Extreme Makeover, Farmer Wants a Wife, Gay, Straight or Taken?, Hell’s Kitchen, I’m a Celebrity… Get Me Out of Here!, Jackass, Kid Nation, Little Miss Perfect, My Shopping Addiction, Nanny 911, Osbournes, Paris Hilton’s My New BFF, Queer Eye For The Straight Guy, Real Housewives, So You Think You Can Dance, Temptation Island, Undercover Boss, Victoria Beckham: Coming to America, Who Wants To Marry My Dad?, X Factor, You’re Cut Off! Nothing on Z, though. Odd. Perhaps there’s an opening for some kind of Zebra Whisperer show? Or Zombie Dad or something?
There’s something more to it than just capturing moments of the real world though. Almost all of the shows listed above add another element to the mix: humiliation. Whether it’s about getting people to do things they loathe or about exposing people in embarrassing situations, humiliation TV is all about capturing the viewer’s attention by using increasingly shocking material.
Ok, fine; so the current reality television programming is appealing to our most base emotions. That’s hardly news. But reading Rushkoff’s book, I’m beginning to understand the reason behind this trend. And it starts with the end of futurism.
Up till just recently, we were all looking into the future. We had dreams of colonising the planets, creating utopian societies with flying cars and friendly people dressed in white coveralls. But even on a more mundane level, we were leaning into the future. We invested in stocks to see them gain value over time. Even at the turn of the century, we were still trading on the future values of companies and services. An idea’s worth wasn’t what practical use it had today, but rather the potential use it could have in the future.
But that future didn’t really happen, something I’ve mentioned in my post The future isn’t what it used to be. And just after the millennium celebrations had died down, we seemed to suddenly stop and take a good look at what all those promising dot-com companies actually were worth today. And realised that it was… well, not much. The dot-com crash quickly followed, and we stopped leaning into the future and focused more on today.
The end of stories
So what’s this got to do with the fact that we’re currently inundated with reality TV shows? Well, Rushkoff argues – and rather convincingly, I might add – that the end of futurism is linked to a growing mistrust in narrative. Back in the days when the future looked promising and people, companies and governments all proposed future rewards for investments made today, we were also told stories. In almost every aspect of life, we had stories telling us what to believe in, what to think and what to feel. From news channel features to advertising and government propaganda, they all explained things in easy-to-follow stories. And they were all modelled on Aristotle’s classic dramatic structure, consisting of exposition, rising action, climax, falling action and resolution.
But with the growing mistrust in the future, we tended to focus more on the moment, and so complex story arcs were deemed too slow and cumbersome to warrant our attention. This gave rise to the non-story shows like Beavis and Butt-head, where the point of the show was the experience, not the story.
When the major networks started to realise what was going on, they panicked and in a knee-jerk reaction started to produce (or should that be “produce”?) reality TV shows in order to stop people from channel hopping away from potentially boring and complex stories. And to be able to compete, they all made their shows a little bit more dramatic – or humiliating – to make sure that it would grab and keep our attention.
Luckily this is not the end. We won’t have to suffer increasingly more humiliating shows forever. As the major networks’ management comes to better understand non-narrative entertainment, the current – rather immature – reality shows will fade and be replaced by new types of show.
What kind of shows this will be, I don’t know. But I’m cautiously optimistic and hope they’ll be more interesting than watching some semi-celebrity being forced to eat maggots in a jungle somewhere. The future present really can’t come along fast enough.
P.S. I highly recommend reading Rushkoff’s book ‘Present Shock’. It’s well written and easy to read and full of interesting ideas and observations.