You find yourself on a mountain slope, overlooking a wide green valley. The sun is beating down, but is not all that hot yet; after all, it’s still only mid-morning. You can smell the wet dirt from last night’s rainfall and the air is dense with the noise of insects. In the distance, big herds of wildebeest and zebra are slowly moving across the plain, eagerly feeding off the fresh green grass. A small group of elephants is drinking by the shallow river; two of the smaller calves are playing in the water, spraying muddy water from their trunks into the air. White-backed vultures are circling high above, riding the thermals, looking for carrion. This is Ethiopia. Or rather, it will be – in another 1.7 million years.
Turning east, you can make out a thin pillar of smoke rising from the mountain slope. Moving closer, you realise it’s smoke from a campfire; there are humans around. Soon you can hear them, and carefully looking through some dense shrubs you can see them too: a family group of some 15-20 people, gathered around the fire, getting ready for breakfast. They talk and laugh, and the children are running around, chasing each other with small twigs in their hands. It’s all very idyllic and familiar, but these aren’t modern humans. They don’t even belong to our species. What you’re looking at are the evolutionary grandparents of modern humans – this is Homo ergaster.
The campfire in the middle of the clearing is more than just a fire. It’s more than the latest advance in technology. It’s more than a heat source or a means of protection. It’s more than a tool for hardening wooden spearheads and curing animal skins. It’s more fundamental than that. Fire is intrinsically linked with the evolution of humans; it’s the reason we’ve evolved to the modern form, abandoning the ‘chimpanzee-on-two-legs’ look that we’d favoured for millions of years. Fire made us human. It shaped our evolution and kick-started the rapid increase in brain size.
What is so special about fire, then? Sure, it’s a convenient tool and useful in many different ways, but claiming it to be the origin of man is a bit preposterous, isn’t it? Well, no, actually, it’s a plain scientific observation. Fire allowed us to do something no other human species – or indeed any other primate – had been able to do: cook our food.
Life is a constant struggle. Perhaps not so much today, but 1.7 million years ago it certainly was. And for our great-great-great-uncle Homo habilis – the chimpanzee-on-two-legs I mentioned above – this was particularly true. They were small creatures, standing only 1.3 m (4′ 3″) tall, and they had none of the advanced tool-making that Homo ergaster had to help them. They really weren’t much more than bipedal apes.
Homo habilis inhabited the same grassy plains in Africa as Homo ergaster. In fact, they co-inhabited, and would no doubt have met on numerous occasions. But even though they were from the same era of human evolution they couldn’t have been more different. Homo habilis had a very flat nose and a protruding jawbone. Big jaw muscles flanked its face, allowing it to chew on tough roots and nuts. By comparison, Homo ergaster could easily have passed for a modern human if dressed in the right clothes. Sure, they still had pronounced eyebrow arches and a sloping forehead but the body was more or less indistinguishable from ours – tall and muscular and made for running.
So what had happened? What had transformed the tiny bipedal chimp into an Olympian athlete? Food. Cooked food. And not even a change in diet; Homo ergaster was eating more or less the same things Homo habilis was: roots, nuts, fruits and the occasional small animal. The difference was that ergaster was cooking its food whilst habilis was eating it raw. And what a difference it made. Heating up the vegetables and meat softened and broke up fibres and denatured proteins. This not only made nutrients more readily available, but also made the food easier to chew. As a result, ergaster could shrink those big jaws and jaw muscles, and the additional easy-access nutrients freed up time from foraging; time that could be put to use experimenting with tool-making and developing cultural behaviours. The combination of all this led to ergaster growing its brain, starting with a 50% increase (compared with habilis) and ending up even bigger some 500,000 years later.
Good food, big brain
That big brain was put to good use. The environment was becoming more and more volatile and changeable, with long droughts alternating with periods of lush forests and lakes. The climate was turning unpredictable and only the most adaptable of creatures could keep up.
The bigger brain also allowed Homo ergaster to develop a complex spoken language, which helped them communicate effectively and allowed them to live in bigger social groups, cooperating and helping each other.
It also made us the most deadly assassins ever to have walked the earth. No animal, big or small, was safe, and we had soon developed methods and weapons that allowed us to hunt prey so effectively that we could count on a regular diet of meat.
The energy-rich diet also allowed us to grow tall, reaching 1.9 m (almost 6′ 3″). In addition, it would help in maintaining an even bigger and even more expensive brain, starting an evolution of brain size growth that only ended with the Neanderthals reaching the absolute maximum* possible. Anything bigger, and it would be impossible for a human female to give birth to the baby.
And finally, the new cooked diet made it possible for us to venture out of Africa, exploring Asia and Europe. This gave rise to a whole new species of humans which, if longevity is to be the criterion, was the most successful of them all: Homo erectus. Meanwhile, back in Africa, ergaster turned into Homo heidelbergensis, emigrated again to Asia (evolving into Denisovans) and to Europe, turning into Neanderthals. Later still, heidelbergensis became Homo sapiens, emigrated from Africa (again), colonising Europe (again) and Asia (again) and eventually – for the first time ever – reaching the New World. We had finally colonised the whole world**.
‘Natural’? Don’t talk to me about ‘natural’!
It’s amazing to think that all this, all that is human, is a result of the simple use of fire to cook our food. If we hadn’t discovered fire, we would still be the same bipedal apes as Homo habilis. Or rather, we’d have gone extinct. Homo habilis didn’t make it. They just weren’t flexible enough, inventive enough or curious enough.
So don’t come and talk to me about ‘natural’ and how good everything natural is. Homo habilis were natural and now they’re dead. By contrast, Homo ergaster used technology to change what they ate and to alter themselves physically – both their bodies and their brains. This was far from natural, but it allowed them to evolve and adapt. And their descendants are still here today, populating the planet - one of which is currently writing this blog post. Being natural is for fossils; embracing technology is the future. And it’s still as true today as it was 1.7 million years ago.
* Neanderthals had the biggest brain any human species ever had, with females having an average brain capacity of 1,300 cc and males 1,600 cc. Compare this with our own average brain capacity of 1,100 cc for women and 1,350 cc for men.
** Except Antarctica, which we only started to colonise about a hundred years ago.
It’s been a long, cold, snowy winter. It started in late September last year and has only just ended. That’s almost seven months – more than twice as long as a regular winter.
And this is not the first extreme winter we’ve had in Europe lately. The winter of 2010/2011 was also very cold, with plenty of snow, as was the winter of 2011/2012.
So what’s this all about? Aren’t we supposed to be suffering from the greenhouse effect? Isn’t the polar ice melting away from global warming, keeping all the polar bears on shore, rooting through our rubbish bins?*
There’s very little doubt that we’re indeed experiencing global warming. Temperatures have been rising drastically for the last hundred years or so. There’s also very little doubt that human-produced emissions are behind this rise. The increased temperatures have already had an effect on the weather systems of the planet. Heavy rainfalls have become more common, as have floods and tropical typhoons. And – rather counter-intuitively – the number of very cold nights has increased as well.
The reasons for these meteorological anomalies aren’t completely clear, but it’s pretty certain that we’re pushing our climate out of its point of equilibrium and into a state of chaos. The expected short-term results are extreme weather, failed crops and increased deforestation.
It has also affected the world’s glaciers. The global glacier mass balance has shown negative values for 19 consecutive years now. This means that on average, we’re losing more glacier ice each summer than what builds up during winter.
We’ve already seen the effect of this on the Arctic ice sheet. The amount of summer ice is continuously diminishing and it’s impacting the Arctic wildlife in a drastic way. Polar bears are indeed struggling to find food when they can’t hunt from the ice, and the amount of sunlight hitting the naked ocean surface increases the amount of algae and might even affect the ocean currents.
“But it’s getting colder, not warmer”
I know. This still doesn’t explain why the winters should suddenly have become so much harsher. If the planet is warming up, why are the winters now so cold and snowy?
Well, perhaps global warming could explain the snowy part. The increased precipitation caused by global warming would also result in more snow in the winters.
For the low temperatures we have to look elsewhere, and the obvious culprit is changes in wind patterns. We’ve had a lot of cold Arctic winds the last few winters, and they’ve lowered the mean temperatures by several degrees. But that doesn’t really explain the phenomenon in full. The question then becomes: why have the wind patterns changed?
Before trying to answer that question, let’s call the phenomenon by its proper name (nothing can be investigated thoroughly without a proper name, surely?); it’s known as the Arctic Dipole Anomaly. It’s a high-pressure pattern over the North American side of the Arctic, accompanied by a low-pressure zone over Europe. Since air tends to flow from high pressure to low, cold Arctic winds have replaced the otherwise milder Atlantic winds that typically dominate European winters. The origin of this Arctic Dipole Anomaly is not known, but it’s likely that it’s also linked to global warming.
The end of something, or the beginning of something else?
As I’ve mentioned before, we’re currently living in an ice age. It might not feel like it, since we’re enjoying a temporary interglacial thaw, but we are. The presence of polar glaciers – although diminishing – is a clear indicator that this is the case. The current ice age, known as the Pleistocene glaciation, is already some 2.58 million years old and shows no signs of ending anytime soon. Every 100,000 years or so a new glacial period starts, lasting 70-80,000 years, followed by an interglacial period of 15-25,000 years.
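That cycle arithmetic can be sanity-checked in a few lines of Python. This is purely a back-of-the-envelope sketch using the rough durations quoted above, not measured climate data:

```python
# Back-of-the-envelope check of the glacial cycle figures quoted above.
# These are the approximate durations from the text, not measured data.
glacial = (70_000, 80_000)        # length of a glacial period, in years
interglacial = (15_000, 25_000)   # length of an interglacial, in years

cycle_min = glacial[0] + interglacial[0]
cycle_max = glacial[1] + interglacial[1]
print(f"full cycle: {cycle_min:,} to {cycle_max:,} years")  # brackets the ~100,000-year figure

ice_age_years = 2_580_000  # approximate age of the Pleistocene glaciation
print(f"complete cycles so far: roughly {ice_age_years // 100_000}")
```

Reassuringly, the two ranges bracket the 100,000-year figure, and a 2.58-million-year ice age works out to roughly 25 complete glacial-interglacial cycles.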
So what am I saying? Are we on our way into another glacial period? Is this the end of the Holocene epoch? Well, we don’t know. The glacial-interglacial cycles aren’t exactly regular and seem to be partly governed by chaotic climatological events. But there is this theory of an ice-free Arctic ocean acting as a trigger for glacial periods; that the lack of ice cover would promote moist air to move in over land and result in more snow, adding to the glacial mass balance.
If this is indeed the case, we could be in big trouble, as the Arctic ice sheet is expected to be more or less gone in 5-20 years. And once we’re past the tipping point, the Gulf Stream would shut down, further reducing the temperature in Europe. It would be like a chain of interdependent power generators failing one after another as they run out of fuel. The temperature would plummet and land-based glaciers would start to form, first in Scandinavia and northern Russia, then in Western and Central Europe, quickly followed by North America. Instant ice age.**
In Norse mythology, we have the concept of Ragnarök – the end of the world, where gods and humanity will perish and all the land will be washed away by a great sea. It begins with a series of particularly harsh winters, where the snow will stay on through the summer, crops will fail and society will fall into chaos. We call this Fimbulwinter, the Great Winter.
Perhaps the old Vikings knew something we’re only now starting to figure out?
* I’ve yet to see any polar bears near my rubbish bin, but I’m certain it’s just a matter of time.
** I say instant, but that’s in geological measures. Even if the changes in temperature and the resulting meteorological effects could be felt within years, we would expect the forming of glaciers to take centuries or even millennia.
I’ve been reading a book. Ok, I’ve been reading several books, as is my habit, and switching between them as I please. But for the moment, I’m mainly reading Present Shock by Douglas Rushkoff. I was lucky enough to see him speak in New York the other month, and he made some very interesting points regarding the history of mass media. As it happens, they coincided with an idea for a blog post I had a while ago, so I thought I might as well use the new info from Rushkoff’s book and get this post done already.
In the old old days, back in the 1980s, they still played music on MTV. You might not think it now, but it was actually a 24-hour channel showing nothing but music videos, with the odd interview thrown in here and there. It was the real Music Television channel.
We all know that’s not been the case for quite some time. Nowadays it’s mainly showing 16 and Pregnant, Teen Mom and Jersey Shore. Reality shows. No scripting – well, not much, anyway – and heavy editing to capture the drama.
Obviously, reality shows are not an invention of MTV. We see them everywhere nowadays, and there are in fact so many of them that there are dozens of shows for each and every letter of the alphabet. We have America’s Next Top Model, Big Brother, Celebrity Circus, Dad Camp, Extreme Makeover, Farmer Wants a Wife, Gay, Straight or Taken?, Hell’s Kitchen, I’m a Celebrity… Get Me Out of Here!, Jackass, Kid Nation, Little Miss Perfect, My Shopping Addiction, Nanny 911, Osbournes, Paris Hilton’s My New BFF, Queer Eye For The Straight Guy, Real Housewives, So You Think You Can Dance, Temptation Island, Undercover Boss, Victoria Beckham: Coming to America, Who Wants To Marry My Dad?, X Factor, You’re Cut Off! Nothing on Z, though. Odd. Perhaps there’s an opening for some kind of Zebra Whisperer show? Or Zombie Dad or something?
There’s something more to it than just capturing moments of the real world though. Almost all of the shows listed above add another element to the mix: humiliation. Whether it’s about getting people to do things they loathe or about exposing people in embarrassing situations, humiliation TV is all about capturing the viewer’s attention with increasingly shocking material.
Ok, fine; so the current reality television programming is appealing to our most base emotions. That’s hardly news. But reading Rushkoff’s book, I’m beginning to understand the reason behind this trend. And it starts with the end of futurism.
Until just recently, we were all looking into the future. We had dreams of colonising the planets, creating utopian societies with flying cars and friendly people dressed in white coveralls. But even on a more mundane level, we were leaning into the future. We invested in stocks to see them gain value over time. Even at the turn of the century, we were still trading on the future values of companies and services. An idea’s worth wasn’t the practical use it had today, but rather the potential use it could have in the future.
But that future didn’t really happen, something I’ve mentioned in my post The future isn’t what it used to be. And just after the millennium celebrations had died down, we seemed to suddenly stop and take a good look at what all those promising dot-com companies were actually worth today. And realised that it was… well, not much. The dot-com crash quickly followed, and we stopped leaning into the future and focused more on today.
The end of stories
So what’s this got to do with the fact that we’re currently inundated with reality TV shows? Well, Rushkoff argues – rather convincingly, I might add – that the end of futurism is linked to a growing mistrust of narrative. Back in the days when the future looked promising and people, companies and governments all proposed future rewards for investments made today, we were also told stories. In almost every aspect of life, we had stories telling us what to believe in, what to think and what to feel. From news channel features to advertising and government propaganda, they all explained things in easy-to-follow stories. And they were all modelled on the classic Aristotelian dramatic structure, consisting of exposition, rising action, climax, falling action and resolution.
But with the growing mistrust in the future, we tended to focus more on the moment, and so complex story arcs were deemed too slow and cumbersome to warrant our attention. This gave rise to the non-story shows like Beavis and Butt-head, where the point of the show was the experience, not the story.
When the major networks started to realise what was going on, they panicked and in a knee-jerk reaction started to produce (or should that be “produce”?) reality TV shows in order to stop people from channel hopping away from potentially boring and complex stories. And to be able to compete, they all made their shows a little bit more dramatic – or humiliating – to make sure that it would grab and keep our attention.
Luckily, this is not the end. We won’t have to suffer ever more humiliating shows forever. As the major networks’ understanding of non-narrative television matures, the current – rather immature – reality shows will fade and be replaced by new types of shows.
What kind of shows this will be, I don’t know. But I’m cautiously optimistic and hope they’ll be more interesting than watching some semi-celebrity being forced to eat maggots in a jungle somewhere. The future present really can’t come along fast enough.
P.S. I highly recommend reading Rushkoff’s book ‘Present Shock’. It’s well written, easy to read and full of interesting ideas and observations.
I’ve been thinking quite a lot about the state of our societies lately, and in particular the way we govern them. Democracy seems to be the method of choice at the moment, something I mentioned a while back (years ago, actually) in The age of democracies. Even though I still stand by the views of that post, it was sort of a rant, with me complaining about how bad things are without really coming up with any suggestions on how to make things better. Hence this post: what we need to do to achieve a more viable form of government – a Democracy 2.0, if you will.
For those of you who don’t care for jumping between posts or just feel a bit lazy, I’ll quickly recap what I see as the problems of today’s society:
- People don’t think. Even though we arguably have the most advanced brains on the planet, we sure don’t like to use them a lot. It’s the four levels of ignorance – people don’t read, people don’t listen, people don’t think and people don’t care. This doesn’t bode well for a democratic society where everyone’s vote is equal, regardless of one’s knowledge (or lack thereof) of current affairs. Read more in my post The limbic society.
- Modern societies are extremely complex. Today’s societies are suffering from a multitude of extremely complex issues, many of which are very long-term in scale. Our current method of government, with new governments being elected every three or four years, more or less guarantees that these issues will become marginalised and ignored. This was discussed in the previously mentioned post The age of democracies.
- No one really cares. Again, this was mentioned in my Limbic society post, where I highlighted how difficult it is to get people to care about things, even if it affects them directly. If we can’t even get people to change their habits in order to save their own lives, what hope is there for us ever getting a working democracy?
So much for the problems. Whining’s done. Now let’s focus on the solutions. What can we do to change the world?
The first thing is pretty obvious: let’s extend the period the elected government stays in power. Instead of changing the government every three or four years, we could let it rule for 25 or even 50 years. This would allow it to make less popular decisions, like diverting funds to address environmental problems or tackling unemployment and welfare issues efficiently, without risking being voted out of power and replaced by some extremist fringe party.
There are some drawbacks with this method though. Being allowed to vote only once or twice in your lifetime means that making the wrong choice could leave you stuck with a rubbish (in the best case) or malicious (in the worst case) government that has neither the capability nor the incentive to make any improvements. If we’re not careful we’d end up with what is essentially a one-party state – including personality cults and government mind-control and propaganda – with no hope of change for another half a century or so.
Ok, so perhaps that’s not the best way to go. But the pseudo-dictatorship of long-term democracy could point the way to another possible solution: getting rid of democracy altogether.
Dictatorship is such an ugly word. It conjures images of cold-hearted tyrants, oblivious to their subjects’ lives and problems and with little or no interest in making changes for the better. But in the early days of European democracy, back in the 18th century, it was seen as a viable alternative to letting the unwashed masses in on the power. After all, what did a peasant from Cornwall know about the diplomatic issues between Germany, France and Britain? Or a goatherder from Lyon? No, it would be better if a well-educated and benign ruler took charge of society and – with the aid of advisers – ruled the country in the best interest of everyone.
This form of government does have its merits. With a single powerful person in charge, necessary changes can be made swiftly and efficiently. And if that sounds like the way successful businesses are run it’s no coincidence; they’ve all recognized that democracy is not the way to go when you need to get things done – a strong and dynamic leadership is required.
Dictatorship is not all sunshine and fluffy bunnies though. Get the wrong person in charge and you’re in for a lifetime of suffering. And, let’s admit it, there are plenty of wrong people around.
Robots to the rescue
A third possible solution would be to get rid of human rulers altogether and replace them with autonomous systems controlling all the complex aspects of modern society. If we could come up with computer systems clever enough to pass laws, manage global finance and conduct diplomatic negotiations, we should have no need for humans in our governments. And, hand on heart, are we really doing all that good a job ourselves currently? Or historically?
Technical issues aside (for one thing, making sure no one could maliciously add code to modify the system), this approach has some drawbacks. It would need to be a self-improving system, capable of learning from its own mistakes. And being an artificial system, people might have issues with it, not wanting to be governed by machines. (Humans are strange like that.) I can foresee an anti-machine underground movement, performing terrorist attacks against what they would see as an evil dictatorship, even if things would be better than ever. The perceived lack of freedom could potentially fuel a violent revolt, bringing us back to a world of scarcity and suffering.
As you can see, the solutions outlined above all have potential downsides. None of them would be able to fix all the problems by themselves. This seems to be more difficult than I thought.
But then I had an idea. I was reading Douglas Rushkoff’s book Present Shock, where he described how we’re willing to abdicate our free will to apps and programs as long as they are convenient and make our lives easier. Think of Facebook, Amazon, FourSquare, PayPal and many others. Even when news reaches us of new freedom-limiting terms and conditions for these apps, we keep on using them. After all, how could we not? They’re just so convenient. And anyway, everyone else is using them, so how bad can it really be?
My idea was that we create a kind of butler-ware app. An app designed to make the complex issues of modern-day society easily digestible and understandable. It would become your personal advisor, not only in political matters but in any aspect of life where you need some guidance. Wondering how refined sugar affects your body? The app would inform you of the latest findings. How could you make sure your daily commute has the least possible negative impact on the environment? The app would know. Will computer games harm your children’s brain development? Ask the app.
And when it comes to voting, it wouldn’t even have to work. Its main purpose would be to give the impression that we still had some input into the governing of society. The actual governing could then be taken care of by autonomous systems (see above) behind the scenes. This ruse would remove the issue of humans revolting to ‘free us from the tyranny of the machines’. It might not be democracy, but it would be an efficient, peaceful and humane form of government.
A brave new world
So there you have it. We’ve just solved the combined problems of environmental issues, political turbulence, poverty and over-population. With a docile and malleable populace, and powerful automatic systems governing the world, we’ve essentially created a utopia. Well, apart from the fact that people are never happy anyway. They will always find something to be upset about. But the main point is that the world is safe, and the humans are safe with it.
By the way, sorry about the length of this post, but we have after all saved the world. That’s worth a few extra words, is it not?
I met a good friend of mine in New York the other day. Ok, so that might not sound particularly noteworthy if it wasn’t for a couple of facts: firstly, I don’t live in New York (or in the United States, as it happens) and secondly, we had never met before.
But let’s start at the beginning.
I got to know Amy on Twitter some 16 months ago. She was introduced to me by another Twitter friend (Hi Lisa!) and I started to follow her on Twitter and on her blog*. She turned out to be funny, intelligent, quirky and kind. Over time I got to know her well, and nowadays we email, text, tweet, Facebook and/or blog post comment each other 10 times a day or more.
We’ve been sending each other parcels of stuff from our respective countries. After the house fire, Amy sent me some American sweets and a bunch of CDs with music for when I was driving across Europe during the move from the Isle of Man to Finland. In turn, I’ve sent her Finnish chocolate, European music and a few Swedish films (no, not that kind of Swedish films – don’t be silly). I also sent her some salty liquorice (a.k.a. salmiakki), which she bravely tried:
Go to 3:30 if you’re in a hurry and want to cut to the chase. But you’ll miss a lot of awesome Amy-talk.
We’ve always joked about the possibility of meeting in real life some day. Amy wrote an open letter to some rich people with a proposition to sponsor her trip to Europe. And I’ve always wanted to go to New York, which isn’t that far from where Amy lives. But with a family and two small children it didn’t really seem plausible.
But then, an opportunity presented itself. The company I work for regularly sends people on courses and conferences to help employees expand their knowledge and improve in their field of expertise. So, whilst I was home on paternity leave a while back, one of my colleagues contacted me and said that he and another colleague were going to a web development conference in New York, and wondered if I’d be interested in coming along. Now, the WebVisions conference looked really interesting, with lots of talks and workshops on what we’re currently working on, so I was really eager to go. Plus, it was taking place in Manhattan, which was very cool. What if I could meet up with Amy for a few hours? Wouldn’t that be the best?
The trip got approved and planned, and I contacted Amy about the possibility of meeting up on Saturday. She got really excited and was not at all worried that I’d be some kind of psychopath or serial killer. Or even a female truck driver**. Imagine that. So she decided she would take the train down to New York City on Saturday morning and then the train back late in the evening. That way, we’d get a full 12 hours of New Yorking together.
The weeks flew by, and before I knew it, it was time to get on the plane to New York. Only, we wouldn’t go directly to New York. First stop was Helsinki, to pick up the third member of our little group. Then on to Frankfurt for transfer to New York JFK. So the trip turned out to be truly multi-lingual, starting in Swedish, then turning Finnish, then German and finally American English.
It was a long journey. Starting early Tuesday morning, we were on the go for 25 hours before finally arriving at our hotel in Manhattan late Tuesday evening. As we got there, we were tired and hungry and I’m not too proud to admit it – we McDonalded. *hangs head in shame*
New York is a big city. Now, I’ve been in big cities before. London is pretty big and Kuala Lumpur is even bigger. But New York is bigger still; its metropolitan area is home to some 18 million people, way more than the combined population of Sweden and Finland.
New York City has a buzz, a pulse that’s very energetic. That’s not only down to the amount of traffic (which is substantial). It’s something to do with the attitude of the people living there. They are always in a hurry, always on their way somewhere else. But New Yorkers are still friendly people and will help you if you’re lost or have problems with your Metrocard, which is kind of impressive.
We stayed at the hotel Eventi, which was very fancy, with big rooms and great views. In the mornings, we had breakfast in a diner and then took the subway to the conference in Lower Manhattan. It made me feel very metropolitan.
The days in New York flew by, the conference ended and suddenly it was Saturday. Amy’s train was due in at Penn Station at 9:30 in the morning, and then we were to meet in the lobby of my hotel, which was just a couple of blocks from the station.
At 9:30 sharp I got a text from Amy: “Am here!!! On my way, maybe 5, 10 minutes?” I told her I was going down to the lobby to wait for her there. So there I sat, waiting. I was getting nervous. What if we didn’t have anything to talk about? What if it was going to be all awkward? What if she didn’t like me in real life? Perhaps this was actually a really bad idea? But then suddenly there she was! We smiled, hugged and talked and everything was just fine. No awkwardness at all. So she dropped her stuff off in my room, and out we went to do some New Yorking.
Now, I’m not going to describe the whole day in this post. Suffice to say that we went to the Central Park Zoo. We had lunch. We did some shopping. We walked all over the place. And we talked and talked and talked. (If you’re interested in a more in-depth account of the day, I’ll refer you to Amy’s posts (yes, that’s ‘posts’ as in plural, because there are several) on the subject: Part 1, Part 2, Part 3, Part 4 and Part 5. She goes into plenty of details on what we did and where we went. Also, she describes it much better than I ever could, being a published author and everything. I whole-heartedly recommend you read them if you haven’t already.)
As I mentioned in the beginning, we’ve been friends for some 16 months. We’ve tweeted a lot. We’ve blogged and commented on each other’s posts. We’ve even Skyped a couple of times. We just had never met in real life. But as it turned out, that wasn’t really an issue. Real life-Amy is just like online-Amy: funny, quirky, intelligent and kind. She’s generous and friendly and always excited about learning new things. She also seems to genuinely enjoy discussing even the most abstract and odd of subjects. In short: she’s just like the Amy I already knew and loved from the Internet.
No, not ‘online friends’…
So yes, we met online. We got to know each other online. We communicate almost exclusively online. And, until that day in New York at the beginning of March, we’d never met. All of that is true, but I don’t see it as much of a problem. Of course it’s preferable to be able to meet in real life. But who are we to dictate how we’re supposed to meet? Finding friends is a rare thing and finding really good friends is even rarer. I would feel very ungrateful complaining about the medium of our friendship.
And actually? The time difference is more of a problem than the online thing. With me in Finland (GMT+2) and Amy in the USA (GMT-5), we’re only awake at the same time for short periods each day. This is rather annoying. But it does have the benefit that there’s always someone to talk to if Baby Boy wakes me up in the middle of the night.
In the end, the simple truth is this: I’m so very lucky to know Amy. And luckier still to be able to call her my friend. Because that’s what we are: good friends.
Plain and simple.
* This might sound stalkery, but I can assure you that this is normal online behaviour and not nefarious at all.
** Amy’s dad is convinced that everyone on the internet is a cat-fishing female truck driver.
We’re incredibly lucky. The world could have been a completely different place; a world without humans. There would have been no Giza pyramids, no Great Wall of China, no Roman Empire, no International Space Station. *dramatic pause* No hummus.
It’s a story of hardship, disasters and conflicting theories.
In a world far far away, a long time ago
Modern humans show very limited genetic variation. In fact, if we randomly pick two people from anywhere in the world and compare their DNA, they will be more genetically similar than two mountain gorillas from the same troop*. All humans alive today are, in essence, cousins.** But how can that be? What happened to our genetic variation? To get the answer we need to – once again – visit those prehistoric grassy plains in Africa from where we came.
Several hundred thousand years ago, modern humans existed only in Africa, slowly expanding north through what were then the fertile grassy plains of the Sahara. But then, some 70,000 years ago, the Indonesian volcano Toba erupted in a vast cloud of ash and smoke, triggering a volcanic winter that lasted several years and kick-starting the latest ice age. Our population suddenly collapsed to a fraction of its previous size, and only a few thousand humans remained in the whole world.
This was obviously disastrous. Our gene pool shrank to a puddle, and even a single epidemic outbreak could easily have killed off all humans in one go. And thus, even to this day, human beings have very limited genetic variation.
It’s a lovely theory, full of drama and perseverance against all odds. But unfortunately it looks like it might be incorrect. The Toba super-eruption did take place, and the latest ice age (or rather the latest glacial period in the current ice age) did start about that time, but studies of the full genetic material of humans show no evidence of a drastic population bottleneck at that time. Bummer.
There is however another theory that can explain our lack of genetic variation.
Going out with a fizz, not a bang
In the year 2000, a study of human population bottlenecks found that there had indeed been a drastic reduction of the human population. But it hadn’t happened 70,000 years ago. And it wasn’t a single dramatic event. Instead, the study found that early humans most likely suffered a sustained, drawn-out population bottleneck ca. 2 million years ago. It seems likely that our world-wide population was as low as 2,000 individuals for perhaps as long as 100,000 years.
Now, 2 million years ago human beings weren’t modern. In fact, even though they were our direct ancestors and began the unbroken lineage to the current human population, they were sufficiently different from us to be defined as a different species: Homo ergaster.
But regardless of which chronospecies we belonged to, it seems we had a really hard time. With a world population of only 2,000 individuals for an extended period of time, we would – in today’s conservation terminology – be classified as a critically endangered species. But, as luck would have it, instead of going extinct we spawned several new species from isolated population pockets. And even if most of those species went extinct, more than a million years later Homo ergaster was still around, together with a sister species that had spread to Asia: Homo erectus. And Homo heidelbergensis had appeared (and would later give rise to Homo neanderthalensis, the elusive and still un-named Denisovans, as well as modern humans: Homo sapiens). So our toughest challenge was also the key to our most prolific burst of speciation, and helped us spread across the world.
But a story about the origin of modern humans wouldn’t be complete without mentioning Adam and Eve. No, not the biblical ones. The real ones. Let’s start with Eve, since she came first.
All humans in the world today are descendants of a single woman who lived some 200,000 years ago. She is known as Eve, or Mitochondrial Eve.
Now, the concept of Mitochondrial Eve might warrant some further explanation. Mitochondria are the power plants of our cells, converting glucose and other molecules to ATP – the cell’s main form of chemical energy. They have their own DNA, separate from the main DNA housed in the cell’s nucleus. It stays unaffected by cell division and recombination, and only really changes through the slow process of mutation. As mitochondria are inherited exclusively from the mother (sperm cells only transfer nuclear DNA to the egg cell during fertilisation), we can trace the human lineage on the maternal side by studying mutations in the mitochondrial DNA.
Even though all living human beings are descendants of a single female, it doesn’t follow that there was only one woman left, and that her children had to mate with each other (ew!). Rather, many thousands of women probably existed, spread over a number of tribes. But as chance would have it, all the other women’s daughters sooner or later failed to have daughters of their own. And so their lineages of mitochondrial DNA were broken.
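This die-off of matrilines needs no catastrophe; pure chance is enough. Here’s a toy simulation to illustrate the idea (the founder count, generation count and daughter distribution are all made up for illustration, and this is not the model the actual studies use): each woman has 0, 1 or 2 daughters at random, and we watch how many founding mothers still have an unbroken daughters-only line.

```python
import random

def simulate_matrilines(n_founders=500, generations=2000, seed=42):
    """Toy branching-process simulation of mitochondrial lineages.

    Each woman has 0, 1 or 2 daughters with equal probability (a made-up
    distribution averaging one daughter, so the population stays roughly
    stable). We track how many founding mothers still have living
    female-line descendants after the given number of generations.
    """
    random.seed(seed)
    # founder id -> number of living women in that matriline
    lineages = {founder: 1 for founder in range(n_founders)}
    for _ in range(generations):
        survivors = {}
        for founder, women in lineages.items():
            daughters = sum(random.choice((0, 1, 2)) for _ in range(women))
            if daughters > 0:
                survivors[founder] = daughters
        lineages = survivors
        if len(lineages) <= 1:  # one matriline left: a 'Mitochondrial Eve'
            break
    return lineages

remaining = simulate_matrilines()
print(f"founding mothers: 500, surviving matrilines: {len(remaining)}")
```

Run it with different seeds and the same pattern appears every time: even with a stable overall population, lineage after lineage is broken by nothing but luck, until only one (or none) of the founding matrilines is left.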
The end result is that all humans alive today have inherited their mitochondria from the set that an ordinary tribeswoman somewhere in eastern Africa carried in her cells some 200,000 years ago. Which I think is pretty cool.
There is also a real Adam, called Y-chromosomal Adam, even though he’s not as interesting as Eve.
Y-chromosomal Adam is the man whose Y-chromosome can be traced to every living man today. Since the Y-chromosome doesn’t recombine with other Y-chromosomes (only males carry one, and they only carry a single copy), it can be tracked back in time like the mitochondrial DNA of Eve. But, since only men carry Y-chromosomes, only men are direct descendants of Adam, not all humans. Which is why it’s not as interesting as the Mitochondrial Eve thing.
A bit more interesting is that Eve lived about 200,000 years ago and Adam some 142,000 years ago. That’s a gap of nearly 60,000 years – give or take a few thousand. So Adam and Eve never met. In fact, the biblical story sort of sounds like a crude misinterpretation of our phylogenetic history, as if it had been told to some illiterate goat herders many thousands of years ago, passed on as a magic fairytale from generation to generation until finally written down thousands of years later. Which of course it couldn’t have been. I mean, who would have told them the story in the first place? Human technology at the time – although impressive – did not include advanced genetic sequencing and supercomputers for analysing the results.
So it looks like the biblical Adam and Eve have very little to do with reality. Something we probably should be thankful for, considering the story’s inherently sexist, demeaning and generally disturbing nature. It is a fairytale, and not even a very good one.
Luckily the truth is much more interesting.
* Mountain gorillas live in groups of 20-50 animals called troops. One dominant male (a silverback) mates with all the females in the troop, limiting the genetic variation within the group. But even so, they display a much higher level of genetic variation than humans.
** Our limited genetic variation might in part help explain our instinctive dislike of incest. Since we’re all so very closely related already, any incestuous behaviour might result in heavily inbred offspring, with a high probability of them ending up suffering from some horrible inherited disease. This is not so much an issue in other, more genetically diverse, species. Like rats, for instance, who are much more liberal in their mating behaviour.