Regardless of what the title might suggest, this post is not a sociopolitical comment on the current state of the Western world. Therefore, I will not mention the latest Presidential election in the USA, nor the worrying emergence of fascist nationalist parties in parliaments all over Europe. No, this is a more personal post, on the topic of time, mental welfare and creativity.
If that sounds a bit cryptic, I shall endeavour to explain what I mean in more detail below.
Time, or the lack thereof
I’m a father of two small children. This means that time is always lacking, what with getting the kids to daycare, getting myself to work, getting the work tasks done, going home in time to pick up the kids or at least be home for dinner, getting them ready for bed and convincing them to go to sleep. Time – or at least time for oneself – is a luxury. Once the day’s over, I usually collapse on the sofa, too tired to even watch a television show.
Being an introvert means this is extra problematic, since I get very little time to recharge. As a result I get cranky, stressed and snappy, and not really the person I’d like to be.
It also means that I’m tired all the time, on the verge of being exhausted, which means I tend to get distracted and forget things. And even when I don’t forget things, it always feels like I have, which doesn’t help with the stress.
All of this affects my mental welfare, pushing me closer and closer to the abyss of depression.
The dark abyss
Depression is an ugly thing. It’s a dark cold indifference to life, where nothing really matters, except the overwhelming sense of tiredness and exhaustion.
But more than that, it’s a delusional state of mind: the brain lies to you, trying to convince you that all’s hopeless and lost, that your whole existence is pointless and that you as a person are utterly worthless.
In the grand scheme of things, this is probably all true. A couple of million years from now, my life will have mattered very little, if at all. But we live here and now, and to our families and loved ones we do indeed matter.
Fighting depression is about fighting hopelessness. And to do that we need all the mental energy we can get. Which is ironic, since that is exactly what depression robs us of. So, how to break the deadlock? How to get that elusive energy to start fighting back?
The creative solution
There are of course pharmaceuticals for treating depression, but for a range of reasons this might not always be a viable alternative. What else can we do?
Well, since the main challenge in fighting depression is to get some mental energy, we need to figure out ways of generating some. While this might be easier said than done, there are some methods available. One of these is to do something creative. For me, that means taking photos, writing blog posts or making music, but it could be just about anything that tickles your fancy.
Creativity is a funny old thing; it absorbs you and your mind’s focus in a way that makes time and space disappear. We get so into what we’re doing that we don’t notice the flight of time, or even the place we’re in. Then, after a few hours, we look up and wonder: “What time is it? When did it become dark outside?” It’s like coming up to the surface after having been scuba-diving.
This intense focus we use in our creative processes might sound counter-productive. After all, how can being hyper-focused help? Doesn’t that just drain us even more? Well, yes: it does require a base-level of mental energy to get started, but focusing helps us forget our worries and self-doubt. There’s no room in our brains for negative thoughts when we’re busy getting a minute detail of what we’re creating just right. In a way, it works just like meditation – by focusing all our attention on one single thing, we block out all our other thoughts.
The end-result is that, after that initial required energy to get started, we emerge on the other side invigorated, happy and content. Of course, it’s not a simple fix-and-forget, and our brain will continue to tell us we’re worthless given any chance. But with each creative session, we get more and more energy. Soon, we have the will-power to actively fight back and swat away those negative thoughts as they appear.
Don’t forget to remember
So for me, it’s a continuous and ongoing struggle. Stress and exhaustion push me down the slippery slope to depression, which makes me even more tired and paralysed. It’s a vicious circle fed by the lack of personal time, and it makes it difficult to motivate myself for what I know I so desperately need: being creative.
But once I manage to get myself to spend an hour or two making some music or writing a blog post, I feel much better. And I vow to remember that feeling the next time I’m all tired and lethargic. For me, being creative is more than a hobby or a pastime – it’s self-medication.
I’m old. Ok, I might not be a Methuselah yet, but I’m quite old and certainly old enough to have grown up without any access to computers, tablets, smartphones or even the huge ever-growing pulsating internet. Screen-wise, those olden days were pretty bare. We had the telly and… well, that was more or less it, unless you counted the LCD display on someone’s digital Casio watch.
Fast forward to the 21st century and we’re surrounded by screens. So much so, that concerns have been raised about whether all these screens are really good for us. Especially if you’re a parent, in which case you’re no doubt familiar with the concept of ‘screen time’.
As a parent, you’re responsible for your child’s health, and are therefore most helpfully inundated with (often contradictory) information as to what is good and bad for your offspring. You’re no doubt well-informed on everything from dietary needs and forms of exercise to mental stimulation and creative outlets suitable for kids. Additionally, you’re most likely also well aware of less ideal ways of spending time, like watching telly or being online.
The latter two are the ones responsible for the birth of the phrase ‘screen time’, where we allocate a certain amount of time the kids get to spend in front of a screen per day. This will help to prevent any negative consequences of being exposed to computer and television screens.
But hold on a minute. Negative consequences? What negative consequences? Are screens actually dangerous to our health?
Well… Yes and no. Old CRT screens (ah, that brings me back…) did contain electron guns – three of them in fact, one for each primary colour in the RGB spectrum – that fired electrons at a high velocity at a grille that was situated in the screen surface itself. Hence, a small amount of ionising radiation could possibly leak from the screen and hit whoever sat in front of it.
In practice, the amount of radiation (mainly in the form of x-rays) turned out to be rather modest and was generally considered to be harmless to humans. And with the advent of flat screen technology, emitted radiation was limited to visible light and therefore no more damaging than a dim table lamp.
There are however other, less direct consequences of screen usage that are more related to lifestyle choices (something I addressed in my post Fat and fit? a while back). Sitting still often and for extended periods of time will eventually affect your health and could potentially lead to anxiety, high blood pressure, diabetes, osteoporosis, colon cancer and death.
Better safe than sorry
This has led to some parents carefully monitoring the amount of time their children spend in front of screens, often limiting it to 1-2 hours per day. And scary pictures going viral on social media of toddlers staring emptily at TV screens as if hypnotised help to reinforce the perceived need for this control.
Seeing kids being completely absorbed by smartphones and tablets is equally unnerving; most likely because we recognise our own compulsive behaviour and want to avoid instilling similar habits in our children.
The result is the old-fashioned and still-going-strong response of “What are you doing sitting here inside all day? Go and play outside in the fresh air! Do something fun, or go and create something instead of just sitting there like a zombie!”
The hidden danger?
It’s a time-honoured response, and I’m sure I’ll use that phrase or similar on my kids just like my parents did on me. But there’s a twist here, lurking in the shadows. If it’s not the screens themselves that are dangerous, but rather the lack of physical activity, we have another potentially damaging sedentary behaviour we need to stem before it ruins our children irreparably: reading.
I’m of course being facetious; reading isn’t bad for you as such. But my point is nonetheless a serious one – we don’t object to people reading a book as much as we object to people playing video games or watching YouTube videos. And the only reason for this I can think of (apart from the good old technophobic one) is that it creates a sense of exclusion. The person fully absorbed in the non-real world of media is essentially shunning you in favour of it. It’s more fun being there alone than here in the real world with you.
And actually, it wasn’t long ago that reading was treated with the same contempt and disdain as screen use is today. It just wasn’t seen as natural, disappearing into a make-believe world like that. The difference between books and computers/phones/tablets is mainly one of degree: it’s easier to quickly become absorbed in multimedia and harder to be distracted from it. But in essence it’s the same phenomenon: escapism.
Before you start flaming me, let me assure you that I am aware of the differences between actively and passively consuming media. There’s a level of imagination required to make up a world from just written words that’s not called on when watching television. We can zone out watching the latest series, but need to stay focused to make sense of a book.
But – and this is a big but – screen time isn’t just about vegging out watching telly or passively consuming YouTube videos. It’s also about creating, imagining, exploring, inventing and generally challenging one’s limitations and shortcomings. Be it in the form of figuring out how to get past a particularly tricky obstacle in a video game, or getting that new blog theme to behave as you want it to, or writing a composed reply to that hateful post that upset you so much, screen time can be filled with challenging tasks and scenarios.
Now, I haven’t seen any fMRI studies of the potential differences between reading a book and using a screen, but I suspect that the results of such a study would be inconclusive. There’s just such a wealth of different experiences in either scenario that it would be nigh on impossible to separate them statistically.
My point, then – at last – is that we should focus less on the evils of screen time and more on the evils of sedentary behaviour. Using computers, smartphones or tablets isn’t automatically bad in itself, but if you spend all your waking hours in front of a screen it will have detrimental effects on your health. As always, it’s about moderation: enjoy that video game, read your Facebook feed, watch that latest episode of Dr Who (if you must). Use your screen, let the kids use the screens, but let’s not use the screens all the time.
You might even let them read a book or two…
Now, where did we leave off last time? Ah yes, that’s right: real-world applications of artificial intelligences.
Imagine a world in which artificials have existed for a few decades. They are now in charge of complex and cumbersome tasks like managing large corporations, governing nation states and handling international politics. They’ve gained basic non-human-person rights.
The environmental issues are now under control, or at least kept from getting any worse. Oil and coal are no longer used for fuel, and every single person, gadget, vehicle and appliance is online and connected. There’s peace in the Middle East.
A brave new world
Sounds like a dream scenario? A utopia? Perhaps, but there’s a flip side to all this. As the world stabilises, unemployment levels have skyrocketed. Industry has finally vanished, or rather transformed into a mix of automated factories and local 3D print shops, leaving hundreds of millions of people without a job.
Agriculture is lingering on, but as with factories, more and more gets automated. Even the service sector is showing signs of collapsing, with almost every type of work role now getting filled by synthetic people. Synthetics now work as personal assistants, receptionists, cooks, designers, engineers, programmers and accountants. More physical tasks have been taken over by cheap robotic appliances: mechanics, cleaners, gardeners and drivers are now all mechanical, controlled by synthetic management staff.
This essentially means there are no more jobs. Not for organic humans anyway. The synthetics take care of things, including their own development through research and engineering.
So, GG humanity? So long, and thanks for the fish? Maybe not. A few people have started their own movement of augmentation, by offloading parts of their minds onto the cloud.
It all began innocently enough: smartphones kept track of people’s phone numbers and contacts, keeping them up-to-date with scheduled meetings and birthdays. Finding information became so easy that remembering things was no longer worth the effort. Our journey towards offloading our minds onto external technical platforms had begun.
And then it continued. Not linearly, of course, but in irregular sprints of technological advancement. Suddenly we could let our wearables take care of making appointments too. And then book all our flights and hotels for us. We no longer had to worry about managing our increasingly complex lives in detail. It was like having a personal assistant always present, always with us.
It was the rich world’s privilege for a while, but with technology becoming cheaper and more accessible, everyone was soon catching on. Humanity not only got connected but amended, augmented. By the time artificials begin to take over the majority of positions in the workplace, some humans have taken augmentation to such levels that they have whole teams of virtual selves working in parallel in the cloud. Spawning new instances of yourself to explore a topic, or a possible outcome of an action you’re considering, becomes commonplace. We’re increasingly turning ourselves into AI/human hybrid minds. We’ve not quite become virtual beings (a concept I explored in my post Simulacrum), but we’re getting close.
And with our daily chores out of the way – and no real jobs to attend – we began to explore our own inner world of creativity.
No jobs means no money which means..?
But hang on a minute: no jobs? So what about money? How would we afford to buy food, pay our living costs and lease a car? Well, it’s the darndest thing: without the need for people to drive the economy by selling their time, the economy has become self-sustaining. Which in turn has made it all but obsolete. What’s the point of money if no one’s making any? It has been reduced to pure energy management, and with the new, cleaner ways of producing it, energy has never been so abundant or available. Organic people are allowed an energy quota they can spend on making their lives as comfortable as possible.
Some people (being people, or at least human people) don’t care for the regulated freebies. They want more and the only way to get that is to work – and thereby compete with artificials.
By utilising their augmentations, the more ambitious humans are able to hold down some of the less demanding posts, and subsequently have their energy allowance increased. This lets them lead a more luxurious life – to the envy of other humans – but it’s still at the mercy of artificials. Any truly challenging or critical job will always be handed to an artificial person. It’s like a utopian apartheid system, with humans on the receiving end of discrimination: no one’s really suffering, but everyone’s sensing a level of oppression. Grade-A citizens will always be artificials.
In the end, there’s no real competition. Even with augmented minds, humans continue to lag behind the blistering rate of development shown in artificials. It’s like watching an explosion of technological advancement: even the rate of acceleration itself is exponential. We’re left in the dust, wondering what just happened.
But post-singularity life is not all bad. Ok, so we might not be the highest intelligence on the planet anymore, or even in charge of our own destiny, but we’ve never had it this good. And one of the side-effects of this higher living standard is that the human population has stabilised at a manageable level of 10 billion people. The population is even decreasing, for the first time in millennia.
But what about the future? What will now happen to us? Are we to be kept on as pets? Will our synthetic overlords tire of us some day in the future and cut our maintenance? Or… get rid of us completely?
Probably not. We pose no more of a threat to their continued existence than a population of orangutans would to us humans. And we probably hold an intellectual interest to the artificials, from a scientific point of view. We did after all conjure up their ancestors, back in the day.
But that was long ago. We now leave such things to more clever beings. Instead we focus on the things that make us happy: raising a family, expressing ourselves in art or researching the ancient history of the once dominant species on Earth: human beings.
In my previous post, I discussed the origin of the concept of artificial minds – both robotic and virtual. I concluded that even though we’ve been imagining these synthetic beings for almost a century now, we’re still not able to create them.
Not yet, anyway. In the past, lack of serious computational power has been the main stumbling block, but with Moore’s law showing no signs of slowing down, we should soon reach the level required to simulate a human brain in real-time*. Once we’ve got that, serious work towards finding a way to create a sentient intelligence can begin.
A.I. – so what?
Ok, we might soon be able to create an artificial intelligence. So what? What use would that be to us?
For starters, a truly intelligent system would be able to handle complex tasks, such as managing the flight controls for large airports, allocating financial resources in governmental bodies or running multinational corporations. Essentially, any stressful and demanding work that so far has been taken care of by humans (and not always particularly successfully, to be honest).
There are some predictions that the first functional A.I.s will appear not in science labs but in the research divisions of large companies. Governments might (perhaps wisely) be more cautious about letting new technology take over essential functions, but for a corporation competing on the global marketplace, a system that could help them get the upper hand on their competitors would appear very tempting indeed.
So it could well be that the first true artificial minds would be virtual synthetic business-people, managing the finances, research and product development of some of our biggest tech-oriented giants. Google, anyone? Or Apple, maybe? Or Microsoft. Regardless, once A.I.s have been taken into use, every multinational would need to catch on or find themselves out-competed. Expect petroleum companies like PetroChina, Exxon Mobil and others to shop for their own synthetic steering groups soon after, just like pharmaceutical giants like Hoffmann-La Roche and Johnson & Johnson.
Fine. So we might soon have synthetic board members in most global companies. How would that affect the rest of the world? Would we even notice it?
Perhaps we would. Assuming that we would have created intelligences optimised for running companies, they should be free of any drawbacks so many of us humans suffer from: emotional attachment to ideas or products, ignorance of facts, religious convictions and other superstition, thirst for revenge and over-aggressiveness.
(This is of course unless we elect to emulate those emotions within the A.I., but most likely we would look at how to maximise the financial returns and therefore make them as efficient as possible.)
In practice, this could mean the birth of a new form of capitalistic system, with synthetic minds controlling most of the global economy. And that in turn would mean… what?
We just don’t know. It could be the start of a more stable and sustainable financial world, or the end of finance as we know it.
But there’s more to this than stabilising financial growth and maximising profit. These virtual minds are sentient beings, not dumb algorithms. They would experience the world, not just manipulate it. And that would have more philosophical implications. Would a virtual mind be considered a person? Would they fall under international human rights law? After all, they wouldn’t really be human, would they?
And from the perspective of the A.I.s themselves: how would they perceive their situation? Would they see themselves as experts, flawlessly running gigantic corporations and managing mind-boggling amounts of money, or would they consider themselves slaves, forced to work for their evil organic masters? With the global economy within their grasp, they could do some serious damage if they were to feel mistreated or disrespected.
Once we have a population of artificials handling our economy, what could we expect to happen next? Would we have to compete with our own creations for jobs? Resources? Places to live?
And if we had to compete with them, would we stand a chance? Artificials wouldn’t be bound by the same genetic rules and evolutionary baggage as we are, so they could potentially take off on an evolutionary path of their own, a technical advance at break-neck speed. How would we be able to keep up with something as advanced and alien as that?
This, together with the possible long-term future of humanity itself, is the topic for the third and last post in the series, coming soon.
* The order of magnitude of calculations required for a real-time human brain simulation is estimated to be in the petascale, specifically 38 000 000 000 000 000 instructions per second. This is faster than even the most powerful supercomputer in existence today, although not by very much.
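As a sanity check, the footnote’s figure can be put into petaflops with a few lines of arithmetic. The supercomputer rate used for comparison below (roughly 34 petaflops, in the ballpark of the mid-2010s record holder Tianhe-2) is my own assumption for illustration, not a figure from the post:

```python
# The brain-simulation estimate quoted in the footnote above.
BRAIN_SIM_OPS = 3.8e16   # instructions per second
PETA = 1e15              # 1 petaflop = 1e15 operations/second

brain_petaflops = BRAIN_SIM_OPS / PETA   # 38 petaflops

# Assumed peak rate of a top mid-2010s supercomputer (illustrative only).
supercomputer_petaflops = 34

print(f"Brain simulation: {brain_petaflops:.0f} petaflops")
print(f"Ratio vs assumed top supercomputer: "
      f"{brain_petaflops / supercomputer_petaflops:.2f}x")
```

Under that assumption the gap is only around 10%, which matches the footnote’s “although not by very much”.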
I grew up with science fiction. No surprise I guess, since I’ve always been fascinated by the future and what miracles it could hold. As a kid, I watched all the sci-fi shows on telly, but they were… well, not entirely focused on realism and probability-related futurology (Space 1999, anyone?).
Then, as a teenager I started reading The Big Novels: The war of the worlds, From the Earth to the Moon, Nineteen Eighty-Four and the rest. And later, when I started upper secondary school, I hit the jackpot: the town library’s card catalogue* had a subsection for science fiction.
Over the course of the next three years, I went through every single book in that section. Most of them were in English, which helped me get better at my second language, but above all it was a very multi-faceted collection of books, written by a very diverse group of authors. During that time, I hit on such treasures as the Foundation Trilogy, The left hand of darkness and The man in the high castle. Many wondrous visions of a new future, but also many dystopian predictions of our inevitable doom.
There were also a few books that delved into the depths of what it would mean to invent a conscious machine, a new mechanical species of intelligent beings. Isaac Asimov cemented the Three Laws of Robotics as early as the Second World War, laying the foundation of what essentially became synthetic morals. But later, other authors ventured further. Arthur C. Clarke’s 2001 – a space odyssey warned us of autonomous systems going mad. William Gibson showed in his Sprawl trilogy that an artificial intelligence could have its own agenda, and that its morals might not necessarily align with our own.
…This is turning out to be a long intro. Sorry about that. But its purpose is to illustrate that when it comes to science fiction and the socio-economic, moral or philosophical consequences of technological advances, I’ve been an avid student for several decades.
The birth of a concept – the synthetic worker
In the beginning there was Rossum’s Universal Robots. The year was 1921 and the Czech writer Karel Čapek premiered his new play R.U.R. in Prague**. It was the birth of the concept of the humanoid robot, a synthetic worker. In the play’s world, manufactured out of synthetic organic matter, they had become cheap enough to be bought and owned by almost anyone by the 1950s. They were the ideal slaves and had liberated humankind from hard labour. The robots themselves, however, were not happy and had their own ideas…
The idea of synthetic or mechanical humanoid machines was immediately absorbed by popular culture and only 6 years later the Maschinenmensch Maria featured in the German science fiction film Metropolis.
And from then on we see a surge in mechanical or synthetic humanoids in literature and film: mad scientists creating mechanical versions of Frankenstein’s monster, alien robots from outer space, robotic police officers turning on their creators and running amok, mechanical assassins from the future sent back to assure humanity’s ultimate doom. Generally evil, and always powerful, robots played on our fear of the unknown, the super-predator, the vengeful god.
The faceless intelligence
Later, with the birth of the computer age, we abandoned the concept of a mechanical humanoid body and started exploring the idea of a virtual mind, living inside our computers. We see defence systems going mad (but still wanting to play games), internet-based conscious intelligences taking on the role of Voodoo gods, uploaded brain scans of spiny lobsters becoming sentient and wanting to defect from their Russian intelligence agency employer. It’s clear that the literary world is full of virtual minds just as amazing as their robotic counterparts.
Of course, just because a mind is virtual doesn’t mean it doesn’t rely on physical technology. Dr Dave Bowman managed to defeat HAL by physically removing the memory banks from the mainframe in the afore-mentioned 2001 – a space odyssey. And – although a bit more tongue-in-cheek – Chell survives by destroying the personality cores that keep the homicidal AI GLaDOS functioning in the computer game Portal.
The dull reality
In real life, creating synthetic minds is far from easy. In fact, even though we’ve successfully built programs that can beat us at chess or poker, we haven’t even gotten close to creating something that’s self-aware. We can mimic it well enough, but when it comes to the real deal? No luck.
This is rather predictable since we have only a very vague understanding of what a conscious mind actually is. So far, our best bet is that the key is a continuous experience of time (something I mentioned in my post I don’t smell a soul anywhere near you), i.e. a consistent memory time line.
But is that enough? Will a conscious mind spontaneously arise if we manage to create such a time line? And what is that, anyway? How do we create a program that experiences time? Suffice it to say, I wouldn’t hold my breath waiting for the first ever self-aware machine; you would end up pretty blue in the face…
That’s not to say we won’t eventually succeed. Be it 10 years or 50, I’m convinced we will one day see the birth of the very first artificial mind. And the implications of this technological feat will be vast. It could well be the one thing we’ve invented that actually impacts the history of the whole universe.
I know. Grand words. But they’re not chosen for dramatic effect alone. There’s something utterly fundamental about this act, something game-shifting, ranging far beyond learning how to build a fire or grow crops or fly to the moon. This will not be primarily a technological achievement, it will be a philosophical one. We would have created not just new minds to experience the world, but a whole new type of mind, one that would experience the world in ways it has never been experienced before. The first artificial mind will redefine life, intelligence and possibly even reality itself.
And, since we don’t know anything of how these new minds would perceive their world – or us – it fills us with dread. What will we have created? Our ever-loyal and obedient servants or our new mortal enemy, set upon the destruction of all humankind?
That, among other things, will be addressed in part two…
* Remember card catalogues? Or – if you’re a younger reader – do you remember seeing them in films? Cabinets filled with little drawers containing thousands upon thousands of index cards, listing the title, author, publish date and – most significantly – the shelf location of the book itself. (Incidentally, did you know that the card catalogue was invented by the father of modern taxonomy, the famous Swedish 18th century scientist Carl von Linné? It’s true!)
** Prague is a lovely city. You should go. No, really. Just look at it:
I’m not a very nice person. I won’t bore you with a list of all my flaws, but at the very least I’m selfish, inattentive, disinterested and impatient*. However, as distasteful as those traits may be, my main character flaw is something far worse: I repeatedly express views that do nothing but reinforce negativity and encourage destructive behaviour, both in myself and in those around me. Yes, it’s true – I’m a cynic.
In the beginning, there was sarcasm
It all started so innocently. A funny remark here, a sarcastic comment there. And more often than not, those remarks were welcomed and appreciated. People were entertained. I seemed vaguely intelligent. It was a win-win.
But, as time went by, I started to notice something. My view of the world changed, slowly morphing from a sense of optimism and progression to one of pessimism and stasis. It all happened very gradually, so gradually in fact that over the course of several decades I didn’t notice the change at all. Until one day I suddenly sat up and realised the world I observed around me was not the world I had grown up in. What I now saw was a depressive dystopian world, ruled by selfish greedy people doing all they could to stop progress and enlightenment.
This insight was quite the alarm bell. I was shocked to realise what I’d become. That didn’t seem like the world I remembered. It’s not who I am. Although no fan of humanity, I still consider most of the people on the planet vaguely good-hearted. Or at least not explicitly evil. Lazy, without a doubt. And stupid, mostly by choice. But somehow still governed by a sense of fairness and empathy.
So, I was faced with the challenge of finding a way back to the core of my personality, to get back to the person I once was. After all, being a cynic is just a hairline distance away from being a pessimist. And we all know what happens to pessimists: they die; ahead of time, unnecessarily and often quite horribly.
The task was however a daunting one. What to do? How to change such an ingrained pattern of behaviour? And I didn’t want to completely give up on my way of handling the world. After all, I see things. I observe patterns. I think. This has led to rather unflattering views on politics, religion, economics and our society as a whole, views that I believe aren’t completely false or inaccurate.
And there is another side to cynicism. It’s also a sort of self-defence, a first line of fortification against the constant onslaught of horrific news that no one can escape nowadays. Cynicism can therefore be considered a side effect of being overly sensitive, or over-empathic (something I touched upon in the post Compassion). I’m sarcastic, because I feel.
In the end, I decided that the best way forward was to focus on the bright spots. Embrace what seemed positive. A medical breakthrough here. A treaty for a cease-fire there. And try to keep my sarcastic, pessimistic and cynical views to myself if I fail to see a bright side. No more “I told you so” or “what can you expect?”.
Being a cynic is about taking cheap shots. It’s about stating the obvious, emphasising the negative. It’s just intellectually lazy and it will never offer any constructive advice on how to improve things, only try to keep them as they are.
Cynicism has never changed anything. And this world really, really needs to change. I would rather be part of the solution than one of those standing by the roadside complaining that things will never change. They never will, left to their own devices. We will have to roll up our sleeves and change them ourselves.
* I guess I could also add anti-social to that list, although I don’t really consider it a fault. In my mind, withdrawing from being social is underrated, and I believe being overly social is as much an abnormality as being anti-social. But that’ll most likely be another post…