While the title of this article might seem a bit dramatic, it was inspired by the epic ‘Terminator’ series of movies. In the movies the villain is a supercomputer called Skynet, developed by Cyberdyne Systems and the American military. Skynet is built to guarantee a fast reaction time and remove any human error during an enemy attack. But as the story goes, Skynet soon becomes ‘self-aware’, and when its human operators, realizing the consequences, try to deactivate it, Skynet views this as a threat to its existence and triggers a nuclear war with Russia to eliminate its No. 1 enemy – humans.
Are we on a path to creating Skynet, and is Judgment Day upon us? This article looks at some current technological developments and their possible fallout, which could have a profound effect on humanity.
Rise of the Tech Companies
Larry Page and Sergey Brin, co-founders of Google, have big plans it seems. Less than a month after a major restructuring which saw a new holding company named Alphabet Inc. being created and a new CEO being appointed, Google has again made headlines with a brand new logo.
The rise of tech companies such as Google, Facebook and Amazon is amazing, to say the least! Who would have thought that these companies would have such an impact on our society, as well as on our personal lives? All of these companies have many traits in common. Yes, they are here because of the internet, and yes, they are all U.S.-based companies, but what is most striking is that all of them deal in the commodity worth more than its weight in gold in the 21st century – information (and personal information, at that). Today, we willingly share our most intimate details with the likes of Google, Facebook and Amazon. In fact, these companies know much more about you than your own parents or spouse do.
Tech companies have been investing and diversifying. Not to pick on Google, but to identify Google as a web search engine company now would be very naïve. Other than the better known examples, Google’s investments and R&D are spread across robotics, artificial intelligence, home automation, driverless cars, speech and image recognition, space exploration, life sciences – and even combating aging. No doubt we will benefit from these investments, as we already do (this article wouldn’t have been possible without Google Search). I assume that Google has very good intentions as well (apart from making a buck out of it).
Traditionally, the R&D of cutting edge technologies was the domain of government-sponsored organizations such as the Military or NASA. This is changing. The next big revolution will most likely come from a private tech company. While in the past we did everything possible to hide things from the government (and vice versa), we are now willing to surrender our privacy without thinking twice. In this kind of environment, information misuse would pose the biggest threat, and existing laws seem inadequate and too antiquated to successfully tackle such eventualities.
Internet of Things (IoT)
The Internet of Things (IoT) has been the buzzword for the past few years. From driverless cars to self-ordering refrigerators, IoT applications might end up controlling almost every aspect of our lives. While IoT promises a lot, it may also expose an Achilles heel.
The ‘Internet of Things’ (or ‘Internet of Everything’) allows objects to be sensed and controlled remotely across existing network infrastructure such as the Internet. While this may not seem like a new concept, ubiquitous wireless broadband and the increase in CPU power have fuelled a Cambrian Explosion of sorts. Two key areas of IoT are sensors and Machine-to-Machine (M2M) communication.
In order to control something we need to measure it first. IoT will require lots and lots of sensors, and everything from your undergarment to your coffee maker could potentially be a sensor gathering information about you and your environment. Privacy could be a thing of the past.
If you don’t like others talking behind your back (who does?), you won’t like this, but you will have no choice: M2M communication is just that. Even today we are largely ignorant of what or whom our laptops and phones are communicating with when they’re connected to the internet. In an IoT world, information would be shared, transferred and used without much – or any – human intervention.
Information security would pose the biggest challenge in an IoT environment. It is now widely believed that most security professionals and organizations have overestimated our ability to secure data. If you have not been hacked already, it does not mean you have a foolproof security system; rather, no one has had the time, money or motive to do it… yet. It is only a matter of time. As we have learned at our expense, breaches can be expensive, embarrassing, and even dangerous.
The Achilles heel is bad software code. It’s not that software developers intentionally write bad code full of security holes, but the fact is that today’s systems are enormously complex, and software is built by thousands of individuals layering new work on top of old code or reusing what has already been written. As each new abstraction layer is added, it hides any underlying security holes ever more deeply. The Shellshock bug, which affected Unix-based systems, was uncovered only in 2014, but its origins date back to 1989.
The right thing to do is to rewrite the code from the ground up, but this requires money and a whole lot of time. Even then, how can we guarantee that we will get it right every time? The only alternative is to have software write software, to make sure that no ‘human errors’ occur. While this may sound futuristic (and scary), it is, in fact, already happening.
Moore’s law states that we should expect the number of transistors on an integrated circuit to double approximately every two years. This correlates directly with CPU horsepower. So far the prediction has been fairly accurate, but we should see saturation, or at least a slowing of this exponential growth, within this decade.
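As a back-of-the-envelope illustration, the doubling rule can be written as a one-line formula. This sketch assumes the Intel 4004 (1971), with roughly 2,300 transistors, as its baseline; the projected figures are order-of-magnitude estimates, not exact chip counts:

```python
# Moore's law as a simple projection: transistor counts double every two years.
# Baseline assumption: Intel 4004 (1971), roughly 2,300 transistors.
BASE_YEAR = 1971
BASE_TRANSISTORS = 2300

def projected_transistors(year):
    """Projected transistor count for a given year under Moore's law."""
    doublings = (year - BASE_YEAR) / 2
    return BASE_TRANSISTORS * 2 ** doublings

# Two doublings between 1971 and 1975: 2,300 -> 9,200.
print(round(projected_transistors(1975)))
# By 2015 the rule projects a count in the billions - roughly the order
# of magnitude of the largest chips actually shipping around that time.
print(f"{projected_transistors(2015):.2e}")
```

Notice that the exponential term dominates everything else; that steepness is exactly what is expected to flatten out as transistors approach physical limits.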
Quantum computing may be the next big leap in computing and, although in its infancy, it has the potential to revolutionize how computers work. Unlike current digital computers, which use binary (i.e., 1s and 0s), quantum computers use ‘qubits’, which have the strange property of being both 1 and 0 at the same time. This phenomenon, known as superposition, allows a quantum computer to do computations in parallel rather than serially. Theoretically, superposition lets a quantum computer test every possible path simultaneously to arrive at all possible solutions to a problem, exponentially increasing computing power and speed.
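A single qubit’s state can be sketched on an ordinary computer as a pair of amplitudes. The toy simulation below (a minimal illustration, not real quantum hardware) applies a Hadamard gate to put a qubit that starts as a definite 0 into an equal superposition; the squared magnitudes of the amplitudes give the probabilities of measuring 0 or 1:

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit state given as (amplitude_0, amplitude_1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

qubit = (1.0, 0.0)        # start in the definite state |0>
qubit = hadamard(qubit)   # now in superposition: (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = [abs(a) ** 2 for a in qubit]
print(probs)  # ~[0.5, 0.5]: equal chance of reading 0 or 1
```

Simulating n qubits this way needs 2^n amplitudes, which is precisely why classical machines cannot keep up – and why real quantum hardware is interesting.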
While this looks to be a radical new way of computing, it may unlock the secret to implementing complex systems that can be as good as – or better than – the human brain. The relatively new field of quantum biology has unearthed how such quantum mechanical processes may be behind the extremely high efficiency of certain stages in photosynthesis (the process used by plants and other organisms to convert light energy into chemical energy). During photosynthesis, electrons are released when light photons hit the chlorophyll molecules residing within the cells of every leaf and photosynthetic bacterium. It is believed that these electrons do not travel in a random fashion to the cell’s reaction center, but that instead they explore all possible paths at once and select the most efficient path!
Scientists are still struggling to harness this powerful technology. Quantum computers therefore still exist only in labs and can perform comparatively basic computations; a commercially viable system could be decades away. If and when we do manage to ‘tame the beast’, the implications and the possibilities are mind-boggling. For example, the very best cryptographic algorithms that we use today for secure communications could be rendered useless, as a quantum computer might crack them within hours (or minutes) through sheer brute force. A quantum supercomputer that is superfast and indistinguishable from human intelligence could be a very real possibility.
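The scale of that threat can be sketched with simple arithmetic. Assuming a hypothetical machine testing 10^12 keys per second (a made-up figure for illustration), classically brute-forcing a 128-bit key takes astronomically long; Grover’s quantum search algorithm, which needs only on the order of the square root of that many steps, collapses the search dramatically (this sketch counts quantum steps at the same rate, which is generous to the attacker):

```python
import math

KEYSPACE = 2 ** 128          # number of possible 128-bit keys
RATE = 1e12                  # assumed guesses per second (hypothetical hardware)
SECONDS_PER_YEAR = 3600 * 24 * 365

# Classical brute force: in the worst case, try every key.
classical_years = KEYSPACE / RATE / SECONDS_PER_YEAR

# Grover's algorithm needs on the order of sqrt(N) quantum steps.
grover_steps = math.isqrt(KEYSPACE)   # ~2^64
grover_years = grover_steps / RATE / SECONDS_PER_YEAR

print(f"classical: {classical_years:.1e} years")  # astronomically long
print(f"grover:    {grover_years:.2f} years")     # well under a year
```

The asymmetry is the whole story: squaring-rooting the keyspace turns a search longer than the age of the universe into a practical one.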
As in most other instances, sci-fi writers and directors were the first to dream up humanoid machines which think, act and look like humans. More often than not these beings turn against their human masters, as in the case of Skynet. With the advent of digital computers in the 1950s, many technologists believed that we would achieve this goal within another 50 years – i.e., before the new millennium dawned. Turning science fiction into fact has, however, turned out to be more challenging than we thought.
In 2015, this dream is yet to be fulfilled. Propelled by the exponential growth in processing power, we have come a very long way toward developing AI (Artificial Intelligence). The envisaged results still, however, elude us.
We have been able to develop systems and algorithms which can outperform humans. Chess crowned a new ‘world champion’ in 1997, when IBM’s Deep Blue supercomputer beat Garry Kasparov. That was 18 years ago. More recently, Watson (named after IBM’s first CEO, Thomas J. Watson) triumphed over two Jeopardy champions. While winning at Jeopardy (which involves understanding natural language, complicated questions and nuances) is impressive, both Deep Blue and Watson are basically doing something which computers are good at… really, really good at: following instructions and executing them at blinding speeds, surpassing the capabilities of the human brain. Is this AI?
Well, not quite. It appears that computers still fail miserably at certain things which seem so trivial to humans that even a toddler could do them. Getting a computer to understand ‘context’ is not easy. For example, take the question “Can a can cancan?”. It doesn’t take much effort for us to see that the word ‘can’ is used to mean different things here, but making a computer comprehend it is not easy – and that’s putting it mildly.
The Turing test, proposed by Alan Turing – the mathematics genius widely regarded as the father of modern computing – is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. IBM’s Watson was on the road to achieving this through a technique called machine learning. Machine learning is nothing new; it is already part of our daily lives. Speech recognition software such as Apple’s Siri, image analysis software used in everything from your webcam to missile guidance systems, and the business intelligence tools that make suggestions when you next shop on Amazon all use machine learning.
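At its core, machine learning means learning behavior from examples rather than hand-coding rules. As a minimal sketch (a classic perceptron, chosen purely for illustration – it bears no relation to how Watson or Siri actually work), the snippet below learns the logical AND function from labelled samples instead of being told the rule:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a two-input threshold unit from (inputs, label) pairs."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred            # nudge weights toward the correct answer
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Labelled examples of logical AND - the program is never told the rule itself.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
preds = [1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data]
print(preds)  # [0, 0, 0, 1] - the learned unit reproduces AND
```

The same idea, scaled up to millions of parameters and examples, is what powers the speech and image recognition systems mentioned above.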
The rapid developments in AI have sparked concern about its impact on the human race, especially its potential negative effects. While AI could be of enormous benefit, one definite short-term disadvantage would be a dramatic decrease in jobs for humans. In China there is already a hotel run entirely by robots. In Japan, ‘Robear’ the robot is capable of transferring a patient from a wheelchair to a bed. Driverless cars could potentially put taxi drivers around the world out on the road – literally.
The mission of the Future of Life Institute, founded in March 2014, is to mitigate risks facing humanity, especially from AI. One cannot disregard such initiatives when the people behind them are some of the best minds we have today – people such as Stephen Hawking (physicist), Steve Wozniak (Apple co-founder), Elon Musk (CEO of Tesla Motors and SpaceX) and Max Tegmark (Massachusetts Institute of Technology). A document put out by the institute acknowledges that the potential benefits of AI to society are enormous, but states that scientists should take every possible precaution to ensure that mankind is not wiped out.
As Steve Wozniak rather grimly put it, “Computers are going to take over from humans, no question. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”