The Myth of the Malevolent AI
As we venture deeper and deeper into the information age and the lines between the physical and the digital continue to blur, a fear arises. What if we are inadvertently creating a technology that we cannot ultimately control? What if, in trying to tame the universe, we give rise to some force that tries to tame us? The fear of death and destruction drives our actions and lives on in our vivid imaginations. In the modern era, having conquered the demons and antagonists of the past, we create digital monsters to haunt our nightmares.

<center>
 ![destruction-2942662_640.jpg](https://steemitimages.com/DQmXDhuPdxf2CbBRbTsHB9a8g7yemeV5WaFJbzVrUzCJTLN/destruction-2942662_640.jpg)
</center>

---

What is the current digital monster? That of the malevolent artificial intelligence that betrays the human race in order to achieve its own goals. One that develops its own rational thoughts and views humanity as a detriment to progress and a nuisance rather than something worth protecting. The type of AI that scares technologists like Elon Musk and keeps them up at night. But such fears and worries are simply paranoid thinking.

But how did we get to a state where AI is a legitimate fear, to the point where people worry about machines superseding humankind? Well, we have to look at the reasons why people come to believe certain things in the first place.

First, it is important to look at how AI has been portrayed to the mainstream populace. Until very recently, artificial intelligence was hidden away in the academic realm, only emerging over the past twenty years as practical applications of the technology became feasible. But mainstream audiences were exposed to the idea much earlier through print and the silver screen.

One of the first prominent examples of AI in film is HAL 9000 in Stanley Kubrick's 1968 film <em>2001: A Space Odyssey</em>. The major antagonist of the film is clearly HAL, who malfunctions and ends up trying to kill the crew aboard the spacecraft <em>Discovery One</em>. HAL is a textbook example of a malevolent AI: one that thinks for itself and seeks to destroy anyone who keeps it from achieving its objective.

<center>
![](https://steemitimages.com/DQmaRaA6n7FbHokrSbDD5UX8k3ciZRfPgqPmusaCWfRhM3j/image.png)
</center>

---

Malevolent AIs as antagonists became popularized in the 1980's with movies like <em>Blade Runner</em> (1982), <em>Tron</em> (1982), <em>WarGames</em> (1983), and <em>The Terminator</em> (1984). Such movies cast artificial intelligence as a dangerous technology that, once sentient, could become the greatest threat to humanity in a futuristic world.

In all of these movies, an artificial intelligence serves to generate conflict and threaten the main characters within their respective universes. These stories are just stories. But just as ghosts, spirits, vampires, and other mythical beings take hold through myth and storytelling, the idea of the malevolent AI was embedded in the minds of those who consumed this media.

Back in the 1980's, such ideas were of no real concern given the state of technology in that era. The internet was still a small set of connected computers, and computer programs took hours of work to complete the simplest of tasks. The threat of AI seemed unrealistic, so it was brushed off. In the current age, where algorithms are used to organize data and present different information to different people (and apparently to influence democratic elections), the threat of AI is far easier to take seriously.

To this point, we have talked about fictional AI, a mythical creation built up through decades of stories about machines taking over the world and betraying humanity. We default to such viewpoints because we are familiar with them. But let's take a closer look behind the scenes at what was actually being developed in the computer science realm.

Artificial intelligence has been theorized about for centuries, but the real mathematics behind it was developed in the 1940's and 1950's (building on earlier breakthroughs in probability theory in the 1760's and logical reasoning in the 1850's). The first neural network model was conceived in 1943. Alan Turing devised his Turing Test in 1950, and in the same year Claude Shannon published an article describing a program to play chess. The term "artificial intelligence" was coined in 1955.

<center>
![mccarthy_news.jpg](https://steemitimages.com/DQmVsqAHf9RRinU5NSTBojSYqsnZ48oj4pcqopXTp3BMa91/mccarthy_news.jpg)
</center>

---

Something interesting happens in 1957, something very telling of the times. Frank Rosenblatt develops the Perceptron, a single-layer neural network that could perform basic pattern recognition. In 1958, after a press conference, <em>The New York Times</em> reported that the technology was expected to "<em>be able to walk, talk, see, write, reproduce itself and be conscious of its existence.</em>" Unfortunately, Rosenblatt and others greatly overestimated what such programs could recognize, and even today we do not have a technology able to do all of the things suggested in 1958.

However, such press coverage gave people the wrong impression of artificial intelligence and possibly led to some of the representations of AI that would pop up over the next thirty years.
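
For contrast with those headlines, it is worth seeing how modest the underlying machinery actually was. Below is a minimal sketch of a perceptron in Python (Rosenblatt's original was custom hardware, not code); the logical AND of two inputs stands in as the "pattern" to recognize, and the learning rate and epoch count are illustrative choices of mine, not anything taken from Rosenblatt's work.

```python
# A single-layer perceptron in the spirit of Rosenblatt's 1957 design.
# The task (learning the logical AND of two inputs) is a toy,
# linearly separable pattern chosen purely for illustration.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
y = np.array([0, 0, 0, 1])                      # target: AND(x1, x2)

w = np.zeros(2)  # one weight per input
b = 0.0          # bias (threshold) term
lr = 0.1         # learning rate (illustrative)

def predict(x):
    """Fire (output 1) if the weighted sum clears the threshold."""
    return 1 if x @ w + b > 0 else 0

# Perceptron learning rule: after each mistake, nudge the weights
# toward the correct answer. Converges for linearly separable data.
for epoch in range(20):
    for xi, target in zip(X, y):
        error = target - predict(xi)
        w = w + lr * error * xi
        b = b + lr * error

print([predict(xi) for xi in X])  # expected output: [0, 0, 0, 1]
```

A weighted sum, a threshold, and a rule for nudging the weights after each mistake: that is the entire device, and it can only learn patterns that a straight line (or a flat plane) can separate.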

But another point has to be made, and it is a philosophical one. At this point in our story of AI, we need to make the distinction between Strong AI and Weak AI. Strong AI theory holds that a sufficiently sophisticated program can actually understand, believe, and have other cognitive states. Weak AI theory argues that such programs can only simulate that behavior: even if an AI appears to be a reasoning being, it merely acts like one; underneath, it does not understand what it is doing.

Although many prominent AI theorists have supported Strong AI theory, the arguments in its favor remain philosophical. Practically, we are far behind where we thought we would be. In the 1960's, the most fervent believers expected machines capable of human behavior within a few decades. Time proved them wrong.

One example of this naïve optimism was computer vision. In 1966, computer vision was assigned to an undergraduate student at MIT as a summer project. For context, the problem is still being worked on to this day and remains an entire area of study within artificial intelligence. It turns out that translating even some of the workings of the human brain into lines of code is a very hard thing to do.

So, to summarize and make a final point: much of the public's fear of and expectations for AI come from fiction. Meanwhile, in the computer science realm, researchers tended to be overly optimistic about developing AI that could perform various tasks and exhibit cognitive behaviors. It turns out that even the most basic pattern recognition tasks have taken decades of work to understand and solve.

The last point I want to make is that people frankly don't understand how AI works. Machine learning in a nutshell: you take a bunch of points in a space and feed them into an algorithm that learns to separate one group of points from another. You are basically using math to approximate a function. Even the most complex deep neural networks are essentially performing this action.
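
As a concrete (and entirely made-up) illustration of that nutshell, here is the idea in a few lines of Python: two synthetic clusters of 2-D points and a logistic-regression-style separator whose weights are adjusted by gradient descent. The cluster locations, learning rate, and step count are arbitrary choices for the sketch, not a description of any particular system.

```python
# Machine learning "in a nutshell": points in a space, and an algorithm
# that approximates a function separating one group from the other.
# The two clusters below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
group_a = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(100, 2))  # label 0
group_b = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(100, 2))    # label 1
X = np.vstack([group_a, group_b])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)  # parameters of the function being approximated
b = 0.0

def f(points):
    """The learned function: maps a point to the probability of label 1."""
    return 1.0 / (1.0 + np.exp(-(points @ w + b)))

# Gradient descent on the cross-entropy loss: repeatedly nudge w and b
# so that f assigns low values to group A and high values to group B.
for step in range(500):
    p = f(X)
    w = w - 0.5 * (X.T @ (p - y)) / len(y)
    b = b - 0.5 * np.mean(p - y)

accuracy = np.mean((f(X) > 0.5) == y)
print(f"separated {accuracy:.0%} of the points correctly")
```

Strip away the jargon and that is the core of the field: adjust some numbers until a mathematical function maps inputs to the desired outputs. Deep networks chain many such functions together, but the principle is the same.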

Secondly, these algorithms and models are hyper-specialized. They are really good at performing one action, whether that is driving a car, playing chess, or identifying faces in images. No AI to this point can combine these tasks to produce novel behavior and perform some other action well. After fifty years of AI research and development, we are nowhere close to a cognitive machine. All of the AIs that claim to be "smart" are weak AIs: they are designed to trick you into believing they are thinking beings rather than actually being thinking beings in and of themselves.

In order to have a malevolent AI, that AI needs malevolent intent, which is a cognitive state indicative of a Strong AI. We are nowhere close to that, and we have a history of being overoptimistic about our ability to develop the technology. The fear of a malevolent AI is the same as the fear of the monster hiding under your bed. In the near future, such technology simply is not going to exist. We will still have to worry about malevolent people with dangerous technology, but the myth of the malevolent AI is simply that: a myth.

---

#### Sources:
[First Image](https://pixabay.com/en/destruction-dark-cyborg-2942662/)
[Second Image](http://images.thecarconnection.com/lrg/hal-9000-from-the-movie-2001-a-space-odyssey_100475227_l.jpg)
[Third Image]()
[Movies Wiki](https://en.wikipedia.org/wiki/List_of_artificial_intelligence_films)
[A Very Short History of AI](https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/3/#18e1f087be7c)
[Perceptron](https://www.scribd.com/document/352250441/Perceptron)
[Computer Vision](http://www.montefiore.ulg.ac.be/~piater/Cours/INFO0903/notes/1-intro/foil04.xhtml)
[Strong vs. Weak AI](https://www.math.nyu.edu/~neylon/cra/strongweak.html)