Installing Ubuntu Linux on Terminator T-800, HOWTO. by puffosiffredi

<html>
<p>Before starting a post about Artificial Intelligence, I apologize for my English, since I am not a native speaker.</p>
<p>This is not about technology itself. I will probably write more about Kohonen-like networks in the future. This is about... trying to explain AI to my mother. I hope she will be a little less anxious (which is impossible, given that she is an <em>Italian mother</em>).</p>
<p>Anyhow, this is about explaining why nobody in the AI field will ever, ever build <strong>Skynet</strong>.</p>
<p>Have you read that Artificial Intelligence is one of the biggest threats to humankind? Of course. Have you read about how machines could decide to exterminate us boring humans? Of course. Stephen Hawking said that, right? Well, no. He never said that. He actually made quite a long argument about military uses of AI, but it was "cherry-picked" by the press.</p>
<p>Then... la laaaaaa. Here I am. You sort-of summoned me. I am one of the crazy people doing these terrible things to humanity. It will be my fault (too) when the first AI we build becomes self-conscious, realizes we are the people who allow Miley Cyrus to exist, and of course goes for extermination. (What else? Miley Cyrus, you know?)</p>
<p>Joking apart, I think this "fear of AI" is a bit out of control, and in my opinion there are two main reasons for it:</p>
<ol>
  <li>Hollywood. Many (good) movies about Artificial Intelligence doing this and that.</li>
  <li>Ourselves. We are pretty bad at explaining what AI actually is.</li>
</ol>
<p>Now, I am not sure it makes sense to blame Hollywood for producing very emotional movies. Really: I liked Terminator, at least the idea behind it (sort of). I liked The Matrix, too (sort of). It's their job to make frightening movies, so I would not blame them for that.</p>
<p>If we need someone to blame, we only need a mirror.</p>
<p>Let's give it a try and fix that. What is Artificial Intelligence?</p>
<p>Well, if we go back in history, Plato thought the clearest evidence of a person being intelligent was the capability of doing mathematics. This seems fine, until we remember that what Plato called "mathematics" is something most of our calculators can do. Right in our smartphones. Even if we move up to algebra and beyond, programs like Mathematica, MATLAB and others can do almost everything Plato had in mind. Even if we include proving theorems, I'm sorry to say many proof assistants (Coq, Matita, Lean, HOL, and more) can do <em>much more</em> than Plato had in mind. Still, it is hard to say our laptops are "intelligent". Plato, by his own criterion, would say they are.</p>
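<p>Just to show what "proving theorems" means for a machine, this is how a tiny machine-checked proof looks in Lean, one of the proof assistants above. Take it as a small illustrative sketch: the statement says that adding natural numbers is commutative, and the machine verifies every step of the induction.</p>
<pre><code>-- A tiny machine-checked proof in Lean 4: addition on naturals commutes.
-- Illustrative only; the lemma names come from the Lean core library.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero      => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]
</code></pre>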
<p>Later, playing chess took the place of this "definition". Until we had machines able to play chess, and even that became outdated. Seems like a cat-and-mouse game, right?</p>
<p>Why do I mention that? Because when we talk about "intelligence", we are assuming that:</p>
<ol>
  <li>If you are human, you are capable of being intelligent.</li>
  <li>If you are capable of being intelligent, you are somehow human, or human-like.</li>
</ol>
<p>This is the main reason most of us think that "intelligent" means, more or less, something "only humans can do". So today's devices able to make decisions (even better than ours) are called "smart" and not "intelligent". In general, when a machine becomes able to do something that was human-only before, people stop thinking of that activity as "intelligence", and we call the machine "smart". Siri is not intelligent: it is a "smartphone". Smart. Not "intelligent".</p>
<p>Even more important, these two assumptions drive people to think that a machine which is "intelligent" will look like a human being, talk like a human being, be self-conscious, have "feelings" and make terrible decisions. Because this is what humans do.</p>
<p>Because the main bias about "intelligence" sounds like: "intelligent means... <em>like us</em>".</p>
<p>On top of this stack of mistakes there are other narratives, like the Transhumanist one, which raises questions about the "Singularity". The Singularity is defined as an Artificial Intelligence able to do what our mind does, but better (or more; it is not clear). In theory this definition covers any machine able to handle numbers, like a calculator: we cannot compute at such speed, and the human brain is terrible with numbers.</p>
<p>Since the assumption is that "intelligent" = "human", "more intelligent than human" means "more human than human", so I can understand why people are concerned. In my opinion the issue here is <strong>the bias</strong>: the bias of being able to think of "intelligence" only when it is attached to "human being".</p>
<p>Now, let's go back and check what we actually do in the AI field. How would I define AI to my mother? Talking machines? (Bad.) Decision making? Creativity?</p>
<blockquote>I would define Artificial Intelligence as the ability to <strong>mimic</strong> functions normally associated with specifically human behavior. This is my personal opinion.</blockquote>
<p>One example is Computer Vision. Anyone can build a cheap camera today, but that is not "vision". Vision is, more or less, when you know that some patch of color is actually a pen, that below it there is a table, and that the pen is on the table. Maybe you think this is done more with the eyes than with the brain... I'm sorry to say, most of the operation we call "seeing" is done by the brain. And it is quite a job.</p>
<p>Guess what: Computer Vision was considered Artificial Intelligence precisely in the period when almost no machine could do it successfully. Now that some cars are able to understand there is another car in front of them, and to estimate the distance, we no longer consider Computer Vision to be Artificial Intelligence.</p>
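<p>To make it concrete, this is roughly what "Computer Vision as a product" looks like today: a few lines of Python around a pretrained detector. Take it as a sketch, assuming a reasonably recent torchvision and a hypothetical photo called <code>street.jpg</code>: the machine finds objects and their boxes, because that is all the contract asks for.</p>
<pre><code># A minimal sketch: object detection with torchvision's pretrained
# Faster R-CNN. It fulfills one contract (find objects and their boxes
# in a photo) and nothing else. "street.jpg" is a placeholder input.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections: COCO class id, bounding box, score.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score > 0.8:
        print(int(label), [round(float(v), 1) for v in box], round(float(score), 2))
</code></pre>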
<p>Another example is Natural Language Processing. This is the ability to listen to someone speaking and to get what they wanted to say, normally proven by giving a proper response back in the same language, or at least by some consistent behavior.</p>
<p>At the beginning of Information Technology, artificial languages were so limited that everybody thought "machines may only process, while talking is for humans". Now that people can buy commercial products which are able to talk and answer properly most of the time (better than some idiots I know, to be honest), it is very hard to think of Alexa or Siri as intelligent. It was easy when only humans were capable of that.</p>
<p>Now it is the turn of learning. Machine learning is the current frontier, just because right now matching the human capability to learn with (or without) examples is quite hard. We have systems which can learn from examples and build decision trees from them, and other systems (e.g. the Kohonen-like systems I am into) are able to learn with no supervision, which means with no labeled examples.</p>
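<p>To give a taste of what "learning with no supervision" means, here is a minimal sketch of a Kohonen-like self-organizing map in plain Python/NumPy. The sizes, rates and data are made up for illustration: the point is that there are no labels and no goals, the map simply pulls its units toward whatever structure the data happens to have.</p>
<pre><code># A minimal Kohonen-style self-organizing map (unsupervised learning).
# Grid size, learning rate and data are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 10, 10, 3              # 10x10 map of 3-dimensional units
weights = rng.random((grid_w, grid_h, dim))  # the map's adjustable weights
data = rng.random((1000, dim))               # unlabeled input vectors

# Grid coordinates of every unit, used by the neighborhood function.
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / len(data))        # decaying learning rate
    sigma = 3.0 * np.exp(-t / len(data))     # shrinking neighborhood radius

    # Best-matching unit: the unit whose weights are closest to the input.
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)

    # Pull the BMU and its grid neighbors toward the input vector.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * influence * (x - weights)

# After training, nearby units on the grid respond to similar inputs:
# structure was found without a single labeled example.
</code></pre>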
<p>So, to sum it all up: what exactly is "Artificial Intelligence"?</p>
<p>Artificial Intelligence<strong> is a product. A product someone must sell.</strong></p>
<p><strong>This is the very point. This is the REAL point.</strong></p>
<p>When I say "a product", I mean a machine which is purposely designed to follow a contract. You buy a broom under some implicit contract: the broom is useful for cleaning your floor. You buy a car under the implicit contract that the car is supposed to move, to keep you alive, and to be able to run on the streets we have. And so on.</p>
<p>If we try to imagine how cars will improve, we take "the contract" and we say: "cars will make better use of the streets we have, they will improve our safety, they will move more people".</p>
<p>Each and every thing an existing machine is capable of doing was implemented. Designed. And IT COST MONEY. If you want a machine which purposely decides to kill you, <strong>you must PAY for this function</strong>. Yep.</p>
<p>What do we expect from Artificial Intelligence? What is the contract?</p>
<ol>
  <li>Artificial Intelligence will do for me something I cannot do alone.</li>
  <li>Artificial Intelligence will do for me something I do not want to do.</li>
  <li>Artificial Intelligence will do something for me better than I can do it.</li>
  <li>Artificial Intelligence spares me from having to ask another person to do something in my place.</li>
</ol>
<p>So, when a customer orders some "Artificial Intelligence", the order says something like: "this will produce the boring report I have to write every week", "this product will make better stock market predictions", "this AI will move the camera to zoom in on the face of the criminal when someone commits a violent crime in a public space". It is a product, right? It must be useful.</p>
<p>A product is useful by contract. The only way to sell a product is to make something which, somehow, fulfills a contract. The result is that the machines we build are actually designed with the contract in mind. When we build a neural network in charge of scalping the Forex market, we do not implement the capability to decide whether humankind deserves to exist or not. The customer wants to make money. That is what they will pay for.</p>
<p>As long as AI is a product, built for people who expect the product to work in certain ways, the product will do what the customer is paying for. End of story. Sure, we could mention that the Army could be the customer, and they want killing machines; still, the implicit contract is that "our killing machines aren't killing us".</p>
<p>Here we are; now I know your next objection: self-consciousness.</p>
<p>Because when something is "self-conscious", it could "decide" we are stupid, rebel, and then kill us.</p>
<p>Here we get into the issue of "consciousness".</p>
<p>There are many people discussing the relationship between the brain (wetware) and computing. For example, I would like to introduce you to a guy, Henry Markram ( <a href="https://en.wikipedia.org/wiki/Henry_Markram">https://en.wikipedia.org/wiki/Henry_Markram</a> ). When you want to discuss self-consciousness, he is one of the best people to do it with. He got ~€1Bn to build the first simulation of the whole cortical functions of the human brain, just to say. He did pretty nice work on "liquid state computing".</p>
<p>Now, when you start reading works about "consciousness" and you come from computing, the first thing that happens is... you get lost. You can read the works of Winfried Denk, Timothy Bliss, and many others, to understand that...</p>
<ol>
  <li>We are not alone in our brain. We have many "consciousnesses" inside.</li>
  <li>Most of the functions of human intelligence are... something we would define as "a personality".</li>
  <li>We have more than one image of ourselves, built by our brain.</li>
  <li>We have more than one idea of reality running in our brain.</li>
</ol>
<p>Imagine we have a team in our brain. There is a desk where a guy sits, and this guy's name is "James E. Fear". This guy is in charge of "fear". It is his job. We would say he is quite a disturbed guy, always thinking in terms of fear. Actually, James fears everything. If you talk with him, he will tell you what a terrible threat the floor is to you. Not to mention the window. He knows terrible stories about windows. Making up terrible stories is his job in the team. There are many other "people" in this team, and each one has a precious function in our brain: you can say whatever you want about James, but, trust me, if you see a lion in front of you... <em>do what James says!</em></p>
<p>Nevertheless, if your girlfriend asks you to go to a restaurant, you should probably NOT listen to James: the nuclear mayhem he has in mind is probably not what will happen in the bistro you choose. It's gonna be fun.</p>
<p>Joking apart, according to most eminent scientists a self-conscious brain is some kind of teamwork, where many different views, functions, thoughts and ideas compete, and the way they are fed and the way they play within the team produces something you call "consciousness".</p>
<p>And each of these pieces is complex enough to fit the "common sense" definition of "personality". Yes, <em>James E. Fear</em> could look like a "full person" if you could isolate his function alone.</p>
<p>This is just to give an idea of what a tangled mess a "self-conscious" brain is.</p>
<p>At the current state of the art, "consciousness" is not something we can discuss when talking about machines: the reason is that humankind has little knowledge of what "real" consciousness is, so we cannot describe what it actually is well enough to reproduce it.</p>
<p>Any discussion about "machines being self-conscious" is completely void, simply because the term is not defined well enough to be reproduced in a machine. The reason we are not going to produce "Skynet", defined as a self-conscious machine, is very simple:</p>
<ol>
  <li>Even if we might be able to build such hardware, we don't know how it should work.</li>
  <li>It is not a job for computer experts to understand what self-consciousness is: that is a job for other people.</li>
  <li>Nobody would buy an unpredictable machine that takes arbitrary decisions.</li>
</ol>
<p>The first point is about science and engineering: you cannot build something when you don't know how it is supposed to work. The second point is that information technology, even with all its momentum, cannot take the place of neurology, <strong>and the human brain is the only example of something we are SURE is "self-conscious".</strong></p>
<p><strong>Even if it were built by accident, </strong>nobody would buy one. You cannot sell something by telling the customer "this device will cost you $100,000, and will do... well... something. It depends. Maybe."</p>
<p>Putting it all together: you can grab your popcorn and watch your sci-fi again.</p>
<p>It is not gonna happen.</p>
<p><img src="https://s-media-cache-ak0.pinimg.com/564x/4b/46/ba/4b46ba6bff7ed17311ca99756bff1e49.jpg" width="400" height="325"/></p>
</html>
@hilarski ·
https://media.giphy.com/media/RAx4Xwh1OPHji/giphy.gif