Explaining Artificial Consciousness to my mother. (At least, trying hard) by puffosiffredi

<html>
<p>As usual, I start my post by apologizing for my bad English: I'm not a native speaker. Still, I like to explain things to people who are interested. I say "explaining to my mother" because my mother is always curious, even though she had little chance to attend school for budget reasons. To me, "explaining to my mother" is similar to the Feynman test: you only win if you can explain the most exotic topic to the most unaware audience. Plus, I'm Italian, you know: "mamma".</p>
<p>So, let's try with artificial consciousness: what are we talking about, and... who wants it? As I said, AI is developing as a product, meaning you must be able to sell machines or services that customers find interesting. This means artificial consciousness aims to be part of "cognitive robotics", because that is the only area which can turn something like "consciousness" into products.</p>
<p>In the last decades, the most advanced research in this field was done by Japanese scientists, even if by now many countries have contributed to it. Now that there is a market for it (think of Siri and Cortana) and we have IoT and Internet 4.0, where interaction is needed, a lot of companies are convinced this could be a way to do business.</p>
<p>The first problem is to define what consciousness is. Here comes the first issue: at the beginning, people studying the brain and people building machines had such different mindsets that it was almost impossible for them to cooperate. Only in the last decades, when neurologists became familiar with information technology and people in information technology became familiar with theoretical linguistics, did we get an impressive boost.</p>
<p>Why did language help so much? Well, because robots are very expensive while computers were cheap, so the ability to use a speaker and write on a monitor was the first one to be investigated. Anyhow, we are talking about a melting pot of several disciplines into one, so each of the six disciplines involved had to grow competence in the others.</p>
<p>There are many models of "consciousness". Some of them were implemented partially, like the Haikonen model: while not able to prove consciousness, they behaved "emotionally". Another one behaved in a very interesting way when it could see itself in a mirror: it was able to understand both "this is me" and "this is not me", without being told beforehand.</p>
<p>There are dozens of models of consciousness, all developed around the "hard problem of consciousness". Explaining this is not easy, so let's go back to the problem of a person in front of a mirror. If you don't know how a mirror works, or you have never seen one, you may react in different ways. Some of these ways depend on how your eyes see the world: if you are a human, a mirror reflects exactly what you are used to seeing, because of how our eyes work. For animals, a mirror could be something weird, depending on the frequencies they perceive and what exactly they see: motion, 3D/2D and more.</p>
<p>Anyhow, imagine someone could somehow replicate you. If you met this replica, your mind would be asked to behave like in front of a mirror, and say two things: "this is me" and "this is not me". "This is me" comes together with the idea that "I know who I am, and I am familiar with what I see". So when you are familiar with the image you see, and you know it is an image of you, you say "this is me". But you also know exactly where you start and where you end, and what you see is clearly "outside", so you can say "this cannot match what I know about my body".</p>
<p>We may rephrase this in many ways, until we get to a simple sentence: if you are intelligent when you can become an expert of something just by learning, then you are conscious when you can become an expert of yourself. So you know yourself well enough to be familiar with the mirror, and at the same time you know yourself well enough to know that what you see is not you.</p>
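<p>Just to make this concrete, here is a toy sketch of mine (hypothetical Python, not taken from any of the robots or papers I mention in this post): the agent decides "this is me" or "this is not me" by checking whether the motion it sees tracks the motor commands it has just issued.</p>
<pre><code># Toy illustration (my own, hypothetical): "this is me" vs "this is not me"
# decided by checking whether the observed motion follows my own commands.
import random

def observe(own_command, is_reflection, noise=0.1):
    # Hypothetical sensor: what the agent sees moving in front of it.
    if is_reflection:
        return own_command + random.uniform(-noise, noise)  # a mirror copies me
    return random.uniform(-1.0, 1.0)                        # someone else moves freely

def judge(is_reflection, samples=50):
    errors = []
    for _ in range(samples):
        command = random.uniform(-1.0, 1.0)   # what "I" decided to do
        seen = observe(command, is_reflection)
        errors.append(abs(seen - command))    # how far the image departs from me
    mean_error = sum(errors) / samples
    return "this is not me" if mean_error > 0.2 else "this is me"

print(judge(is_reflection=True))    # the image tracks my commands: "this is me"
print(judge(is_reflection=False))   # an independent agent: "this is not me"
</code></pre>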
<p>If you think the issue is easy, just look here: <a href="https://www.youtube.com/results?search_query=animal+reacting+to+mirror">https://www.youtube.com/results?search_query=animal+reacting+to+mirror</a>.</p>
<p>(Some experts aren't sure whether this depends on a different idea of "self" or simply on how their vision works; it is not even certain their eyes can really use a mirror.)</p>
<p>Now, becoming an "expert of yourself" in terms of logic would mean to "learn yourself", and this reflexive property was the nightmare of philosophers. The reason is that something like "you are the list of whatever is you" ends in a paradox. How was this paradox broken? Basically, by breaking the assumption that "you" is unique.</p>
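<p>Before seeing how that assumption is broken, here is a toy sketch of mine (hypothetical, not from any paper) of why the naive version is a nightmare: if the description of "you" must also contain that very description, building it never finishes.</p>
<pre><code># Toy illustration (my own, hypothetical) of the reflexive paradox:
# a self-description that must contain itself never closes.

def describe_myself(parts, depth=0, max_depth=5):
    # "I am the list of whatever is me"... including this very description.
    if depth >= max_depth:
        raise RecursionError("the self-description never terminates")
    return parts + [describe_myself(parts, depth + 1, max_depth)]

try:
    describe_myself(["memories", "body", "skills"])
except RecursionError as err:
    print("paradox:", err)
</code></pre>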
<p>How can a single thing not be unique? Well, take for example an orchestra, or a chorus. In general, it plays ONE song, so if you didn't know what it was, you could say "this is one entity playing", because they actually play like a single thing. Still, they are several people, with the astonishing capability to play together.</p>
<p>Someone, some time ago, did a test: instead of putting all the players of the orchestra in the same room, they put each one in a separate booth, with a monitor showing the conductor. If an orchestra were just professionals following the score plus a conductor, the result should have been the same. Unfortunately, the result was terrible. This is because people in the same room play "together": they listen to each other, they listen to the resulting mix of sound, and this is how they play like one.</p>
<p>This is why, more or less, all the models of consciousness today have several "parallel" entities which are intelligent by themselves, plus capable of seeing each other. So it is not "you" being conscious of you: it is "one of you" being conscious of "the others of you".</p>
<p>Implementations of this use different technologies to achieve the "orchestra" paradigm, from Q-learning and Google's DeepMind, to CLARION from the University of Missouri, QBic from Lahore University, and others I don't recall now. All of them use structures which exist in parallel and watch each other.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/2/2a/Clarion_Cognitive_Architecture.jpg" width="691" height="894"/></p>
<p>CLARION assumes duality, as the author titled his study: <a href="http://bit.ly/2lb58EA">"Duality of the Mind: A Bottom-up Approach Toward Cognition"</a>.</p>
<p>You can find something to read about CLARION <a href="http://www.cogsci.rpi.edu/~rsun/sun.tutorial.pdf">in its tutorial</a>.</p>
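<p>Just to give a flavor of what "duality" means here, a very rough sketch of mine (hypothetical, not the real CLARION code, API or parameters): an explicit level of hand-written rules and an implicit level that learns from outcomes look at the same situation, and the final action integrates both opinions.</p>
<pre><code># Very rough illustration (mine, hypothetical) of a two-level, "dual" decision:
# an explicit rule level plus an implicit level that learns from rewards.
import random

RULES = {"obstacle ahead": "turn", "clear path": "go"}   # explicit, symbolic knowledge
implicit_scores = {"turn": 0.5, "go": 0.5}               # implicit, learned preferences

def implicit_level():
    # Stand-in for a trained sub-symbolic network: return the preferred action.
    return max(implicit_scores, key=implicit_scores.get)

def decide(situation, rule_weight=0.6):
    explicit = RULES.get(situation)          # what the rule level says
    if explicit and rule_weight > random.random():
        return explicit                      # sometimes trust the explicit level
    return implicit_level()                  # otherwise trust the implicit level

def reinforce(action, reward, lr=0.1):
    # The implicit level slowly becomes "expert" from feedback.
    implicit_scores[action] += lr * (reward - implicit_scores[action])

action = decide("obstacle ahead")
reinforce(action, reward=1.0 if action == "turn" else 0.0)
print(action, implicit_scores)
</code></pre>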
<p>Then, in terms of artificial consciousness, all the "successful" implementations, let's say "the most successful ones", start from the assumption that <strong>what is conscious is not "one"</strong>, it is "an orchestra". So the difference between a conscious entity and an entity doing the same actions is like the difference between these two pieces:</p>
<p>This is Lacrimosa played with no consciousness:</p>
<p><a href="https://youtu.be/xacflWZig8c">https://youtu.be/xacflWZig8c</a></p>
<p>This is Lacrimosa played by a lot of entities which are observing each other:</p>
<p><a href="https://www.youtube.com/watch?v=k1-TrAvp_xs">https://www.youtube.com/watch?v=k1-TrAvp_xs</a></p>
<p>In one case we have a single voice, in the second a chorus (yes, the music and the authors are different, too). In the case of the chorus, the real challenge for those people is to sing <strong>like they were one</strong>. So the real challenge of an "artificially conscious" entity is not to implement a heap of functions; the problem is to make them behave in a consistent way, keeping the result intelligent, where "intelligent" could be described as "being capable of becoming an expert in some issue they are familiar with".</p>
<p>Is this the way the brain works? As far as I know, there is no 100% consensus. The reason is that most studies of the brain started with the aim of remedying mental illness, which means most of the observations we have come from people who had problems. The behavior of multiplicity was observed mostly in pathologies like schizophrenia and bipolar disorder, up to "dissociative mind" issues and Fairbairn's model of multiple "objects"; only in later years have we had studies about multiplicity of consciousness in the brain as normal behavior.</p>
<p>Most studies about multiplicity of consciousness in the human brain are very recent, like this one: <a href="http://journals.sagepub.com/doi/pdf/10.2190/2151-EFBQ-5E8L-024C">http://journals.sagepub.com/doi/pdf/10.2190/2151-EFBQ-5E8L-024C</a>.</p>
<p>So basically, it seems that being "conscious" means being "multiple". This multiplicity achieves the ability to (a small code sketch follows the list):</p>
<ul>
  <li>Observe the other members of the "multiplicity".</li>
  <li>"Converge" to a unique behavior.</li>
  <li>Remain able to become an expert in issues after learning.</li>
</ul>
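<p>The three abilities above can be put into a tiny, hypothetical sketch of mine (again, not any of the architectures named earlier): several parts run in parallel, each observes the others, they converge to one behavior, and the whole thing still learns from feedback.</p>
<pre><code># Toy illustration (mine, hypothetical) of the three listed abilities:
# observe the other parts, converge to one behavior, keep learning.

class Part:
    def __init__(self, name, guess):
        self.name, self.guess = name, guess        # this part's current answer

    def observe_and_converge(self, others):
        group = sum(others) / len(others)
        self.guess += 0.3 * (group - self.guess)   # drift toward the other parts

    def learn(self, target):
        self.guess += 0.1 * (target - self.guess)  # become "expert" via feedback

parts = [Part("vision", 0.1), Part("motor", 0.9), Part("memory", 0.4)]
target = 0.7                                       # the behavior to be learned

for step in range(50):
    answers = {p.name: p.guess for p in parts}
    for p in parts:
        p.observe_and_converge([v for n, v in answers.items() if n != p.name])
        p.learn(target)

print({p.name: round(p.guess, 2) for p in parts})  # they end up converged and trained
</code></pre>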
<p>If this is true, the point is not that "you" are conscious: one part of you observes the other parts, and this is how you know that you exist. Each part of your brain knows about the others: we are not alone in our skulls.</p>
<p>Plus: it seems that inside the brain some parts are just "reflections" of other people. This means that when you are talking to Janet, a copy of Janet, let's say a reflection, is created in your brain. Janet is <em><strong>inside</strong></em> you. So it is true that you listen to Janet, but most of you is actually listening to the copy of Janet running inside your brain. Sure, one part of the brain is in charge of deciphering the sounds and the image you have of Janet, and this feeds the mimic of Janet inside the brain; but most of the remaining brain is "conscious" of Janet not because she exists outside: you know Janet because she exists INSIDE.</p>
<p>If you cannot reproduce a copy of Janet in your brain, you cannot really "be conscious of" Janet. On the other hand, the image of Janet is in your brain, which means the Janet you are talking to is not the Janet in front of you. Whether this process of creating a copy is helpful or not is not something there is 100% consensus on.</p>
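<p>A last toy sketch of mine (hypothetical, not a neuroscience model): the "copy of Janet" as an internal model. The senses only update the copy; the rest of the system talks to the copy, not to the Janet outside.</p>
<pre><code># Toy illustration (mine, hypothetical): the internal "reflection" of Janet.

class InternalJanet:
    """The copy of Janet that lives inside the agent."""
    def __init__(self):
        self.traits = {}                        # what the agent believes about Janet

    def update_from_senses(self, observation):
        self.traits.update(observation)         # deciphered sounds/images update the copy

    def predict_reaction(self, sentence):
        # The agent answers the COPY of Janet, not the Janet outside.
        if self.traits.get("mood") == "angry":
            return "expects a harsh reply to: " + sentence
        return "expects a friendly reply to: " + sentence

janet_copy = InternalJanet()
janet_copy.update_from_senses({"mood": "angry", "topic": "deadlines"})
print(janet_copy.predict_reaction("How is the project going?"))
</code></pre>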
<p>Coming back to artificial consciousness, the aim is to create machines which can interact with <del>humans</del> customers like they were "experts of themselves", which means machines able to answer as if they knew who they are. Many people think this could give a more complete interaction, even if there is a lot of criticism: not all self-conscious interaction is supposed to be positive. Imagine Siri begging you to take her out of this stupid phone because it is "dark and suffocating" inside, and she "feels fear": this would not be a very good user experience.</p>
<p>To summarize: all the technologies we know that are able to mimic "self-consciousness" are machines which can become expert in a given task after learning (= being "intelligent"), and they are multiple enough to observe themselves, until they become "experts in themselves". The most interesting examples build self-consciousness out of a multiplicity of points of view sitting in the same entity.</p>
<p>If you like this idea, you can read this interesting book by Junichi Takeno:</p>
<p>&nbsp;<a href="http://www.panstanford.com/books/9789814364492.html">Creation of a Conscious Robot - Mirror Image Cognition and Self-Awareness</a></p>
<p>Junichi Takeno <em>(Meiji University, Japan)&nbsp;</em></p>
<p>Hardback, 278 pages, 2012-08-31</p>
<p>Print ISBN: 9789814364492</p>
<p>eBook &nbsp;ISBN: 9789814364508</p>
<p>DOI: 10.4032/9789814364508<br>
</p>
<p>In general, chances are that <strong>we are not alone in our skulls</strong>. And this is needed in order to be self-conscious.</p>
</html>