
Tx 5a7ec45c84dd820a411da1c6ba7817b41427c01c@18004650

Included in block 18,004,650 at 2017-12-11 23:02:54 (UTC)


Raw transaction

ref_block_num 47,764
ref_block_prefix 2,749,899,573
expiration 2017-12-11 23:12:39
operations
0.
0. comment
1.
parent_author ""
parent_permlink neural
author peterthor
permlink brain-stack-physical-reality-neural-network
title "Brain Stack: physical reality == neural network"
body "The deep mathematical connection between physical reality and neural networks is amazing.
It does not seem obvious why deep neural networks should be so good at modelling the patterns and properties in our universe. On the surface, it seems impossible for them to do much of anything useful.
Consider an image with 256*256 RGB pixels, where each pixel is represented by a 24-bit color value (8 bits per RGB component).
Number of pixels: 256*256 = 65,536 = 2^16
Number of colors: 2^24 = 16,777,216
Number of distinct pictures: (2^24)^(2^16) = 2^(24*(2^16)) = 2^1572864 ≈ 1.7*10^473479
Estimated number of atoms in the observable universe: 10^80
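For a quick sanity check, this count can be reproduced with Python's arbitrary-precision integers (a minimal sketch using only the standard library):

```python
import math

pixels = 256 * 256            # 2^16 pixels per image
colors = 2 ** 24              # 24-bit color => 2^24 values per pixel

# Each pixel independently takes one of `colors` values, so the number of
# distinct images is colors**pixels = 2^(24 * 2^16) = 2^1572864.
bit_exponent = 24 * pixels
distinct_images = 2 ** bit_exponent

# Decimal order of magnitude, computed from the bit exponent to avoid
# converting a ~1.5-million-bit integer to a decimal string.
decimal_exponent = bit_exponent * math.log10(2)
print(f"distinct images ~ 10^{decimal_exponent:,.0f}")   # ~ 10^473,479

atoms_in_observable_universe = 10 ** 80
print(distinct_images > atoms_in_observable_universe)     # True, by an absurd margin
```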
So, you would think that it would take a horrendously enormous number of universes to have enough material to build a neural network large enough to categorize such a huge number of possible images completely. This set of images would include pictures of everything that is possible and that is not possible. It would contain snapshots of mathematical proofs that have never been discovered, and the Magna Carta, and the Mona Lisa, as well as every imaginable configuration of skydiving tyrannosaurus rexes wearing “I Like Ike” buttons. But the vast majority of these images would not look like anything at all; they would look pretty much like an old-fashioned CRT TV that was not tuned to a broadcasting channel (aka "snow").
But the trick is that, basically, the laws of physics impose huge constraints on what is possible, and only those plausible images are recognizable, likely to occur, and significant. This is because the universe is governed by very simple equations, such as the Navier-Stokes, Maxwell, Hamiltonian, and Boltzmann equations, together with Noether's theorem. There is good reason to think that neural networks mimic these equations in an efficient hierarchical manner, with logarithmic complexity.
Why is reality so outrageously constrained? Perhaps reality is itself just one big neural network, and we are all just thoughts within that universal brain. And it is brains all the way down. It would explain the mystical observation of the “unreasonable effectiveness of mathematics” noted by physicist Eugene Wigner in 1960. Ditto for Peter Norvig’s idea about the “unreasonable effectiveness of data”. How cool is that!?!?!"
json_metadata{"tags":["neural","network","physical","reality"],"app":"steemit/0.1","format":"markdown"}
extensions[]
signatures
0.20071643e22bcecc24ae98026ed8723e59e0c0905bbae602e6fce6057ba8528bc36e48809b6a98872895916a1d65bbfeb65d5d15f175477cdb9bc23d11babad846
transaction_id5a7ec45c84dd820a411da1c6ba7817b41427c01c
block_num18,004,650
transaction_num6
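For reference, the raw transaction above can be fetched from a public Steem RPC node. The sketch below assumes the node at https://api.steemit.com exposes the condenser_api.get_transaction method (not every node enables the required plugin) and that operations are returned as (name, payload) pairs, matching the layout shown above:

```python
import json
import requests

# Fetch the transaction by its id from a public RPC endpoint (assumed endpoint).
payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_transaction",
    "params": ["5a7ec45c84dd820a411da1c6ba7817b41427c01c"],
    "id": 1,
}
tx = requests.post("https://api.steemit.com", json=payload, timeout=10).json()["result"]

# The first (and only) operation is a ("comment", {...}) pair.
op_name, op_data = tx["operations"][0]
print(tx["block_num"], op_name)                   # 18004650 comment
print(op_data["author"], op_data["permlink"])     # peterthor brain-stack-physical-reality-neural-network
print(json.dumps(op_data, indent=2)[:300])        # title, body, json_metadata, ...
```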
