popularscience | Recent


· @din3ysaur ·
"Unlock the Cosmos: A Quick Journey through Astrophysics for the Time-Pressed Mind"
https://amzn.to/41AdvuM


"Dive into the celestial wonders of our universe with 'Astrophysics for People in a Hurry.' This captivating book, authored by Neil deGrasse Tyson, offers a condensed yet comprehensive exploration of astrophysics, making complex cosmic concepts accessible to everyone. From the origins of the cosmos to the nature of dark matter and the secrets of black holes, Tyson takes readers on a thrilling journey through the cosmos, unraveling the mysteries of space and time. Whether you're a casual reader or a science enthusiast short on time, this book delivers a fascinating crash course on the wonders of astrophysics, leaving you with a newfound appreciation for the grandeur and complexity of the universe."
👍  , , , , ,
1 reply
· @ibahar ·
$0.07
Internet Birth Secret
http://i66.tinypic.com/2s9x5xl.jpg

The Internet has become a necessity without which modern life is hard to imagine. It spans the globe as a collection of interconnected computer networks that exchange data using a standard system called IP, the 'Internet Protocol'.

The Internet is a network of networks, composed of computer networks joined together by gateways and routers. Although the two terms are often used interchangeably, the Internet and the World Wide Web are not the same thing.

The Internet's hardware and software infrastructure provides an international communication system among many computers. The World Wide Web, on the other hand, is a service provided through the Internet: a collection of interactive documents and other resources linked by hyperlinks and URLs.

The history of the Internet begins with the progress of electronic computers in the 1950s. In the 1960s, the US military research agency, the Advanced Research Projects Agency (ARPA), developed a computer-based communication system linking some US universities and laboratories.

This network, built on the 'packet switching' principle, became known as 'ARPANET'.

The public got its first glimpse of the Internet in 1969, when a team led by computer science professor Leonard Kleinrock at UCLA sent a message over ARPANET to the Stanford Research Institute (SRI), where the second node of the network had been installed.

In the late 1960s and early 1970s, packet-switching networks such as the UK's NPL Mark I, France's CYCLADES, the Merit Network, Tymnet, and Telenet were introduced. The ARPANET project led to the development of internetworking protocols, through which multiple separate networks could be joined into a single network of networks.

In 1981, access to ARPANET was expanded when the National Science Foundation (NSF) established the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, and ARPANET adopted it as its standard networking protocol.

In the 1980s, the National Science Foundation (NSF) established national supercomputing centers at several universities, and in 1986 linked them through the NSFNET project, which allowed research and education institutions in the United States to connect to the supercomputer sites.

http://i64.tinypic.com/15697d5.jpg

Commercial Internet Service Providers (ISPs) emerged in the late 1980s. The ARPANET project was decommissioned in 1990, and when the NSFNET project was closed in 1995, the last restrictions on commercial use of the Internet were removed. In 1989, British computer scientist Tim Berners-Lee began work on the World Wide Web (WWW).

http://i63.tinypic.com/mw3bxk.jpg

Since the mid-1990s, the Internet has revolutionized culture, commerce, and much else, bringing instant communication through electronic mail, instant messaging, Voice over Internet Protocol (VoIP), and Internet forums, along with the World Wide Web, blogs, social networking, and online shopping.
👍  , ,
1 reply
· @budz82 · (edited)
1936 Popular Science Article: How Will The World End? (Planet X/Nibiru/Red Kachina)
https://www.youtube.com/watch?v=Wb3ntnoHHtk

How will the world end? Well, according to an article published in the magazine Popular Science over 80 years ago, in 1936, it may be from the passing of what they refer to as a Wandering Sun.

<i>"For many millions of years, our planet has circled its parent sun on a time schedule so accurate that it varies only a fraction of a second in each century. The earth is always "on time" throughout every round trip.
Yet, it is quite possible that this regular and peaceful schedule may someday be interrupted by an unforeseen event, which, if it does occur, will probably bring humanity's greatest and perhaps final, catastrophe! By means of simple experiments, you can study the possible ways in which our planet's doom might come, and show the forces which may someday bring ruthless destruction upon a helpless world.
Strangely, the way which offers the greatest menace to the earth is exactly the way in which the earth itself came into being! It is now generally believed that the material which later condensed into the planets of our solar system was drawn out of the sun in huge tidal streamers, raised by the close passage of another; a wandering sun.
This passage may have been almost a grazing collision, for two great, flaming cigars of incandescent matter, millions of miles long, were lifted by the huge tidal forces out of our sun.
Later, when the invader receded, and the duel of gravitational forces subsided, the flaming streamers were left circling around the suns which had torn them loose in their titanic tug of war."</i>

So what do you think? Is this a likely possibility for the end of the world? And could this be the same cosmic body that is commonly referred to today as Planet X, Nibiru or the Red Kachina? Share your thoughts in the comments below, or read the entire 1936 article here: https://books.google.ca/books?id=QygDAAAAMBAJ&lpg=PA52&dq=end+of+world&pg=PA52&redir_esc=y&hl=en#v=onepage&q=end%20of%20world&f=false


----

<b>About The Author:</b>
![barkey.jpg](https://steemitimages.com/DQmPQ5fLpE7TrdUJZT1KbZPU1NXA1zKD2uGFhtyURSiHs24/barkey.jpg)
My name is Brandon (AKA TruthNow88), I am 35 years old and live just outside of Toronto in the Great White North known as Canada. I have a 1 year old Cocker Spaniel/Golden Retriever crossbreed named Barkley (otherwise known as Monster... I swear this dog's so hyper it may very well be the key to zero-point energy...). I am a Search Engine Optimization specialist, website designer/analyzer and amateur blogger by day, and a coffee drinking, weed smoking, crypto trading conspiracy theorist by night. I love things that get my brain pumping and am really into politics, economics, world events, conspiracy theories, and more recently cryptocurrencies, as well as TV/movies and travel, and I am also a huge fan of comedy in general (as they can't make the world go to war if everyone is laughing). I am somewhat new to the crypto world and Steemit in general but recently decided to jump in with both feet and see what happens. So far I am loving the people and concepts that Steemit and the crypto world are creating for the future, and I feel honored to be part of its infancy. Anyways, I guess that's about it, and remember: let nothing hold you back... FULL STEEM AHEAD!
👍  ,
1 reply
· @antv ·
$3.16
Popular Science is not the magazine it used to be
https://newsrealblog.files.wordpress.com/2009/11/global-warming-meeting-postponed.jpg

I picked up the latest issue of Popular Science. You know, I used to like this magazine. It would teach you how to build something, some type of electronic device or something fun, but wow, look at what has happened to this magazine.

When I looked at the cover I knew something was up: it's got an umbrella on fire. The whole magazine is basically about global warming. They use these stats and have this big graph of warmer temperatures per decade, and you can tell they twist the stats and make it sound exactly the way they want it to.

And on top of it all, they're blaming all the strange weather on global warming. Whether it's cold, whether it's snow, whether it's heat, it doesn't matter what it is: it's man's fault and it's global warming.

Then they talked about the swiftly dimming planet. It probably is dimming, with the chemtrails you see in the sky every day. They've even got practically a centerfold of Heidi, "the blame changer," who claims that 7 in 10 Americans believe that global warming is real and is happening. It's called climate change, folks, get used to it. It's happened in the past and it will happen in the future; the climate is always changing no matter what man does.

Oh, and by the way, do I think I'll be subscribing to Popular Science? Not on your life. It's just another propaganda magazine.

👍  , , , , , , , , , , and 14 others
1 reply
· @krishtopa · (edited)
$8.52
An algorithm for calculating the real roots of polynomials
![](https://i.imgur.com/f3HUFcr.png)

*The basic idea of this algorithm is very simple and can be described in two sentences. A real root of a polynomial can always be calculated on an interval where the polynomial varies monotonically, between two roots of its derivative. The derivative of a polynomial is itself a polynomial, but of lower degree, so once its real roots are found, the roots of the original polynomial can be calculated between them by the bisection method.*

--------

The problem of finding the roots of algebraic polynomials has been known for a long time, at least since the Middle Ages. There is the well-known Lobachevsky method for solving algebraic equations of arbitrary degree. The essence of the Lobachevsky method can be briefly summarized as follows.

Given some initial polynomial, it is not difficult to build a second polynomial whose roots are the same as those of the original but with the opposite sign. Multiplying the original polynomial by this second polynomial, we obtain a polynomial whose roots are the squares of the roots of the original polynomial.

![](https://i.imgur.com/qwkCsi5.gif)

This squaring is repeated several times. As a result, if the roots of the original polynomial are not equal to each other, their squared values separate strongly, and their approximate values can be expressed very simply in terms of the coefficients of the corresponding squared polynomial.

In particular, if the leading coefficient of the polynomial equals one, then the next coefficient equals (with the opposite sign) the sum of the roots of the equation, and since the values of these roots are strongly separated, this sum is approximately equal to the root that is largest in modulus.

For a polynomial of degree 4 with the roots 1, 2, 3, 4, the Lobachevsky method gives the correct root values after the fourth squaring. At the same time, a long double format is enough to represent the polynomial's coefficients.
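
To make the root-squaring idea concrete, here is a minimal Python sketch (an illustration written around the text's own example, not the author's code): each step multiplies p(x) by p(-x) to square the roots, and after four squarings the magnitudes 1, 2, 3, 4 are read off the coefficient ratios.

```python
import numpy as np

def graeffe_step(c):
    """One root-squaring step: p(x) * p(-x) is an even polynomial q(x^2),
    and the roots of q are the squares of the roots of p.
    Coefficients are ordered from the highest power down (numpy convention)."""
    n = len(c) - 1
    signs = np.array([(-1.0) ** (n - i) for i in range(n + 1)])
    q = np.polymul(c, c * signs)[::2]  # keep the even powers only
    return q / q[0]                    # renormalize to a monic polynomial

c = np.poly([1.0, 2.0, 3.0, 4.0])      # x^4 - 10x^3 + 35x^2 - 50x + 24
m = 4                                  # number of squarings
for _ in range(m):
    c = graeffe_step(c)

# For well-separated roots, |root_k| ~ |c_k / c_(k-1)| ** (1 / 2^m).
print([abs(c[k] / c[k - 1]) ** (1.0 / 2 ** m) for k in range(1, len(c))])
# -> approximately [4.0, 3.0, 2.0, 1.0]
```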

**Now I'll start to describe a different method.**

This algorithm investigates successive intervals of monotonic variation of the original polynomial. If the values of the polynomial at the boundaries of such an interval have different signs, a procedure of dividing the interval in half is started to compute the exact value of the root inside. The boundaries of the monotonicity intervals are the points at which the derivative of the polynomial equals zero, i.e., the roots of the derivative polynomial.

![](https://i.imgur.com/1X8NN1T.gif)

The derivative polynomial has a degree one less than the original polynomial, and the process of computing the coefficients of successive derivatives must be continued down to a polynomial of the first degree, whose root can be calculated without the interval-halving procedure. As a result of this step, we obtain two intervals of monotonic variation for the derivative polynomial of the second degree.

Now we can find the two real roots of the second-degree derivative polynomial (if they exist), and then work our way back up to the roots of the original polynomial.

We normalize the polynomial so that its leading coefficient equals one. Let M be the largest modulus among its remaining coefficients. Then the value of the polynomial is greater than one in modulus for all argument values larger than M + 1.

Thus, if you want to determine the sign of the polynomial at an "infinite" value of the argument, it suffices to take the argument equal to M + 1.

Now I want to comment on one feature of the implementation of the interval-halving procedure.

The trial point pt, located midway between the current interval boundaries ng and vg, is computed by the statement `pt = 0.5 * (ng + vg);`, and the halving loop is terminated by `if (pt <= ng || pt >= vg) break;`.

![](https://i.imgur.com/jg9GhZL.jpg)

Due to the finite precision of machine floating-point numbers, sooner or later the halving operation, instead of producing a new value, returns one of the existing boundaries. At that point the loop must stop: this state corresponds to the maximum achievable accuracy of the result.
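
Putting the pieces together, here is a compact Python sketch of the whole method (an illustrative reimplementation, not the author's code; tangential multiple roots are not handled): the real roots of the derivative bound the monotonicity intervals, M + 1 stands in for infinity, and the halving loop stops exactly when the midpoint collapses onto a boundary.

```python
import numpy as np

def bisect(p, ng, vg):
    """Halve [ng, vg] until the midpoint collapses onto a boundary."""
    lo_negative = p(ng) < 0
    while True:
        pt = 0.5 * (ng + vg)
        if pt <= ng or pt >= vg:          # machine precision reached
            return pt
        if (p(pt) < 0) == lo_negative:
            ng = pt
        else:
            vg = pt

def real_roots(c):
    """Real roots of the polynomial with coefficients c (highest power first)."""
    c = np.asarray(c, dtype=float)
    c = c / c[0]                           # normalize to a monic polynomial
    if len(c) == 2:                        # linear case: exact root
        return [-c[1]]
    M = float(np.max(np.abs(c[1:])))       # |p(x)| > 1 whenever |x| > M + 1
    bounds = [-(M + 1.0)] + sorted(real_roots(np.polyder(c))) + [M + 1.0]
    p = lambda x: np.polyval(c, x)
    roots = []
    for ng, vg in zip(bounds, bounds[1:]): # intervals of monotonic variation
        if p(ng) == 0.0:
            roots.append(ng)
        elif p(ng) * p(vg) < 0.0:          # sign change: exactly one root here
            roots.append(bisect(p, ng, vg))
    return roots

print(real_roots([1, -10, 35, -50, 24]))   # -> approximately [1.0, 2.0, 3.0, 4.0]
```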

--------

*Undoubtedly, this method is a valuable tool in the hands of a researcher. However, programming it for modern computing systems causes serious difficulties when strict guarantees of reliable results are required for all kinds of special root configurations.*


References: [1](http://math.stackexchange.com/questions/47030/why-does-this-distribution-of-polynomial-roots-resemble-a-collection-of-affine-i), [2](http://www2.lv.psu.edu/ojj/courses/cmpsc-201/numerical/roots.html)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**

Image credit: [1](http://www.slideshare.net/dmidgette/polynomials-in-the-real-world), [2](http://zeroinfy.com/enrol/index.php?id=346), [3](http://www.writeopinions.com/root-of-a-polynomial), [4](http://math.stackexchange.com/questions/47030/why-does-this-distribution-of-polynomial-roots-resemble-a-collection-of-affine-i)
👍  , , , , , , , , , , and 239 others
2 replies
· @krishtopa ·
$8.27
Optimization of transmission delay
![](https://i.imgur.com/8JjFUV0.jpg)

*Low latency is an important factor in the reliable operation and high performance of networks. Applications for real-time communication and streaming depend heavily on it: an increase in delay of only a few milliseconds can distort image and voice and cause financial losses.*

*Providers try to monitor network bandwidth and latency fluctuations, but increasing the "width" of the channel often has no effect on network delay. In this article, we look at the main causes of delay and ways to overcome them.*

--------

# The delay and its impact on the quality of communication

In packet-switched networks, the relationship between delay and bandwidth is complex, and the two are easily conflated. The waiting time is composed of the following components (a back-of-the-envelope calculation follows the list):

- serialization delay - the time a port needs to transmit a packet
- propagation delay - the time a bit needs to reach the receiver (determined by the laws of physics)
- overload (queuing) delay - the time a frame spends in the output queue of a network element
- transmission (processing) delay - the time a network element spends on analysis, processing, and forwarding of the packet
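
As a rough illustration of how these components add up over a single link, here is a sketch in Python; all the numbers are invented assumptions, not measurements.

```python
PACKET_BITS  = 1500 * 8    # one full Ethernet frame
LINK_RATE    = 100e6       # 100 Mbit/s port
DISTANCE_M   = 500e3       # 500 km of fiber
SIGNAL_SPEED = 2e8         # ~2/3 the speed of light, in glass, m/s
QUEUING_S    = 0.2e-3      # assumed time spent in the output queue
PROCESSING_S = 0.05e-3     # assumed analysis/forwarding time

serialization = PACKET_BITS / LINK_RATE    # 0.12 ms
propagation   = DISTANCE_M / SIGNAL_SPEED  # 2.50 ms

total = serialization + propagation + QUEUING_S + PROCESSING_S
print(f"one-way link delay: {total * 1e3:.2f} ms")   # ~2.87 ms
```

Note that the propagation term dominates here, which is exactly why adding bandwidth alone often fails to reduce latency.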

# Traffic management

![](https://i.imgur.com/QKMMOsd.jpg)

"Traffic management" – an ability of a network to handle different types of traffic with different priorities.

This approach is used in networks with limited bandwidth for critical applications that are sensitive to delays. Management can mean restricting traffic for specific services, such as e-mail, and allocating channel capacity to business applications.

To manage traffic and communication quality in an organization's network, engineers recommend:
- Set up the network so that traffic can be monitored and classified
- Analyze network traffic to understand how applications behave
- Implement appropriate separation of access levels
- Monitor and report, to actively manage changes in traffic distribution patterns

The most effective way to control traffic is hierarchical quality of service (H-QoS), a combination of network policies, filtering, and traffic-capacity management. H-QoS will not reduce speed as long as all network elements provide ultra-low latency and high performance. The main advantage of H-QoS is that it reduces latency without the need to increase bandwidth.

# Using NIDs

Network interface devices (NIDs) make it possible to monitor and optimize traffic at low cost. Typically, such devices are installed on the subscriber's premises, at network towers, and at other hand-off points between network operators.

![](https://i.imgur.com/gY5YdJf.jpg)

An NID provides control over all network components. If the device supports H-QoS, the provider can not only monitor the operation of the network but also apply individual settings for each connected user.

# Caching

A relatively small increase in channel capacity will not, by itself, solve the problem of poor network-application performance. Caching helps accelerate content delivery and optimize bandwidth use. It can be regarded as trading storage for speed: the network works faster, as if it had been upgraded.

Typically, organizations use caching at several levels. Proxy caching is worth mentioning: when a user requests some data, the request can be served by a local proxy cache.

Proxy caches are a form of shared cache: they work with a large number of users and are very good at reducing latency and network traffic. One useful application of a proxy cache is connecting several remote employees to a set of interactive web applications.

# Data compression

The main task of data compression is to reduce the size of the files sent over the network. In this respect it is similar to caching and can give an acceleration effect comparable to increasing channel capacity. One of the most common approaches is the Lempel-Ziv family of algorithms, used, for example, in ZIP archives (the LZ77-based DEFLATE) and in the UNIX compress utility (LZW).

However, in some situations data compression can cause problems. For example, compression does not scale well in its use of RAM and CPU resources. Compression is also rarely beneficial when the traffic is encrypted: with most encryption algorithms, the output contains almost no repeated sequences, so such data cannot be compressed by standard algorithms.
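
A quick sketch makes both points visible (Python's zlib implements DEFLATE, an LZ77-based relative of LZW; random bytes stand in for encrypted traffic):

```python
import os
import zlib

text  = b"the quick brown fox jumps over the lazy dog " * 200
noise = os.urandom(len(text))   # statistically similar to ciphertext

print(len(text), "->", len(zlib.compress(text)), "bytes")    # shrinks drastically
print(len(noise), "->", len(zlib.compress(noise)), "bytes")  # slightly grows
```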

![](https://i.imgur.com/ENASYfa.jpg)

For network applications to work efficiently, bandwidth and delay problems must be solved simultaneously. Data compression addresses only the first, so it is important to use it in conjunction with traffic management techniques.

-------

*Today, engineers are constantly conducting research to improve the performance and efficiency of networks: developing better management methods that eliminate packet loss when ports are overloaded, and creating data link layer protocols capable of providing the shortest possible connections over Ethernet.*

*Maintaining high connection quality in networks is an important task for modern organizations.*

References: [1](http://www.jaarverslag-infrabel.be/2014/en/safety/new-traffic-management), [2](https://en.wikipedia.org/wiki/Transmission_delay)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**

Image credit: [1](http://www.jaarverslag-infrabel.be/2014/en/safety/new-traffic-management), [2](http://11a0d684ccb55213e041fe54.enjoythesilencephalolith.ru/search/465355a74405c01d8d5247848b42a48c), [3](https://nanohub.org/resources/16459/watch?resid=16460), [4](http://asia.in.ua/it-issledovateli-iz-gonkonga-otkryli-sekret-uskoreniya-peredachi-dannyh-v-24-raza/)
👍  , , , , , , , , , , and 202 others
1 reply
· @digitalbrain · (edited)
$0.93
TIL: First-ever panoramic view of Earth from aboard the International Space Station
Today I learned that [RT](http://space360.rt.com/) released the first-ever panoramic view of Earth from the [International Space Station](https://www.nasa.gov/mission_pages/station/main/index.html)!

Cosmonaut [Andrey Borisenko](https://en.wikipedia.org/wiki/Andrei_Borisenko) shows a module called ‘**Cupola**’, from which the inhabitants of the **International Space Station** can observe the Earth:

https://www.youtube.com/watch?v=9oEsN9A9bmw
Credits: [RT](https://www.youtube.com/channel/UCpwvZwUam-URkxB7g4USKpg)


## Follow us @digitalbrain!
👍  , , , , , , , , , , and 52 others
12 replies
· @krishtopa ·
$12.72
Quantum communication: perspectives
![](https://i.imgur.com/cuQJJb0.jpg)

*Telegraph replaced pigeon mail. Radio replaced the telegraph. Radio, of course, will not disappear, but other data transmission technologies, wired and wireless, have joined it. Generations of communication standards replace each other very quickly: 10 years ago, mobile Internet was a luxury, and now we are waiting for the arrival of 5G. In the near future, we will need a fundamentally new technology that surpasses all current types.*

-------

# Quantum entanglement

One of the main avenues for the evolution of communication is the use of quantum effects. This technology will not exclude traditional forms of communication but may complement them (although we cannot immediately reject the idea that a network based on quantum entanglement could, in theory, displace other types of communication).

Quantum entanglement is the phenomenon on which quantum communication is built. The correlation persists even if the particles are separated by long distances: by measuring the quantum characteristics of one of the entangled particles, we automatically know the characteristics of the second. The first quantum cryptography protocol appeared in 1984. Since then, many experimental and commercial systems based on the phenomena of the quantum world have been created.
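
The bookkeeping of that first protocol, BB84, is easy to sketch classically; the toy simulation below only mirrors the protocol's logic, with no actual quantum channel involved. Sender and receiver choose random measurement bases, and only the positions where the bases happen to match survive sifting as shared key bits.

```python
import random

def bb84_sift(n_photons):
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.choice("+x") for _ in range(n_photons)]
    bob_bases   = [random.choice("+x") for _ in range(n_photons)]
    # Where the bases match, Bob reads Alice's bit exactly; mismatched
    # positions give random results and are discarded during sifting.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(1000)
print(len(key), "shared key bits kept out of 1000 photons")  # ~500 on average
```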

![](https://i.imgur.com/XENxCGW.jpg)

Today, quantum communication is used, for example, in the banking sector, where special security conditions must be met. Companies like ID Quantique, MagiQ, and SmartQuantum already offer ready-made cryptosystems. Quantum security technologies can be compared with nuclear weapons: almost absolute protection, implying, however, significant implementation costs. If you transmit an encryption key using quantum entanglement, hackers cannot intercept any valuable information: at the output they receive a different set of numbers, because the state of a quantum system changes the moment an external observer interferes with it.

Until recently, it was impossible to create a global system of this kind: the transmitted signal decayed within a few dozen kilometers. There have been many attempts to increase this distance. This year, China launched the QUESS (Quantum Experiments at Space Scale) satellite, which is to implement a quantum key distribution scheme at distances of over 7,000 kilometers.

![](https://i.imgur.com/ej1bBwq.jpg)

The satellite will generate pairs of entangled photons and send them to Earth. If all goes well, key distribution using entangled particles will mark the beginning of the era of quantum communication. Dozens of such satellites could form the basis not only of a new quantum Internet on Earth, but also of quantum communication in space: for future settlements on the Moon and Mars, and for deep-space communication with satellites traveling beyond the solar system.

# Quantum teleportation

In quantum teleportation, there is no material transfer of an object from point A to point B; what is transferred is information. Teleportation is used in quantum communication, for example for transmitting classified information, though we must understand that this is not information in the usual sense. Simplifying the model of quantum teleportation, we can say that it helps generate a matching sequence of random numbers at both ends of the channel, that is, it lets us create a cipher that cannot be intercepted. For the foreseeable future, that is the only thing quantum teleportation can do.
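
Such a shared random sequence is exactly what a one-time pad needs: used once, a key as long as the message makes the ciphertext unreadable in the information-theoretic sense. A minimal sketch of that classical half of the scheme:

```python
import os

message = b"meet at the lab at noon"
key     = os.urandom(len(message))   # stand-in for the shared random sequence
cipher  = bytes(m ^ k for m, k in zip(message, key))
plain   = bytes(c ^ k for c, k in zip(cipher, key))
assert plain == message              # XOR with the same key undoes the XOR
```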

![](https://i.imgur.com/yBdDBKx.jpg)

Photon teleportation was first demonstrated in 1997. Two decades later, teleportation works across tens of kilometers of fiber-optic networks. In theory, it is already possible to build a quantum network within a city. However, there is a significant difference between laboratory and real conditions: fiber-optic cable is exposed to temperature changes, which alter its refractive index, and exposure to the sun can shift a photon's phase, leading to errors in the protocol.

Given the pace of projects in quantum computing and data transfer, physicists themselves expect that within 5-10 years quantum communication technology will finally leave the laboratory and become as familiar as mobile communications.

# Possible disadvantages

In recent years, there has been much discussion of information security in quantum communication. It was previously thought that quantum cryptography lets us transmit information in a way that cannot be intercepted under any circumstances. It turned out that no absolutely reliable system exists: Swedish physicists have demonstrated that under certain conditions a quantum communication system can be cracked because of peculiarities in the preparation of the quantum cipher. In addition, physicists at the University of California have proposed a method of weak quantum measurements that effectively sidesteps the observer principle and allows the state of a quantum system to be inferred from indirect evidence.

---------

*However, these vulnerabilities are no reason to abandon the idea of quantum communication. The competition between hackers and developers (scientists) continues at a completely new level, using equipment with high processing power that not every hacker can obtain. In addition, quantum effects may speed up data transfer: using entangled photons, we can transmit twice as much information per unit of time if it is additionally encoded in the polarization direction.*

*Quantum communication is not a panacea, but it remains one of the most promising directions in the development of global communications.*

References: [1](http://www.space.com/14738-secret-codes-quantum-leap-space.html), [2](https://fossbytes.com/china-launching-hack-proof-quantum-communications-network-in-2016/)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**

Image credit: [1](http://www.notey.com/blogs/china-satellite), [2](http://hardware_software.complexdoc.ru/3623761.html), [3](http://www.space.com/14738-secret-codes-quantum-leap-space.html), [4](https://fossbytes.com/china-launching-hack-proof-quantum-communications-network-in-2016/)
👍  , , , , , , , , , , and 205 others
2 replies
· @digitalbrain ·
$6.26
Fusion Energy Explained
# Today I learned how the fusion energy process works from this 6-minute video:

https://www.youtube.com/watch?v=mZsaaturR6E
Credits: [Kurzgesagt – In a Nutshell](https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q)


<center>![fusion](http://fusionforenergy.europa.eu/understandingfusion/whatisfusion/Whatisfusion_2.jpg)
[Image Credits](http://fusionforenergy.europa.eu/)</center>

## And then read some more about this subject:

* [What is Fusion?](http://fusionforenergy.europa.eu/understandingfusion/)
* [Introduction to fusion](http://www.ccfe.ac.uk/introduction.aspx)
* [The Sun's Structure and Nuclear Fusion](http://earthguide.ucsd.edu/virtualmuseum/ita/07_2.shtml)
* [Nuclear Fission and Fusion](http://www.diffen.com/difference/Nuclear_Fission_vs_Nuclear_Fusion)

## Follow us @digitalbrain!
👍  , , , , , , , , , , and 116 others
6 replies
· @krishtopa ·
$18.58
How your WhatsApp Messages are secured with end-to-end encryption
![](https://i.imgur.com/4NAIaSF.jpg)

*End-to-end encryption provides the highest level of protection for users who are seriously concerned about data privacy.* 

***For private data management you no longer have to rely on third-party services that can:***
- *protect your data poorly, letting it fall into the hands of hackers, intelligence agencies, and others;*
- *scan your data and share the information for commercial gain or research without your consent.*

-------

In end-to-end encryption, the keys used to encrypt and decrypt information are generated and stored only at the endpoints of the correspondence, that is, on the participants' devices. The server takes no part in creating the keys and therefore has no access to them; it only sees the encrypted data passing between the parties.

![](https://i.imgur.com/Aa0XZu0.gif)

# How the end-to-end encryption method works

At the start of a session, two keys are generated on each participant's device: a public key and a private key. The private key is used to decrypt data and never leaves the local device.
The public key is transferred over the open communication channel to the other party (one or several, if there are more). With the public key, a correspondent can only encrypt the source data; only the holder of the matching private key can decrypt it. It therefore does not matter who intercepts the public key: all they can do with it is send encrypted data to its owner.

![](https://i.imgur.com/Jbt85ht.jpg)

Once the key pairs are generated, the interlocutors exchange public keys, and secure communication begins.
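
The pattern is easy to show with a toy Diffie-Hellman key agreement (deliberately tiny, insecure parameters chosen for readability; real messengers such as WhatsApp use the Signal protocol with elliptic-curve keys): only public values ever cross the network, yet both sides derive the same secret.

```python
import random

P = 2**127 - 1   # public prime modulus (a Mersenne prime; toy-sized)
G = 3            # public generator

# Each device generates its private key and derives a public key.
a_private = random.randrange(2, P - 1)
b_private = random.randrange(2, P - 1)
a_public = pow(G, a_private, P)   # only these two values cross the network,
b_public = pow(G, b_private, P)   # so the server never sees a private key

# Both ends compute the same shared secret independently.
assert pow(b_public, a_private, P) == pow(a_public, b_private, P)
```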

Text, video, audio, and files are encrypted and sent to the server, where they are stored until the recipient is able to retrieve them. After that, depending on the policy of the company that owns the server, the data is either destroyed or stored for some time.

**As we can see, end-to-end encryption really is good.**

For modern devices, encryption and decryption are not a daunting task, not even a difficult one. However, if a conversation involves several participants, each outgoing message must be encrypted separately for every one of them, so the load on the device grows with the number of interlocutors. This is why developers optimize the way group conversations are organized.

![](https://i.imgur.com/J5io40D.gif)

The idea of end-to-end encryption is not new. PGP (Pretty Good Privacy), software for encrypting messages and other data, was developed in 1991 by Phil Zimmermann. In subsequent years, the algorithm and the corresponding software were improved and gained additional mechanisms.

In 1997, PGP Inc. proposed the OpenPGP initiative, and GnuPG, a free implementation of PGP, was created in 1999.

There are no known examples of the PGP system being broken, so encryption mechanisms based on the open PGP standard can be built by the developers of different messengers.

![](https://i.imgur.com/MGiDlSK.png)

End-to-end encryption may become a very important requirement in the increasingly popular Internet of Things. In a world where every traffic light and every car is connected to the Internet, and smartphones know everything about their owners, authentication and encryption are vital.

The latest news about messengers permits optimistic predictions about the prospects for widespread adoption of end-to-end encryption. WhatsApp has recently added this capability to its messenger. Viber and Google seem to be moving in the same direction. All this suggests that the large companies have accepted the new rules of the game.

![](https://i.imgur.com/4RGpNxi.jpg)

--------

*And maybe soon people will no longer have to hope that services are not looking at their personal data: the services simply will not be able to look.*

*To date, in terms of security, we can divide messengers into two categories: messengers that implement full end-to-end encryption, and systems for exchanging messages that merely try to look like the first category: although they encrypt outgoing data, that data can easily be read on the server.*

References: [1](http://www.spymasterpro.com/blog/whatsapp-updated-the-android-app-by-default-end-to-end-encryption/), [2](https://www.alltop9.com/enable-whatsapp-end-to-end-encryption-secure-calls-messages/)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**


Image credit: [1](http://www.sagmart.com/news/Technology/whatsapp-rolled-out-end-to-end-encryption-on-android-smartphones), [2](http://freesoft.ru/news/whatsapp_zaschitit_dannye_polzovateley_endtoend_sh), [3](http://www.informationsecuritybuzz.stfi.re/hacker-news/the-problem-isnt-end-to-end-encryption-its-cheap-mobile-phones/?sf=rnjpox#aa), [4](https://www.alltop9.com/enable-whatsapp-end-to-end-encryption-secure-calls-messages/), [5](http://www.spymasterpro.com/blog/whatsapp-updated-the-android-app-by-default-end-to-end-encryption/)
👍  , , , , , , , , , , and 218 others
1 reply
· @krishtopa · (edited)
$18.10
Have you ever heard about Data Mining methods?
![](https://i.imgur.com/F7Ypvwh.png)

*The rapid development of information technology, in particular progress in methods of data collection, storage, and processing, has allowed many organizations to accumulate massive amounts of data that must be analyzed. The volume of data is so large that the capacity of human experts is no longer enough. This led to the appearance of a tool like Data Mining.*

----------

Data Mining is a collective term for a set of methods for discovering in data previously unknown, non-trivial, practically useful, and interpretable knowledge needed for decision-making in various spheres of human activity.

The patterns obtained by Data Mining describe new connections between properties and predict the values of some characteristics based on others.

![](https://i.imgur.com/0zVoBCx.jpg)

# The range of tasks solved by Data Mining includes:

- Classification - assigning objects to predefined classes
- Association - revealing associative rules and chains
- Clustering - grouping events and observations into clusters
- Forecasting - predicting possible developments, both progressive and regressive, on the basis of available data
- Analysis of changes - identifying templates of typical situations

# Models of knowledge representation in Data Mining

- Artificial Neural Networks
- Decision trees, symbolic rules
- Nearest neighbor and k-nearest neighbor methods (sketched after this list)
- Support vector machines
- Linear Regression
- Hierarchical cluster analysis methods
- A limited enumeration method
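
To give one of the methods listed above some flesh, here is a minimal k-nearest-neighbors classifier (a toy sketch with made-up data, not a production tool): a new object receives the majority label of its k closest training examples.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: a feature tuple."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical data: classify transactions by (amount, hour) as ok/fraud.
train = [((5, 12), "ok"), ((8, 14), "ok"), ((950, 3), "fraud"),
         ((700, 2), "fraud"), ((12, 10), "ok"), ((820, 4), "fraud")]
print(knn_predict(train, (640, 3)))   # -> "fraud"
```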

![](https://i.imgur.com/gDtbpjg.png)

Most of the analytical methods used in Data Mining technology are well-known mathematical algorithms and methods. 

# Properties of Data Mining methods

Different Data Mining methods are characterized by certain properties, which can be decisive when choosing a data analysis method. The methods can be compared with one another by evaluating these properties.

The basic properties and characteristics of the methods of Data Mining are accuracy, scalability, interpretability and verifiability, labor intensity, flexibility, speed, and popularity.

# Classification of methods

**statistical methods**, based on averaged accumulated experience as reflected in retrospective data;

**cybernetic methods**, comprising a variety of heterogeneous mathematical approaches.

![](https://i.imgur.com/JprgiXk.jpg)

**Statistical methods of Data Mining**
------
*Statistical Data Mining methods are classified into four groups:*

- Descriptive analysis and description of the original data.
- Analysis of relationships (correlation and regression analysis, factor analysis, analysis of variance).
- Multivariate statistical analysis (component analysis, discriminant analysis, multivariate regression analysis, canonical correlation, and others.).
- Time series analysis (dynamic modeling and forecasting).

**Cybernetic methods of Data Mining**
------
*This group includes such methods:*

- artificial neural networks (pattern recognition, clustering, forecasting);
- evolutionary programming;
- genetic algorithms (optimization);
- associative memory (search of analogs, prototypes);
- fuzzy logic;
- decision trees;

![](https://i.imgur.com/5Io5kzU.jpg)

--------

*It should be noted that today Data Mining technology is most widely used in solving business problems.*

***Advances in Data Mining technology are used in banking and other industries for the following common tasks:***

- *detection of credit card fraud, by analyzing past transactions that subsequently turned out to be fraudulent;*
- *customer segmentation: by dividing customers into categories, banks make their marketing efforts more targeted and efficient, offering different services to different customer groups;*
- *development of the automotive industry: when assembling a car, manufacturers must take into account the requirements of each customer, so they need to predict the popularity of particular features and know which features are usually ordered together.*

References: [1](http://betterevaluation.org/en/evaluation-options/data_mining), [2](http://socialmedialab.upenn.edu/data-mining)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**

Image credit: [1](http://crediti-bez-problem.ru/chto-vybrat-visa-ili-mastercard-kogda-kakaya-sistema-luchshe.html), [2](http://seotuition.ru/Поисковоепродвижение/Стратегиипродвижениясайтов/Датамайнинговыестратегии.aspx), [3](http://betterevaluation.org/en/evaluation-options/data_mining), [4](https://www.youtube.com/watch?v=W44q6qszdqY)
👍  , , , , , , , , , , and 224 others
2 replies
· @inphiknit · (edited)
$0.07
Sacred Geometry 🌀 The Egg of Life: Solidifying in the 3rd Dimension... (With Original Sketches.)
<html>
<p>In the <a href="https://steemit.com/sacredgeometry/@inphiknit/sacred-geometry-the-energies-in-a-sphere">last article on Sacred Geometry, about the Energies in a Sphere</a>, we looked at the Seed of Life, which, in 2 dimensions, is represented as 1 central circle, with 6 other circles radiating around it, where one edge of each outer circle crosses through the mid-point of the central one. In the 3rd dimension, each circle becomes a sphere, and we get to <em>see behind the </em>central circle, where we discover an 8th sphere, not depicted in 2d diagrams. The rightmost bundle of cells pictured below takes this form. Were it rotated downwards by 45 degrees, we'd perfectly see the Seed of Life in 3d. Packing 8 spheres with this configuration uses the least possible amount of space...&nbsp;</p>
<p><br></p>
<p>But where is the outer membrane? Why don't the spheres just drift off?</p>
<p><img src="https://i.imgsafe.org/9424f03ea1.png" width="976" height="206"/></p>
<p>(<a href="https://i.imgsafe.org/9424f03ea1.png">Source.</a>)</p>
<p>In terms of Sacred Geometry, after only 7 circles, an outer boundary can be defined by simply enclosing the Seed in a larger circle. But what is the <em>thickness </em>of this outer wall?</p>
<p>The Egg of Life will make this determination...&nbsp;</p>
<p>It is depicted in 2d by adding another rotation of 6 circles. This allows us to better see the potential for something 3d to emerge from the previous, 2d level. So we can extract it from the membrane, and explore...</p>
<p><img src="https://i.imgsafe.org/943044ff25.png" width="1264" height="854"/></p>
<p><br></p>
<p>With the Seed in 2d, we have 7 circles, or 7 centres, which can be connected to see which potential geometries emerge. When we do the same with the 13 centres of the Egg of Life, we see a huge increase in complexity and therefore also with possibility... (Seed left, Egg right.)</p>
<p><img src="https://i.imgsafe.org/94332157a3.png" width="1271" height="887"/></p>
<p><br></p>
<p>In the 3rd density Egg, we can see the first appearance of the cube and the tetrahedron. (Where they are properly bound within an outer sphere/membrane.) (Membrane not depicted in each drawing.)</p>
<p><img src="https://i.imgsafe.org/94357202af.png" width="1265" height="887"/></p>
<p><br></p>
<p>And the star tetrahedron...</p>
<p><img src="https://i.imgsafe.org/9436e688f3.png" width="1260" height="846"/></p>
<p><br></p>
<p>For clarity's sake, on the left below, is the Seed of Life again. Notice how the 2nd outer circle is arbitrary, and I could have made it anywhere. Ie. <em>That 1 circle is not sacred geometry, and I should not have drawn it^^ </em>Please also note how the centre points of the Sefirot of the Tree of Life are defined in the Seed, however at this level, they leave no room for volume for those spheres...</p>
<p><br></p>
<p><img src="https://i.imgsafe.org/9438eedc0c.png" width="1265" height="810"/></p>
<p><br></p>
<p>And below, the Egg of Life defining the Tree, where finally, we can get volume into the Sefirot... (The singular of this Hebrew word is: SeFiRah ---&gt; sphere.)</p>
<p><img src="https://i.imgsafe.org/943ae5b7e6.png" width="1260" height="845"/></p>
<h2><br></h2>
<p>Thank you, and please consider following, because the complexity of the geometries/drawings is about to increase dramatically, and it'll get pretty fun...^^</p>
<h2>Other Sacred Geometry Articles with Original Sketches:</h2>
<p><a href="https://steemit.com/mathematics/@inphiknit/the-golden-number-1-618-a-k-a-the-phi-ratio-in-sacred-geometry">Part 1: The Golden Number 1.618 (a.k.a. the Phi Ratio) in Sacred Geometry.</a></p>
<p><a href="https://steemit.com/mathematics/@inphiknit/part-2-the-golden-number-1-618-a-k-a-the-phi-ratio-in-sacred-geometry-with-original-sketches">Part 2: The Golden Number 1.618 (a.k.a. the Phi Ratio) in Sacred Geometry.</a></p>
<p><a href="https://steemit.com/sacredgeometry/@inphiknit/sacred-geometry-the-energies-in-a-sphere">The Energies in a Sphere. Exploring the Seed of Life.</a></p>
<h2>Melech מלך ben Chaya, <a href="https://steemit.com/@inphiknit">@inphiknit</a>&nbsp;</h2>
<p><img src="https://i.imgsafe.org/594fb765a1.png" width="412" height="263"/></p>
</html>
👍  , , , , , , , , , , and 24 others
4 replies
· @inphiknit · (edited)
$14.14
Sacred Geometry: 🌀 The Energies in a Sphere (with Original Sketches.) Exploring the Seed of Life.
<html>
<p>Why does the <a href="http://solarscience.msfc.nasa.gov/SunspotCycle.shtml">majority of solar activity occur along these specific latitudes</a>?</p>
<p><img src="https://pixabay.com/static/uploads/photo/2015/01/16/03/05/solar-flare-601031_640.jpg" width="640" height="640"/></p>
<p>(<a href="https://pixabay.com/static/uploads/photo/2015/01/16/03/05/solar-flare-601031_640.jpg">Source.</a>)</p>
<p><img src="https://pixabay.com/static/uploads/photo/2016/03/09/23/07/solar-flare-1247639_640.jpg" width="640" height="611"/></p>
<p>(<a href="https://pixabay.com/static/uploads/photo/2016/03/09/23/07/solar-flare-1247639_640.jpg">Source.</a>)</p>
<p>Why does <a href="http://www.space.com/27392-saturn-hexagon-vortex-nasa-photo.html">Saturn's North Pole have hexagonal</a> wind patterns?</p>
<p><img src="https://pixabay.com/static/uploads/photo/2015/04/22/21/12/saturn-735334_640.jpg" width="640" height="480"/></p>
<p>(<a href="https://pixabay.com/static/uploads/photo/2015/04/22/21/12/saturn-735334_640.jpg">Source.</a>)</p>
<p>How can a sphere, or the universe, or consciousness, continue expanding?</p>
<p>Why is there quantum entanglement of seemingly unconscious objects like atomic particles?</p>
<p>How can physicality emerge from nothingness?</p>
<p>Is the universe really holographic?</p>
<p>Why do I feel connected to everything sometimes, and not at all connected, at other times?</p>
<p>If everything is One, how can there be separation?</p>
<p><br></p>
<p>Below are some original sketches where by using the teachings of Kabbalah and Sacred Geometry, and through meditation, we can contemplate these and other questions... (Anyone else's exploration would be very different than mine...)</p>
<p><img src="https://i.imgsafe.org/5ba3607e9c.png" width="1278" height="886"/></p>
<p><br></p>
<p><img src="https://i.imgsafe.org/5baaa7db35.png" width="1274" height="892"/></p>
<p><br></p>
<p><img src="https://i.imgsafe.org/5b9462d4d7.png" width="1273" height="894"/></p>
<p><br></p>
<p><img src="https://i.imgsafe.org/5ba6a7782b.png" width="1274" height="821"/></p>
<p>&nbsp;</p>
<p><br></p>
<p>B"H, we're building up the Tree of Life, and next time we will examine certain aspects of the Egg of Life.</p>
<h2>Other Sacred Geometry Articles with Sketches:</h2>
<p><a href="https://steemit.com/mathematics/@inphiknit/the-golden-number-1-618-a-k-a-the-phi-ratio-in-sacred-geometry">Part 1: The Golden Number 1.618 (a.k.a. the Phi Ratio) in Sacred Geometry.</a></p>
<p><a href="https://steemit.com/mathematics/@inphiknit/part-2-the-golden-number-1-618-a-k-a-the-phi-ratio-in-sacred-geometry-with-original-sketches">Part 2: The Golden Number 1.618 (a.k.a. the Phi Ratio) in Sacred Geometry.</a></p>
<h2>Melech מלך ben Chaya, <a href="https://steemit.com/@inphiknit">@inphiknit</a>&nbsp;</h2>
<p><img src="https://i.imgsafe.org/594fb765a1.png" width="412" height="263"/></p>
</html>
👍  , , , , , , , , , , and 148 others
3 replies
· @krishtopa · (edited)
$47.14
The easiest hacking method - Brute force
![](https://i.imgur.com/LyyMg4n.png)

*Brute force is one of the most popular methods of cracking passwords on servers and in various programs. A cracking program tries to gain access to an account (for example, a mailbox) by exhaustively trying combinations according to criteria specified by its operator: a dictionary, a length range, sets of numbers, etc.*

----------

The program takes into account the minimum and maximum password length. The search demands substantial system resources and takes a lot of time.

A brute-force attack tries different character combinations one after another until the necessary combination is found. The rate of the search depends on the computer's performance and the complexity of the password; the number of combinations to try grows exponentially with password length and alphabet size.

This method of password guessing is good in the sense that the password will eventually be found, but it may take a very, very long time. So this method of attack is not always justified if the owner of the targeted account acted cleverly and did not use simple passwords like "123" or "qwerty", but used uppercase and lowercase letters, numbers, and the special characters that are allowed. If the password is long enough (around 10 characters), brute-force cracking becomes impractical.
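
The arithmetic behind that advice is easy to check; the guessing speed below is an assumed figure for a fast offline rig, not a measurement.

```python
GUESSES_PER_SEC = 10e9   # assumption: ~10 billion guesses per second

for alphabet, length in [(10, 6), (26, 8), (94, 10)]:
    combos = alphabet ** length
    years = combos / GUESSES_PER_SEC / (365 * 86400)
    print(f"{alphabet} symbols, length {length}: "
          f"{combos:.2e} combinations, ~{years:.2g} years worst case")
```

Six digits fall instantly and eight lowercase letters in seconds, while ten characters drawn from the full printable set hold out for more than a century at this speed.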

![](https://i.imgur.com/Lzbb0mo.jpg)

When new cryptographic ciphers are developed, exhaustive search is used to evaluate their resistance to cracking. A new cipher is considered sufficiently strong if no attack faster than an exhaustive search of all possible keys exists against it. Such brute-force cryptographic attacks always succeed in principle, but often take far too long to be practical.

The dictionary attack is the most common type of brute force: candidate passwords are taken from a text file containing a pre-compiled dictionary. This method of attack is very effective for mass hacking, for example against Internet messenger and ICQ accounts. At the same time, there is a fairly high probability that a dictionary attack will not succeed.
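
A dictionary attack fits in a few lines; the sketch below cracks a hypothetical leaked, unsalted SHA-256 hash against a toy wordlist (real wordlists run to millions of entries).

```python
import hashlib

leaked = hashlib.sha256(b"qwerty").hexdigest()   # pretend this hash leaked
wordlist = [b"123456", b"password", b"qwerty", b"letmein"]

for word in wordlist:
    if hashlib.sha256(word).hexdigest() == leaked:
        print("cracked:", word.decode())
        break
else:
    print("not in dictionary")   # this is why uncommon passwords survive
```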

Since 2005, the number of attacks on secure SSH services has increased significantly. Even if a server runs the newest software, that does not mean a password cannot be guessed, especially if the firewall is inactive or misconfigured. So, to make cracking harder, configure the firewall properly; it will help protect the server from unpleasant surprises in the future.

![](https://i.imgur.com/34i3c9W.png)

*And finally, let's talk a little bit about the mathematical side of brute force.*

- When something is known about the structure of possible passwords, unacceptable values (blank passwords, runs of the same repeated character, etc.) are screened out. In mathematics, this approach is called the branch and bound method.
- Parallel computing methods, in which several passwords are tried at once, are also used in brute force. This is accomplished in two ways: by pipelining, and by brute-forcing disjoint subsets of the set of all possible passwords.

---------

*Cracking by brute force is quite slow but powerful, so hackers use it to this day, and in view of the ever-increasing power of computers and bandwidth of Internet channels, it will remain in service for a long time.*

***Remember that any action aimed at gaining unauthorized access to other people's information is illegal. The brute-force methods described in this article may be used only to find vulnerabilities in services that belong to you.***

References: [1](https://www.serializing.me/2015/08/12/ssh-brute-force-and-suricata/), [2](http://no-adware.com/blog/brute-force-password-cracker/)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math**, and **technologies***

**With Love,
Kate**

Image credit: [1](http://www.3ghax.net/2016/04/10-common-attacks-used-by-every-hacker.html), [2](https://www.serializing.me/2015/08/12/ssh-brute-force-and-suricata/), [3](http://no-adware.com/blog/brute-force-password-cracker/)
👍  , , , , , , , , , , and 250 others
5 replies
· @krishtopa ·
$23.24
How to recognize data on your credit card
![](https://i.imgur.com/5RKhz3F.jpg)

*Recognition of data from a credit card is a highly relevant and very interesting task from the standpoint of algorithms. Well-implemented card recognition software can save people from entering most of the data manually when making online payments and payments in mobile applications. In terms of recognition, a bank card is a complex document of standard size (85.6 × 53.98 mm), made on a standard form and containing a specific set of fields (both mandatory and optional): card number, cardholder name, date of issuance, expiration date, account number, and CVV2 code or its equivalent.*

*Although the recognition steps are the same for all three mandatory fields, their complexity varies widely. The card number is the easiest to recognize; the situation is more complicated with the remaining two fields: the expiration date and the cardholder name.*

*In this article, we take a closer look at the expiration date recognition procedure (name recognition is similar).*

----------
# The expiration date recognition algorithm

Let the image of the card already be rectified (a projective transformation resulting in an orthogonal view of the card at a fixed resolution). The result of the algorithm must be four decimal digits: two for the month and two for the year. The algorithm is considered to give the correct answer if the four digits received coincide with those shown on the card; the separator symbol between them is not included.

![](https://i.imgur.com/LHHRjB3.png)

The first step is to locate the field on the card (unlike the number, the location of this field is not standardized). Brute-force search over all card areas is unpromising: the corresponding text fragment is very short (usually 5 characters), its syntactic redundancy is small, and the probability of falsely detecting an arbitrary piece of text or even a colorful background area is unacceptably high. Therefore we apply a trick: we look not for the date itself, but for an information zone that is located under the card number and has a stable geometric structure.

The zone under consideration is divided into three lines, one of which is often empty. When there are two non-empty rows in the zone, their spacing either coincides with the spacing of three-line zones or is approximately equal to twice the line spacing plus the line height.

The search for the zone and its splitting into three lines is complicated by the background of the card. To solve this problem, a combination of filters is applied whose purpose is to emphasize the vertical boundaries of the letters and blank out the other parts of the card image.

**The sequence of filters is as follows:**

- The image is converted to grayscale by averaging the color channels using the formula ![](https://i.imgur.com/tMYg9w4.png)

- Vertical boundaries are computed using the formula ![](https://i.imgur.com/e94ELTA.png).

- Small vertical boundaries are filtered out using mathematical morphology.

*After filtering, the pixel intensities of the processed image are projected onto the vertical axis.*

Using the resulting projection, we can now find the most probable positions of the lines, assuming there are no horizontal borders within the line spacing. We minimize the sum of the projection over all periods and initial phases in a predetermined range.

Since the local minima are usually quite pronounced, including at the outer boundaries of the text, the optimal number of them is four (meaning that the card has three lines and, hence, four local minima). As a result, we find the parameters that define the centers of the line spacing as well as the outer limits of the text.

![](https://i.imgur.com/F5L6ubF.png)

Now the search area can be substantially reduced by intersecting the original shape of the zone with the line positions found in the image. For each such intersection, the set of possible substring positions is generated, and we work with those.

Each candidate substring is segmented into characters, taking advantage of the fact that all symbols on these cards are monospaced. This allows a dynamic programming algorithm to search for character boundaries without inter-symbol recognition (all you need to know is the permissible range of character widths).
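
A simplified reconstruction of that dynamic programming step (under assumed costs; the article's own code is not published): cuts between monospaced characters are placed where a column-ink profile is lowest, with character widths constrained to an allowed range.

```python
def segment(profile, n_chars, w_min, w_max):
    """profile[x] = cost of cutting at column x (e.g., ink density there).
    Places n_chars cuts, one per character, with widths in [w_min, w_max]."""
    INF = float("inf")
    n = len(profile)
    # best[k][x]: minimal cost of making k cuts with the k-th cut at column x
    best = [[INF] * n for _ in range(n_chars + 1)]
    prev = [[-1] * n for _ in range(n_chars + 1)]
    best[0][0] = 0.0                      # the text starts at column 0
    for k in range(1, n_chars + 1):
        for x in range(n):
            for w in range(w_min, w_max + 1):
                if x - w >= 0 and best[k - 1][x - w] + profile[x] < best[k][x]:
                    best[k][x] = best[k - 1][x - w] + profile[x]
                    prev[k][x] = x - w
    x = min(range(n), key=lambda i: best[n_chars][i])   # cheapest final cut
    cuts = []
    for k in range(n_chars, 0, -1):       # backtrack through the table
        cuts.append(x)
        x = prev[k][x]
    return cuts[::-1]

ink = [0.9, 0.1, 0.8, 0.9, 0.0, 0.7, 0.9, 0.1, 0.6]
print(segment(ink, 2, 3, 5))              # -> [4, 7]: cuts at low-ink columns
```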

After segmentation into characters, it is time for recognition using an artificial neural network (ANN).

**Note a couple of facts:**

- For recognition, convolutional neural networks trained with the cuda-convnet tool are used.
- The trained network's alphabet includes digits, punctuation marks, the space, and a "non-symbol" ("garbage") class.

----------

*Thus, for each character image we get an array containing pseudo-probability estimates for every symbol of the alphabet given that image. It might seem that the correct answer is simply the string built from the best options. However, the ANN is sometimes mistaken. Some ANN errors can be corrected by post-processing, thanks to existing constraints on the possible date values (for example, there is no 13th month). An algorithm called "roulette" is used: it iterates over all possible "line readings" in descending order of total pseudo-probability, and the first option that satisfies the constraints is taken as the answer.*
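
A brute version of that post-processing is easy to sketch: over four digit positions, a full sort by total pseudo-probability gives the same answer as the incremental "roulette" (the probability tables below are invented for the demonstration).

```python
import itertools

def best_valid_date(probs):
    """probs[i][d]: pseudo-probability that digit position i reads as d.
    Returns the most probable reading that is a valid MM/YY date."""
    ranked = sorted(itertools.product(range(10), repeat=4),
                    key=lambda c: -(probs[0][c[0]] * probs[1][c[1]] *
                                    probs[2][c[2]] * probs[3][c[3]]))
    for m1, m2, y1, y2 in ranked:         # descending total pseudo-probability
        if 1 <= 10 * m1 + m2 <= 12:       # constraint: there is no 13th month
            return f"{m1}{m2}/{y1}{y2}"

zero = [0.0] * 10
p0 = zero[:]; p0[1] = 1.0                 # '1' is certain
p1 = zero[:]; p1[3] = 0.5; p1[2] = 0.4    # '3' narrowly beats '2'
p2 = zero[:]; p2[2] = 1.0
p3 = zero[:]; p3[0] = 0.9; p3[1] = 0.1
print(best_valid_date([p0, p1, p2, p3]))  # -> 12/20 ("13/20" is rejected)
```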

References: [1](http://creditcardpool.com/credit-card-parts-functions-component/)


***[Follow me](https://steemit.com/@krishtopa), to be the first to learn about my publications devoted to popular science and educational topics***

**With Love,
Kate**




Image credit: [1](http://crediti-bez-problem.ru/chto-vybrat-visa-ili-mastercard-kogda-kakaya-sistema-luchshe.html), [2](http://creditcardpool.com/credit-card-parts-functions-component/)
👍  , , , , , , , , , , and 162 others
3 replies
· @krishtopa · (edited)
$48.27
Monte Carlo simulation - the best risk analysis method
![](https://i.imgur.com/lZYB6vd.png)

*The Monte Carlo method can be defined as a method of simulating random variables in order to calculate the characteristics of their distributions.*

*The Monte Carlo method has had a significant impact on the development of computational mathematics (for example, on numerical integration methods), and in many problems it can be successfully combined with, and even complement, other computational methods. Its application is especially justified in problems that admit a theoretical probabilistic description, because it naturally produces answers with a given probability and substantially simplifies the solution procedure.*

-------

Whenever a further course of action has to be chosen, Monte Carlo simulation lets the decision maker study the whole range of possible outcomes and assess the probability of each of them occurring.

![](https://i.imgur.com/7ToD2kl.jpg)

In the Monte Carlo simulation method, risk analysis is performed on models of possible results. When such a model is built, every factor subject to uncertainty is replaced by a range of values - a probability distribution. The result is then computed many times over, each time with a different set of random values drawn from those distributions. The output of a Monte Carlo simulation is a distribution over the possible consequences.

![](https://i.imgur.com/qpenskl.gif)

# The most common probability distributions are listed below 

- **Normal distribution.** Used to describe deviations from the mean; the user defines the mean (expected value) and the standard deviation. Values in the middle, close to the mean, have the highest probability. The normal distribution is symmetric and describes many common phenomena.


- **Uniform distribution.** Every value in the range is equally likely; the user simply defines the minimum and the maximum. Examples of variables that may be uniformly distributed are production costs or revenue from future sales of a new product.

- **Triangular distribution.** The minimum, most likely and maximum values are defined. Values near the point of maximum likelihood have the greatest probability.

- **Lognormal distribution.** Values are positively skewed and, in contrast to the normal distribution, asymmetric. This distribution is used for quantities that cannot fall below zero but can take arbitrarily large positive values.

In the Monte Carlo simulation method, values are sampled at random from the input probability distributions. Each set of samples is called an iteration, and the result it produces is recorded. The simulation repeats this procedure hundreds or thousands of times, and the outcome is a probability distribution of possible consequences. The Monte Carlo method thus gives a much more complete picture of possible events: it shows not only what can happen, but also how likely it is.
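
As a minimal, self-contained sketch of that loop (the cost model and every parameter here are invented purely for illustration), consider a project cost made of three uncertain components, each replaced by one of the distributions listed above:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000                                   # number of iterations

# Each uncertain factor is replaced by a probability distribution.
labor    = rng.normal(loc=50_000, scale=8_000, size=N)               # normal
material = rng.uniform(low=20_000, high=35_000, size=N)              # uniform
delays   = rng.triangular(left=0, mode=5_000, right=25_000, size=N)  # triangular

total_cost = labor + material + delays        # one result per iteration

# The output is a distribution of consequences, not a single number.
print(f"mean cost:          {total_cost.mean():,.0f}")
print(f"95th percentile:    {np.percentile(total_cost, 95):,.0f}")
print(f"P(cost > 100,000) = {(total_cost > 100_000).mean():.1%}")
```

More iterations narrow the sampling noise, but the real deliverable is the shape of the output distribution, not any single summary number.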

![](https://i.imgur.com/YIh3p5K.gif)

-----------
***To finish, I would like to name the benefits of the Monte Carlo method in comparison with deterministic analysis:***

- *Probabilistic results. The results show not only the possible events but also the probability of their occurrence.*
- *No assumptions are needed about the symmetry of the distribution laws of the input/output variables.*
- *It can be applied even to nonlinear models of the observed variables.*
- *Correlation of input data. The Monte Carlo method can model interdependent relationships between the input variables.*
- *The uncertainty of the input quantities may be arbitrarily large.*

References: [1](http://glceurope.blogspot.ru/2015/06/monte-carlo-simulation-multi-variable.html) [2](http://www.palisade.com/risk/monte_carlo_simulation.asp) [3](http://www.investopedia.com/terms/m/montecarlosimulation.asp)

***[Follow me](https://steemit.com/@krishtopa), to be the first to learn about my publications devoted to popular science and educational topics***

**With Love,
Kate**

Image credit: [1](https://cmm.cit.nih.gov/intro_simulation/node25.html), [2](http://glceurope.blogspot.ru/2015/06/monte-carlo-simulation-multi-variable.html), [3](http://www.slideshare.net/asif3313/monte-carlo-presentation-24913479), [4](http://monte-karlo.ucoz.ru/news/me_tod_mo_nte_ka_rlo_metody_monte_karlo_mmk/2014-04-25-7)
👍  , , , , , , , , , , and 205 others
8 replies
· @krishtopa ·
$50.19
Big Data - a modern and problematic approach to data storage and analysis
![](https://i.imgur.com/1bbBT30.png)

*Digital technologies are present in all areas of human life. The amount of data recorded in global storage grows every second, which means that storage conditions, and the capacity to expand them, must grow at the same rate.*

*Experts in the field of IT have expressed the view that the expansion of Big Data, and the acceleration of its growth, have become an objective reality. Every second, sources such as social networks, news sites, file-sharing services and others generate huge amounts of content.*

*According to the IDC Digital Universe study, within the next five years the volume of data on the planet will grow to 40 zettabytes; that is, by 2020 there will be 5,200 GB for every person living on Earth.*

------------

It is known that the main flow of information is generated not by people but by robots interacting with each other: monitoring apparatus, sensors, surveillance systems, operating systems, personal devices, smartphones, intelligent systems and so on. Together they set a furious pace of data growth, which calls for ever more production servers (physical and virtual) and, as a result, the expansion and construction of new data centers.

![](https://i.imgur.com/O6Xt6pA.png)

**In fact, Big Data is a rather loose and relative concept. The most common definition is a body of information that exceeds the capacity of a personal device's hard drive and cannot be handled by the classical tools used for smaller volumes.**

# Generally speaking, big data processing technology can be summarized in three main areas:

- Storing and transferring incoming information - gigabytes, terabytes and zettabytes of it - for later handling and use.
- Structuring fragmented content: texts, photos, video, audio and all other kinds of data.
- Analyzing Big Data, applying various methods of processing unstructured information, and producing analytical reports.

In fact, working with Big Data covers every aspect of handling huge volumes of highly disparate information that is constantly updated and scattered across various sources. The goal is very simple: maximum efficiency, the introduction of new products and growth in competitiveness.

# The problems of Big Data

**The issues of Big Data systems can be summarized in three main groups: volume, processing speed and lack of structure - the three Vs: Volume, Velocity and Variety.**
![](https://i.imgur.com/x6FPqS4.png)

- Storing large volumes of data requires special conditions, and it is a question of space and capacity. Speed is not just about the slowdown caused by old processing methods; it is also a question of interactivity: the faster the processing, the greater the efficiency and the more productive the results.
- The problem of heterogeneity and lack of structure arises from differing sources, formats and quality. Combining such data and processing it effectively requires not only work on bringing it into a suitable form, but also analytical tools.
- There is also the problem of the upper limit of data volume. It is difficult to predict, and therefore hard to estimate which technologies and how much financial investment further growth will require. Nevertheless, for certain volumes (terabytes, for example) mature processing tools already exist and are being actively developed.

There is also no clear set of principles for working with such volumes of data, and the heterogeneity of the flows only aggravates the situation.

Choosing which data to collect and which algorithm to analyze it with can also be a problem, because there is often no understanding of which data should be stored and which can be ignored.

Another problem of Big Data is ethical: how does collecting data (especially without the user's knowledge) differ from violating the boundaries of privacy? Search engines record users' clicks on the Internet; they know your IP address, geolocation, interests, online purchases, personal data and email messages, which, for example, allows them to display contextual ads matched to your behavior online. In other words, by default Big Data collects everything, and the information is then stored on the sites' data servers.

![](https://i.imgur.com/4nb7MIW.gif)

-----------
*Analysis of big data has long been successfully used in marketing to determine target audiences, interests, demand and consumer activity. Big Data is thus an accurate tool for predicting the marketing future of a company.*

*Today, at the peak of high technology and huge flows of information, companies have more opportunities to achieve superior performance in business through the use of Big Data, but the technology requires new methods for its efficient and secure use.*

References: [1](http://hardware_software.complexdoc.ru/2623438.html)

*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math** and **technologies***

**With Love,
Kate**

Image credit: [1](http://hardware_software.complexdoc.ru/2623438.html), [2](https://habrahabr.ru/company/at_consulting/blog/265465/), [3](http://muzil.detstva-style.ru/topics/b9196-application-data-is-not-accessible/), [4](http://www.lumesselearning.com/learning-analytics_big_data/)
👍  , , , , , , , , , , and 236 others
3 replies
· @inphiknit · (edited)
$18.15
The Golden Number 1.618 🌀 (a.k.a. the Phi Ratio) in Sacred Geometry.
The **Golden Number**, **1.618**, or the **Phi Ratio**, is one of nature's most ubiquitous and beautiful mysteries. There are several great posts here on Steemit about all the incredible places to find the Golden Ratio, or the Fibonacci series, in nature, or even in economics, art and architecture. [Here is one](https://steemit.com/life/@bjornbm/the-fibonacci-sequence-and-golden-ratio-in-art-nature-animals-and-humans-all-explained-but-is-this-a-mere-coincidence) from [@bjornbm](https://steemit.com/@bjornbm). Or [here is one](https://steemit.com/popularscience/@stranger27/fibonacci-sequence-and-golden-ratio-magic-numbers-of-nature) from [@stranger27](https://steemit.com/@stranger27). Or another [great post](https://steemit.com/popularscience/@krishtopa/mysterious-fibonacci-numbers-what-they-are-and-what-they-do) by [@krishtopa](https://steemit.com/@krishtopa). But today, let's explore a few ways of finding this same number in some of the more prevalent shapes of "**Sacred Geometry**..." (All sketches are mine.)

The Phi ratio in the vesica piscis of 2 circles.

![](https://i.imgsafe.org/c5c6b7eeb8.png)

Fibonacci Spiral derived from the Golden Rectangle (left) and an infinite series of scaling Flowers of Life (right).

![](https://i.imgsafe.org/c5b6f0f12f.png)

1.618 derived from the Golden Triangle (left) and vesica piscis (bottom middle).

![](https://i.imgsafe.org/c5d964ce8c.png)

And here it is all over a Pentagon/Pentacle.

![](https://i.imgsafe.org/c5efc7a28a.png)

It's everywhere!

![](https://i.imgsafe.org/c60208c1bf.jpg)

And just for fun!

![](https://i.imgsafe.org/c60bf7f918.jpg)

This is just one of the reasons why Sacred Geometry is called "Sacred..."

Where else can we find it?! More to come!

## Melech מלך ben Chaya, [@inphiknit](https://steemit.com/@inphiknit)

![](https://i.imgsafe.org/594fb765a1.png)
👍  , , , , , , , , , , and 38 others
2 replies
· @krishtopa ·
$35.14
The fastest sort method - FlashSort
![](https://i.imgur.com/Qb57CxG.jpg)

*There are numerous sorting methods, each with its own advantages and disadvantages. Sorting algorithms are widely used in programming, but developers do not always stop to think about which algorithm works best (as a combination of speed and implementation complexity).*

*Today I would like to tell you about a sorting algorithm with O(N) reads and permutations.*

*The algorithm sorts the input data in place and does not use additional memory.*

*We will talk about FlashSort, which processes very large arrays of uniformly distributed data.*

*The method was introduced in 1998 by the German scientist Karl-Dietrich Neubert.*

---------

**The operating principle is easiest to explain with a specific example.**

Suppose there is a large array of n elements whose values range from 1 to 100. If we encounter an element with the value 50, it is reasonable to assume that its rightful place is in the middle of the array. Similarly for other elements: 5 probably belongs near the beginning of the structure, and 95 almost at the end. As a result of such rough manipulations, we quickly get a nearly sorted array.

*The main task is to divide all the elements of the array into several classes according to their values. The smallest numbers go into the first class, the largest into the last, and the remaining numbers into the intermediate groups.*

To sort, we need a special buffer of "buckets". This is an array of length 256, each element of which holds the size of a bucket and a link to the bucket's boundary - a pointer to the bucket's first element in the original array. Initially, the buffer is filled with zeros.

![](https://i.imgur.com/o3KnVBr.gif)

# The sorting algorithm at each i-th iteration thus consists of four stages

**Stage 1. Counting bucket size**

In the first stage, the bucket sizes are computed. We use our buffer, which keeps a counter of the number of elements for each bucket. We run through all the elements, take the i-th byte of each value, and increment the counter of the corresponding bucket.

**Stage 2. The alignment of the boundaries**

Now that we know the size of each bucket, we can set clear boundaries for every bucket in the original array. We run through the buffer, set the boundaries, and then set the pointers to the first element of each bucket.

**Stage 3. Permutation**

Now we rearrange the elements of the source array so that each one lands in its proper place, inside its own bucket.

*The permutation algorithm is as follows:*

We run through all the elements of the array. For the current element we take its i-th byte.

- If the current element is already in its place (in its own bucket), everything is fine - we move on to the next element.

- If the element is not in its place and its bucket lies further along, we swap it with the element occupying its slot in that bucket. We repeat the procedure until the element that belongs in the current position arrives there. Every swap puts one element into its final bucket, and we never re-examine it, so the number of swaps never exceeds N.

With each swap, the counter of the corresponding bucket in the buffer is decremented, so by the end the buffer is again filled with zeros and ready to be used in further iterations.

The permutation stage therefore runs, in theory, in linear time, since the number of swaps never exceeds N, the number of elements.

**Stage 4. Recursive descent**

And the last stage: we recursively sort the elements within each bucket we have formed. For each iteration we only need to know the boundaries of the current bucket. Since we do not use additional memory, before descending into an inner iteration we again run through the elements and compute the length of the current bucket. A compact sketch of all four stages follows.
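
To make the four stages concrete, here is a compact Python sketch of classic FlashSort as Neubert formulated it, with value-based classification standing in for the byte-based buckets described above. The names and the class-count heuristic are my own, and a final insertion sort over the nearly sorted array plays the role of the recursive descent.

```python
def flashsort(a):
    """In-place FlashSort sketch: count classes, set boundaries, permute, finish."""
    n = len(a)
    if n < 2:
        return a
    lo, hi = min(a), max(a)
    if lo == hi:
        return a                          # all elements equal: already sorted
    m = max(2, int(0.43 * n))             # number of classes (a common heuristic)
    scale = (m - 1) / (hi - lo)
    klass = lambda x: int(scale * (x - lo))
    # Stage 1: count the size of each class ("bucket").
    L = [0] * m
    for x in a:
        L[klass(x)] += 1
    # Stage 2: prefix sums turn counts into class boundaries (one past the end).
    for k in range(1, m):
        L[k] += L[k - 1]
    # Stage 3: cycle-leader permutation moves every element into its class.
    nmove, j = 0, 0
    while nmove < n and j < n:
        while j < n and j >= L[klass(a[j])]:
            j += 1                        # a[j] is already inside its class
        if j == n:
            break
        flash = a[j]                      # the "hole" is now at position j
        while True:
            k = klass(flash)
            L[k] -= 1                     # next free slot at the top of class k
            a[L[k]], flash = flash, a[L[k]]
            nmove += 1
            if L[k] == j:                 # the cycle has closed at the hole
                break
    # Stage 4: a short insertion sort finishes the nearly sorted array.
    for i in range(1, n):
        key, t = a[i], i - 1
        while t >= 0 and a[t] > key:
            a[t + 1] = a[t]
            t -= 1
        a[t + 1] = key
    return a

print(flashsort([42, 7, 99, 3, 56, 18, 73, 30]))  # -> [3, 7, 18, 30, 42, 56, 73, 99]
```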

![](https://i.imgur.com/j7RQxmO.png)

# Time Efficiency

As already mentioned, the algorithm is very effective on large arrays of uniformly distributed data. In that case FlashSort, with an average time complexity of O(n), significantly outpaces QuickSort and the other efficient sorting methods.

--------

*Despite the fact that the sorting method presented here always makes O(N) reads and permutations, it would be naive to expect the algorithm to run in linear time on all kinds of data. In practice, its speed will obviously depend on the nature of the data being sorted.*

*So you can see that even this method has its minuses, and we will have to wait until a truly effective universal method appears.*

**Watch this awesome video to see all existing sorting algorithms https://www.youtube.com/watch?v=kPRA0W1kECg**

References: [1](https://en.wikipedia.org/wiki/Flashsort)

***[Follow me](https://steemit.com/@krishtopa), to be the first to learn about my publications devoted to popular science and educational topics***

**With Love,
Kate**

Image credit: [1](http://gifsgallery.com/radix+sort+gif), [2](http://cocktailvp.com/apple-packing/)
👍  , , , , , , , , , , and 129 others
· @krishtopa ·
$76.18
The best way to analyze huge amounts of corporate data - the Decision tree
![](https://i.imgur.com/5DrMVlu.jpg)

*The rapid development of information technology and progress in methods of data collection, storage and processing have allowed many organizations to accumulate massive amounts of data that must be analyzed. The volume of data is so large that experts' capacity is no longer enough, which has created demand for automatic data analysis techniques.*

*The decision tree is one of the major and most popular methods in the field of analysis and decision-making.*

-------
# What is a decision tree

A **decision tree** is a way of representing rules in a hierarchical, coherent structure, where each object corresponds to a single node that gives a solution.

*By a rule I mean a logical construct of the form "if ... then ...".*

![](https://i.imgur.com/KdEWtSN.jpg)



***There are many ways of using decision trees, but the problems this method can solve fall into the following three classes:***

- **Information Description:** Decision trees allow you to store information about the data in a compact form - we can store a decision tree that contains an exact description of the objects.

- **Classification:** Decision trees cope brilliantly with classification tasks - assigning objects to one of a set of previously known classes.

- **Regression:** If the target variable is continuous, decision trees let us establish its dependence on the independent variables. Numerical prediction problems, for example, belong to this class.

# How to build a decision tree?

Suppose we are given a training set T containing objects, each of which is characterized by m attributes, one of which indicates the class the object belongs to.
Let {C1, C2, ... Ck} be the classes (the values of the class label). Then there are three situations:

- the set T contains one or more examples, all belonging to the same class Ck. Then the decision tree for T is a leaf that determines the class Ck;
- the set T contains no examples, i.e. it is the empty set. Then it is again a leaf, and the class associated with the leaf is chosen from outside T - from the set associated with the parent;
- the set T contains examples belonging to different classes. In this case T should be split into subsets. To do so, one of the features with two or more distinct values O1, O2, ... On is selected, and T is divided into subsets T1, T2, ... Tn, where each subset Ti contains all the examples whose value of the selected feature is Oi. This procedure continues recursively until every final set consists of examples of a single class (see the sketch after this list).
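
A minimal sketch of this recursion, covering all three situations (my own illustration; the data layout, the names and the naive attribute choice are assumptions, and a realistic division rule is discussed below):

```python
from collections import Counter

def build_tree(T, attributes, parent_majority=None):
    """T is a list of (features_dict, class_label) pairs."""
    # Situation 2: T is empty -> a leaf whose class comes from the parent's set.
    if not T:
        return parent_majority
    labels = [label for _, label in T]
    majority = Counter(labels).most_common(1)[0][0]
    # Situation 1: all examples share one class -> a leaf with that class.
    if len(set(labels)) == 1 or not attributes:
        return majority
    # Situation 3: split T on the values of one feature and recurse.
    attr, rest = attributes[0], attributes[1:]   # naive choice of attribute
    return {
        (attr, value): build_tree(
            [(f, c) for f, c in T if f[attr] == value], rest, majority)
        for value in {f[attr] for f, _ in T}
    }

# Toy usage: decide whether to play outside.
data = [({"sky": "sunny", "wind": "weak"},   "play"),
        ({"sky": "sunny", "wind": "strong"}, "stay"),
        ({"sky": "rainy", "wind": "weak"},   "stay")]
print(build_tree(data, ["sky", "wind"]))
```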

![](https://i.imgur.com/WbyxlGX.jpg)

# Stages of decision tree construction

**Division rule**

To build the tree, at each internal node we must find a condition that splits the set associated with that node into subsets. One of the attributes is selected for this test. The general rule for selecting an attribute is as follows: the chosen attribute should split the set so that each resulting subset consists of objects belonging to a single class, or comes as close to that as possible - in other words, so that the number of objects from other classes in each subset is as small as possible. One common way of making this rule concrete is sketched below.
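
The article does not commit to a particular selection criterion; a common concrete choice is ID3-style information gain, shown here for the same `(features, label)` pairs as in the sketch above (again my own illustration, not the article's code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(T, attr):
    """How much splitting T on `attr` reduces class impurity."""
    labels = [label for _, label in T]
    remainder = 0.0
    for value in {f[attr] for f, _ in T}:
        subset = [c for f, c in T if f[attr] == value]
        remainder += len(subset) / len(T) * entropy(subset)
    return entropy(labels) - remainder

# A division rule then simply picks the attribute with the largest gain:
# best = max(attributes, key=lambda a: information_gain(T, a))
```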

**Stop rule**

Prepruning is the use of statistical methods to assess whether further decomposition is worthwhile. Stopping the process early is attractive in terms of training time, but one important caveat applies: this approach builds less accurate classification models.

The stop rule also limits the depth of the tree, halting construction if a split would make the tree deeper than a predetermined value.

**Pruning rule** 

Algorithms for constructing decision trees very often produce complex trees that are "overfilled with data" and have many nodes and branches. Such "branchy" trees are very difficult to interpret. Pruning is often used to solve this problem.

The recognition accuracy of a decision tree is the ratio of correctly classified objects during training to the total number of objects in the training set, and its error is the number of misclassified objects. Suppose we know a method of estimating the error of the tree, its branches and its leaves. Then we can use the following simple rule:
- build a tree;
- prune or replace the branches whose removal does not increase the error.

In contrast to the build process, pruning proceeds bottom-up, moving from the leaves up through the tree, marking nodes as leaves or replacing them with subtrees.

# The advantages of using decision trees

- quick learning process;
- intuitive classification model;
- high accuracy of the forecast;
- construction of non-parametric models.

![](https://i.imgur.com/Y0VIZk0.png)

---------- 
*In conclusion, I want to say that decision trees are a wonderful tool for decision-support systems and data mining.*

*Many data-mining packages include methods for constructing decision trees. In areas where the cost of a mistake is high, they serve as an excellent aid for the analyst or supervisor.*

***Decision trees are successfully used to solve practical problems in the following areas:***

- *Banking. The credit rating of bank customers when issuing loans.*
- *Industry. Quality control, non-destructive testing, etc.*
- *Medicine. Diagnosis of various diseases.*

References: [1](https://en.wikipedia.org/wiki/Decision_tree)


*[Follow me](https://steemit.com/@krishtopa), to learn more about **popular science**, **math** and **technologies***

**With Love,
Kate**

Image credit: [1](http://mir-animasii.ru/oboi/priroda/derevja_vesnoj/53-1-0-4087), [2](http://premiereflooring.com/pages/8/decision-tree-analysis-finance), [3](https://madhureshkumar.wordpress.com/tag/decision-tree/), [4](http://www.sfs.uni-tuebingen.de/~vhenrich/ss13/java/homework/hw7/decisionTrees.html)
👍  , , , , , , , , , , and 181 others
9 replies