Artificial Intelligence vs. Natural Stupidity

I imagine that many readers interested in Artificial Intelligence are accustomed to a rather romantic view of the subject. Most articles you may read raise anxious questions about a technocratic future generated and managed by AI-driven machines. Is this wrong? Not at all, but it is the romantic side of the story. Many of us will surely lose our jobs to “intelligent” machines, and quite soon: the production/consumption paradigm is changing rapidly as society adopts growing levels of robotics.

What really is AI? We are in the field of software programming. From the beginning, programmers faced the problem of transferring the execution of tasks from the material world to the digital one. Many tasks could be translated more or less easily, but some could not be solved with the tools developed in the first decades of computer science. Then a new concept appeared: neural networks. Inside a neural network the usual logic is reversed: instead of calculating the result through a procedural sequence, replicating the assembly line in digital form, the result is calculated as the least improbable one from a library of possible results.

So, to have a working neural network (an AI system is usually made up of many, each accomplishing a different type of task), you need a database (the larger the better) that collects the data representing the events you are trying to analyze, plus a fixed algorithm that interpolates the result from the data contained in that database.
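The idea of picking the least improbable result from a database can be sketched in a few lines. This is not a neural network, just a hypothetical toy: a hand-made "database" of weather observations and a fixed counting algorithm that returns the most likely outcome for a given event. All names and data here are invented for illustration.

```python
from collections import Counter

# Toy "database" of observed events: (temperature, sky) -> outcome.
# Purely illustrative data.
database = [
    (("cold", "cloudy"), "rain"),
    (("cold", "cloudy"), "rain"),
    (("cold", "clear"), "dry"),
    (("warm", "clear"), "dry"),
    (("warm", "cloudy"), "rain"),
    (("warm", "clear"), "dry"),
]

def least_improbable(event):
    """Fixed algorithm: count matching records and return the
    outcome that is least improbable given the stored data."""
    matches = [outcome for features, outcome in database if features == event]
    if not matches:          # unseen event: no statistical basis to decide
        return None
    return Counter(matches).most_common(1)[0][0]

print(least_improbable(("cold", "cloudy")))  # rain
```

The larger and cleaner the database, the better the answers; with no matching data, the algorithm has nothing to say.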

This approach to problem solving, based on the least improbable solution, has proven to work very efficiently on otherwise impossible tasks, such as extracting information from images, ranking and categorizing information, and even forecasting complex, ever-changing phenomena such as the weather or the stock market.

What should be clear is that neural networks, and all the AI development built on them, are basically a statistical tool: a different way to smooth and integrate large quantities of data. The fact that these tools work so well on apparently impossible tasks is the key to the door that opens onto the world of magic. In effect, neural networks seem magical because they can correlate a quantity of data that we cannot. It is brute-force statistics. If you put garbage into a neural network, the output will be garbage. If you put in well-tempered data, you may get a surprisingly magical output. It can “recognize”, it can “see” and it can “forecast”. But it remains brute-force statistics.
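To see that there is no magic, consider the smallest possible "neural network": a single artificial neuron (a perceptron) that learns the logical AND function. A sketch under illustrative assumptions: the learning rate and epoch count are arbitrary, and the "training" is nothing more than repeated, error-driven adjustment of two weights and a bias.

```python
# A single artificial neuron (perceptron) learning logical AND.
# Statistics, not magic: it just nudges its weights toward the data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate (arbitrary choice)

def predict(x1, x2):
    # Fire (output 1) when the weighted sum crosses the threshold.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(50):                      # a few passes over the data
    for (x1, x2), target in data:
        error = target - predict(x1, x2)
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Feed the same loop contradictory or noisy records and the weights settle on nonsense: garbage in, garbage out, at any scale.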
