The coming age of thinking machines is already here. Manufacturers around the world are replacing workers with specialized robots, and a new division of labor is emerging. Artificial intelligence (AI) is spreading into many different fields, but always with the same purpose: to obtain more efficient results and higher profits. A computer (and its mechanical counterpart) can almost always achieve better and faster results, with continuity and at a lower cost. The effort of developing “intelligent machines” is shaping the current evolution of the Western industrial world.
If you try to understand what AI is and what its tools are, you find a lot of mathematical writing, very complete and largely incomprehensible: an exhaustive theory of models and optimization tools that makes the subject seem like an esoteric discipline. Using these tools in everyday life is completely different. About 25 years ago, my first approach to neural networks was the purchase of the “Brainmaker” software (it seems to be still available here), and I started playing with it. After a while, it was clear that building a model, a working model, was a trial-and-error process, one where you do not need a great mathematical background (I have none), but rather a good knowledge of the process you are trying to describe. A lot of help came from Brainmaker’s manual, which has many practical examples and almost no theory. If you adopt the right tool, you may not even need to know a programming language. I had some programming background, so I bought a neural network library to experiment in more depth.
The neural network model that you create is conceptually identical to a spreadsheet, with rows of data that each represent a single experience and columns of uniform data. Some columns, usually at the far right, represent the value (or values) that you associate with that experience, or, more simply, the value you want the network to output when that experience manifests again. This is the heart of any AI system: a large collection of classified experiences and the ability of the network to associate a proper output value with each.
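As an illustration, the spreadsheet analogy can be sketched in a few lines of Python; the column meanings and values here are invented for the example, not taken from any real dataset:

```python
# A toy "spreadsheet" of experiences (all values invented).
# Each row is one experience; the left columns are the inputs
# and the rightmost column is the target value we want the
# network to output when a similar experience shows up again.
dataset = [
    # input_a, input_b, target
    [0.20, 0.10, 0],
    [0.75, 0.60, 1],
    [0.10, 0.00, 0],
    [0.90, 0.80, 1],
]

# Split into inputs and targets, the form most tools expect.
inputs = [row[:-1] for row in dataset]
targets = [row[-1] for row in dataset]

print(inputs[1], "->", targets[1])  # [0.75, 0.6] -> 1
```

Whatever library you use, the shape of the data is the same: one row per experience, one or more columns for the answer you want reproduced.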
No theory is able, at the moment, to predict whether a model will produce an output and, above all, a good output. It’s a matter of trying, and trying, and trying again. You have to design a database and collect data (more is better), then you have to process the database to shape the model and apply the neural network to it. Finally, you feed in your new, never-seen-before data and, magic, you get a diagnosis that is supposed to fit the data as well as possible.
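The collect, train, and diagnose loop described above can be sketched with a single artificial neuron trained by gradient descent, standing in for a full network; the data, learning rate, and iteration count are invented for illustration, and the inputs are pre-scaled to the 0–1 range:

```python
import math

# Classified experiences: two input columns, one target column.
# All values are invented and pre-scaled to the 0-1 range.
data = [([0.20, 0.10], 0), ([0.75, 0.60], 1),
        ([0.10, 0.00], 0), ([0.90, 0.80], 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.5

def predict(x):
    # Weighted sum squashed into (0, 1) by the logistic function.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Trial and error: repeatedly nudge the weights to shrink the
# error on every stored experience.
for _ in range(3000):
    for x, target in data:
        error = predict(x) - target
        for i in range(len(weights)):
            weights[i] -= rate * error * x[i]
        bias -= rate * error

# A never-seen-before experience gets a "diagnosis".
print(round(predict([0.85, 0.85])))  # should print 1
```

Nothing in the loop guarantees a good model in advance; you only find out by training, testing on held-out data, and reshaping the database when the diagnosis disappoints.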
This is the process applied to thinking machines, and it is quite stupid, isn’t it? There is no ability to induce, no capacity to create an unexpected reply to the problem, just heavy, brute-force pattern recognition. The real intelligence, if any, lies in the conception of the model and in its implementation. It is always on the human side, not the machine’s.