A brief history of SPXBOT

In the late 80s, I crossed paths with BrainMaker, an intriguing piece of software that let you play with neural networks. I was working as an architect, and I was self-taught in the theory of patterns as formulated by Christopher Alexander. On one side pattern recognition, on the other side patterns in reality. A nice field of research.

At a certain point, let’s say the beginning of the 90s, I was ready to take off with a language and code my own tools. Much easier said than done. I finally got hold of a wonderful library for back-propagation networks. Straightforward, fast, error-prone, wow! I built many, many versions of a tool that always showed the same fundamental fault: the need to tune parameters, which kept dragging me into self-referential loops.

Mid 90s, big turmoil: in Italy, in Florence, where I lived, in my work, in my life. I gave up. I saved everything, backed it all up in order, and closed the project. In the mid 2000s, let’s say 2005, I was living and working in Milan, working hard on building a villa on the outskirts of Moscow for a wealthy Russian family. It was like having another son, well, a daughter. My love.

It was around 2012 that I began thinking about the necessity of developing a plan B. Put your skills under short circuit and your plan B will materialize. In the spring of 2013 the design began to take shape. It took about a year to collect the data, build the database management, and generate the skeleton of a possible model. Most of the following two years of development are documented in the Market Mind View free blog, still available to the curious.

The model has been under development since then, and I think it has reached a stable, mature configuration in recent months, mid 2017. The result of four years of work is a model that simultaneously correlates dozens and dozens of inputs: financial and economic indices such as stock markets, currencies, bonds, rates, commodities, metals, energy and more. The largest of the active neural network models has more than 17 million neurons, or nodes. The model is heavily deparametrised: it was set up, from the design stage, to enhance the generalization ability of neural networks, in other words their ability to see, to recognize, to classify. Are you surprised? Software can see? Are we really in Minority Report or The Matrix? Can software really see into the future? Well, no. And yes.

Since the early 90s, I have charted my modified DMI indicator, here in the Pascal version for the CFD platform. The indicator separates the positive (blue) and negative (red) action of the market, as DMI does, but uses averages as input instead of the plain H/L/C data. Can’t live without it.
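For readers who want the gist without the Pascal source, here is a minimal Python sketch of the idea: the classic DMI arithmetic, but fed with moving averages of high/low/close instead of the raw data. The period lengths and the use of a simple moving average are assumptions for illustration, not the exact recipe of the original indicator.

```python
def sma(values, period):
    """Simple moving average; None until enough data is available."""
    out = []
    for i in range(len(values)):
        if i + 1 < period:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - period:i + 1]) / period)
    return out

def modified_dmi(high, low, close, avg_period=3, dmi_period=14):
    """DMI-style +DI/-DI computed on averaged H/L/C (the 'modification').
    avg_period and dmi_period are assumed values, not the original's."""
    h, l, c = sma(high, avg_period), sma(low, avg_period), sma(close, avg_period)
    plus_dm, minus_dm, tr = [], [], []
    for i in range(1, len(h)):
        if None in (h[i], h[i - 1], l[i], l[i - 1], c[i - 1]):
            continue  # skip the moving-average warm-up region
        up = h[i] - h[i - 1]          # upward directional movement
        down = l[i - 1] - l[i]        # downward directional movement
        plus_dm.append(up if up > down and up > 0 else 0.0)
        minus_dm.append(down if down > up and down > 0 else 0.0)
        # True range, computed on the averaged series
        tr.append(max(h[i] - l[i], abs(h[i] - c[i - 1]), abs(l[i] - c[i - 1])))
    n = min(dmi_period, len(tr))
    atr = sum(tr[-n:]) / n
    plus_di = 100 * (sum(plus_dm[-n:]) / n) / atr    # "blue" positive action
    minus_di = 100 * (sum(minus_dm[-n:]) / n) / atr  # "red" negative action
    return plus_di, minus_di
```

In a steadily rising market the blue line dominates and the red line sits at zero, which is exactly the separation of positive and negative action the indicator is meant to show.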

Let’s look at it another way: the neural network always provides the least improbable result, chosen from the archive of possible results it has recorded. It’s a complex relationship: given these data, this possible diagnosis. This classification. This signal. Neural networks learn from the experience you provide them. They do not calculate the result: they know the result, if someone has taught it to them. Otherwise they guess, in a kind of reverse statistics, the least improbable result. Not the precise result of a calculation, but the least improbable result from their experience database. If you teach rubbish to a network, you get… guess what? You get the least improbable rubbish.

In mid 2015, I realized that the architecture work in Italy was probably over and that the neural work was worth developing. I also realized that the information I was producing was getting sensitive, which induced me to hide it from the general public, giving life to this premium website. I mean, I wanted to continue my research and to interact with selected interested people, because this interaction is precious. It’s real fuel in the development process. So the site is available for a cheap annual fee, because I prefer a small, select club to a vast, messy audience.
