r.Virgeel’s model has been quite stable for a couple of years now. It took a lot of time to set it up and verify it, avoiding the ex-post testing that introduces so much uncertainty.
I’m talking about the core code that builds the model to be fed to the neural networks, which return a reading of current market conditions and a forecast of their future behavior. Depending on the approach, this code can easily push the hardware to its limits and crash the whole system, so it is crucial to optimize it carefully.
A few weeks ago I began testing a larger model, more accurate in its inputs, even if the change did not seem to show up in the outputs: neural networks are strange bits of code, very error-tolerant. I almost doubled the size of the model, going from about 12 million neurons to more than 24 million. Even though the outputs did not change radically (a good thing, demonstrating that the model is already well-shaped), I suppose the calculations are now more “focused”, having many more parameters to correlate.
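As a back-of-the-envelope illustration of such a doubling, here is a minimal sketch of how widening a fully connected network’s hidden layers roughly doubles its parameter count. The layer sizes below are entirely made up, since the post does not disclose r.Virgeel’s actual topology; they are chosen only so the totals land near the 12 million and 24 million figures mentioned above.

```python
def mlp_param_count(layer_sizes):
    """Total parameters (weights + biases) of a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical topologies: input, two hidden layers, output.
small = [500, 3200, 3200, 100]   # original-sized model (invented numbers)
large = [500, 4600, 4600, 100]   # widened hidden layers (invented numbers)

print(mlp_param_count(small))    # ~12.2 million
print(mlp_param_count(large))    # ~23.9 million
```

Because the dominant term is the hidden-to-hidden weight matrix, which grows with the square of the layer width, a modest increase in width (here 3200 to 4600) is enough to nearly double the total parameter count.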
At the moment, I’m watching and checking the day-by-day readings and forecasts, and they seem in good accord with the market’s evolution.