model insights

Recent updates

During the usual code revision, I came up with a small modification of the training set that seems to have a large impact on the stability of the neural networks in the model, and also on our trading behaviour. I found some interesting patterns in the data showing that it is possible to trade away a bit of profitability for more reliable information. This is why we need to make sure that the training set is accurate enough.

As I wrote, the difference is tiny, just 1 bar. All training positions have been adjusted. We now have a more reliable set of predictions, and the most affected indicator is probably rv.Target. It used to estimate the maximum extension of the move; it now represents the level that, when cleared, starts the close-position procedure.

There is a small (averaged) loss in presumed profitability, compensated by what should be a more reliable set of predictions.

As you know, I do not backtest anything, so let’s see how it performs in the real world.

Posted by Luca in free, generics, indicators, model insights
Latest upgrade

r.Virgeel’s model has been quite stable for a couple of years. It took a lot of time to set it up and verify it while avoiding ex-post testing, which introduces a lot of uncertainty.

I’m talking about the core code that builds the model to be fed to the neural networks, which return a reading of the market conditions and also a forecast of its future behaviour. Depending on the approach, this code can easily push the hardware to its limits and crash the whole system, so it is crucial to optimize it carefully.

A few weeks ago I began testing a larger model, more accurate in its inputs, even if the change did not seem to be reflected in the outputs: neural networks are strange bits of code, very error-tolerant. I almost doubled the size of the model, going from about 12 million neurons to more than 24 million. Even if the outputs did not change radically (a good thing, demonstrating that the model is already well shaped), I suppose the calculations are now more “focused”, having many more parameters to correlate.

At the moment, I’m watching and checking the day-by-day readings and forecasts, and they seem in good accord with the market’s evolution.

Posted by Luca in free, model insights

Stair Seats

Most of us have a general idea of what statistics does: it extracts from data some relevant information about the data itself. What a neural network does is add a layer of correlation between the data and relate it to a desired output. You must instruct a neural network before using it: you associate one or more values with each of your info sets (usually records of a table in a database). This is a process of knowledge transfer. It works well with classifications, for example to produce diagnosis systems.

When well taught, and it is a long and delicate task indeed, the neural network has the ability to recognize patterns in never-before-seen data and so output the less improbable evaluation from its bank of knowledge.

What are patterns? A pattern is an event that repeats in different shapes, but always in the same manner, staging always similar processes. The concept of pattern was introduced by the architect Christopher Alexander in his A Pattern Language (Oxford University Press, 1977). It was aimed at architects, and you may get a feel for the approach with the sample 125 STAIR SEATS.

The concept of pattern then rapidly spread into the world of software programming, fuelling the revolution of object-oriented languages (OOPL). Leveraging our human relational attitudes, patterns arise in every context, trading and investing not excluded, as traders know well. If you use technical indicators, you know what I’m talking about: your eyes are trained to recognize patterns in the charts. That’s why they call it artificial intelligence.


Posted by Luca in free, generics, model insights

Waves and Cycles

When you bring patterns into the observation of waves, you will see the cycles. Cycles that expand and contract, that generate trends, cycles of any dimension. Cycles that change continuously. Unlike technical analysis, which tries to avoid the so-called “noise”, neural networks love noise: you feed them with noise, the rawest data possible always seems better, and then you have to train them.

I know of at least three methods to extract cycles from a historical sequence: Fourier transforms, the Armstrong method and neural networks. Well, in truth I know almost nothing about Fourier transforms and not enough about the Armstrong method, but I do know about the third way. What is interesting is that all three methods are totally different: they use different tools and apply different logics.

Training a neural network means transferring knowledge to the data. You associate, tag, mark, you name it, a certain record in your database with a certain “meaning”: for example, the number of days remaining until the next turn. The unique ability of neural networks is to project values into the future, not on the basis of an abstract theory, but by crunching the real numbers.
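As a toy illustration of this tagging step, here is a minimal Python sketch that labels each bar with the number of bars remaining until the next turn. The function name and the use of plain bar indices are mine, for illustration only; this is not r.Virgeel’s actual code.

```python
# Hypothetical sketch: tag each bar with the number of bars to the next turn.
def days_to_next_turn(dates, turn_dates):
    """For each bar, count how many bars remain until the next turn."""
    labels = []
    for i, d in enumerate(dates):
        future = [t for t in turn_dates if t >= d]
        if future:
            # distance measured in bars from the current position
            labels.append(next(j for j, x in enumerate(dates[i:]) if x == future[0]))
        else:
            labels.append(None)  # no known turn ahead of this bar
    return labels

dates = [1, 2, 3, 4, 5, 6, 7]          # bar indices standing in for dates
turns = [3, 6]                         # bars where a turn occurred
print(days_to_next_turn(dates, turns))  # [2, 1, 0, 2, 1, 0, None]
```

The `None` at the end marks bars whose next turn is not yet known; in practice those rows could not be used for training.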

Artificial intelligence can analyze cycles; you just have to pose the correct question, as to the Speaking Mirror: you need to extract meaning from the patterns. One of r.Virgeel’s indicators, rv.NC (Next Cycle), evaluates when and where the next cycle will take place: how many bars into the future and at which price level. It is not a triggering indicator; it is aimed at focusing our attention on incoming events. I consider rv.NC an alert indicator.

Posted by Luca in free, generics, model insights


Today I would like to talk about waves. The markets express themselves through waves. But first, let’s look at actual waves. When on the shore, look at the waves. Look at them for a long time and patterns will arise behind your eyes.

When I first attempted to put a neural network to work, the real first time, after having prepared the database and applied the simplest conditions possible, I was expecting a big awful inevitable crash in the code, or at least some “impossible” error underlining that I was trying to do something unattainable. Whooo. It passed and produced a meaningful output. In a fraction of a second, all the theories I had read about random walks went in the bin. Trash. Market activity is actually pattern-based, and patterns arise from its wavy nature.

At that time, I was working using the SPX as an input of the network, and it took me many years to understand that everything you want a neural network to give you must be on the output side. Self-referential networks may work for a while, but they are going to crash.

Waves are generated because many forces are applied to the market. On the water’s surface, it is the wind that generates waves. In the market, the sum of all the operative setups generates the waves. Waves that have intensity (volume) and volatility (price delta). Return to the shore: you will note that the waves of the sea have moments of intense activity and other moments of almost flat water. This happens because waves sum up, either building giant waves or reciprocally annihilating.

I saw these images by Dave Sandford and, while I suppose heavy editing may have been applied, I’m sure you will be pleased to look:

We learn a very important lesson from these pictures: the dynamic of waves summing and annihilating each other generates exceptional events, explosions, overcharges, unsustainable conditions that collapse in a fraction of time. If you now return to a long-term chart of the SPX,

Semi-log SPX monthly chart

I’m sure you are looking at it with a different eye. You know that zooming the chart all the way down to 1-minute real time, you find the same fractal behaviour. Waves build up in any time frame.

Now, I would like you to return to “patterns will arise behind your eyes” and consider that, with long experience, you may easily detect patterns in events. Simpler or more complex, patterns represent a probable outcome of an event; they fuel our ability to “foreview”. Patterns are actually the concept behind our self-defense attitude: they represent the conditions of a series of different components and their contribution to reaching a certain result.

Patterns inside a neural network are subject to the correlation between the inputs, as set up in the code. The network cannot guess something that is not in the realm of what it already knows. No intuition. Correlation, instead. Correlation is the equivalent of the summation/annihilation process in water waves. In a neural-network-based environment, the recipe of the elements that shape it is absolutely fundamental.

Can you correlate three inputs? Can you correlate twelve? Many dozens? We do not even have the charting possibility of overlaying many dozens of prices to detect the available patterns. Here the neural approach comes in useful. Consider that when analyzing the SPX, r.Virgeel does not “see” the SPX; it has no input from the index. What does r.Virgeel see, instead? It sees the fluctuations of a selection drawn from more than 200 markets and econometric series. The brain of r.Virgeel works at full steam on the daily time frame with a distinguished size of 12 million neurons, each one a correlation. We are in a territory where the human brain is far outmatched and artificial intelligence is able to produce meaning from what we humans perceive as chaos.

‘Cycles rise naturally in a system into which energy is added or subtracted’

The website offers access to the numbers crunched by the neural networks: targets, stops, etc. Even if signals are provided, I recommend you consider that r.Virgeel is NOT a trading system, but a set of weapons you may add to your arsenal.


Posted by Luca in free, generics, model insights

How r.Virgeel is trained

Following requests from some users who were initially confused about how to interpret the signals coming out of r.Virgeel, I think nothing is better than explaining how the indicators are produced. I have never done it so extensively before.

The process is very different from the calculations for a technical indicator such as RSI or MACD, which involve the direct extraction of values from historical price data.

Using neural networks follows a different logic: what do I want to know? Think of a medical example: doctor, I have these symptoms, what is the disease? Ideally, if I have a database of symptoms associated with various diseases, querying the database can easily give a probable range of possibilities, if not the correct answer. This happens now in many major hospitals around the world.

If you have ever used a spreadsheet (who hasn’t?), think of a table made of columns of homogeneous data, each line a single “experience”. You may add one or more columns and associate with each experience a characteristic, a category, a value, a disease, something that gives sense to the line of data. Great, you have created an a.i. model. It is a table where some columns are input data representing an event or an experience, and other columns represent a meaning we want to associate with each event or experience.

Usually, by habit, input columns are placed on the left of the spreadsheet and associated values on the right.
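The spreadsheet analogy can be sketched in a few lines of plain Python. The column names and the medical flavour are illustrative only, echoing the example above; the split into left-side inputs and right-side meanings is the point.

```python
# A minimal sketch of the "spreadsheet" idea: each row is one experience,
# input columns on the left, an associated meaning on the right.
rows = [
    {"fever": 1, "cough": 1, "rash": 0, "diagnosis": "flu"},      # experience 1
    {"fever": 0, "cough": 0, "rash": 1, "diagnosis": "allergy"},  # experience 2
    {"fever": 1, "cough": 0, "rash": 1, "diagnosis": "measles"},  # experience 3
]

# Split the table: input columns (left side) vs. the meaning column (right side).
inputs  = [[r["fever"], r["cough"], r["rash"]] for r in rows]
targets = [r["diagnosis"] for r in rows]

print(inputs)   # [[1, 1, 0], [0, 0, 1], [1, 0, 1]]
print(targets)  # ['flu', 'allergy', 'measles']
```

Training a network then means learning the mapping from `inputs` to `targets`, so that a new, unseen row of symptoms can be assigned the less improbable diagnosis.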

r.Virgeel’s left columns do NOT include any data from the S&P 500 index. All the inputs come from dozens of other market indices, covering all main stock markets, commodities, forex, metals, yields, bonds, etc.

I may say that r.Virgeel evaluates the S&P 500 as a consequence of all other markets. The influence of each market is directly related to its dimension, so the bond market and the forex are the most relevant.

At the moment the daily model has 74 indices on the input side and the weekly model has 123.

At this point, how are the r.Virgeel indicators built? What is placed in the right-side columns?

I will cover the main indicators: rv.Stop, rv.Target, rv.Position and rv.Signal.

First, the market action is broken into “positions”, either long or short. Here you may see a sample of a long position, from the training program. This code is one of the three legs that support r.Virgeel (the others being data maintenance and the number cruncher): it is where the “experience” is transferred into the model.


The position is marked by the thick white line.

rv.Stop (red dots) is placed to accommodate the whole movement. This is a traditional stop, a value that, if pierced, invalidates the position.

rv.Target is calculated from the highest high and from the highest close of the final bar of the position.

Two series of signals are placed at significant bars (let’s call them “arrows”): rv.Position puts arrows at the first and second bars of the move, to mark the entry points, and at the second-to-last and last bars, to mark the close of the position. It then marks all the other bars of the position with a “Stay long” marker.
The same signals are placed in the rv.Signal indicator, except for the “Stay long” markers; other arrows are also placed at meaningful bars, to highlight minor corrective movements and new entry points that become available while the main position is holding.
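The arrow scheme just described can be sketched as a small labelling function. Assume a long position of `n_bars` bars; the function and label names are my own invention, not r.Virgeel’s.

```python
# Hypothetical sketch of how a long position of n bars might be labelled
# for training: entries at the first two bars, exits at the last two,
# "stay long" everywhere in between.
def label_position(n_bars):
    labels = []
    for i in range(n_bars):
        if i in (0, 1):
            labels.append("enter long")                 # entry arrows
        elif i in (n_bars - 2, n_bars - 1):
            labels.append("close long")                 # exit arrows
        else:
            labels.append("stay long")                  # holding markers
    return labels

print(label_position(6))
# ['enter long', 'enter long', 'stay long', 'stay long', 'close long', 'close long']
```

A short-position labeller would mirror this with “enter short” / “stay short” / “close short” markers.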

These four indicators are in accord almost always. Almost. When they point in different directions, it’s a nice warning that something crucial is happening, and extra attention is suggested. After recent improvements, it seems that such indecision/uncertainty usually lasts one or two bars, then r.Virgeel returns to normal conditions.

In the background, you may see the rv.PosColor coded: bright green at major long entry points, bright red at major short entry points, a gradient in between. It is a confirming tool.

All indicators are calculated separately, and r.Virgeel is totally unaware of its own past outputs. Every time it produces an output, call it a forecast or a diagnosis, it evaluates the present values on the left side against the archive of experiences it has recorded. This is called PatternRecognition and is produced by applying a reverse-logic algorithm. The less improbable result is output.
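As an illustration only, the “evaluate present values against the archive” step resembles a nearest-neighbour lookup. This toy stand-in is not r.Virgeel’s reverse-logic algorithm; it simply shows how a current reading can be matched to the closest recorded experience.

```python
# Illustrative nearest-neighbour lookup over an archive of
# (inputs, meaning) pairs; names and data are hypothetical.
def closest_experience(current, archive):
    """Return the archived (inputs, meaning) pair closest to the current inputs."""
    def dist(a, b):
        # squared Euclidean distance between two input vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(archive, key=lambda rec: dist(rec[0], current))

archive = [
    ([1.0, 2.0], "rising"),
    ([5.0, 1.0], "falling"),
]
print(closest_experience([0.9, 2.2], archive))  # ([1.0, 2.0], 'rising')
```

The output is the single least-distant experience; a real system would weigh many near matches and output the less improbable meaning among them.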

This is how neural networks work. Now you may better understand why I wrote “A.I. is different“: it has nothing to do with technical analysis by any means.

One recent improvement has been the introduction of the FastTrack indicator. It is the first indicator suggested directly by r.Virgeel. The model has a strange behaviour (probably explainable by a mathematician, which I am not) that produces exaggerated forecasts under defined conditions. Leveraging this defect, I verified extensively that the output is a fantastic trend follower: false signals are near nil and, even when they occur, you can usually return to the correct side with a minimal loss. The FastTrack output is a series of four levels. What matters are the two levels that define the central neutrality area.

Let’s make an example. Today’s D-FT (Daily FastTrack) is the sequence 2837.79, 2797.362 | 2808.145, 2788.749.

  • The ordered sequence is 2837.79, 2808.145, 2797.362, 2788.749.
  • The central neutrality area is defined by the numbers 2808.145 and 2797.362.
  • Above 2808.145, we have confirmation of a rising dynamic under development.
  • Under 2797.362, we have confirmation of a declining action under development.
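The reading rule above can be sketched in a few lines: sort the four levels, take the middle two as the neutrality area, and classify the price against them. The function and label names are mine, not part of r.Virgeel.

```python
# Sketch of reading the D-FT levels: the two central values of the sorted
# sequence bound the neutrality area.
def fasttrack_state(levels, price):
    lo_band, hi_band = sorted(levels)[1:3]   # the two central levels
    if price > hi_band:
        return "rising dynamic"
    if price < lo_band:
        return "declining action"
    return "neutral"

levels = [2837.79, 2797.362, 2808.145, 2788.749]
print(fasttrack_state(levels, 2815.0))  # rising dynamic
print(fasttrack_state(levels, 2800.0))  # neutral
print(fasttrack_state(levels, 2790.0))  # declining action
```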

The FastTrack is an end-of-day indicator; it is very sensitive and I find it useful placed on a short-term intraday real-time chart. Actually, using r.Virgeel’s framework, the D-FT levels, some rudiments of Elliott Wave theory and a couple of moving averages seems a good setup. I’m testing it at the moment with a certain satisfaction and I will report on the experience when it is mature. Simply said, I’m taking the first steps towards a wealth-building machine, using intraday action, based on a weekly staircase. Each week’s target is based on a stable compound capitalization model. In a few months I will go bankrupt (on the CFD account) or the machine will be snorting and puffing and chugging.

r.Virgeel’s indicators somehow mimic some traditional t.a. indicators, and this may generate confusion or misunderstanding. It’s my fault: as the model’s construction developed, I drew on my previous experience as an investor and student of trading, fishing for ideas to shape r.Virgeel. Many indicators have been developed and few survived. Some brilliant ideas proved to be dumb; others have a long track record of solidity and sharpness. The quest is not over.


Posted by Luca in a.i., free, indicators, model insights

The new AlphaChart

For a few weeks now, subscribers have had a new tool, the AlphaChart page.

This page on the website has replaced the daily email and post. It has a larger chart, almost all of the model’s outputs are represented, and it integrates the weekly and monthly forecasts on the same chart. The most recent bars are enlarged in the bottom right corner, with the projected D-FT levels for the incoming bar. All relevant numeric data is reported on it. The chart is clean and easy to read, has room for improvements and additions, and is easy to access from your smartphone, tablet or pc.

The AlphaChart is updated every 15 minutes during trading days, to let users have visual feedback in (almost) realtime.

The first chart of the final version of the AlphaChart, dated 8th of February.

By the way, this chart indicates an rv.Target at 2805, exactly where we are almost one month later. Targets are dynamic; new targets have been opened during the past month.

The AlphaChart is still under development: it has reached a first stable step and I will refine and complete it. The chart page is also growing with additional analysis and the new real-time updates, which, I have to say, are a bit tricky: they are in alpha testing at the moment, meaning they may become a stable feature or die soon for lack of interest. But they may be important, during reversal days, to track the market into the close and get alerts in advance.



Posted by Luca in free, generics, model insights, r.Virgeel

Bifurcations, minority reports and r.Virgeel’s jargon

I have published the usual monthly update and in the post I included a significant minority report. One subscriber was surprised by the existence of a “minority report” and asked how it works. It is an interesting question, which I cannot answer exhaustively without revealing some well-kept secrets about the building of the model. But I can try to explain.

Like any software tool, r.Virgeel’s code is full of variables. One of my long-term efforts has always been to reduce the parameters of the model to a minimum, to avoid any possibility of over-optimizing the networks. Neural networks have the ability to generalize, inducing replies out of sample, inside their realm of comfort.

Finally, in the latest versions of the model, I arrived at just one variable, the one and only that affects the model’s sensitivity. Let’s call it sensibility. Low sensibility produces more volatile analysis and indicators; higher sensibility produces results that are more stable day by day. If sensibility goes too high, r.Virgeel gets stuck for long periods, in a sort of trance. There is an interval of best response.

I’m interested in a reactive and adaptive response, so I usually select a sensibility value inside the well-tested range and change it only occasionally. I also take a look at the forecasts generated with different sensibility values, and if I find a particularly persistent minority report, I share it with subscribers, to warn of any possible incoming event.

Also, consider that, to work, neural networks must be trained. Training means that experience is transferred into the network, and what comes out is that very similar conditions are trained towards opposite outputs. It is not an error or a limitation; it is inevitable: every market is a living entity, and r.Virgeel brews its reports from a huge correlation matrix between dozens of markets. Altogether, it’s gigantic. Bifurcations are inevitable; they are part of the living thing. Bifurcations and minority reports are different aspects of the same datascape. I’m working on this aspect, but it is a long way off. Anyway, bifurcations have reduced their aggressiveness and really interesting minority reports are rare.

If you come from technical analysis, you are used to considering the price bars as your primary source and to self-referencing your data to generate significance. Inside the model, the SPX is absent from the correlation matrix, being placed on the learning side. Yes, it’s different. The whole configuration of dozens of other markets generates the SPX inside the brain of r.Virgeel; by the way, the most relevant markets are the biggest, not surprisingly. The process is known as “pattern recognition”: find where some similar data sits in the archive and learn from it to process the current moment. Once the model works, and it has been real-time tested since 2013, you, me, we are not required to do much. Through the indicators, r.Virgeel gives a variety of different readings of the present status of the market, designed to be in reciprocal confirmation.

The use of the output we get from r.Virgeel is up to each one of us. I’m sure everyone uses it differently, with different time horizons. The a.i. tools offer huge potential to enhance many trading styles. At the moment I’m testing the intraday use of the FastTrack indicator and the results are really nice. The FastTrack levels on the intraday chart give me an immediate frame, and they work from 4h down to 5min. You do not have to use r.Virgeel in a specific way: find how it fits your plans.

A final word about the jargon: I understand that the technical terms may sometimes confuse you, but I must use them to be clear. Artificial intelligence has a plethora of dialects and terminologies; it is exploding just now and I’m sure that in the near future many of its concepts will be in the public domain.


Posted by Luca in free, minority report, model insights, r.Virgeel

Forecast/ability 2

In the previous post “Forecast/ability” I referred to the daily a.i. forecasts and showed the results of long and extensive research on the quality of the model’s response.

But when we come to the weekly and monthly forecasts, things change radically, and for the better. Undoubtedly, weekly and monthly bars undergo reduced “noise” and better express the global consensus of the participants in the market activity. The market is fractal in nature and I have no explanation for why it behaves differently across daily, weekly and monthly time frames. Maybe because it reflects the attitudes of different categories of actors (investors have a totally different approach to the market than day traders or position traders). Anyway, this is what comes out of years of observation of the forecasts produced by my model.

Just as an example, this monthly forecast chart was produced exactly one year ago, on the 5th of December 2017, and shows how r.Virgeel correctly forecasted the October 2018 correction ten months in advance. Astonishing, eh? Consider that the monthly model is a long-term database of financial and econometric data, so the detected patterns are related not only to the market activity, but also to the underlying economic activity.

(The right part of the chart has been cut, out of respect for paying subscribers, as it refers to current market expectations and is still valid.)



Here is another example, from the weekly model, published to subscribers on the 3rd of March 2018: a deep and scary correction was shaking the markets, and r.Virgeel correctly forecasted that in 4 to 5 weeks the S&P 500 would reverse (without reaching the previous low) and go for new all-time highs, as it did.


Quite obviously, these long-term forecasts are not very interesting for very short-term traders, but they may be invaluable for position traders and investors, who get a global vision of the market totally purged of biased news and so-called expert opinions. These are not opinions of any kind; they are the result of pure brute-force number crunching.


The psychological advantage of this knowledge is actually the first and best result: less stress, better decisions, more returns on your investments!

Posted by Luca in a.i., accuracy samples, free, generics, model insights, r.Virgeel


Following my previous post “New Tools at the Horizon“, one question was twirling in my mind: why is the stock market forecastable, yet its forecasts not reliable?

The forecastability of the market is an evidence, because if it were not forecastable, being just a random walk, there would be no way to get an output from the neural networks that manage the forecasting process. For a neural network to work, there must be some sort of structure inside the data that can be used to produce the forecast/diagnosis.

This chart shows a blind neural network, unable to recognize any pattern in the input data.

And this hidden structure is present indeed inside the market data; otherwise r.Virgeel would be totally blind and dumb. The chart above is a sample from a blind network: no structure is evaluated and the output is just an array of zero values.

The fact that we humans do not recognize any structure in the data is irrelevant.

So we have a (hidden) structure and the neural tools recognize it, but the output ranges from nicely precise to totally incorrect, with no way of knowing how closely the result matches the real future movement of the price.


Now, I begin to see the light.

The price of a financial instrument is the result of an ask/bid process, where a multitude of actors (I’m considering liquid markets with a wide audience) buy and sell that instrument on the basis of a personal forecast that its price will rise or fall in the future. Every participant in this activity actually makes a personal forecast every time he/she executes an order. So the resulting price is the sum of all the collective forecasts and, at the end of the day, this collective forecasting process generates the push that moves the trend.

[revec2t text="Every participant in the market activity actually makes a personal forecast every time he/she executes an order."]

In other words, every attempt to forecast the market is a process of forecasting a collective forecasting activity, a meta-forecast: no surprise that somewhere in the process one or more dimensions are lost, and the result is probably something similar to a shadow that lets you recognize the original shape under certain conditions and totally disguises it under others. When you project a multidimensional event into a field with fewer dimensions (think of a 3D object projected onto a plane), you lose a significant portion of information and may generate a lot of ambiguity.


A 3d object projected onto a 2d plane may generate very different shapes
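The dimensional loss is easy to demonstrate: under an orthographic projection that simply drops the depth axis, very different 3D points cast the same 2D shadow. A minimal sketch (the function name is mine):

```python
# Toy illustration of losing a dimension: distinct 3D points collapse
# onto the same 2D "shadow" when the z (depth) coordinate is dropped.
def project_to_plane(point3d):
    """Orthographic projection onto the xy-plane."""
    x, y, z = point3d
    return (x, y)

a = (1.0, 2.0, 0.5)
b = (1.0, 2.0, 9.0)   # same shadow, very different object
print(project_to_plane(a) == project_to_plane(b))  # True
```

Going the other way, a single 2D shadow is compatible with infinitely many 3D originals, which is exactly the ambiguity described above.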


Now, the forecasting process is just a minor side activity of r.Virgeel, even if it is the most appealing and mind-storming: r.Virgeel is mostly a diagnostic tool that reads current data and finds historical patterns matching the best market position available, with significant success.

Posted by Luca in a.i., accuracy samples, educational, free, model insights, r.Virgeel

New Tools At The Horizon

Last September, I designed a new weekly model and some new tools to investigate more deeply the quality of r.Virgeel’s model response. The results were really astonishing and started a real revolution in my approach to artificial intelligence and investing.

Three months later, after a huge amount of testing and experiments, the new weekly model is almost abandoned (well, it’s alive, partially), but many of the discoveries have been transposed into the “old” model and I may begin to share the results of the research.

Bad News First

I’ve always considered the Future Bars my best benchmark: 24 bars (daily, weekly or monthly) into the future mapping the “less improbable” path that the S&P 500 will follow. It’s a big challenge, indeed! One of the new tools I’ve developed lets me test r.Virgeel’s past behaviour, and a lot of surprises came in.

chart n.1

chart n.2

The response of the daily model is very variable, ranging from the nice precision shown in chart n.1 to the total failure shown in chart n.2. What was more surprising is that r.Virgeel is much more precise during wave development and intra-wave corrections, and usually totally wrong at turning points. I still have no idea why this is so, but I’m sure it is a direct consequence of the fact that the market is a live thing, a really special living organism.


At first, this was a big disappointment! But it helped me to separate the a.i. forecast outputs from the a.i. diagnostic ones (the main other indicators that allow r.Virgeel to decipher the current market condition and take a position), which are much more reliable and precise. For the first time, I realized how deeply different in nature the two things are, and how different our expectations must be for these two aspects of market analysis.

Then Good News Arrived

Then something surprising happened, and it was a revolution. I was revising hundreds of sample charts when r.Virgeel suggested I take note of its recurrent and inexplicable behaviour at market turning points: the FastTrack indicator took shape in a few hours, and it is one of the best goals I have ever achieved.

I was a bit upset at the beginning, because I am a medium-to-long-term investor and not a short-term trader, while the FastTrack, it was clear from the beginning, is a tool for short-term positions. Then I put it to the test, the usual real-time test, from its first long position on the 30th of October: since then it has completed three swings up and three down, and is now in its seventh position (long), collecting a gross profit of 302 points (or 11.3%) from closed positions, plus an open position that is gaining 108 more points, bringing the total profit to 410 points, or 15.4% of the initial capital. Of the six closed positions, two were losers: one (long) was down 8 points and the other (short) 4 points.
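For the curious, the arithmetic above can be checked back-of-the-envelope, assuming the “initial capital” is the base implied by the 11.3% figure (my inference, not stated in the post):

```python
# Back-of-the-envelope check of the quoted FastTrack figures.
closed_points = 302                      # gross profit from closed positions
open_points = 108                        # profit on the open position
total_points = closed_points + open_points   # 410

capital = closed_points / 0.113          # capital implied by "302 points = 11.3%"
print(round(capital))                    # ≈ 2673, roughly the index level
print(round(100 * total_points / capital, 1))   # ≈ 15.3, close to the quoted 15.4%
```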

It is a gross and theoretical profit that must be adapted to the instruments used to invest, to position costs and slippage; but considering any possible drawdown, it is reasonable to say that you could have had a before-tax return of more than 10% in just one month, during very difficult market conditions (a deep correction inside a rising market). I will publish the complete position record at the end of the public diffusion of the FastTrack, in about two weeks, but I’m sure the readers following the blog are well aware of the goodness of the new indicator. It is precise, responsive, objective and totally deparametrized.

Expanding The Analysis

One of the consequences of the introduction of the FastTrack indicator is that r.Virgeel will soon be able to apply its model to financial instruments other than just the S&P 500. I’m working on it, and I hope to be ready by New Year’s Eve to produce the FastTrack levels for the Dow Jones Industrial Average and the Nasdaq Composite, then for the main stock market indices worldwide (DAX, FTSE, HSI, N225, …) and also for EURUSD, GOLD and others. I will need some time to prepare and verify the framework and the new models, and to modify the website to accommodate all this new information (it’s a nice challenge), but the result will be a larger set of possibilities for us to approach investment selection.

[revec2t text="r.Virgeel will soon be able to apply its model to financial instruments other than just the S&P 500"]

As a consequence of all the above, I have also realized that the standard one-year subscription plan that the site has offered since it was born is probably inadequate for short term traders, who have a totally different approach to the market, so a new one-month subscription plan is now available, to let you test and evaluate whether the r.Virgeel collaboration fits profitably with your trading habits.

A Final Warning: EURUSD

As I told you, the new weekly model is in stand-by, but I regularly update the database and verify its response to updates.

Today, it produced this chart for EURUSD, signalling a possible waterfall event with a Euro crash in the coming weeks, pointing to the 1.035/1.05 area. As the European crisis is looming (Italy's budget, turmoil in France, Deutsche Bank crashing – is that enough?), the Euro seems destined to pay the bill dearly.


UPDATE – It was not yet ready to crash, but at the moment the EURUSD is not in good shape anyway.








Posted by Luca in free, model insights, performance, r.Virgeel, subscription

Bifurcation at last!

If you've read here and there around the blog, you know I have sometimes used the word "bifurcation" to indicate double-exit situations, but I've never been able to show them before.

Now I'm building a new tool and the results are full of surprises. One is the following chart:

The next-day bar is forecast quite correctly, but the second day ahead shows an evident divergence.

It's another confirmation that bifurcations exist and are part of the data, encapsulated in the model matrix.




Posted by Luca in free, generics, model insights, r.Virgeel

Going Forward

For many months, I've tried to put together the pieces for a long term database, monthly based and with a huge history. I was moved by the progress of the monthly forecast, which I see as sharper than before. No way. Data is largely unavailable. Very few series of major commodities and indices are out there, very few indeed. r.Virgeel works well with more data. More data. No way.

Then I started to look at my beloved weekly forecast, and I soon realized that I could extend the weekly database back in time by reducing its components. To put it simply: more data for fewer symbols. Mmh…

I had hoped that data could go back a few more decades, but I now have a new weekly database about 75% longer in time than the standard one. It needs a whole new ecosystem of code to work, and the basic part is in beta development and running. More work is needed, but I'm very curious about the new code's output, when it comes.

So, I'm rewriting the code once again, and with a new database (DBMS) it's a game of traps and bugs. The new DBMS is slower than the standard one, but its tameness is great. Being slower, I have to rewrite the code optimizing every step, and this is good. It will take time, but then we will have a brand new weekly brain. Worth the effort.



This chart is the very first captured from the development. It is the training process feedback. The color code seems to work well. One of the enhancements is a better visual evaluation of r.Virgeel's learning, an evolution of the Colored Bars. I try to be as impartial as possible and help r.Virgeel detect bottoms and tops and generate optimal signals for operative triggers. Working on the past, it is not difficult, but sometimes tricky.

Another enhancement on the horizon is that r.Virgeel will no longer be restricted to the S&P 500, but will open up to a batch of experimental new subjects: EUR/USD and Gold will be the first. Before, it was not impossible to obtain a forecast for other symbols, but it was quite complex, the original project being very SPX-centric. Now it will be easier to test some new entries and see how they work out.

I wanted a huge monthly database, and I will have a not-so-huge-but-larger new weekly model. At work, now.





Posted by Luca in free, indicators, model insights

Astonishing results

The S&P 500 has entered new, never-before-seen territory, passing the 2900 level on Monday, and you may wonder how it is possible that r.Virgeel can forecast something it has never seen before.

It's a good question. You have to know that neural networks, if applied to a well designed model, have the ability to generalize, a typically human behaviour. To generalize means to derive or induce a general conception or principle from particulars: in r.Virgeel's case, it means that it has enough experience (past data and training) to digest never-before-seen values and produce a reliable analysis.

Aside from confirming that the model at the foundation of r.Virgeel is well shaped, another interesting observation is that this is a further demonstration that the market has an inner hidden structure, and any random-walk model is rubbish. The fact that humans are not able to see this structure is not relevant, when you have a tool such as artificial intelligence brute force.

Given the astonishing results of r.Virgeel (yes, they are astonishing for me too), I'm working on the extension of the model: it's a hard and tedious job, it will take time, but I'm beginning to see that improvements to the model are possible. More data, more precision.

It’s artificial intelligence, it’s different!




Posted by Luca in free, model insights, r.Virgeel

Latest performance

The following slider shows some of the latest forecasts brewed by r.Virgeel on the daily time frame. It is almost in real time, as it shows how the model has acted since the last bottom in the 2700 area in late June, starting from the close of the 27th of June through the 18th of July, 15 bars, so three full weeks.

  • in bar #1, you may note that r.Virgeel marks the third consecutive reverse signal: it's a very clear entry suggestion, at the next bar's opening;
  • in bar #2, we have the confirmation: a reversal bar and the Stop now correctly in place;
  • subsequent bars show how r.Virgeel reacted to some "uncertainties" in the S&P 500 and to the following move.


[metaslider id="11911"]


You may finally note that the weekly forecast encapsulated in the chart has been slower, in this case, to adapt to the evolution of the market.

Even considering the worst entry point at 2716 (the close of the 28th), with the index now around 2800, the position is 84 points positive, or +3%.






Posted by Luca in checkouts, free, model insights, performance, r.Virgeel

Are you kidding?

Going through all the material I have accumulated during spxbot's development, I crossed this post from Dec. 19th, 2014, available here.

This was the very day I opened my eyes: this chart demonstrated that, even in its very first steps, r.Virgeel could see "things that we humans…"

The chart is here. It is very rudimentary, but the message is clear. We came from a very negative week, and the forecast – as usual brewed over the weekend, before the Monday opening – showed a close totally recovering the loss.

I thought it was kidding me and that my many months of work were rubbish. Then came the Friday checkout and… Bingo! The dart was in the centre. I really couldn't believe it!

(the small red dot is the close of the week, the bull’s eye is the forecasted value)

Two things were coming to the surface: 1. there is a structure in the market, and 2. r.Virgeel can see it better than we can.

More than three years have passed since then, and r.Virgeel now walks on its own legs, like a nerdy teen.



Posted by Luca in free, generics, memorabilia, model insights

Spxbot robo-advisory performance

(this post was originally published in newsletter no. 1 in mid-January. I will probably not publish any performance-related data in the future, mainly because r.Virgeel is not a trading system and also because real performance depends on a lot of factors: your location, tax burden, investment strategy, etc. Anyway, the first two years of activity are summarized here, to show how the robo-advisor is performing in the real world)

I've always been reticent to publish the performance of the Position trading system: I've even dismissed the record table for the subscribers, without any complaint from them. It would be complex to explain why; it's something intuitive, but it is related to my distrust of backtesting. First, I've always tested forward, not backward. It's slow, for sure. The model, or as I call it now, r.Virgeel, has taken shape over four years and is performing well. It is still under development: new ideas are passing like clouds at the moment. r.Virgeel correlates dozens and dozens of markets and is trained to be a prudent trader, preferring the long term run to frequent short term trading. r.Virgeel warned us that the market was heading to 2730 many months in advance, and, honestly, I couldn't believe it. r.Virgeel is trained to take the best position available at any single moment, and it has known the market for a long time. It is not biased, it is responsive and adaptive, and it's huge (the brain is now around 15-17 million neurons). It's not something that you can backtest, you know (even if it is remotely possible), and I cannot guarantee that at a certain point it won't go completely nuts. At the moment, r.Virgeel is well fit.

Anyway, I understand well that performance is a good starting indicator for evaluating any investment strategy: we have a bottom line, and our mental model is arranged to appreciate rankings.

So, this is the update to the daily Position performance. The system started its first recorded operation (long) on Feb. 12, 2016 with the index at 1857. Two years may seem a short time, and it is, but, as I wrote before, r.Virgeel is unbiased: it doesn't take its previous evaluations into account. Since inception the Position system has completed 14 positions, 12 long and 2 short. Holding periods have lasted from 6 to 74 days, with an average around 37 days. Three positions were negative: -0.28%, -0.84% and -0.04% respectively. The total gain sums up to 823 points, or 44.32%, with an annualized rate of 23.14%. A buy-and-hold strategy would have won, as the S&P 500 gained 831 points in the same time window. This is mainly due to the fact that the model is trained to be conservative: it exits the market when short term profits are too risky, and recent market conditions made it difficult to re-enter at a lower price.
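For readers who want to check figures like these themselves, here is a small sketch of the two common ways to annualize a total return. The exact window length used for the 23.14% figure is not stated in the post; roughly 23 months (Feb. 2016 to mid-January 2018) is my assumption, and under it the quoted rate matches simple, non-compounded annualization:

```python
def annualize_simple(total_return_pct: float, years: float) -> float:
    """Simple (non-compounded) annualization: total return divided by years."""
    return total_return_pct / years

def annualize_compound(total_return_pct: float, years: float) -> float:
    """Geometric (compounded) annualization, for comparison."""
    return ((1 + total_return_pct / 100) ** (1 / years) - 1) * 100

# Assumed window: Feb. 12, 2016 to mid-January 2018, roughly 23 months.
years = 23 / 12
simple = annualize_simple(44.32, years)      # roughly 23.1%
compound = annualize_compound(44.32, years)  # roughly 21.1%
```

The two conventions diverge as the window grows, so it is worth knowing which one a published figure uses.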

All calculations are made on the gross index value, without taking into account transaction costs and fiscal payments. A reasonable slippage is applied. All transactions are evaluated at the opening of the day following signal generation. Personally, I manage my whole personal capital under r.Virgeel's advisory; I only select the ETFs within a range of preferred sectors.




Posted by Luca in free, model insights, performance, selected primer, 0 comments

Model’s tuning

Long time no see! I've been busy reconsidering the structure of r.Virgeel. I came to the conclusion that we need more signals, and this led me to split the Position indicator, separating the signals from long positions and from short positions. In effect, the Position indicator has been split since its inception, but I had always considered the single Position array sufficient, because it was the exact replica of the two split vectors. On the other side, and now I realize it, having both signals in overlapping bars can be a significant advantage. So now we have it.

Also, I have cleaned the learning database and simplified the algorithmic learning; it should now be in optimal condition. All the models share a global instruction set, easy to maintain.

And, finally, I added some new data lines and revived some others that, having errors, were not considered in the model. Now, the individual model components are:

  • monthly model: 54 components
  • weekly model: 109 components
  • daily model: 71 components

I concentrated on enlarging the weekly database, which is performing quite well, and probably some new data lines will follow soon. The database is never large enough!

Keep in touch, the new newsletter is on its way!



Posted by Luca in free, model insights, 0 comments

New Forecast Chart

The migration process is almost complete. New hardware, new database, revised faster code and a new chart layout. And more. Porting the code to the new hardware (and new OS) was long and full of traps: I had to walk through every line of code, and some parts had to be completely rewritten. It has been under observation for a while and is operating satisfactorily.

Here is the overview of the new chart, with all the elements. For subscribers, the new chart will go live in a few days, after the last checks.


[metaslider id=7990]


All indicators, except the Cycle Frequency Analysis and the mhDMI, are the output of neural networks, with no human opinion or intervention. The model (mr. r.Virgeel) is totally unbiased and as deparametrized as possible, making it responsive and adaptive.

The Future Bars

The next 24 bars cover a bit more than one month in the daily time frame, almost six months in the weekly time frame, and two full years in the monthly time frame.

The Stop

The Stop is trend following: it is the value that the index must not penetrate (on close), otherwise the current position is negated and closed, and the system goes flat.
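The rule above can be sketched in a few lines; the function name and return labels are illustrative, not the actual r.Virgeel code:

```python
def check_stop(close: float, stop: float, direction: int) -> str:
    """Evaluate a trend-following stop on close.

    direction: +1 for a long position, -1 for a short one.
    Returns 'hold' while the stop is intact, 'flat' when the close
    penetrates it and the current position must be closed.
    """
    # A long position is negated when the close falls below the stop;
    # a short position when the close rises above it.
    if direction == +1 and close < stop:
        return "flat"
    if direction == -1 and close > stop:
        return "flat"
    return "hold"
```

For example, a long position with the Stop at 2716 survives a close at 2800 but is closed on a close at 2700.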

The Stamina

Stamina is an evaluation of the residual energy available to the market to proceed in the current trend. Positive and negative stamina are calculated separately, relative to long and short positions. Turning points have the highest stamina; a position's end has the lowest.

The Compass

It is a simple confirmation graphic tool: if the arrow is green and points upward, well, long positions are preferred. If the arrow points down and is red, the market is in decline.

The Cycle Frequency Analysis

Starting from page 67 of the PDF, I reverse-engineered a part of the analysis (the cycles of highs and lows), and it is charted in the bottom strip. At the bottom, the Cycles of Lows; in the middle, the Cycles of Highs; at the top, the sum of the two. Please refer to the PDF linked above for any further information.

The mhDMI

This is a modified version of Wilder's Directional Movement Index (DMI): basically, a momentum indicator that splits market action into two different lines, one for positive, growing action and one for negative, declining action. The modification from the original code is the use of smoothed inputs, which produce an asymmetrical output.
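A minimal sketch of the idea: compute Wilder-style directional movement, but feed it pre-smoothed highs and lows instead of the raw series. The function names and the simple moving average used for smoothing are my assumptions; the actual mhDMI smoothing is not disclosed in the post:

```python
import statistics

def smooth(series, n=3):
    """Simple moving average used here to pre-smooth the raw inputs."""
    return [statistics.fmean(series[max(0, i - n + 1): i + 1])
            for i in range(len(series))]

def dmi(highs, lows):
    """Plain Wilder-style +DM / -DM streams on (possibly smoothed) inputs.

    +DM fires when today's high exceeds yesterday's by more than the
    low declines, and vice versa for -DM; otherwise the stream is zero.
    """
    plus_dm, minus_dm = [0.0], [0.0]
    for i in range(1, len(highs)):
        up = highs[i] - highs[i - 1]
        down = lows[i - 1] - lows[i]
        plus_dm.append(up if up > down and up > 0 else 0.0)
        minus_dm.append(down if down > up and down > 0 else 0.0)
    return plus_dm, minus_dm

def mh_dmi(highs, lows, n=3):
    """The mhDMI idea: same DMI logic, but on smoothed highs/lows."""
    return dmi(smooth(highs, n), smooth(lows, n))
```

Because the averaging lags rising and falling inputs differently, the two output lines no longer mirror each other, which is the asymmetry described above.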




Posted by Luca in free, model insights, 0 comments

A brief history of SPXBOT

In the late 80s, I crossed paths with BrainMaker, a suggestive piece of software that let you play with neural networks. I was working as an architect and was self-taught in the theory of patterns as formulated by Christopher Alexander. On one side, pattern recognition; on the other, patterns in reality. A nice field of research.

At a certain point, let's say the beginning of the 90s, I was ready to take off with a language and code my own tools. Easier said than done; I finally got hold of a wonderful library that manages back-propagation networks. Straight, fast, error-prone, wow! I built many, many versions of a tool that always showed the same definitive fault: paying the price of having to tune parameters and entering self-referential loops.

Mid 90s, big turmoil: in Italy, in Florence, where I lived, in work, in my life. I gave up. I saved everything, backed it up in an orderly fashion, and closed. In the mid-2000s, let's say 2005, I was living and working in Milan, working hard building a villa on the outskirts of Moscow for a wealthy Russian family. It was like having another son; well, a daughter. My love.

It was around 2012 that I began thinking about the need to develop a plan B. Put your skills under short circuit and your plan B will materialize. In the spring of 2013 the design began to take shape. About one year to collect the data, build the database management, and generate the skeleton of a possible model. Most of the following two years of development are documented in the Market Mind View free blog, already available to snoopers.

The model has been developed since then, and I think it achieved a stable, mature configuration in recent months, mid-2017. The result of four years of work is a model that simultaneously correlates dozens and dozens of inputs, consisting of financial and economic indices: stock markets, currencies, bonds, rates, commodities, metals, energy and more. The largest of the active neural network models has more than 17 million neurons, or nodes. The model is heavily deparametrized, as it is set up by design to enhance the generalization ability of neural networks, in other words their ability to see, to recognize, to classify. Are you surprised? Software can see? Are we really in Minority Report or The Matrix? Can software really see into the future? Well, no. And yes.

Since the early 90s, I have charted my modified DMI indicator, here in the Pascal version for the CFD platform. The indicator separates the positive (blue) and negative (red) action of the market, as the DMI does, using averages as input instead of the plain H/L/C data. Can't live without it.

Let's look at it another way: the neural network always provides the least improbable result, chosen from the archive of possible results it has recorded. It's a complex relationship: given these data, this possible diagnosis. This classification. This signal. Neural networks learn from the experience you provide them. They do not calculate the result: they know the result, if someone has taught it to them. Otherwise, they guess, in a kind of reverse statistics, the least improbable result. Not the precise result of a calculation, but the least improbable result from their experience database. If you teach rubbish to a network, you get… guess what? You get the least improbable rubbish.

In mid-2015, I realized that the architecture work, in Italy, was probably over, and that the neural networks were worth the development. I also realized that the information I was producing was getting sensitive, and that induced me to hide from the large public, giving life to this premium website. I mean, I wanted to continue my research and to have interaction with selected, interested people, because this interaction is precious. It's real fuel in the development process. So the site is available under a cheap annual fee, because I prefer a select small club to a vast, messy audience.




Posted by Luca in educational, free, generics, model insights, selected primer, 2 comments

Riding the wave and then… splash!


Following my previous post, I would like to point out that, approaching a.i. advisory, you have to change your mindset. In all probability, you are trained in technical analysis, various techniques to train your eye, and numbers to correlate with the stream of data. With extended application to chart reading and some discipline, it's not hard to avoid the largest mistakes and trade with some results.

What happens is that your carefully brewed technique, one day, blows up. You were riding the wave and… splash! Any technical analysis lets you ride one wave, if well set up, but you are bound to splash as soon as the new wave comes in.

Try to experiment with volatility and you begin to see how unpredictable the consequences of human behavior are. And that is just one dimension. So, if you are expert enough to change your trading attitude radically and in a timely manner, there is no need to read further.

I have to say, I'm not in that group. I tend to stick to one particular solution, trust it, and be deluded. And lose money. I do not really trade with t.a. any more, except in a minimal account where I experiment and keep the relation between the a.i. model output and t.a. charting up to date. There, I trade using the a.i. daily forecast as a long term preview, and I use a 2h-bar chart with my beloved modified DMI plus Parabolic and Hi-Lo Activator. In other words, momentum and stops. I activate them manually, but following the indicators' notifications. Shaking all the information together, it's not hard to catch good opportunities, always in the direction of the predominant trend. If in doubt, stay out. The account is growing.

So, if the use of a robo-advisor is in your future, prepare to radically change your trading and investing behaviour. Sit down, try to imagine all the things you could do in your spare time, and start. Yes, you will have a lot of spare time, embarrassingly large amounts of it. You need to fill that time, and periodically – every day, or even every week, maybe every 4 hours? (it will depend on the advisory you adopt; there are many around, and they are popping up massively) – you will just peep, take note, check, and take a few, very few, triggering decisions. Then, fast back to your gardening, your fitness training, your passion for motors, your reading as if there were no tomorrow. In my spare time, I develop the spxbot a.i. engine and this website.

Just a final note: any time I tried to use t.a. indicators (and I tried many) inside the model, the results were disastrous. The one-dimensional constraint that they introduce conflicts with the network's correlation ability. In effect, I learned from my very first experiments in neural network modelling with BrainMaker back in the late '80s that neural networks work well with data that is as raw as possible. Any transformation introduces a field of possible wrong solutions, and divergences proliferate. The neural networks do not need to be perfect; they need to see just a little bit better than you and to discriminate using details that you cannot even imagine exist. That's not perfection, that's sharpness.

Keep in touch.








Posted by Luca in a.i., free, model insights, psychology, selected primer, 0 comments

As you asked

A day-trading reader asked about some features of the site. Here is the reply:

The Stamina indicator is designed to evaluate how much energy is implicit in market conditions to complete the current move. It is something like the mirror of the Target. It is calculated separately for positive (long) and negative (short) positions. So, the present values of 0.214/0 mean that the market has (or should have) 21.4% of the move ahead toward a high, and 0% in the downward direction.

The Position indicator is a pattern recognition tool: it fires 1 (or -1) when it detects the conditions for opening a long (or short) position, and 0 when the position is supposed to be closed. A 0.5 value is the standard "stay long" suggestion (-0.5 if the position is short). Other values in between define the "add to position" and "reaching top" (or bottom) signals, usually with a couple of days' anticipation.
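The value-to-suggestion mapping described above can be sketched as follows; the function name is hypothetical, and the handling of intermediate values is my simplification of the "add to position" / "reaching top (or bottom)" cases:

```python
def read_position_signal(value: float) -> str:
    """Map a Position indicator reading to the suggestions described
    in the post. The labels for intermediate values are illustrative."""
    if value >= 1.0:
        return "open long"
    if value <= -1.0:
        return "open short"
    if value == 0.5:
        return "stay long"
    if value == -0.5:
        return "stay short"
    if value == 0.0:
        return "close position"
    # Other values in between: "add to position" or "reaching top/bottom"
    return "add to position / reaching extreme"
```

For example, a reading of 1 is the long entry, 0.5 means hold the long, and 0 closes it.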

The model is totally unbiased, meaning that there are no assumptions about current conditions: it evaluates the market status in total autonomy, and there is no "bullish or bearish calculation". It is free to evaluate, just following its training. Neural networks need to be trained, which means you have to tell them how to interpret historical data. Then they go on their own legs.

Finally, please note: the model correlates many different inputs (on the daily timeframe, they are now 78), ranging from stock market indices around the world to rates, commodities, bonds, currencies, and economic and financial indices. The resulting neural network is huge (many millions of neurons).



Posted by Luca in free, model insights, 0 comments

As you asked: intraday

A reader asked:
I was looking at the details of the Spxbot but I wasn't sure if it would give me the info I'm looking for.
I day trade the SPX and I am looking for a forecast that provides intraday data for the current day. Does your program give that data? I read that your program is 24 bars ahead. What is the time frame for each bar, and do the 'next' 24 bars change as new data comes in, creating a fluid forecast?
Lastly, if I wanted to use your information for weekly trading, how do I use the data for the longer range timeframe?

Sorry, no intraday forecast from the spxbot model: it is set up to provide daily, weekly and monthly time frame analysis. Intraday data is totally another story, with an enormous quantity of noise, many time frames, and the difficulty of collecting good quality data over a very long period; otherwise the forecast is not really worthwhile.
But… I'll just tell you this: last April I opened a CFD account with 1000 euros to have the possibility to experiment and play with the short term (I'm usually a mid to long term investor). After losing about 15% on some experiments, I decided to use the two-hour time frame to carry positions and the 15-minute time frame to get signals for opening/closing positions. The positions are taken on the SPX with the model's daily forecast in mind, so never going against the forecasted trend and profiting from extremes: now the account holds 2218 euros, with quite conservative global capital management (no more than 10% of margin used). I usually do not speak of the returns of the trading activity, and I even dismissed the position record from the website, because I see that such good profits may appear suspicious, as if meant just to hook new customers.
So, the only time frames that are forecasted are daily, weekly and monthly bars, and yes, the forecast does change as data flows in, because the model is designed to be responsive and adaptive.
The weekly forecast is my preferred one: low noise and a large mix of data available. The different time frames work together; even if sometimes the daily and weekly go in different directions, you may get reciprocal confirmation, and both provide entry/exit signals.
In effect, the monthly also produces entry/exit signals, but I use it as a projected path for the long term, not for real trades. Anyway, I'm sure that other subscribers may use the forecasts in different ways, as additional instruments for their trading activity.
Posted by Luca in free, model insights, 0 comments

Model Training Update

from the Training tool, with Stop, Target, Energy and Cyclical analysis


The model needs training. The training tools have been deeply revised. In the beginning, the training was done by hand (!). Then a simple tool interfaced the operations, still leaving space for errors and inaccuracies. Now the training tool has become a complete "position manager". I have in mind to call it R.Virgeel, but it's just data and code, in the end.

The training phase is when knowledge is transferred to the model; the process is nicely complex and full of traps, but obeys one simple rule: "garbage in, garbage out".

Having to organise the positions, my approach was to search for the basic, ultimate tool, the one and only that you need to manage your position, and it turned out to be the Stop. It is a very traditional and well known trading tool, with a lot of variants, and if correctly used, a stop can really embed an almost complete trading system.

If you pair it with the Target, you already have an almost complete position manager. The Target is where the current position is heading, and it has now been doubled: two slightly different calculations mark the area of probable inversion.

from the Training tool, with Stop (red dots), Target (green dots) and Energy analysis (coloured bars)

Then, if we calculate the potential energy that flows through the positions, we can estimate where we are in the current move. This is the Stamina indicator: nothing graphic, just two percentage numbers, for positive and negative positions separately. Stamina is 100% at a position's beginning and 0% at its end, and it degrades as the position goes on.
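Labelling a historical position for training along these lines can be sketched as below. The linear decay from 100% to 0% is an assumption for illustration; the actual decay profile used in the training is not disclosed here:

```python
def stamina_labels(position_len: int) -> list[float]:
    """Assign a stamina label to each bar of a historical position:
    1.0 (100%) at the position's start, 0.0 at its end.

    Linear decay is assumed purely for illustration; the real
    training labels may degrade differently.
    """
    if position_len < 2:
        return [1.0] * position_len
    return [1.0 - i / (position_len - 1) for i in range(position_len)]
```

For a five-bar position this yields 100%, 75%, 50%, 25%, 0%, matching the "degrades as the position goes on" description.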

Finally, the Position indicator is the feedback, with graphic signals at the extremes. It has totally absorbed the Signal indicator, so the latter has been removed.

All these indicators are neurally calculated, based on close data, each day.




Posted by Luca in free, model insights, r.Virgeel