a.i.

Some data about model performance

I’m sure my (few) readers are curious about one thing: how has the system performed? Percentages, percentages… I can’t blame you, even if I’m not very excited by the past performance of any system. Performance percentages let you dream, and that makes them very dangerous 😉
But this is not an automated technical-analysis tool or an algorithmic trading bot.

Here I have an artificial intelligence that reads the state of the market and is trained to recognize tops and bottoms, to evaluate targets for the current move and, finally, to place a protective stop for the position. The model mimics a simple trader’s approach (I prefer to call it an investor’s approach), investing in the S&P 500 index without leverage of any kind.

The model exited the beta phase last February, although it has been under continuous development since then and is on the workbench almost every day.

The first recorded position was opened long on the 12th of February, with the index at 1857. Since then the model has completed five trades, long and short, for a gross total of 434 points earned, a 23.37% performance on the starting value. Some more points are implicit in the current open position, but it is too early to evaluate them. No position has closed negative; the best trade earned 12.49% and the worst 1.10%.
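
For clarity, the headline percentage is simply the points earned divided by the index level at the first entry; a minimal sketch of that arithmetic, using only the figures quoted above:

```python
# Gross performance on the starting index value, using the figures in the post.
points_earned = 434      # total points from the five closed trades
entry_level   = 1857     # S&P 500 level at the first recorded entry (12 February)

gross_return = points_earned / entry_level
print(f"{gross_return:.2%}")   # -> 23.37%
```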

Obviously these are gross figures: real net earnings depend on the trading instrument you choose, on trading costs and, above all, on how trading activity is taxed in your country.


Posted by Luca in a.i., free, performance, 0 comments

The model and COT

You may wonder: does the model contain data from the weekly Commitments of Traders report?

No. The COT data (a report published every Friday by the Commodity Futures Trading Commission that disaggregates the open interest of futures contracts) is not part of the weekly model because, after long experimentation, it turned out to be completely irrelevant. This means that whatever information the COT data could bring to the model is already incorporated into the prices of financial instruments.

If you analyze prices personally, by hand and eye, the COT data is a good help in determining how the market and its principal actors are positioned; it can be a really useful tool for a sound analysis.

But do not forget that here we are using artificial intelligence tools that scan the data for hidden structures and patterns: it is a completely different approach. As you may imagine, I was a bit surprised to discover that COT data is useless for these forecasts. I thought COT provided some non-evident information that could enrich the model’s analysis, but I was wrong.
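
To give an idea of how such irrelevance can show up in practice, here is a hedged sketch of a generic ablation test: train the same kind of network with and without COT-style columns and compare the out-of-sample error. The data, column layout and scikit-learn network are illustrative stand-ins, not the actual spxbot model or dataset.

```python
# Illustrative ablation test: do COT-style columns improve out-of-sample error?
# Everything here (data, sizes, network) is a stand-in, not the spxbot model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
price_features = rng.normal(size=(n, 8))   # stand-in for price-derived inputs
cot_features   = rng.normal(size=(n, 3))   # stand-in for COT-derived inputs
target = price_features @ rng.normal(size=8) + rng.normal(scale=0.1, size=n)

def out_of_sample_error(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    return mean_absolute_error(y_te, net.predict(X_te))

err_prices_only = out_of_sample_error(price_features, target)
err_with_cot    = out_of_sample_error(np.hstack([price_features, cot_features]), target)
print(err_prices_only, err_with_cot)   # nearly equal errors suggest the extra columns add nothing
```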

You may find charts of COT data at http://cotbase.com/ and various other sites; the raw data can be downloaded from http://www.cftc.gov/Marketreports/CommitmentsofTraders/index.htm.


Posted by Luca in a.i., free, 1 comment

Cassandra was hated by her fellow citizens

“Actually, I’m surprised that there aren’t more sites like SPXBOT out there,” he writes to me.

No doubt, it is one of the best compliments I’ve received so far. Yes, it was not easy to build the system: programming, trading experience, and a lot of time were the main ingredients.

Consider that what I do, what my software does, is generally considered impossible. Random walk theory, the experience of traders, and even similar artificial intelligence experiments on financial markets all say that the market is unpredictable – and they are not wrong!

But there is a (hidden) structure in the market; this is what my work demonstrates. There is a structure inside those floating numbers that can be uncovered, step by step, and this can provide valuable support for any trading or investment decision.

The life of Cassandra – the oracle of the temple of Apollo in Troy – was never easy: she foretold various woes and the final destruction of the city. She was hated by many and disliked by the majority: most people do not want to have their free will set aside, and I can easily understand that… now.


Posted by Luca in a.i., free, 0 comments

Don’t get fooled by poets

Yesterday I read this: https://medium.com/@nitin_pande/deep-neural-nets-and-the-purpose-of-life-d3d60a38d108#.l6m5b6bch and I have to say I couldn’t disagree more.

There is no way a biological structure can be compared to a software tool, even if that software is somewhat inspired by what we know of the brain’s structure. And we are not DNNs (deep neural networks), as the author Nitin Pande suggests. Luckily, we are a bit more complex and unknown.

I dissent from these “poetic” views of the instrument, because they create false expectations in the uninformed and a false approach to understanding reality. Neural networks are stupid instruments that can accomplish very complex tasks easily, but this does not make them any less stupid, and above all they can only do what they do because someone programmed them correctly. They are nothing like the positronic brain (see Asimov’s robots) or a self-learning device. Don’t get fooled by this poetic vision: neural networks are completely different. And we, other biological entities, and reality itself are NOT comparable to DNNs. We just try to model working NNs, taking inspiration from our interpretation of reality.

I do not think it is worth doing philosophy starting from false assumptions.


Posted by Luca in a.i., free, 0 comments

About Artificial Stupidity

The coming age of thinking machines is already here. Manufacturers around the world are replacing workers with specialized robots, and a new division of labour is arising. Artificial intelligence (AI) is spreading into many different fields, but always with the same purpose: to obtain more efficient results and profits. A computer (and its mechanical counterpart) can almost always achieve better and faster results, continuously and at a lower cost. The effort to develop “intelligent machines” characterizes the current evolution of the Western industrial world.

If you try to understand what AI is and what its tools are, you find a lot of mathematical writing, very complete and incomprehensible: an exhaustive theory of models and optimization tools that makes the subject look like an esoteric discipline. Using these tools in everyday life is completely different. About 25 years ago, my first approach to neural networks was the purchase of the “Brainmaker” software (it seems to be still available here), and I started playing with it. After a while, it was clear that building a model, a working model, is a trial-and-error process in which you do not need a great mathematical background (I have none), but rather a good knowledge of the process you are trying to describe. A lot of help came from Brainmaker’s manual, which has many practical examples and almost no theory. If you adopt the right tool, you may not even need to know a programming language. I had some programming background, so I bought a neural network library to experiment in more depth.

The neural network model that you create is conceptually identical to a spreadsheet, with rows of data that represent individual experiences and columns of uniform data. Some columns, usually at the far right, represent the value (or values) that you associate with that experience, or more simply the value you want the network to output when that experience shows up again. This is the heart of any AI system: a lot of classified experiences and the ability of the network to associate a proper output value with them.

No theory is able, at the moment, to predict whether a model will produce an output and, above all, a good output. It is a matter of trying, and trying again. You have to shape a database and collect data (more is better), then process the database to shape the model and apply the neural network to it. At last, you feed it new, never-seen-before data and – magic! – you get a diagnosis that is supposed to fit the data as well as possible.
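
A hedged sketch of that workflow, using a generic scikit-learn network rather than the tools actually used here: the rows are the classified experiences, the rightmost column is the value the network should learn to output, and the trained network is then asked for a diagnosis of an unseen row. All column names and numbers are illustrative.

```python
# Generic sketch of the spreadsheet-like workflow described above.
# The data, columns and scikit-learn network are illustrative only;
# they are not the model used on this site.
import numpy as np
from sklearn.neural_network import MLPRegressor

dataset = np.array([
    # feat_1  feat_2  feat_3   target (rightmost column = desired output)
    [  0.12,   1.05,  -0.30,    1.0],
    [ -0.40,   0.98,   0.22,   -1.0],
    [  0.05,   1.10,  -0.15,    1.0],
    [ -0.25,   0.90,   0.35,   -1.0],
])
X = dataset[:, :-1]    # each row is one recorded "experience"
y = dataset[:, -1]     # the value associated with that experience

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)                              # learn from the classified experiences

new_experience = [[0.08, 1.02, -0.25]]     # new, never-seen-before data
print(net.predict(new_experience))         # the "diagnosis" the network proposes
```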

This is the process applied to thinking machines, and it is quite stupid, isn’t it? There is no ability to induce, to create an unexpected reply to the problem, just heavy brute-force pattern recognition. The real intelligence, if any, is in the conception of the model and in its implementation. It is always on the human side, not the machine’s.

Posted by Luca in a.i., free, 0 comments

Monday checkout: the power of the model

On Friday evening the software produced the usual forecast for Monday (yesterday): here you see just the first two bars into the future, plus the actual market data for yesterday (the white line + close dot). Can you see the power of the model?

[Chart: daily S&P 500 forecast, 12 September 2016]

As often happens, especially with fast-moving bars, the future has arrived slightly faster than what was forecasted.

You may think: he was lucky! Sorry, there is no “he”, just number crunching, big data and code at work. No human opinion is involved, so there is no “he” and no luck.

Is the model always correct? No, of course not. Sometimes it gets confused, the view gets misty, or it jumps at bifurcations – the market is not an easy beast and you cannot set yourself against it.

This is what I mean by “know in advance”. 🙂


Posted by Luca in a.i., checkouts, free, 0 comments

Exploring deep learning

“In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.

But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. […]

Now that we finally understand why deep neural networks work so well, mathematicians can get to work exploring the specific mathematical properties that allow them to perform so well.”

https://www.technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/


Posted by Luca in a.i., free, 0 comments

Artificial Intelligence: a view from 1990

If neural networks are such great pattern matchers and can be used for prediction and forecasting, then can they be used to predict the stock market? If so, we can all get rich. Naturally, such thinking is to be expected and someone tried to do it [White, 1988]. He used NLS and feedforward neural networks to predict daily IBM stock prices. He also used back-propagation for training. Unfortunately, the results were disappointing. In some ways, this result could have been expected. After all, neural networks can only process information, make data transformations and detect patterns. They cannot make up something from nothing. Where no information exists, neural networks cannot magically find meaning. Assuming that stock prices are nearly random on a day-to-day basis, then neural networks cannot be expected to predict the next day’s stock price. However, negative results do not prove that the task of predicting stock prices cannot be done.
More realistically, neural networks may prove to be useful if more reasonable problems are worked on. For example, HNC used neural networks to analyze foreign currency trading. Their system was able to discover features in the data. They analyzed information about the Pound Sterling, Japanese Yen and Deutsch Mark. With this neural network system inexperienced traders could make profitable decisions. Others found that feedforward neural networks were substantially better than regression techniques when used for corporate bond rating [Dutta & Shekhar, 1988]. Using their neural network systems, Nestor Company developed a successful automated securities trading program. They correctly classified 75 percent of the patterns which were prescreened for unambiguously identified patterns. This result was good when compared to other automated traders which operate in the range of 50 percent to 60 percent accuracy. Their system was done on a DEC VAX and processed a pattern in about one second. A bond-trading system was developed by the Nestor Company which could make correct recommendations 72 percent of the time. A non-neural network system only gave correct signals 55 percent of the time. While the percent improvements were small, the profitability was significant.
It appears that neural networks can be used to an advantage for trading or market tasks. Possibly one of the secrets to successful market analysis, is to define and restrict the problem to one which is potentially solvable. Neural networks can not identify information which is not available in the original data. However, if there is information in the raw input, even if it is hidden, neural networks seem to be able to pull it out. The successful examples above suggest that neural networks can be used for profitable trading. Their success suggests that more consideration should be given to using neural networks for financial analysis.

Alianna J. Maren, Craig T. Harston, Robert M. Pap, HANDBOOK OF NEURAL COMPUTING APPLICATIONS, Academic Press, Inc., San Diego, California USA, 1990

ISBN 0-12-546090-2
I love doing things that are considered (almost) impossible 😉


Posted by Luca in a.i., free

Bifurcation

Bifurcation is a market behavior that you become aware of when analyzing the market deeply. It means that almost identical conditions may produce opposite exit directions. Today is one of those days. You may say it is easy to foresee, as Yellen speaks and her market manipulation is absolute. Yes, but the model knows nothing of Yellen’s speech or any other mundane event. It is only aware of the prices of markets around the world, and for many days it has been predicting a very special and volatile day for today, one from which we may exit in either direction, due only to very small differences in bias.

What does this tell us about the market? It tells us that the market (the time-price structure) has a hidden, complex, ever-changing structure that some instruments (such as artificial intelligence) can uncover, maybe not completely, but with enough comprehension to be profitable. These instruments are the frontier of market analysis these days, and if you do not approach them you will remain in the herd, which is routinely sheared.
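
A toy illustration of the kind of sensitivity described above: in the chaotic logistic map (a textbook example, nothing to do with the actual spxbot model), two almost identical starting conditions quickly end up in very different places.

```python
# Textbook illustration of sensitive dependence in a chaotic system:
# two nearly identical initial conditions diverge after a handful of steps.
# This is the logistic map, not the spxbot model.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.5000, 0.5001        # "almost identical conditions"
for _ in range(25):
    a, b = logistic(a), logistic(b)

print(round(a, 4), round(b, 4))   # the two trajectories are now far apart
```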


Posted by Luca in a.i., educational, free, model insights, 1 comment

How the past month was forecasted

[Chart: daily S&P 500 forecast, 15 July 2016]

On the 15th of July, this was the model’s forecast on a daily basis.
You may note how precise the projection of the S&P 500’s future behavior was for the month ahead.

Consider that:

  1. the further you move into the future, the less reliable the forecast is;
  2. from day to day the forecast chart adapts to the real market numbers;
  3. the daily forecast is experimentally the weakest: the weekly one has proved much more dependable from an investor’s point of view;
  4. no forecast is perfect, even when it performs as shown here: what the artificial intelligence model does is read something in the market numbers that no human eye can. Just that.


Posted by Luca in a.i., free, model insights, 0 comments

Responsive and adaptive


Forecast charts from May 05, 2016

The forecast charts are the output of the Amodel. All the work, all the code, all the information produced here is put in there, in the chart. The model is designed to be adaptive and responsive, so that it adjusts to (chaotic) ever-changing markets.

Under specific circumstances, let’s say today, it outputs a response pattern that delineates how the market will behave in the future. When the model is in its optimal “viewing” conditions, the pattern flows for some time and the wave of the future flows through the present in synchronicity; then sometimes it completes, and sometimes, after some bars, it changes and starts to develop a different pattern (which may be a totally different view of the market, or just an expansion or contraction of something similar to the previously forecast pattern).

The code that parses the model runs in real time: every time it fires, it is totally unaware of what it did yesterday, or the day before, and so on. It looks at the situation right now and produces the appropriate output.

Under normal working conditions, as the final part of the wave plays out, the Amodel stabilizes its output and you have a few more bars to get confirmation of the imminent top or bottom.


Posted by Luca in a.i., educational, free, model insights, selected primer, 0 comments

Know in advance

What does “know in advance” mean, here at spxbot? I want to give you an example so that you can evaluate it – I know it is difficult to believe that something impossible (like forecasting the market) can be done.

[Chart: daily S&P 500 forecast, 15 July 2016]

On the 15th of July I published the usual daily forecast, and what it was saying was: two weeks of real choppiness ahead before a downward move to come. The choppiness was there in the numbers: a lot of repeating, almost horizontal days with a very light upward bias and strong resistance between 2170 and 2180.

Now, more than two weeks later, we can check it and see how good the prediction was: everything happened as forecasted, and the past two weeks have offered plenty of occasions to close long positions when the price was above 2170.

This is what I mean by “know in advance”. The model has a powerful ability to look into the market’s evolution, and also the unique ability to eliminate the stress connected with trading activity, since that stress is due mainly to an unknown future.

Posted by Luca in a.i., free, 0 comments

The Realm of Imprecision


Shown in the chart: a one-daily-bar forecast compared with the actual close. I get confirmation that there is a widespread one-day lag in the computations. I need to work on it.
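
A hedged sketch of how such a lag could be measured: shift the forecast series against the realized closes and keep the shift with the highest correlation. The data below is synthetic (a random walk built to lag by one bar), not the real model output.

```python
# Illustrative check for a systematic lag between a forecast series and the
# realized closes. Synthetic data only, not the real model output.
import numpy as np

rng = np.random.default_rng(1)
actual   = 2160.0 + np.cumsum(rng.normal(size=60))
forecast = np.roll(actual, 1) + rng.normal(scale=0.2, size=60)   # built to lag by one bar

def best_lag(forecast, actual, max_lag=5):
    """Return the shift (in bars) that best aligns the forecast with the actual series."""
    def corr(lag):
        shifted = np.roll(forecast, -lag)
        return np.corrcoef(shifted[max_lag:-max_lag], actual[max_lag:-max_lag])[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr)

print(best_lag(forecast, actual))   # -> 1 for this synthetic example: one bar of lag
```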

Except for the bars projection, all the other indicators read the real-time status, the now. They do not project future values; they just take a reading at the edge.

But the bars projection goes further: it goes into the future and projects the latest prices into the next bars. Something impossible happens: you see the wave coming in. You see it, in advance.

Since the future bars are calculated by the neural networks, it is impossible to know what really happens inside. The result does not depend on the sequence of experiences you submit to the network, and it does not take into account the noise, the huge amount of noise that lies inside the market. It works as pattern recognition: the model is set up to see, and it recognizes.


Posted by Luca in a.i., free, psychology, selected primer, 0 comments

Traditional analysis is out of touch

[Image: bifurcation diagram]

[I]’ve found and corrected a couple of minor bugs in the Signal and target models (minor logical errors that, under some circumstances, crashed the whole code!), and this has made me think that the Signal model acts similarly to the bifurcation model under fractal theory.

The market is a chaotic, self-adapting structure; it is asymmetric yet regular, and it continuously passes through levels of bifurcation: the side image is quite close to what I am suggesting.

I’m not a mathematician, so I cannot explain the theory behind it, but I can recognize that a similar structure lies inside the market, as the result of the interactions of all its participants (traders, investors, scalpers, big and small money managers, etc.).
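
For readers who want to reproduce the kind of picture referred to above, here is a minimal sketch that generates the data for the textbook logistic-map bifurcation diagram; it is only an analogy for "levels of bifurcation", not the Signal model itself.

```python
# Data for the classic logistic-map bifurcation diagram (r between 3 and 4).
# A textbook analogy, not the Signal model.
import numpy as np

points = []
for r in np.linspace(3.0, 4.0, 400):
    x = 0.5
    for _ in range(500):             # discard the transient
        x = r * x * (1.0 - x)
    for _ in range(100):             # sample the attractor
        x = r * x * (1.0 - x)
        points.append((r, x))

# Scatter-plotting the points (r on the horizontal axis, x on the vertical)
# shows the cascade of period-doubling bifurcations into chaos.
```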

In some way, I think this complexity is what has made today’s market so “technical”, that is, difficult to read and to play. And it is why traditional analysis (fundamental or technical) is getting out of touch.


Posted by Luca in a.i., free