Robustness of systems

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
mark123
Contributor
Posts: 3
Joined: Thu Mar 05, 2009 4:56 pm

Robustness of systems

Post by mark123 » Thu Dec 08, 2011 12:30 pm

Hi, I am trying to develop a new trading system. I have limited experience, but I think my question is quite simple, and I would welcome any advice or comments from a more experienced system trader.
Basically, if I generate a portfolio of assets, I can get much better performance statistics if I "micro" calibrate the specific parameters for each asset rather than for the portfolio as a whole. Is this data mining? Is the robustness of the portfolio compromised, leading to a likely drop in the Sharpe ratio going forward, out of sample?
Thanks.

RedRock
Roundtable Knight
Posts: 941
Joined: Fri Jan 30, 2004 3:54 pm
Location: Chicago

Post by RedRock » Thu Dec 08, 2011 1:16 pm

Yes. Less is more in the future, usually.

stopsareforwimps
Roundtable Knight
Posts: 199
Joined: Sun Oct 10, 2010 1:47 am
Location: Melbourne Australia

Post by stopsareforwimps » Thu Dec 08, 2011 5:30 pm

RedRock wrote:Yes. Less is more in the future, usually.
Useful search terms include "bias variance tradeoff", regularization, overfitting, Bayesian Estimation, Ridge Regression.

In machine learning there are often two levels of fitting of models to data, which requires the data to be split into three sets, only *one* of which is used to fit the model's parameters directly. At the lowest level you optimize the model's parameters. At a higher level you optimize the structure of the model, including the number of active parameters.

Code:

for NAP = 1 to X:                        # NAP = number of active parameters
     for each candidate set of parameter values:
             measure performance of the model with those parameters
     keep the best parameter set for this NAP
keep the best model structure (NAP) overall
Typically you split the data into three sets. One set is used to optimize the low level. The second set is used to optimize the high level. The third set is used to get an idea of what the real performance is likely to be. You might split the data among the sets 50:30:20.
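As a toy illustration of the two-level scheme with a 50:30:20 split, here is a Python sketch in which polynomial degree stands in for the "number of active parameters"; the data and all names are invented for illustration, not anyone's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(scale=0.2, size=200)   # toy "market" data

# 50:30:20 split into train / validation / test
idx = rng.permutation(200)
tr, va, te = idx[:100], idx[100:160], idx[160:]

def mse(deg, fit_idx, eval_idx):
    """Low level: fit polynomial coefficients on one set, score on another."""
    coef = np.polyfit(x[fit_idx], y[fit_idx], deg)
    pred = np.polyval(coef, x[eval_idx])
    return np.mean((pred - y[eval_idx]) ** 2)

# High level: choose the structure (degree, i.e. number of active
# parameters) by validation error; the test set is touched only once.
best_deg = min(range(1, 11), key=lambda d: mse(d, tr, va))
print("chosen degree:", best_deg, "test MSE:", mse(best_deg, tr, te))
```

The key discipline is that the test set plays no part in either optimization level, so its score is an honest estimate.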

It is a bit tricky to split the data. Because markets are correlated it may be best to use time spans for splitting the data. One approach is to break the data up into slabs of months, with say a mean length of 12 months and a standard deviation of 6 months.
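A sketch of that block-wise split, assuming monthly data indexed 0 to n-1; the function names and the 240-month span are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def month_blocks(n_months, mean=12, sd=6):
    """Cut n_months of history into contiguous blocks whose lengths are
    drawn from N(mean, sd), clipped to at least one month."""
    blocks, start = [], 0
    while start < n_months:
        length = max(1, int(round(rng.normal(mean, sd))))
        blocks.append(list(range(start, min(start + length, n_months))))
        start += length
    return blocks

def assign_blocks(blocks, weights=(0.5, 0.3, 0.2)):
    """Assign whole blocks at random to train/validation/test, so that
    correlated adjacent months always land in the same set."""
    train, val, test = [], [], []
    for block in blocks:
        [train, val, test][rng.choice(3, p=weights)].extend(block)
    return train, val, test

train, val, test = assign_blocks(month_blocks(240))   # 20 years of months
```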

You can perhaps squeeze more out of the data by running the whole process many times, with a different random partition of the data each time. The output is a distribution of outcomes. This will give you a sense for how stable your model building process is and how reliable are the predicted performances. Warning: this process is likely to be enlightening but depressing at the same time.
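The repeated-partition idea might be sketched like this; `build_and_test` is a hypothetical placeholder for your whole pipeline (split, fit both levels, score on the test set) and here just returns noise so the loop runs end to end:

```python
import random
import statistics

def build_and_test(seed):
    """Placeholder: re-partition the data with this seed, run both
    optimization levels, and return the test-set performance.
    Replace the body with your actual backtest."""
    random.seed(seed)
    return random.gauss(0.4, 0.2)      # pretend out-of-sample Sharpe

# One run per random partition; the spread is as informative as the mean.
results = [build_and_test(seed) for seed in range(100)]
print("mean:", statistics.mean(results), "sd:", statistics.stdev(results))
```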

Some learning technologies, such as Support Vector Machines, inherently deal with the bias-variance tradeoff. Others have parameters that let you control it. Often the complexity penalty is called 'lambda': effectively you charge a cost or a fee for each additional parameter in your model.
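As a concrete example of such a lambda, here is closed-form ridge regression in Python, where lambda penalizes squared coefficient size; the data is synthetic and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))            # 8 candidate predictors
w_true = np.array([1.5, -2.0, 0, 0, 0, 0, 0, 0], dtype=float)
y = X @ w_true + rng.normal(scale=0.5, size=100)   # only 2 really matter

def ridge(X, y, lam):
    """Closed-form ridge regression: least squares plus a cost of
    lam per unit of squared coefficient size."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

w_free = ridge(X, y, lam=0.0)   # unpenalized: noise leaks into coefficients
w_reg = ridge(X, y, lam=10.0)   # penalized: spurious coefficients shrink
```

Raising `lam` shrinks the coefficient vector as a whole, trading a little bias for less variance; lambda itself would be chosen at the higher optimization level, on the validation set.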

See for example (all this material requires a fair bit of mathematics unfortunately):

http://www.ml-class.org/course/resource ... -materials

or

http://cs229.stanford.edu/materials.html

or

(lectures from above notes)
http://www.youtube.com/watch?v=UzxYlbK2 ... ure=relmfu

or

http://www.youtube.com/user/mathematica ... A0D2E8FFBA

As I mentioned in an earlier posting, the Vapnik Chervonenkis theorems guarantee that your optimized model will be probably close to correct, subject to its bias, and based on certain assumptions.

That is to say, a model's biases - its assumptions and limitations - create a source of error that no amount of data can overcome. On top of that, VC theory says there is random error that can be reduced to any required level given enough data of the right kind.

The caveat about the right kind of data is the killer. The requirement is that the data you have is drawn at random from the target distribution. This means that if the structure of returns is changing, or if your data is not representative, then your results can be way off.

For example let's say you develop a stock market timing model based on US and Australian data for the last 110 years, as I did. These are two of the best-performing stock markets during that time. And it was a time when overall economic growth in the world was very high.

If you then apply this model to the future, it may not work: the average stock market has not performed like those two, and a hypothetical future world of sluggish economic growth would look nothing like the sample period.

mark123
Contributor
Posts: 3
Joined: Thu Mar 05, 2009 4:56 pm

Post by mark123 » Mon Dec 12, 2011 3:48 am

Thanks for the good advice. I will presume less is more, and then hopefully build on that with the comprehensive reply from stopsareforwimps... thanks a lot.

AFJ Garner
Roundtable Knight
Posts: 2040
Joined: Fri Apr 25, 2003 3:33 pm
Location: London

Post by AFJ Garner » Mon Dec 12, 2011 4:25 am

stopsareforwimps wrote: The caveat about the right kind of data is the killer.
In a nutshell that is the real problem and what prompted me to start the thread "Different This Time". No amount of statistical analysis is going to help if we face very different times to those we have faced during the period for which we have data. If we are entering a period of de-leveraging and deflation, data from the past 40 years of inflation and high leverage is likely to bring misleading conclusions if one relies too much on available data in designing a system and portfolio.

I have none of the answers and merely pose the questions. Are commodities and stocks going to rise in value over the next ten years if economies and markets are shrinking? If not, then are trend following systems going to be able to make up for the lack of long profits with short profits?

For much of the period for which we have reliable daily data, shorting has proved of value for limited periods and the risk reward ratio compared to the long side has been unattractive for the period as a whole. The period 1980 to 1985 seems to have been an exception for the futures data which I possess and for the systems I have tested. During that period, unusually, many of the instruments in my portfolio exhibited long, smooth downward trends well suited to capture profits from TF systems. During much of the rest of the period, short profits proved difficult to achieve except during brief periods of high crisis. Profits captured during those periods evaporated as markets recovered.

For many of us, the current year has been toxic and we have faced a trend follower’s nightmare: markets have chopped many systems to pieces as they have moved up and down to the tune of every announcement, every hope and fear in the Euro crisis. No trends equates to losses for many systems: we have seen many records spoilt this year to some degree or another. What happens over the next year depends on whether trends of a duration enabling systems to capture profit will re-emerge. Conference calls with Altis and Transtrend among others have elicited the response “no trend no profit”.

Chris67
Roundtable Knight
Posts: 1046
Joined: Tue Dec 16, 2003 2:12 pm
Location: London

Post by Chris67 » Mon Dec 12, 2011 5:48 am

I think you raise the most valid points in "our world" at the moment, Mr Garner, and I suspect that, like me, you have had many moments of contemplation recently about these issues.
I think you (plural) would be foolish to charge ahead assuming nothing has changed in the world of trend following, as clearly it has. Absolutely, this could be a bad year followed by a great year, with things never really changing because human nature never changes. Or it could be a three-year period of government interventions, money printing, deflation and suspended markets (i.e. can't buy Swiss, can't sell Italy, Spain etc.), and maybe things don't work out too well for the next few years. That doesn't mean TF is dead, but if you are running a business dependent on annual returns, with impatient investors, you have a potential problem. However, we do have the flexibility to trade in the way we see fit to make money. Flexibility, well-timed discretion and quicker profit-taking may provide the answer.

RedRock
Roundtable Knight
Posts: 941
Joined: Fri Jan 30, 2004 3:54 pm
Location: Chicago

Post by RedRock » Mon Dec 12, 2011 11:25 am

Heavy Sigh. The contrarian in me leaps for joy. Yet she too is mortal.

AFJ Garner
Roundtable Knight
Posts: 2040
Joined: Fri Apr 25, 2003 3:33 pm
Location: London

Post by AFJ Garner » Mon Dec 12, 2011 12:21 pm

Heavy sigh indeed. But all I am really saying is "damned if I know"!

stopsareforwimps
Roundtable Knight
Posts: 199
Joined: Sun Oct 10, 2010 1:47 am
Location: Melbourne Australia

Post by stopsareforwimps » Mon Dec 12, 2011 3:10 pm

AFJ Garner wrote:Heavy sigh indeed. But all I am really saying is "damned if I know"!
A few thoughts...

Perhaps there is value in widening our sights a bit.

I was reading a random blog somewhere and it had a graph of extremes of sentiment: the McClellan indicator http://www.zerohedge.com/contributed/yo ... rld-weekly

So I did a search here and there is very little discussion of oscillators. I've never used them myself. Anthony Garner's Turtle+ system does reduce position size when the trend goes a long way, though, which is kind of oscillator-like. This year the S&P has gone nowhere on a net basis, yet the sum of its swings comes to over 3000 points. Fading extremes would likely have been profitable this year.

Years ago I was a value investor ("growth at a reasonable price", at least) with some success but I was turned off by the big draw-downs when I was wrong and the market was right. I was thinking however that valuation could provide a key to working out whether to have a long or short bias in my trading, rather than as a direct buy/sell signal.

Apart from arbitrage and charging people retail fees, the main ways to make money in markets seem to be trend following, and trading against extremes of valuation and sentiment. These seem to be complementary approaches that would work at different times.

I am working on trying to combine these ideas somehow. Another thing I am working on is trying to detect when a market pattern is dying. Maybe a change in the correlation patterns between various asset returns could be a sign.

As an example, I've often read that one sign of a stock market top is when the majority of stocks stop going up and only a few leaders carry the market higher. So most stocks would be negatively correlated with the market, when they were positively correlated earlier on.
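One crude way to quantify that breadth idea, sketched here with invented numbers rather than anyone's actual indicator, is the fraction of stock-day returns that share the index's sign:

```python
import numpy as np

def sign_breadth(stock_returns, index_returns):
    """Fraction of (day, stock) observations where a stock's return has
    the same sign as the index's. stock_returns is (days, stocks),
    index_returns is (days,). Falling breadth while the index still
    rises would be the narrow-leadership warning described above."""
    same = np.sign(stock_returns) == np.sign(index_returns)[:, None]
    return same.mean()

# Tiny illustration: stock 0 tracks the index, stock 1 moves against it.
idx = np.array([0.01, 0.02, -0.01])
stocks = np.array([[0.02, -0.01],
                   [0.01, -0.02],
                   [-0.01, 0.01]])
print(sign_breadth(stocks, idx))   # 0.5: half the observations agree
```

In practice one would compute this over a rolling window and watch for it to fall while the index itself keeps rising.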

stopsareforwimps
Roundtable Knight
Posts: 199
Joined: Sun Oct 10, 2010 1:47 am
Location: Melbourne Australia

Post by stopsareforwimps » Mon Dec 12, 2011 5:32 pm

stopsareforwimps wrote:So most stocks would be negatively correlated with the market, when they were positively correlated earlier on.
Correction: they may not be negatively correlated; rather, their direction of movement would differ from the market's.
