Stationarity

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
MCT
Roundtable Knight
Posts: 102
Joined: Fri May 16, 2003 7:27 pm

Stationarity

Post by MCT » Tue Jan 20, 2004 3:20 pm

Given this is a forum mostly populated by pure mechanical system traders, I still haven't read posts that address stationarity - the statistical issue behind ever-changing cycles. Or maybe I missed it.

A number series is stationary if the process that generated the series has been constant. Clifford Sherry, in his excellent text "The Mathematics of Technical Analysis"

forex_kid
Senior Member
Posts: 47
Joined: Thu Apr 17, 2003 5:30 pm
Location: Sacramento, CA

Post by forex_kid » Tue Jan 20, 2004 3:48 pm

Good post.

I've been doing some thinking recently about dynamic re-optimization. It seems that I've seen plenty of references to mechanical traders tweaking their systems slowly over time as the sweet spot for their systems drifts.

In his recent piece on optimization:

http://www.tradingblox.com/articles/opt ... aradox.htm

c.f. talks about the reality of drift. It is going to happen; the markets are not fixed like a roulette wheel. What I would like to see more of is a discussion of how people approach developing a logic for handling this tweaking systematically.


Cheers,

Morgan

MCT
Roundtable Knight
Posts: 102
Joined: Fri May 16, 2003 7:27 pm

Post by MCT » Tue Jan 20, 2004 9:27 pm

Did I ever miss it… :oops:

c.f. wrote:
"The main problem with using historical testing as a means of system analysis is that the future will never be exactly like the past. To the extent that a system captures its profits from the effects of unchanging human behavioral characteristics that reflect themselves in the market, the past offers a reasonable "approximation" of the future, but never an exact one."
First, I'd like to say your article was the best post I've read on this forum; I enjoyed reading it.

But if you would allow me to disagree, historical testing of the distant past is nowhere near a reasonable approximation of the future.

The distribution of price change from years past is less likely to repeat than the distribution of price change from the recent past.

This is one reason I believe in utilizing moving, real-time historical results of the recent past to optimize my systems. If the current distribution of price changes varies significantly from its norm of the recent past, it will be time for me to reoptimize my system - that spells big change ahead; it's my best indicator :D . I optimize my systems using a look-back period of about thirteen months on average. [That's all the sample size that I need.]

There is an instance where historical testing is very important.
1) Identify a past period of market history where the market was following the same RULES as today. 2) We can then see if the system that worked in the recent look-back period also worked back then. The idea is that markets exhibiting stationarity in the distribution of price changes follow the same RULES - regardless of time period or time frame. If our strategy passes this test, it should increase our confidence in the strategy. It is a good test of robustness.

Quantitative risk management is always best left to a computer; it is better at it. I work on the qualitative, discretionary aspects of my system to increase the probability of catching a good trend by analyzing patterns, sentiment, momentum, and volume. A simple breakout entry system cannot tell you the probability of the outcome of a single trade. This might sound contradictory; nonetheless, as many have said before, process matters much more than results, but single-trade results still matter somewhat. Let me explain: I generally augment my semi-discretionary entries with statistical stationarity analysis to better judge the probabilities. Drawdowns are huge in most mechanical systems for this simple reason - low-probability entry methods. Some trades are not worth the risk and should never be taken, even if a breakout occurs.

If one can design a system that has a low average loss and a high winning percentage with a high payoff [it is very difficult, but yes, it is possible], we are looking at a system that should be very drawdown resistant. However, in spite of what some portfolio strategists would have us believe, low-percentage systems actually increase the likelihood of a major drawdown exponentially rather than decreasing it. If, by careful design, our high-percentage system is also capable of performing at an optimal level under low-win-rate conditions, then it could be said we've got optimal synergy between risk and reward and all the parts of our system that constitute our model... much easier said than done :(
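The claim that low win rates raise drawdown risk disproportionately can be checked with a little arithmetic. Here is a minimal sketch, not from the post (the function name and the trade counts are illustrative), that computes the probability of a losing streak of a given length over a series of independent trades; long losing streaks are what typically drive a major drawdown:

```python
def prob_loss_streak(n_trades, win_rate, streak):
    """Probability of at least one run of `streak` consecutive losses
    in `n_trades` independent trades with the given win rate."""
    q = 1.0 - win_rate
    # state[j]: probability we currently end on j consecutive losses
    # and have NOT yet completed a full streak
    state = [1.0] + [0.0] * (streak - 1)
    hit = 0.0                           # probability the streak has occurred
    for _ in range(n_trades):
        new = [0.0] * streak
        for j, pj in enumerate(state):
            new[0] += pj * win_rate     # a win resets the loss count
            if j + 1 == streak:
                hit += pj * q           # this loss completes the streak
            else:
                new[j + 1] += pj * q
        state = new
    return hit
```

Dropping the win rate makes a long losing streak, and hence the drawdown it produces, dramatically more likely over the same number of trades, which is the "exponential" effect described above.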

It has always bewildered me why it is considered important to build and optimize systems with tons of historical data while giving little thought to the time frame over which the system might be valid. In my humble opinion, it is critical to re-optimize systems according to the characteristics of the recent distribution of price changes. Only the recent time series has any predictive value. If you are trading a system that has been optimized on long-term historical data, you have no way of knowing if the current distribution of price changes matches those from the past. Optimal systems trading would require that you identify the RULES of the market NOW and base your optimization on that. The only way to tell if there has been a change in the distribution of price changes is by testing and seeing an actual shift in the distribution. The next trade might just be when the distribution changes. This element of uncertainty is what Victor, or LTCM, weren't able to deal with. Talk about sitting ducks :(
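The "actual shift in the distribution of price changes" described here can be screened for with a two-sample Kolmogorov-Smirnov test comparing a recent window of price changes against an older one. A rough pure-Python sketch, assuming daily price changes as input (the function names and the use of the asymptotic critical value are my additions, not anything from the post):

```python
import math

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two samples (0 = identical, 1 = disjoint)."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:              # ties are rare for real price changes
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def distribution_shifted(recent, past, alpha=0.05):
    """True if 'same distribution' is rejected at significance level alpha,
    using the asymptotic KS critical value."""
    c = math.sqrt(-0.5 * math.log(alpha / 2.0))   # ~1.358 for alpha = 0.05
    n, m = len(recent), len(past)
    return ks_statistic(recent, past) > c * math.sqrt((n + m) / (n * m))
```

A rejection would be the signal to reoptimize; a non-rejection only says the test could not detect a shift, not that the generator is unchanged.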

I feel mechanical systems do their best when utilized as guides, and as flexible [only on selection and entry/exit] and disciplined [how much?] trading partners that search for the rules that govern markets now.

I do not differentiate between my long term and short term systems or trades … I simply trade those time frames that exhibit the kind of behavior I’m scouring for.

Optimization is only a problem when you fail to take stationarity into account. It is a vastly time-consuming process :( But well worth the effort.

Now, you can fire up VeriTrader 1.5 & your Excel spreadsheet and test away ... :)

Have fun and Good Trading,

MT

cyphrograph
Senior Member
Posts: 39
Joined: Thu Oct 16, 2003 11:38 am

Post by cyphrograph » Wed Jan 21, 2004 4:47 am

Stationarity is one of the most important issues in trading, IMO. You mention roulette as a perfect generator of a stationary process. This is a good example. If there is a 50/50 chance that the current event will continue, you can't apply any successful trading or gambling system, even if the process IS stationary. Now let's assume that the chances of the current event continuing in the future are above 50%, but they're always changing. You can apply a successful trading system to a non-stationary process like this. I would say: sure, markets are non-stationary, probabilities are always changing; however, they're oscillating around some mean values.

In order to judge the stationarity of a process, we have to use large amounts of data - this is what the Central Limit Theorem and the Law of Large Numbers indicate. I've done some research in this field, and the results show that markets are pretty close to a stationary process; patterns repeat themselves over and over and over again, confirming their fractal nature.
I don't use any strictly statistical tools to measure stationarity. I simply measure it by applying trading systems to past data. Then I look at how smooth the equity curve is. If it is smooth and shows similar profits in different periods, that is the best proof to me that the market's nature doesn't change much over time. Commissions and slippage AREN'T included in the tests, simply because I want to analyze price itself, not the trading systems' robustness. Here we go, the first example:

System's performance stats for SP futures in 1-minute intervals. 1 contract held, no compounding, no commissions&slippage, in SP points.

15 Aug '98 to 15 Aug '99
Net Profit 3432; Max DD 41; # trades 46,762

15 Aug '99 to 15 Aug '00
Net Profit 3144; Max DD 52; # trades 47,171

15 Aug '00 to 15 Aug '01
Net Profit 3087; Max DD 55; # trades 47,231

VERY smooth equity curve.
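This smoothness check can be reduced to a simple consistency metric over per-period results. A sketch (the choice of coefficient of variation as the metric is mine, not cyphrograph's):

```python
import math

def profit_consistency(period_profits):
    """Coefficient of variation (std / |mean|) of per-period net profits.
    Lower values mean the system earned similar amounts in each period,
    i.e. the behaviour the system exploits was stable across periods."""
    mean = sum(period_profits) / len(period_profits)
    var = sum((p - mean) ** 2 for p in period_profits) / len(period_profits)
    return math.sqrt(var) / abs(mean)
```

Applied to the three SP years above (3432, 3144, 3087 points), this comes out under 0.05, consistent with the "VERY smooth equity curve" observation.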
Of course, in order to profit from it, we would have to use longer time frames and make fewer trades to overcome commission and slippage costs. With longer time frames we have to deal with two issues:
1) markets can be more non-stationary in the longer time frames,
2) we have less data and fewer trades - this means problems with drawing statistically significant conclusions.
In the next post I'll present a system that trades on 15-min bars.

cyphrograph
Senior Member
Posts: 39
Joined: Thu Oct 16, 2003 11:38 am

Post by cyphrograph » Wed Jan 21, 2004 5:45 am

The system trades on 15-min bars. Again, commissions and slippage are not included. Look at the attached equity curve. The system made little profit for the first 4 years, up to 13 Nov 1997. From that date on - stable returns. There were 1,216 trades over the whole 10-year period; from 13 Nov '97 to 17 Oct '03 - 676 trades. Conclusions? The SP market changed from '98 on. It is probably connected with many important events of a fundamental nature, like:

1) change in SP point multiplier from $500 to $250
2) e-mini SP contracts kick-off at globex
3) increasing number of online traders
4) lower commissions > more frequent trading > more day traders > more volatile markets
5) more volatile economic situation all over the world (stock market bubble burst in 00, 9/11, war with Iraq).

The point is markets can change their nature due to some recognizable events.
Attachments
SP_system_10_years.jpg

Dutchtrader
Roundtable Fellow
Posts: 58
Joined: Wed Apr 30, 2003 4:35 pm
Location: Netherlands

Post by Dutchtrader » Wed Jan 21, 2004 3:25 pm

MCT wrote:

The distribution of price change from years past is less likely to repeat than the distribution of price change from the recent past.


Just curious: Is this logic, is this a fact or is this an assumption?

I like this topic very much, so I am trying to understand it more completely.

Thanks,

Marc

MCT
Roundtable Knight
Posts: 102
Joined: Fri May 16, 2003 7:27 pm

Post by MCT » Wed Jan 21, 2004 5:18 pm

Dutchtrader wrote:
MCT wrote:

The distribution of price change from years past is less likely to repeat than the distribution of price change from the recent past.

Just curious: Is this logic, is this a fact or is this an assumption?

1) Historical back-testing is akin to back-fitting experience. In that regard, our experiences of the last two-to-three years are more likely to repeat than our experience of 20 or 30 years ago.
2) There is no system that will work every time, all the time.
3) An unguarded study of quantitative methods will rob you of your insight. Never study a theory or perform simulations "before"

Forum Mgmnt
Roundtable Knight
Posts: 1842
Joined: Tue Apr 15, 2003 11:02 am
Contact:

Post by Forum Mgmnt » Wed Jan 21, 2004 6:49 pm

The distribution of price change from years past is less likely to repeat than the distribution of price change from the recent past.
My opinion is that over the very short term this statement is probably true, but over the next year or several years, I disagree.

In fact, I've seen plenty of evidence in my analysis of history that the recent past is not a very good predictor of what you might see a year to three years out.

- Forum Mgmnt

verec
Roundtable Knight
Posts: 162
Joined: Mon Jun 09, 2003 7:04 pm
Location: London, UK
Contact:

Post by verec » Wed Jan 21, 2004 7:02 pm

May I suggest Laws of Form by George Spencer-Brown?

A foray into multi-valued logic, where things that are neither true nor false can also be non-applicable or imaginary. Apparently, after Spencer-Brown pioneered the field some 40+ years ago, it expanded into a much wider realm, but that book is really fascinating: he shows you that being can jump out of the void with the mere tool of a distinction...

He is also able to get rid of all self-referential paradoxes (i.e. "This sentence is false"), as well as the theory of types that Russell and Whitehead had to introduce in Principia Mathematica a good century ago and which had stayed unchallenged for 50+ years! Even Gödel's theorem becomes somewhat weakened by Spencer-Brown's work.

Really, really a good read (though challenging at times).

rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric » Thu Jul 05, 2007 11:03 am

I have raised this thread from the dead because it is one of the best on this forum{well, up until the "statement 11 paradox" at least :) }


my belief is that generalising about recent vs. past history is misleading.

there are some competing things at play:
~1. something has deteriorated in performance to the point that you change what you do; at that point it starts working again and you feel bad.
~2. something has deteriorated in performance to the point that taking up a modified/different strategy works better, which is then validated.
~3. something you have not seen before tears you a new arse before you are able to detect it through backward-looking performance measures and respond, i.e. you don't have enough time/data to pick up the malignant change in "the generator" before your drawdown limits are triggered and you have stopped trading.
~4. something deteriorates and you stick with it out of determination to avoid scenario (1), and it just gets suckier and suckier. eventually you quit with nothing left.

the devil is in the detail. Cursed Nonstationarity... :evil:

While it would seem that it is a case of damned if you do and damned if you don't, i am just a hair more optimistic than that:

# For the reason of (3), which has happened to me before, my drawdown limits are set with the express goal of leaving me with enough money to have another go with something new. This is more important to me than avoiding a (1) scenario.
# (3) is realistically quite hard to avoid in its worst manifestations. Black swans et al.
# A (1) scenario is annoying but recoverable from, unless you do it too often. Better to be measured in one's modification, although not so ponderous as to allow (4).
# Scenario (2) has happened to me a few times, therefore I am happy to be prudent and run the risk of having a (1) every now and then, because I believe that the benefits of (2) outweigh the negatives of (1), with appropriately selected reoptimization and backtest windows.
# Different strategies and different portfolios/time horizons require different rolling reoptimization/backtest windows. e.g. for an intraday strat, a 3-5 year window reoptimized every 6 months seems reasonable, but for a very long term trend following system that trades a large portfolio infrequently, I would perhaps think of a 10-20 year backtest with 5-10 year gaps between reoptimizations. or maybe not... :roll: :)
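These rolling reoptimization/backtest windows amount to a walk-forward scheme. A minimal sketch (window sizes in bars; the function name and parameters are illustrative, not from the post):

```python
def walk_forward(n_bars, train_bars, test_bars):
    """Yield (train, test) slice pairs that roll forward through the data.
    Each train window is used to reoptimize; the following test window
    is traded out-of-sample before the next reoptimization."""
    start = 0
    while start + train_bars + test_bars <= n_bars:
        yield (slice(start, start + train_bars),
               slice(start + train_bars, start + train_bars + test_bars))
        start += test_bars
```

For daily bars, the intraday suggestion of roughly a 4-year window reoptimized every 6 months would map to something like walk_forward(len(data), 4 * 252, 126).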



blessed nonstationarity 8) . If the markets weren't a giant El Farol problem, then none of us would be able to make a living as everything would eventually get competed away. I thank NONstationarity for that small recompense at least.

Rich
