## Seykota's risk management web page - Lake Ratio description

Discussions about Money Management and Risk Control.
edward kim
Roundtable Knight
Posts: 344
Joined: Sun Apr 20, 2003 2:42 pm
Location: Silicon Valley / San Jose, CA USA
Contact:

### Seykota's risk management web page - Lake Ratio description

http://www.seykota.com/tribe/risk/index.htm

Ed Seykota wrote a small piece on risk management, which is an extension of the coin flip risk model he worked on with Dave Druz. He also covers other topics such as portfolio selection, the Lake Ratio, and trading psychology, in brief.

Talking about Seykota might be a touchy topic in this forum, so I hope everyone looks at the page for its content and educational value, and nothing else.

Edward

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near
I don't think it's really risky --- unless I don't see any more of your posts.

Have you worked through his archives? When I do, my head just hurts, but I'm curious to know if you've found any other good information, as I had expected it might be there.

Waiting to see if you post again,

John

Mark Johnson
Roundtable Knight
Posts: 122
Joined: Thu Apr 17, 2003 9:49 am

### Seykota's Lake Ratio

Seykota takes the daily equity values E(i) and calculates the data series P(i), the "peaks", defined as

P(i) = MAX(E(n)) for n=1 to n=i

Then he calculates two numbers, WATER and EARTH:
WATER = SUM( P(j) - E(j) ) for j=1 to ndays
EARTH = SUM( E(k) ) for k=1 to ndays

His Lake Ratio = WATER/EARTH

People who love the MAR measurement are probably not going to like the Lake Ratio. The Lake Ratio includes all drawdowns in its construction, not just the maximum drawdown. If system A has one 30% drawdown and lots of 10% dd's, while system B has one 20% drawdown and lots of 15% dd's, Lake Ratio will prefer A but MAR will prefer B.

Technical note: the definition for P(i) above, taken literally, is an O(N²) algorithm. But a moment's thought reveals a simple way to program it as O(N), by carrying a running maximum.
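Mark's definitions translate directly into a single pass over the equity series; a minimal Python sketch (the six-day equity curve in the example is made up for illustration):

```python
def lake_ratio(equity):
    """Seykota's Lake Ratio in one O(N) pass over daily equity values E(i).

    WATER is the total area between the running peak P(i) and the equity
    curve; EARTH is the total area under the equity curve.
    """
    peak = float("-inf")
    water = 0.0
    earth = 0.0
    for e in equity:
        peak = max(peak, e)   # running maximum replaces the O(N^2) inner MAX
        water += peak - e
        earth += e
    return water / earth

# Made-up equity curve: peaks are 100, 110, 110, 120, 120, 130,
# so WATER = 5 + 5 = 10 and EARTH = 680.
ratio = lake_ratio([100, 110, 105, 120, 115, 130])
```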

edward kim
Roundtable Knight
Posts: 344
Joined: Sun Apr 20, 2003 2:42 pm
Location: Silicon Valley / San Jose, CA USA
Contact:

### Re: Seykota's Lake Ratio

Mark_Johnson wrote:People who love the MAR measurement are probably not going to like the Lake Ratio. The Lake Ratio includes all drawdowns in its construction, not just the maximum drawdown. If system A has one 30% drawdown and lots of 10% dd's, while system B has one 20% drawdown and lots of 15% dd's, Lake Ratio will prefer A but MAR will prefer B.
Hi Mark,

Your calculations prompted me to have a flashback about integral calculus and Big O of N for large-scale computer computations!

I have a follow-up question. If in the 30-year history of a program you had the following results from two systems:

System A
19 drawdowns of 10% each
1 drawdown of 30%

System B
5 drawdowns of 15% each
15 drawdowns of 20% each

According to MAR, System B is still better (is that correct?). Should the MAR ratio also include the number of times the max or near-max drawdown has been hit? Or is there another way to resolve this? I would think that if I only got hit (with a large but not life-threatening drawdown) once in 30 years, that might be okay. Let me know, Mark, if I'm missing something.

I read the entire archive, John, because I wanted to understand Seykota's application of The Process. My head also hurt at first, but there were many things I got out of it. How and whether others benefit, I don't know.

Edward

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near
Edward,

I think that what you are illustrating is the need for more than one measure.

Ed Seykota says in his article:
Since the most severe draw down problems (loss of confidence by investors and managers) occur during these "outlier" events, VaR does not really address or even predict the very scenarios it purports to remedy.
Similarly, his Lake Ratio doesn't cover it either. The only way I know to deal with these events is to look at the MAR and apply a multiplier for a likely worse event in the future.

But the example you give shows that MAR alone is limited. If both of your examples had had the same MAR, then the one with the lower variability (say, measured with the Lake Ratio, or with a standard-deviation measure like the Sharpe ratio or a downside-only Sharpe) is to be preferred. My reasoning for this is that the higher the variability, the higher the chance of an event exceeding the observed maxDD occurring in the next period. I'd have to look at the statistics for this, but I'm pretty sure that I could prove it.

I will try to prove this with stats and see if I can come up with some "universal" measure that takes into account both the maxDD and the likelihood of seeing future events that exceed it. If anyone already has such a measure, though (Mark?), then please save me the effort.

I'll also go and reread c.f.'s paper.

John

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near

### Kiwi Ratio

OK ... here is a first cut with insufficient knowledge

Assumptions:

0) This system is robust and is not going to fail because, say, the S&P volatility gets too low. If this happens then all estimations of future drawdowns become irrelevant. (Thanks to Gary Fritz on the Omega user group.)

1) What matters is the worst case drawdown as this is the cause of strategy/fund abandonment. So MAR is on the right track.
2) For a large number of trades the historical maxDD is indicative of the likely future drawdown ... but
3) Assuming the return figures approximate a normal distribution, an estimate covering roughly 99.9% of drawdowns can be calculated as the mean return minus 3 times the standard deviation of the returns. The period used should be the one that produces the worst case falling within the likely drawdown periods.

So the MaxDD used in the MAR ratio should be the greater of 2) or 3). This takes into account the likelihood that highly variable return streams, such as Eck's System B, are very likely to have a future drawdown exceeding the historical MaxDD.

Example:
System return =50% pa
Max Historical DD = 20%
The Std Dev of returns was evaluated over periods from 1 month to 20 months, as this is a long-term system and the historical longest drawdown was 10 months. The mean return minus 3 times the Std Dev of returns with a 6-month period gave the worst figure, at 30%.

So Kiwi Ratio = Modified MAR = 50/30 = 1.7, not the 2.5 that would normally be measured.

A lower volatility in returns might have resulted in a R-3SD calculation below the historical MaxDD.
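The modified-MAR arithmetic above can be sketched in a few lines. The mean and standard deviation used below are hypothetical values chosen only to reproduce the 30% figure in the worked example:

```python
def modified_mar(annual_return, hist_max_dd, mean_ret, std_ret):
    """John's 'Kiwi Ratio' sketch: annual return divided by the greater
    of the historical max drawdown and a statistical 3-sigma drawdown
    estimate (the loss implied by mean return minus 3 std deviations)."""
    est_dd = max(0.0, 3 * std_ret - mean_ret)   # loss at mean - 3*sigma
    worst_dd = max(hist_max_dd, est_dd)
    return annual_return / worst_dd

# 50% p.a. return, 20% historical MaxDD, and (hypothetical) 6-month
# return stats whose mean - 3*sigma implies a 30% loss:
kiwi = modified_mar(0.50, 0.20, mean_ret=0.15, std_ret=0.15)  # ~1.67, not 2.5
```

When volatility is low enough that the 3-sigma estimate falls below the historical MaxDD, the ratio collapses back to the ordinary MAR.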

For a day trading system the periods evaluated might have varied from 3 days to 6 weeks.

As this is a statistical estimate at the 99.9% level, we can expect a worse result in 0.1% of cases (I need to think about this when I haven't had a couple of wines).

All criticism and ideas welcome.

John

Ted Annemann
Roundtable Knight
Posts: 118
Joined: Tue Apr 15, 2003 7:44 pm
Location: Arizona

### Ratio kiwi

The reason there are so many different ways of measuring the goodness of a trading system is that so many traders want so many different things. Ask yourself: why did Managed Accounts Review magazine invent the MAR Ratio when the Sharpe Ratio was already available? Why did Frank Sortino invent the Sortino Ratio when MAR and Sharpe were already available? Why does Mark Johnson calculate and print a half dozen different goodness figures? Why did g.c. invent the g.c. Ratio? Why did Seykota invent the Lake Ratio? And now, why did namecloak "Kiwi" invent the Kiwi Ratio?

It's because no two people can agree on exactly what they want. So each trader figures out for himself exactly what he wants, then uses or invents a math formula to compute it.

One of the things that's so intimidating and frightening to new traders is that there's no one single "best" way to do anything. Newbies want to be handed a universal set of rules: Do These Things And Make Big Profits. But alas, a system that fits one trader doesn't fit another. A bankroll for one trader is unavailable for another; a risk that frightens one trader to death is calmly accepted by another; ad infinitum. We can add to this list: what one trader seeks to maximize is different from what another trader seeks to maximize. Therefore one trader may prefer the Kiwi Ratio, another may like the Limey Ratio, another the Yank Ratio, another the Samba Ratio, and so forth.

The c.f. solution to this problem is simple and elegant: do your own tests and make up your own mind. For those who can't resist the creative impulse to thumb Nobel Prize winner William Sharpe in the eye by inventing a brand-new Ratio: do your own tests and make up your own mind. You won't be able to convince other traders to use your Ratio unless you use it yourself. So do some tests and show it at work.

There's some really good news: a nice big fat equity curve of a real system has been posted right here, to aid in your testing. Mark Johnson posted 276 months (23 years!) of equity curve for a system named Thirteen, at this URL: viewtopic.php?p=844&highlight=#844 . The public availability of this data lets any number of inventors test their Goodness Figures on the same underlying equity curve.

So, fire up your spreadsheet or your Visual Basic or Java or whatever you use to test your ideas, and test away. When presenting results be sure to show a couple of your competitors (Sharpe, MAR, etc.) so as to conclusively demonstrate the vast preferability of your solution. Let the testing begin!

TedA

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near
Ted,

Thanks for the response, and for the reference to Mark's figures - I will use them to build a couple of examples. I could do it with a few other systems as well, but those would just be examples to test whether the theory seems to have relevance. I'm quite interested in the theoretical underpinnings, as I don't like things that work in practice but don't have a good rationale.

I partially agree with your reasoning for the "Kiwi Ratio" (the only non-pejorative among your choices, by the way). We're proud to name ourselves and our ratios after a little flightless brown bird that spends its nights searching the leaves and muck for tasty little insects. But you didn't get the reason for the search entirely right. I have been happy to look at worst case and equity smoothness in my own choices, but on reading Ed's and c.f.'s papers and looking at Eck's question, it seemed that it should be possible to combine MAR with a statistical prediction that would indicate if the MAR was clearly too low (possibly because of curve fitting). That's the insect I'm looking for tonight. Even after finding it, I will still use the Sharpe ratio for smoothness.

A modified MAR should match Ed Seykota's stated objective for a ratio: that it reflect the likely most severe drawdown. So the obvious approach was to take the worst-case historical drawdown or the worst case predicted by the distribution of returns, whichever is greater.

My choice for the worst case was 3 standard deviations off the mean rate of return (because I believe the returns will be normally distributed even if the trade results are a truncated normal distribution because of stops). I'd be interested in criticism of that choice, as my statistics is not good enough.

Also, I deliberately didn't use Monte Carlo analysis to determine a "worst case". This is partly because I don't believe that trade results are independent of order. It's also tough to search for a simple measure that would take into account the difference between volatile returns with a high Lake Ratio vs. smoother (better Sharpe ratio) returns with a low Lake Ratio but possibly a higher MAR. Possibly you need to use both ratios and intuit between them, but it seems that the underlying Sharpe-ratio assumptions could drive a modified MAR. Comments and criticism invited.

John

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near
OK. I took the simple approach to analysis to see what turned up. I took Mark's published data, imported it into Excel (spreadsheet available by request), and constructed columns to provide cumulative return, maximum return, and drawdown from the maximum.

These were then filtered to remove intermediate months so that I had end-of-month, end-of-six-month, and end-of-year returns. From these I extracted the mean, standard deviation, and maximum drawdown (end of period).
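Those spreadsheet columns can be sketched in Python. Given a list of period returns (monthly, 6-monthly, or annual), this mirrors the cumulative-equity, running-maximum, and drawdown-from-maximum columns and extracts the same three statistics:

```python
import statistics

def period_stats(returns):
    """Mean, sample std dev, and end-of-period max drawdown for a list
    of period returns -- mirroring the spreadsheet columns: cumulative
    equity, running maximum of equity, and drawdown from that maximum."""
    equity = 1.0
    peak = 1.0
    max_dd = 0.0
    for r in returns:
        equity *= 1.0 + r              # cumulative return column
        peak = max(peak, equity)       # running maximum column
        max_dd = max(max_dd, 1.0 - equity / peak)  # drawdown column
    return statistics.mean(returns), statistics.stdev(returns), max_dd
```

The ExpectDD figure is then the larger of `max_dd` and the loss implied by `mean - 3 * std`, as in the table below.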

The results were:

Monthly Data
Mean 4.66%
Std Dev 10.65%
MaxDD 50.14%
ExpectDD 50.14% (as mean - 3 StdDev implies only a 27.29% loss, which is less than the MaxDD)

6 Monthly Data
Mean 31.07%
Std Dev 30.30%
MaxDD 49.53%
ExpectDD 60.44%

Annual Data
Mean 72.13%
Std Dev 65.51%
MaxDD 21.06%
ExpectDD 124.41%

So what is this telling me? It seems to be telling me that when the sample size gets too small, a prediction based on standard deviations of the data (be it the Sharpe Ratio or this attempt to improve on MAR) gets too large -- or does this just mean that the estimate should be large when the sample size is small?

Alternatively, is it telling me that the correct figure to use is the monthly return, and that the historical maxDD lies outside the 3-standard-deviation figure? Perhaps this system is performing well, and a system with a higher (worse) Lake Ratio might have its historical maxDD within 3 standard deviations.

Critique of results/method/statistics and other thoughts all welcome.

The obvious alternative is to go back to scrambling trade order with Monte Carlo analysis, but I still have this nagging question about the assumption that trades are independent of the previous trades. I'd still like to see if it's possible to "adjust" the max drawdown figure to allow for the smoothness of the equity curve that generated the maxDD.

John

Kiwi
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near
Another thought:

If the normal Monte Carlo sim is flawed for trading because the events may be order dependent ( viewtopic.php?p=222#222 ), then what about this idea: instead of full randomization, keep the sequence order for each commodity being tested the same, but move the sequences relative to each other.

So if you were testing a portfolio of commodities C & S with 3 trades each, the possible outcomes would come from the sequences:

C1 C2 C3 * * * C1 C2 C3 * * * C1 C2 C3
S1 S2 S3 * * * S3 S1 S2 * * * S2 S3 S1

If there were multiple systems, then they would scramble in the same way (order preserved for each commodity/system pair).

The most obvious flaw to me is that if you had US and TY, the commodities are highly correlated; but as Mark Johnson pointed out, the system/commodity pair might or might not be highly correlated.

Any views? (Ted, I don't have the wherewithal to test this at present, so I'm looking for opinions and playing with the idea, not "you should all trade this way" absolutes.) Would this reduce the range of tests too far? Would it still let the scrambling expose accidental curve fitting of the equity curves when testing systems with their money management?
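One way to read "move the sequences relative to each other" is to hold one market's trade sequence fixed and cyclically rotate each of the others through every possible offset, matching the three-column example above. A sketch under that reading (market names and trade labels are just placeholders):

```python
import itertools

def rotated_sequences(per_market_trades):
    """Yield combined trade schedules: the first market's sequence stays
    fixed, and every other market's sequence is cyclically rotated by
    each possible offset, preserving within-market trade order."""
    markets = list(per_market_trades)
    first, rest = markets[0], markets[1:]
    for offsets in itertools.product(*(range(len(per_market_trades[m]))
                                       for m in rest)):
        schedule = {first: list(per_market_trades[first])}
        for m, k in zip(rest, offsets):
            seq = per_market_trades[m]
            schedule[m] = seq[k:] + seq[:k]   # cyclic rotation by k
        yield schedule

# Two markets with 3 trades each -> 3 relative alignments, as in the
# example sequences above.
trades = {"C": ["C1", "C2", "C3"], "S": ["S1", "S2", "S3"]}
schedules = list(rotated_sequences(trades))
```

Unlike full randomization, every schedule keeps each market's trades in their historical order, so any order dependence within a market survives the scrambling.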

John

MCT
Roundtable Knight
Posts: 102
Joined: Fri May 16, 2003 7:27 pm
I personally feel such calculations and efforts are futile.
MT
Last edited by MCT on Wed Sep 17, 2003 8:09 pm, edited 1 time in total.

Forum Mgmnt
Roundtable Knight
Posts: 1842
Joined: Tue Apr 15, 2003 11:02 am
Contact:
What to do... what to do...

Yes, this is a messy business with no clearly defined right answers to many of the questions we most want to have answered.

Some of the problems:
• We know that the future is not exactly like the past. Yet, we know of no other basis for an objective determination of the relative merits of one approach versus another.

RESULT: Sometimes the future brings surprises which were not possible to anticipate using historical analysis.
• We know that trading systems and market behavior do not behave according to a normal distribution, yet that is the basis for the vast majority of our statistical analysis.

RESULT: Any relationship between estimated statistical probabilities (i.e. that event E has X% probability of occurring) and reality is coincidental. Still, the trends and ideas indicated by these statistical formulas are valuable. Thus, we can say that something that represents 4 sigma has less probability of occurring than 3 sigma, but not precisely how much less.

For example, a trend that has moved 8 ATR probably has a greater probability of moving an additional 1 ATR to 9 ATR than a trend that is at 3 ATR has of moving to 4 ATR. Normal-distribution-based analysis of probabilities based on sample size would not indicate this.

This is not just a fat-tail problem. The actual shape of the curve does not appear to be a simple bell with wider tails. It appears from empirical analysis that once an event reaches a certain threshold, all bets are off and the curve itself changes.

We're a bit like a pollster taking a poll from a non-random sample of a population by polling as he walks from one town to another. As time passes and he moves from town to town, he finds that while his previous observations are useful, the non-representativeness of his previous polls limits his ability to predict what the next town will bring.

The theories developed talking to people in the rich conservative neighborhoods make for much surprise when he encounters their poor liberal neighbors.

Thus the stock trading systems developed in the late '90s don't seem to hold up so well over the last few years.

This issue was also the primary fallacy of Long-Term Capital Management.
• We know that events in markets are not independent because the actors in those events are people with memory and psychological responses rather than mechanical and purely rational ones.

RESULT: Some of the useful tools like Monte Carlo analysis don't give an accurate picture of the probability of worse outcomes. Still, they do provide useful information since there is some level of independence in the outcomes of a trading system.
While this is by no means an exhaustive list of the gaps between our hard-theory and reality, it serves to illustrate the incomplete, messy, and often dangerous environment in which we choose to live as traders.

This is also the basis for the opportunity. If there were clearly "right" answers that actually worked all the time, there would not be any opportunity for traders like ourselves to make money.

So the essence of trading successfully is the search for a path to take you through this uncertainty; one based on reason, but one that reflects a mature understanding of the risks that come from the inadequacies of our science.
Last edited by Forum Mgmnt on Wed May 21, 2003 10:50 pm, edited 1 time in total.

John Duprey
Contributor
Posts: 1
Joined: Sun May 18, 2003 10:32 pm

### Response to c.f.

Forum Mgmnt wrote:What to do... what to do...

Yes, this is a messy business with no clearly defined right answers to many of the questions we most want to have answered.

Some of the problems:

We know that the future is not exactly like the past. Yet, we know of no other basis for an objective determination of the relative merits of one approach versus another.
Hmmm. The future is unknowable and unpredictable. There's nothing like what will happen tomorrow, because it hasn't happened yet. Stop trying. Be comfortable with, and accepting of, not knowing. Keep bets small, diversify, and don't worry.
RESULT: Sometimes the future brings surprises which were not possible to anticipate using historical analysis.
Wow, everything that happens to me tomorrow is new to me. Every day is new and unfolding. Every event that occurs in the market tomorrow has never occurred before; it possesses its own unique new-dayness.
We know that trading systems and market behavior do not behave according to a normal distribution, yet that is the basis for the vast majority of our statistical analysis.

RESULT: Any relationship between estimated statistical probabilities (i.e. that event E has X% probability of occurring) and reality is coincidental. Still, the trends and ideas indicated by these statistical formulas are valuable. Thus, we can say that something that represents 4 sigma has less probability of occurring than 3 sigma, but not precisely how much less.

For example, a trend that has moved 8 ATR probably has a greater probability of moving an additional 1 ATR to 9 ATR than a trend that is at 3 ATR has of moving to 4 ATR. Normal-distribution-based analysis of probabilities based on sample size would not indicate this.

This is not just a fat-tail problem. The actual shape of the curve does not appear to be a simple bell with wider tails. It appears from empirical analysis that once an event reaches a certain threshold, all bets are off and the curve itself changes.

We're a bit like a pollster taking a poll from a non-random sample of a population by polling as he walks from one town to another. As time passes and he moves from town to town, he finds that while his previous observations are useful, the non-representativeness of his previous polls limits his ability to predict what the next town will bring.

The theories developed talking to people in the rich conservative neighborhoods make for much surprise when he encounters their poor liberal neighbors.

Thus the stock trading systems developed in the late '90s don't seem to hold up so well over the last few years.
And while the markets broke new ground almost daily, people attempted to explain it using historical analysis, when there was no historical basis for what was happening. Bizarre!!! It's just like the '80s bull market, but on Ex-Lax.
This issue was also the primary fallacy of Long-Term Capital Management.

We know that events in markets are not independent because the actors in those events are people with memory and psychological responses rather than mechanical and purely rational ones.
Well, are the people independent, regardless of whether they have memory or are mechanically rational? Each person in the markets acts individually (ask the IRS), but I could use some friends to draw down with. So the "hard theory" is in trouble, but there are a lot of believers. You could start the "Church of Reason", but they already tried that many hundreds of years ago. To paraphrase Ed Seykota: "it seemed a waste of my M.I.T. education to not try and figure out the markets, to just sit there, and follow the trend".
RESULT: Some of the useful tools like Monte Carlo analysis don't give an accurate picture of the probability of worse outcomes. Still, they do provide useful information since there is some level of independence in the outcomes of a trading system.

While this is by no means an exhaustive list of the gaps between our hard-theory and reality, it serves to illustrate the incomplete, messy, and often dangerous environment in which we choose to live as traders.

This is also the basis for the opportunity. If there were clearly "right" answers that actually worked all the time, there would not be any opportunity for traders like ourselves to make money.

So the essence of trading successfully is the search for a path to take you through this uncertainty; one based on reason, but one that reflects a mature understanding of the risks that come from the inadequacies of our science.
MODERATOR'S NOTE: Fixed up some of the quoted section BBCode.

Christian Smart
Roundtable Fellow
Posts: 50
Joined: Fri Apr 18, 2003 8:53 pm
Location: Huntsville, AL
Contact:
Kiwi,
There are ways to generate correlated random numbers in a Monte Carlo simulation. Two Excel add-ins, @Risk and Crystal Ball, have this capability built in. Or if you're a do-it-yourselfer, you can implement the Iman-Conover or the Lurie-Goldberg algorithms.
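As a lighter-weight illustration of the same goal (this is not the Iman-Conover or Lurie-Goldberg algorithm Christian cites, just the simple two-variable Cholesky construction for correlated normal draws):

```python
import math
import random

def correlated_normals(rho, n, seed=0):
    """Generate n pairs of standard normal draws with correlation rho,
    via the two-variable Cholesky construction:
        y = rho * x + sqrt(1 - rho**2) * z
    where x and z are independent N(0, 1) draws."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        z = rng.gauss(0.0, 1.0)
        out.append((x, rho * x + math.sqrt(1.0 - rho * rho) * z))
    return out
```

Iman-Conover goes further by imposing a target rank correlation on arbitrary marginal distributions, which is why the add-ins implement it rather than this normal-only construction.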

yoyo2000
Roundtable Fellow
Posts: 58
Joined: Fri Jan 30, 2004 10:37 pm
Hi Ted, I'm interested in your words "...Why does Mark Johnson calculate and print a half dozen different goodness figures? Why did g.c. invent the g.c. Ratio? ..." but I couldn't find them myself.
Regards.

nodoodahs
Roundtable Knight
Posts: 218
Joined: Wed Aug 09, 2006 4:01 pm
There is a huge amount of academic literature on performance analysis for trading systems or portfolio management.

The "end game" is to understand enough about them, and about YOU, to choose (or invent) the one that suits your goals the best ...

yoyo2000
Roundtable Fellow
Posts: 58
Joined: Fri Jan 30, 2004 10:37 pm
I searched for both Mark Johnson's figures and the g.c. Ratio on Google, but couldn't get any valuable results.

That's why I posted here for help.

sluggo
Roundtable Knight
Posts: 2986
Joined: Fri Jun 11, 2004 2:50 pm
Just what in the heck is the "g.c. ratio" in this six-year-old posted message? After a lot of browsing through archived images (and I do mean a lot), I think I have discovered the answer: g.c. is an abbreviation for Galt Capital, which once upon a time (pre-May-2003) was talking up a performance measure they invented, called the Galt Ratio. Hunting around, I was only able to find the first page of the paper.
Attachments
topp.pdf
first page only
gcratio.png (26.32 KiB) Viewed 17694 times
Last edited by sluggo on Thu Dec 03, 2009 5:41 pm, edited 2 times in total.

nodoodahs
Roundtable Knight
Posts: 218
Joined: Wed Aug 09, 2006 4:01 pm
yoyo2000 wrote: I searched for both Mark Johnson's figures and the g.c. Ratio on Google, but couldn't get any valuable results.

That's why I posted here for help.
One must LEARN how to effectively use Google.

Searches:

sortino OR sharpe OR lake "performance evaluation"

"mark johnson" "trading systems" "performance evaluation"

g.c. ratio "trading systems" "performance evaluation"

If you find interesting results in those, change the keywords.

Use "Google advanced" to search the sites where some good results come from.

If you find that academic research appeals to you, you can use the advanced feature to search for PDF file types or search through "Google scholar" for that type of material.

The point: there is a LOT of information about performance ratios, primarily because the different ratios each express a different theoretical viewpoint on what "risk" is and what the preferences of the trader are. I think that building a broad understanding of the concepts through wide reading is helpful in determining which measurements are best for each individual.

yoyo2000
Roundtable Fellow
Posts: 58
Joined: Fri Jan 30, 2004 10:37 pm
nodoodahs, I followed your keywords on Google, but it seems that the first and main results point to this forum, and there are only brief mentions to be found, as sluggo said.

In my opinion, there are three kinds of performance indicators (PIs, for short). The first kind is the basic ones, including net profit, APR, MDD, average bars and average win/loss bars, average trade and average win/loss trade, win %, equity chart, underwater chart, and other basic PIs.
The second kind is those for comparing performance with other strategies on C2 or Morningstar, including the Sharpe ratio, the Sortino ratio (maybe), and the profit factor (also a basic PI).
The third kind is the hand-rolled PIs a developer or customizing user builds to observe and study his own trading system; this kind of PI is very personal, including Seykota's Lake Ratio, Mark Johnson's figures, the g.c. Ratio, etc.

Anyway, the most important thing is that one uses the PIs which fit him.
anyway,the most important thing is one uses PI which fits him.