Accuracy of worst drawdown
Does anyone have a rule of thumb for analyzing the worst drawdown that a system may kick out going forward, beyond what a 20-year test shows? For example, if I have a system with a MAR of 2 (40% CAGR with a 20% max drawdown), I was thinking of doubling the worst drawdown going forward, i.e. I had better be able to live with a 40% worst drawdown. Is this too aggressive? I have some friends in real trading funds who usually double their worst backtested drawdowns for Armageddon numbers. Any thoughts, anyone?
Thank you,
Chris
Chris,
I think that's about right. A lot of people squirm around giving concrete answers, since nothing is for sure, but from asking the same questions I have gathered that a realistic drawdown is your worst DD% * 1.5.
Then you take the worst DD% * 1.75 and * 2 to come up with your very worst drawdown, the kind that typically arrives when the Four Horsemen come flying down your street and start knocking on your door.
In discussions of drawdowns, I think the length of a drawdown tends to be overlooked, and it is probably the worst part of a drawdown in the first place: months of wondering whether you are going to make it.
So I propose also considering the theoretical length of drawdowns. Since larger bet sizing typically accompanies longer drawdown durations, I would look at large drawdowns, similar in percentage terms to your worst theoretical drawdown, and see how long they typically last with your system or a system a lot like it.
So you might want to look at 40% drawdowns (assuming your tested drawdown is around 20%) and see how long they typically last. While it won't be exact, it might give you an idea of how long your worst-case scenario could last.
So, my long answer to your short question:
Max drawdown is 1.5x to 2x the tested figure, and
duration of drawdown is maybe 1.5x the tested duration, or about the duration of a drawdown that typically accompanies a drawdown 1.5x to 2x your tested max.
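As an illustration, the rule-of-thumb arithmetic above can be sketched in a few lines (a minimal sketch; the function name and the 20% / 14-month example inputs are hypothetical, not from any actual backtest):

```python
def stress_drawdown(tested_maxdd, tested_duration_months):
    """Scale a backtested worst drawdown into 'realistic' and
    'worst case' planning numbers using the rule-of-thumb
    multipliers discussed above (1.5x and 2x for depth,
    1.5x for duration)."""
    return {
        "realistic_dd": tested_maxdd * 1.5,
        "worst_case_dd": tested_maxdd * 2.0,
        "expected_duration_months": tested_duration_months * 1.5,
    }

# Hypothetical example: a 20% tested MaxDD with a 14-month tested duration.
estimate = stress_drawdown(0.20, 14)
```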
Chris,
This is an excellent topic. My belief is that the appropriate way to look at drawdown is subjective. If the suit fits, wear it; if not, try on another one.
I look at Monte Carlo simulations, paying close attention to the distributions of possible paths. I also use heuristics to change the probability distribution in favor of fat tails for negative returns. If the system uses parameters that, when incrementally changed, yield more volatile hypothetical return and risk results, then I am more likely to raise the probability of worse results (from, say, less than 5% to something like 10% to 15% or even 20%) when calculating expected results. Lastly, I will not trade a system that seems to have a reasonable (if small, subjectively determined) likelihood of suffering a new MaxDD 2.5 times greater than the simulated MaxDD (emphasis: simulated).
Use your imagination and do not boil it down to a single, absolute rule. Employ your judgment. This is decidedly *not* science.
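One way the fat-tail tilt described above could be implemented is sketched below (an interpretation, not Ken's actual method: here the tilt simply redraws, with some probability, from the worst decile of historical trade returns):

```python
import random

def bootstrap_maxdd(trade_returns, n_trades, tail_prob=0.10):
    """Simulate one equity path by resampling trade returns with
    replacement; with probability `tail_prob`, draw from the worst
    decile only -- a crude way to fatten the negative tail.
    Returns the path's worst peak-to-valley drawdown (as a fraction)."""
    worst_decile = sorted(trade_returns)[: max(1, len(trade_returns) // 10)]
    equity, peak, maxdd = 1.0, 1.0, 0.0
    for _ in range(n_trades):
        pool = worst_decile if random.random() < tail_prob else trade_returns
        equity *= 1.0 + random.choice(pool)
        peak = max(peak, equity)
        maxdd = max(maxdd, 1.0 - equity / peak)
    return maxdd
```

Running this many times and comparing the drawdown distributions at different `tail_prob` settings gives a feel for how sensitive the "expected worst" is to the tail assumption.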
Ken
Thanks for the nice food for meditation and imagination. I like to look at fat tails, and I've been thinking about a reasonable way to consider drawdown measures. I know there are %DD, absolute DD, MAR, and other ratios, but I still find myself more comfortable looking at a simple DD chart (absolute and %). I was thinking of measuring the drawdown surface against the return surface to get a better measure, because the measures above consider the magnitude but not the frequency of DDs,
i.e. is it better to have an average 15% DD with a max 20% DD, or an average 5% DD with a max 30% DD?
While I'm writing, I'm answering myself (too bad!?!): "there is no perfect car for everyone".
best regards, as ever

 Roundtable Knight
 Posts: 118
 Joined: Tue Apr 15, 2003 7:44 pm
 Location: Arizona
What I do is what Ken described: use Monte Carlo techniques to create 100,000 virtual traders, each one with their own MC generated equity curve. Find the maxDD for each of these virtual traders' equity curves, and construct a cumulative histogram: What % of virtual traders had a maxDD of 15% or higher? What % of virtual traders had a maxDD of 16% or higher? What % of virtual traders had a maxDD of 17% or higher? (rinse and repeat)
Then comes the human part: deciding which piece of the histogram applies to me. How much guts do I have? Where is my uncle point? What percent of virtual traders went beyond my uncle point in MC simulation?
I do this several times: for MaxDD, and again for #days_in_longest_drawdown, and again for avg_#days_between_equity_new_highs. Once you've used MC to generate "the beans", you can "count the beans" a number of different ways. This gives you more data to study and strengthens your excuse for having analysis paralysis.
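The virtual-trader procedure described above might be sketched like this (assuming trade-level fractional returns and a simple scramble of trade order; the function names are illustrative only):

```python
import random

def virtual_trader_maxdds(trade_returns, n_traders=10_000):
    """One scrambled equity curve per virtual trader; record each
    curve's worst peak-to-valley drawdown (as a fraction)."""
    maxdds = []
    for _ in range(n_traders):
        order = random.sample(trade_returns, len(trade_returns))  # scramble
        equity, peak, worst = 1.0, 1.0, 0.0
        for r in order:
            equity *= 1.0 + r
            peak = max(peak, equity)
            worst = max(worst, 1.0 - equity / peak)
        maxdds.append(worst)
    return maxdds

def pct_at_or_above(maxdds, threshold):
    """What % of virtual traders had a MaxDD of `threshold` or higher?"""
    return 100.0 * sum(dd >= threshold for dd in maxdds) / len(maxdds)
```

Calling `pct_at_or_above` over a grid of thresholds (15%, 16%, 17%, ...) produces exactly the cumulative histogram described in the post.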
teda
Ted,
when you say "100,000 virtual traders", do you mean 100,000 equity curve scrambles, and/or do you scramble or randomize trades?
I've tried the WL Montecarlo Addon; there is only one problem: back-adjusted contracts like Crude Oil. For instance:
02/21/1996 buy 1 CL @ 1.38
04/30/1996 sell 1 CL @ 4.23
$Profit: 5,610
%Change: 406.52
and the WL Montecarlo uses the % changes.
I've tried a simulation on KC (10,000 equity curve scrambles with 115 trades). While I try to solve the % change problem and do some research, I guess my mind is less distorted now than when I am trading.
Trading system and Uncle Points are ready, just need adequate capitalization.
A few more questions: did you build your Monte Carlo tool yourself, and have you read Nassim Taleb?
Thanks again for the advice (hoping you don't send me the bill!).
best regards, as ever
Attachment: dd.JPG (366.78 KiB)

That type of plot is called a "probability density function". If you use a bit of calculus and compute the integral from left to right, you get another plot called the "cumulative probability" or "distribution function", which I happen to think is more helpful for traders.
The legend of the WealthLab plot calls out a few data points from the distribution function. Wouldn't it have been nice if they had plotted the whole curve?
the 1% point of the distribution is a MaxDD of 53.88%
the 5% point of the distribution is a MaxDD of 48.01%
the 50% point of the distribution is a MaxDD of 29.84%
the 95% point of the distribution is a MaxDD of 20.58%
the 99% point of the distribution is a MaxDD of 18.09%
This means (for example) that among your 10,000 virtual traders, 95% of them had a MaxDD of 20.58% or worse. Personally I find that to be the more useful information. My personal interpretation is: "there is a 95% probability that I will undergo a MaxDD at least as bad as 20.58%". Furthermore, there is a 5% probability that I will suffer a MaxDD of 48.01% or worse.
Also (speaking again of me personally) I find it useful to MC simulate the trading of an entire portfolio, using my position sizing rules, rather than just looking at one market like KC. I assume the $150 addon for WealthLab will let you do so.
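Given a list of virtual-trader MaxDDs (like the output of the simulation described earlier in the thread), distribution points like those in the legend could be extracted roughly as follows (a sketch; the rank handling is deliberately simple):

```python
def distribution_points(maxdds, probs=(0.01, 0.05, 0.50, 0.95, 0.99)):
    """For each probability p, return the MaxDD such that a fraction p
    of the simulated traders suffered that drawdown or worse."""
    ranked = sorted(maxdds, reverse=True)  # worst drawdown first
    n = len(ranked)
    return {p: ranked[max(0, round(p * n) - 1)] for p in probs}
```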
Ted Annemann wrote: "Also (speaking again of me personally) I find it useful to MC simulate the trading of an entire portfolio, using my position sizing rules, rather than just looking at one market like KC. I assume the $150 addon for WealthLab will let you do so."
Ted, I take it you're doing MC on single-lot trades within the portfolio: shuffling order, then applying sizing according to equity at the point of the new sequence.
Q: How do you deal with scaling and MC? Scaling is obviously dependent on sequence from the entry trade ... or do you discount that?
If you're trading Turtle, do you ensure that the rule conditions still hold for the MC-shuffled trades, like the max number of directional units?
Cheers,
Kevin

ksberg, I used to do #2 and #3. Now I am experimenting with #4.
viewtopic.php?p=5749&highlight=#5749
Haven't tested the Turtle system yet. My instinct predicts that #4 could do it, with scaling in and with correlated position limits and all the rest ... if desired. It's up to the MC analyst to decide whether that is actually desirable or not. I suspect there are good reasons pro and con. However, at the moment this is just speculation, since we haven't done the experiment.
If you own Veritrader you could do this yourself with a bit of work. You could write new software, call it software #5, similar in a way to "continuous contract" software, that reorders price data. Say that OTS exited wheat on 3/3/1994 and went flat, then entered short on 6/9/1994, then exited short on 11/19/1994. Chop out that hunk of prices from 3/3/1994 to 11/19/1994: it contains a "flat period" (with no position) followed by a "trade period" (with a short position). Assemble a bunch of randomly selected price hunks into a new tradeable "MC-Wheat". Do this again with Gold, Hogs, Swiss franc, etc., then run Veritrader on the whole bunch. Now it's Veritrader's job (instead of your MC software's job) to deal with scaling in, correlated position limits, etc. Veritrader does the bean counting and presents all the lovely statistical data you already know and love. You the MC analyst can decide whether to do the price-hunk rearranging independently on all markets, or whether to chop out the hunks in unison for highly correlated markets (like TY and US), or not.
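The hunk-reordering idea might be sketched like this (assuming `bars` is a plain list of price bars and the flat/trade hunk boundaries have already been extracted from the system's trade list; all names are hypothetical):

```python
import random

def resegment_prices(bars, hunk_boundaries):
    """Chop a price series into hunks at the given boundary indices
    (each hunk = a flat period followed by a trade period), shuffle
    the hunks, and reassemble them into a synthetic series
    ("MC-Wheat") for the backtester to trade."""
    hunks, start = [], 0
    for end in hunk_boundaries:
        hunks.append(bars[start:end])
        start = end
    hunks.append(bars[start:])
    random.shuffle(hunks)
    return [bar for hunk in hunks for bar in hunk]
```

Each hunk stays internally intact, so within-trade price dependency is preserved; only the order of flat-plus-trade episodes is randomized.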
Ted Annemann wrote: "ksberg, I used to do #2 and #3. Now I am experimenting with #4."
Your proposal #4 is an interesting take on MC. Perhaps another way to view the trade vector is by preserving relative order between dependent events. For example, if we scaled into Gold, but had 3 unrelated trades occur before putting on the 2nd unit, 9 unrelated trades before the 3rd, and 5 before the 4th, our sequence vector is 0, 3, 9, 5. We shuffle based on the entry (0), then insert the scaled trades at sequenced intervals offset from the entry. The shuffle completely ignores dates.
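This sequence-vector shuffle might be sketched as follows (an interpretation only: offsets are taken here as cumulative trade counts from the entry unit, assumed increasing; all names are hypothetical):

```python
import random

def shuffle_with_offsets(independent_trades, scale_groups):
    """Shuffle the independent trades, then re-insert each scale-in
    group's later units at their original relative offsets from the
    (shuffled) entry unit, so dependent events keep their order while
    dates are ignored entirely.

    `scale_groups` maps an entry trade to (offset, trade) pairs, e.g.
    {"G1": [(3, "G2"), (12, "G3")]} means the 2nd Gold unit went on
    3 trades after the entry and the 3rd unit 12 trades after it."""
    seq = list(independent_trades)
    random.shuffle(seq)
    for entry, followers in scale_groups.items():
        base = seq.index(entry)
        for offset, trade in followers:
            seq.insert(min(base + offset, len(seq)), trade)
    return seq
```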
I would approach your proposal on market price resegmentation with caution. I would think the splices should occur at points of equal volatility, or at least avoid a jarring volatility change. Also, resegmentation removes chaotic persistence (i.e. market dependency), and trends are the very reason we can make money. Perhaps congestion areas make good potential splice points, since there you have a chance of preserving the underlying behavior.
Anyway, the data reordering has as many issues as continuous contract data, if not more. I think I would prefer MC based on your suggestion #4.
Cheers,
Kevin