Recently I read "Way of the Turtle" (that's how I discovered this forum). Great book, especially given the preponderance of low-quality TA books out there.
One suggestion I thought of is a possible improvement over the R-cubed measure. The way R-cubed is defined involves making choices (it considers the N largest drawdowns), so it's not functorial. Now, RAR doesn't just give you a number; it produces a curve, namely the best-fit line, call it RL. This line is a distinguished object in the set E of all possible equity curves producing that RL, and to me it seems that RL is the best element of E. So let d be the L1 or L2 distance between the actual equity curve and its RL, and let K be the time interval over which the system is tested. We can define a measure M by

    M = RAR / (1 + d/K)

The L2 distance will emphasize drawdowns more than L1. No choices were made in this definition; the only assumption is that RL is the best representative of its class E, which seems very reasonable to me.
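A minimal sketch of how M might be computed, assuming a daily equity series, RL taken as the least-squares line through log(equity), and RAR derived from its slope. The function name and conventions here are illustrative, not from any particular package:

```python
import numpy as np

def measure_m(equity, periods_per_year=252, p=2):
    """Sketch of M = RAR / (1 + d/K); p=1 gives the L1 distance, p=2 the L2."""
    t = np.arange(len(equity))
    log_eq = np.log(equity)
    slope, intercept = np.polyfit(t, log_eq, 1)   # RL: best-fit line through log equity
    rl = intercept + slope * t
    rar = np.exp(slope * periods_per_year) - 1    # regressed annual return from RL's slope
    d = np.linalg.norm(log_eq - rl, ord=p)        # distance between the curve and its RL
    k = len(equity) / periods_per_year            # test interval K, in years
    return rar / (1.0 + d / k)
```

For a perfectly exponential equity curve d = 0 and M reduces to RAR; any wobble around RL lowers M.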
R-cubed
You've just (re)invented R-squared. (Which is a built-in statistic in Trading Blox software).
You give the highest prize to the equity curve that most closely matches RL. So does R-squared. And by the way, computing R-squared does not involve making choices. Your measure "M" can be replaced wlog by
    M' = RAR x Rsquared
EDIT - fixed typo.
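For comparison, here is the same kind of sketch for M', again assuming the regression is run on log(equity); Trading Blox's exact conventions may differ:

```python
import numpy as np

def measure_m_prime(equity, periods_per_year=252):
    """Sketch of M' = RAR x R-squared."""
    t = np.arange(len(equity))
    log_eq = np.log(equity)
    slope, intercept = np.polyfit(t, log_eq, 1)
    fitted = intercept + slope * t
    ss_res = np.sum((log_eq - fitted) ** 2)           # residual sum of squares
    ss_tot = np.sum((log_eq - log_eq.mean()) ** 2)    # total sum of squares
    r_squared = 1.0 - ss_res / ss_tot
    rar = np.exp(slope * periods_per_year) - 1
    return rar * r_squared
```

As with M, a curve that hugs its regression line scores R-squared near 1 and M' collapses to plain RAR.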
Attachments:
[graphpad_site.png (from the website of the GraphPad statistical software)]
[bloxman.png (from the Trading Blox user's guide)]
Last edited by sluggo on Sat Feb 28, 2009 10:14 am, edited 1 time in total.
Thank you for pointing out R-squared. However, neither the L1 nor the L2 distance ignores the duration of a drawdown: if the equity curve spends a long time away from RL, both will reflect that, because the sum picks up many large terms. Keeping both distances also lets you draw additional conclusions. Since the L2 norm never exceeds the L1 norm, the signal is in their ratio: if L2/L1 is close to 1, the drawdowns are short and severe; if L2/L1 is small, they are long and dull.
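One wrinkle worth noting: for the raw sums, the L2 norm of a residual vector never exceeds its L1 norm, so the comparison is really about the ratio of the two distances. A toy sketch with made-up residual patterns, both the same L1 distance from RL:

```python
import numpy as np

# Two residual patterns with the same L1 distance from RL (total deviation 10):
severe = np.zeros(100)
severe[50] = 10.0                 # one short, severe drawdown
dull = np.full(100, 0.1)          # a long, dull excursion

for name, r in [("severe", severe), ("dull", dull)]:
    l1 = np.linalg.norm(r, 1)
    l2 = np.linalg.norm(r, 2)
    print(f"{name}: L1={l1:.1f}  L2={l2:.1f}  L2/L1={l2 / l1:.2f}")
```

The concentrated drawdown pushes L2/L1 toward 1, while the spread-out one pushes it toward 1/sqrt(n), which is exactly the short-and-severe versus long-and-dull diagnostic.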
The WOTT book (and Blox software) calculate a "Robust Sharpe" statistic by changing the numerator of the Sharpe ratio. Instead of the Compound Annual Growth Rate, it uses the Regressed Annual Return (RAR):
    Robust Sharpe = RAR / Annualized_Standard_Deviation_of_Returns
    Robust MAR = RAR / MaxDD
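A sketch of these two statistics, assuming daily equity, RAR taken from the slope of the regression through log(equity), annualized standard deviation of daily log returns, and peak-to-trough max drawdown (these conventions are my assumptions, not necessarily Blox's):

```python
import numpy as np

def rar(equity, periods_per_year=252):
    t = np.arange(len(equity))
    slope, _ = np.polyfit(t, np.log(equity), 1)
    return np.exp(slope * periods_per_year) - 1   # regressed annual return

def robust_sharpe(equity, periods_per_year=252):
    rets = np.diff(np.log(equity))
    ann_std = rets.std(ddof=1) * np.sqrt(periods_per_year)
    return rar(equity, periods_per_year) / ann_std

def robust_mar(equity):
    peak = np.maximum.accumulate(equity)
    max_dd = np.max((peak - equity) / peak)       # worst peak-to-trough drawdown
    return rar(equity) / max_dd
```

Both are the familiar Sharpe and MAR ratios with CAGR swapped out for RAR in the numerator.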
By the way, this suggests a procedure for calculating the Max without making choices. Just compute the L1 norm, L10 norm, L100 norm, and L1000 norm, none of which requires choices. Then regression-fit an exponential c - b*exp(-a*L) to the data points, which doesn't require choices either. The asymptote of the fitted exponential at L = infinity is the L-infinity norm, which is the Max. Done. Woohoo. (This is a common stunt used in minimax optimization, called the "least pth" algorithm: you approximate the L-infinity norm by the L-p norm, for small integer values of p, and use the least value of p that gives convergence. Here's an example: (link))
*not identical, I think, because of the 1 and the K. But philosophically close anyway.
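A sketch of the least-pth idea on a made-up list of drawdowns, computed in log space so that large p doesn't underflow; the L-p norms decrease monotonically toward the max as p grows:

```python
import numpy as np

def lp_norm(x, p):
    """Numerically stable (sum |x|^p)^(1/p), computed in log space."""
    logs = p * np.log(np.abs(x))
    m = logs.max()
    return np.exp((m + np.log(np.sum(np.exp(logs - m)))) / p)

drawdowns = np.array([0.03, 0.12, 0.07, 0.25, 0.18])
for p in (1, 10, 100, 1000):
    print(f"L{p} norm: {lp_norm(drawdowns, p):.4f}")

# The values shrink toward max(drawdowns) = 0.25; the exponential fit
# c - b*exp(-a*p) described above just extrapolates this sequence to p = infinity.
```

By p = 1000 the norm is already indistinguishable from the max for practical purposes, which is why small integer values of p usually suffice.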