R-cubed

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
tigertaco
Full Member
Posts: 23
Joined: Fri Feb 27, 2009 1:50 am

R-cubed

Post by tigertaco » Fri Feb 27, 2009 2:50 pm

Recently I read "Way of the Turtle" (that's how I discovered this forum). Great book, especially given the preponderance of low-quality TA books out there.

One suggestion I thought of is a possible improvement over the R-cubed measure. The way R-cubed is defined involves making choices (it considers the N largest drawdowns), so it's not functorial. Now, RAR doesn't just give you a number; it produces a curve, namely the best-fit line, call it RL. This line is a distinguished object in the set E of all possible equity curves producing that RL, and to me RL seems to be the best element of E. So let d be the L1 or L2 distance between the actual equity curve and its RL, and let K be the time interval over which the system is tested. We can define a measure M by
  • M = RAR / (1 + d/K)
The L2 distance will emphasize drawdowns more than L1. No choices were made in this definition, only the assumption that RL is the best representative of its class E, which seems very reasonable to me.
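A minimal sketch of M in Python, assuming daily equity data, RAR taken as the annualized slope of a least-squares fit to log equity, and d measured on log equity; the function names are illustrative, not anything from Trading Blox:

```python
import numpy as np

def best_fit_line(log_equity):
    """Least-squares line through log equity: the 'RL' of the post."""
    t = np.arange(len(log_equity))
    slope, intercept = np.polyfit(t, log_equity, 1)
    return slope * t + intercept

def rar(log_equity, periods_per_year=252):
    """Regressed annual return: annualize the slope of the fitted line."""
    t = np.arange(len(log_equity))
    slope, _ = np.polyfit(t, log_equity, 1)
    return np.expm1(slope * periods_per_year)

def m_measure(equity, p=2, periods_per_year=252):
    """M = RAR / (1 + d/K); d is the Lp distance from the curve to RL."""
    log_eq = np.log(equity)
    d = np.linalg.norm(log_eq - best_fit_line(log_eq), ord=p)  # p=1 or p=2
    K = len(equity) / periods_per_year  # length of the test, in years
    return rar(log_eq, periods_per_year) / (1.0 + d / K)
```

For a perfectly smooth exponential equity curve d is zero and M equals RAR; any wiggle around RL pushes M below RAR.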

sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo » Fri Feb 27, 2009 3:05 pm

You've just (re)invented R-squared. (Which is a built-in statistic in Trading Blox software).

You give the highest prize to the equity curve that most closely matches RL. So does R-squared. And by the way, computing R-squared does not involve making choices. Your measure "M" can be replaced, without loss of generality, by
  • M' = RAR x Rsquared
Also, don't lose sight of the goal: to help human beings decide which equity curves they prefer most. R-cubed explicitly takes note of a psychological fact: drawdown depth produces one kind of pain, and drawdown duration produces another kind of pain. So R-cubed includes both of them in its composite gain/pain ratio. The measures M and M' above only look at drawdown depth (magnitude of variation); they omit drawdown duration. By favoring mathematical and computational simplicity, M and M' toss the baby out with the bathwater.
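A sketch of M' under the same kind of assumptions as before (daily data, R-squared taken from a linear fit to log equity; names are illustrative only):

```python
import numpy as np

def r_squared(log_equity):
    """Coefficient of determination of the linear fit to log equity."""
    t = np.arange(len(log_equity))
    slope, intercept = np.polyfit(t, log_equity, 1)
    ss_res = np.sum((log_equity - (slope * t + intercept)) ** 2)
    ss_tot = np.sum((log_equity - log_equity.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def m_prime(equity, periods_per_year=252):
    """M' = RAR x Rsquared, with RAR the annualized slope of log equity."""
    log_eq = np.log(equity)
    slope, _ = np.polyfit(np.arange(len(log_eq)), log_eq, 1)
    return np.expm1(slope * periods_per_year) * r_squared(log_eq)
```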

EDIT - fixed typo.
Attachments
From the website of "graphpad" statistical software (graphpad_site.png)
From the Trading Blox user's guide (bloxman.png)
Last edited by sluggo on Sat Feb 28, 2009 10:14 am, edited 1 time in total.

tigertaco
Full Member
Posts: 23
Joined: Fri Feb 27, 2009 1:50 am

Post by tigertaco » Fri Feb 27, 2009 3:54 pm

Thank you for pointing out R-squared. However, neither the L1 nor the L2 distance ignores drawdown duration: if the equity curve spends a long time away from RL, both distances will reflect that, since the sum will contain many large terms. Keeping both distances also lets you draw additional conclusions. (Note that L2 <= L1 always holds for a finite sum, so it's the ratio that carries the information.) If d2/d1 is close to 1, the deviation is concentrated in a few points, so drawdowns are short and severe; if d2/d1 is small, the deviation is spread over many points, so drawdowns are long and dull.
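A quick illustration of that point with synthetic deviation vectors (nothing here comes from any trading software): two profiles with the same total L1 distance from RL, one concentrated and one drawn out, produce very different L2 distances.

```python
import numpy as np

# Two hypothetical deviation profiles (equity minus RL) over a 100-day
# test, constructed to have the SAME total L1 distance:
spiky = np.zeros(100)
spiky[:4] = 5.0              # short, severe excursion
dull = np.full(100, 0.2)     # long, shallow excursion

for name, dev in (("spiky", spiky), ("dull", dull)):
    d1 = np.linalg.norm(dev, 1)
    d2 = np.linalg.norm(dev, 2)
    # the concentrated excursion yields the larger d2 and d2/d1 ratio
    print(f"{name}: d1={d1:.1f}  d2={d2:.1f}  d2/d1={d2 / d1:.2f}")
```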

sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo » Sun Mar 01, 2009 11:43 am

The WOTT book (and Blox software) calculate a "Robust Sharpe" statistic by changing the numerator of the Sharpe ratio. Instead of the Compound Annual Growth Rate, it uses the Regressed Annual Return (RAR):
  • Robust Sharpe = RAR / Annualized_Standard_Deviation_of_Returns
This immediately suggests defining a "Robust MAR" statistic in the same way: replace CAGR with RAR:
  • Robust MAR = RAR / MaxDD
Now we recall the fun mathematical fact that the L-infinity norm of a vector is simply the max of its individual coordinates. Sooooo, if we calculate your statistic "M" using the L-infinity norm, rather than L1 or L2, we get something very closely related* to "Robust MAR". Cool.
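A sketch of Robust MAR, assuming daily data and RAR taken as the annualized slope of a least-squares fit to log equity (illustrative names, not the Blox implementation); the last lines check the L-infinity fact directly:

```python
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = np.maximum.accumulate(equity)
    return np.max(1.0 - equity / peak)

def robust_mar(equity, periods_per_year=252):
    """Robust MAR = RAR / MaxDD (RAR from a linear fit to log equity)."""
    log_eq = np.log(equity)
    slope, _ = np.polyfit(np.arange(len(log_eq)), log_eq, 1)
    return np.expm1(slope * periods_per_year) / max_drawdown(equity)

# The L-infinity norm of a deviation vector is just its largest entry:
dev = np.array([0.02, -0.15, 0.07, -0.04])
assert np.linalg.norm(dev, np.inf) == np.abs(dev).max()
```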

By the way, this suggests a procedure for calculating the max without making choices. Just compute the L1 norm, L10 norm, L100 norm, and L1000 norm, none of which require choices. Regression-fit an exponential (c - b*exp(-aL)) to the datapoints, which doesn't require choices either. The asymptote of the fitted exponential at L=infinity is the L-infinity norm, which is the max. Done. Woohoo. (This is a common stunt used in minimax optimization, called the "least pth" algorithm: you approximate the L-infinity norm by the L-p norm for small integer values of p and use the least value of p that gives convergence. Here's an example: (link))

*not identical, I think, because of the 1 and the K. But philosophically close anyway.
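A sketch of the convergence the least-pth trick exploits: just the L-p norms marching toward the max as p grows (the exponential fit described above would extrapolate from points like these). The rescaling by the largest entry inside lp_norm is an assumption of mine to avoid floating-point overflow at large p:

```python
import numpy as np

def lp_norm(x, p):
    """L-p norm, computed stably by factoring out the largest entry."""
    m = np.abs(x).max()
    return m * np.sum((np.abs(x) / m) ** p) ** (1.0 / p)

x = np.array([1.0, 3.0, 2.0, 7.0, 5.0])
estimates = [lp_norm(x, p) for p in (1, 10, 100, 1000)]
# the estimates decrease monotonically toward max(|x|) = 7
```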

tigertaco
Full Member
Posts: 23
Joined: Fri Feb 27, 2009 1:50 am

Post by tigertaco » Tue Mar 03, 2009 4:21 pm

That's very interesting. It fits a general philosophy of replacing thresholds/choices with a parameter that is varied until stability is achieved; the stable value then enters the original definition.
