Another new Goodness Measure; also, walking it forward

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Another new Goodness Measure; also, walking it forward

Post by rabidric »

Edit: this pic ("sluggo's pic") is referenced a lot in this newly split topic:
Image

I have started employing a variation on MAR that I feel is more representative of the actual growth-versus-pain relationship when varying leverage.


Here is how it goes:

Hill ratio = (1 + CAGR) * (1 - MaxDD)

There is nothing particularly significant about the output value itself, as the first factor is tied to an arbitrary period (annual) and the second has no time dimension. Getting above 1 is a worthwhile goal though, imho.

The specific reason for using this variation on MAR is that it strips out the natural arithmetic inflation that MAR builds in as leverage increases.
E.g. if you drop 33%, you need to make 50% to get back to where you started. An arithmetic comparison of those numbers yields 1.5, but since the actual process is multiplicative and leaves you exactly where you started (in this example), I think the comparison should bear that in mind. Similar reasoning is behind the use of log returns, rather than absolute returns, in option pricing.

The reason I posted in this thread was so that users could reference the nice diagram a few posts above. Much of the green MAR line's rise in the left-hand portion of the graph is due to the creeping arithmetic bias as we go from, say, 1% CAGR and 1% MaxDD at low leverage (MAR 1) to 100% CAGR and 50% MaxDD at higher leverage (MAR 2).
With either MAR, the simple fact remains that you require an entire year's worth of return to get out of your worst drawdown (in this example). In that sense, the higher MAR is not really a better measure of gain to pain.
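To make the arithmetic bias concrete, here is a minimal Python sketch (mine, not from the original post) comparing classic MAR with the Hill ratio at the two leverage points just described:

```python
def mar(cagr, max_dd):
    """Classic MAR: arithmetic ratio of annual return to max drawdown."""
    return cagr / max_dd

def hill_ratio(cagr, max_dd):
    """Hill ratio: multiplicative gain-to-pain, (1 + CAGR) * (1 - MaxDD)."""
    return (1 + cagr) * (1 - max_dd)

# Low leverage: 1% CAGR, 1% MaxDD
print(mar(0.01, 0.01), round(hill_ratio(0.01, 0.01), 4))   # 1.0 0.9999

# Higher leverage: 100% CAGR, 50% MaxDD
print(mar(1.00, 0.50), round(hill_ratio(1.00, 0.50), 4))   # 2.0 1.0
```

MAR doubles between the two points while the Hill ratio barely moves, which is exactly the inflation the post describes.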

With the GMAR/logMAR/Rabid Ratio, such a move when increasing leverage keeps the ratio at parity. In actual backtesting the new ratio lets the user more keenly identify areas of real performance enhancement or over/under-leverage.

Of course, a flippant assessment of this could just conclude "meh, yet another gain to pain ratio, it is all subjective anyway." Well, yes. But I think this ratio helps clarify some things that the benchmark MAR can misrepresent. Enjoy.

I hope someone didn't already do this somewhere on this forum; it would be nice to have actually come up with an original concept. :lol: :idea:
Last edited by rabidric on Mon Mar 21, 2011 12:12 pm, edited 4 times in total.
LeviF
Roundtable Knight
Posts: 1436
Joined: Mon Dec 22, 2003 12:24 pm
Location: Des Moines, IA
Contact:

Post by LeviF »

Interesting. This is related to my recent post here.

viewtopic.php?t=8225&highlight=leverage
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

Yep, that thread was the other contender for my post. However, since my ratio still changes as leverage changes, I could not claim that it is actually independent of leverage. I fear the replies in your thread are right: there is no leverage-independent goodness function. This gets you closer, though. 8)

So I posted here, with the nice graph at hand.
All behold and worship the Green dot.
It is the Shining City on the Hill. 8)

....yet it is a false prophecy.
Rabid shall show ye that the true City lies in a slightly different place, on a different Hill.

[Run the numbers on sluggo's results above. The peak MAR as shown is further to the right than where Rabid's Hill ratio of true gain to pain would have it; likely a lot further in this instance...]
Last edited by rabidric on Mon Mar 21, 2011 12:13 pm, edited 1 time in total.
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Let's use %RAR and we could call it the Robust Rabid Ric Ratio (R^4)!
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

Lol, nice one. That's a lot of R's though! By all means use RAR if you want; I prefer simple, but it is up to you.
For the name I was wondering about:

Hill Ratio.

It is my surname. For posterity and all that....
:wink:
bobsyd
Roundtable Knight
Posts: 405
Joined: Wed Nov 05, 2008 2:49 pm

Post by bobsyd »

Attached is an auxiliary blox to assist in checking the performance of 12 Additional Goodness Measures, including Rabid Ric’s Hill Ratio. One way to assess the performance of goodness measures is to run Walk Forward tests where nothing is changed in each Simulation run except the Measure of Goodness Index in Edit>Preferences>Reporting General. A modified Walk Forward blox is also attached which has the Print Output reformatted to facilitate quicker compilation and analysis of results.

Shown below are the results of testing CAGR, MAR, Modified Sharpe, Annual Sharpe and the 12 Additional Goodness Measures on the default Walk Forward Suite (Donchian System – default parameter settings attached) in a fresh copy of TBB (v 3.7.2.1).

The only things I changed were to remove the Statistics Robust blox and replace with the 12 Additional Goodness Measures blox, remove the Walk Forward v4 blox and replace with the Walk Forward v4 (modified) blox, set the End Date to 2010-12-31, and in Global Parameters>Auxiliary set the Run (Index) to step from 1 to 8 by 1 plus set Optimization Run to Step True to False.

One set of results is clearly not enough to determine even preliminary conclusions. Many tests using different portfolios, different walk forward parameter ranges, different date ranges and different systems would be necessary before drawing any conclusions.

Hopefully others will perform their own tests and report their results in this thread. This could be a very useful addition to the wealth of knowledge available in this forum. Most of the forum discussion on goodness measures appears to have been either theoretical or based on experience, the basis of which is unknown.

There has also been a strong thread of "whatever YOU like"
Attachments
Donchian Parameters.JPG
Donchian Parameters.JPG (122.31 KiB) Viewed 10820 times
Walk Forward Results.JPG
Walk Forward Results.JPG (184.6 KiB) Viewed 10820 times
Walk Forward v4 (modified).tbx
(28.98 KiB) Downloaded 317 times
12 Additional Goodness Measures.tbx
(2.61 KiB) Downloaded 324 times
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Bobsyd,

You have raised a quite fascinating issue. First I want to make sure I have correctly understood it!

Concept: Maximizing one goodness measure, Gw, in a walk-forward simulation may lead to the global maximization of another goodness measure, Gg, across the entire span of the simulation.

In the example you have posted, using CAGR, Sortino (annualized) or WF Sharpe as Gw resulted in the best MAR (Gg).

My knee-jerk reaction to this proposition is: be careful not to be fooled by randomness. Firstly, you have essentially introduced a new parameter, Gw, and have optimized Gg by varying the choice of Gw as you would vary any parameter in a traditional back-test (i.e. not a walk-forward).

Secondly, I am very skeptical of walk-forward in general. I think in your example there are 8 walk-forward tests (I didn't open the blox, but 2003 - 2010 inclusive is 8 one-year periods). In this particular system there are 2 parameters used in the search, so this essentially amounts to optimizing in the traditional way using 16 (2 x 8) parameters. I would be surprised if one could not arrive at a better overall Gg by searching all 43m (3^16) combinations of those 16 parameters. Those values, of course, would be useless going forward, since they would be tied to a date range in the past. By optimizing on Gw instead of Gg in each of the walk-forward optimizations, you have given the system the "chance" to be closer to the 16 values that would have been optimal for Gg. It may just be a matter of luck that in one particular back-test one particular choice of Gw does this better than another.

So, identifying the Gw that gave the best overall Gg is philosophically no different from identifying the best Entry Breakout / Exit Breakout combination without using the walk-forward. This approach essentially subverts the walk-forward process: it amounts to completing back-tests to identify the Gw that, when used in walk-forward mode, gave the best Gg. It is a traditional back-test disguised as a walk-forward. Any time one modifies one's approach based on the results of a walk-forward test (using hindsight to select Gw), one is undermining its walk-forward-ness, because one is using results from later in the test to adjust what happens earlier in the test!

The odds that the same choice of Gw will give us the best Gg in the future are likely no better than the odds that the best set of Entry Breakout / Exit Breakout pairs optimized across the entire history will give us the best Gg in the future. (I am using the term "odds" loosely.)

I hope I have expressed myself clearly; I'm not sure that I have. As always, I am not wholly convinced of the position as stated above, it is more of a gut reaction. I hope others will join in this discussion.

Edited for spelling.
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

I greatly appreciate Bobsyd going to that trouble, and I hate to come down against it, but I am afraid I am with EH on this one. Sorry. I feel that in many ways it deserves its own thread (moderator split, maybe?) along the lines of "optimizing for what, by what?". There are many aspects to such an argument, including the sample-size bias of certain stats yielding instability, etc. (e.g. anything with MaxDD as an input samples from a reduced trade sequence, compared with something that relies purely on total performance, like CAGR alone).

Something I think is worth raising at this point is the other pros and cons of the Hill ratio.

With reference to sluggo's chart, I think we all understand the perils of aiming for max CAGR: it is not only very unforgiving of "overshoots", but even if we were to land on it by good fortune, it delivers a very bumpy ride.

By "aiming" for the MAR peak we bias ourselves to the left of the CAGR peak, thus lowering our chances of overshoot/risk of ruin. However, due to the arithmetic bias in MAR, by aiming for the peak we still run the risk of overshooting it, and ending up in very bumpy-ride territory (deep drawdown land).

A key feature of the Hill ratio is that I have yet to encounter a situation where the Hill ratio peak lies to the right of the MAR peak. Thus it serves a useful function of "targeting" the left-hand side of the MAR peak. I.e. even if peak MAR is more important to you than the Hill ratio, by optimizing your training data set for peak Hill ratio you enhance the likelihood of being in a good MAR region in the test data set, while reducing the chances of overshooting the MAR peak in that test data set.

In short: peak MAR keeps you off the nasty side of peak CAGR; peak Hill ratio keeps you off the nasty side of peak MAR, while still in a good, high-performance place.

I still maintain that peak Hill ratio is a better gain:pain location than peak MAR anyway, though.
Last edited by rabidric on Mon Mar 21, 2011 4:35 am, edited 1 time in total.
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

[..continued from above]

So that you don't think I am blind in my praise of my own creation, here is some balancing argument:

One of the problems with the Hill ratio is that whilst it is good at identifying peaks of outperformance, it is not very good at differentiating between poorly performing strats.

It helps if you remember that the Hill ratio effectively states:
"relative to the previous high-water mark, what is my equity if I have one year's worth of CAGR, starting right after hitting my peak drawdown?"

Now consider two strats:
(A) MaxDD = 20%, CAGR = 12.5%, Hill ratio = 0.9
(B) MaxDD = 11%, CAGR = 1.12%, Hill ratio = 0.9

Clearly B is awful where A is just mediocre. So for comparing strats the Hill ratio is not perfect, and may even be rubbish. But I do not use it for that. The Hill ratio is a tool to use when searching for "best all-round" input parameter combinations, i.e. the peak and gradient of the Hill ratio (its dynamics w.r.t. inputs) matter more than its absolute value.
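For the record, a quick numerical check of the two strats above (numbers from the post; the helper function is a minimal sketch of my own):

```python
def hill_ratio(cagr, max_dd):
    # Inputs as decimals (10% = 0.1)
    return (1 + cagr) * (1 - max_dd)

strat_a = hill_ratio(0.125, 0.20)    # mediocre: decent CAGR, big drawdown
strat_b = hill_ratio(0.0112, 0.11)   # awful: tiny CAGR, moderate drawdown
print(round(strat_a, 3), round(strat_b, 3))  # 0.9 0.9 -- the ratio cannot tell them apart
```

Both come out at 0.9 to three decimal places, illustrating why the ratio's absolute value is a poor discriminator between weak strategies.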
Last edited by rabidric on Mon Mar 21, 2011 5:26 am, edited 1 time in total.
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

[..final continuation]

When presenting the Hill ratio, I chose to do it in its most basic form, i.e.

HR = (1 + CAGR) * (1 - MaxDD)
where CAGR and MaxDD are decimals (10% = 0.1).

This basic formula is of course open to elaboration, e.g. using a regression-estimated CAGR as suggested above by others, or perhaps some kind of average drawdown in lieu of MaxDD.

There is also another, possibly more interesting, way of presenting the Hill ratio:
If we convert CAGR into a compound monthly return figure, (1 + CAGR)^(1/12) = 1 + CMGR,
we can then, by playing around with the algebra, rework the Hill ratio formula to give the number of months to climb out of MaxDD, assuming climbout occurs at typical rates of return:

No. of months to climbout = log(1/(1 - MaxDD)) / log(1 + CMGR)

By viewing the Computed Time to Climbout (CTC), we can do a number of things. We can, for example, cross-check it with our max drawdown duration: if we deduct the actual time of descent into MaxDD from the MaxDD duration, we have the Actual Time to Climbout (ATC).

By comparing CTC with ATC we can observe what kind of deviation we have from independence of returns, i.e. whether some form of serial autocorrelation occurs in the equity curve. Does your equity bounce back from deep drawdown at a faster rate than the typical return?

One can also run correlation analysis of CTC against ATC, or of CTC against actual MaxDD duration.
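The climbout arithmetic can be sketched in a few lines of Python (my code; the thread only gives the formula):

```python
import math

def ctc_months(cagr, max_dd):
    """Computed Time to Climbout (CTC): months needed to recover from
    MaxDD, assuming equity compounds at the typical monthly rate
    implied by CAGR. Both inputs as decimals."""
    cmgr = (1 + cagr) ** (1 / 12) - 1               # compound monthly growth rate
    return math.log(1 / (1 - max_dd)) / math.log(1 + cmgr)

# The 100% CAGR / 50% MaxDD case from earlier in the thread:
print(ctc_months(1.0, 0.5))    # ~12 months: a full year's return to climb out

# Compare CTC with ATC (MaxDD duration minus the time spent descending
# into the drawdown) to look for serial autocorrelation in the equity curve.
```

Note how the 100%/50% example gives exactly twelve months, matching the observation earlier in the thread that recovery there takes an entire year's worth of return.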

This brings me to my final point.
You will find that CTC correlates very well with MaxDD duration. Since CTC is a reworking of the Hill ratio, you will find that the Hill ratio and MaxDD duration are also very well correlated.
Therefore the peak of the Hill ratio falls in a very similar place to the optimum of MaxDD duration in parameter searches.
So one could almost get by with using MaxDD duration as a locator function for the ideal leverage that maximizes gain to pain.

I for one always used to shy away from placing too much importance on MaxDD duration: although I intuitively felt it was a good fitness function, I disliked that it seemingly did not take return into account enough (it was a nebulous "time" statistic). However, as I have shown, it is actually implicitly very closely related to actual return versus drawdown depth, or gain to pain. The CTC is computed from just those figures (return and drawdown), and yet is as good as analogous to MaxDD duration in the vast majority of cases when used as a fitness function for leverage or other parameters.

So, by a rather roundabout way, my argument goes: for choosing [input x] so as to maximize gain to pain, MaxDD duration or the Hill ratio are fitness functions that yield the best all-round result, with a lower chance of encountering excess pain than MAR or CAGR fitness functions might if misapplied, while still not overly compromising raw returns in the pursuit of safety. They are a way to target the "sweet spot" on the left-hand slope of the MAR curve that is still quite high on the "mountain" and doesn't require some arbitrary rule of thumb like "use leverage of 2/3 of what peak MAR implies".

I hope I have explained all this properly and clearly enough... let me know if I have made a mistake. (But I have been using this whole philosophy for a while now, so I am pretty sure it is sound.)
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Rabidric,

I have been experimenting with integrating the draw-down over the duration of the back-test to arrive at E(DD), i.e. the mean state of draw-down if one picked a day at random from the back-test. If you combine this with regressed annual return (divide RAR% by E(DD)), you arrive at a nice pain:gain measure that encapsulates return, size of draw-downs and duration of draw-downs. Other requirements (maximum total equity draw-down, minimum RAR%, minimum trades, minimum r^2, etc.) are treated as constraints. One side benefit is that this objective function tends to favor high-r^2 equity curves. Also, the constraints eliminate "bogus good" solutions (e.g. 1% RAR with 0.5% avg DD).
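A minimal sketch of the E(DD) idea in Python (my own illustration; `pain_gain` assumes RAR% is supplied by your backtest engine, and the constraints Eventhorizon mentions are omitted):

```python
def mean_drawdown(equity):
    """E(DD): the average drawdown state over the whole test, i.e. the
    expected drawdown if you picked a day at random."""
    peak = equity[0]
    total = 0.0
    for e in equity:
        peak = max(peak, e)          # running high-water mark
        total += 1.0 - e / peak      # drawdown on this day (0 at new highs)
    return total / len(equity)

def pain_gain(rar_pct, equity):
    """Regressed annual return divided by E(DD), both in percent terms."""
    return rar_pct / (100.0 * mean_drawdown(equity))

eq = [100, 105, 103, 110, 108, 115]  # toy daily equity curve
print(round(mean_drawdown(eq), 4))   # 0.0062
```

Because E(DD) integrates drawdown over time, long shallow drawdowns and short deep ones both raise the denominator, which is how the measure captures duration as well as depth.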

I am sure I am not the first person to think of this - it has a close relation to Seykota's Lake Ratio, which gave me the idea.

Aside: In the end the challenge is to optimize your system around one single goodness measure. Mathematically / procedurally you have no choice - an optimization can only have one objective function. When a person says "well, I look at a trade-off between this, that and the other" their challenge is to express that trade-off in a single number. How else can you perform such an optimization consistently and rationally?
sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo »

Eventhorizon wrote:I am sure I am not the first person to think of this - it has a close relation to Seykota's Lake Ratio
Nor was he the first to think of it. That honor belongs to Peter Martin, who published it in 1989; google "Ulcer Index" to read further. Here is one of Peter Martin's figures:

Image



Eventhorizon wrote: Mathematically / procedurally you have no choice - an optimization can only have one objective function. When a person says "well, I look at a trade-off between this, that and the other" their challenge is to express that trade-off in a single number. How else can you perform such an optimization consistently and rationally?
It's a standard problem; it goes by the name "multicriteria optimization". Many people, including Goldberg, feel that evolutionary algorithms (of which the genetic algorithm may be the best known) are particularly well suited to multicriteria optimization problems.
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Thanks for the interesting links, Sluggo, and the history of Lake Ratio.

Our present problem is, generally, to find that single set of parameters that maximizes our bliss.

For the benefit of those not familiar with the field (including me), the solution to multi-objective optimizations is a solution space - the pareto-frontier. This defines the volume (hyper-sphere?) of the available solution space wherein changing a parameter value to improve objective function n, at worst leaves all other objective functions unchanged. This space may or may not include the optimum of any one of the objective functions (or more than one if they happen to be at the exact same point).

A totally made-up example: in a dual moving average crossover system with a 200-day slow average, you find %CAGR peaks at a fast moving average of 50, while MAR peaks at 45. So 45/200 is on the Pareto frontier, as is 50/200. Both MAR and CAGR increase as the fast moving average rises toward 45, and as it falls toward 50; in between is a no-go zone where, if one increases, the other decreases. Let's say that 50/225 and 55/250 are also on the frontier. To fill out the illustration, let's say CAGR improves with longer MAs while MAR decreases. Now we have to trade off %CAGR against MAR to choose our preferred solution from along the Pareto frontier.

What is really interesting about the approaches highlighted in Sluggo's links is that we get a much smaller solution space to work with - the pareto frontier. It is a space in which we know that all of our objectives have reached minimum satisfactory levels and any final choice we make will be no worse than those levels.

Unfortunately, it still leaves us with the question of which actual set of values to use in our trading system: there remains the requirement for a DM (decision maker) to make the trade-offs. The DM, either explicitly or implicitly, is going to weight the different objective functions and favor one over another; she can do that either subjectively or objectively (creating one objective function from the many). Either way one is going to use a particular solution, which implies some weighting has consciously or unconsciously been applied to the multiple objective functions. Which brings us back to my original point: to arrive at a single solution you can only have (implicitly or explicitly) one objective function!

What has been your experience implementing these approaches?
sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo »

Me, I sidestep the problem. I allow multiple solutions.

Rather than getting my knickers in a twist about whether to optimize for criterion A, or for criterion B, or for C, or for D ... I just optimize for each of them and get four different optimal solutions. Often this produces a bit of additional diversification, since what's best for A and what's best for B, are (often) structurally quite different.

I got the idea from Liz Cheval (an original Turtle) who casually mentioned in the Q&A session after a talk she gave, that her firm simultaneously trades N different strategies, and each of the N strategies is optimized for a different figure of merit. Here is that talk, regrettably without the Q&A (link)

This catholic (with a lowercase c) "use them all" approach, may lead you to invent some wildly unorthodox figures of merit / measures of goodness. I can imagine you might optimize system #53 in your suite, for this measure:
  • Goodness53 = Rsquared * Sharpe * (% of days that (A) you were Long, and also (B) the long term trend was down that day, i.e., the slope of the 200 day SMA was negative that day)
just to give an example of an unorthodox figure of merit that might produce some extra diversification
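One way such an unorthodox measure could be sketched (my code, not sluggo's; `r_squared` and `sharpe` stand in for whatever your platform reports, and the daily series are assumed to come out of the backtest):

```python
def goodness53(r_squared, sharpe, was_long, sma200_slope):
    """Sketch of sluggo's example figure of merit: reward being long
    while the 200-day SMA slope was negative, scaled by R-squared
    and Sharpe."""
    contrarian_days = sum(
        1 for long_flag, slope in zip(was_long, sma200_slope)
        if long_flag and slope < 0
    )
    pct = contrarian_days / len(was_long)
    return r_squared * sharpe * pct

# Toy series: long on 3 of 4 days, SMA slope negative on days 1 and 4
print(round(goodness53(0.9, 1.5, [True, True, False, True],
                       [-1.0, 2.0, -3.0, -0.5]), 3))  # 0.675
```

Optimizing different systems for measures like this, rather than one shared yardstick, is the route to the structural diversification the post describes.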
bobsyd
Roundtable Knight
Posts: 405
Joined: Wed Nov 05, 2008 2:49 pm

Post by bobsyd »

In response to the moderator note above, I’ve reattached my blox here – even though as far as I can tell they are still there in my previous post.

As an added "bonus"
Attachments
12_Additional_Goodness_Measures.tbx
(2.61 KiB) Downloaded 294 times
Walk_Forward_v4_(modified).tbx
(28.98 KiB) Downloaded 281 times
Multiple System Walk Forward Equity Curve & Statistics TEMPLATE EXAMPLE 1.xls
(3.1 MiB) Downloaded 383 times
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Thank you Sluggo for sending me off on an interesting detour!!

I realized what I had written above regarding the pareto frontier was not accurate, so I am hoping to rectify that with this post.

Following is an idealized pair of objective functions (just parabolas); one might be, say, MAR, the other CAGR. We want to optimize for both. We have a single variable, x, to play with, therefore the x-axis is our decision space. In the real world we don't know the shapes of f1 and f2, so, to simulate the first step of a genetic algorithm, I randomly selected 100 values of x in [20, 60] and 'generated' values of f1 and f2. I added in the value of f1 (-36) at the x (48) that maximizes f2, and the value of f2 (11.25) at the x (27) that maximizes f1.

Then I plot f2 vs f1 in the second chart. It is called the objective space because we are plotting the matching pairs of our objective functions, f1 and f2, that resulted from our random choices of x. The points in red are on the Pareto frontier; the points in blue are not. The red points are said to "dominate" all other points: for any red point, you cannot find another point that has a greater value for BOTH f1 and f2. This is not true for any of the blue points. Guess which values mark the boundaries of the Pareto frontier. Notice one point outside the ideal Pareto optimum: this is the nature of a random search; the point IS currently dominant, but future generations of our genetic search should ultimately eliminate it.

I went back to the first chart and colored the values of x in the decision space that gave rise to the dominant outcomes in the objective space. This narrows the search for ideal values of x down to the red points. In this simple example, they are the points that lie between the maxima of f1 and f2. You can easily see that, for any point in blue, you can always find another solution that improves BOTH f1 and f2, so you would never use one of those points.
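Eventhorizon's experiment is easy to reproduce; here is a small sketch (my code, with made-up parabola constants, so the numbers won't match his charts exactly):

```python
import random

# Two idealized objective functions (parabolas) peaking at x = 27 and x = 48,
# echoing the maxima quoted in the post.
def f1(x):
    return -(x - 27) ** 2 / 10 + 55

def f2(x):
    return -(x - 48) ** 2 / 10 + 30

random.seed(1)
pts = [(x, f1(x), f2(x)) for x in (random.uniform(20, 60) for _ in range(100))]

def dominated(p, others):
    """True if some other point is at least as good in both objectives
    and strictly better in at least one."""
    return any(q[1] >= p[1] and q[2] >= p[2] and (q[1] > p[1] or q[2] > p[2])
               for q in others)

frontier = [p for p in pts if not dominated(p, pts)]
# Nearly all frontier x values fall between the two maxima; a stray point
# just outside can survive a random search, as the post notes.
print(len(frontier), min(p[0] for p in frontier), max(p[0] for p in frontier))
```

The quadratic all-pairs dominance check is fine for a hundred points; real multi-objective optimizers use faster non-dominated sorting, but the definition of dominance is the same.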

As an exercise, imagine if, say, f1 had a dip where its maximum now is, so there were 2 maxima in that curve - what would happen to our pareto solution?

Bear in mind that this is a trivial example (the kind I like best because it helps you see what is going on). I can picture well enough what would happen with a 2-dimensional decision space (a plane with the objective functions describing surfaces above it, and the pareto solutions being outlines on the plane) and 3 objective functions (a 3-D surface in Objective Space). I can even manage a 3-D decision space without the objective functions, where the pareto solutions would be 1 or more 3-D surfaces. Beyond that, forget about it.

Hope this is of interest.

Edited for clarity.
Attachments
The resulting values of f1 and f2 plotted in the Objective Space
The resulting values of f1 and f2 plotted in the Objective Space
ObjectiveSpace1.png (6 KiB) Viewed 10323 times
The objective functions, f1, f2 and the Decision Space
The objective functions, f1, f2 and the Decision Space
DecisionSpace1.png (7.32 KiB) Viewed 10323 times
sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo »

I copied Eventhorizon's figure and relabeled it using mechanical trading system terminology instead of Scary Math Words. Everything he said is true; this is simply one way to interpret it from a trading-system perspective.
Attachments
Same figure, same data.  Only the descriptions have been modified
Same figure, same data. Only the descriptions have been modified
GMGMplot.png (38.87 KiB) Viewed 10296 times
rabidric
Roundtable Knight
Posts: 243
Joined: Mon Jan 09, 2006 7:45 am

Post by rabidric »

Nice work, gents.
8)
Though I would hope the concept is intuitive to most traders here anyway.

Sidenote: the multi-measure (final two) graphs are definitely a very clear way of viewing things, though for some reason my brain always defaults to "simultaneous parallel one-measure plots" like the first chart, and then tracks changes like some kind of mental interlinked graphic equalizer. I think this may be because that still works for higher-dimensional analysis of perhaps up to 5-9 variables*. 2D and 3D are sometimes not enough! 8)

*Human working memory is optimized for roughly this number.

Also, I regret suggesting "Hill ratio" as a name for my ratio; it makes me feel like a vain twat every time I think of it. We could still call it that, though, if we instead pretend that it is comparing the heights of pre- and post-drawdown equity peaks. Then all someone needs is a "forest ratio", and with lakes, hills and trees things start getting quite picturesque :)

in before: "Freshwater Salmon" ratio !
Eventhorizon
Roundtable Knight
Posts: 229
Joined: Thu Jul 08, 2010 2:36 pm
Location: Boulder, CO
Contact:

Post by Eventhorizon »

Sluggo,

Nice version of the chart - much clearer than mine!!