Objective function

Posted: Fri Jan 07, 2011 10:29 am
by LeviF
Is there a robust objective function that is independent of the amount of leverage employed? It's hard to compare apples to apples when, all else being equal, increasing leverage tends to increase the value of most objective functions.

Posted: Fri Jan 07, 2011 11:13 am
by sluggo
No, there is not. All will have discontinuities (absences of "robustness") at risk = 0, and most become meaningless when drawdown reaches or exceeds 100%.

What you can do, however, is calculate the sensitivity of your Goodness measure, as you vary leverage. (Calculus people will recognize this as the first partial derivative of Goodness with respect to leverage).
  • Sensitivity = (change in Goodness) / (change in leverage)
You'll want to do this for very small changes in leverage, which means in Blox you'll want to (A) use very fine granularity when you step the %risk parameter; and (B) print out your Goodness measure with lots of digits of precision because you're hoping it won't change very much. Example Blox code is presented in this thread (LINK).

Calculate the sensitivities of a few dozen measures of Goodness, plot them, and choose the one whose curve is lowest.
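The finite-difference sensitivity described above can be sketched as follows. This is a toy illustration, not Blox code: `backtest_goodness` is a hypothetical stand-in for the objective value reported by one backtest at a given leverage.

```python
# Finite-difference sensitivity of a goodness measure with respect to leverage.
# backtest_goodness is a purely illustrative toy model, NOT a real backtest:
# goodness rises with leverage, then decays.

def backtest_goodness(leverage):
    return leverage * (2.0 - leverage)

def sensitivity(goodness_fn, leverage, dl=1e-4):
    """First partial derivative of goodness w.r.t. leverage,
    estimated by a central difference over a very small step dl."""
    return (goodness_fn(leverage + dl) - goodness_fn(leverage - dl)) / (2 * dl)

# Analytic derivative of L*(2-L) is 2 - 2L, so at L = 0.5 this should be ~1.0
s = sensitivity(backtest_goodness, 0.5)
```

Note the small step `dl`: this is the code analog of "very fine granularity when you step the %risk parameter" and "lots of digits of precision" from the post above.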

Another completely different approach, which costs a mere factor of two in backtesting speed, is to run each backtest as a two-stage affair. Stage 1 uses some arbitrarily chosen value of leverage and calculates performance statistics. "Before Test" of Stage 2 then uses the Stage 1 results to calculate a new, different value of leverage, normalized so that all Stage 2 backtests will have the same value of X, where X might be CAGR, or MaxDD, or volatility of equity curve returns (standard deviation), or whatever you choose. Then it runs the Stage 2 backtest using the new, different value of leverage.

Now all stage 2 backtests are directly comparable to one another, and you can use any and all measures of goodness you like (!).
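A minimal sketch of the two-stage idea, normalizing on X = volatility of returns. The function names and the assumption that volatility scales roughly linearly with leverage are mine, not from the post.

```python
import statistics

def stage1_vol(returns):
    """Stage 1: run at an arbitrary leverage, measure realized volatility."""
    return statistics.stdev(returns)

def stage2_leverage(vol_stage1, target_vol, leverage_stage1):
    """Rescale leverage so the Stage 2 run lands on target_vol.
    Assumes volatility scales linearly with leverage."""
    return leverage_stage1 * target_vol / vol_stage1

# Two hypothetical systems, both backtested at leverage 1.0 in Stage 1:
sys_a = [0.02, -0.01, 0.03, -0.02]
sys_b = [0.08, -0.04, 0.12, -0.08]

target = 0.05
lev_b = stage2_leverage(stage1_vol(sys_b), target, 1.0)
# Stage 2 returns under the linear-scaling assumption:
scaled_b = [r * lev_b for r in sys_b]
# scaled_b now has stdev == target, so its goodness measures are
# directly comparable with any other system normalized the same way.
```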

Posted: Fri Jan 07, 2011 11:58 am
by Paul King
As sluggo implies when he says "and choose the one whose curve is lowest", IMHO it's more useful to approach this problem by using increasing leverage as a measure of how robust your system is, rather than trying to find a measure of goodness that's independent of leverage (or includes some measure of leverage like margin:equity ratio).

Assuming one has a measure of risk-adjusted return, e.g. MAR, then as leverage is increased a robust system should show the smallest change in MAR, since both return and risk should increase in an approximately linear fashion (so you're being rewarded for each additional unit of risk you add).

If a particular system at some point starts to provide a smaller increase in return than the corresponding increase in risk (represented by drawdown in this case) then you have found the point at which it ceases to be worth it to increase leverage.

In this way you can compare systems by finding the point at which increased leverage causes an x% decrease in the measure of goodness, and then choose the system that remains robust at the highest leverage.

Paul

Posted: Sat Jan 15, 2011 12:04 pm
by michaelt
I might not be fully understanding the question's intention. I interpret the question to be how to compare systems that have different nominal values for PnL, drawdown, etc.

The way I compare systems: each system is capitalized with a multiple of its drawdown (say, 5X). Then metrics such as Sortino, Sharpe, etc., are comparable, as every system - in this approach - uses the same basis for leverage.
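A sketch of that normalization, with hypothetical numbers. Capitalizing every system at the same multiple of its own max drawdown forces the worst drawdown, as a fraction of capital, to be identical across systems (here 1/5 = 20%), so the remaining metrics are computed on a common basis.

```python
def normalized_metrics(pnl_series, max_drawdown, dd_multiple=5):
    """Capitalize the system at dd_multiple x its max drawdown, then
    express results as returns on that common capital base."""
    capital = dd_multiple * max_drawdown
    returns = [p / capital for p in pnl_series]
    total_return = sum(returns)
    worst_dd_pct = max_drawdown / capital  # always 1/dd_multiple by construction
    return total_return, worst_dd_pct

# Hypothetical per-period PnL in dollars, with a $1500 max drawdown:
tot, worst = normalized_metrics([1000, -500, 2000], max_drawdown=1500)
# capital = 7500; worst drawdown is 20% of capital for every system sized this way
```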

you wrote:
" Its hard to compare apples to apples when all else equal, increasing leverage tends to increase the value of most objective functions"

I am not sure this applies to the relative stats. For example, annualized return divided by standard deviation should stay the same regardless of leverage. I think the relative metrics should not change.
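The invariance claimed here holds exactly under simple (non-compounded) scaling of returns, since mean and standard deviation both scale by the same factor. A quick check with made-up numbers:

```python
import statistics

def return_per_vol(returns):
    """Return-divided-by-standard-deviation style ratio (no risk-free rate,
    no annualization; those constants don't affect the comparison)."""
    return statistics.mean(returns) / statistics.stdev(returns)

base = [0.02, -0.01, 0.03, 0.00]
levered = [3 * r for r in base]  # 3x leverage as simple linear scaling
# return_per_vol(base) == return_per_vol(levered): the ratio is scale-invariant
```

Note the caveat implied in the rest of the thread: with compounding and with drawdown-based measures like MAR, this invariance breaks down, which is why the question arises at all.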

Posted: Sat Jan 15, 2011 1:35 pm
by sluggo
I believe the question arises from running large Stepped Parameter Simulations with Trading Blox software, and then becoming frustrated that the Goodness results (the output of the "Objective Function") are not comparable, because of the different leverage. For example the green curve in (THIS) plot, is a measure of Goodness called "MAR Ratio". Notice that it swings around dramatically as leverage (on the horizontal axis) is varied.

Here's a typical scenario that Blox users encounter all the time (below). One of the parameters stepped is the distance-to-stop, highlighted in red. Since distance-to-stop determines the risk of each trade, and since this system employs Fixed Fractional position sizing (which sets leverage proportional to (1/Risk)), the analyst is varying leverage -- whether he wants to or not. Leverage is not constant across all backtests in the Blox run.
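The coupling sluggo describes can be made concrete with a standard Fixed Fractional sizing formula (the formula itself is standard; the parameter values are hypothetical):

```python
def contracts(equity, risk_fraction, stop_distance, point_value):
    """Fixed Fractional sizing: risk a fixed fraction of equity per trade.
    Position size (and hence leverage) is proportional to 1 / stop_distance."""
    return (equity * risk_fraction) / (stop_distance * point_value)

# Risking 1% of $100,000 on a contract worth $50 per point:
wide  = contracts(100_000, 0.01, stop_distance=2.0, point_value=50)  # 10 contracts
tight = contracts(100_000, 0.01, stop_distance=1.0, point_value=50)  # 20 contracts
```

Halving the distance-to-stop doubles the number of contracts, so stepping that parameter silently sweeps leverage as well.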

And since the Goodness measures all have (Sensitivity > 0), changing the leverage changes the Goodness, even with all else remaining the same. LeviF seeks a Goodness measure (Objective Function) with Sensitivity==0, so that the reported Goodness does not vary when Leverage varies.

The only solution I'm aware of is the "mere factor of two" stratagem / awful kludge mentioned above.