## Confidence Interval around a system change - OLD

Please see the updated and amended post on this topic, dated 14 October 2020.

Last edited by Magnus on Wed Oct 14, 2020 12:28 pm, edited 3 times in total.

### Re: Confidence Interval around a system change

Hi,

I use the Z score (as shown on your last sheet) to compare the before and after results.

(I am not a statistical whiz, so I keep it simple.)
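A minimal sketch of that kind of before/after comparison, as a two-sample z statistic on summary statistics of the returns. The function name and the numbers in the example are illustrative, not from the original post:

```python
import math

def z_score(mean_a, std_a, n_a, mean_b, std_b, n_b):
    """Two-sample z statistic for the difference in means
    (results before vs. after the system change)."""
    se = math.sqrt(std_a ** 2 / n_a + std_b ** 2 / n_b)
    return (mean_b - mean_a) / se

# Hypothetical daily-return statistics before and after a change.
z = z_score(0.0005, 0.012, 250, 0.0009, 0.011, 250)
# |z| > 1.96 would suggest a real difference at roughly the 95% level.
```

This treats the two samples as independent; for paired before/after results on the same portfolios, a z (or t) statistic on the paired differences is the usual alternative.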

One observation:

MAR is an inadequate measure because it uses only the single largest drawdown (DD).

This can force you to reject an otherwise desirable solution because of one outlier.

The Sortino Ratio (based on the semi-deviation of the DDs) is more stable and less sensitive to outliers.
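To make the contrast concrete, here is a sketch of both metrics under common textbook definitions (MAR = CAGR divided by worst drawdown; Sortino = mean excess return over downside deviation). These are standard formulations and may differ in detail from any particular backtesting package:

```python
import math

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, mdd = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        mdd = max(mdd, (peak - x) / peak)
    return mdd

def mar_ratio(cagr, equity):
    """MAR = compound annual growth rate / worst drawdown.
    A single outlier drawdown dominates the denominator."""
    return cagr / max_drawdown(equity)

def sortino_ratio(returns, target=0.0):
    """Sortino = mean excess return / downside deviation.
    Averages over all below-target returns, so one outlier matters less."""
    downside = [min(0.0, r - target) for r in returns]
    dd = math.sqrt(sum(d * d for d in downside) / len(returns))
    return (sum(returns) / len(returns) - target) / dd
```

Because `mar_ratio` divides by a single worst-case number, one bad stretch can halve it, while `sortino_ratio` spreads that same event across the whole downside sample.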

If you use the Sharpe Ratio as the measure, the minimum hedge fund expectation is

AR / AV > 1

where AR = Average daily return; AV = Average daily volatility

AR / AV >= 3 is "possible" according to Perry Kaufman, "but unusually good".

Leslie

### Re: Confidence Interval around a system change

Thanks for your input. After speaking to people with additional statistical expertise ..., I am implementing some changes, which I might post here later.

Having looked into your comment on MAR, I believe it is fine to use. There might be some outliers, but as long as the simulation output we decide to look at (MAR, Sharpe, ARR, ...) is deemed to follow a normal distribution (which we can check with a chi-square goodness-of-fit test, for example) and we have a sufficient number of data points (i.e. test results) from random portfolios, we can apply a confidence-interval calculation to assess whether the system change results in a meaningful improvement.

We need to run tests before and after a system change on a meaningful number (30+) of randomly generated portfolios.

In any case, if one has a preferred metric (other than MAR, for example), the same calculations can be applied. I would be surprised if you came to a different conclusion, unless the system change results in a relatively minor improvement (in which case it is questionable whether you would want to implement it anyway).
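The procedure above can be sketched as a confidence interval on the paired metric differences, one difference (after minus before) per random portfolio. This is my reading of the described approach, not the poster's actual spreadsheet:

```python
import math

def mean_ci(diffs, z=1.96):
    """~95% confidence interval for the mean of paired metric differences
    (after minus before), one per random portfolio. Assumes the metric is
    roughly normally distributed, which the post suggests verifying
    (e.g. with a chi-square goodness-of-fit test) before relying on it."""
    n = len(diffs)
    m = sum(diffs) / n
    s = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    half = z * s / math.sqrt(n)
    return m - half, m + half

# If the entire interval lies above zero, the change is a meaningful
# improvement at roughly the 95% confidence level.
```

With 30+ portfolios, as suggested, the normal (z) critical value is a reasonable approximation; for smaller samples a t critical value would be more appropriate.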
