Posted: Wed Sep 22, 2004 3:15 pm
I like the idea of an automatic robustness check. The suggestions for a user-specified epsilon or variance could work, as could two-pass variance calibration. It seems the former would be simpler as a first step.

Forum Mgmnt wrote:
Ah, now you get to the real point of all of this, "Automatic Optimization", where what you are looking for is NOT the peak of the test, but the best Robust Peak.
If we have a mechanism for automated robustness checking and a numeric robustness measure suited to individual tastes, then we can do automatic optimization in stages:
1) Auto-optimize the Parameters to find some candidate peaks using genetic algorithms or advancing granularity iteration.
2) Run Automatic Robustness Checks on those peaks
3) Use this information for another genetic optimization pass, where the result of the Automatic Robustness Check becomes the goodness measure rather than the raw test result. (A rough sketch of these stages follows below.)
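
To make the stages concrete, here's a minimal, self-contained sketch in Python. Everything in it is hypothetical: backtest() stands in for a real system test, the robustness check averages fitness over a user-specified epsilon-ball, and the optimizer is a toy advancing-granularity grid scan rather than a genetic algorithm. The toy surface has a sharp spike and a broad hill, so the raw and robust passes disagree about the "best" peak:

[code]
# Hypothetical stand-in for a full system test over historical data.
# The surface has a sharp (fragile) spike near (2, 2) and a broad
# (robust) hill near (7, 7).
def backtest(params):
    x, y = params
    spike = 5.0 / (1.0 + 50.0 * ((x - 2) ** 2 + (y - 2) ** 2))
    hill = 3.0 / (1.0 + 0.2 * ((x - 7) ** 2 + (y - 7) ** 2))
    return spike + hill

# Stage 2: automatic robustness check -- average fitness over a grid of
# perturbations inside a user-specified epsilon-ball around the peak.
def robustness(params, epsilon=0.5, steps=2):
    offsets = [epsilon * k / steps for k in range(-steps, steps + 1)]
    values = [backtest((params[0] + dx, params[1] + dy))
              for dx in offsets for dy in offsets]
    return sum(values) / len(values)

# Stages 1/3 optimizer: advancing-granularity iteration -- scan a coarse
# grid, then re-scan ever finer grids centered on the current winner.
def optimize(fitness, lo=0.0, hi=10.0, passes=4, points=21):
    center, span = ((lo + hi) / 2, (lo + hi) / 2), hi - lo
    for _ in range(passes):
        step = span / (points - 1)
        grid = [(center[0] - span / 2 + i * step,
                 center[1] - span / 2 + j * step)
                for i in range(points) for j in range(points)]
        center = max(grid, key=fitness)
        span /= 4.0  # zoom in around the winner for the next pass
    return center

raw_peak = optimize(backtest)       # Stage 1: raw fitness finds the spike
robust_peak = optimize(robustness)  # Stage 3: robustness as goodness measure
print("raw peak:    (%.2f, %.2f)" % raw_peak)
print("robust peak: (%.2f, %.2f)" % robust_peak)
[/code]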
Ok, I'm going to rain on my own parade a bit. There might be a kink in how the distributions are collected and analyzed. It always pays to ask: distribution of what? What's of interest, as c.f. mentions, is not the best peak but the best robust peak. If we're using distributions to gauge parameter sensitivity, it may make more sense to look at how the robust peak varies over time rather than the raw peak (which is likely to jump around). I don't have proof, but a local peak, or even several local peaks, may not aggregate into the desired robust peak. On the other hand, what we would hope to see is that the distribution of robust peaks forms a tight cluster, i.e. that we really have found something optimal and stable.
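
As a hypothetical illustration of that last point, the sketch below re-runs a (1-D) robust-peak search on successive data windows and measures how tightly the winners cluster. The drifting toy surface and all names are made up; windowed_fitness() would really be a backtest restricted to one window:

[code]
import math
import statistics

# Hypothetical stand-in for a backtest on one data window: a broad
# optimum near 7 whose exact location drifts from window to window.
def windowed_fitness(param, window):
    center = 7.0 + 0.4 * math.sin(window)
    return 3.0 / (1.0 + 0.2 * (param - center) ** 2)

# Robust fitness: average over an epsilon-neighborhood of the parameter.
def robust_fitness(param, window, epsilon=0.5, steps=5):
    offsets = [epsilon * k / steps for k in range(-steps, steps + 1)]
    return sum(windowed_fitness(param + o, window)
               for o in offsets) / len(offsets)

# Collect the robust peak for each window and look at the spread;
# a tight cluster suggests a genuinely stable optimum.
grid = [i * 0.1 for i in range(101)]  # candidate params in [0, 10]
peaks = [max(grid, key=lambda p: robust_fitness(p, w)) for w in range(10)]

print("robust peaks per window:", ["%.1f" % p for p in peaks])
print("cluster spread (stdev): %.3f" % statistics.stdev(peaks))
[/code]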
I'd imagine the robust peak algorithm might use a goodness/fitness function in combination with image edge-detection techniques, or something similar.
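
For what that might look like, here's a hypothetical sketch: sample the fitness surface onto a grid, treat it as a grayscale image, and use a Sobel operator (a standard edge-detection kernel) to penalize peaks that sit next to cliffs. Same toy surface as above; none of this comes from a real system:

[code]
import math

# Same toy surface as above: a fragile spike near (2, 2) and a robust
# hill near (7, 7). Hypothetical stand-in for real test results.
def fitness(x, y):
    return (5.0 / (1.0 + 50.0 * ((x - 2) ** 2 + (y - 2) ** 2))
            + 3.0 / (1.0 + 0.2 * ((x - 7) ** 2 + (y - 7) ** 2)))

STEP = 0.25
N = 41  # sample the surface on a 41 x 41 grid covering [0, 10] x [0, 10]
grid = [[fitness(i * STEP, j * STEP) for j in range(N)] for i in range(N)]

# Sobel kernels estimate the local gradient of the "image".
def edge_strength(i, j):
    gx = ((grid[i+1][j-1] + 2 * grid[i+1][j] + grid[i+1][j+1])
          - (grid[i-1][j-1] + 2 * grid[i-1][j] + grid[i-1][j+1]))
    gy = ((grid[i-1][j+1] + 2 * grid[i][j+1] + grid[i+1][j+1])
          - (grid[i-1][j-1] + 2 * grid[i][j-1] + grid[i+1][j-1]))
    return math.hypot(gx, gy)

# A sharp spike has near-zero gradient at its center but is ringed by
# strong edges, so look at the worst edge in a small window around it.
def nearby_edges(i, j, radius=2):
    return max(edge_strength(a, b)
               for a in range(i - radius, i + radius + 1)
               for b in range(j - radius, j + radius + 1))

# Score each interior cell: high fitness, weak surrounding edges.
best = max(((grid[i][j] - nearby_edges(i, j), i, j)
            for i in range(3, N - 3) for j in range(3, N - 3)))
score, bi, bj = best
print("robust peak candidate: (%.2f, %.2f), score %.2f"
      % (bi * STEP, bj * STEP, score))
[/code]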
Cheers,
Kevin