not another robustness thread...

Posted: Thu Apr 19, 2012 3:49 pm
by illuminati
http://www.dorseywrightmm.com/downloads ... gement.pdf

The above link is a white paper that documents the process Dorsey Wright takes in evaluating and testing systems. Although their paper is about relative strength, I wanted to get people's opinions on it and whether such a process can be adapted for testing system components.

I.e., in the paper, they randomly select x top-decile-ranked stocks and hold them. By analogy, I would randomly select x opportunities (given my entry method) from the basket of trades presented each day. This will probably require a massive portfolio. When the system exits a trade, it again randomly selects one of the current opportunities as a replacement. We define global parameters such as how many positions to hold at any one time. Each run represents a single simulation within a large pool of concurrently running simulations (1000+). The point is to avoid assuming that we are favourably picking entries.

Again, this requires a large number of instruments to present to the system so that enough opportunities pop up. It may be inappropriate for futures, as the probability of 10+ signals popping up on any single day is low. But maybe for trend following in stocks?
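
Something like the rough Python sketch below is what I have in mind. It's purely illustrative: the position limit, simulation count, and the shape of the "opportunity" records are made-up placeholders, not anything from the Dorsey Wright paper.

import random

MAX_POSITIONS = 10      # global parameter: positions held at any one time
NUM_SIMULATIONS = 1000  # pool of concurrently running random portfolios


def run_one_simulation(daily_opportunities, rng):
    """daily_opportunities: list of days; each day is a list of
    (symbol, trade_return, holding_days) tuples produced by the entry method."""
    open_positions = []   # each entry is (symbol, trade_return, exit_day)
    closed_trades = []

    for day, opportunities in enumerate(daily_opportunities):
        # close anything whose holding period has elapsed
        still_open = []
        for symbol, trade_return, exit_day in open_positions:
            if day >= exit_day:
                closed_trades.append(trade_return)
            else:
                still_open.append((symbol, trade_return, exit_day))
        open_positions = still_open

        # randomly fill the free slots from today's basket of signals,
        # so entry selection is never cherry-picked
        free_slots = MAX_POSITIONS - len(open_positions)
        if free_slots > 0 and opportunities:
            picks = rng.sample(opportunities, min(free_slots, len(opportunities)))
            for symbol, trade_return, holding_days in picks:
                open_positions.append((symbol, trade_return, day + holding_days))

    return closed_trades


def run_pool(daily_opportunities, seed=0):
    """Run the whole pool of random portfolios; return average trade return per run."""
    results = []
    for i in range(NUM_SIMULATIONS):
        rng = random.Random(seed + i)
        trades = run_one_simulation(daily_opportunities, rng)
        if trades:
            results.append(sum(trades) / len(trades))
    return results

Looking at the spread of outcomes across the 1000+ runs would then show whether the entry method has an edge regardless of which particular signals get taken.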

hmm..

Posted: Thu Apr 19, 2012 4:32 pm
by LeviF
There is a random portfolio block around here somewhere that you can play with.