Optimization question
I am working on optimizing dual and triple moving average systems. On a single symbol it's not a big deal: an exhaustive optimization over 20 years of data takes only about 8 hours, and Monte Carlo around 5 minutes. The problem with optimizing a portfolio of futures is that for a triple moving average system (10 years of data on 36 commodities) an exhaustive optimization takes around 25 days, while Monte Carlo takes only 40 minutes. I heard from some developers that Monte Carlo simply keeps "bombing" the same spots of data with the same parameters, so it's not that effective. On the other hand, exhaustive mode takes up so much time that I am afraid my computer won't handle 28 days at 90% CPU usage. Does anyone have experience with this? Any suggestions?
I can't comment on exhaustive testing vs. Monte Carlo, but I can offer suggestions about 90% CPU usage.
As long as you can keep your system cool, it should be able to handle that amount of load. I say "should" because I don't know anything about your machine.
Labs all over the world string together tens and hundreds of "normal" machines to create clusters. Some of the jobs they run take months to complete.
-
- Roundtable Fellow
- Posts: 82
- Joined: Wed Oct 08, 2003 1:19 pm
- Location: Washington, DC
I think you misread my post. If you keep your system cool (don't let the CPU get too hot), then it would not be a problem. By "system tool" do you mean the testing software?
As far as the memory upgrade goes... it definitely would not hurt, but my guess is that the optimization is more CPU-bound than RAM-bound. Maybe c.f. or Tim could give you better insight on this.
The cheapest way to cool your system better is to add one or more fans. The best way to control the temperature is to add a water-cooling kit,
although you might not need either; it all depends on how the system runs. Run it hard for a little while and get a temperature reading. There is software that will display this for you, or you can usually get the reading from your BIOS.
-
- Roundtable Knight
- Posts: 1842
- Joined: Tue Apr 15, 2003 11:02 am
- Contact:
hi Forum Mgmnt, yes I am running Wealth Lab, and optimizing a triple moving average system
on a portfolio of 36 US commodities (10 years of data) in exhaustive mode. Once it showed it would take 27 days to complete, I canceled it; maybe it will come down to 15 days in a day or so, I haven't tried it. Running the same optimization script on Lean Hogs over 10 years takes around 7 hours. What do you personally prefer, exhaustive mode or Monte Carlo? Monte Carlo is very fast, but it doesn't give all the information needed to find better parameters.
Code:
{#OptVar1 14;2;19;1}
{#OptVar2 30;20;48;1}
{#OptVar3 50;49;170;1}
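Reading each {#OptVar} line above as default;min;max;step, a quick sanity check of the exhaustive grid size (assuming inclusive endpoints) looks like this:

```python
# Parameter ranges taken from the OptVar lines above, read as
# default;min;max;step with inclusive endpoints (an assumption).
small = range(2, 19 + 1)    # OptVar1: fast EMA, 2..19
medium = range(20, 48 + 1)  # OptVar2: medium EMA, 20..48
large = range(49, 170 + 1)  # OptVar3: slow EMA, 49..170

total = len(small) * len(medium) * len(large)
print(total)  # 18 * 29 * 122 = 63684 parameter combinations
```

That lands close to the 63,384 figure quoted later in the thread; the small difference would come down to how the endpoints are counted.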
ok, here are some Hogs results:
as you see, on Hogs it took six and a half hours,
and here are the actual results.
So as we see on Hogs, the best gainer was
EMA 18 / EMA 41 / EMA 165. It's amazing what exhaustive mode can do; there were only 16 trades. Here are the results of the best triple moving average system (with the above parameters) on 10 years of Hogs data.
Avg bars held was 245 days, so it might be something for long-term trend followers to think about.
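For reference, here is a minimal sketch of a triple EMA system along these lines. The thread does not give the exact entry/exit rules used in Wealth Lab, so the ordering rule below (long when fast > medium > slow, flat otherwise) is an assumption for illustration:

```python
def ema(prices, period):
    """Standard exponential moving average with smoothing 2/(period+1)."""
    alpha = 2.0 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def triple_ma_signals(prices, fast=18, medium=41, slow=165):
    """Return 1 (long) where fast EMA > medium EMA > slow EMA, else 0.
    The ordering rule is an assumption; the thread does not state the
    exact entry/exit logic used in Wealth Lab."""
    f, m, s = ema(prices, fast), ema(prices, medium), ema(prices, slow)
    return [1 if f[i] > m[i] > s[i] else 0 for i in range(len(prices))]
```

With the Hogs winner above, this would be called as `triple_ma_signals(prices, 18, 41, 165)`.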
From the link below, it seems that the chassis temperature should not go above 38°C, and about 72°C for the CPU itself. This is a little bit old, but it should be pretty close.
http://www.tomshardware.com/2004/11/14/ ... page3.html
Barli,
One of the reasons we spent so much time working on the performance of TradingBlox is the time it takes to run these sorts of optimizations. In the example you cite, there are 63,384 individual tests required to do a complete optimization. Even with Trading Blox (which is about 50 times faster), an exhaustive test would take more than 13 hours on 36 markets over 20 years on a typical two- or three-year-old machine.
You don't specify your settings for the Monte Carlo searching in WealthLab; these will affect the results considerably, as well as the time it takes to run the test. When speaking of Monte Carlo within WealthLab, you are using a very specialized sense of the term, one with which those who do not use WealthLab are probably not familiar: namely, a mechanism for determining an optimum set of parameter values without having to run an exhaustive test.
According to their manual, WealthLab does this by picking some random values within the range specified for optimization and then, in successive passes, narrowing down that range until an optimum set is determined. This is a fairly unsophisticated approach which is susceptible to a problem known in computer science as local maxima: it might find a set of parameters that is better than the surrounding values but not the best overall. Increasing the number of passes or iterations will reduce the chances of this problem arising, but will increase the time it takes to do the optimization.
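A rough sketch of that kind of narrowing random search (my own illustration, not WealthLab's actual code) makes the local-maxima risk easy to see: once the range shrinks around an early winner, the true best value may fall outside the range for every later pass:

```python
import random

def narrowing_random_search(score, lo, hi, samples=10, passes=4, shrink=0.5):
    """Sketch of a narrowing random search over one parameter:
    sample random points in [lo, hi], keep the best seen so far,
    then shrink the range around it and repeat. If the first pass
    misses the basin of the global optimum, later passes can only
    refine a local maximum. Illustrative only."""
    best_x = best_y = None
    for _ in range(passes):
        for _ in range(samples):
            x = random.uniform(lo, hi)
            y = score(x)
            if best_y is None or y > best_y:
                best_x, best_y = x, y
        width = (hi - lo) * shrink
        lo = max(lo, best_x - width / 2)
        hi = min(hi, best_x + width / 2)
    return best_x, best_y
```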
Another approach is to run tests with larger granularity and then reduce the granularity of the optimization yourself in a few successive passes. This approach has a few benefits. First, it is easy to understand what is going on, which increases the likelihood you will believe it works. Second, you get an intuitive feel for the system across the entire range of parameters, which is useful when determining whether the system is robust and tradeable.
Finally, this approach is not susceptible to the problem of local maxima, since you can guarantee that you have covered the entire parameter space with the test. With purely random values you may have missed an area in the first pass that contains the best values.
If you increase the granularity of your increments from 1 to 10 for the medium and large moving averages, and to 2 for the smaller moving average, you will reduce the number of tests so that only 560 are required. With TradingBlox this reduces the time required from 13 hours to about 17 minutes.
I have attached some sample graphs from TradingBlox showing how the percentage return varies around the values for the different optimization parameters. Using this information you might determine that your initial optimization range is too constrained and that you should try others, for example a larger value for the long MA and a shorter value for the medium MA.
Once you have determined where the likely optimum values are, you can then run smaller-granularity tests centered around those values, i.e. using 5 as the granularity for the LMA and MMA, and 1 for the SMA. This is what I recommend when doing optimizations that require too much processing time.
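The coarse-then-fine procedure can be sketched like this; the function names and the refinement window of two coarse steps on each side are illustrative assumptions, not TradingBlox settings:

```python
import itertools

def coarse_to_fine(score, ranges, steps, refine_step, span=2):
    """Two-pass stepped optimization: scan the full grid at a coarse
    step, then re-scan a finer grid centered on the coarse winner.
    `span` (coarse steps kept on each side) is an illustrative choice."""
    def best_on_grid(rs, ss):
        axes = [range(lo, hi + 1, st) for (lo, hi), st in zip(rs, ss)]
        return max(itertools.product(*axes), key=lambda p: score(*p))

    coarse = best_on_grid(ranges, steps)  # pass 1: coarse grid
    refined = [(max(lo, b - span * st), min(hi, b + span * st))
               for (lo, hi), st, b in zip(ranges, steps, coarse)]
    return best_on_grid(refined, [refine_step] * len(ranges))  # pass 2
```

For example, on a smooth two-parameter surface, a coarse pass with steps of 2 and 10 followed by a step-1 pass recovers the exact optimum while testing far fewer points than the full grid.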
- Forum Mgmnt
P.S. Of course, if you get tired of these multi-day tests, you might consider moving to a faster platform.
Seriously, I've spent a lot of time making TradingBlox fast because of exactly this sort of thing. When you are developing new systems, if it takes two or three days to find out whether an idea works, it might take a few months to develop a system. If you can determine in a few hours whether an idea works, you can often write a system in a day or two.
P.P.S. For those of you who are interested in learning more about the various approaches one might take to solve this problem, the approach I recommend more closely follows the "branch and bound" method as described here:
http://en.wikipedia.org/wiki/Global_optimization
under the heading: Approaches - Deterministic
The descriptions here are as easy to understand as any I've found elsewhere.
Of course, the process I recommend could be automated and probably will be in a future version of Trading Blox; nevertheless, I find there is great benefit in examining the results along the way so you can better understand how robust a system might be. Smoother curves and less jumpy results indicate a more robust system.
- Attachments
- SteppedParameterGraph_SMA.png (19.78 KiB)
- SteppedParameterGraph_MMA.png (18.49 KiB)
- SteppedParameterGraph_LMA.png (17.33 KiB)
Last edited by Forum Mgmnt on Fri Jun 16, 2006 12:48 pm, edited 2 times in total.
Oh, I should add that optimizing on one measure (e.g. profit, as you did above) does not give you any indication that the parameters are optimal for others, like Maximum Drawdown, Maximum Drawdown Length, etc. So you need to run other optimizations for these and come up with something that gives you acceptable results across all the measures you care about.
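As a concrete example, two of those measures, maximum drawdown and the longest stretch spent below a prior peak, can be computed from an equity curve in a few lines. This is my own sketch, not code from either platform:

```python
def max_drawdown(equity):
    """Return (max peak-to-trough decline as a fraction of the peak,
    longest run of bars spent below a prior equity peak)."""
    peak = equity[0]
    max_dd = 0.0
    longest = current = 0
    for value in equity:
        if value >= peak:
            peak = value      # new high-water mark
            current = 0       # drawdown run ends
        else:
            current += 1
            longest = max(longest, current)
            max_dd = max(max_dd, (peak - value) / peak)
    return max_dd, longest
```

An optimizer could then rank parameter sets on a combination of return and these risk measures rather than on profit alone.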
- Forum Mgmnt
Forum Mgmnt wrote: Of course, the process I recommend could be automated and probably will be in a future version of Trading Blox.
Interesting... Sort of an expert system for parameter optimization? So long as it would present its "notes" for a clear understanding of how a result set was conjured up. Also some selectable variables, so everyone doesn't end up with the exact same systems...
Branch and bound shares terminology with the neural networks I've toyed with before. I suppose it was the process by which the neurons were trained.
Barli, for fun I set up a test with the three MA runs you provided. I used a portfolio called "meats" over 10 years: LH, LC, PB, DA. On my laptop the test took about two hours. I deleted the results; I was just curious about the speed.
mojojojo wrote: c.f., for people that might be interested in having/building a system specifically for testing, is the testing usually more CPU- or RAM-bound? Will Trading Blox take full advantage of SMP or dual-core processors?
It depends on your testing, how efficiently your software uses the RAM it has, and whether or not it does dumb things like writing out to disk a lot, which is really slow independent of RAM or CPU speed.
For most testing of futures with 1 GB or more of RAM, the testing is CPU-bound. If you are testing large stock portfolios, you might start using so much memory that you are paging out to disk, in which case more RAM will increase speed. You have to monitor the memory usage of the applications you have in RAM when you typically do your testing.
If you use intraday data, you are even more likely to run out of RAM, so the amount you have will impact your speed.
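If you want a rough check of whether your own test code is memory-hungry, one in-process approach in Python is the standard library's tracemalloc; note this tracks Python-level allocations only, not the whole application's working set:

```python
import tracemalloc

def peak_memory_of(fn, *args):
    """Run fn(*args) and report the peak Python-level allocation in MiB.
    Illustrative only: tracemalloc sees allocations made through
    Python's allocator, not the process's full working set."""
    tracemalloc.start()
    result = fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, peak / (1024 * 1024)
```

A run that pushes peak allocation near physical RAM is a candidate for paging, in which case more RAM (rather than a faster CPU) is what speeds it up.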
For TradingBlox, we don't write out to disk for large tests unless you turn on specific reports which require it, so we are generally CPU-bound.
We are working on a version of TradingBlox that will take advantage of more than one CPU or core, but the current version does not specifically do so.
- Forum Mgmnt
Forum Mgmnt wrote: Oh, I should add that optimizing on one measure (e.g. profit like you did above) does not give you any indication that the parameters are optimal for others like Maximum Drawdown, Maximum Drawdown Length, etc. so you need to run other optimizations for these and come up with something that gives you acceptable results across any measures that you care about.
Thanks for the explanation, c.f.! I haven't played with other measures like Percent and so on... Which measures do you suggest using to decide whether something is tradeable? I look mostly at the Profit Factor, Max Drawdown, Number of Trades (if too few trades were made I don't care for the system; it tells me it's way too optimized, except for long-term systems that use 100+ MAs), Payoff and Sharpe Ratio.
- Forum Mgmnt
PS: Concerning software differences, I see flaws in both. My friend Gadoli uses TradingBlox for his trading system development and takes it very seriously. I find the Wealth Lab language easier to understand, but TradingBlox money/risk management more robust than Wealth Lab's. I haven't dug too deep into the Monte Carlo optimization parameters. Which one do you personally prefer, exhaustive or Monte Carlo? That multi-pass optimization sounds more complex than simply letting it run for 12 hours.
RedRock wrote: Barli, for fun I set up a test with the three MA runs you provided. I used a portfolio called meats over 10 years: LH, LC, PB, DA. On my laptop the test took about two hours. I deleted the results. Was just curious about the speed.
Did you run an optimization test or a backtest? I am curious about the results you got; if it was an optimization, did you set the same OptVars I provided?
BARLI wrote: Did you run an optimization test or a backtest? I am curious about the results you got; if it was an optimization, did you set the same OptVars I provided?
Hi Barli,
Same stepping parameters you posted for the three MAs:
3-28, step 1
29-48, step 1
49-180, step 1
2 hrs 6 min to completion on a FOUR-market portfolio over 10 years. Didn't make note of the results; I was only interested in time to completion. This was on an Intel dual-core laptop with 1 GB of RAM. Your 3 GHz Pentium should be plenty fast, but more RAM would probably help.