How to SPEED UP simulations by 2 to 50 times (or more)!

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
Dean Hoffman
Roundtable Fellow
Posts: 87
Joined: Wed Jul 02, 2003 3:04 pm

How to SPEED UP simulations by 2 to 50 times (or more)!

Post by Dean Hoffman » Wed May 07, 2008 7:15 pm

For many years, high-end fund managers have used computing grids to link computers together for parallel processing and supercomputer-type performance. It now seems this is available to the average Windows user with a few computers (or more) and this very interesting product.

www.digipede.com

According to the site, the developer need only add "a few lines" of code.
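To give a sense of the underlying idea behind those "few lines": independent simulation runs are embarrassingly parallel, so they can be farmed out to worker processes (or, on a grid, to worker machines). Here is a minimal sketch in plain Python; Digipede's actual API is .NET-based and is not shown here, and `run_simulation` and the parameter grid are hypothetical stand-ins for a real backtest.

```python
# Sketch: distributing independent simulation runs across CPU cores with
# Python's multiprocessing. Each parameter set is independent, so the runs
# scale near-linearly with the number of workers.
from multiprocessing import Pool
from itertools import product

def run_simulation(params):
    """Toy stand-in for one backtest: a deterministic 'PnL' from the params."""
    fast, slow = params
    pnl = 0.0
    for t in range(1, 1000):
        price = 100 + (t % 37) - (t % 11)          # synthetic price series
        signal = 1 if (t % fast) > (t % slow) else -1
        pnl += signal * (price - 100)
    return (params, pnl)

def run_all(param_grid, workers=4):
    """Farm the independent runs out to a pool of worker processes."""
    with Pool(workers) as pool:
        return dict(pool.map(run_simulation, param_grid))

if __name__ == "__main__":
    grid = list(product([5, 10, 20], [50, 100, 200]))   # 9 parameter sets
    results = run_all(grid)
    best = max(results, key=results.get)
    print("best params:", best, "pnl:", results[best])
```

On a grid product the pool of local processes would be replaced by remote nodes, but the structure of the change is the same: isolate the independent work, then hand it to a scheduler.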

sluggo
Roundtable Knight
Posts: 2986
Joined: Fri Jun 11, 2004 2:50 pm

Post by sluggo » Wed May 07, 2008 8:17 pm

I offer my good wishes and enthusiastic hopes for wild success, to whoever decides to be the first one to buy this and try it out. Good luck convincing software vendors that the interconnected assemblage of N motherboards is just one computer and needs to pay for just one software license. Also good luck convincing the first software developer to add those "few lines" of code.

It's obviously possible. But who will be the pioneer that invests blood, sweat, and tears to actually DO it?

Dean Hoffman
Roundtable Fellow
Posts: 87
Joined: Wed Jul 02, 2003 3:04 pm

Post by Dean Hoffman » Wed May 07, 2008 8:52 pm

The guys over at Tick Quest (Neo-Ticker) are already offering it. I hope this trend continues.

As you can see, Neo-Ticker has made this a profit model for their firm.

http://www.tickquest.com/go_firsttimebuy.html

Any other developer ought to see this as a potential way to increase sales and business, rather than worrying about more than one node running the software.

I spoke to the guys at Digipede and they claim financial services is their #1 application. They have a link devoted just for it:

http://www.digipede.net/solutions/finan ... vices.html

Mathemagician
Full Member
Posts: 20
Joined: Thu Jun 28, 2007 1:46 pm

Post by Mathemagician » Fri May 09, 2008 3:09 pm

http://www.nvidia.com/object/tesla_comp ... tions.html

An NVIDIA Tesla is likely more appropriate for applications like these. It essentially turns the GPU into a highly parallel FPU. For $1,300 you get 128 dedicated cores, and multiple Tesla cards can be linked. If one needs real-time information, one needs dedicated processing, and the Tesla is a much cheaper and more reliable way to obtain dedicated parallel processing than Digipede. I'll let you know how they perform once mine arrive (if I remember).
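A quick back-of-the-envelope check with Amdahl's law (a standard result, not something from the Tesla materials) shows why the realized speedup from 128 cores depends heavily on how much of the simulation actually parallelizes; the parallel fractions below are illustrative, not measurements:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the run,
# no matter how many cores are thrown at the parallel part.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.50, 0.95, 0.99):
    print(f"{p:.0%} parallel on 128 cores -> {amdahl_speedup(p, 128):.1f}x")
```

Even with 128 cores, a run that is only 50% parallelizable gains roughly 2x, while a 99%-parallel run gains over 50x, which is about the range in this thread's title.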

jj

rubix101
Contributing Member
Posts: 7
Joined: Mon Mar 09, 2009 9:19 pm

Post by rubix101 » Mon Apr 06, 2009 8:57 pm

Did the Tesla card work? Did you have to do any extra programming, or did you just install the card and see an improvement?

William
Roundtable Knight
Posts: 238
Joined: Sun May 04, 2003 4:41 pm
Location: Manhattan, New York

Post by William » Sat Apr 11, 2009 6:14 pm

The application developer has to modify their application to take the compute-intensive kernels and map them to the GPU; the rest of the application remains on the CPU. Mapping a function to the GPU involves rewriting the function to expose its parallelism and adding "C" keywords to move data to and from the GPU.
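As a rough analogy in Python/NumPy (actual CUDA kernels are written in C, and none of this is Trading Blox code), "exposing the parallelism" means rewriting a one-element-at-a-time loop as a single data-parallel operation, which is the shape a GPU kernel needs. The moving-average function here is a made-up example:

```python
# Same computation, two shapes: a scalar loop vs. one data-parallel
# operation over the whole array. The second form is what maps to a GPU.
import numpy as np

def sma_loop(prices, window):
    """Scalar CPU version: one moving-average value per loop iteration."""
    out = []
    for i in range(window - 1, len(prices)):
        out.append(sum(prices[i - window + 1 : i + 1]) / window)
    return out

def sma_vectorized(prices, window):
    """Data-parallel version: every window expressed in one operation."""
    p = np.asarray(prices, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(p, kernel, mode="valid")

prices = [100, 101, 102, 103, 104, 105]
assert np.allclose(sma_loop(prices, 3), sma_vectorized(prices, 3))
```

On the GPU, the extra step William describes is explicitly copying `prices` to device memory, launching the kernel over all windows at once, and copying the result back.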

rubix101
Contributing Member
Posts: 7
Joined: Mon Mar 09, 2009 9:19 pm

Post by rubix101 » Sat Apr 11, 2009 10:03 pm

Tim Arnold and sluggo actually gave me a response to a similar question about 3 weeks ago. But when I saw this post, I got confused and thought someone had actually done it a year ago. I guess I was more hopeful than anything else, but you are correct. And since the code is locked, it is status quo. I just wanted to be extra sure before buying some expensive equipment. Thanks again.

William
Roundtable Knight
Posts: 238
Joined: Sun May 04, 2003 4:41 pm
Location: Manhattan, New York

Post by William » Sun Apr 12, 2009 6:16 am

Yeah, I hear you; I was curious/hopeful myself.

ES
Roundtable Fellow
Posts: 97
Joined: Mon May 05, 2003 1:02 am

hardware

Post by ES » Sun May 17, 2009 7:50 pm

Gentlemen,
What type of hardware would you recommend running? Is there a requirement for multiple-processor motherboards? Do you find yourselves upgrading this hardware on a regular basis?
Is there a necessity for multiple PCs crunching through the numbers?

ratio
Roundtable Knight
Posts: 338
Joined: Sun Jan 15, 2006 11:07 pm
Location: Montreal, Canada

Post by ratio » Mon May 18, 2009 8:32 am

We run our testing on a Dell Studio XPS with 12 GB of RAM and an Intel Core i7 920 processor.

We also added an Intel SSD, which gives a little more speed when loading the data.

This is really, really fast; we run our tests with 20,000 Nasdaq/OTC symbols over 1992 to 2009 on it.

It sells in the $1,300-1,400 range. Not that expensive, and really fast.

Denis

ES
Roundtable Fellow
Posts: 97
Joined: Mon May 05, 2003 1:02 am

hardware

Post by ES » Mon May 18, 2009 11:21 am

Thanks, Ratio. How often do you upgrade or buy a new box? Every 2 years?
Do you require any fault tolerance?

ratio
Roundtable Knight
Posts: 338
Joined: Sun Jan 15, 2006 11:07 pm
Location: Montreal, Canada

Post by ratio » Mon May 18, 2009 2:46 pm

It all depends on the size of the testing we are doing.

Since we run very large tests (20,000 stocks), we had to get a machine with 64-bit Vista, 64-bit TB, and 8 GB of RAM.

Recently, with the release of TB 3.0, 8 GB of RAM can no longer load the test; we have to run it on a 12 GB machine.

We bought an HP with 8 GB of RAM and an Intel 9330 CPU about one year ago.

About 3 months ago, we bought the Dell XPS with 12 GB of RAM.

The HP was already really fast, but the XPS Studio is double the speed.

So let's say that we buy one every year.

However, if it were only for futures, you wouldn't need that much speed, because the data set is much smaller. Even at 150 instruments, it is still relatively small compared to what we test.

Our production server is a Dell R900 with 64 GB of RAM and four Intel X7460 six-core CPUs = 24 cores.

We run TB in a virtual machine under Windows 2008, where we allocate 4 virtual CPUs and 8 GB of RAM, and it runs almost as fast as the Dell XPS Studio.

We need less memory in the virtual machine because we are running the production system, which does not have to handle all the delisted stocks.

Denis

ES
Roundtable Fellow
Posts: 97
Joined: Mon May 05, 2003 1:02 am

hardware

Post by ES » Thu May 21, 2009 9:04 pm

Thank you very much, Denis.
