
How to SPEED UP simulations by 2 to 50 times (or more)!

Posted: Wed May 07, 2008 7:15 pm
by Dean Hoffman
For many years, high-end fund managers have been using computing grids to link up computers for parallel processing and supercomputer-type performance. It now seems this is available to the average Windows user with a few computers (or more), thanks to this very interesting product.

www.digipede.com

According to the site, the developer need only add "a few lines" of code.
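
To give a sense of the idea (a conceptual sketch only, not Digipede's actual API, which I haven't seen), farming out independent simulation runs to parallel workers can look roughly like the plain C++ below; a grid product would dispatch the same jobs to other machines rather than to local threads.

// Conceptual sketch only: standard C++, not Digipede's API.
// When each simulation run is independent, a handful of extra lines
// can fan the runs out to however many workers you have.
#include <future>
#include <vector>
#include <cstdio>

// Stand-in for one independent simulation (a parameter-sweep step, a backtest, etc.)
double runSimulation(int paramSet)
{
    double result = 0.0;
    for (int i = 1; i <= 1000000; ++i)
        result += static_cast<double>(paramSet) / i;
    return result;
}

int main()
{
    const int totalRuns = 16;
    std::vector<std::future<double>> jobs;

    // The "few lines": launch each run asynchronously instead of in a serial loop.
    // A grid product would ship these jobs to other machines instead of local threads.
    for (int p = 0; p < totalRuns; ++p)
        jobs.push_back(std::async(std::launch::async, runSimulation, p));

    for (int p = 0; p < totalRuns; ++p)
        printf("run %d -> %f\n", p, jobs[p].get());

    return 0;
}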

Posted: Wed May 07, 2008 8:17 pm
by sluggo
I offer my good wishes and enthusiastic hopes for wild success, to whoever decides to be the first one to buy this and try it out. Good luck convincing software vendors that the interconnected assemblage of N motherboards is just one computer and needs to pay for just one software license. Also good luck convincing the first software developer to add those "few lines" of code.

It's obviously possible. But who will be the pioneer that invests blood, sweat, and tears to actually DO it?

Posted: Wed May 07, 2008 8:52 pm
by Dean Hoffman
The guys over at Tick Quest (Neo-Ticker) are already offering it. I hope this trend continues.

As you can see, Neo-Ticker has made this a profit model for their firm.

http://www.tickquest.com/go_firsttimebuy.html

Any other developer ought to see this as a potential way to increase sales and business as opposed to being worried about more than one node running the software.

I spoke to the guys at Digipede and they claim financial services is their #1 application. They have a link devoted just for it:

http://www.digipede.net/solutions/finan ... vices.html

Posted: Fri May 09, 2008 3:09 pm
by Mathemagician
http://www.nvidia.com/object/tesla_comp ... tions.html

NVIDIA Tesla is likely more appropriate for applications like these. It essentially turns the GPU into a highly parallel FPU. For $1,300 you get 128 dedicated cores. Multiple Teslas can be linked. If one needs real-time information, one needs dedicated processing, and the Tesla is a much cheaper and more reliable way to obtain dedicated parallel processing than Digipede. I'll let you know how they perform once mine arrive (if I remember).

jj

Posted: Mon Apr 06, 2009 8:57 pm
by rubix101
Did the Tesla card work? Did you have to do any extra programming or just install the card and see an improvement?

Posted: Sat Apr 11, 2009 6:14 pm
by William
The application developer has to modify their application to take the compute-intensive kernels and map them to the GPU. The rest of the application remains on the CPU. Mapping a function to the GPU involves rewriting the function to expose the parallelism in the function and adding "C" keywords to move data to and from the GPU.
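
Very roughly, the pattern looks like the toy CUDA C sketch below (my own illustration, not NVIDIA's sample code; the kernel name simpleReturns and the data are made up).

// Toy CUDA C sketch of the pattern described above: the compute-intensive
// loop becomes a __global__ kernel, and the host code copies data to and
// from the GPU around the kernel launch.
#include <cstdio>
#include <cuda_runtime.h>

// Device kernel: each thread computes one simple return, exposing the
// per-element parallelism of the original CPU loop.
__global__ void simpleReturns(const float *prices, float *returns, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n)
        returns[i] = prices[i] / prices[i - 1] - 1.0f;
}

int main()
{
    const int n = 1 << 20;                 // one million prices (made-up data)
    size_t bytes = n * sizeof(float);

    // Host-side data
    float *h_prices  = (float *)malloc(bytes);
    float *h_returns = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_prices[i] = 100.0f + (i % 50);

    // Device-side data: allocate and copy the input to the GPU
    float *d_prices, *d_returns;
    cudaMalloc(&d_prices, bytes);
    cudaMalloc(&d_returns, bytes);
    cudaMemcpy(d_prices, h_prices, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover every element
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    simpleReturns<<<blocks, threads>>>(d_prices, d_returns, n);

    // Copy the result back to the CPU side of the application
    cudaMemcpy(h_returns, d_returns, bytes, cudaMemcpyDeviceToHost);
    printf("returns[1] = %f\n", h_returns[1]);

    cudaFree(d_prices); cudaFree(d_returns);
    free(h_prices); free(h_returns);
    return 0;
}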

Posted: Sat Apr 11, 2009 10:03 pm
by rubix101
Tim Arnold and sluggo had actually given me a response to a similar question about 3 weeks ago. But then when I saw this post, I got confused and thought that someone had actually done it a year ago. I guess I was more hopeful than anything else, but you are correct. And since the code is locked, it is status quo. I just wanted to be extra sure before buying some expensive equipment. Thanks again.

Posted: Sun Apr 12, 2009 6:16 am
by William
Yeah, I hear you, I was curious/hopeful myself.

hardware

Posted: Sun May 17, 2009 7:50 pm
by ES
Gentlemen,
What type of hardware would you recommend running? Is there a requirement for multiple-processor motherboards? Do you find yourselves upgrading this hardware on a regular basis?
Is there a necessity for multiple PCs crunching through numbers?

Posted: Mon May 18, 2009 8:32 am
by ratio
We run our testing on a Dell XPS Studio with 12 GB of RAM and an Intel Core i7 920 processor.

Plus we added an Intel SSD drive, which gives a little bit more speed for loading the data.

This is really, really fast; we run our tests with 20,000 Nasdaq and OTC symbols over 1992 to 2009 on this.

It sells in the $1,300-1,400 range. Not that expensive and really fast.

Denis

hardware

Posted: Mon May 18, 2009 11:21 am
by ES
Thanks, Ratio. How often do you upgrade or buy a new box? Every 2 years?
Do you require any fault tolerance?

Posted: Mon May 18, 2009 2:46 pm
by ratio
It all depends on the size of the testing we are doing.

Since we do very large tests (20,000 stocks), we had to get a machine with Vista 64-bit, TB 64-bit, and 8 GB of RAM.

Recently, with the release of TB 3.0, 8 GB of RAM cannot load the test anymore; we have to run it on a 12 GB machine.

We bought an HP with 8 GB of RAM and an Intel 9330 CPU about one year ago.

About 3 months ago we bought the Dell XPS with 12 GB of RAM.

The HP was already really fast, but the XPS Studio is double the speed.

So let's say that we buy one every year.

However, if it were only for futures, you wouldn't need that much speed, because the data set is much smaller. Even at 150 instruments it is still relatively small compared to what we test.

Now, our production server is a Dell R900 with 64 GB of RAM and Intel X7460 CPUs (4 x 6 cores = 24 cores).

We run TB in a virtual machine under Windows 2008, where we allocate 4 virtual CPUs and 8 GB of RAM, and it runs almost as fast as the Dell XPS Studio.

We need less memory in the virtual machine because we are running the production system, which does not have to handle all the delisted stocks.

Denis

hardware

Posted: Thu May 21, 2009 9:04 pm
by ES
Thank you very much, Denis.