Monte Carlo Simulation

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
Sir G
Moderator
Moderator
Posts: 243
Joined: Wed Apr 16, 2003 12:21 am
Location: Salt Lake City, Utah

Monte Carlo Simulation

Post by Sir G » Thu Apr 17, 2003 11:30 am

I would love to hear about the creative uses for MCS.

Sir G

Mark Johnson
Roundtable Knight
Roundtable Knight
Posts: 122
Joined: Thu Apr 17, 2003 9:49 am

Post by Mark Johnson » Fri Apr 18, 2003 10:58 am

The most useful result from Monte Carlo techniques, for me personally, is an estimate of the cumulative distribution function of possible outcomes.

I like to run a few thousand MC tests, create a CDF, and plot it. Then I stare at the plot and introspect. I look at the 1% and 5% points on the CDF and ask myself if I'm ready to bet that those things probably won't happen to me.

Here's an artificial example. Take a single-market trading system with positive expected value. Capture all N of its trades in backtesting. Using Monte Carlo, generate ten thousand random permutations of (N/2) of those trades, drawn without replacement. Compute the CAGR, MaxDD, and MAR for each sequence, and plot these as cumulative distribution functions. Now ask: am I emotionally ready for the 1% probability event to happen to me?

The previous writer asked for "creative" uses of Monte Carlo, but this is textbook stuff and thus not especially creative.

Mark Johnson
Roundtable Knight
Roundtable Knight
Posts: 122
Joined: Thu Apr 17, 2003 9:49 am

Post by Mark Johnson » Sat Apr 19, 2003 10:31 am

Here's an MC example I ran just now. I backtested an S&P system from 1/1/1996 to 4/17/2003. It generated 551 trades in that period. The Monte Carlo program performed the following procedure for one hundred thousand repetitions:

(a) Randomly pick half of the trades (275 of them) and scramble their order.
(b) For this scrambled sequence of trades, calculate CAGR, MaxDD, and MAR. Write them to disk.

After the above Monte Carlo procedure finishes, we have 100,000 samples. Build these into cumulative distribution functions and plot them. The plots say:

99% of sequences were more profitable than: CAGR=107.1% MaxDD=57.7% MAR=1.85
95% of sequences were more profitable than: CAGR=127.2% MaxDD=49.9% MAR=2.55
50% of sequences were more profitable than: CAGR=138.7% MaxDD=32.8% MAR=4.50

Wow. The "expected value" of MAR is 4.50, but there's a 5% chance it could be as small as 2.55 and a 1% chance it could be as small as 1.85. (And take a look at MaxDD!) Armed with these probabilities, we can make better-informed trading decisions.

wcb
Contributing Member
Contributing Member
Posts: 8
Joined: Wed Apr 16, 2003 10:30 pm

Post by wcb » Sat Apr 19, 2003 11:02 am

Mark - What software do you use for your Monte Carlo Simulations? Does this program also compute CAGR, MaxDD, and MAR?

I currently use Trading Recipes and am interested in using MCS for further analysis. Does TR have the appropriate trade output for use in the MCS package you use?

Thanks,

WCB

Forum Mgmnt
Roundtable Knight
Roundtable Knight
Posts: 1842
Joined: Tue Apr 15, 2003 11:02 am
Contact:

Another Perspective??!

Post by Forum Mgmnt » Sat Apr 19, 2003 11:33 am

:?

I've always thought that Monte Carlo Simulations were useful but not really very realistic. Here's my reasoning:

It seems to me that Monte Carlo simulation requires independent events in order to be reflective of reality, whereas trading results are order dependent. It is not a consequence of random ordering that great trends tend to follow long periods of choppy markets and large drawdowns for trend-followers. Individual trades and their results therefore don't qualify as independent events.

For this reason, they probably overstate the possible severity and length of drawdowns. I haven't checked this out rigorously, so this is just my intuition speaking.

That having been said, they are useful tools. I also think there is merit in the concept when taken from another angle.

The genesis of this idea was a conversation with Arthur Maddock about expectation after one of the days of the g.c. seminar last year. He kept asking me about expectation (and indeed this was something that Rich taught us as part of the original Turtle class). However, I thought that thinking in terms of expectation was looking at the wrong thing: too much data is lost in the translation.

Arthur, who has a very different way of looking at the world and problems than I do, kept persisting, and I kept thinking: "Why does he keep bringing this up?" Finally, I asked him point-blank why he thought it was so important. He said that he wanted to use expectation to run a Monte-Carlo-like simulation of a trading system. Now, that was an interesting and new idea for me.
SIDE NOTE: In one of the other forums, someone tried to interpret what I said, and I noted that this was fine: the interplay between people with different perspectives often develops an idea that neither would have come up with alone. Someone misinterpreting what you are getting at sometimes results in a new insight that leads to a better perspective. This is one of the most recent examples of that phenomenon.
Except that when Arthur said this, I immediately recognized the value of taking what I thought was a better way of looking at a model of a trading system and running a Monte-Carlo-like simulation.

I have always considered it better to look at the distribution of outcomes rather than the expectation that a particular distribution might generate. One could represent this distribution as a mathematical curve, or for simplicity's sake, simply as a histogram of outcomes with associated probability percentages. (Mark, I think I've seen a post somewhere where you had a simulation contest of sorts that involved a game with this sort of probability distribution)

A histogram of outcomes that included the fat tails and had 20 elements is probably a reasonable model of the potential outcomes of a trading system.

So the idea is this:
  1. Run a test using historical data (as much as you can get)
  2. Build a histogram of all the trade outcomes
  3. Extrapolate a bit at the ends to get the fat tails
  4. Run a simulation where trades are taken from the distribution according to their respective probability
  5. Run many iterations of this simulation like you would a Monte-Carlo simulation
It seems to me this would generate better results than a plain Monte-Carlo simulation; however, it suffers from the same problem of ignoring the dependence of events in trader land, where human beings are affected by the past.

One way around this might be to look, not at the trades, but rather at the periods of winners and losers with some sort of noise filter. For example, you might consider series of wins and losses that are less than 5% as noise.

You could then look at winning periods and losing periods as a series, with the probability built as a dependent series. So you would have distributions of probabilities of winning periods (time and size) for a given losing period, and vice versa. This would be two matrices of histograms rather than a single histogram.

Then you could run a simulation and generate a series of winning periods followed by losing periods.

This might be better, but you still are missing some of the dependence over the longer term that exists in actual markets and trading. Psychology drives the market, after all.

Ted Annemann
Roundtable Knight
Roundtable Knight
Posts: 118
Joined: Tue Apr 15, 2003 7:44 pm
Location: Arizona

Post by Ted Annemann » Sat Apr 19, 2003 12:59 pm

Forum Mgmnt, I believe the statisticians have beat you to the punch. If I'm not mistaken, they've been teaching & using that approach for years. They call it "resampling".

Mark Johnson
Roundtable Knight
Roundtable Knight
Posts: 122
Joined: Thu Apr 17, 2003 9:49 am

Post by Mark Johnson » Sat Apr 19, 2003 2:25 pm

WCB, I got a p.m. asking for more details, so I wrote a little MC program this morning that calculated those values. I captured trades from TS4, fed them into the program, got the 100K output triples (CAGR, MaxDD, MAR), made the CDFs, and plotted. I haven't tried the "Code" feature of this BBsys before; let's see if it works:

Code: Select all

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define START_EQUITY  (1e7)
#define BUXPERCAR     (1.5e5)
#define YEARS         (7.375 / 2.0)
#define MC_TRIALS     (100000)

/* Fisher-Yates shuffle: every ordering of cards[0..n-1] equally likely */
void mj_shuffle (double cards[], int n)
{
  int i, j;
  double temp;
  for (i = 0; i < (n - 1); i++) {
      j = i + rand () % (n - i);    /* j uniform in [i, n-1] */
      temp = cards[i];
      cards[i] = cards[j];
      cards[j] = temp;
    }
}

int main (void)
{
  int i, ntrades, nrun, ncars;
  double outcome[3000];
  double peak, valley, profit, x;
  double equity, pctdd, maxpctdd, cagr, mar;

  srand ((unsigned) time (NULL));

  ntrades = 0;    /* read the file of trades from stdin */
  while (ntrades < 3000 && 1 == scanf ("%le", &x)) {
      outcome[ntrades] = x;
      ntrades++;
    }

  for (nrun = 1; nrun <= MC_TRIALS; nrun++) {
      mj_shuffle (outcome, ntrades);
      equity = START_EQUITY;
      peak = valley = equity;
      maxpctdd = 0.0;

      for (i = 0; i < (ntrades / 2); i++) {
          ncars = (int) floor (equity / BUXPERCAR);
          profit = ((double) ncars) * outcome[i];
          equity += profit;
          if (equity > peak) {
              peak = equity;
              valley = peak;
            }
          if (equity < valley) {
              valley = equity;
              pctdd = (peak - valley) / peak;
              if (pctdd > maxpctdd) maxpctdd = pctdd;
            }
          if (equity < 0.0) equity = 0.0;
        }

      /* CAGR% = (ending/starting)^(1/years) - 1, expressed in percent */
      cagr = (pow (equity / START_EQUITY, 1.0 / YEARS) - 1.0) * 100.0;
      maxpctdd *= 100.0;
      mar = (maxpctdd > 0.0) ? (cagr / maxpctdd) : 0.0;
      printf ("%5d  %9.3f  %9.3f  %9.3f\n", nrun, cagr, maxpctdd, mar);
    }
  return 0;
}
MJ

PeterK
Full Member
Full Member
Posts: 13
Joined: Tue Apr 22, 2003 6:48 am

Post by PeterK » Tue Apr 22, 2003 7:43 am

I have also seen Monte Carlo used to simulate random selection of trades where it was likely that multiple trades were offered on the same day over a portfolio of stocks or commodities, and money management would not allow all trades to be taken.

In this instance, the backtested trades are not totally "jumbled up" in the randomising process; only the days on which multiple trades were signalled are.

This allows for a realistic simulation of the variety of outcomes (histograms etc.) that could have occurred in actual trading. In some systems sold by vendors, I have seen that the claimed performance figures are pure luck, in the sense that a different random selection of the trades offered each day can give wildly different results. The Monte Carlo analysis exposes this "luck" factor.

Peter K

Forum Mgmnt
Roundtable Knight
Roundtable Knight
Posts: 1842
Joined: Tue Apr 15, 2003 11:02 am
Contact:

Post by Forum Mgmnt » Tue Apr 22, 2003 12:14 pm

Ted, I've just boned up a bit on resampling to see if I was missing something.

While I agree that at some level, the purpose of my proposed simulation and resampling overlap, I don't agree that what I was proposing was covered by resampling, especially as it relates to extending the tails of the distribution.

Monte Carlo simulation is itself an instance of the resampling method called "randomization", so in some respects all of this is about "resampling".

You might be thinking that I was proposing something akin to "bootstrapping", which is a way of generating pseudo-populations by drawing random samples, with replacement, from the data. However, while this is a useful way of getting better estimates from limited sample sets, I don't think it buys much more than simple Monte Carlo simulation.

I believe that what I was proposing was something different, especially as it relates to the fat tails. A lot of resampling seems focused on getting a better estimate of the mean of a particular parameter of a population. As traders, we don't care about the average return so much as the worst-case scenarios that will cause us to go bust, or to draw down so much that our returns are ruined for years.

So the model I proposed, which builds and extends beyond the ends of the tails indicated by the sample, is not resampling; it is actually the opposite of what you might do in some resampling techniques, which explicitly discard outliers in an attempt to find a better mean.

I also need to reiterate that most of the statistics that concern themselves with samples don't relate to dependent events, so they are an imperfect fit, at best.

Christian Smart
Roundtable Fellow
Roundtable Fellow
Posts: 50
Joined: Fri Apr 18, 2003 8:53 pm
Location: Huntsville, AL
Contact:

Post by Christian Smart » Wed Apr 23, 2003 7:44 am

Hi Forum Mgmnt,
What you've described is a form of Monte Carlo simulation - sampling from a histogram based on historical data, even one modified to account for fat tails, is just as much a form of Monte Carlo as is anything else.

Also, it is possible to perform correlated (i.e., non-independent) Monte Carlo simulations. Two well-known Excel add-ins for performing simulations provide this capability. This is not a simple ad-hoc routine, but one grounded in probability theory.

Mark Johnson
Roundtable Knight
Roundtable Knight
Posts: 122
Joined: Thu Apr 17, 2003 9:49 am

Post by Mark Johnson » Wed Apr 23, 2003 10:05 am

To those who sent me p.m.: The trades in the MC example above were generated as follows:

System = I-Master
Slippage = 2.0 big points ($500.00) per roundtrip... yes, I know, I know....
Data = CSI backadjusted continuous contract of S&P day session

As you can plainly see by reading the MC code, the betsizing was embarrassingly rudimentary: trade 1 contract per $150K of account equity.

Remember, the point of the exercise was to illustrate Monte Carlo concepts. The S&P stuff was merely an example, not some sort of claimant to the throne of Best Trading System In The Universe. The focus should be on the Monte Carlo method, not the detailed minutiae of the S&P example that happened to be picked.

Sir G
Moderator
Moderator
Posts: 243
Joined: Wed Apr 16, 2003 12:21 am
Location: Salt Lake City, Utah

1/2

Post by Sir G » Wed Apr 23, 2003 10:16 am

Hi Mark-

Why do you use 1/2 of the trades? Why don't you reshuffle the whole deck of trades?

Thanks.

Sir G

blueberrycake
Roundtable Knight
Roundtable Knight
Posts: 125
Joined: Mon Apr 21, 2003 11:04 pm
Location: California

Post by blueberrycake » Wed Apr 23, 2003 11:36 pm

I think MC simulations are essential for figuring out the correct bet size, since each distribution has a different optimal bet size. Also, it gives you a pretty good idea of whether a particular system is tradeable based on its result distribution and number of trades.

This little bit of code is rather useful in answering these questions. You specify your average win, average loss, win probability, the number of bets you plan to make, and the amount you are going to bet on each trade. It will then run a simulation and tell you what you can expect to end up with on average and, more importantly, what the bottom part of the distribution looks like (i.e. the 5% and 10% cutoffs).

Code: Select all

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* ascending comparison for qsort */
static int floatCompare (const void *a, const void *b)
{
	float fa = *(const float *) a, fb = *(const float *) b;
	return (fa > fb) - (fa < fb);
}

int main (int argc, char *argv[])
{
	int i, j;
	float *fCapital, curCapital;

	int iterations = 500000;
	int numGames = 100;

	float winAmount = 2;
	float lossAmount = 1;
	float winPercent = .5;

	float initCapital = 100;

	float riskPercent = .05;

	fCapital = malloc (sizeof (float) * iterations);
	if (fCapital == NULL) return 1;
	srand ((unsigned) time (NULL));

	for (i = 0; i < iterations; i++) {
		curCapital = initCapital;
		for (j = 0; j < numGames; j++) {
			if (curCapital >= lossAmount) {

				// percent risk model: bet a fixed fraction of current capital
				if (((double) rand () / RAND_MAX) < winPercent)
					curCapital += curCapital * riskPercent / lossAmount * winAmount;
				else
					curCapital -= curCapital * riskPercent;

				// constant risk model: bet a fixed dollar amount
				/*
				if (((double) rand () / RAND_MAX) < winPercent)
					curCapital += winAmount;
				else
					curCapital -= lossAmount;
				*/
			}
		}
		fCapital[i] = curCapital;
	}

	qsort ((void *) fCapital, (size_t) iterations, sizeof (float), floatCompare);

	printf ("80 percentile: %.0f\n", fCapital[iterations / 100 * 80]);
	printf ("50 percentile: %.0f\n", fCapital[iterations / 100 * 50]);
	printf ("30 percentile: %.0f\n", fCapital[iterations / 100 * 30]);
	printf ("20 percentile: %.0f\n", fCapital[iterations / 100 * 20]);
	printf ("10 percentile: %.0f\n", fCapital[iterations / 100 * 10]);
	printf ("5 percentile: %.0f\n", fCapital[iterations / 100 * 5]);

	free (fCapital);
	return 0;
}

Kiwi
Roundtable Knight
Roundtable Knight
Posts: 513
Joined: Wed Apr 16, 2003 1:18 am
Location: Nowhere near

Free Monte Carlo Sim Tool

Post by Kiwi » Fri Aug 29, 2003 6:26 pm

For all you folk who've wondered what it is and wanted to try it, there is a new free MCS tool. I haven't tested it, and it doesn't seem as oriented to money management experiments as Alex's sim at Unicorn, but it is free and looks easy to use.

The announcement came via Raymond Deux, who makes NinjaTrader, a superb tool for short-term traders that I use:
We've just released Equity Monaco 1.0.

Equity Monaco is a Monte Carlo simulator for analyzing
trading system performance. It works with NeoTicker,
but Equity Monaco can also analyze trading
system performance from other software.

Equity Monaco is a free product. We want to provide
a Monte Carlo Simulation for NeoTicker users, without
burdening NeoTicker itself too much. So we wrote it
as a separate program that integrates with NeoTicker.
The free part also acts as a promotion for TickQuest.

You can download Equity Monaco from:

http://66.113.187.197/EquityMonaco10.exe

The PDF version of the documentation is here:

http://66.113.187.197/equitymonaco.pdf

-----------------
Louis Lin
TickQuest Inc www.tickquest.com
John


PS. Was going to post it at 3 other places around the forum but didn't want to make Sir G mad. Now it's up to all you budding MJs to generate some new quantitative insights :)

gbos
Senior Member
Senior Member
Posts: 26
Joined: Wed May 21, 2003 1:06 pm
Location: Athens Greece
Contact:

Post by gbos » Tue Sep 09, 2003 9:16 am

Kiwi, very nice program. Here is my own home-made add-in (MonteCarlo.xla), money management orientated, with instructions (Read_Me.pdf). It works fine on my Excel 2002 (English version), but I haven't tested it in other Excel versions, so if it doesn't work please don't throw rocks at me. :D
Attachments
Monte.zip
(145.13 KiB) Downloaded 1273 times

bloom
Senior Member
Senior Member
Posts: 35
Joined: Thu Apr 17, 2003 12:45 am
Location: SARS infested HONG KONG..ahh

Post by bloom » Tue Sep 09, 2003 10:39 am

hmm.. we know that volatility is the lifeblood of any system, and we also know that volatility is cyclical, so I would think that the distribution of trades is probably mean-reverting. Would it be possible to simulate this in MC? Are there any tools that could simulate a probability distribution with a Hurst coefficient < 0.5? I think this would give us more realistic results.

gbos
Senior Member
Senior Member
Posts: 26
Joined: Wed May 21, 2003 1:06 pm
Location: Athens Greece
Contact:

Post by gbos » Tue Sep 09, 2003 1:10 pm

Hi vegasoul

This is not difficult. An easy way to simulate this kind of relationship between trades is with the aid of Markov chains; see any introductory probability text for reference. As for a < 0.5, I can't understand the question. If you are referring to the add-in, the 'a' coefficient can be changed by you, and it only affects the menu questions involving 'a', not the simulation in general.
:oops: Oops I just realized you are referring to fractals!

Best Rgds
Last edited by gbos on Wed Sep 10, 2003 1:16 am, edited 1 time in total.

CRM114
Senior Member
Senior Member
Posts: 35
Joined: Tue May 06, 2003 7:51 pm
Location: Florida

Post by CRM114 » Tue Sep 09, 2003 11:37 pm

vegasoul wrote: Are there any tools that could simulate a probability distribution with a Hurst coefficient < 0.5? I think this would give us more realistic results.
I can give you a procedure that will produce approximations of fractional Brownian motion. It's on page 495 of the book

Chaos and Fractals by Peitgen, Jurgens, and Saupe, Springer-Verlag 1992.

Say you want to simulate a time series X(t), 0 <= t <= 1, with Hurst exponent H, 0 <= H <= 1. Start with


X(0) = 0 and X(1) = sigma(0) * N(0,1),


where N is a normally-distributed random number with mean 0 and variance 1, and sigma(0)^2 is the variance that you've chosen for the interval [0,1]. The remaining samples are constructed as follows.


X(0.5) = 0.5 * (X(0) + X(1)) + sigma(1) * N(0,1), sigma(1) = sigma(0) * sqrt(1 - 2^(2*H - 2))/2^H.

I've corrected what I believe was a mistake in the book, where the divisor in the previous formula was omitted.


X(0.25) = 0.5 * (X(0.0) + X(0.5)) + sigma(2) * N(0,1), sigma(2) = sigma(1) * (0.5)^H
X(0.75) = 0.5 * (X(0.5) + X(1.0)) + sigma(2) * N(0,1).


X(0.125) = 0.5 * (X(0.00) + X(0.25)) + sigma(3) * N(0,1), sigma(3) = sigma(2) * (0.5)^H
X(0.375) = 0.5 * (X(0.25) + X(0.50)) + sigma(3) * N(0,1).
X(0.625) = 0.5 * (X(0.50) + X(0.75)) + sigma(3) * N(0,1).
X(0.875) = 0.5 * (X(0.75) + X(1.00)) + sigma(3) * N(0,1).


Continue in this fashion, always reducing the standard deviation by the factor (0.5)^H.
Last edited by CRM114 on Wed Oct 06, 2004 6:26 am, edited 1 time in total.

Asterix
Senior Member
Senior Member
Posts: 44
Joined: Mon Apr 05, 2004 11:16 pm
Location: San Diego

Another Approach to Monte Carlo Simulation

Post by Asterix » Wed Jun 23, 2004 12:12 pm

I began experimenting with MC analysis of trading system results about 10 years ago and wrote a program that does some of what c.f. describes in his post (i.e. extending the distribution to account for the fat tails).

After reading more on the topic of dependency, I began to question the validity of the MC results. Random sampling of the trade results assumes that the individual results are independent and can occur in any order.

I came up with another method that I called random entry. Rather than re-shuffling all of the trade results, I preserved the order of the trades but randomly picked the starting point. If the starting point was in the middle rather than at the beginning, I used a wrap-around algorithm: when I got to the end of the data, I continued from the first value until I reached the starting point again.

This system isn't without its own weak points, but it does produce different statistics compared to a total re-shuffle on each run. For example, the worst drawdown and the risk of ruin were sometimes greater.

smodato
Senior Member
Senior Member
Posts: 27
Joined: Wed Jul 14, 2004 2:53 am

Post by smodato » Wed Jul 20, 2005 5:41 am

I'd like to go back to this old topic and discuss the following.

Suppose we have a number of backtested trades from a system, say 250 trades (just an example). Consider three different approaches:

1) Take the distribution's characteristics (average trade and standard deviation) and generate from them 1000 different series of 250-trade sequences, each with position sizing. This gives a scenario of 1000 possible equity curves and drawdowns, from which we extrapolate the risk analysis of the chosen approach. This method may be weak if the distribution is not normal.
2) Create 1000 permutations of the 250 trades and apply position sizing to each of them, again creating 1000 equity curves and drawdowns.
3) Create 1000 possible equity curves by generating random numbers between 1 and 250, picking the corresponding trade from the main array (i.e. sampling with replacement), then applying position sizing and proceeding as above.

What are your comments on the three approaches? Personally I'd prefer the 3rd, but your experience would help me make the best choice.
Thanks, bye
Smodato

Post Reply