Convergence speed of Monte Carlo simulations

sluggo
Roundtable Knight
Posts: 2987
Joined: Fri Jun 11, 2004 2:50 pm

Convergence speed of Monte Carlo simulations

Post by sluggo »

I was reading Eventhorizon's blog post (here), which inspired me to run some Monte Carlo simulations and observe their convergence.

I used the same example he did: estimating the value of "PI" ( ~ 3.14159...) by Monte Carlo simulation. MC generates random (x,y) points in a square region of the plane; those points that fall within a circle of radius r are called "hits" and those that fall outside the circle are called "misses". The fraction of points that are hits (#hits / #total), plus a little geometry, gives you a value of PI. And the more random points you generate, the more accurate your approximation becomes. (As the approximation gets better and better, the MC simulation is said to "converge" to the correct value of PI.)
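Here is a minimal sketch of that calculation in Perl (my own illustration, not sluggo's actual script). It samples uniform points in the unit square and counts hits inside the quarter-circle of radius r centered at the origin, so the hit fraction is roughly PI*r^2/4:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch only (not the original script).
# Throw uniform random points into the unit square and count those that
# land inside the quarter-circle of radius $r centered at the origin.
# Hit fraction ~= pi * r^2 / 4, so pi ~= 4 * hits / (N * r^2).
my $r = 0.5;            # circle radius; the experiments used 1.0, 0.5, 0.2, 0.1
my $n = 10_000_000;     # iterations in one simulation
my $hits = 0;
for (1 .. $n) {
    my ($x, $y) = (rand(), rand());            # uniform point in [0,1) x [0,1)
    $hits++ if $x * $x + $y * $y <= $r * $r;   # inside the quarter-circle?
}
my $pi_est    = 4 * $hits / ($n * $r * $r);
my $error_pct = 100 * abs($pi_est - 3.14159265358979) / 3.14159265358979;
printf "r=%.1f  N=%d  pi estimate = %.6f  error = %.4f%%\n",
    $r, $n, $pi_est, $error_pct;
```

Note that a small radius produces very few hits per iteration, so for the same N its estimate is noisier than a large-radius run.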

I ran a bunch of MC simulations for 4 different values of the radius "r": 1.0, 0.5, 0.2, and 0.1. These were chosen because they span a reasonably large range of hit-to-miss ratios. My data is shown below in Figure 1, and the equation I fitted to this data is shown in Figure 2.

The Y-axis on these plots is Error: the difference between the true value of PI and the value calculated by the MC simulation. In every case the error falls toward zero as you run more and more Monte Carlo iterations. Some people may be surprised at how many iterations are required to reach 10% accuracy, or 5% accuracy, or 1% accuracy (these can be read directly from the plots).

For these experiments, each simulation was repeated 1000 times (with different random numbers each time); one simulation is a run of ten million iterations. This gives 1000 different MC approximations of PI, which together form an error distribution; percentiles of that distribution were used to estimate confidence bounds on the error. So, for example, a "90% confidence" error (blue triangles in Figure 1) means that 900 of the 1000 MC approximations had an error less than or equal to the "90% confidence" error plotted in the figure.
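A sketch of that bookkeeping, again in Perl and again my own illustration (with deliberately smaller numbers than the 1000 repetitions of 10 million iterations described above, so it finishes quickly):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the confidence-bound calculation (assumed, not the original code):
# repeat the whole simulation many times, collect the errors, and read off
# the 90th percentile of that error distribution.
my $pi   = 3.14159265358979;
my $r    = 0.5;
my $n    = 100_000;   # iterations per simulation (the post used 10_000_000)
my $reps = 200;       # number of repeated simulations (the post used 1000)

my @errors;
for my $rep (1 .. $reps) {
    my $hits = 0;
    for (1 .. $n) {
        my ($x, $y) = (rand(), rand());
        $hits++ if $x * $x + $y * $y <= $r * $r;
    }
    my $pi_est = 4 * $hits / ($n * $r * $r);
    push @errors, abs($pi_est - $pi) / $pi;   # relative error of this simulation
}

@errors = sort { $a <=> $b } @errors;
my $p90 = $errors[ int(0.90 * $reps) - 1 ];   # ~90th percentile of the errors
printf "90%%-confidence relative error for N=%d: %.4f%%\n", $n, 100 * $p90;
```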

I fitted an equation to the 90% confidence data (the blue triangles in these figures); it is shown in Figure 2. I'm amused that the equation says you get errors of about 0.01 percent (one one-hundredth of one percent) if you simulate N = 400 million iterations. This is apparently the number of MC iterations that blackjack card counters use when they compare and evaluate different strategies for counting cards (ref).
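For anyone who wants to reproduce the curve fit: I don't know the exact functional form shown in Figure 2, but a power law Error = a * N^b is the natural candidate (plain Monte Carlo error shrinks roughly like 1/sqrt(N)), and it can be fitted with ordinary least squares on the log-log data. The (N, error) points in this sketch are placeholders only; substitute the 90% confidence values read off Figure 1.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Fit Error = a * N^b by ordinary least squares on
# log(Error) = log(a) + b * log(N).
# The power-law form is my assumption, and the data points below are
# placeholders -- replace them with the 90%-confidence errors from Figure 1.
my @N     = (1e4, 1e5, 1e6, 1e7);            # iteration counts (placeholders)
my @error = (0.02, 0.0063, 0.002, 0.00063);  # 90% confidence errors (placeholders)

my ($sx, $sy, $sxx, $sxy, $k) = (0, 0, 0, 0, scalar @N);
for my $i (0 .. $#N) {
    my $x = log($N[$i]);
    my $y = log($error[$i]);
    $sx  += $x;
    $sy  += $y;
    $sxx += $x * $x;
    $sxy += $x * $y;
}
my $b = ($k * $sxy - $sx * $sy) / ($k * $sxx - $sx * $sx);  # slope of the log-log fit
my $a = exp(($sy - $b * $sx) / $k);                         # intercept, back on the linear scale
printf "fit: Error ~ %.3g * N^%.3f\n", $a, $b;
```

A fitted slope b near -0.5 would confirm the textbook 1/sqrt(N) convergence of Monte Carlo.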

You might enjoy fooling around with these sorts of computations yourself. I happened to use PERL, but you may enjoy applying a different software toolkit (R? MATLAB? Maple? NumPy? awk?) instead. Who knows, you may even discover a subtle blunder (or a big screaming obvious one) in my figures. Try it out, have some fun, and possibly learn something along the way.
Attachments
Figure 2: error_equation.png
Figure 1: Error_vs_Iterations_plots.png