I am working on a fairly typical university homework assignment. I'm supposed to write a C program that estimates π via Buffon's Needle using the Monte Carlo method. I think my program works properly, but I never get π exactly right. It is always near 3.14, but sometimes it's 3.148910 and sometimes 3.13894. What is the reason for this? Can I do anything to make it closer to the accurate value?
#include <stdio.h>
#include <math.h>
#include <time.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    srand(time(NULL));
    int i;            /* loop counter */
    double l;         /* length of the needle */
    double n;         /* number of trials */
    double L;         /* spacing between the lines */
    double x;         /* angle between the needle and the normal to the lines */
    double pi;        /* estimated value of pi */
    double P = 0;     /* number of trials where the needle touches a line */
    double d;         /* distance between the centre of the needle and the line */
    double y;         /* precomputed (l/2)*sin(x), to save work */
    double stosun;    /* ratio of total trials to successful trials */

    printf("This program estimates pi with the Monte Carlo method based on Buffon's Needle. Please remember to:\n"
           "- enter the number of trials as a positive integer\n"
           "- enter a needle length smaller than the spacing between the lines\n");
    printf("Please enter the number of trials:\n");
    scanf("%lf", &n);
    printf("Please enter the length of the needle:\n");
    scanf("%lf", &l);
    printf("Please enter the spacing between the lines:\n");
    scanf("%lf", &L);
    if (n <= 0.0 || l > L)
    {
        printf("An error occurred: invalid input data\n");
        return 1;
    }
    printf("%f %f %f", n, l, L); /* debugging midway */
    for (i = 0; i < n; i++)
    {
        x = ((double)rand() / RAND_MAX) * 3.1415; /* random angle in radians */
        d = ((double)rand() / RAND_MAX) * L / 2;  /* random distance from the centre of the needle to the line */
        y = (l / 2) * sin(x);
        printf("Trial %d\n", i);
        if (d <= y)
        {
            P++;
        }
    }
    stosun = n / P;
    printf("Ratio %lf", stosun);
    pi = stosun * 2 * l / L;
    printf("pi = %lf", pi);
    return 0;
}
The variance of Monte Carlo simulations tends to be really large. (The error converges to zero as n increases, but very slowly.) You can read about Buffon's Needle in particular here. I'd like to draw your attention to the figure where 5 different runs of 1000 tosses each are plotted. Inspecting each of them, it would look as if the value has already converged, but to quite a wrong (and different in each case) result.
It's good to do your analysis before you start. The error estimate done rigorously in the above source is about 5.6 / n. That's the variance, so to get a standard deviation we need a square root, which makes it even worse. After 1000 tosses you can expect your π estimate to be off by about 0.075 (although this can easily be twice as high, or much less, just by pure chance). After a million tosses it should be correct to about ±0.002. After a hundred million you are likely to get the first three digits after the decimal point right, with the first error in the fourth or (rarely) fifth. I don't know your n, but the values you are getting seem well within what is typical. Edit: with n = 1,000,000, they are +3 σ and −1 σ.
Note that it's a common misconception that one can work around this by making several runs and averaging their results (which should fluctuate above and below the correct value with the same probability, after all). This of course works, but does not offer any advantage whatsoever, apart from the obvious that it can be distributed to multiple machines. The error of running 100 simulations of a thousand iterations each is exactly the same as that of running a single one with a hundred thousand iterations.
An obvious way of making the estimate better is thus increasing the number of samples. One big advantage of Monte Carlo methods is that the individual samples are usually completely independent of each other and thus offer great opportunities for parallelization (OpenMP, vectorized instructions, GPU). But the first step really should be to make your code run faster so that increasing n by an order of magnitude or so hurts less. For example, because you know from the analysis that the precision is never going to be really stunning, you can use `float` instead of `double`, which on some platforms is up to twice as fast. Eliminate the call to `sin` (it's certainly more elegant if you don't need the value of π to start with). And print the per-trial output only every 1000 steps or so, if at all.
A less obvious one is doing the analysis properly and noting how you can decrease the numerator of the variance (the denominator will be n whatever you do). If I am not mistaken, this experiment works best when the length of the needle equals the separation between the lines: shorter than that is suboptimal, and longer than that brings multiple crossings, which you don't want. There may be other ways to improve it by altering the distribution of the angle and/or the position.
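A quick back-of-the-envelope delta-method sketch supports this. Writing r = l/L ≤ 1, the hit probability is p = 2r/π, the estimator is π̂ = 2r/p̂ with p̂ = P/n, and

```latex
\operatorname{Var}(\hat\pi) \approx \left(\frac{2r}{p^2}\right)^{\!2}\frac{p(1-p)}{n}
= \frac{4r^2(1-p)}{p^3\,n}
= \frac{\pi^2(\pi-2r)}{2rn}
```

which is decreasing in r on (0, 1], so it is minimized at r = 1 (needle as long as the line spacing), where it equals π²(π − 2)/(2n) ≈ 5.63/n, consistent with the 5.6/n constant quoted earlier.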