On this page, we discuss the Monte-Carlo method for numerical quadrature (i.e. the numerical evaluation of integrals). The Monte-Carlo method is a special class of methods for solving integrals numerically that differs substantially from the standard methods (described in Vectorized numerical quadrature), with applications in many areas of research. It is well worth understanding the basics of the Monte-Carlo method, and that is what we will go through here.
Basics of Monte-Carlo
In the one-dimensional case (that is, for a function of a single variable), the Monte-Carlo method relies on the fact that:

$$\int_a^b f(x)\,dx = \lim_{N \to \infty} \frac{b-a}{N} \sum_{k=1}^{N} f(x_k)$$
Where $x_k$ is a randomly-sampled point in the range $[a, b]$, and $N$ is the total number of points sampled. Note that this may also be written in the form:

$$\int_a^b f(x)\,dx = (b - a)\,\mathbb{E}[f(x)]$$
Where $\mathbb{E}[f(x)]$ is the expectation value (average value) of the function over the same interval. The general proof requires an understanding of statistics (specifically, the law of large numbers). For a more detailed description of the mathematics, please see this article; while it is specifically about the application of Monte-Carlo integration in computer graphics, it describes the same principle. Here, however, we will focus purely on the application of the theory, rather than the theoretical foundations of Monte-Carlo integration. To actually use the Monte-Carlo method in practice, we of course cannot sample infinitely-many points, so we let $N$ be a large but finite number. Thus, the exact result becomes an approximate result, given by:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{N} \sum_{k=1}^{N} f(x_k)$$
As a demonstrative example, consider the following integral:

$$\int_0^\pi \sin(x)\,dx$$
The analytical solution, of course, is $2$. We may compare the exact result with the approximate value of the integral, calculated with the Monte-Carlo method, as shown in the Python code sample below:
```python
import numpy as np
from numpy.random import default_rng

a = 0
b = np.pi
N = 100

# function to integrate
# can be changed to any function desired
f = lambda x: np.sin(x)

# analytical solution
# set this to None if it doesn't exist
exact_solution = 2.0

def monte_carlo_1d(f, a, b, N, random_seed=2):
    """
    Implements the basic Monte-Carlo
    quadrature method in 1D.
    Note: a seed must be provided for
    reproducible results.
    """
    E = 0  # running sum for the expectation value
    # random number generator
    rng = default_rng(seed=random_seed)
    for k in range(N):
        # draw random x-value from [a, b]
        xk = rng.uniform(a, b)
        E += f(xk)
    E *= 1/N  # divide by N to get the average (expectation value)
    return (b - a)*E
```

Why Monte-Carlo integration?
One nice advantage of the Monte-Carlo method is that it works for essentially any function we can evaluate pointwise. We do not need to require that the function take a specific form (it need not even be smooth or continuous); the Monte-Carlo method just works, although (by the law of large numbers) it does require a fairly large number of sampled random points to achieve an accurate result.
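As a quick illustration (reusing the `monte_carlo_1d` function defined above, with an integrand of our own choosing), the same routine handles a discontinuous step function, whose exact integral over $[0, 2]$ is $1.5$, without any modification:

```python
# a discontinuous step function; no smoothness is needed
g = lambda x: 1.0 if x < 1 else 0.5

# exact integral over [0, 2]: 1*1 + 1*0.5 = 1.5
print(monte_carlo_1d(g, 0, 2, 100000))  # approximately 1.5
```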
As a crude demonstration, consider the Python code shown above. We can perform a rough accuracy benchmark by adding the following Python code:
```python
N_values = [5, 10, 50, 100, 500, 1000, 10000, 100000]

for sample in N_values:
    result = monte_carlo_1d(f, a, b, sample)
    print(f"\nN = {sample}")
    print(f"Approximated value of integral: {result:.3f}")
    if exact_solution is not None:
        abs_err = abs(result - exact_solution)
        rel_percent_err = 100*abs_err/exact_solution
        print(f"Absolute error: {abs_err:.3f} ({rel_percent_err:.1f}%)")
```

Again, this is plain Python, not a carefully-controlled scientific-computing environment for optimal benchmarking; however, the results (shown below) display a clear trend.
| $N$ | Monte-Carlo result | Relative error |
|---|---|---|
| 5 | 1.492 | 25.4% |
| 10 | 1.749 | 12.5% |
| 50 | 2.157 | 7.9% |
| 100 | 2.005 | 0.2% |
| 500 | 1.975 | 1.3% |
| 1,000 | 2.005 | 0.3% |
| 10,000 | 1.981 | 1.0% |
| 100,000 | 1.997 | 0.1% |
Note: the error oscillates slightly since Monte-Carlo methods are by nature stochastic (relying on randomness), so their convergence is an overall trend but occurs unevenly, with overshoots and undershoots common.
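This scatter is easy to see directly by repeating the same estimate with different seeds (a small sketch reusing `monte_carlo_1d` from above; the choice of seeds is arbitrary):

```python
import numpy as np

# repeat the N = 1000 estimate with eight different seeds and
# look at the spread of the results around the exact value of 2
estimates = [monte_carlo_1d(f, a, b, 1000, random_seed=s) for s in range(8)]
print(f"mean: {np.mean(estimates):.3f}, std: {np.std(estimates):.3f}")
```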
But by far the biggest advantage of Monte-Carlo numerical quadrature is that it is an $O(N)$ method in the total number of sample points, regardless of dimension, unlike standard numerical quadrature methods, which are typically $O(N^d)$ for $d$ dimensions (with $N$ points per dimension), meaning that such standard methods become intolerably slow for evaluating integrals over a large number of points for anything beyond one or two dimensions. This also means that the Monte-Carlo method is vastly faster for evaluating integrals over millions of points, evaluating integrals of higher-dimensional functions, or both. For precision numerical calculations, or for performing integrals over functions with a large number of parameters (such as the rendering equation in computer graphics), the Monte-Carlo method has such an edge in performance that it becomes the only practical method of numerical integration.
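To make this concrete with a rough back-of-the-envelope example: a grid-based rule with $100$ points along each axis of a six-dimensional integral requires $100^6 = 10^{12}$ function evaluations, while a Monte-Carlo estimate of the same integral can use however many samples the desired accuracy demands (say, $10^6$), independent of the dimension.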
The Monte-Carlo method in higher dimensions
We may generalize the Monte-Carlo method to any number of dimensions relatively easily. The generalized formula for $n$ dimensions is given by:

$$\int_D f(\mathbf{x})\,dV \approx \frac{V}{N} \sum_{k=1}^{N} f(\mathbf{x}_k)$$

Where $D$ is the domain of integration, $V$ is the volume of the space contained inside $D$, $\mathbf{x} = (x_1, x_2, \dots, x_n)$ is now a set of coordinates for a function of $n$ variables $f(\mathbf{x})$, and lastly, $\mathbf{x}_k$ is a randomly sampled point in the domain. This allows us to straightforwardly perform integration in polar, cylindrical, and spherical coordinates just as easily as in Cartesian coordinates (by sampling from a uniform distribution of points in the domain).
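As a concrete illustration, here is a minimal sketch of this formula for box-shaped domains. The function name `monte_carlo_nd` and the test integrand are our own illustrative choices, and non-rectangular domains would additionally require a way to sample points uniformly inside $D$ (e.g. by rejection sampling):

```python
import numpy as np
from numpy.random import default_rng

def monte_carlo_nd(f, lower, upper, N, random_seed=2):
    """
    Monte-Carlo quadrature over the n-dimensional box
    [lower[0], upper[0]] x ... x [lower[n-1], upper[n-1]].
    f must accept an array of points of shape (N, n).
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    rng = default_rng(seed=random_seed)
    # N uniformly-distributed random points inside the box
    points = rng.uniform(lower, upper, size=(N, len(lower)))
    V = np.prod(upper - lower)  # volume of the box
    return V * np.mean(f(points))

# example: integrate sin(x)*sin(y) over [0, pi] x [0, pi];
# the exact value is 2*2 = 4
f2 = lambda p: np.sin(p[:, 0]) * np.sin(p[:, 1])
print(monte_carlo_nd(f2, [0, 0], [np.pi, np.pi], 100000))
```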
Enhanced Monte-Carlo methods
So far, we have given only a very basic description of Monte-Carlo methods, but there is far more to study about them. One of the most important topics that we didn't mention is importance sampling: a collection of techniques for sampling over non-uniform domains, and for improving the rate of convergence in general by selectively sampling some regions more than others. For a guide to importance sampling, please see this article. The computer graphics community in particular has developed extensive techniques for optimizing Monte-Carlo integration, including next-event estimation and Russian roulette; for a general overview of Monte-Carlo as applied in computer graphics, see this article.
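To give a flavor of the idea, here is a minimal, illustrative sketch of importance sampling for the same integral $\int_0^\pi \sin(x)\,dx$ as before. The triangular sampling density $p(x)$ is our own choice, picked because it roughly resembles $\sin(x)$ and is easy to sample; the estimator averages $f(x_k)/p(x_k)$ over samples drawn from $p$:

```python
import numpy as np
from numpy.random import default_rng

rng = default_rng(seed=2)
N = 1000

# draw samples from a triangular density p(x) on [0, pi] peaked
# at pi/2, formed as the sum of two uniform variates on [0, pi/2]
x = rng.uniform(0, np.pi/2, size=N) + rng.uniform(0, np.pi/2, size=N)
p = (4/np.pi**2) * np.minimum(x, np.pi - x)  # triangular density

# importance-sampling estimator: the average of f(x)/p(x)
estimate = np.mean(np.sin(x)/p)
print(estimate)  # close to 2
```

Because $f(x)/p(x)$ varies far less over the domain than $f(x)$ itself, this estimate tends to converge with fewer samples than uniform sampling; in the ideal (but usually unattainable) case $p \propto f$, the estimator would have zero variance.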