Markov chain Monte Carlo
Briefly, I want to only touch on Markov chain Monte Carlo (MCMC) simulations. Exploring the probabilities of multiple parameters (probability space) is computationally expensive. If you take, say, 10 values of a single parameter, that costs 10 computations; two parameters cost 100. Eight parameters would cost 10^8 = 100,000,000 evaluations, since the grid grows exponentially with each added parameter. A grid of that size is pushing even a supercomputer's limits. And so MCMC comes into play.
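The exponential blow-up of a brute-force grid can be made concrete with a short sketch (the function name here is just illustrative):

```python
def grid_cost(n_parameters, values_per_parameter=10):
    """Cost of a brute-force grid search: values_per_parameter ** n_parameters."""
    return values_per_parameter ** n_parameters

print(grid_cost(1))  # prints 10
print(grid_cost(2))  # prints 100
print(grid_cost(8))  # prints 100000000
```

Each extra parameter multiplies the cost by another factor of 10, which is why grid search is abandoned long before 8 parameters.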
- Monte Carlo: any calculation that involves an element of randomness, e.g. making a hop of a random size (normally drawn from a Gaussian).
- Markov chain: the next hop depends only on the chain's current position, not on its history. In the simplest scheme, a hop is only accepted if it lands at a position of greater likelihood in probability space.
So an MCMC run will gradually converge to a point in probability space where the most likely combination of parameter values (i.e. the best-fitting model) is found. To prevent the chain from settling on a false (local) maximum, you can launch several Markov chains: most will end up at the true maximum.
Computationally, MCMC scales roughly linearly with additional parameters, as opposed to the exponential growth of a grid. Worth mentioning is the Metropolis-Hastings algorithm, a variant in which steps to positions of lower likelihood are sometimes accepted, allowing exploration of the shape of the maximum.
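The Metropolis-Hastings idea can be sketched in a few lines. This is a minimal one-dimensional toy, assuming a Gaussian target distribution and Gaussian proposal hops (the function and parameter names are my own, not from any particular library):

```python
import math
import random

def metropolis_hastings(log_prob, x0, n_steps=10000, step_size=0.5, seed=0):
    """Draw samples from a distribution given its log-probability.

    Proposals are Gaussian hops of width `step_size` (the Monte Carlo part).
    Hops uphill in likelihood are always accepted; hops downhill are accepted
    with probability equal to the likelihood ratio, which is what lets the
    chain explore the shape of the maximum rather than just climbing it.
    """
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step_size)  # random hop from current position
        lp_new = log_prob(x_new)
        # Accept if log(u) < lp_new - lp: always for uphill moves,
        # sometimes for downhill ones.
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)  # note: the next hop depends only on the current x
    return samples

# Toy target: a Gaussian with mean 3 (log-probability up to a constant).
samples = metropolis_hastings(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0)
burned = samples[5000:]  # discard the burn-in while the chain converges
mean = sum(burned) / len(burned)
```

After burn-in, the sample mean sits close to the target's mean of 3, and the spread of the samples traces out the width of the maximum.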