Introduction To Monte Carlo Methods

I’m going to keep this tutorial light on math, because the goal is just to give a general understanding.

The idea of Monte Carlo methods is this: generate random samples of some random variable of interest, then use those samples to compute the quantities you care about.

I know, super broad. The truth is Monte Carlo has a ton of different applications. It’s used in product design, to simulate variability in manufacturing. It’s used in physics, biology, and chemistry, to do a whole host of things that I only partially understand. It’s used in AI for games, for example the Chinese game Go. And finally, it’s used in finance, to price options and other financial derivatives [1]. In short: it’s used everywhere.

The methods we use today originated in the Manhattan Project, as a way to simulate the distance neutrons would travel through various materials [1]. Ideas based on sampling had been around for a while, but they took off during the making of the atomic bomb and have since spread to lots of other fields.

The big advantage of Monte Carlo methods is that they inject randomness and real-world complexity into the model. They are also easy to adjust, for example by applying a different probability distribution to the random variable you are considering. The justification for a Monte Carlo method lies in the law of large numbers; I’ll elaborate in the first example.

The examples I give are considered simple Monte Carlo. In this kind of problem we want to know the expected value of some random variable. We generate many independent draws of that random variable and take their average. The random variable will often follow some specified probability distribution.
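
To make that concrete, here is a minimal sketch of simple Monte Carlo in R. The exponential distribution and the sample size are arbitrary choices for illustration, not anything specific to the examples below.

    # Simple Monte Carlo: estimate an expected value by averaging independent samples.
    # Here X ~ Exponential(rate = 2), so the true expected value is 1/2.
    set.seed(42)             # for reproducibility
    n <- 100000              # number of samples
    x <- rexp(n, rate = 2)   # draw the samples
    mean(x)                  # should land close to 0.5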

Estimating Pi

We can use something called the random darts method, a Monte Carlo simulation, to estimate pi. Here is my R code for this example.

The logic goes as follows—

If we inscribe a circle in a square, where the square’s side length equals the circle’s diameter, the ratio of circle area to square area is easy to calculate: the circle has area πr² and the square has area (2r)² = 4r², so the ratio is π/4.

Now if we can estimate this ratio, we can estimate pi.

We can do this by randomly sampling points in the square and calculating the proportion of them that fall inside the circle. So I just divide the points inside the circle (the red points in my plot) by the total number of points, and multiply by 4.

As the number of points increases, our estimate gets closer and closer to pi; that is the law of large numbers at work.
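
A stripped-down sketch of the core calculation (this is not the full linked code, and the sample size is an arbitrary choice):

    # Random darts: sample points uniformly in a square of side 2, which
    # exactly contains a circle of radius 1 centered at the origin.
    set.seed(1)
    n <- 1e6
    x <- runif(n, -1, 1)
    y <- runif(n, -1, 1)
    inside <- x^2 + y^2 <= 1     # TRUE for points that land inside the circle
    4 * mean(inside)             # proportion inside times 4, roughly 3.14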

This is a very simple example of a Monte Carlo method at work.

Simulating Traffic

Here is a more useful example. We can simulate traffic using the Nagel–Schreckenberg model. In this model we have a road, made up of cells or spaces and with a speed limit, and a certain number of cars. We iterate through the cars and update their velocities according to the following four rules. Note: v denotes a car’s velocity.

  1. Cars not at the maximum velocity will increase their velocity by one unit.
  2. We then assess the distance d between a car and the car in front of it. If the car’s velocity is greater than or equal to d, we reduce its velocity to d - 1 so it does not run into the car ahead.
  3. Now we add some randomization. This is the step that makes it Monte Carlo-esque. With probability p, we reduce the car’s velocity by 1 (not going below 0).
  4. Then we move the car forward v cells.

This model is simple, but it does a pretty good job of simulating traffic behavior. It doesn’t deal with accidents or bad drivers; its purpose is to capture those times when traffic jams appear and vanish without any apparent reason. More sophisticated models exist, but many of them build on this one.
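
For reference, here is a rough sketch of one update sweep of those four rules in R. The road length, car count, speed limit, and slowdown probability are arbitrary values I picked for illustration, not the settings from my linked code.

    # One sweep of the Nagel-Schreckenberg update rules on a circular road.
    # pos holds each car's cell, vel its velocity; cars are stored in the
    # order they sit on the road, so the car ahead of car i is car i + 1.
    road_length <- 100   # number of cells
    v_max <- 5           # speed limit
    p_slow <- 0.3        # probability of a random slowdown
    n_cars <- 20

    pos <- sort(sample(0:(road_length - 1), n_cars))
    vel <- rep(0, n_cars)

    nasch_step <- function(pos, vel) {
      n <- length(pos)
      # distance d to the car ahead, wrapping around the circular road
      d <- (pos[c(2:n, 1)] - pos) %% road_length
      vel <- pmin(vel + 1, v_max)                 # rule 1: accelerate
      vel <- pmin(vel, d - 1)                     # rule 2: don't hit the car ahead
      vel <- pmax(vel - (runif(n) < p_slow), 0)   # rule 3: random slowdown
      pos <- (pos + vel) %% road_length           # rule 4: move forward v cells
      list(pos = pos, vel = vel)
    }

    state <- nasch_step(pos, vel)   # call repeatedly to simulate the road over time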

View my code for getting the simulated data here, and for visualizing it in Processing here.

I love how some of the “cars” ride right on the bumper of the one in front of them, while others are just chilling out, taking their time. Haha

Challenges with Monte Carlo Methods

The first big challenge with Monte Carlo is how to come up with independent samples from whatever distribution you’re dealing with. This is harder than you might think. In my code I just called R’s and Python’s built-in random functions, but sampling can become much more sophisticated. That is a lot of what you will read about in more academic sources.
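
As one small taste of what “more sophisticated” can look like, here is inverse transform sampling, a standard trick (not something my linked code uses): if you can invert a distribution’s CDF, you can turn uniform samples into samples from that distribution. The exponential example below is just for illustration.

    # Inverse transform sampling: push uniform samples through the inverse CDF.
    # For an exponential distribution, the inverse CDF is -log(1 - u) / rate.
    set.seed(7)
    rate <- 2
    u <- runif(1e5)
    x <- -log(1 - u) / rate
    mean(x)   # close to 1 / rate = 0.5, matching what rexp(1e5, rate = 2) would give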

Here is a link on how R’s built-in uniform random number generator works.

Another problem is getting the error to converge. Notice in the pi example how the error dropped quickly at first but barely improved over the latter part of the graph. Monte Carlo error typically shrinks like 1/√n, so each additional digit of accuracy costs roughly 100 times more samples. Most Monte Carlo applications compensate by simply using very large samples, which low computing costs make practical.
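
To see that rate concretely, here is a small experiment with the pi estimator (the sample sizes are arbitrary). Multiplying n by 100 should cut the error by roughly a factor of 10, though any single run is noisy.

    # Watch the pi-estimate error shrink roughly like 1 / sqrt(n).
    # A single run is noisy; averaging over many runs shows the rate more clearly.
    set.seed(3)
    for (n in c(100L, 10000L, 1000000L)) {
      x <- runif(n, -1, 1)
      y <- runif(n, -1, 1)
      pi_hat <- 4 * mean(x^2 + y^2 <= 1)
      cat("n:", n, "  absolute error:", abs(pi_hat - pi), "\n")
    }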

Monte Carlo methods are an awesome topic to explore, and I hope this post popularizes them even a little bit more (outside of finance and physics, that is).

Sources

1. Wikipedia

2. Art Owen’s textbook on the subject. My favorite resource so far.

3. Kevin Murphy’s textbook.
