Robot Localization using Particle Filters
1 Overview
The goal of this homework is to become familiar with robot localization using particle filters, also known as Monte Carlo Localization. In particular, you will be implementing a global localization filter for a lost indoor mobile robot (global meaning that you do not know the initial pose of the robot). Your lost robot is operating in Wean Hall with nothing but odometry and a laser rangefinder. Fortunately, you have a map of Wean Hall and a deep understanding of particle filtering to help it localize.
As you saw in class, particle filters are non-parametric variants of the recursive Bayes filter with a resampling step. The Prediction Step of the Bayes filter involves sampling particles from a proposal distribution, while the Correction Step computes importance weights for each particle as the ratio of target and proposal distributions. The Resampling Step resamples particles with probabilities proportional to their importance weights.
When applying particle filters for robot localization, each particle represents a robot pose hypothesis which for a 2D localization case includes the (x,y) position and orientation θ of the robot. The Prediction and Correction Steps are derived from robot motion and sensor models respectively. This can be summarized as an iterative process involving three major steps:
1. Prediction Step: Updating particle poses by sampling particles from the motion model, that is, xt[m] ∼ p(xt | ut, xt−1[m]). The proposal distribution here is the motion model, p(xt | ut, xt−1).
Algorithm 1 Particle Filter for Robot Localization
1: X̄t = Xt = ∅
2: for m = 1 to M do
3:     sample xt[m] ∼ p(xt | ut, xt−1[m])    ▷ Motion model
4:     wt[m] = p(zt | xt[m], m)    ▷ Sensor model
5:     X̄t = X̄t + ⟨xt[m], wt[m]⟩
6: end for
7: for m = 1 to M do
8:     draw i with probability ∝ wt[i]
9:     add xt[i] to Xt
10: end for
11: return Xt    ▷ Resampling
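The outer loop of Algorithm 1 can be sketched in Python as follows. This is a minimal outline only: the motion_model, sensor_model, and resample callables are placeholders for the implementations you will write, not part of the provided helper code.

```python
import numpy as np

def particle_filter_step(X_prev, u, z, motion_model, sensor_model, resample):
    """One iteration of Algorithm 1 (a sketch, not the provided framework).

    X_prev : (M, 3) array of particle poses [x, y, theta] at time t-1
    u      : odometry reading at time t, z : laser scan at time t
    The three callables are hypothetical stand-ins for your own modules.
    """
    M = len(X_prev)
    X_bar = np.zeros((M, 3))
    w = np.zeros(M)
    for m in range(M):                        # lines 2-6: predict and weight
        X_bar[m] = motion_model(u, X_prev[m])  # sample from p(xt | ut, xt-1)
        w[m] = sensor_model(z, X_bar[m])       # importance weight p(zt | xt, m)
    w = w / w.sum()                            # normalize weights
    return resample(X_bar, w)                  # lines 7-11: draw with prob. ~ w
```

In the provided main.py, this loop lives in the skeleton and calls into the MotionModel, SensorModel, and Resampling classes.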
2. Correction Step: Computing an importance weight for each particle as the ratio of target and proposal distributions. This reduces to computing weights using the sensor model, that is, wt[m] = p(zt | xt[m], m).
3. Resampling Step: Resampling particles for the next time step with probabilities proportional to their importance weights.
Here, m is the particle index, t is the current time step, and M is the total number of particles; xt[m] and wt[m] are the robot pose and importance weight of particle m at time t.
2 Monte Carlo Localization
Monte Carlo Localization (MCL), a popular localization algorithm, is essentially the application of the particle filter to mobile robot localization. You can refer to Section 4.3 of [1] for details on the MCL algorithm. Algorithm 1, taken from [1], describes the particle filter algorithm applied to robot localization.
As you can see, the MCL algorithm requires knowledge of the robot motion and sensor models, and also of the resampling process to be used. We briefly describe these three components and point you to resources with more details and pseudocodes.
Motion Model
The motion model p(xt|ut,xt−1) is needed as part of the prediction step for updating particle poses from the previous time step using odometry readings. Chapter 5 of [1] details two types of motion models, the Odometry Motion Model and the Velocity Motion Model. You can use either model for sampling particles according to p(xt | ut, xt−1). The Odometry Motion Model might be more straightforward to implement since it uses odometry measurements directly as a basis for computing posteriors over robot poses.
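A sketch of the odometry motion model sampler (following Table 5.6 of [1]) is shown below. The alpha noise parameters are illustrative defaults only; you will need to tune them against the data logs.

```python
import numpy as np

def sample_motion_model_odometry(u, x_prev, alphas=(0.01, 0.01, 0.01, 0.01)):
    """Sample a new pose from p(xt | ut, xt-1), Table 5.6 of [1] (a sketch).

    u      : (odom_prev, odom_cur), each [x, y, theta] in the odometry frame
    x_prev : particle pose [x, y, theta] in the map frame
    alphas : noise parameters a1..a4 (values here are placeholders, not tuned)
    """
    (xb, yb, tb), (xb2, yb2, tb2) = u
    # decompose the odometry increment into rotation-translation-rotation
    d_rot1 = np.arctan2(yb2 - yb, xb2 - xb) - tb
    d_trans = np.hypot(xb2 - xb, yb2 - yb)
    d_rot2 = tb2 - tb - d_rot1
    a1, a2, a3, a4 = alphas
    # perturb each relative motion with zero-mean Gaussian noise
    dh_rot1 = d_rot1 - np.random.normal(0, np.sqrt(a1 * d_rot1**2 + a2 * d_trans**2))
    dh_trans = d_trans - np.random.normal(0, np.sqrt(a3 * d_trans**2 + a4 * (d_rot1**2 + d_rot2**2)))
    dh_rot2 = d_rot2 - np.random.normal(0, np.sqrt(a1 * d_rot2**2 + a2 * d_trans**2))
    # apply the perturbed motion in the particle's own frame
    x, y, theta = x_prev
    return np.array([x + dh_trans * np.cos(theta + dh_rot1),
                     y + dh_trans * np.sin(theta + dh_rot1),
                     theta + dh_rot1 + dh_rot2])
```

With all alphas set to zero the sampler becomes deterministic, which is a handy sanity check before tuning the noise.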
Sensor Model
The sensor model p(zt|xt,m) is needed as part of the correction step for computing importance weights (proportional to observation likelihood) for each particle. Since the robot is equipped with a laser range finder sensor, we’ll be using a beam measurement model of range finders. Section 6.3 of [1] details this beam measurement model p(zt|xt,m) as a mixture of four probability densities, each modeling a different type of measurement error. You’ll have to play around with parameters for these densities based on the sensor data logs that you have. You are also free to go beyond a mixture of these four probability densities and use a measurement model that you think describes the observed laser scans better.
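A minimal sketch of the four-component mixture for a single beam is given below. All weights and density parameters are illustrative starting points, not tuned values, and (per the hint below) the mixture is deliberately left unnormalized.

```python
import numpy as np

def beam_likelihood(z, z_star, z_max=8183.0,
                    w_hit=1.0, w_short=0.1, w_max=0.1, w_rand=10.0,
                    sigma_hit=100.0, lambda_short=0.01):
    """Unnormalized beam mixture p(zt | zt*) from Section 6.3 of [1] (a sketch).

    z, z_star, z_max are ranges in the same units as the data logs; all
    mixture weights and density parameters here are assumed placeholders.
    """
    # correct-range reading with local Gaussian measurement noise
    p_hit = np.exp(-0.5 * ((z - z_star) / sigma_hit) ** 2) if z <= z_max else 0.0
    # unexpected short reading (e.g. dynamic obstacle), exponential decay
    p_short = lambda_short * np.exp(-lambda_short * z) if z <= z_star else 0.0
    # max-range failure: a point mass at z_max
    p_max = 1.0 if z >= z_max else 0.0
    # unexplainable random clutter, uniform over the range span
    p_rand = 1.0 / z_max if z < z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

Per-particle weights would then be a product (or log-sum) of beam_likelihood over the subsampled beams of a scan.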
Additionally, as part of this beam measurement model, you’ll be performing ray-casting on the occupancy map so as to compute true range readings zt^k* from individual particle positions (shifted to the laser position).
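One simple way to ray-cast is to step along the beam in fixed increments until an occupied cell is hit. The sketch below assumes a 2D array of occupancy probabilities and illustrative values for the step size, occupancy threshold, and map resolution; check data/instruct.txt for the actual units and conventions.

```python
import numpy as np

def ray_cast(occ_map, x, y, theta, step=2.0, z_max=8183.0,
             occ_thresh=0.35, res=10.0):
    """Coarse ray-cast on an occupancy grid (a sketch; parameters are assumed).

    occ_map : 2D array of occupancy probabilities
    x, y    : laser position in world units, theta : beam angle (rad)
    res     : world units per grid cell
    Returns the simulated range zt* along the beam.
    """
    r = 0.0
    while r < z_max:
        cx = int((x + r * np.cos(theta)) / res)
        cy = int((y + r * np.sin(theta)) / res)
        if not (0 <= cy < occ_map.shape[0] and 0 <= cx < occ_map.shape[1]):
            break                          # walked off the map
        if occ_map[cy, cx] > occ_thresh:
            return r                       # hit an occupied cell
        r += step
    return z_max                           # no obstacle within sensor range
```

A coarser step trades range accuracy for speed; see the efficiency notes in Section 3 for precomputing these casts.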
Hint: The book specifies that the sensor model needs to be a normalized probability distribution, however, we have found in practice it is easier to debug when the mixture weights (and thus the distribution) are unnormalized as the particle weights are later normalized.
Note: Keep in mind that the sensor coordinate frame and robot coordinate frame are not perfectly aligned. See data/instruct.txt for details on this as well as units used in the provided data.
Resampling
One strategy for reducing the variance in particle filtering is using a resampling process known as low variance sampling. Another strategy is to reduce the frequency at which resampling takes place. Refer to the Resampling subsection under Section 4.3.4 of [1] for more details on variance reduction and using low variance resampling for particle filters.
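The low variance sampler (Table 4.4 of [1]) draws a single random offset and then sweeps M equally spaced pointers through the cumulative weights, so that a particle with weight w is copied roughly w·M times:

```python
import numpy as np

def low_variance_resample(X, w):
    """Low variance sampler from Table 4.4 of [1] (a sketch).

    X : (M, 3) particle poses, w : normalized weights summing to 1.
    """
    M = len(X)
    r = np.random.uniform(0, 1.0 / M)   # single random start in [0, 1/M)
    c = w[0]                             # running cumulative weight
    i = 0
    X_new = np.zeros_like(X)
    for m in range(M):
        U = r + m / M                    # equally spaced sampling points
        while U > c:
            i += 1
            c += w[i]
        X_new[m] = X[i]
    return X_new
```

Because only one random number is drawn, the sampler both runs in O(M) and keeps particle diversity when the weights are near-uniform.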
3 Implementation
Resources
• data/instruct.txt – Format description for the map and the data logs.
• data/log/robotdataN.log – Five data logs (odometry and laser data).
• data/map/wean.dat – Map of Wean Hall to use for localization.
• assets/wean.gif – Image of map (for reference).
• assets/robotmovie1.gif – Animation of data log 1 (for reference).
We have also provided you with helper code (in Python) that reads in the occupancy map, parses robot sensor logs and implements the outer loop of the particle filter algorithm illustrated in Algorithm 1. The motion model, sensor model, and resampling implementations are left as an exercise for you.
• main.py – Parses sensor logs (robotdata1.log) and implements outer loop of the particle filter algorithm shown in Algorithm 1. Relies on SensorModel, MotionModel and Resampling classes for returning appropriate values.
• map_reader.py – Reads in the Wean Hall map (wean.dat) and computes and displays the corresponding occupancy grid map.
• motion_model.py, sensor_model.py, resampling.py – Provide class interfaces for the expected input/output arguments. Implementations of the corresponding algorithms are left as an exercise for you.
You are free to use the helper code directly or purely for reference purposes. To utilize the framework, please start with a Python 3 environment. We highly recommend creating a virtual environment to be used for the rest of this class using e.g. conda. A short tutorial for creating a virtual environment can be found here. With your virtual environment or in your global environment you can use pip install -r requirements.txt (located in the code directory) to install the basic dependencies.
Improving Efficiency
Although there is no real-time requirement, the faster your code runs, the more particles you can feasibly use and the quicker your parameter-tuning iterations will be. You’ll most probably have to apply some implementation ‘hacks’ to improve performance, for instance:
• Initializing particles in completely unoccupied areas instead of uniformly everywhere on the map.
• Subsampling the laser scans to, say, every 5 degrees instead of considering all 180 range measurements.
• Since motion model and sensor model computations are independent for all particles, parallelizing your code would make it much faster.
• You’ll observe that operations like ray-casting are among the most computationally expensive. Think of approaches to make them faster, for instance using coarser discrete sampling along the ray or even precomputing a look-up table for the ray-casts.
• Lastly, if you use Python, apply vectorization as much as possible; if you’re comfortable with C++, consider using the OpenMP backend (a one-liner) to accelerate your code.
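The first hack above, initializing particles only in free space, can be sketched as below. The threshold and resolution values are assumptions; adapt them to the conventions described in data/instruct.txt.

```python
import numpy as np

def init_particles_free_space(occ_map, M, res=10.0, free_thresh=0.0):
    """Draw M particles uniformly over free cells only (a sketch).

    Cells whose occupancy probability equals free_thresh are treated as
    free; unknown cells (e.g. -1) and occupied cells are skipped. The
    resolution and threshold here are assumed placeholder values.
    """
    free_y, free_x = np.where(occ_map == free_thresh)
    idx = np.random.choice(len(free_x), size=M)
    # jitter within the chosen cell and draw a uniform random heading
    x = (free_x[idx] + np.random.uniform(0, 1, M)) * res
    y = (free_y[idx] + np.random.uniform(0, 1, M)) * res
    theta = np.random.uniform(-np.pi, np.pi, M)
    return np.column_stack([x, y, theta])
```

Compared with uniform initialization over the whole map, this concentrates the same particle budget on poses that are actually possible, which speeds up convergence.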
Debugging
For easier debugging, ensure that you visualize and test individual modules like the motion model, sensor model or the resampling separately. Some ideas for doing that are,
• Test your motion model separately by using a single particle and plotting its trajectory on the occupancy map. The odometry will cause the particle position to drift over time globally, but locally the motion should still make sense when compared with the given animation of data log 1 (robotmovie1.gif).
• Cross-check your sensor model mixture probability distribution by plotting the p(zt | zt∗) graph for some set of values of zt∗.
• Test your ray-casting algorithm outputs by drawing robot position, laser scan ranges and the ray casting outputs on the occupancy map.
4 What to turn in
Score breakup
• (10 points) Motion Model: implementation correctness, description
• (20 points) Sensor Model: implementation correctness, description
• (10 points) Resampling Process: implementation correctness, description
• (10 points) Discussion of parameter tuning: for Motion and Sensor Models
• (30 points) Performance: accuracy and convergence speed on example logs
• (20 points) Write-up quality, video quality, readability, description of performance, and future work
• (Extra Credit: 10 points) Kidnapped robot problem
• (Extra Credit: 10 points) Adaptive number of particles
• (Extra Credit: 5 points) Vectorized Python or fast C++ implementation
5 Extra credit
Focus on getting your particle filter to work well before attacking the extra credit. Points will be given for an implementation of the kidnapped robot problem and adaptive number of particles. Please answer the corresponding questions below in your write up.
i. Kidnapped robot problem: The kidnapped robot problem commonly refers to a situation where an autonomous robot in operation is carried to an arbitrary location. You can simulate such a situation by either fusing two of the log files or removing a chunk of readings from one log. How would your localization algorithm deal with the uncertainty created in a kidnapped robot scenario?
ii. Adaptive number of particles: Can you think of a method that is more efficient to run, based on reducing the number of particles over timesteps? Describe the metric you use for choosing the number of particles at any time step.
You will also receive bonus credit if your implementation is optimized, either with vectorization in Python or acceleration in C++.
6 Advice
The performance of your algorithm is dependent on (i) parameter tuning and (ii) number of particles. While increasing the number of particles gives you better performance, it also leads to increased computational time. An ideal implementation has a reasonable number of particles, while also not being terribly slow. Consider these factors while deciding your language of choice—e.g. choosing between a faster implementation in C++ or vectorized optimization in Python vs. using the raw Python skeleton code.
References
[1] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.