INTRODUCTION:
SLAM stands for Simultaneous Localization and Mapping. In robotics, SLAM is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, several algorithms are known that solve it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter and the extended Kalman filter.
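As a rough illustration of the particle-filter idea (applied here only to localization against a known map, not full SLAM), the sketch below uses an assumed one-dimensional corridor with a single wall; all names and numbers in it are illustrative and not taken from this project.

```python
import math
import random

# Assumed toy map: a corridor with a single wall 10 m from the origin.
WALL_AT = 10.0
N = 500  # number of particles (position hypotheses)

particles = [random.uniform(0.0, 10.0) for _ in range(N)]

def step(particles, odom, measured_range, motion_noise=0.1, sensor_noise=0.3):
    # 1. Motion update: shift every particle by the odometry reading, plus noise.
    moved = [p + odom + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Measurement update: weight each particle by how well the measured
    #    range to the wall matches the range expected from that particle.
    weights = [math.exp(-0.5 * ((measured_range - (WALL_AT - p)) / sensor_noise) ** 2)
               for p in moved]
    # 3. Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# One filter step: the robot reports moving 1 m and measuring 7 m to the wall.
particles = step(particles, odom=1.0, measured_range=7.0)
estimate = sum(particles) / len(particles)  # posterior mean position
```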
SLAM algorithms are tailored to the available resources, and hence aim not at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newly emerging domestic robots, and even inside the human body.
One way to understand SLAM is to imagine a person entering an unfamiliar building for the first time. When he walks in the front door, his eyes immediately begin to gaze about, and he quickly assesses the layout of the room or rooms nearest to his current location. At this point, he knows that he is located at the front entrance, and he has an initial sense of the layout, or map, of a small part of the building. As he crosses the floor ahead, his eyes and head continue to scan from side to side, and he notices doorways and other entrances leading to additional rooms, and perhaps even stairways or elevators leading up or down to additional floors.
As he moves about the building, he doesn’t completely forget where he has already been. Indeed, at any moment he has a pretty good idea where he is within the current map that he has so far constructed in his head, and unless he has a really bad sense of direction, he could probably turn around and get back out of the building without too much trouble. Finding his way around the building is a good example of simultaneously constructing a map and localizing himself within that map.
1.2 Description of SLAM Problem:
For autonomous mobile robots, learning maps is often essential. The ability to navigate an environment autonomously depends on having a map, and creating this map manually is often a hard and labor-intensive effort; maintaining it can prove costly enough to render the robot unusable. Autonomous mobile robots also need to localize themselves in their environment. Some sensor arrays can provide a full state estimate, such as an overhead camera combined with computer vision software.
The above solution is used primarily when the environment is restricted to a small surface, such as in the Micro-Robot World Cup Soccer Tournament (MiroSot), where the robot's position can be computed directly. However, when the environment grows, or when the environment must not be modified in order for the robot to obtain a position estimate, such sensor arrays become infeasible. For a robot exploring unprepared indoor environments, its location most often has to be computed from several sensor scans and depends on a map.
Importantly, the problem of SLAM, learning a map while simultaneously estimating the robot's position in that map, consists of two mutually dependent sub-problems. If a complete and accurate map existed, simpler algorithms such as Monte Carlo Localization could be used to generate position estimates. Likewise, if a complete history of accurate positions existed for the robot, map learning would reduce to writing sensor data into a map representation. It is a chicken-and-egg problem. For this reason, the problem is recognized to be hard, and it requires a search for a solution in a high-dimensional space of possible locations and maps.
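To make the second sub-problem concrete, the sketch below shows how map building reduces to a bookkeeping step once every pose is known. It assumes a 2D pose for each scan and a simple point-cloud map rather than a full occupancy grid; the function and variable names are illustrative and not taken from any particular SLAM library.

```python
import math

def update_map(world_map, pose, scan):
    """Add one range scan to the map, assuming the robot pose is known.

    world_map : list of (x, y) points already in the map (world frame)
    pose      : (x, y, theta) of the robot in the world frame
    scan      : list of (bearing, range) readings in the robot frame
    """
    rx, ry, rtheta = pose
    for bearing, rng in scan:
        # Convert the polar reading to a point in the world frame.
        angle = rtheta + bearing
        px = rx + rng * math.cos(angle)
        py = ry + rng * math.sin(angle)
        world_map.append((px, py))
    return world_map

# Example: one scan taken from a known pose (1 m, 2 m, heading 90 degrees).
world_map = []
pose = (1.0, 2.0, math.radians(90))
scan = [(math.radians(a), 1.5) for a in range(-90, 91, 30)]
update_map(world_map, pose, scan)
```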
The setting for the 2D SLAM problem is that of a robot moving in an environment consisting of a population of features, as shown in Figure 1.1. The robot is equipped with proprioceptive sensors that can measure its own motion and exteroceptive sensors that can take measurements of the relative location between the robot and nearby features. The objective of the SLAM problem is to estimate the position and orientation of the robot together with the locations of all the features. In this project, obstacle positions are estimated with an ultrasonic sensor swept over 180 degrees; the readings are fed to MATLAB by a Python script running on a Raspberry Pi mounted on the robot, and MATLAB then draws a plot using the coordinates received.
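A minimal sketch of the Raspberry Pi side of that pipeline is shown below, assuming the sweep arrives as (angle, distance) pairs and is handed to MATLAB through a CSV file; the file name, sweep step, and hand-off format are assumptions for illustration and may differ from the project's actual script.

```python
import csv
import math

def sweep_to_points(readings):
    """Convert (servo_angle_deg, distance_cm) pairs from a 180-degree
    ultrasonic sweep into (x, y) coordinates in the robot frame."""
    points = []
    for angle_deg, dist_cm in readings:
        theta = math.radians(angle_deg)
        points.append((dist_cm * math.cos(theta), dist_cm * math.sin(theta)))
    return points

# Hypothetical sweep: one reading every 10 degrees, all obstacles at 50 cm.
readings = [(a, 50.0) for a in range(0, 181, 10)]
points = sweep_to_points(readings)

# Write the coordinates to a CSV file that MATLAB can read
# (for example with readmatrix) and plot.
with open("scan_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(points)
```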