Using the LabVIEW Robotics Environment Simulator, we have implemented an Extended Kalman Filter (EKF) based Simultaneous Localization and Mapping (SLAM) algorithm. Our project also includes an obstacle avoidance controller to navigate the robot around obstacles. Figure 1 shows the test environment used.
Most robots, especially autonomous robots, need to be clever enough to avoid bumping into obstacles. To do this, they need sensors to investigate the environment around them, and they need to process this data to identify obstacles in their vicinity. Finally, they need to be able to generate motor commands that steer them clear of any obstacles around them.
This is the simplest form of obstacle avoidance, and there are tons of examples on the internet, with plenty of little robots that can do this quite nicely. With this sort of rudimentary obstacle avoidance algorithm, a robot could keep clear of obstacles, but it would most likely wander around aimlessly while doing so. Sometimes robots need to be a little more intelligent. They need to reach a goal: perhaps they are chasing a target, perhaps they need to reach one of many way-points along a pre-determined path, perhaps they are headed toward a battery charging point or a position of interest, maybe they are meeting up with a friend, or perhaps they need to duck under enemy radar cover while approaching a target!
Whatever the case, these kinds of robots, robots that can navigate intelligently, need to have a slightly more robust obstacle avoidance behaviour built into them. Clever robots can avoid obstacles as they head towards a goal. What are some of the things a robot would need to achieve this sort of tasking? It would need to know the position of its goal in space: "Where am I going?" It would need to know its own position in space and be able to update this position as it moved within this space: "Where am I now?" There are several ways to do this.
A robot could use shaft encoders on its wheels to estimate its position, or use some other means to obtain a position fix. If it were outdoors, it could use a GPS receiver. It would need sensors to locate obstacles: "What is there around me?"
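As a quick illustration of the shaft-encoder approach, here is a minimal dead-reckoning sketch for a differential-drive robot. The wheel radius, wheel base, and tick counts below are invented for the example and are not from this article:

```python
import math

# Hypothetical robot parameters (not from the article):
TICKS_PER_REV = 360       # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.03       # wheel radius in metres
WHEEL_BASE = 0.15         # distance between the wheels in metres

def dead_reckon(x, y, theta, left_ticks, right_ticks):
    """Update the pose estimate (x, y, heading) from encoder counts."""
    # Distance travelled by each wheel since the last update.
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0               # forward distance of the robot centre
    dtheta = (dr - dl) / WHEEL_BASE   # change in heading (radians)
    # Advance along the average heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Driving straight ahead, both wheels advance by the same tick count, so the heading stays unchanged and the robot moves forward by one wheel circumference per wheel revolution.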
There are plenty of options for sensing obstacles: ultrasonic pingers, infra-red Tx-Rx modules, LIDAR. It would need to choose a path that would keep it clear of obstacles and still eventually lead it to its goal: "How do I get to my goal safely?" This is where a good obstacle avoidance algorithm comes into play. So here is my autonomous robot obstacle avoidance simulator. I wrote this in Python, and most of the 400-odd lines of code are only there to build the simulation, display graphics, etc. The actual robot decision-making code is only about 20 lines long and can easily be implemented on a small microcontroller-like device. The Robot and its Goal.
The robot is shown as a yellow circle marked “R”. The blue line represents its direct Line of Sight to its Goal (Marked with a green circle and annotated “G”). The white line represents the current recommended heading of the robot.
In this screenshot, the blue and white lines coincide. The tiny green circles behind the robot are its position trail. This robot is fitted with 18 distance measuring sensors, shown numbered from 8 to -8. Each has a Field of View of 10 degrees and a maximum pick-up range of 200 pixels. For a practical implementation on a real robotic platform, one could just as well use 3 sensors and sweep them through 60 degrees using a servo motor. Avoiding Obstacles. This algorithm uses the method of weighted sums to determine a recommended heading for the robot. Much of the program logic is based on the work of Ioan Susnea, Viorel Minzu and Grigore Vasiliu, and I hereby acknowledge their work and place on record my thanks to them for sharing it. So how does this algorithm work?
Simply put, each sensor is given a weight according to its “look” direction. Sensors closer to the middle have lower weights and those that look sideways have the maximum weight. Sensors on the port side have negative weights and those that look to starboard have positive weights. The output from each sensor is the distance of the nearest obstacle along its Line of Sight.
If no obstacle is present, the sensor outputs a maximum value. We then just sum up and normalise the weighted outputs of each sensor to produce a recommended turning direction. As you can easily make out, whenever obstacles are present on any particular side of the robot, the readings of the sensors on that side are squeezed. Thus sensors on the "free" side will dominate the weighted sum, and the recommended output will reflect this bias. Look at the images below: the algorithm works at first, but eventually the robot crashes through the wall. So clearly there is still much work to be done, and I have made several modifications to the original concept to try and improve the algorithm, some of which I list below.
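The weighted-sum rule described above can be sketched in a few lines of Python. The sensor count, weighting scheme, and normalisation here are my own reading of the description, not the simulator's exact code:

```python
# A minimal sketch of the weighted-sum steering rule.
# Constants are illustrative, not the article's exact values.
NUM_SENSORS = 17            # indices -8 .. +8; index 0 looks straight ahead
MAX_RANGE = 200.0           # pixels; a sensor returns this when nothing is seen

def recommend_turn(distances):
    """Return a normalised turn recommendation in [-1, 1].

    `distances[i]` is the reading of the sensor with signed index i - 8,
    so port-side (negative-index) sensors come first in the list.
    A negative result means turn to port, positive means starboard.
    """
    total = 0.0
    for i, d in enumerate(distances):
        weight = i - 8          # port sensors weigh negative, starboard positive
        # An obstacle squeezes d on its own side, so weighting the measured
        # free space pushes the sum toward the clearer side.
        total += weight * (d / MAX_RANGE)
    # Normalise by the largest possible magnitude of the sum.
    max_total = sum(abs(i - 8) for i in range(NUM_SENSORS))
    return total / max_total
```

With no obstacles anywhere, the symmetric weights cancel and the recommendation is zero; obstacles crowding the starboard sensors drive the sum negative, steering the robot to port.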
I modified the algorithm so that the sensor array is always centered on the Line of Sight to the goal and not centered on the robot’s heading. This makes sense since the algorithm shouldn’t really care which direction the robot is facing, rather it should be always checking for a clear path to the goal. At each step the robot checks if there is a clear path to the target. Obviously it can only do this with the obstacles currently in its view and not all the obstacles in the simulation.
![Lidar](/uploads/1/2/5/3/125350301/867547882.png)
If there is a clear path, the robot abandons any recommendations of the obstacle avoidance algorithm and heads straight for the goal until fresh obstacles are detected and deviations to track are needed. I have uploaded the simulation to the website, and you can experiment with it by clicking. Press the "play" button on the CodeSkulptor toolbar to start the simulation. The code is a little messy and still under development, but it should provide a good starting point for a simple obstacle avoidance algorithm. Use the "Set Robot" button to set the position of the Robot.
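The clear-path override can be sketched as follows. The obstacle representation (circles of `(cx, cy, radius)`) and the geometry helper are my own assumptions for illustration, not taken from the simulator's code:

```python
import math

def segment_hits_circle(p, q, circle):
    """True if the line segment p-q intersects a circular obstacle."""
    (px, py), (qx, qy) = p, q
    cx, cy, r = circle
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - cx, py - cy) <= r
    # Project the circle centre onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nx, ny = px + t * dx, py + t * dy      # nearest point on the segment
    return math.hypot(nx - cx, ny - cy) <= r

def choose_heading(robot, goal, obstacles, avoidance_heading):
    """Head straight for the goal if the line of sight is clear;
    otherwise keep the obstacle-avoidance recommendation."""
    if any(segment_hits_circle(robot, goal, ob) for ob in obstacles):
        return avoidance_heading
    return math.atan2(goal[1] - robot[1], goal[0] - robot[0])
```

In a real run the `obstacles` list would hold only the obstacles currently in the sensors' view, matching the note above that the robot can only check against what it can see.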
The “Set Goal” button allows us to change the position of the goal, and the “Add Obs” button allows us to add more obstacles as needed. Tweaking the constants on lines 8 to 17 will alter the behaviour of the robot. Use the “Step” button to step through the simulation. The “Set Start” button is irrelevant at this stage of development, and I will probably remove it some time soon. Hopefully the video below provides a good demo of the algorithm at work.
Ultrasonic Sensor. We need to follow these steps to configure the ultrasonic sensor. Step 1: Connect the ultrasonic sensor. Vcc goes to 5V via the wire we attached earlier.
GND goes to the GND wire, the ECHO pin goes to Analog Pin 5, and the TRIG pin goes to Analog Pin 4. Step 2: Next, we connect the power. We connect the black wire from the switch to the motor shield; it goes to the GND pin of the external power connector. The red wire from the switch goes to the M+ connector.
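Once wired, this style of sensor reports distance as the width of the pulse returned on the ECHO pin after the TRIG pin is strobed. A minimal sketch of the conversion, assuming the speed of sound is roughly 343 m/s (0.0343 cm per microsecond); the function name and units are my own choice:

```python
# Speed of sound, assumed ~343 m/s at room temperature.
SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_duration_us):
    """Convert an ECHO pulse width in microseconds to a distance in cm.

    The pulse covers the round trip to the obstacle and back,
    so the result is halved.
    """
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2.0
```

For example, a 1000 microsecond echo corresponds to roughly 17 cm; the same arithmetic works unchanged inside an Arduino sketch.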
If we turn the switch on, we can see that the green LED lights up: the Arduino is receiving power from the batteries.