A Full Autonomous Stack, a Tutorial | ROS + Raspberry Pi + Arduino + SLAM

A walkthrough for setting up a ROS stack on a Raspberry Pi! Post still in progress and will be updated periodically 🙂

1. Building a robot.


This part is somewhat looser than the others. In a nutshell, find some motors, wheels, motor controllers, and some connecting materials. Throw them all at a wall and hope that they come together nicely.

I have used RobotShop.com’s:

  • Scout platform
  • Slightly stronger motors than the ones it ships with. (Something like this.)
  • Cytron 10A 5-30V Dual Channel DC Motor Driver.
  • YDLIDAR G2 Lidar.
  • Raspberry Pi 3 Model B. (I could not install ROS on a Raspberry Pi 4 at this time, maybe you could!)

2. Installing ROS

ROS (Robot Operating System) is a framework that facilitates the use of a wide variety of "packages" to control a robot. Those packages range all the way from motion control, to path planning, mapping, localization, SLAM, perception, and more. ROS provides a relatively simple interface to those packages, and, of course, the ability to create custom packages.
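To get a feel for how simple that interface is, here is a minimal Python ("rospy") node that publishes a string message once a second. The node and topic names are made up purely for illustration:

#!/usr/bin/env python
# Minimal example of the ROS Python API: publish a string at 1 Hz.
# The node name and topic name are arbitrary, just for illustration.
import rospy
from std_msgs.msg import String

rospy.init_node('hello_publisher')
pub = rospy.Publisher('greetings', String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data='hello from ROS'))
    rate.sleep()

Another node, on the same machine or (as we will see soon) on another machine entirely, could subscribe to the greetings topic and react to those messages. That publish/subscribe pattern is most of what we will be doing below.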

Note: The Raspberry Pi 4 is more computationally capable than its predecessors. However, installing ROS on the Pi3 is currently (as of December 2019) easier, and allegedly more reliable.

Get the disc image

I downloaded the Ubuntu 16.04 (Xenial) image with pre-installed ROS from Ubiquity Robotics. They have great instructions on how to download and install the image. The main points are:

  • Download the image from the top of the page.
  • Flash it to an SD card (at least 8GB). You can use Etcher; it works well.
  • Connect to the WiFi network that starts with ubiquityrobot. Password is robotseverywhere.
  • Go to Terminal, and connect to your Pi using ssh ubuntu@10.42.0.1. Password is ubuntu.
  • Run roscore to make sure that things are working properly. If you get warnings or errors, try stopping ROS and starting it again with killall -9 roscore.

3. Remotely connecting to ROS

We want to be able to access the ROS communication messages from our laptop. There are a couple of steps to take here.

  • Spin up a Linux machine with ROS Kinetic Kame, either a virtual machine or a real one. You can use VMware Fusion with Ubuntu 16.04 or something similar. We will refer to that machine as the Observer machine. The robot is the Master.

  • On the Master, find the ROS_IP and ROS_MASTER_URI. These are the two pieces of information both machines need in order to communicate. Find the ROS_IP by running ifconfig.

  • I would add the export lines below to .bashrc on the Observer machine, or create a script that runs them together. This is not required, but it will (potentially) make your life easier in the long run, so you won't need to type those lines every time you want to connect to the robot 🙂

  • On the Master (robot), run roscore.

  • On the observer, you now have access to the messages and topics that are on the Master. More on that soon.

  • On the robot (the machine running roscore):

    • ROS_IP is its own IP.
    • ROS_MASTER_URI is http://<its own IP>:11311.
  • On the observer computer:

    • ROS_IP is its own IP.
    • ROS_MASTER_URI is http://<the robot's IP>:11311.

In this example (the IPs will probably be different in your network), on the robot we set:

export ROS_IP=192.168.43.228
export ROS_MASTER_URI=http://192.168.43.228:11311

On the observer laptop, we set:

export ROS_IP=192.168.43.123
export ROS_MASTER_URI=http://ubiquityrobot.local:11311

The master URI looks different on the laptop, but ubiquityrobot.local is just an alias for the robot's address. I believe that setting it to http://192.168.43.228:11311 would work as well (it should resolve to the same machine as the .local name), but I did not test it.

A couple of notes here:

  • To make sure the communication works, I followed this tutorial to publish basic shapes to Rviz.
  • I had to make the messages compatible with Indigo, following an answer here. (The solution that involves downloading the Indigo common_msgs folder and using its visualization_msgs package folder in catkin_ws/src.)
  • In Rviz, make sure to set the frame to my_frame (if you are following that tutorial).

4. Connecting to WiFi

A short step, to make sure both machines have internet connectivity. The information is taken from this website.

  • On the robot machine, run pifi add YOURNETWORKNAME YOURNETWORKPASSWORD
  • Restart the Pi, sudo reboot. Now the Raspberry Pi will connect to your WiFi network on startup. To connect to it, connect your computer to the same network, and ssh ubuntu@ubiquityrobot.local with the password ubuntu.

Woo! Now both machines have internet, and can communicate over SSH.

5. Testing the lidar

This step was a bit of a doozy. It took me a while to figure out how to get the lidar to run, but I did! So hopefully you won't have to suffer as much.

I am using the YDLIDAR G2 for this build. The first step is to install the necessary drivers. The driver is a ROS package.

  • cd catkin_workspace/src.
  • git clone https://github.com/EAIROBOT/ydlidar_ros.git.
  • catkin_make
  • Follow the directions from the repository, written below:
    • roscd ydlidar_ros/startup
    • sudo chmod 777 ./*
    • sudo sh initenv.sh
  • Go back to your catkin workspace, and run source devel/setup.bash.
  • git checkout G2 to move to the branch for your Lidar model.
  • Run catkin_make again.

Test the lidar with roslaunch ydlidar_ros lidar.launch. Visualize the scans in Rviz by adding the /scan topic.

It may look something like this! Background may vary 🙂
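If Rviz is not handy, you can also sanity-check the lidar with a quick Python script. This is a throwaway sketch; it assumes the driver is publishing on the /scan topic, as it does with the launch file above:

#!/usr/bin/env python
# Quick lidar sanity check: print the distance to the closest obstacle.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # Keep only readings within the sensor's valid range.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo('closest obstacle: %.2f m', min(valid))

rospy.init_node('scan_check')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()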

6. ROS + Arduino: Getting them to talk to each other.

As we know, the Raspberry Pi is the "brain" of our robot, perceiving the environment and planning in it. The Arduino is simply used to control the motors of the robot; it doesn't do much thinking. So our goal here is to get commands from the Raspberry Pi to the Arduino, so it can tell the motors how to move accordingly. At a high level, what we do is install rosserial, a ROS package that enables serial communication with the Arduino, on both the Raspberry Pi and the Arduino.

  • Following the steps from the ROS website, we start with installing the package. sudo apt-get install ros-kinetic-rosserial-arduino, and then, sudo apt-get install ros-kinetic-rosserial. If you are using a ROS version different from Kinetic, change the word kinetic to your version.
  • In the following commands, substitute catkin_ws with the name of your catkin workspace.
    cd catkin_ws/src
    git clone https://github.com/ros-drivers/rosserial.git
    cd ..
    catkin_make
    catkin_make install

  • In your Arduino IDE, install the rosserial library. I found it the easiest to just do it from the IDE itself. Search for rosserial in the Library Manager and install it.

And that’s it!

For a test run, try the HelloWorld example from the examples included with the rosserial library. Flash the Arduino with it, and connect it to the Raspberry Pi. To run it:

  • On the Raspberry Pi, run roscore.
  • In a second Raspberry Pi terminal, run rosrun rosserial_python serial_node.py /dev/ttyACM0. Replace ttyACM0 with the port of your Arduino. You can check the port by looking at /dev/ and observing which file disappears and re-appears when the Arduino is unplugged and plugged back in.
  • In a third terminal, run rostopic echo chatter to see the messages being sent.

7. Installing Hector-SLAM

This part is exciting! We will now add the mapping and localization functionality to our robot. We use the Hector-SLAM package, since it enables us to create maps and localize ourselves with a Lidar alone. I found this video by Tiziano Fiorenzani and the official resources on the ROS website helpful for setting Hector-SLAM up.

  • Clone the GitHub repository to your catkin workspace. Navigate to the src folder and run git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam.git.
  • [This may fail! See the sub-bullet for a workaround.] Build the workspace by running catkin_make, and then source setup.bash with source ~/catkin_ws/devel/setup.bash.
    • If your build stalls or seems very slow, limit the number of parallel jobs by running the build with catkin_make -j2.

We need to make a couple of modifications to the Hector SLAM tutorial files in order for them to work with our setup. We first take note of the transformations available to us on the /tf topic, and the reference frames they use.

  • Spin the lidar node, with roslaunch ydlidar_ros lidar.launch.
  • Check the communication on the /tf topic with rostopic echo /tf
  • I get only one transformation:
---                                                                          
transforms:                                                                         
  -                                                                          
    header:                                                                  
      seq: 0                                                                 
      stamp:                                                                 
        secs: 1578619851                                                     
        nsecs: 284012533                                                     
      frame_id: "/base_footprint"                                            
    child_frame_id: "/laser_frame"
    transform:                                             
      translation:                                         
        x: 0.2245                                          
        y: 0.0                                             
        z: 0.2                                             
      rotation:                                            
        x: 0.0                                             
        y: 0.0                                             
        z: 0.0                                             
        w: 1.0                                             
---                        
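If you prefer to query that transform from Python instead of echoing the topic, a small tf listener sketch (using the same two frame names) would look something like this:

#!/usr/bin/env python
# Look up the transform between the two frames reported on /tf above.
import rospy
import tf

rospy.init_node('tf_check')
listener = tf.TransformListener()
listener.waitForTransform('/base_footprint', '/laser_frame',
                          rospy.Time(0), rospy.Duration(5.0))
trans, rot = listener.lookupTransform('/base_footprint', '/laser_frame',
                                      rospy.Time(0))
print('translation:', trans)
print('rotation (quaternion):', rot)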

So we see that we have only two frames, namely /base_footprint and /laser_frame. We will update the file ~/catkin_ws/src/hector_slam/hector_mapping/launch/mapping_default.launch to accommodate them.

  • Near the top of the file, change the first line below to the second:
<arg name="odom_frame" default="nav"/>
<arg name="odom_frame" default="base_footprint"/>
  • Near the very bottom of the file, change the first line below to the second:
<node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 map nav 100"/>
<node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 base_footprint laser_frame 100"/>
  • Open ~/catkin_ws/src/hector_slam/hector_slam_launch/launch/tutorial.launch, and change the first line below to the second:
<param name="/use_sim_time" value="true"/>
<param name="/use_sim_time" value="false"/>

This should do the trick! Try it out!

  • In a first terminal, run the lidar with roslaunch ydlidar_ros lidar.launch
  • In a second terminal, run Hector SLAM with roslaunch hector_slam_launch tutorial.launch

You should be able to see the results in Rviz. Choose the /map topic to visualize the map that was created.

8. Lower Level Robot Control (That’s where the Arduino comes in!)

We now want to create a ROS package that allows ROS communication to move the robot in the world. Again, Tiziano Fiorenzani has a great video explaining the basics of what we are doing here. In a nutshell, we want a subscriber node that runs on the Arduino and listens to the topic /cmd_vel. We will begin by sending commands from the keyboard to the robot.

To see what this topic is all about, run rosrun teleop_twist_keyboard teleop_twist_keyboard.py. In another terminal, run rostopic info /cmd_vel to see that this topic carries messages of type geometry_msgs/Twist. Run rosmsg show geometry_msgs/Twist to see the attributes of the message. They are linear and angular velocity commands:

geometry_msgs/Vector3 linear
  float64 x
  float64 y
  float64 z
geometry_msgs/Vector3 angular
  float64 x
  float64 y
  float64 z
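By the way, you don't have to use the keyboard node to publish these messages; a few lines of Python can drive /cmd_vel directly. The speed values below are arbitrary examples:

#!/usr/bin/env python
# Publish a constant forward command on /cmd_vel, the same topic the
# teleop node uses. The speed values are arbitrary examples.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_driver')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # drive forward
cmd.angular.z = 0.0  # no rotation

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()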

Let's create the ROS node on our Arduino. We want to map the values we get from /cmd_vel (expressed as percentages) to the range [0,255] that our motor controller understands.

The entirety of the code for this node lives on the Arduino, so we use the sketch below and upload it. This is a very, very simple sketch that only supports moving forward and stopping. Check out the GitHub repo for a full program.


#if (ARDUINO >= 100)
#include <Arduino.h>
#else
#include <WProgram.h>
#endif

#include <ros.h>
#include <geometry_msgs/Twist.h>
// Pin variables for motors.
const int right_pwm_pin = 5;
const int right_dir_pin = A0;
const int left_pwm_pin = 6;
const int left_dir_pin = A1;
const bool left_fwd = true;
const bool right_fwd = false;

// Default_speed.
const int default_vel = 201;

ros::NodeHandle  nh;

void MoveFwd(const size_t speed) {
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  analogWrite(right_pwm_pin, speed);
  analogWrite(left_pwm_pin, speed);
}

void MoveStop() {
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  analogWrite(right_pwm_pin, 0);
  analogWrite(left_pwm_pin, 0);
}

void cmd_vel_cb(const geometry_msgs::Twist& msg) {
  // Read the message. Act accordingly.
  // We only care about the linear x, and the rotational z.
  const float x = msg.linear.x;
  const float z_rotation = msg.angular.z;

  // Decide on the motor state we need, according to the command.
  if (x > 0 && z_rotation == 0) {
    MoveFwd(default_vel);
  }
  else {
    MoveStop();
  }
}
ros::Subscriber<geometry_msgs::Twist> sub("cmd_vel", cmd_vel_cb);
void setup() {
  pinMode(right_pwm_pin, OUTPUT);    // Set the motor control pins as outputs.
  pinMode(right_dir_pin, OUTPUT);
  pinMode(left_pwm_pin, OUTPUT);
  pinMode(left_dir_pin, OUTPUT);
  // Set initial values for directions. Set both to forward.
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  nh.initNode();
  nh.subscribe(sub);
}

void loop() {
  nh.spinOnce();
  delay(1);
}

We can control the robot from our laptop now! In separate terminal instances, run the following:

  • Allow Arduino communication with rosrun rosserial_python serial_node.py /dev/ttyACM0
  • Enable keyboard control with rosrun teleop_twist_keyboard teleop_twist_keyboard.py

To make our lives easier for the next time we run the teleop node, we can create a launch file!

9. Launch files!

Creating a launch file is pretty simple, and can be done following the documentation on ROS.org. In our case, we end up with the following launch file to launch all the necessary nodes for keyboard teleoperation.

<launch>
  <node pkg="rosserial_arduino" type="serial_node.py" name="serial_arduino">
    <param name="port" value="/dev/ttyACM0" />
  </node>
  <node pkg="teleop_twist_keyboard" type="teleop_twist_keyboard.py" name="teleop_twist_keyboard" />
</launch>

I have placed this launch file in the directory ~/catkin_ws/src/lidarbot/launch. Don't forget to catkin_make and source devel/setup.bash!

We can now run the robot in a teleoperated mode with

roslaunch lidarbot lidarbot_teleop.launch

10. Correcting angle offset.

When I was designing the Lidar mount that I ended up 3D printing, I failed to look through the datasheet and design it in a way such that the "forward" direction of the Lidar would actually point forward. Let's correct that.

Because of a lack of time, let’s do a somewhat hack-y patch.

Navigate to /catkin_ws/src/ydlidar/sdk/src/CYdLidar.cpp, and change the function void CYdLidar::checkCalibrationAngle(const std::string &serialNumber) to the following. We are simply overriding the angle offset value provided by the Lidar model.


void CYdLidar::checkCalibrationAngle(const std::string &serialNumber) {
  m_AngleOffset = 0.0;
  result_t ans;
  offset_angle angle;
  int retry = 0;
  m_isAngleOffsetCorrected = false;

  float override_offset_angle = 140.0;

  while (retry < 2) {
    ans = lidarPtr->getZeroOffsetAngle(angle);

    if (IS_OK(ans)) {
      if (angle.angle > 720 || angle.angle < -720) {
        ans = lidarPtr->getZeroOffsetAngle(angle);

        if (!IS_OK(ans)) {
          retry++;
          continue;
        }
      }

      m_isAngleOffsetCorrected = (angle.angle != 720);
      m_AngleOffset = angle.angle / 4.0;
      printf("[YDLIDAR INFO] Successfully obtained the %s offset angle[%f] from the lidar[%s]\n"
             , m_isAngleOffsetCorrected ? "corrected" : "uncorrrected", m_AngleOffset,
             serialNumber.c_str());

      std::cout << "Overriding offset angle to " << override_offset_angle << "\n";
      m_AngleOffset  = override_offset_angle;
      return;
    }

    retry++;
  }
}

Great, our Lidar’s arrow points forward now.

11. Save a map.

[Update December 2020] There is a better/more correct way to save a map than the one I had initially outlined below. The original method comes first, and the better one follows.

11a. One way to save a map (this is not what you need for localization!)

In separate terminals, run:

roslaunch lidarbot lidarbot_teleop.launch

roslaunch ydlidar_ros lidar.launch

roslaunch hector_slam_launch tutorial.launch

And open Rviz from another Linux machine, if possible.

Now, as you drive around the space (slowly! We want the map to be built accurately, so no need to give it a hard time), you'll see a map being built in real time in Rviz. The lighter colors are empty space, and the dark ones are obstacles.

When you think your map is sufficiently good, run the following:

rostopic pub syscommand std_msgs/String "savegeotiff"

This will save .tif and .tfw files in the ~/catkin_ws/src/hector_slam/hector_geotiff/maps directory.

The map will look something like this:

11b. Second way to save a map. Use this for localization!

I have been following the excellent tutorials provided by The Construct on YouTube. They have information about how to record a map, how to provide a map to Rviz, and how to perform localization with the map on a Husky robot. In this section we first look at how to record a map.

Let’s begin by downloading the map server that will do the heavy lifting for us. We will do this with sudo apt-get install ros-kinetic-map-server on the robot Raspberry Pi.

To record a map, we should spin up the robot in a similar way to how it was done in section 11a. Run the following in separate terminal instances.

roslaunch lidarbot lidarbot_teleop.launch
roslaunch ydlidar_ros lidar.launch
roslaunch hector_slam_launch tutorial.launch

Move it around the room slowly until you are happy with how the map looks in Rviz (or just hope that it looks okay 🙂 ), and then run:

rosrun map_server map_saver -f my_map

This command will save my_map.yaml and my_map.pgm files! These specify the occupancy information of the map. You can change the name of the map by changing the my_map argument to whichever name you'd like. The .pgm file can be used to visualize the map that was created! From your computer, you can use "Secure Copy", aka SCP, to download the .pgm file and visualize it. In my case, I saved my map files to ~/catkin_ws/maps/, so I downloaded them to my Mac's Downloads folder using:

scp ubuntu@ubiquityrobot.local:~/catkin_ws/maps/my_map.pgm ~/Downloads 
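If you'd rather peek at the map from Python instead of an image viewer, a couple of lines will do it. This assumes Pillow and matplotlib are installed on your laptop, and that the .pgm file is in the current directory:

#!/usr/bin/env python
# Display the saved occupancy grid. Assumes Pillow and matplotlib are
# installed (pip install pillow matplotlib).
import matplotlib.pyplot as plt
from PIL import Image

map_image = Image.open('my_map.pgm')
plt.imshow(map_image, cmap='gray')
plt.title('my_map')
plt.show()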

12. Serve a saved map

In order for the navigation stack to be able to localize the robot, it needs access to the map we have just saved. Luckily, this is a fairly easy thing to do! The most straightforward way to do this is by running:

rosrun map_server map_server my_map.yaml

If you have an Rviz session running, you can visualize the map by displaying the /map topic!

You can also set up a launch file to serve the map for you, such that you won’t have to run this command every time you require a map to be served. For example, if we create a new launch file called serve_map.launch in the lidarbot package, we can call it by roslaunch lidarbot serve_map.launch. We should populate it with something like:

<launch>
  <arg name="map_fname" value="/home/user/catkin_ws/src/lidarbot/maps/my_map.yaml" />
  <node pkg="map_server" type="map_server" name="map_server" args="$(arg map_fname)" />
</launch>

Pay attention to the value of the map_fname argument. Change it to the path where you saved your map files.

13. Navigation

Alright, so here is the bad news. Given the weird nature of this year, I am unfortunately not able to provide the end of this guide right now. However, I will try to point you to all the resources you need to set up the navigation stack.

Our goal here is to get our robot to localize in the known map and then navigate to a specified pose in the map.

The primary resource here would be the ROS tutorial for the navigation stack. There they explain how to set up AMCL for localization and navigation.
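As a small taste of that setup: once AMCL is running, it needs an initial pose guess, which is what the "2D Pose Estimate" button in Rviz publishes. You can do the same from a short Python script; the pose and covariance values below are just typical placeholders:

#!/usr/bin/env python
# Publish an initial pose guess for AMCL on the /initialpose topic
# (the same thing Rviz's "2D Pose Estimate" button does).
# The pose and covariance values here are placeholders.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('set_initial_pose')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                      queue_size=1, latch=True)

msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 0.0
msg.pose.pose.position.y = 0.0
msg.pose.pose.orientation.w = 1.0  # facing along the map's x axis
cov = [0.0] * 36
cov[0] = cov[7] = 0.25   # x and y variance
cov[35] = 0.07           # yaw variance
msg.pose.covariance = cov

rospy.sleep(1.0)  # give the (latched) publisher a moment to connect
pub.publish(msg)
rospy.sleep(0.5)  # let the message go out before the script exits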

The navigation part of the tutorial (and of the navigation stack) publishes commands to the /cmd_vel topic, which are then used to move the robot. Luckily for us, we have already gotten our robot to move in response to messages published on this topic in the teleoperation portion of this tutorial. So in theory, this step should be relatively easy to do.

Two other great resources are The Construct’s tutorial on localization in a known map and more information on the Husky localization that is used in the former tutorial.
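To give a flavor of the end goal, here is a sketch of sending a single navigation goal from Python. It assumes a fully configured move_base node is already running, which is exactly the part this guide has not walked through yet:

#!/usr/bin/env python
# Send one navigation goal to move_base. Assumes a configured
# navigation stack (move_base + AMCL + the served map) is running.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # one meter along the map's x axis
goal.target_pose.pose.orientation.w = 1.0  # facing along x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Goal finished with state %d', client.get_state())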

I will try my best to provide more information as this year progresses! For now, please enjoy the simple wall follower below 🙂

Simple application: wall following.

We are done setting up our robot! It is SMORT and can drive on its own and sense its environment. I will be updating this GitHub repository with code for some fun applications. The first thing on there is a wall-following ROS package!

Happy building!

Update February 2021: I realized that I did not actually detail how to run this example. So here we go.

  1. Head over to the repository above, and navigate to src/ros/wall_follower_sim/.

  2. Copy this folder to your catkin_ws/src/ folder in your robot. This is a catkin package.

  3. Build your workspace with catkin_make && source devel/setup.bash.

  4. (a) On terminal 1, run roslaunch lidarbot lidarbot_teleop.launch

    (b) On terminal 2, run roslaunch ydlidar_ros lidar.launch

    (c) On terminal 3, run roslaunch wall_follower wall_follower.launch

That should do the trick! You can adjust parameters (velocity, distance to wall) for the wall following from the associated Python files. Lower velocities are more reliable.
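For reference, the core of a wall follower boils down to something like the sketch below. This is a simplified illustration rather than the exact code in the repository; the velocity and distance-to-wall parameters mentioned above show up here as constants you can tweak.

#!/usr/bin/env python
# Bare-bones wall follower: hold a roughly constant distance to a wall
# on the robot's right using a proportional controller. This is an
# illustrative sketch, not the code from the repository; the constants
# are arbitrary starting points.
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

DESIRED_DISTANCE = 0.5  # meters to keep from the wall
FORWARD_SPEED = 0.1     # m/s; lower velocities are more reliable
KP = 1.5                # proportional gain on the distance error

class WallFollower(object):
    def __init__(self):
        self.pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Index of the beam pointing at -90 degrees (the robot's right).
        angle = -math.pi / 2.0
        center = int((angle - scan.angle_min) / scan.angle_increment)
        if center < 0 or center >= len(scan.ranges):
            return  # this scan does not cover the robot's right side
        # Average a small window of valid readings around that beam.
        window = [r for r in scan.ranges[max(center - 5, 0):center + 5]
                  if scan.range_min < r < scan.range_max]
        if not window:
            return
        error = DESIRED_DISTANCE - sum(window) / len(window)

        cmd = Twist()
        cmd.linear.x = FORWARD_SPEED
        cmd.angular.z = KP * error  # steer toward/away from the wall
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('wall_follower_sketch')
    WallFollower()
    rospy.spin()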

Comments

  1. Yoraish, you're doing great, man. Lots of knowledgeable stuff that I have learnt from your work. Kindly do some navigation with GPS | ROS | Raspberry Pi | Arduino. Thanks for the nice content.

  2. Great information, thanks. Question: how come your tf information is all zeros for the base and laser frame? Did you define the transformations elsewhere? I ask because the center of your robot and the center of your laser are at different heights.

    • That's an awesome question. I am afraid that my answer would not be very satisfying – I left those transformations as zero because that "worked". I assumed that I would work in "2D" environments, where the Z translation would not matter, and that the XY location of the laser, relative to the center of the robot, would be close. So I figured that keeping the transform as zeros would do the trick – and it did! At least for simple applications.

  3. hi do you know how to use raspberry as brain and motor controller without arduino? maybe using python to control by motor driver?

      • hi thank you for replying, so right now i’m stuck at catkin_make “https://github.com/ros-drivers/rosserial.git” this repo. can you tell why this error appear;

        CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkin_workspace.cmake:95 (message):
        This workspace contains non-catkin packages in it, and catkin cannot build
        a non-homogeneous workspace without isolation. Try the
        ‘catkin_make_isolated’ command instead.
        Call Stack (most recent call first):
        CMakeLists.txt:67 (catkin_workspace)

        Thank you

      • Hey! I am not immediately sure about what the issue is – maybe try to follow the installation again? Or follow the error message and run `catkin_make_isolated`, though I don’t think that this is the safest way to go.

  4. Good morning, again great work. But I have a question. I was looking through your guide where you mentioned you mounted the lidar in the wrong orientation…and you modified the launch file to fix it…My question is, couldn't you use the tf broadcast transformation file and put in a rotation about z (yaw) entry?

    • In a sense, the wall follower could be seen as an initial implementation of obstacle avoidance. You could think about the characterization of part of the point cloud as a “wall”, or also as an “obstacle”. So from that point on, you could probably say something like “if there is a wall ahead of me, stop and rotate”. Finding that wall could be as simple as checking for enough points existing close to the robot and ahead of it. The intervals for the angles using which we are looking for walls in the wall follower could also be adjusted to look forward, as opposed to the side.

      To your other question, path planning would probably need a map to operate on. You will need to know where you are in space, to know where you want to go, and what part of the world is free for you to plan through. I have not had the time to implement that (yet!), so maybe you can start experimenting with that on your own 🙂

  5. Hello! My teammate and I are building a similar bot and have resorted to using pieces of your wall following. However, we have come across a few problems.
    1: When compiling the python code after a few tweaks, we get error ModuleNotFoundError: no module named ‘rospy’, we have tried everything and cant figure out a workaround.
    2: we assume that is why our wall following code does not work. The bot simply drives in a straight line or sometimes to one side or the other.
    3: Not a question, but thank you for posting this! It has helped us more than you can imagine

      • You’re awesome for replying so fast! Yeah, we found every article regarding that and none helped haha. Yeah seems like google will be our best bet with this. Do you have any recommendations in regards to what might be causing problems with the line following code? We were wondering if you did the ROS 2d Navigation setup before you did the line following code.

      • haha I just happened to be typing 😉
        I feel like it could be a ROS versioning issue? Maybe a new/old release is not playing nicely with a new/old Raspberry Pi?
        And if Python cannot seem to be finding ROS, then I believe that it won’t be able to communicate with the Lidar and thus also not be able to use its outputs. Did you have luck with the teleop scripts?

  6. Yeah were on the Pi4 and ros melodic… teleop worked perfectly and that’s exactly when we started following your tutorial. We were able to see via the wall following that it is measuring distance from the wall near perfectly, but doesn’t react to it. Very strange!

    • Huh! Okay then the fact the wall detection works gets you 90% there! It could be that you are on a Pi4. Back when I was setting this machine up I could not get ROS to run properly on a Raspberry Pi 4, so had to revert to 3. Maybe that could be an issue? That’s a bit confusing 🙂

      • Hi Yoraish! I left you an email via contact, but I have a brief question. Did you set up ros 2d navigation via their given launch file before wall follow? Thanks 🙂

  7. Hello, I am following the same steps as your project. I have cloned the rosserial repository in my workspace but when enter catkin_make the building process throws errors saying that it is not compatible with the compiler. I am stuck on this for a while now, please help.

  8. Hey Yoraish, I thought you should know that we found a pretty simple but big problem! Our wall following code never posted to Cmd_vel. After much troubleshooting, we found that in the python wall following code there was “/” missing before “cmd_vel”

    In the code it is written as “cmd_vel” instead of “/cmd_vel” . Hope this helps!

  9. Hi, thank you for your tutorial. Can we use the same Arduino code you provided for the keyboard teleop for the navigation stack?

  10. Hello, First of all great post. I’ve been trying to build an obstacle avoidance robot as a project by following the same procedure. Since you have not shown how exactly the path planning and navigation works, I followed the links that you’ve mentioned and tried. Unfortunately my bot works only through the teleop keyboard node, but not through navigation goals of path planning. So should I change anything in the Arduino program which I’ve been using for teleop, or can we use the same Arduino code for the navigation stack? Can someone help me with this? I’m stuck on this for weeks now and my deadlines are approaching.
