A Full Autonomous Stack, a Tutorial | ROS + Raspberry Pi + Arduino + SLAM
UPDATED September 2021: This guide is a walkthrough for setting up an autonomous ROS stack on a Raspberry Pi. Teleoperation, mapping, localization, and navigation are all covered!
1. Building a robot.
This part is somewhat looser than the others. In a nutshell, find some motors, wheels, motor controllers, and some connecting materials. Throw them all at a wall and hope that they come together nicely.
I have used RobotShop.com’s:
- Scout platform
- Slightly stronger motors than the one it ships with. (Something like this.)
- Cytron 10A 5-30V Dual Channel DC Motor Driver.
- YDLIDAR G2 Lidar.
- Raspberry Pi 3 Model B.
- Update 09/21: I could not install ROS on a Raspberry Pi 4 at the time of writing, but you can definitely do it now! Check out the Ubuntu images from Ubiquity Robotics here.

2. Installing ROS
ROS (Robot Operating System) is a framework that facilitates the use of a wide variety of packages to control a robot. Those packages range all the way from motion control, to path planning, to mapping, localization, SLAM, perception, transformations, communication, and more. ROS provides a relatively simple way to interface with those packages, plus the ability to create custom packages.
Note: The Raspberry Pi 4 is more computationally capable than its predecessors. However, installing ROS on the Pi3 is currently (as of December 2019) easier, and allegedly more reliable.
Get the disk image
I downloaded the Ubuntu 16.04 Xenial image with pre-installed ROS from Ubiquity Robotics. They have great instructions on how to download and install the image. The main points are:
- Download the image from the top of the page.
- Flash it to an SD card (at least 8GB). You can use Etcher; it works well.
- Connect your computer to the WiFi network that starts with ubiquityrobot. The password is robotseverywhere.
- Go to a terminal, and connect to your Pi using ssh ubuntu@10.42.0.1. The password is ubuntu.
- Run roscore to make sure that things are working properly. If you get warnings/errors, try stopping ROS and starting it again with killall -9 roscore.
3. Remotely connecting to ROS
We want to access the ROS communication messages from our laptop. This will also let us visualize things in a convenient manner (with rviz). There are a couple of steps to follow here.
- Spin up a Linux machine with ROS Kinetic Kame, either a virtual machine or a real one. You can use VMware Fusion with Ubuntu 16.04 or something similar. We will refer to that machine as the Observer machine. The robot is the Master.
- On the Master, find the ROS_IP and ROS_MASTER_URI. These are the two pieces of information both machines need in order to communicate. Find the ROS_IP (the IP address of the master computer) by running ifconfig.
- In this example (the IPs will probably be different on your network), on the robot we set:
export ROS_IP=192.168.43.228
export ROS_MASTER_URI=http://192.168.43.228:11311
On the observer laptop, we set:
export ROS_IP=192.168.43.123
export ROS_MASTER_URI=http://ubiquityrobot.local:11311
The observer's master URI looks different, but it is actually the same address under the alias. I believe that setting it to 192.168.43.228 directly (it should be the same as the .local alias) would also work, but I did not test it.
- I would add these lines to .bashrc on the Observer machine, or create a script that runs them together. This is not required, but it will (potentially) make your life easier in the long run, so you won't need to type those lines every time you want to connect to the robot.
- On the Master (robot), run roscore.
- On the Observer, you now have access to the messages and topics that are on the Master. More on that soon.
In summary, what we have just done is:
- On the robot (the master machine running roscore):
  - Set ROS_IP to the machine's own IP address.
  - Set ROS_MASTER_URI to http://<master IP>:11311.
- On the observer computer:
  - Set ROS_IP to its own IP.
  - Set ROS_MASTER_URI to the robot's IP, i.e. http://<master IP>:11311.
A couple of notes here:
- To make sure the communication works, I followed this tutorial to publish basic shapes to rviz.
- I had to make the messages compatible with Indigo, following an answer here. (The solution with downloading the common msgs Indigo folder and using the visualization_msgs package folder in catkin_ws/src.)
- In rviz, make sure to set the frame to my_frame (if following the tutorial).
4. Connecting to WiFi
This part sets up and verifies proper operation of wireless internet connectivity. The information is taken from this website.
- On the robot machine, run pifi add YOURNETWORKNAME YOURNETWORKPASSWORD.
- Restart the Pi with sudo reboot. Now the Raspberry Pi will connect to your WiFi network on startup. To connect to it, connect your computer to the same network and ssh ubuntu@ubiquityrobot.local with the password ubuntu.
Woo! Now both machines have internet, and can communicate over SSH.
5. Testing the lidar
This step was a bit of a doozy. It took me a while to figure out how to get the lidar to run. But I did! So hopefully you won’t have to suffer like I had to.
I am using the YDLIDAR G2 lidar for this build. The first step is to install the necessary drivers. The driver is a ROS package.
- cd catkin_workspace/src
- git clone https://github.com/EAIROBOT/ydlidar_ros.git
- catkin_make
- Follow the directions from the repository, written below:
  - roscd ydlidar_ros/startup
  - sudo chmod 777 ./*
  - sudo sh initenv.sh
- Go back to your catkin workspace and run source devel/setup.bash.
- Run git checkout G2. This command moves you to the branch of your specific lidar model. It is G2 in our case.
- Run catkin_make again.
Test the lidar with roslaunch ydlidar_ros lidar.launch. Visualize the scans in Rviz by adding the topic /scan.
It may look something like this! Background may vary.
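If you would rather sanity-check the scans without Rviz, a tiny rospy subscriber can print some of the data. This is a minimal sketch of my own (not part of the YDLIDAR driver); it only assumes the driver publishes sensor_msgs/LaserScan on /scan, as above.
#!/usr/bin/env python
# Minimal /scan sanity check: print the closest valid range for each message.
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(msg):
    # Keep only finite, in-range readings.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("Got %d points, closest obstacle at %.2f m", len(valid), min(valid))

if __name__ == '__main__':
    rospy.init_node('scan_checker')
    rospy.Subscriber('/scan', LaserScan, scan_cb)
    rospy.spin()
Run it (for example with python scan_checker.py) while lidar.launch is up; if the closest range changes as you wave a hand near the lidar, the driver is working.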

6. ROS + Arduino; Getting them to talk to each other.
Our goal here is to get commands from the Raspberry Pi to the Arduino, so that the Arduino can tell the motors how to move. To achieve that, we will install rosserial, a ROS package that enables Arduino-ROS communication, on both the Raspberry Pi and the Arduino.
- Following the steps from the ROS website, we start by installing the package: sudo apt-get install ros-kinetic-rosserial-arduino, and then sudo apt-get install ros-kinetic-rosserial. If you are using a ROS version other than Kinetic, change the word kinetic to your version.
- In the following commands, substitute catkin_ws with the name of your catkin workspace.
cd catkin_ws/src
git clone https://github.com/ros-drivers/rosserial.git
cd catkin_ws
catkin_make
catkin_make install
- In your Arduino IDE, install the rosserial library. I found it easiest to do from the IDE itself: search for rosserial in the Library Manager and install it.
And that should be it!
For a test run, try the HelloWorld example from the examples included with the rosserial library. Flash the Arduino with it and connect it to the Raspberry Pi. To run it:
- On the Raspberry Pi, run roscore.
- In a second Raspberry Pi terminal, run rosrun rosserial_python serial_node.py /dev/ttyACM0. Change ttyACM0 to the port of your Arduino. You can check the port by looking in /dev/ and observing which files disappear and re-appear when the Arduino is disconnected and reconnected.
- In a third terminal, run rostopic echo chatter to see the messages being sent. (A small Python listener that does the same is sketched below.)
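As an alternative to rostopic echo, here is a minimal rospy listener for the same check. This is my own sketch, not part of rosserial; it only assumes the HelloWorld sketch publishes std_msgs/String on the chatter topic.
#!/usr/bin/env python
# Minimal listener for the rosserial HelloWorld test.
import rospy
from std_msgs.msg import String

def chatter_cb(msg):
    rospy.loginfo("Arduino says: %s", msg.data)

if __name__ == '__main__':
    rospy.init_node('chatter_listener')
    rospy.Subscriber('chatter', String, chatter_cb)
    rospy.spin()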
7. Installing Hector-SLAM
This part is exciting! We will now add mapping and (a bit later) localization functionality to our robot. We use the Hector-SLAM package. It enables us to create the maps (with a lidar alone, no IMU needed) that we will later use for localization and navigation. I found this video by Tiziano Fiorenzani and the official resources on the ROS website helpful for setting up Hector-SLAM.
- Clone the GitHub repository to your catkin workspace. Navigate to the src folder and run git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam.git.
- [This may fail! See the sub-bullets for workarounds.] Build your ROS workspace by running catkin_make and then source setup.bash with source ~/catkin_ws/devel/setup.bash.
  - If your build stalls or seems very slow, do two things:
    - Increase the swap space to 1GB. Follow this tutorial for instructions.
    - Run the build with catkin_make -j2.
We need to make a couple of modifications to the Hector-SLAM tutorial files in order for them to work with our setup. We first take note of the transformations available to us on the /tf topic, and the reference frames they use.
- Spin up the lidar node with roslaunch ydlidar_ros lidar.launch.
- Check the communication on the /tf topic with rostopic echo /tf.
I get only one transformation:
---
transforms:
  -
    header:
      seq: 0
      stamp:
        secs: 1578619851
        nsecs: 284012533
      frame_id: "/base_footprint"
    child_frame_id: "/laser_frame"
    transform:
      translation:
        x: 0.2245
        y: 0.0
        z: 0.2
      rotation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0
---
So we see that we have only two frames, namely /base_footprint and /laser_frame. We will update the file ~/catkin_ws/src/hector_slam/hector_mapping/launch/mapping_default.launch to accommodate those.
- Near the top of the file, change the first line below to the second:
<arg name="odom_frame" default="nav"/>
<arg name="odom_frame" default="base_footprint"/>
- UPDATE 09/21: Do not follow this bullet point. We will take care of that in a different launch file.
At almost the very bottom of the file, change the first line below to the second:
<node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 map nav 100"/>
<node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 base_footprint laser_frame 100"/>
- Navigate to ~/catkin_ws/src/hector_slam/hector_slam_launch/launch/tutorial.launch and change the first line below to the second:
<param name="/use_sim_time" value="true"/>
<param name="/use_sim_time" value="false"/>
This should do the trick! Try it out!
- In a first terminal, run the lidar with roslaunch ydlidar_ros lidar.launch.
- In a second terminal, run Hector SLAM with roslaunch hector_slam_launch tutorial.launch.
You should be able to see the results in Rviz. Choose the /map topic to visualize the map that was created.
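If you want to poke at the map outside of Rviz, the /map topic is a standard nav_msgs/OccupancyGrid. The sketch below is my own (the node name is made up); it just prints the map metadata for each update.
#!/usr/bin/env python
# Print the metadata of the occupancy grid published on /map.
import rospy
from nav_msgs.msg import OccupancyGrid

def map_cb(msg):
    info = msg.info
    rospy.loginfo("Map: %dx%d cells, %.3f m/cell, origin (%.2f, %.2f)",
                  info.width, info.height, info.resolution,
                  info.origin.position.x, info.origin.position.y)

if __name__ == '__main__':
    rospy.init_node('map_inspector')
    rospy.Subscriber('/map', OccupancyGrid, map_cb)
    rospy.spin()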
8. Lower Level Robot Control (That’s where the Arduino comes in!)
We now want to create a ROS package that allows ROS communication to move the robot in the world. Again, Tiziano Fiorenzani has a great video explaining the basics of what we are doing here. In a nutshell, we want to make a subscriber node that runs on the Arduino and listens to the topic /cmd_vel. We will begin by sending commands from the keyboard to the robot.
To see what this topic is all about, run rosrun teleop_twist_keyboard teleop_twist_keyboard.py. In another terminal, run rostopic info /cmd_vel to see that this topic publishes the structure geometry_msgs/Twist. Run rosmsg show geometry_msgs/Twist to see the attributes of the message. They are linear and angular commands:
geometry_msgs/Vector3 linear
  float64 x
  float64 y
  float64 z
geometry_msgs/Vector3 angular
  float64 x
  float64 y
  float64 z
Let's create the ROS node on our Arduino. We want to map values in percentages (that we get from /cmd_vel) to the range [0, 255] that our motor controller understands.
The entirety of the code for this node lives on the Arduino. You can find the Arduino sketch I used at https://github.com/yoraish/lidar_bot, under the arduino folder. An example of a very simple sketch that only supports forward and stopping motion is below! Check out the GitHub repo for the full program.
#if (ARDUINO >= 100)
  #include <Arduino.h>
#else
  #include <WProgram.h>
#endif
#include <ros.h>
#include <geometry_msgs/Twist.h>

// Pin variables for motors.
const int right_pwm_pin = 5;
const int right_dir_pin = A0;
const int left_pwm_pin = 6;
const int left_dir_pin = A1;
const bool left_fwd = true;
const bool right_fwd = false;

// Default speed.
const int default_vel = 201;

ros::NodeHandle nh;

void MoveFwd(const size_t speed) {
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  analogWrite(right_pwm_pin, speed);
  analogWrite(left_pwm_pin, speed);
}

void MoveStop() {
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  analogWrite(right_pwm_pin, 0);
  analogWrite(left_pwm_pin, 0);
}

void cmd_vel_cb(const geometry_msgs::Twist& msg) {
  // Read the message. Act accordingly.
  // We only care about the linear x and the rotational z.
  const float x = msg.linear.x;
  const float z_rotation = msg.angular.z;
  // Decide on the motor state we need, according to the command.
  if (x > 0 && z_rotation == 0) {
    MoveFwd(default_vel);
  } else {
    MoveStop();
  }
}

ros::Subscriber<geometry_msgs::Twist> sub("cmd_vel", cmd_vel_cb);

void setup() {
  // Set the motor control pins as outputs.
  pinMode(right_pwm_pin, OUTPUT);
  pinMode(right_dir_pin, OUTPUT);
  pinMode(left_pwm_pin, OUTPUT);
  pinMode(left_dir_pin, OUTPUT);
  // Set initial values for directions. Set both to forward.
  digitalWrite(right_dir_pin, right_fwd);
  digitalWrite(left_dir_pin, left_fwd);
  nh.initNode();
  nh.subscribe(sub);
}

void loop() {
  nh.spinOnce();
  delay(1);
}
We can control the robot from our laptop now! In separate terminal instances, run the following:
- Allow Arduino communication with rosrun rosserial_python serial_node.py /dev/ttyACM0.
- Enable keyboard control with rosrun teleop_twist_keyboard teleop_twist_keyboard.py.
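If you want to test the Arduino node without a keyboard, you can also publish Twist messages directly. The sketch below is a minimal example of my own (not from the repo): it drives forward for two seconds and then stops, matching the simple forward/stop logic in the Arduino sketch above.
#!/usr/bin/env python
# Publish a forward Twist on /cmd_vel for two seconds, then stop.
import rospy
from geometry_msgs.msg import Twist

if __name__ == '__main__':
    rospy.init_node('cmd_vel_test')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.sleep(1.0)  # Give the Arduino subscriber time to connect.
    fwd = Twist()
    fwd.linear.x = 0.5  # Any positive value triggers MoveFwd() in the sketch above.
    rate = rospy.Rate(10)
    end = rospy.Time.now() + rospy.Duration(2.0)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(fwd)
        rate.sleep()
    pub.publish(Twist())  # All zeros -> MoveStop().
Keep the rosserial serial_node running in another terminal while this script publishes.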
To make our lives easier for the next time we run the teleop node, we can create a launch file!
9. Launch files.
Creating a launch file is pretty simple. The documentation at ROS.org is a great resource for that. In our case, we end up with the following launch file to launch all the necessary nodes for keyboard teleoperation with ROS and through our Arduino.
<launch>
  <node pkg="rosserial_arduino" type="serial_node.py" name="serial_arduino">
    <param name="port" value="/dev/ttyACM0" />
  </node>
  <node pkg="teleop_twist_keyboard" type="teleop_twist_keyboard.py" name="teleop_twist_keyboard" />
</launch>
I have placed this launch file in the directory ~/catkin_ws/src/lidarbot/launch. Don't forget to catkin_make and source devel/setup.bash!
We can now run the robot in a teleoperated mode with roslaunch lidarbot lidarbot_teleop.launch.
10. Correcting angle offset.
When I was designing the lidar mount that I ended up 3D printing, I failed to look through the datasheet and design it in a way such that the "forward" direction of the lidar would actually point forward. Let's correct that.
Because of a lack of time, I chose to do it with a somewhat hacky patch.
Navigate to /catkin_ws/src/ydlidar/sdk/src/CYdLidar.cpp and change the function void CYdLidar::checkCalibrationAngle(const std::string &serialNumber) to the following. We are simply overriding the angle offset value provided by the lidar model.
void CYdLidar::checkCalibrationAngle(const std::string &serialNumber) {
  m_AngleOffset = 0.0;
  result_t ans;
  offset_angle angle;
  int retry = 0;
  m_isAngleOffsetCorrected = false;
  float override_offset_angle = 140.0;

  while (retry < 2) {
    ans = lidarPtr->getZeroOffsetAngle(angle);
    if (IS_OK(ans)) {
      if (angle.angle > 720 || angle.angle < -720) {
        ans = lidarPtr->getZeroOffsetAngle(angle);
        if (!IS_OK(ans)) {
          retry++;
          continue;
        }
      }
      m_isAngleOffsetCorrected = (angle.angle != 720);
      m_AngleOffset = angle.angle / 4.0;
      printf("[YDLIDAR INFO] Successfully obtained the %s offset angle[%f] from the lidar[%s]\n",
             m_isAngleOffsetCorrected ? "corrected" : "uncorrrected", m_AngleOffset,
             serialNumber.c_str());
      // Ignore the value reported by the lidar and force our own offset.
      std::cout << "Overriding offset angle to " << override_offset_angle << "\n";
      m_AngleOffset = override_offset_angle;
      return;
    }
    retry++;
  }
}
Great, our Lidar’s arrow points forward now.
11. Save a map.
This section outlines two methods for creating maps. The first one is "correct" in the sense that we’ll use it for localization later. The second one will not be used. I am keeping that subsection here for reference.
I have been following the excellent tutorials provided by The Construct on YouTube. They have information about how to record a map, how to provide a map to Rviz, and how to perform localization with the map on a Husky robot. In this section we will first look at how to record a map.
Let's begin by downloading the map server that will do the heavy lifting for us. We will do this with sudo apt-get install ros-kinetic-map-server on the robot Raspberry Pi.
To record a map, we should spin up the robot by executing the following commands in separate terminals:
roslaunch lidarbot lidarbot_teleop.launch
roslaunch ydlidar_ros lidar.launch
roslaunch hector_slam_launch tutorial.launch
Now that your lidar is spinning and all the nodes are up, move your robot around the room slowly until you are happy with how the map looks in Rviz (or just hope that it looks okay), and then run:
rosrun map_server map_saver -f my_map
This command will save the my_map.yaml and my_map.pgm files! These two files specify the occupancy information of the map. You can change the name of this map by changing the my_map argument to whichever name you'd like.
The .pgm file can be used to visualize the map that was created. From your computer, you can use Secure Copy (SCP) to download the .pgm file and visualize it. In my case, I saved my map files to ~/catkin_ws/maps/, so I downloaded them to my Mac machine's Downloads folder using:
scp ubuntu@ubiquityrobot.local:~/catkin_ws/maps/my_map.pgm ~/Downloads
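If you do not want to open the .pgm in an image viewer, a couple of lines of Python will display it. This is just a convenience sketch of my own; it assumes the Pillow imaging library is installed (pip install Pillow) and that the file sits in ~/Downloads as above.
#!/usr/bin/env python
# Quick look at the saved occupancy grid image.
import os
from PIL import Image

map_img = Image.open(os.path.expanduser('~/Downloads/my_map.pgm'))
print('Map size (pixels): {}'.format(map_img.size))
map_img.show()  # Light pixels are free space, dark pixels are obstacles, gray is unknown.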

Another way to save a map (this is not what you need for localization)
In separate terminals, run:
roslaunch lidarbot lidarbot_teleop.launch
roslaunch ydlidar_ros lidar.launch
roslaunch hector_slam_launch tutorial.launch
And open Rviz from another Linux machine, if possible.
Now, as you'll be driving around the space (slowly! We want the map to be built accurately, so there is no need to give it a hard time doing so), you'll see a map starting to be built in real time in Rviz. The lighter colors are empty space, and the dark ones are obstacles.
When you think your map is sufficiently good, run the following:
rostopic pub syscommand std_msgs/String "savegeotiff"
This will save .tif and .tfw files in the ~/catkin_ws/src/hector_slam/hector_geotiff/maps directory.
The map will look something like this:

12. Serve a saved map
In order for the navigation stack to be able to localize the robot, it needs access to the map we have just saved. Luckily, this is a fairly easy task! The most straightforward way to do it is by running:
rosrun map_server map_server my_map.yaml
If you have an Rviz session started up, you can visualize the map by showing the /map topic!
You can also set up a launch file to serve the map for you, so that you won't have to run this command every time you require a map to be served. For example, if we create a new launch file called serve_map.launch in the lidarbot package, we can call it with roslaunch lidarbot serve_map.launch. We should populate it with something like:
<launch>
  <arg name="map_fname" value="/home/user/catkin_ws/src/lidarbot/maps/my_map.yaml" />
  <node pkg="map_server" type="map_server" name="map_server" args="$(arg map_fname)" />
</launch>
Pay attention to the argument value for map_fname. Change it to the path where you left the map files.
13. Localization
UPDATE 09/21: Localization and navigation finally work!
Our goal in the next two sections is to get our robot to localize in the known map we have recorded and navigate through the map to a specified pose.
The main components of a probabilistic localization system for a robot moving in 2D are
- A laser scanner (lidar) to answer the question: what am I seeing now?
  - We are using our YDLIDAR G2.
- Odometry to answer the question: how much has the robot moved in the recent moment?
  - It helps determine how the robot should change its previous localization estimate to accommodate the motion.
- A saved map to answer the question: which one of my pose estimates on the map would see a (simulated) laser scan similar to the one my real laser scanner is seeing?
  - In other words, because a map gives us the ability to generate expected laser scans from any point on the map (via ray tracing), we can compare our lidar observations to ray-traced observations generated from different points on the map and assert that the most-similar map point is the location of the robot.
Let’s go through these components one by one.
Laser scans
This is simple. We have laser scans from our YDLIDAR. Check ✓.
Odometry
This is where things get interesting. Traditionally, odometry data is extracted from accelerometers (on inertial measurement units (IMUs)) or wheel encoders, or GPS, or cameras (for visual-odometry), or optical flow, or some convex combination of all of the above.
In our setup, however, you have probably noticed that we have none of the above. We only have a laser scanner. Therefore, to get odometry data we will exploit an undocumented feature of Hector-SLAM that provides the displacement of the laser scanner over time. This has been tricky to set up — I have relied extensively on help from this ROS Answers page. Specifically, credit should be given to Maximilian HeĆ, the author of this package and launch file for nudging me towards the right direction.
Anyway, the important thing for you, dear reader, is the existence of the launch file localize.launch in the lidarbot package in the git repo. You'll notice that the file begins with content similar to the older launch files we have developed in this tutorial. Namely,
<!-- Run teleop -->
<include file="$(find lidarbot)/launch/lidarbot_teleop.launch"/>
<!-- Wait for the lidar to start up. -->
<arg name="node_start_delay" default="5.0" />
<!-- Run the map server -->
<node name="map_server" pkg="map_server" type="map_server" args="$(find lidarbot)/maps/room_map2.pgm 0.05"/>
<!-- Add transformations. -->
<node pkg="tf" type="static_transform_publisher" name="link_to_laser_bc" args="0 0 0 0 0 0 base_link laser_frame 100"/>
As you can see, these lines run teleop, run the map server, and also publish a rigid transformation between the base frame base_link and the laser frame laser_frame. I chose to keep this transform as effectively nothing for convenience. You'll also notice that the line starting the lidar is commented out. I have seen weird things happen when this line is run in the launch file, so I just started running it separately. More on that later.
Now, we initiate a Hector-SLAM node and get its odometry. I’ll spare you the details (please look at the git repo for those). The important lines are
<arg name="tf_map_scanmatch_transform_frame_name" default="scanmatcher_frame"/>
<arg name="base_frame" default="base_link"/>
<arg name="odom_frame" default="base_link"/>
<!-- Change below!! Used to be true. -->
<arg name="pub_map_odom_transform" default="false"/>
<arg name="scan_subscriber_queue_size" default="5"/>
<arg name="scan_topic" default="scan"/>
<arg name="map_size" default="2048"/>
<arg name="pub_odometry" default="true"/>
<node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
<param name="pub_odometry" value="$(arg pub_odometry)"/>
<!-- Frame names -->
<remap from="map" to="mapcurrent" />
<param name="map_frame" value="mapcurrent_frame" />
<param name="base_frame" value="$(arg base_frame)" />
<param name="odom_frame" value="$(arg odom_frame)" />
These few lines give a name to the map-to-scan-match frame, specify base_link as the robot base frame and also as the odom frame (since in our case Hector is not getting any additional odometry from an IMU), and ask Hector not to publish a map-to-odom transformation. That will be taken care of by AMCL (the probabilistic localization package we'll use).
Additionally, we change the map topic Hector uses to publish its SLAM-created map to a different name, because we'll be serving our own pre-made map.
There's one more thing that has to be done to get our odometry ready. Hector publishes the odometry transformation on a topic named /scanmatch_odom. We need to convert the messages on this topic to tf transformations. The node odomtransformer (that Maximilian HeĆ created and I copied) does just that.
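For a sense of what such a converter does, here is a minimal sketch of my own approximation (the actual odomtransformer node in the repo may differ in details). It assumes Hector publishes nav_msgs/Odometry on scanmatch_odom and rebroadcasts each message as a scanmatch_odom -> base_link transform, which is what AMCL will expect to find.
#!/usr/bin/env python
# Rebroadcast Hector's scan-matching odometry as a tf transform
# (scanmatch_odom -> base_link).
import rospy
import tf
from nav_msgs.msg import Odometry

def odom_cb(msg, broadcaster):
    pos = msg.pose.pose.position
    ori = msg.pose.pose.orientation
    broadcaster.sendTransform((pos.x, pos.y, pos.z),
                              (ori.x, ori.y, ori.z, ori.w),
                              msg.header.stamp,
                              'base_link',        # child frame
                              'scanmatch_odom')   # parent frame

if __name__ == '__main__':
    rospy.init_node('odomtransformer')
    broadcaster = tf.TransformBroadcaster()
    rospy.Subscriber('scanmatch_odom', Odometry, odom_cb, callback_args=broadcaster)
    rospy.spin()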
If you run this launch file with the code up to this point, with the commands
roslaunch lidarbot odomtransformer.launch
roslaunch ydlidar_ros lidar.launch
roslaunch lidarbot localize.launch
then you'll be able to see an odometry transformation changing as the robot moves around, via rostopic echo /tf.
14. Localization and Navigation.
To localize in a known map, we feed information from our recorded maps and the Hector-SLAM odometry to AMCL, a probabilistic localization package. In the launch file mentioned above, the following lines are important: they specify the names of relevant topics and other localization parameters.
<param name="tf_broadcast" value="true" />
<param name="base_frame_id" value="/base_link" />
<param name="global_frame_id" value="map" />
<param name="odom_frame_id" value="scanmatch_odom" />
<param name="use_map_topic" value="false" />
To test this setup, run the following commands and then, in rviz, click the 2D Pose Estimate button and click on the robot's location on the map. When you teleoperate the robot around the map slowly, you should see the state estimate change with it to reflect the true robot pose. Make sure to visualize the estimated state topics in rviz alongside the map topic.
roslaunch lidarbot odomtransformer.launch
roslaunch ydlidar_ros lidar.launch
roslaunch lidarbot localize.launch
We make use of the ROS Navigation Stack to have the robot navigate around the map autonomously. This subset of ROS packages can plan paths in maps with dynamic obstacles and directly control robots via publishing commands to the topic /cmd_vel
. Our Arduino is subscribed to this topic and uses its information to move the robot.
To put this together I have followed the Navigation Stack Robot Setup page on the ROS website. It is relatively straightforward — please see the git repo for all the details. I have not optimized the robot dimensions and thresholds yet, but they work nonetheless. The files of interest are
- base_local_planner_params.yaml
- costmap_common_params.yaml
- global_costmap_params.yaml
- local_costmap_params.yaml
You'll notice that the bottom of the launch file includes these yaml files as parameters for the navigation stack's move_base node.
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="$(find lidarbot)/costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find lidarbot)/costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find lidarbot)/local_costmap_params.yaml" command="load" />
<rosparam file="$(find lidarbot)/global_costmap_params.yaml" command="load" />
<rosparam file="$(find lidarbot)/base_local_planner_params.yaml" command="load" />
</node>
These files specify robot dimensions, max and min allowed velocities, and various parameters for cost maps and navigation. Don’t forget to make and source your workspace!
Putting it all together
Run these lines!
roslaunch lidarbot odomtransformer.launch
roslaunch ydlidar_ros lidar.launch
roslaunch lidarbot localize.launch
Open rviz!
Set a 2D pose estimate!
Set a navigation goal!
Watch in absolute amazement as your robot moves to that location in space.
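You can also send a goal programmatically instead of clicking in rviz. The sketch below is a minimal example of my own that uses actionlib and the standard move_base action; the goal coordinates are made up and are expressed in the map frame. Run it while the three launch files above are up.
#!/usr/bin/env python
# Send a single navigation goal to move_base and wait for the result.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0    # Made-up goal, one meter along the map x axis.
    goal.target_pose.pose.orientation.w = 1.0  # Face along the map x axis.

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state %d", client.get_state())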
Simple application: wall following.
This is an old (Jan 2020) part of the tutorial. Proceed with caution.
We are done setting up our robot! It is SMORT and can drive on its own, and sense its environment. I will be updating this Github repository at a dedicated branch with code for some fun applications. The first thing on there is a wall following ROS package!
Happy building!
Update February 2021: I realized that I did not actually detail how to run this example. So here we go.
- Head over to the repository above and navigate to src/ros/wall_follower_sim/.
- Copy this folder to the catkin_ws/src/ folder on your robot. This is a catkin package.
- Build your workspace with catkin_make && source devel/setup.bash. (I probably have typos here, but you get the idea.)
- (a) On terminal 1, run roslaunch lidarbot lidarbot_teleop.launch
  (b) On terminal 2, run roslaunch ydlidar_ros lidar.launch
  (c) On terminal 3, run roslaunch wall_follower wall_follower.launch
That should do the trick! You can adjust parameters (velocity, distance to wall) for the wall following from the associated Python files. Lower velocities are more reliable.
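For a flavor of what the package does, here is a stripped-down right-hand wall follower of my own (not the code from the repo): it estimates the distance to the wall on the right from a slice of the laser scan and steers with a simple proportional controller. The topic names come from earlier in this tutorial; the gains, target distance, and the assumption that "right" sits near -90 degrees in the scan are mine and may need adjusting for your mount.
#!/usr/bin/env python
# Bare-bones wall follower: hold a fixed distance to the wall on the right
# using a proportional controller on the angular velocity.
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

TARGET_DIST = 0.5   # Desired distance to the wall [m].
FORWARD_VEL = 0.2   # Constant forward velocity [m/s].
KP = 1.5            # Proportional gain on the distance error.

def scan_cb(msg, pub):
    # Look at a slice of the scan roughly to the right of the robot (about -90 degrees).
    right_idx = int((-math.pi / 2 - msg.angle_min) / msg.angle_increment)
    if right_idx < 0 or right_idx >= len(msg.ranges):
        return
    window = msg.ranges[max(0, right_idx - 10):min(len(msg.ranges), right_idx + 10)]
    valid = [r for r in window if msg.range_min < r < msg.range_max]
    if not valid:
        return
    error = min(valid) - TARGET_DIST
    cmd = Twist()
    cmd.linear.x = FORWARD_VEL
    cmd.angular.z = -KP * error  # Too far from the wall -> turn right (negative z).
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('simple_wall_follower')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/scan', LaserScan, scan_cb, callback_args=pub)
    rospy.spin()
Note that the simple Arduino sketch shown earlier only drives forward or stops; actually following a wall requires the fuller motor sketch from the GitHub repo, which handles turning.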
Update 2022-03-26
Thank you all who have reached out with questions and comments! I am always extremely happy to get your messages :).
The TF-tree that you should be expecting is below:
And also a very sketchy wiring diagram:
NO WAY!!!! A new post????!?!?!?!?!
I understand the excitement, it is indeed difficult to perceive the idea of a new post yes
Yoraish, you are doing great, man. Lots of knowledgeable stuff that I have learnt from your work. Kindly do some navigation with GPS + ROS + Raspberry Pi + Arduino. Thanks for the nice content.
Hi Sudharsan! I am glad you like my work :)
I have some GPS stuff going on here: https://yoraish.com/2018/03/02/javeliner/
I don't currently have any GPS projects going on; maybe they'll come in the future. Please let me know if you need guidance!
hello my bro I have a question how to program the Raspberry?
thanks for answer
Great job! Congratulations, you are a genius.
Great information, thanks. Question: how come your TF information is all zeros for the base and laser frame? Did you define the transformations elsewhere? I ask because the center of your robot and the center of your laser are at different heights.
That’s an awesome question. I am afraid that my answer would not be very satisfying – I left those transformations as zero because that “worked”. I assumed that I would work in “2D” environments, where the Z translation would not matter, and that the XY location of the laser, relative to the center of the robot, would be close. So I figured that keeping the transform as zeros would do the trick – and it did! At least for simple applications.
Hey love the project. I just was wondering if you used any motor encoder on the wheels. If not how to you track the location? Are you just using lidar?
hi do you know how to use raspberry as brain and motor controller without arduino? maybe using python to control by motor driver?
I have not done much with this setup – but it is definitely possible! You may want to look into some PWM/Servo hats for the raspberry pi – those can help the communication between the Pi and motor controllers. https://learn.adafruit.com/adafruit-16-channel-pwm-servo-hat-for-raspberry-pi/using-the-python-library
hi thank you for replying, so right now i’m stuck at catkin_make “https://github.com/ros-drivers/rosserial.git” this repo. can you tell why this error appear;
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkin_workspace.cmake:95 (message):
This workspace contains non-catkin packages in it, and catkin cannot build
a non-homogeneous workspace without isolation. Try the
‘catkin_make_isolated’ command instead.
Call Stack (most recent call first):
CMakeLists.txt:67 (catkin_workspace)
Thank you
Hey! I am not immediately sure about what the issue is – maybe try to follow the installation again? Or follow the error message and run `catkin_make_isolated`, though I don’t think that this is the safest way to go.
Good morning, again great work. But I have a question. I was looking through your guide where you mentioned that you mounted the lidar in the wrong orientation… and you modified the launch file to fix it… My question is: couldn’t you use the tf broadcast transformation file and put in a rotation about z (yaw) entry?
How would I go about adding obstacle avoidance and path planning?
In a sense, the wall follower could be seen as an initial implementation of obstacle avoidance. You could think about the characterization of part of the point cloud as a “wall”, or also as an “obstacle”. So from that point on, you could probably say something like “if there is a wall ahead of me, stop and rotate”. Finding that wall could be as simple as checking for enough points existing close to the robot and ahead of it. The intervals for the angles using which we are looking for walls in the wall follower could also be adjusted to look forward, as opposed to the side.
To your other question, path planning would probably need a map to operate on. You will need to know where you are in space, to know where you want to go, and what part of the world is free for you to plan through. I have not had the time to implement that (yet!), so maybe you can start experimenting with that on your own :)
Hello, i have a question. can i use a diffrent lidar? i am using rplidar A1. Thanks
Hi! I don’t have much experience with RPLidar A1, but Tiziano seems to do! I think his tutorials might help you, check out his videos at https://www.youtube.com/watch?v=Qrtz0a7HaQ4&list=PLuteWQUGtU9BU0sQIVqRQa24p-pSBCYNv&index=10&ab_channel=TizianoFiorenzani .
Did you make any adjustments to you AMCL launch file for your lidar?
No, I did not do much work towards getting AMCL to play nicely with this system.
Hello! My teammate and I are building a similar bot and have resorted to using pieces of your wall following. However, we have come across a few problems.
1: When compiling the python code after a few tweaks, we get error ModuleNotFoundError: no module named ‘rospy’, we have tried everything and cant figure out a workaround.
2: we assume that is why our wall following code does not work. The bot simply drives in a straight line or sometimes to one side or the other.
3: Not a question, but thank you for posting this! It has helped us more than you can imagine
Very welcome!!
Regarding (1), what do you mean by tried everything? I believe you already stumbled upon posts like this one: https://stackoverflow.com/questions/51790447/importerror-no-module-named-rospy, no help there? I am afraid I won’t be able to be of much help without more information. And even then, your best bet would probably be to roam the internet in search of salvation. Best of luck!!
You’re awesome for replying so fast! Yeah, we found every article regarding that and none helped haha. Yeah seems like google will be our best bet with this. Do you have any recommendations in regards to what might be causing problems with the line following code? We were wondering if you did the ROS 2d Navigation setup before you did the line following code.
haha I just happened to be typing :)
I feel like it could be a ROS versioning issue? Maybe a new/old release is not playing nicely with a new/old Raspberry Pi?
And if Python cannot seem to be finding ROS, then I believe that it won’t be able to communicate with the Lidar and thus also not be able to use its outputs. Did you have luck with the teleop scripts?
Yeah were on the Pi4 and ros melodic… teleop worked perfectly and that’s exactly when we started following your tutorial. We were able to see via the wall following that it is measuring distance from the wall near perfectly, but doesn’t react to it. Very strange!
Huh! Okay, then the fact that the wall detection works gets you 90% there! It could be that you are on a Pi4. Back when I was setting this machine up I could not get ROS to run properly on a Raspberry Pi 4, so I had to revert to 3. Maybe that could be an issue? That’s a bit confusing :)
thank you! I will start trying stuff and see what works!
Hi Yoraish! I left you an email via contact, but I have a brief question. Did you set up ros 2d navigation via their given launch file before wall follow? Thanks :)
Hello, I am following the same steps as your project. I have cloned the rosserial repository in my workspace but when enter catkin_make the building process throws errors saying that it is not compatible with the compiler. I am stuck on this for a while now, please help.
Hey Yoraish, I thought you should know that we found a pretty simple but big problem! Our wall following code never posted to Cmd_vel. After much troubleshooting, we found that in the python wall following code there was “/” missing before “cmd_vel”
In the code it is written as “cmd_vel” instead of “/cmd_vel” . Hope this helps!
Hi can you give me a tutorial for low level controlling /? PLS
Hi, thank you for your tutorial. Can we use the same Arduino code you provided for the keyboard teleop for the navigation stack?
Hi. Thank you for your tutorial. Can you please post the circuit diagram too?
Hello, First of all great post. I’ve been trying to build an obstacle avoidance robot as a project by following the same procedure. Since you have not shown how exactly the path planning and navigation work, I followed the links that you’ve mentioned and tried. Unfortunately my bot works only through the teleop keyboard node, but not through navigation goals of path planning. So should I change anything in the Arduino program which I’ve been using for teleop, or can we use the same Arduino code for the navigation stack? Can someone help me with this? I’m stuck in this for weeks now and my deadlines are approaching.
may i ask have u able to solve this?
Hello, I find your project is great. I’m trying to rebuild it right now, but fail with the wiring. How did you wire it?
Thanks for your document
I have followed your document but when I set a target using RVIZ the robot does not move. The topic cmd_vel does not publish anything. How do you move the robot automatically?
The topic `cmd_vel` might not publish because of errors in the navigation stack. Some checks to test when proceeding: Are you sending a navigation goal properly in Rviz (with a correct global frame)? Are you successfully building the files? Can you visualize the costmap that the local planner is creating? Is localization working? When moving the robot by hand, does it remain localized?
Good luck!
are u using ubuntu mate on ur rasp pi 3
Thanks for the great tutorial, Yoraish!
I have been trying to mimick what you mentioned but I always get this error:-
rosrun tf tf_echo odom base_link
Failure at 1638156602.496281684
Exception thrown:”odom” passed to lookupTransform argument target_frame does not exist.
The current list of frames is:
Frame scanmatch_odom exists with parent map.
Frame laser exists with parent base_link.
Frame base_link exists with parent scanmatch_odom.
Frame scanmatcher_frame exists with parent mapcurrent_frame.
And when I launched the localize.launch, it would say:
Timed out waiting for transform from base_link to odom to become available before running costmap, tf error: canTransform: target_frame odom does not exist.. canTransform returned after 0.100815 timeout was 0.1.
Any suggestion please?
Thanks & Regards,
Robert
+1
Hello,
I followed your tutorial and frankly you did a great job. I made a small car with 4 drive wheels and 4 steering wheels. I have 4 brushless motors with 4 ESC to control the speed and two servo motors: one for the front steering and the other for the rear steering.
I managed to do the localization and navigation following your tutorial. However, I have some problems with my arduino code, because it does not allow the car to turn on itself. So when I give a 2D goal on RVIZ, the car tends to get stuck.
I looked at your arduino code but I didn’t understand well how you control your car in the “cmd_vel_cb” function.
Could you please explain me the formulas used.
Until your return I wish you a happy new year 2022.
Hey there, could you help out with a tf tree on your setup, since you managed to get it working? cannot set up localization and navigation no matter what I do.
Many thanks!
Hello
Thank you for this amazing tutorial
I'm stuck on the localize step
I think I'm having issues with the frames
Can you help me with the frames ?
Maybe a screen shot of how the frames should look like
My email is
Hazem7mohammad@gmail.com
Thank you
Frames are now attached to the end of the post!
Hi yoraish,
your tutorial is great, thank you very much! I had the same problem as Robert, so I added another static tf between base_link and laser_frame. However, when I set a navigation goal cmd_vel only shows 1/-1 in the angular component. I assume that there is still a problem with the tf tree. Could you please upload your tf tree and your rqt_graph?
Best regards!
Frames are now attached to the end of the post!
Can I have your wall_follower file again? I can’t find it in your GitHub.
Thank you^_^.
@Boss Oh shoot I see what I did there, I accidentally deleted the wall follower files. You can retrieve them by checking out an older commit, like this: git checkout 379c22c
Could you please explain why the tf tree is broken? And what exactly does scanmatch_odom fram represent?
From a high level, the particle filter that is in charge of localizing the robot on the known map has two components, a motion model step and a sensor model step. AMCL is in charge of the sensor model – it ranks pose estimates based on their likelihood of correctly reflecting the robot’s pose. AMCL removes unlikely estimates and adds more particles around higher-likelihood regions. We are left with the task of producing a motion model, i.e., moving the particles in a way that reflects the robot’s motion such that the position estimates will be updated when robot motion information is available. This is the role of scanmatch_odom; it provides this motion information to the particle filter that is implemented in AMCL.
One ROS convention for communicating this motion is via the TF tree, which we use here too :). The scanmatch odom frame is a pose on the map relative to which our robot’s motion is computed. Then, AMCL computes the best transform between that frame and the map frame such that the child of scanmatch_odom, aka base_link, is placed on the map correctly.
Finally, the TF tree is broken because we are not using part of it. Hector is building a map in the background, and this map has its own frames (scanmatcher_frame, mapcurrent, etc.). We use some information from Hector to produce the odometry needed for AMCL and publish it on scanmatch_odom.
I hope this helps!
Hi yoraish. First of all, amazing post.
I have been trying to build an autonomous mobile robot with this post.
When i try to localize and navigate, i get an error,
“Costmap2dRos transform timeout. Current time: 16549.8590 global_time stamp : 16549. 5517. tolerance :0.3
Could not get robot pose, cancelling reconfiguration.”
How do I fix this?
Hi Badri
Did you solve this problem?