- System architecture
- Preparing the environment
- Calibrating the camera
- Rectifying image
- Getting odometry
- Visualizing pose
After this tutorial you will be able to build a system that determines the position and orientation of a robot by analyzing the associated camera images. This information can be used in the Simultaneous Localisation and Mapping (SLAM) problem, which has been at the center of robotics research for decades.
As you can see in this picture, we have the Raspberry Pi camera connected, with raspicam creating a multimedia pipeline and sending video from the camera to gst-launch. The latter then transmits the image to our server over UDP. gscam broadcasts the video to the /raspicam/image_raw topic. This image is then rectified by the image_proc node. Finally, the rectified image is taken by mono_odometer, which processes it, computes the position and orientation of the robot, and publishes this data straight to the /vision/pose topic.
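For reference, the sending side on the Pi typically pipes raspivid into gst-launch. The sketch below is only one possible variant, not the exact pipeline of this setup; the resolution, bitrate, port and the <SERVER_IP> placeholder are assumptions to adjust for your own network.

```bash
# On the Raspberry Pi: capture H.264 from the camera module and send it
# to the server as an RTP stream over UDP (port 9000 here).
$ raspivid -n -t 0 -w 640 -h 480 -fps 30 -b 1000000 -o - | \
  gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=<SERVER_IP> port=9000
```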
# **Preparing the environment**
## **Connecting the camera**
Firstly, connect your camera to the Raspberry Pi. To determine whether it is working or not, just type:
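The exact command is not preserved here; a minimal check, assuming the standard Raspberry Pi camera stack, could be:

```bash
# Ask the firmware whether the camera is detected; it should report
# "supported=1 detected=1".
$ vcgencmd get_camera

# Optionally grab a still frame to confirm the camera actually captures images.
$ raspistill -o test.jpg
```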
# **Calibrating the camera**
In order to get a good calibration you will need to move the checkerboard around in the camera frame such that:
- checkerboard on the camera's left, right, top and bottom of field of view
- X bar - left/right in field of view
- Y bar - top/bottom in field of view
- Size bar - toward/away and tilt from the camera
- checkerboard filling the whole field of view
- checkerboard tilted to the left, right, top and bottom
As you move the checkerboard around, you will see three bars on the calibration sidebar increase in length. When the **CALIBRATE** button lights up, you have enough data for calibration and can click **CALIBRATE** to see the results.
Calibration can take about a minute. The windows might be greyed out, but just wait; it is working.
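For completeness, the calibration GUI described above comes from the camera_calibration package. A sketch of how it might be started is shown below; the checkerboard size (8x6 inner corners), the square size (0.108 m) and the /raspicam namespace are assumptions, so substitute the values for your own board and topics.

```bash
# Start the monocular calibrator on the raw image stream.
# --size is the number of inner corners of the checkerboard,
# --square is the side length of one square in metres.
$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 \
    image:=/raspicam/image_raw camera:=/raspicam
```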
After calibration is done, you can save the archive and then extract it. You will need the *.yaml file. Rename it to raspicam.yaml and move it to the ~/odometry/src/gscam/example directory. Then open the raspicam.launch file that we've already created and change it so that it points to the new raspicam.yaml calibration file.
After that your camera is calibrated and you can launch gscam with:
$ roslaunch gscam raspicam.launch
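To check that gscam is actually publishing, you can inspect the topics; this is a generic ROS check rather than part of the original walkthrough:

```bash
# List the camera topics and measure the publishing rate of the raw image.
$ rostopic list | grep raspicam
$ rostopic hz /raspicam/image_raw
```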
# **Rectifying the image**
The raw image from the camera driver is not what is needed for visual processing, but rather an undistorted and (if necessary) debayered image. This is the job of image_proc. For example, if you have topics /raspicam/image_raw and /raspicam/camera_info you would do:
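Following the standard image_proc usage, the node is run inside the camera namespace so that it subscribes to the right topics; a sketch, assuming the /raspicam namespace used above:

```bash
# Run image_proc in the raspicam namespace; it subscribes to
# /raspicam/image_raw and /raspicam/camera_info and publishes rectified
# images such as /raspicam/image_rect.
$ ROS_NAMESPACE=raspicam rosrun image_proc image_proc
```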
# **Visualizing pose**
**rqt_pose_view** is a very simple plugin that just displays an OpenGL 3D view showing a colored cube. You can drag and drop a geometry_msgs/Pose topic onto it from the "Topic Introspection" or "Publisher" plugins to make it visualize the orientation specified in the message.
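One way to bring the plugin up is sketched below; you can equally open it from the Plugins menu of a regular rqt session if the standalone launch does not work on your installation.

```bash
# Launch rqt with only the pose view plugin loaded, then drag the
# /vision/pose topic onto the cube from the "Topic Introspection" plugin.
$ rqt --standalone rqt_pose_view
```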
Really nice work and I am excited to try this! I am trying to use your setup but couldn't launch the raspicam launch file properly. The setup on the Pi was fine and the video is broadcasting, since I am able to stream it with the command:
$ gst-launch-1.0 -v udpsrc port=9000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=f
The problem arises when I run the raspicam.launch file and it returns the message:
[FATAL] [1515599618.204792312]: GStreamer: cannot link outelement("rtph264depay0") -> sink
[FATAL] [1515599618.204858007]: Failed to initialize gscam stream!
Do you have any idea what is wrong? My launch file is exactly the same as the one in your post.