
Using Bumblebee Xb3 with image_pipeline


Using Bumblebee Xb3 with image_pipeline

Patrick Mihelich
Hi Paul,

I'm opening this topic to the list as I know there are some other Bumblebee users out there. My comments are inline below.

By the way, there is another community-contributed Bumblebee driver that may be useful to you, see the bumblebee2 package.

On Mon, Sep 20, 2010 at 5:57 PM, Paul Furgale <[hidden email]> wrote:
I'm currently writing a ROS interface for my Point Grey Research Bumblebee Xb3 camera. The cameras come pre-calibrated, and in my experience the factory calibration works well. Unfortunately, they don't use the 5-parameter distortion model currently supported in the image_pipeline. I can use the Point Grey libraries to read their calibration file and generate maps suitable for the OpenCV function cv::remap(). Skimming the code of image_pipeline suggests that you have something very similar implemented.
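The maps Paul describes feed cv::remap(), which fills each output pixel by sampling the source image at per-pixel coordinates taken from the maps. A minimal pure-Python sketch of that operation (the remap idea only, not Point Grey's or OpenCV's implementation; `remap_bilinear` is a hypothetical name):

```python
# Sketch of what cv::remap() does with precomputed float maps: for each
# output pixel (u, v), sample the source image at (map_x[v][u], map_y[v][u])
# using bilinear interpolation.

def remap_bilinear(src, map_x, map_y):
    """src: 2D list of intensities; map_x/map_y: 2D lists of source coords."""
    h_src, w_src = len(src), len(src[0])
    out = []
    for v in range(len(map_x)):
        row = []
        for u in range(len(map_x[0])):
            x, y = map_x[v][u], map_y[v][u]
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, w_src - 1), min(y0 + 1, h_src - 1)
            fx, fy = x - x0, y - y0
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

An identity map returns the input unchanged; a rectification map encodes, for every rectified pixel, where in the distorted image to sample.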

Out of curiosity, do you know what distortion model they use? I don't have experience with the Bumblebee cameras, but I've browsed Point Grey's software docs and it looks like their SDKs perform exactly the same operations as the image_pipeline. It would not surprise me at all if their distortion parameters could be encoded into CameraInfo as-is (or at least closely approximated), if only their APIs exposed them.

Rather than go my own way and produce a parallel implementation of PinholeCamera and the stereo processing already available in ROS, I would be interested in helping shoehorn in support for custom rectification/dewarping maps. The simplest solution I can think of is to add the ability to set the undistort maps in the pinhole camera model, and to add some fields to the CameraInfo message. Granted, a custom map would necessarily be large, but the CameraInfo message is meant to be transmitted once during initialization (as far as I can tell).

Actually, every Image message is paired with a CameraInfo message having the same timestamp. So unfortunately putting custom maps in CameraInfo would be very expensive in bandwidth. image_geometry::PinholeCameraModel is optimized to only rebuild the undistort maps when the parameters change (e.g. in self-calibrating systems).
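The rebuild-on-change caching can be sketched as follows (illustrative names only, not the actual image_geometry API):

```python
# Sketch of the caching idea in image_geometry::PinholeCameraModel:
# accept a CameraInfo with every frame, but only rebuild the (expensive)
# undistortion maps when the calibration parameters actually change.

class CachedCameraModel:
    def __init__(self):
        self._params = None
        self.rebuild_count = 0  # how many times maps were recomputed

    def from_camera_info(self, params):
        if params != self._params:      # cheap comparison every frame
            self._params = params
            self._rebuild_maps()        # expensive, done only on change

    def _rebuild_maps(self):
        self.rebuild_count += 1         # stand-in for initUndistortRectifyMap

model = CachedCameraModel()
for frame in range(100):                # same calibration on every frame...
    model.from_camera_info(("D", "K", "R", "P"))
# ...so the maps are built exactly once
```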

Currently the easiest way to integrate is to just bite the bullet and use camera_calibration to get a camera model that the image_pipeline understands.

Another way is to replicate the topics produced by stereo_image_proc in your driver node. You'd use PtGrey libs for the rectification (image_rect and image_rect_color topics). stereo_image_proc has some undocumented but fairly stable library methods (processDisparity, processPoints2) you can use to produce the stereo outputs (disparity image and point cloud). The community-contributed videre_stereo_cam package has a driver for Videre stereo cameras that I believe follows this model.
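The disparity-to-depth step behind those stereo outputs is the standard pinhole relation Z = f * B / d, for focal length f (pixels), baseline B (meters), and disparity d (pixels). A sketch (the generic formula, not the stereo_image_proc code; `disparity_to_depth` is a made-up helper):

```python
def disparity_to_depth(d_px, focal_px, baseline_m):
    """Standard pinhole stereo relation: Z = f * B / d."""
    if d_px <= 0:
        return float("inf")   # zero disparity: point at infinity / no match
    return focal_px * baseline_m / d_px

# e.g. f = 500 px, B = 0.12 m, d = 20 px  ->  Z = 3.0 m
```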

For Diamondback I'm planning to refactor the image_pipeline into nodelets. That might be interesting to your use case, as nodelets will give you much more flexibility in mixing and matching the image_pipeline operations. You'd just write your own nodelet for rectification and swap it in for the default one using PinholeCameraModel.

The last couple options have the drawback that CameraInfo and PinholeCameraModel still don't understand the distortion model. For many (admittedly not all) vision processing nodes this will be OK, as they operate on rectified images and ignore distortion.

Do you have any interest in this? Is this the right venue to discuss this or should I post these suggestions somewhere else?

Yes, and the mailing list is the best place for design discussions like this.

Cheers,
Patrick

Thanks!
--
Paul Furgale
PhD Candidate
University of Toronto
Institute for Aerospace Studies
Autonomous Space Robotics Lab
ph: 647-834-2849
skype: paul.furgale
Videos of robots doing things: http://asrl.utias.utoronto.ca/~ptf/


_______________________________________________
ros-users mailing list
[hidden email]
https://code.ros.org/mailman/listinfo/ros-users

Re: Using Bumblebee Xb3 with image_pipeline

Kurt Konolige
Unfortunately, the distortion model for the Bumblebee is proprietary,
and uses a very large data structure (so it must be doing a local
algorithm).  So you would want to write a custom image pipeline node
that performs rectification.  You can't put the BB distortion data
structure into a CameraInfo message.

Cheers --Kurt

On 9/21/2010 4:37 PM, Patrick Mihelich wrote:

> Out of curiosity, do you know what distortion model they use?

Re: Using Bumblebee Xb3 with image_pipeline

Paul Furgale
In reply to this post by Patrick Mihelich
Patrick,

> Out of curiosity, do you know what distortion model they use?

As Kurt suggests, it's a proprietary algorithm. It looks like a 2D spline warp.

> Actually, every Image message is paired with a CameraInfo message having the
> same timestamp. So unfortunately putting custom maps in CameraInfo would be
> very expensive in bandwidth. image_geometry::PinholeCameraModel is optimized
> to only rebuild the undistort maps when the parameters change (e.g. in
> self-calibrating systems).

Is it possible to get the same effect by sending a CameraInfo message only when calibration parameters change?

> Another way is to replicate the topics produced by stereo_image_proc in your
> driver node.

Yes, that's a possibility. I see a few drawbacks. First, it's a bit of work to get going, and significant work if I want it to be robust, well documented, and tested. Second, it produces a substantial amount of code that has to be maintained as ROS evolves. From my perspective, the sooner data from my sensors gets into common code (well-used, well-documented, well-tested), the better.

> For Diamondback I'm planning to refactor the image_pipeline into nodelets.

That sounds great! If I do end up going the route of reproducing the functionality of stereo_image_proc, I'd like to do that on the first pass. This will avoid extra work when Diamondback comes out. Is there a roadmap available for what this will look like?

Thanks,

Paul

Re: Using Bumblebee Xb3 with image_pipeline

Patrick Mihelich
In reply to this post by Patrick Mihelich
On Wed, Sep 22, 2010 at 5:06 AM, Paul Furgale <[hidden email]> wrote:
Is it possible to get the same effect by sending a CameraInfo message only
when calibration parameters change?

This would be a major design change to the image_pipeline. Sending CameraInfo only on change introduces a synchronization problem: how does the subscriber sync the new parameters with the correct image? Since CameraInfo is lightweight compared to Image, we decided to just pay the (low) cost of sending it every time.
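The per-frame pairing Patrick describes can be sketched as an exact-time matcher: buffer both streams and fire a callback only when an Image and a CameraInfo carry the same stamp (a stripped-down illustration, not the message_filters implementation):

```python
class ExactTimeSync:
    """Pair messages from two streams by identical timestamp."""
    def __init__(self, callback):
        self.callback = callback
        self.images = {}        # stamp -> image payload
        self.infos = {}         # stamp -> camera info payload

    def add_image(self, stamp, image):
        self.images[stamp] = image
        self._try_emit(stamp)

    def add_info(self, stamp, info):
        self.infos[stamp] = info
        self._try_emit(stamp)

    def _try_emit(self, stamp):
        # Emit only when both halves of the pair have arrived.
        if stamp in self.images and stamp in self.infos:
            self.callback(self.images.pop(stamp), self.infos.pop(stamp))
```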

Certainly your way could be made to work; it just makes the code more complicated, and we haven't found it necessary. If you write your own specialized rectification nodelet, of course you could use whatever side channel you want for expensive updates.

> For Diamondback I'm planning to refactor the image_pipeline into nodelets.

That sounds great! If I do end up going the route of reproducing the
functionality of stereo_image_proc, I'd like to do that on the first pass.
This will avoid extra work when Diamondback comes out. Is there a roadmap
available for what this will look like?

The rough schedule for Diamondback work is at http://www.ros.org/wiki/diamondback/Planning. The image_pipeline work should be done by November. At some point I'll draw up a more formal design document, but the breakdown will look something like:

Nodelets for:
 * Color processing: image_raw -> image, image_color
 * Rectification: image -> image_rect
 * Stereo correlation: left/image_rect, right/image_rect -> disparity
 * Point cloud: disparity -> points, points2
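The mix-and-match idea can be sketched as pluggable pipeline stages, where a vendor-specific rectification step replaces the default one (names and structure are illustrative only, not the nodelet API):

```python
def debayer(image_raw):
    return ("color", image_raw)             # stand-in for color processing

def default_rectify(image):
    return ("rect_pinhole", image)          # PinholeCameraModel-based stage

def bumblebee_rectify(image):
    return ("rect_ptgrey", image)           # vendor calibration maps instead

def run_pipeline(image_raw, rectify=default_rectify):
    """Chain the stages; any stage can be swapped without touching the rest."""
    image = debayer(image_raw)
    return rectify(image)

# Swapping in the custom stage changes only one argument:
run_pipeline("raw_frame", rectify=bumblebee_rectify)
```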

Patrick


Re: Using Bumblebee Xb3 with image_pipeline

Bill Morris
On Wed, 2010-09-22 at 16:12 -0700, Patrick Mihelich wrote:

> Since CameraInfo is lightweight compared to Image, we decided to just
> pay the (low) cost of sending it every time.

My preference is to keep the current system, or at least continue to support it, as it provides a path for autofocus calibration.

>
> Nodelets for:
>  * Color processing: image_raw -> image, image_color
>  * Rectification: image -> image_rect
>  * Stereo correlation: left/image_rect, right/image_rect -> disparity
>  * Point cloud: disparity -> points, points2

Are you planning on supporting color correction?
Can you let us know when the code is working enough to start porting
camera drivers?


Re: Using Bumblebee Xb3 with image_pipeline

Paul Furgale
In reply to this post by Patrick Mihelich
> Certainly your way could be made to work, it just makes the code more
> complicated and we haven't found it necessary. If you write your own
> specialized rectification nodelet, of course you could use whatever side
> channel you want for expensive updates.

Okay, sounds good. I'll keep plugging away here getting my driver up and running.

Thanks for your help!

Paul

Disparity image as heat map

Ibrahim
In reply to this post by Patrick Mihelich
Dear All,

I'm an amateur who has just started using the Bumblebee XB3. Can anybody please tell me how to obtain a color disparity heat map in the Visual C++ environment? By default the disparity map is in greyscale.
Secondly, do you recommend using the factory default calibration, or calibrating the camera with your own parameters, in order to get a good disparity image?

I'd be really grateful for any answers to my questions.

Regards,
Ibrahim

Re: Disparity image as heat map

Patrick Mihelich
On Tue, Nov 2, 2010 at 3:26 AM, Ibrahim <[hidden email]> wrote:
I'm an amateur who has just started using the Bumblebee XB3. Can anybody please
tell me how to obtain a color disparity heat map in the Visual C++
environment? By default the disparity map is in greyscale.

You can use the stereo_view utility in image_view to view the color-mapped disparity image. There's no public API for doing that mapping, but you can look at the stereo_view.cpp source code to see what we do.
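The color mapping itself is just a per-pixel value-to-color lookup. A toy blue-to-red ramp illustrating the idea (not the actual palette stereo_view uses):

```python
def disparity_to_rgb(d, d_max):
    """Map a disparity in [0, d_max] to a simple blue->red heat color.

    Far points (small disparity) come out blue, near points red.
    This is a toy ramp, not the stereo_view palette.
    """
    t = max(0.0, min(1.0, d / float(d_max)))  # normalize and clamp to [0, 1]
    r = int(255 * t)
    g = 0
    b = int(255 * (1.0 - t))
    return (r, g, b)
```

Applying this to every pixel of a greyscale disparity image yields the familiar heat-map visualization.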

Secondly, do you recommend using the factory default calibration, or
calibrating the camera with your own parameters, in order to get a good
disparity image?

There have been a couple of other threads on the mailing list about using the Bumblebee XB3, if you search the archives. Someone was working on using the factory calibration to produce rectified images; unfortunately that's a bit difficult to integrate with the rest of the image_pipeline right now, though it will get easier in Diamondback. Another way is to just recalibrate using camera_calibration and let stereo_image_proc do all of the processing.

Cheers,
Patrick
