We would like to introduce our project: **IMcoders**.
A few months ago we started developing, in our spare time, sensors that provide odometry data for wheeled robots. One of the main objectives of the project is to develop sensors that are extremely easy to integrate into already existing devices. The provided odometry data can be combined with the output of other sensors to navigate autonomously.
Here is a short introduction before we get to the problem:
If you want to prototype autonomous navigation on real vehicles, the options are quite limited.
Imagine you want a forklift in a warehouse, a tractor in a field, or a wheelchair at a school to navigate autonomously. To prototype and get some first-hand input data, you could easily integrate cameras and compute visual odometry, or attach a GPS device for outdoor use cases. Perfect, you can navigate using that input, but the system is still not reliable enough: maybe the scene lacks features for computing reliable visual odometry, or you are inside a building and there is no GPS signal. Wouldn't it be nice to have some encoder input? Let's add encoders to our vehicle!... hmmm, not so easy, right? If you are good at mechanics you could install encoders in your vehicle, but on off-the-shelf vehicles the hardware modifications needed to add these sensors are not a real option for most people. At the moment, there is nothing mechanically simple and affordable for everybody. The IMcoders were born to meet these needs.
For that we are using IMUs, but not in a conventional way. The idea is to attach an IMU to each robot wheel and measure its spatial orientation. By tracking the change in the orientation of the wheel, we can infer how fast the wheel is spinning and, if needed, its direction. (Yes, as you probably already noticed, the idea is to provide an output very similar to a traditional encoder, just from a different source, hence the name IMcoder = IMU + Encoder.)
You might think this approach suffers a lot of error due to the nature of IMUs (and of course it is not the perfect solution for every use case!), but by adding some constraints based on the location of the IMUs on the robot, most of the error can be mitigated, so the output is stable for most use cases.
After some simulations, we developed wireless boards that publish IMU data over a ROS interface:
This means that, by combining several of them and applying some differential-drive steering theory, we should be able to compute a reliable odometry. And that is almost what is happening. To focus on the odometry calculations, we created a simulation environment in Gazebo and attached one IMU (using the Gazebo IMU plugin) to each wheel of our simulated differential-drive robot. It is almost working as expected: the odometry calculated from our sensors is quite similar to the one provided by the diff_drive plugin for Gazebo. We say almost because there is still a mismatch between the odometry provided by the diff_drive plugin and ours. We guess there is something we are not considering in our odometry calculations, so our output is not as good as expected (it is our first time working with quaternions).
Summarizing what we are doing:
* We read the IMU's absolute orientation as a quaternion at one time instant and again at the next one.
* We compute the quaternion that defines the rotation between the first measurement and the second one.
* The rotation of the sensor is translated into linear velocity (we know the diameter of the wheels).
With this information (the linear velocity of each wheel) and a little bit of theory about differential-drive vehicles, we are able to compute the new position of the robot.
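To make the pipeline concrete, here is a minimal Python sketch of those steps. This is not our actual code: the `(w, x, y, z)` quaternion convention, the wheel radius, and the track width below are illustrative assumptions.

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotation_angle(q1, q2):
    """Angle (rad) of the rotation taking orientation q1 to q2."""
    dw, dx, dy, dz = quat_mul(q2, quat_conj(q1))
    return 2.0 * math.atan2(math.sqrt(dx*dx + dy*dy + dz*dz), abs(dw))

def diff_drive_step(pose, v_left, v_right, track, dt):
    """Integrate a differential-drive pose (x, y, theta) one timestep."""
    x, y, th = pose
    v = 0.5 * (v_right + v_left)    # forward velocity of the base
    w = (v_right - v_left) / track  # yaw rate of the base
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```

With the per-wheel rotation angle per timestep, each wheel's linear velocity is simply `wheel_radius * angle / dt`, which feeds straight into `diff_drive_step`.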
In case you want to reproduce the problem [just follow the readme](https://github.com/solosito/IMcoders/tree/devel) in the repository (more precisely, the *Differential wheeled robot with IMcoders* part) and you will be able to play around with our simulation environment.
Once the problem is solved we will continue by integrating the sensors into a commercial RC car (Parrot Jumping Sumo) and testing them with real data:
Hi, thanks for sharing this very interesting and ingenious approach.
Why not directly use the rotation rate provided by the IMU instead of the absolute orientation? The latter requires extra computation at the IMU level, usually involving a 3D compass, which may not work correctly close to wheels and motors.
So my proposal would be to directly use the rotation rate provided by the gyros and, with diff-drive forward kinematics, compute the platform velocities.
This would lead to an even simpler approach, where (ideally) a single 1D gyro could solve the problem.
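For illustration, the gyro-only pipeline I have in mind would look something like this (a sketch with made-up function names; the axle-aligned gyro axis and the track width are assumptions):

```python
def wheel_linear_velocity(axle_gyro_rate, wheel_radius):
    """Rim speed of a wheel from the gyro axis aligned with its axle.

    axle_gyro_rate: angular rate about the axle (rad/s), read directly
    from the IMU with no orientation estimation involved.
    """
    return axle_gyro_rate * wheel_radius

def forward_kinematics(v_left, v_right, track):
    """Differential-drive forward kinematics: wheel rim speeds -> body twist."""
    v = 0.5 * (v_left + v_right)    # forward velocity (m/s)
    w = (v_right - v_left) / track  # yaw rate (rad/s)
    return v, w
```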
What do you think?
In reply to this post by Tully Foote via ros-users
> Why not directly use the rotation rate provided by the IMU?
The point is that the gyro provides just a rotational speed measurement, which drifts. The main advantage of our approach is that, using gravity, the computed movement can be corrected. For instance, imagine that the algorithm computed that the wheel moved pi/2 but it really moved 3pi/8. At the end of the movement (when the robot is standing still), gravity helps correct the estimate because it always points the same way. Furthermore, during the movement the measured acceleration always lies in the lower half-space (its z component is always negative), so we might even be able to make some corrections during the movement. But this approach is still to be tested.
That's the main advantage of having absolute rotation measurements instead of relative ones.
Thanks for the clarification
OK, I see you use `imu.orientation` to compute the wheel's angular position. This orientation is not a raw measurement of the IMU; it is an estimate, often provided by the sensor itself, computed from the raw measurements (a_xyz and w_xyz). Using this angular position, as you said, there is no position drift due to time-integration of velocity noise.
But I wonder: if your accelerometers are not exactly on the axis of rotation, additional linear accelerations, not related to gravity, will be measured by the accelerometers, so the wheel-orientation computation may be corrupted.
What do you think about that?
You are right. We already considered that, and we saw that, for quick translations, the estimated orientation is erratic. Thus, depending on the use case and the quality of the IMU integrated into the board, our sensors will provide a better or worse output (sometimes it is just a matter of calibration). Here we are facing a trade-off between the cost of the sensor and the expected accuracy.
Anyway, for applications like the ones described above, we expect that the accuracy of the sensor we chose for the first prototype (MPU9250) will be more than enough.
I hope I answered your question. Thank you very much for your interest, by the way :slight_smile:
[quote="solosito, post:5, topic:5543"]
Thus, depending on the use case and the quality of the IMU integrated into the board, our sensors will provide a better or worse output (sometimes it is just a matter of calibration).
[/quote]
How are you currently calibrating the extrinsics of the IMU sensor's origin with respect to the center of rotation of the wheel axis? After fixing the IMcoder to the wheel in a new position, are you by chance rotating the wheel at a constant velocity, then inferring the rotational speed `ω` from the sinusoidal frequency of the two IMU accelerometer axes perpendicular to the wheel's axle, then additionally using the amplitude to infer the radial distance `r` from the wheel axis, and the phase of the peak of the waveform to discern the angular position `θ`? I suppose the phase between accelerometer axes x and y (assuming z points along the wheel axle) could be used to resolve the rotation of the IMU about the endpoint of the vector `(r, θ)` if neither `x` nor `y` happens to lie along `(r, θ)`.
I'm not sure how level the mounting of your fixture is on the wheel, or even whether the toy's wheels and axles are true (straight); if it is only roughly perpendicular, the off-axis components may then need to be accounted for as well, necessitating a full 6-DOF calibration rather than just a 3-DOF one. Additionally, if positioning is subject to disturbances during reinstallation, as in the case of the rotationally symmetric magnetic clips, then perhaps making the calibration online, or running it as a tracking filter, might be appropriate to simplify deployment to arbitrary platforms.
[quote="ruffsl, post:7, topic:5543"]
How are you currently calibrating the extrinsics of the IMU sensor's origin with respect to the center of rotation of the wheel axis?
[/quote]
Hey, sorry for the late answer. Given the youth of the project and our limited development time, we are not currently doing any kind of calibration. We already considered what you describe, and the intention is to follow that path, but only if necessary.
The main goal is to have a system providing odometry good enough to be used for autonomous navigation alongside other inputs (e.g., visual odometry, GPS, UWB sensors...). Thus, we develop iteration by iteration, checking what is really necessary.
So, going back to the topic, regarding the perpendicularity of the robot's axles: that is a good point. As a first approximation, we simply assumed that they are perpendicular. The other day we recorded some datasets that we will probably check this weekend.
What we are trying right now is to find a "ground truth" to compare the output of our algorithm against. For that, we thought about computing visual odometry using the robot's camera (but we don't know how good it is) or using ArUco markers to get the position of the robot.
Do you think we are following the right path? What would be your proposal given our time restrictions?
I'm not sure of the scale or distance you'd like to test against to compare your odometry with (are we talking about looping around a table, or a building?), but benchmarking against an established SLAM algorithm (as opposed to an odometry method), such as Cartographer, would usually still be useful. Try to use a SLAM approach where data association of landmarks is less of an issue and where odometry sensing is optional at runtime. If you don't want to use a LIDAR, or can't fit one on the platform, but only have an onboard camera, you could use something like this:
If the platform is sensor-deprived, i.e. you can't tack a camera onto it, you could flip the problem and go the poor-man's motion-capture route, using a fixed camera and a printed fiducial taped to the robot:
The Golem Lab at Georgia Tech used something quite like this, with a six-camera overhead vision system, when a proper mocap system was unavailable. Just be sure to disable any autofocus features if you're using a cheap web camera or something.