The _openai_ros_ package provides a common ground for using the OpenAI Gym infrastructure to train robots with Reinforcement Learning algorithms, without having to deal with the OpenAI side yourself. It simplifies swapping the robot, the task, or the learning algorithm while keeping the same overall structure, which makes it easy for roboticists to compare results.
Training a robot on a task is reduced to the following steps:
* select the robot you want to use and pick its _openai_ros_ robot environment (several provided, but you can create your own with a provided template)
* select the task environment to solve (we provide some tasks, but you can create your own with a provided template)
* launch the simulation with the Gazebo environment (provided)
* and then apply the learning algorithm (your own implementation or one of the baselines from OpenAI).
If you want to change the robot, just change the robot environment and keep the rest. If you want to change the task, just change the task environment and keep the rest. If you want to test different learning algorithms on the same robot and task, just change the algorithm and keep the rest!
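The decoupling described above can be sketched as follows. This is an illustrative, self-contained example, not the actual _openai_ros_ API: the class and function names are hypothetical. The point is that a training loop written against the Gym-style `reset()`/`step()` interface works unchanged when you swap the environment or the learning algorithm.

```python
import random

class ToyTaskEnv:
    """Stand-in for a task environment: reach position 5 on a line."""
    GOAL = 5

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action is -1 or +1
        self.pos += action
        done = self.pos == self.GOAL
        reward = 1.0 if done else -0.1
        return self.pos, reward, done, {}

def random_policy(obs):
    """One interchangeable 'algorithm': act at random."""
    return random.choice([-1, 1])

def greedy_policy(obs):
    """Another interchangeable 'algorithm': move toward the goal."""
    return 1 if obs < ToyTaskEnv.GOAL else -1

def train(env, policy, episodes=3, max_steps=50):
    """Generic loop: works unchanged for any env/policy pair."""
    returns = []
    for _ in range(episodes):
        obs, total = env.reset(), 0.0
        for _ in range(max_steps):
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
            if done:
                break
        returns.append(total)
    return returns

# Swap the policy (or the env), keep everything else:
print(train(ToyTaskEnv(), greedy_policy))
```

Swapping `greedy_policy` for `random_policy`, or `ToyTaskEnv` for another environment with the same interface, requires no change to `train` — the same separation _openai_ros_ maintains between robot, task, and algorithm.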
We provide complete documentation and several video examples on how to do all those steps.
Within the _openai_ros_ package, we provide ready-made OpenAI-to-Gazebo simulation connections for the supported ROS robots, so roboticists do not have to worry about how to connect the OpenAI algorithms to the simulated robots and can concentrate on the learning itself.
In this initial release, we provide connectors to the following robots:
* Cube robot
* Hopper robot
* ROSbot by Husarion
* WAM by Barrett
* Parrot drone
* Sawyer by Rethink Robotics
* Shadow Robot Grasping Sandbox
* Summit XL by Robotnik
* Turtlebot3 by Robotis
* WAM-V water vehicle from the RobotX Challenge
Templates are also provided for the creation of your own robot connector.
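The template split can be sketched as a two-layer class hierarchy. Again, this is a hypothetical illustration (the names `RobotEnv`, `move_joint`, and `ReachAngleTaskEnv` are not the real _openai_ros_ classes): the robot environment owns the robot-specific I/O, and the task environment inherits from it and adds only the task logic.

```python
class RobotEnv:
    """Robot connector layer: knows how to command and read *this* robot."""
    def __init__(self):
        self.joint_angle = 0.0

    def move_joint(self, delta):
        # Robot-specific actuation (in a real connector: publish to ROS topics).
        self.joint_angle += delta

    def get_observation(self):
        # Robot-specific sensing (in a real connector: read ROS sensor topics).
        return self.joint_angle

class ReachAngleTaskEnv(RobotEnv):
    """Task layer: defines reward and termination, reuses the robot I/O."""
    TARGET = 1.0

    def reset(self):
        self.joint_angle = 0.0
        return self.get_observation()

    def step(self, action):
        self.move_joint(action)
        obs = self.get_observation()
        done = abs(obs - self.TARGET) < 0.05
        reward = -abs(obs - self.TARGET)
        return obs, reward, done, {}
```

Changing robots then means swapping the base class for another connector with the same hooks; the task layer and the learning code stay untouched.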
The package is **open source, released under the LGPL license**. Contributions are welcome.