Creating virtual environments for reinforcement learning

In reinforcement learning, the agent interacts with an environment: it sends actions and receives observations and rewards in return. Training in a real-life environment is often impractical for safety or time-efficiency reasons, so a simulated virtual environment is commonly used in practice.

OpenAI Gym is arguably the most widely used virtual environment library in reinforcement learning. It offers a wide range of environments for classical benchmark problems.
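To make the interaction loop concrete, here is a minimal sketch using one of Gym's benchmark environments. It assumes the classic Gym API (gym < 0.26), where `env.step()` returns four values; CartPole-v1 is used purely as an example.

```python
import gym

# Create one of Gym's classical benchmark environments
env = gym.make("CartPole-v1")

obs = env.reset()          # initial observation
done = False
total_reward = 0.0

while not done:
    # A real agent would choose the action from the observation;
    # here we simply sample a random valid action.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print("Episode return:", total_reward)
```

Every Gym environment follows this same reset/step interface, which is what makes it possible to plug a custom environment into existing RL code.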

These environments are great for learning, but eventually you will want to create a virtual environment to solve your own problem, be it stock trading, robotics or self-driving vehicles.

In this tutorial series, you will learn how to create your own virtual environment, with a particular focus on robotics applications. This is the classical pipeline for training RL robotics agents.

We will focus here on the interaction between the last three blocks.

This tutorial series is organized as follows:

Part 0 – Prerequisites

Follow these two tutorials:
Learning to use OpenAI Gym
Virtual environments for reinforcement learning

Part 1 – Registering a custom Gym environment (see the sketch after this outline)

Part 2 – Creating a simple Gym environment – Tic Tac Toe

Part 3 – Creating a Gym environment with Pybullet and an XML file

Part 4 – Creating a Gym environment with Pybullet and a URDF file
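As a brief preview of Parts 1 and 2, registering a custom environment with Gym amounts to calling `register()` with an id and an entry point, after which the environment can be created with `gym.make()` like any built-in one. The id `TicTacToe-v0` and the module path below are hypothetical placeholders, not names taken from this series.

```python
import gym
from gym.envs.registration import register

# Hypothetical id and entry point; the class at this path must subclass
# gym.Env and implement reset(), step() and (optionally) render().
register(
    id="TicTacToe-v0",
    entry_point="my_envs.tictactoe:TicTacToeEnv",
)

env = gym.make("TicTacToe-v0")
```

The later parts build on exactly this mechanism, swapping in environments backed by Pybullet physics described through XML or URDF files.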