pyRDDLGym is the official generator of Gym environments from RDDL files.
pyRDDLGym supports the simulation of general Markov Decision Processes (MDPs) described in the Relational Dynamic Influence Diagram Language (RDDL). Notably, it can model complex environments with concurrent events and both endogenous and exogenous noise in a lifted (object-oriented) specification, allowing environments to scale automatically from a single object to thousands without changing the model. This is significantly faster and easier than coding environments directly in a programming language: pyRDDLGym enables rapid development and verification of environments, as well as easy transfer and sharing, since no code is involved. Additionally, pyRDDLGym ships with built-in examples ranging from classical control and operations research to complex networked traffic signal control.
pyRDDLGym eliminates the need for coding, moving seamlessly from problem design to simulation.
Rapid development involves defining an environment in a few lines of RDDL code instead of hundreds of lines of Python code.
Environments scale automatically: simply set the number of objects to simulate, with no reprogramming required.
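To illustrate how scaling works, the fragment below sketches a hypothetical RDDL instance; the domain name, object names, and parameter values are illustrative, not from a shipped example. Scaling the environment amounts to listing more objects, with the domain model untouched.

```
// Hypothetical RDDL instance: scaling up is just listing more objects.
non-fluents robots_3 {
    domain = multi_robot;               // assumed domain name
    objects { robot : {r1, r2, r3}; };  // add r4, r5, ... to scale up
}

instance inst_robots_3 {
    domain = multi_robot;
    non-fluents = robots_3;
    max-nondef-actions = 3;
    horizon = 40;
    discount = 1.0;
}
```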
pyRDDLGym comes with additional tools, e.g., a DBN generator, symbolic representations, and built-in planners such as JaxPlan and PROST.
A pure-Python package, 100% compatible with OpenAI Gym: you can seamlessly use your favorite Gym-interacting agent with pyRDDLGym environments.
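Gym compatibility means any agent written against the standard reset/step loop works unchanged. The sketch below shows that interaction pattern; to keep it self-contained, `ToyEnv` is a hypothetical stand-in with a Gym-style interface, not a pyRDDLGym class. With pyRDDLGym installed, an environment created from RDDL files would be driven by the same loop.

```python
import random

class ToyEnv:
    """Hypothetical stand-in exposing a Gym-style reset/step interface."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {"pos": 0.0}  # observations are dicts of fluent values

    def sample_action(self):
        return {"move": random.choice([-1.0, 1.0])}

    def step(self, action):
        self.t += 1
        obs = {"pos": action["move"] * self.t}
        reward = -abs(obs["pos"])
        done = self.t >= self.horizon
        return obs, reward, done, {}

# The standard Gym interaction loop: any Gym-interacting agent runs it.
env = ToyEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.sample_action()        # an agent's policy goes here
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode finished after {env.t} steps, return = {total_reward}")
```

Because the loop only touches `reset`, `step`, and the action/observation containers, swapping the stand-in for a real pyRDDLGym environment requires no agent changes.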
All simulations from pyRDDLGym are reproducible, making it easy to debug and improve reinforcement learning and planning algorithms.
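Reproducibility comes down to seeding the random dynamics. The sketch below uses a hypothetical stochastic environment (not a pyRDDLGym class) to show the pattern: two rollouts started from the same seed produce identical trajectories, which is what makes failures replayable during debugging.

```python
import random

class NoisyEnv:
    """Hypothetical environment whose exogenous noise comes from a seedable RNG."""
    def __init__(self):
        self.rng = random.Random()
        self.state = 0.0

    def reset(self, seed=None):
        self.rng.seed(seed)  # fix the noise stream for this episode
        self.state = 0.0
        return self.state

    def step(self, action):
        # transition = action effect + exogenous Gaussian noise
        self.state += action + self.rng.gauss(0.0, 0.1)
        return self.state, -abs(self.state), False, {}

def rollout(seed, steps=10):
    env = NoisyEnv()
    env.reset(seed=seed)
    return [env.step(1.0)[0] for _ in range(steps)]

# Same seed, same trajectory: the episode is exactly replayable.
assert rollout(seed=42) == rollout(seed=42)
```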
pyRDDLGym is an open-source project under the MIT license. Issues and suggestions are welcome.
Install the package with pip, or download the source code.
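For reference, the two usual routes (the PyPI package name is from the project; the editable-install step assumes you have cloned the repository and are in its root directory):

```shell
# Install the released package from PyPI
pip install pyRDDLGym

# Or, from a source checkout, install in editable mode
pip install -e .
```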