bolero.environment.OpenAiGym

class bolero.environment.OpenAiGym(env_name='CartPole-v0', render=False, log_to_file=False, log_to_stdout=False, seed=None)[source]

Wrapper for OpenAI Gym environments.

gym is an optional dependency and is not installed with BOLeRo by default. You have to install it manually, e.g. with “sudo pip install gym”.

OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. See OpenAI gym’s documentation for details.

Parameters:
env_name : string, optional (default: ‘CartPole-v0’)

Name of the environment. See OpenAI gym’s environments for an overview.

render : bool, optional (default: False)

Visualize the environment

log_to_file : bool or string, optional (default: False)

Log results to the given file; the file will be located in $BL_LOG_PATH

log_to_stdout : bool, optional (default: False)

Log to standard output

seed : int, optional (default: None)

Seed for the environment
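
A minimal construction sketch using only the parameters documented above; the import path follows the class name at the top of this page, and the chosen values are illustrative:

    from bolero.environment import OpenAiGym

    # Wrap the CartPole-v0 task; rendering and logging stay disabled (the defaults)
    env = OpenAiGym(env_name="CartPole-v0", render=False, log_to_stdout=False, seed=0)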

__init__(env_name='CartPole-v0', render=False, log_to_file=False, log_to_stdout=False, seed=None)[source]

get_args()

Get the parameters of this environment.

Returns:
params : mapping of string to any

Parameter names mapped to their values.
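
For illustration, a sketch of reading back the parameters; it assumes the returned mapping mirrors the constructor arguments, which is not stated explicitly on this page:

    from bolero.environment import OpenAiGym

    env = OpenAiGym(env_name="CartPole-v0", seed=42)

    # get_args returns a mapping of parameter names to their values;
    # assuming it reflects the constructor arguments, this prints the name and seed
    args = env.get_args()
    print(args.get("env_name"), args.get("seed"))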

get_discrete_action_space()[source]

Get list of possible actions.

An error will be raised if the action space of the problem is not discrete. The environment must be initialized before this method can be called.

Returns:
action_space : iterable

Actions that the agent can take
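
A sketch of querying the discrete action space. It assumes the environment is first initialized via the init() method of BOLeRo's Environment interface (not documented in this section) and that the wrapped task has a discrete action space:

    from bolero.environment import OpenAiGym

    env = OpenAiGym(env_name="CartPole-v0", seed=0)
    env.init()  # assumption: init() from BOLeRo's Environment interface sets up the gym task

    # CartPole-v0 uses a discrete action space, so this should not raise
    actions = env.get_discrete_action_space()
    print(list(actions))  # e.g. [0, 1] for CartPole's two actions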