slim.simulation.simulator module
The main entry point of the simulation.
This module provides two important classes:

- a SimulatorPZEnv class that implements all of the game-theoretic logic (policy tuning);
- a get_env() method that wraps SimulatorPZEnv with extra assertions;
- a Simulator class for standalone runs; this performs automatic stepping for each day, artifact saving and error management so you do not have to.

In the large majority of cases you may just want to import Simulator in your code.
A few other functions are provided for testing purposes.
- class slim.simulation.simulator.Simulator(output_dir: pathlib.Path, cfg: Config)
Bases:
object
The main entry point of the simulator.
This class provides the main loop of the simulation and is typically used by the driver when extracting simulation data.
Furthermore, the class allows the user to perform experience replays by resuming snapshots.
See the Getting Started guide for details.
- run_model(*, resume=False, quiet=False)
Perform the simulation by running the model.
- Parameters
resume – if True, resume the simulation from a previously saved snapshot
quiet – if True, disable tqdm’s progress bar.
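As a sketch, a standalone run might look like the following. The import path for Config is an assumption, and the imports are guarded so the sketch stays loadable even where slim is not installed:

```python
from pathlib import Path

try:
    from slim.simulation.simulator import Simulator
    from slim.simulation.config import Config  # assumed import path for Config
except ImportError:  # slim not installed; keep the sketch importable anyway
    Simulator = Config = None


def run_standalone(output_dir: str, cfg) -> None:
    """Run a full simulation, saving artifacts under output_dir."""
    sim = Simulator(Path(output_dir), cfg)
    # Keyword-only flags as per the signature above: no resume, no tqdm output.
    sim.run_model(resume=False, quiet=True)
```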
- class slim.simulation.simulator.SimulatorPZEnv(cfg: Config)
Bases:
pettingzoo.utils.env.AECEnv
A PettingZoo environment. This implements the basic API that any policy expects.
If you simply want to launch a simulation, use the Simulator class instead; also consider the get_env() helper rather than instantiating this class directly.

Environment description
This class models an AEC environment in which each farmer will actively maximise their own rewards.
To better model reality, a farm operator is not omniscient but only has access to:

- lice aggregation
- fish population (per cage)
- which treatment(s) are currently being performed
- whether the organisation has asked the operator to treat, e.g. because another farmer is treating as well
The action space is the following:

- Nothing
- Fallow (game over, until production cycles are implemented)
- Apply 1 out of N treatments

Typically, a treatment remains in use for a few days (or months), and a repeated treatment command issued during that period will be silently ignored.
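The mapping from Discrete action indices to these commands is not specified here; as a sketch, under the assumption that index 0 means "nothing", index 1 means "fallow", and the remaining indices select one of N treatments, a decoder could look like:

```python
N_TREATMENTS = 3  # assumed number of available treatments

def describe_action(action: int) -> str:
    """Decode a Discrete action index (index layout is an assumption)."""
    if action == 0:
        return "nothing"
    if action == 1:
        return "fallow"
    if 2 <= action < 2 + N_TREATMENTS:
        return f"apply treatment #{action - 2}"
    raise ValueError(f"action {action} out of range")
```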
- Parameters
cfg – the config to use
- action_space(agent)
Takes in agent and returns the action space for that agent.
MUST return the same value for the same agent name
Default implementation is to return the action_spaces dict
- metadata = {'name': 'slim_v0', 'render.modes': ['human']}
- property no_observation
- observation_space(agent)
Takes in agent and returns the observation space for that agent.
MUST return the same value for the same agent name
Default implementation is to return the observation_spaces dict
- observe(agent)
Returns the observation an agent currently can make. last() calls this function.
- render(mode='human')
Displays a rendered frame from the environment, if supported. Alternate render modes in the default environments are ‘rgb_array’ which returns a numpy array and is supported by all environments outside of classic, and ‘ansi’ which returns the strings printed (specific to classic environments).
- reset()
Resets the environment to a starting state.
- step(action: gym.spaces.discrete.Discrete)
Accepts and executes the action of the currently selected agent, then advances the environment to the next agent; the observation, reward, done flag and info for each agent are retrieved via last().
- stop()
- slim.simulation.simulator.get_env(cfg: Config) pettingzoo.utils.wrappers.order_enforcing.OrderEnforcingWrapper
Generate a
SimulatorPZEnv
wrapped inside a PettingZoo wrapper. Note that nesting wrappers may make accessing attributes more difficult.

- Parameters
cfg – the config to use
- Returns
the wrapped environment
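The wrapped environment is driven with the usual PettingZoo AEC loop. The following self-contained sketch substitutes a stub for the real wrapped SimulatorPZEnv (so it runs without slim installed); only the loop shape, agent_iter()/last()/step(), mirrors the real API:

```python
class _StubAECEnv:
    """Tiny stand-in for the wrapped SimulatorPZEnv (illustration only)."""

    def __init__(self, agents, n_days):
        self.agents = list(agents)
        self.n_days = n_days
        self.rewards = {a: 0.0 for a in self.agents}
        self._ticks = 0

    def reset(self):
        self._ticks = 0
        self.rewards = {a: 0.0 for a in self.agents}

    def agent_iter(self):
        # One turn per agent per simulated day, like PettingZoo's AEC loop.
        total = self.n_days * len(self.agents)
        while self._ticks < total:
            yield self.agents[self._ticks % len(self.agents)]

    def last(self):
        # Observation keys mirror what the docs say an operator can see.
        obs = {"aggregation": 0.0, "fish_population": {}, "current_treatments": []}
        return obs, 0.0, False, {}

    def step(self, action):
        self._ticks += 1


env = _StubAECEnv(["farm_0", "farm_1"], n_days=3)
env.reset()
for agent in env.agent_iter():
    obs, reward, done, info = env.last()
    action = 0  # index 0 assumed to mean "do nothing"
    env.step(action)
```

With the real environment, the only change would be replacing the stub with `get_env(cfg)` and choosing actions from each agent's action_space.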