Soft Actor Critic is Easy in PyTorch | Complete Deep Reinforcement Learning Tutorial
The soft actor critic algorithm is an off-policy actor-critic method for reinforcement learning problems with continuous action spaces. It uses a maximum-entropy framework: rather than maximizing expected reward alone, the agent also seeks to maximize the entropy of its policy, which encourages exploration and makes learning more robust. We’re going to write our very own SAC agent in PyTorch, starting from scratch.
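For reference, the objective SAC maximizes is the expected return augmented with a policy-entropy bonus, where the temperature α controls the trade-off between reward and entropy:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```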
We’re going to need to implement several classes for this project (minimal sketches of each follow the list):
A replay buffer to keep track of the states the agent encountered, the actions it took, and the rewards it received along the way, together with the resulting states and terminal flags.
A critic network that tells the agent how valuable it thinks the chosen actions were.
A value network that informs the agent how valuable each state is, independent of any action.
An actor network that learns the policy itself: which actions to take in a given state.
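Here is a minimal sketch of the replay buffer using plain NumPy arrays; the class and method names (ReplayBuffer, store_transition, sample_buffer) are my own choices, not fixed by the algorithm:

```python
import numpy as np

class ReplayBuffer:
    """Fixed-size circular store of (state, action, reward, next state, done) transitions."""
    def __init__(self, max_size, input_shape, n_actions):
        self.mem_size = max_size
        self.mem_cntr = 0
        self.state_memory = np.zeros((max_size, *input_shape), dtype=np.float32)
        self.new_state_memory = np.zeros((max_size, *input_shape), dtype=np.float32)
        self.action_memory = np.zeros((max_size, n_actions), dtype=np.float32)
        self.reward_memory = np.zeros(max_size, dtype=np.float32)
        self.terminal_memory = np.zeros(max_size, dtype=np.bool_)

    def store_transition(self, state, action, reward, state_, done):
        index = self.mem_cntr % self.mem_size  # overwrite the oldest entry once full
        self.state_memory[index] = state
        self.action_memory[index] = action
        self.reward_memory[index] = reward
        self.new_state_memory[index] = state_
        self.terminal_memory[index] = done
        self.mem_cntr += 1

    def sample_buffer(self, batch_size):
        max_mem = min(self.mem_cntr, self.mem_size)  # only sample filled slots
        batch = np.random.choice(max_mem, batch_size)
        return (self.state_memory[batch], self.action_memory[batch],
                self.reward_memory[batch], self.new_state_memory[batch],
                self.terminal_memory[batch])
```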
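And a sketch of the critic and value networks in PyTorch; the layer sizes and learning rate are plausible defaults rather than anything mandated, and the actor network follows the same pattern with a Gaussian output head for sampling actions:

```python
import torch as T
import torch.nn as nn
import torch.nn.functional as F

class CriticNetwork(nn.Module):
    """Q(s, a): scores a state-action pair; the action is concatenated with the state."""
    def __init__(self, input_dims, n_actions, fc1_dims=256, fc2_dims=256, lr=3e-4):
        super().__init__()
        self.fc1 = nn.Linear(input_dims[0] + n_actions, fc1_dims)
        self.fc2 = nn.Linear(fc1_dims, fc2_dims)
        self.q = nn.Linear(fc2_dims, 1)
        self.optimizer = T.optim.Adam(self.parameters(), lr=lr)

    def forward(self, state, action):
        x = F.relu(self.fc1(T.cat([state, action], dim=1)))
        x = F.relu(self.fc2(x))
        return self.q(x)

class ValueNetwork(nn.Module):
    """V(s): scores a state, independent of any action."""
    def __init__(self, input_dims, fc1_dims=256, fc2_dims=256, lr=3e-4):
        super().__init__()
        self.fc1 = nn.Linear(input_dims[0], fc1_dims)
        self.fc2 = nn.Linear(fc1_dims, fc2_dims)
        self.v = nn.Linear(fc2_dims, 1)
        self.optimizer = T.optim.Adam(self.parameters(), lr=lr)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.v(x)
```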
We will also make use of ideas from double Q-learning, such as taking the minimum of the estimates from two critics, in the update rules for the value and actor networks.
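As a sketch of how that minimum enters the losses (the function and argument names here are hypothetical; alpha is the entropy temperature, and log_probs are the log-probabilities of actions sampled from the current policy):

```python
import torch as T

def sac_losses(q1_net, q2_net, states, actions, log_probs, alpha=0.2):
    # Pessimistic value estimate: elementwise minimum over the two critics,
    # which curbs the overestimation bias a single critic tends to develop.
    critic_value = T.min(q1_net(states, actions),
                         q2_net(states, actions)).view(-1)

    # Target for the value network: V(s) should match min(Q1, Q2) - alpha * log pi(a|s)
    value_target = critic_value - alpha * log_probs.view(-1)

    # Actor loss: move the policy toward actions the pessimistic critic rates highly
    actor_loss = (alpha * log_probs.view(-1) - critic_value).mean()
    return value_target, actor_loss
```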
We will test our agent in the Inverted Pendulum environment from the PyBullet package, an open-source 3D physics and rendering engine.
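A quick smoke test of the environment, assuming gym and pybullet are installed (importing pybullet_envs registers the Bullet environments with gym; this uses the older gym API, where reset returns only the observation and step returns a 4-tuple):

```python
import gym
import pybullet_envs  # side effect: registers InvertedPendulumBulletEnv-v0 and friends

env = gym.make('InvertedPendulumBulletEnv-v0')
observation = env.reset()
done, score = False, 0.0
while not done:
    action = env.action_space.sample()  # random actions, just to verify the setup
    observation, reward, done, info = env.step(action)
    score += reward
print(f'random-policy episode score: {score:.1f}')
```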
Code