Research talk: Breaking the deadly triad with a target network
Speaker: Shangtong Zhang, PhD Student, University of Oxford
The deadly triad refers to the instability of an off-policy reinforcement learning (RL) algorithm that employs function approximation and bootstrapping simultaneously, and it is a major challenge in off-policy RL. Join PhD student Shangtong Zhang, from the WhiRL group at the University of Oxford, to learn how the target network can be used as a tool for theoretically breaking the deadly triad. Together, you’ll explore a theoretical account of the conventional wisdom that a target network stabilizes training; a novel target network update rule that augments the commonly used Polyak-averaging style update with two projections; and how a target network can be used in linear off-policy RL algorithms, in both prediction and control settings, and in both discounted and average-reward Markov decision processes.
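As a rough illustration of the kind of update rule described above, the sketch below shows a Polyak-averaging style target-network update with a projection in a linear prediction setting. This is a minimal sketch, not the exact rule from the talk: the step sizes `alpha` and `tau`, the discount `gamma`, the ball radius `radius`, and the choice of an L2-ball projection are all assumptions made for the example.

```python
# Minimal sketch (not the speaker's exact algorithm): linear TD(0) that
# bootstraps from a target weight vector, with a Polyak-averaging target
# update augmented by a projection onto a norm ball. All hyperparameters
# and the L2-ball projection are illustrative assumptions.
import numpy as np

def project_to_ball(w, radius):
    """Project w onto the L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def linear_td_with_target(phi, r, phi_next, w, w_target,
                          alpha=0.1, tau=0.01, gamma=0.99, radius=10.0):
    """One TD(0) step on linear weights w, bootstrapping from w_target.

    phi, phi_next : feature vectors of the current and next state
    r             : observed reward
    w             : online weights; w_target : target-network weights
    """
    # Bootstrap the TD target from the target weights, not the online ones.
    td_error = r + gamma * phi_next @ w_target - phi @ w
    w = w + alpha * td_error * phi
    # Keep the online weights bounded (illustrative projection).
    w = project_to_ball(w, radius)
    # Polyak-averaging update of the target, followed by a projection.
    w_target = project_to_ball((1.0 - tau) * w_target + tau * w, radius)
    return w, w_target
```

Here the target weights `w_target` supply the bootstrap target, which is the conventional stabilizing role of a target network that the talk analyzes theoretically.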
Learn more about the 2021 Microsoft Research Summit: