COMP4901Z: Reinforcement Learning
Fall 2025, Dept. of Computer Science and Engineering (CSE), The Hong Kong University of Science and Technology (HKUST)
Instructor: Long Chen
Class Time & Location: Monday & Wednesday 9:00AM - 10:20AM (RM1527, Lift 22)
Email: longchen@ust.hk
(For course-related queries, please start the email subject with [COMP4901Z].)
Office Hours: There are no fixed office hours; you are welcome to ask questions directly after each lecture.
Teaching Assistants: Yanghao Wang (ywangtg@connect.ust.hk) and Wei Chen (wchendb@connect.ust.hk)
If you are enrolled in the course and miss a class, you can email the TAs directly to request the recorded videos.
Course Description: Reinforcement learning (RL) is a computational learning approach in which an agent tries to maximize the total reward it receives while interacting with a complex and uncertain environment. RL not only shows strong performance in many games (such as Go), but has also become an essential technique in many of today's real-world applications (such as LLM training and embodied AI). This course teaches both the fundamentals and advanced topics of RL. The content includes basic RL elements (MDPs, dynamic programming, policy iteration), value-based approaches (DQN), policy-based approaches (policy gradient, actor-critic), model-based RL, and RL techniques in today's computer vision and AI applications. To reinforce understanding, the course also includes some Python/PyTorch implementations.
Pre-requisite:
Math: You should have a solid background in Linear Algebra (e.g., matrix inversion) and Probability (e.g., expectation, sampling).
Machine Learning: Basic machine learning knowledge (e.g., gradient backpropagation) and deep learning knowledge (e.g., neural networks), as needed.
Programming: Python and PyTorch (required for the assignment)
Grading scheme:
- In-class Quiz: 20%
- Assignment: 20%
- Midterm: 20%
- Final Exam: 40%
Reference books/materials:
Richard S. Sutton, Andrew G. Barto. Reinforcement Learning: An Introduction. Second Edition. [pdf]
Kevin P. Murphy. Reinforcement Learning: An Overview. [pdf]
Alekh Agarwal, Nan Jiang, Sham M. Kakade, Wen Sun. Reinforcement Learning: Theory and Algorithms. [pdf]
Csaba Szepesvari. Algorithms for Reinforcement Learning. [pdf]
Dimitri P. Bertsekas. Reinforcement Learning and Optimal Control. [pdf]
Content Coverage
- Markov Decision Processes
- Dynamic Programming
- Monte Carlo and Temporal Difference Learning
- Q-Learning
- DQN and advanced techniques
- Policy Gradient
- Actor Critic
- Advanced Policy Gradient
- Continuous Controls
- Imitation Learning
- Model-based RL
Lecture Syllabus / Schedule
Course overview
Required prerequisites
Basic RL concepts
Comparisons with other ML methods
(Sep 1 & 3)
Exploration vs. Exploitation
Greedy vs. \(\epsilon\)-greedy
Upper Confidence Bound (UCB)
Bayesian Bandits
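The exploration strategies above can be illustrated with a short sketch. The following is a minimal epsilon-greedy multi-armed bandit with incremental sample-average value estimates; the function name and the Gaussian reward model are illustrative choices, not taken from the course materials:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=5000, seed=0):
    """Epsilon-greedy on a Gaussian bandit: with probability epsilon pick a
    random arm (explore), otherwise pick the arm with the highest value
    estimate (exploit). Estimates are incremental sample averages."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # sample-average value estimates
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = true_means[arm] + rng.gauss(0.0, 1.0)         # noisy reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean
    return values, counts
```

With enough steps, the best arm accumulates the most pulls and its value estimate concentrates around its true mean; setting epsilon to 0 recovers the purely greedy strategy, which can lock onto a suboptimal arm.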
Markov process, Markov reward process
Markov decision process
Optimal policies and value functions
Policy evaluation
Policy improvement
Policy iteration vs. Value iteration
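Value iteration, listed above, repeatedly applies the Bellman optimality backup until the value function stops changing. A minimal sketch (the `P`/`R` table representation is an assumed convention for this illustration):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s][a] is a list of (prob, next_state) pairs; R[s][a] is the expected
    immediate reward. Returns the optimal values and a greedy policy."""
    n_states = len(P)
    V = np.zeros(n_states)
    while True:
        # One Bellman optimality backup: Q(s,a) = R(s,a) + gamma * E[V(s')]
        Q = np.array([[R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                       for a in range(len(P[s]))] for s in range(n_states)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

Policy iteration instead alternates full policy evaluation with greedy policy improvement; both converge to the same optimal values on a finite MDP.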
Monte-Carlo Learning
Temporal Difference Learning
On-policy Monte-Carlo Control
Off-policy Monte-Carlo Control
SARSA
Q-Learning
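Tabular Q-learning, the last topic above, can be sketched in a few lines. The environment callback `step` below is a hypothetical interface assumed for this illustration (episodes start in state 0), not a course-provided API:

```python
import random

def q_learning(n_states, n_actions, step, alpha=0.1, gamma=0.9,
               epsilon=0.1, episodes=500, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy.
    `step(s, a, rng) -> (next_state, reward, done)` is a hypothetical
    environment callback; episodes always start in state 0."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)                   # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a, rng)
            target = r if done else r + gamma * max(Q[s2])     # off-policy max backup
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

The `max` in the target is what makes this off-policy: it backs up the greedy action's value regardless of which action the epsilon-greedy behavior policy actually takes next. SARSA replaces that `max(Q[s2])` with the value of the action actually chosen in `s2`.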
Classes of function approximation
Gradient-based algorithm
Convergence and divergence
Deep Q-Learning
Human-level Control through Deep Reinforcement Learning. Nature'15.
Experience Replay
Target Network
Double DQN
Dueling Network
Noisy Network
Prioritized Experience Replay. ICLR, 2016.
Dueling Network Architectures for Deep Reinforcement Learning. ICML'16.
A Distributional Perspective on Reinforcement Learning. ICML'17.
Noisy Networks for Exploration. ICLR'18.
Rainbow: Combining Improvements in Deep Reinforcement Learning. AAAI'18.
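Of the DQN components above, experience replay is the simplest to sketch: a bounded FIFO buffer of transitions from which training batches are sampled uniformly (the class below is a toy illustration, not code from any of the listed papers):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience replay, as used in DQN: a bounded FIFO
    buffer of transitions sampled uniformly for each training batch."""
    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly from old transitions breaks the temporal correlation of consecutive experience; prioritized replay (ICLR 2016 above) instead samples transitions in proportion to their TD error.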
REINFORCE
Policy gradient with baseline
Off-Policy policy gradient
Actor critic
Advantage actor critic (A2C)
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. ICML'18.
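REINFORCE with a baseline, listed above, can be illustrated on a bandit problem, where the policy is a softmax over arms and the baseline is a running average of rewards (the setup and reward noise are illustrative assumptions):

```python
import math
import random

def reinforce_bandit(true_means, lr=0.05, episodes=3000, seed=0):
    """REINFORCE with a softmax policy over bandit arms and a running-mean
    reward baseline. Uses grad log pi(a)_i = 1[i == a] - pi(i)."""
    rng = random.Random(seed)
    n = len(true_means)
    theta = [0.0] * n  # policy logits
    baseline = 0.0
    for t in range(1, episodes + 1):
        # Softmax policy (shifted by max for numerical stability)
        m = max(theta)
        exps = [math.exp(th - m) for th in theta]
        z = sum(exps)
        probs = [e / z for e in exps]
        # Sample an arm from the policy
        a, u, c = n - 1, rng.random(), 0.0
        for i, p in enumerate(probs):
            c += p
            if u <= c:
                a = i
                break
        reward = true_means[a] + rng.gauss(0.0, 0.1)
        baseline += (reward - baseline) / t       # running-average baseline
        adv = reward - baseline                   # advantage estimate
        for i in range(n):
            grad = (1.0 if i == a else 0.0) - probs[i]  # grad of log pi(a)
            theta[i] += lr * adv * grad
    return theta
```

The baseline does not change the expected gradient but reduces its variance; actor-critic methods replace this running-mean baseline with a learned value function.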
Natural policy gradient
Trust region policy optimization (TRPO)
Natural Actor-Critic. ECML'05.
Trust Region Policy Optimization. ICML'15.
Proximal Policy Optimization Algorithms. arXiv'17.
Deterministic policy gradient
TD3
Continuous Control with Deep Reinforcement Learning. ICLR'16.
Addressing Function Approximation Error in Actor-Critic Methods. ICML'18.
Behavior cloning
Dataset Aggregation (DAgger)
Inverse RL
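Behavior cloning, the first topic above, treats imitation as supervised learning on expert (state, action) pairs. A toy stand-in with a linear policy fit by ridge regression (the linear form and regularizer are illustrative assumptions; the course uses neural policies):

```python
import numpy as np

def behavior_cloning(expert_states, expert_actions, l2=1e-3):
    """Behavior cloning as ridge regression: fit a linear policy a = s @ W
    to expert (state, action) pairs, with a small L2 penalty."""
    X = np.asarray(expert_states, dtype=float)
    Y = np.asarray(expert_actions, dtype=float)
    # Closed-form ridge solution: W = (X^T X + l2 I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)
    return W
```

The weakness DAgger addresses is visible even here: the fit is only reliable on states the expert visited, and small errors compound once the learned policy drifts off that distribution.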
Open-loop planning
Monte Carlo Tree Search (MCTS)
Dyna & Dyna-Q
Model-based Policy Learning
Model-free RL with a Model
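Dyna-Q, listed above, augments Q-learning with a learned model: after each real transition, it replays several simulated transitions from remembered experience. A minimal sketch assuming a deterministic world (the `step` callback is the same hypothetical interface as in the Q-learning sketch):

```python
import random

def dyna_q(n_states, n_actions, step, planning_steps=10, alpha=0.1,
           gamma=0.9, epsilon=0.1, episodes=100, seed=0):
    """Dyna-Q sketch: Q-learning plus `planning_steps` simulated backups from
    a learned deterministic model after every real transition.
    `step(s, a, rng) -> (next_state, reward, done)` is a hypothetical callback."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (s, a) -> (next_state, reward, done); deterministic world assumed

    def update(s, a, s2, r, done):
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a, rng)
            update(s, a, s2, r, done)          # direct RL from real experience
            model[(s, a)] = (s2, r, done)      # model learning
            for _ in range(planning_steps):    # planning from remembered (s, a)
                ps, pa = rng.choice(list(model))
                update(ps, pa, *model[(ps, pa)])
            s = s2
    return Q
```

The planning loop reuses each real transition many times, so Dyna-Q typically needs far fewer environment interactions than plain Q-learning to reach the same values.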
Acknowledgements
This course was inspired by and/or uses resources from the following courses:
Reinforcement Learning by David Silver, DeepMind, 2015.
CS285: Deep Reinforcement Learning by Sergey Levine, UC Berkeley, 2023.
CS234: Reinforcement Learning by Emma Brunskill, Stanford University, 2024.
10-403: Deep Reinforcement Learning by Katerina Fragkiadaki, Carnegie Mellon University, 2024.
Special Topics in AI: Foundations of Reinforcement Learning by Yuejie Chi, Carnegie Mellon University, 2023.
CS 6789: Foundations of Reinforcement Learning by Wen Sun and Sham Kakade, Cornell University.
CS224R: Deep Reinforcement Learning by Chelsea Finn, Stanford University, 2023.
DeepMind x UCL RL Lecture Series by Hado van Hasselt, DeepMind, 2021.