Long Chen
Department of Computer Science and Engineering (CSE)
School of Engineering (SENG)
The Hong Kong University of Science and Technology (HKUST)
Email: longchen AT ust.hk
Office: Room 3508 (via Lift 25 & 26), Academic Building,
HKUST, Clear Water Bay, Kowloon, Hong Kong
Dr. Long CHEN (Chinese: 陈隆) has been an assistant professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST) since 2023, where he leads the research group LONG Group. Before joining HKUST, he was a postdoctoral research scientist at the DVMM Lab, Columbia University, working with Prof. Shih-Fu Chang (2021 - 2023), and a senior research scientist at Tencent AI Lab (2020 - 2021). He obtained his Ph.D. degree in Computer Science from Zhejiang University under the supervision of Prof. Jun Xiao (2015 - 2020). During his Ph.D. studies, he also worked closely with Prof. Hanwang Zhang from Nanyang Technological University (NTU) and Prof. Tat-Seng Chua from the National University of Singapore (NUS). He obtained his B.Eng. degree in Electronic Information Engineering from Dalian University of Technology (2011 - 2015).
Research Group: LONG Group @ HKUST CSE
1. Based on the current funding situation, we have only very limited openings for postdocs, research assistants, and visiting students. (Please highlight if you have other funding sources or support.)
2. For Ph.D. and M.Phil. positions, we have openings all year round.
3. To further increase diversity, Ph.D./M.Phil. applicants from overseas and Hong Kong are strongly encouraged to apply.
Recent Teaching
- 2025 Spring: COMP6411C: Advanced Topics in Multimodal Machine Learning
- 2024 Fall: COMP4901Z: Reinforcement Learning
- 2024 Spring: COMP6411C: Advanced Topics in Multimodal Machine Learning
Research Interests
His primary research interests are Computer Vision, Machine Learning, and Multimedia. Specifically, he aims to build efficient vision systems that can understand complex visual scenes as humans do. By “human-like”, we mean that the vision systems should be equipped with three types of abilities:
1) Explainable: The model should rely on the right explicit evidence when making decisions, i.e., be right for the right reasons.
2) Robust: The model should remain robust in situations with only low-quality training data (e.g., training samples that are biased, noisy, or limited).
3) Universal: The model design should be relatively universal, i.e., effective across a variety of tasks.
Meanwhile, with the rapid development of pretrained models, such as Large Language Models (LLMs) and Stable Diffusion, we are also very interested in several relevant cutting-edge directions:
4) Building more explainable, robust, and universal vision models with the help of pretrained models (LLMs, diffusion models).
5) Designing more efficient and stronger multimodal LLMs.
6) Investigating the inherent weaknesses of existing LLMs and diffusion models.
News
- Nov 2024: Our research group had its 4th group outing: hiking MacLehose Trail (Section 2), again!
- Sep 2024: I was ranked among the World’s Top 2% Most-cited Scientists (for the single year 2023) by Stanford University.
- Sep 2024: I will serve as an Area Chair for CVPR 2025.
- Aug 2024: I will serve as an Area Chair for ICLR 2025.
- Jul 2024: Two students have received HKUST RedBird PhD Awards. Congrats to Chaolei and Jiazhen!
- Jun 2024: I will serve as a Senior PC for AAAI 2025.
- May 2024: I will serve as an Area Chair for NeurIPS 2024 and BMVC 2024.
- Apr 2024: We will organize the 2nd Workshop on Deep Multimodal Generation and Retrieval at ACM Multimedia 2024.
- Jan 2024: I will serve as an Area Chair for ECCV 2024.
- Jan 2024: Our research group had its 2nd group outing: hiking in Shek-O and Cape D'Aguilar.
- Nov 2023: I will serve as an Area Chair for ACM Multimedia 2024.
- Oct 2023: Our research group had its 1st group outing: hiking MacLehose Trail (Section 2).
Recent Publications
- arXiv preprint (arXiv), Codes
- arXiv preprint (arXiv)
- arXiv preprint (arXiv)
- arXiv preprint (arXiv)
- arXiv preprint (arXiv)
- arXiv preprint (arXiv), Codes
- arXiv preprint (arXiv)
- arXiv preprint (arXiv)
- Neural Information Processing Systems (NeurIPS), 2024
- Neural Information Processing Systems (NeurIPS), 2024, Codes
- [New!!] Optimizing Language Models with Fair and Stable Reward Composition in Reinforcement Learning, Empirical Methods in Natural Language Processing (EMNLP), 2024
- European Conference on Computer Vision (ECCV), 2024, Website
- European Conference on Computer Vision (ECCV), 2024
- Computer Vision and Pattern Recognition (CVPR), 2024, Codes
- International Conference on Learning Representations (ICLR), 2024, Codes
- IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2024, Codes
- IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2024, Codes, extension of CVPR’22 work
- IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2024, Codes, extension of ICLR’22 work
- International Journal of Computer Vision (IJCV), 2024
- Empirical Methods in Natural Language Processing (EMNLP Findings), 2023, Codes
- Neural Information Processing Systems (NeurIPS), 2023, Codes
- International Conference on Computer Vision (ICCV), 2023, Codes
- Annual Meeting of the Association for Computational Linguistics (ACL Findings), 2023, Codes
- International Conference on Learning Representations (ICLR), 2023, Codes
- International Conference on Learning Representations (ICLR), 2023, Codes
- International Conference on Learning Representations (ICLR), 2023, Codes
- IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2023, extension of CVPR’20 work