
Long Chen

Dr. Long CHEN (Chinese name: 陈隆) is a tenure-track assistant professor in the CSE Department at HKUST (2023 - present). Before joining HKUST, he was a postdoctoral research scientist at the DVMM Lab, Columbia University, working with Prof. Shih-Fu Chang (2021 - 2023), and a senior research scientist at Tencent AI Lab, working with Dr. Wei Liu (2020 - 2021). He obtained his Ph.D. degree in Computer Science from the DCD Lab, Zhejiang University, where he was advised by Prof. Jun Xiao (2015 - 2020). During his Ph.D. studies, he also worked closely with Prof. Hanwang Zhang from Nanyang Technological University (NTU), Prof. Shih-Fu Chang from Columbia University, and Prof. Tat-Seng Chua from the National University of Singapore (NUS). He obtained his B.Eng. degree in Electronic Information Engineering from Dalian University of Technology (2011 - 2015).

Research Group: LONG Group @ HKUST CS

1. Based on the current funding situation, we have only a few openings for postdocs, research assistants, and visiting students. (Please also highlight if you have other funding sources or support.)
2. As for Ph.D. and M.Phil. positions, we have openings all year round. (Deadline for the 2024 Fall intake: June 2024!)
3. To further increase diversity, Ph.D./M.Phil. applications from overseas and Hong Kong are strongly encouraged.


Recent Teaching

2024 Spring: COMP6411C: Advanced Topics in Multimodal Machine Learning


Research Interest

His primary research interests are Computer Vision, Machine Learning, and Multimedia. Specifically, he aims to build efficient vision systems that can understand complex visual scenes as humans do. By “human-like”, we mean that vision systems should be equipped with three types of abilities:

1) Explainable: The model should rely on the right explicit evidence when making decisions, i.e., be right for the right reasons.

2) Robust: The model should remain robust in situations with only low-quality training data (e.g., when training samples are biased, noisy, or limited).

3) Universal: The model design should be relatively universal, i.e., effective across a wide range of tasks.

Meanwhile, with the rapid development of other AI areas, such as the emergence of Large Language Models (LLMs) in the Natural Language Processing community, we are also very interested in several relevant cutting-edge directions:

4) Building more explainable, robust, and universal vision models with the help of LLMs.

5) Designing more efficient and stronger multimodal LLMs.

6) Analyzing the inherent weaknesses of existing LLMs.


News

Jan, 2024 I will serve as an Area Chair for ECCV 2024.
Jan, 2024 Our research group had its 2nd group outing: hiking in Shek-O and Cape D'Aguilar.
Nov, 2023 I will serve as an Area Chair for ACM Multimedia 2024.
Oct, 2023 Our research group had its 1st group outing: hiking the MacLehose Trail (Section 2).
Oct, 2023 I was ranked among the World’s Top 2% Most-cited Scientists (for the single year 2022) by Stanford University.
Jun, 2023 I will serve as an Area Chair for CVPR 2024 and a Senior PC for AAAI 2024.
Mar, 2023 I will serve as an Area Chair for NeurIPS 2023 and an Area Chair for BMVC 2023.
Nov, 2022 I will serve as a Senior PC for IJCAI 2023 and an Action Editor for ACL Rolling Review (ARR).
Oct, 2022 I will serve as an Area Chair for CVPR 2023.
Jul, 2022 I will serve as an Area Chair for BMVC 2022 and a Senior PC for AAAI 2023.

Recent Publications

  1. arXiv
    Zhen Wang, Jun Xiao, Tao Chen, and Long Chen
    arXiv preprint (arXiv), arXiv
  2. arXiv
    Lin Li, Guikun Chen, Jun Xiao, and Long Chen
    arXiv preprint (arXiv), arXiv
  3. arXiv
    Chaoyang Zhu and Long Chen
    arXiv preprint (arXiv), arXiv
  4. CVPR
    Haiwen Diao, Bo Wan, Ying Zhang, Xu Jia, Huchuan Lu, and Long Chen
    Computer Vision and Pattern Recognition (CVPR), 2024, Codes
  5. ICLR
    Yulei Niu, Wenliang Guo, Long Chen, Xudong Lin, and Shih-Fu Chang
    International Conference on Learning Representations (ICLR), 2024
  6. TPAMI
    Wenxiao Wang, Wei Chen, Qibo Qiu, Long Chen, Boxi Wu, Binbin Lin, Xiaofei He, and Wei Liu
    IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2024, extension of ICLR’22 work
  7. EMNLP Findings
    Haoxuan You, Rui Sun, Zhecan Wang, Long Chen, Gengyu Wang, Hammad A. Ayyubi, Kai-Wei Chang, and Shih-Fu Chang
    Empirical Methods in Natural Language Processing (EMNLP Findings), 2023, Codes
  8. NeurIPS
    Lin Li, Jun Xiao, Guikun Chen, Jian Shao, Yueting Zhuang, and Long Chen
    Neural Information Processing Systems (NeurIPS), 2023, Codes
  9. ICCV
    Lin Li, Guikun Chen, Jun Xiao, Yi Yang, Chunping Wang, and Long Chen
    International Conference on Computer Vision (ICCV), 2023, Codes
  10. ACL Findings
    Mingyang Zhou, Yi R. Fung, Long Chen, Christopher Thomas, Heng Ji, and Shih-Fu Chang
    Annual Meeting of the Association for Computational Linguistics (ACL Findings), 2023, Codes
  11. ICLR
    Siqi Chen, Jun Xiao, and Long Chen
    International Conference on Learning Representations (ICLR), 2023, Codes
  12. ICLR
    Kaifeng Gao, Long Chen, Hanwang Zhang, Jun Xiao, and Qianru Sun
    International Conference on Learning Representations (ICLR), 2023, Codes
  13. ICLR
    Yuncong Yang, Jiawei Ma, Shiyuan Huang, Long Chen, Xudong Lin, Guangxing Han, and Shih-Fu Chang
    International Conference on Learning Representations (ICLR), 2023, Codes
  14. TPAMI
    Long Chen, Yuhang Zheng, Yulei Niu, Hanwang Zhang, and Jun Xiao
    IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2023, extension of CVPR’20 work