Junchen Fu
Research title: Efficiently Adapting Multimodal Foundation Models for Recommendation
Publications
2025
Fu, Junchen, Ge, Xuri, Xin, Xin, Yu, Haitao, Feng, Yue, Karatzoglou, Alexandros, Arapakis, Ioannis and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2025)
The 1st EReL@MIR Workshop on Efficient Representation Learning for Multimodal Information Retrieval.
In: WWW '25: The ACM Web Conference 2025, Sydney, Australia, 28 Apr - 02 May 2025,
pp. 2149-2152.
ISBN 9798400713316
(doi: 10.1145/3701716.3717559)
He, Yaoqin, Fu, Junchen, Zheng, Kaiwen, Xu, Songpei, Chen, Fuhai, Li, Jie, Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759 and Ge, Xuri
(2025)
Double-Filter: Efficient Fine-tuning of Pre-trained Vision-Language Models via Patch&Layer Filtering.
In: 42nd International Conference on Machine Learning (ICML 2025), Vancouver, Canada, 13-19 July 2025,
(Accepted for Publication)
Zhuang, Ziyi, Du, Hanwen, Han, Hui, Li, Youhua, Fu, Junchen, Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759 and Ni, Yongxin
(2025)
Bridging the Gap: Teacher-Assisted Wasserstein Knowledge Distillation for Efficient Multi-Modal Recommendation.
In: WWW '25: The ACM Web Conference 2025, Sydney, Australia, 28 Apr - 02 May 2025,
pp. 2464-2475.
ISBN 9798400712746
(doi: 10.1145/3696410.3714852)
Zheng, Kaiwen, Ge, Xuri ORCID: https://orcid.org/0000-0002-3925-4951, Fu, Junchen, Peng, Jun and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2025)
Multimodal Representation Learning Techniques for Comprehensive Facial State Analysis.
In: 2025 IEEE International Conference on Multimedia and Expo (ICME), Nantes, France, 30 Jun - 04 Jul 2025,
(Accepted for Publication)
Ge, Xuri, Li, Linqing, Xu, Songpei, Zheng, Kaiwen, He, Yaoqin, Fu, Junchen and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2025)
The DenseCap-Guided Attention Network For Image-Text Matching.
In: WWW '25: The ACM Web Conference 2025, Sydney, Australia, 28 Apr - 02 May 2025,
(Accepted for Publication)
Liu, Zhiyu, Fu, Junchen, Zheng, Kaiwen and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2025)
Exploring Multimodal Pre-trained Models for Speech Emotion Recognition.
In: WWW '25: The ACM Web Conference 2025, Sydney, Australia, 28 Apr - 02 May 2025,
(Accepted for Publication)
2024
Ge, Xuri ORCID: https://orcid.org/0000-0002-3925-4951, Fu, Junchen, Chen, Fuhai, An, Shan, Sebe, Nicu and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2024)
Towards End-to-End Explainable Facial Action Unit Recognition via Vision-Language Joint Learning.
In: 32nd ACM Multimedia Conference (MM2024), Melbourne, Australia, 28 Oct - 01 Nov 2024,
pp. 8189-8198.
ISBN 9798400706868
(doi: 10.1145/3664647.3681443)
Fu, Junchen, Ge, Xuri ORCID: https://orcid.org/0000-0002-3925-4951, Xin, Xin, Karatzoglou, Alexandros, Arapakis, Ioannis, Wang, Jie and Jose, Joemon M. ORCID: https://orcid.org/0000-0001-9228-1759
(2024)
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT.
In: 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), Washington D.C., USA, 14-18 July 2024,
(Accepted for Publication)