I am a fourth-year Ph.D. student, fortunate to be advised by the brilliant and kind Prof. Cairong Zhao. I am passionate about computer vision research on the following topics:
Currently, my research builds on the emergent abilities of foundation models, such as LVLMs for occluded ReID and LLMs for VIS. I am always grateful for advice from senior researchers with a deep understanding of these topics. I am also happy to collaborate with anyone interested in related problems and to mentor junior students (undergraduate or master's).
You can download my CV here.
I am actively seeking a Research Scientist or Postdoctoral Research Fellow position. Please feel free to reach out if you have any suitable openings!
April 23, 2024 : Our paper ‘Reviving Static Charts into Live Charts’ was accepted by IEEE Transactions on Visualization and Computer Graphics (TVCG) (CCF A).
April 21, 2024 : Our paper ‘Hierarchical Recognizing Vector Graphics and A New Chart-based Vector Graphics Dataset’ was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) (CCF A).
April 13, 2024 : Our paper ‘Adaptive Discriminative Regularization for Visual Classification’ was accepted by the International Journal of Computer Vision (IJCV) (CCF A).
Feb 14, 2024 : I have been invited to serve as a reviewer for ECCV 2024 (European Conference on Computer Vision).
Jan 26, 2024 : I arrived in Detroit and started an exciting short-term visiting scholar program.
Dec 12, 2023 : I received the Tongji Excellent Doctoral Scholarship (2023).
Dec 06, 2023 : I have been invited to serve as a reviewer for the 41st International Conference on Machine Learning (ICML 2024).
Nov 19, 2023 : I have been invited to serve as a reviewer for CVPR 2024 (Conference on Computer Vision and Pattern Recognition).
doushuguang52@163.com
Google Scholar
IEEE Student Member
SH021, Shanghai, China.
For real-world vector graphics, YOLaT builds only a flat GNN with vertices as nodes, ignoring the higher-level structure of vector data. We therefore propose YOLaT++, which learns multi-level abstraction features, from primitive shapes to curves and points. On the other hand, because few public datasets focus on vector graphics, data-driven learning cannot exert its full power on this format. We provide a large-scale and challenging dataset for Chart-based Vector Graphics Detection and Chart Understanding, termed VG-DCU, with vector graphics, raster graphics, annotations, and the raw data used to create these vector charts.
Shuguang Dou, Xinyang Jiang, Lu Liu, Lu Ying, Caihua Shan, Yifei Shen, Xuanyi Dong, Yun Wang, Dongsheng Li, Cairong Zhao
IEEE T-PAMI 2024 (CCF A)
Improving discriminative feature learning is central to classification. In this paper, we embrace a real-world data distribution setting in which some classes share semantic overlap due to similar appearances or concepts. Based on this hypothesis, we propose a novel regularization to improve discriminative learning. We first calibrate the estimated highest likelihood of a sample based on its semantically neighboring classes, then encourage the overall likelihood predictions to be deterministic by imposing an adaptive exponential penalty.
Qingsong Zhao*, Yi Wang*, Shuguang Dou, Chen Gong, Yin Wang and Cairong Zhao (*Co-First Authors)
International Journal of Computer Vision 2024 (CCF A)
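As a rough illustration of the idea above (not the paper's actual implementation; the neighbor sets, the penalty form, and the weight `alpha` here are all hypothetical), the regularizer can be sketched as tolerating probability mass on the semantic neighbors of the true class while exponentially penalizing the remaining off-target mass:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_reg_loss(logits, target, neighbors, alpha=0.1):
    """Sketch of an adaptive discriminative regularization:
    cross-entropy plus an exponential penalty on probability mass
    that falls outside the true class and its semantic neighbors."""
    p = softmax(logits)
    ce = -math.log(p[target])                    # standard cross-entropy
    neighbor_mass = sum(p[j] for j in neighbors) # tolerated confusion
    off_mass = 1.0 - p[target] - neighbor_mass   # penalized confusion
    return ce + alpha * (math.exp(off_mass) - 1.0)
```

Declaring class 1 a semantic neighbor of class 0 yields a smaller loss than treating all off-target mass as error, which is the intended calibration effect.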
Existing backdoor attack methods follow an all-to-one or all-to-all attack scenario, in which all target classes in the test set have already been seen in the training set. However, ReID is a much more complex fine-grained open-set recognition problem, where the identities in the test set are not contained in the training set. To address this issue, we propose a novel backdoor attack on deep ReID under a new all-to-unknown scenario, called the Dynamic Triggers Invisible Backdoor Attack (DT-IBA). Instead of learning fixed triggers for target classes from the training set, DT-IBA dynamically generates new triggers for any unknown identity.
Wenli Sun, Xinyang Jiang, Shuguang Dou, Dongsheng Li, Duoqian Miao, Cheng Deng, Cairong Zhao
IEEE T-IFS 2023 (CCF A)
We present EA-HAS-Bench, the first large-scale energy-aware benchmark that enables the study of AutoML methods under better trade-offs between performance and search energy consumption. EA-HAS-Bench provides a large-scale architecture/hyperparameter joint search space, covering diverse configurations related to energy consumption.
Shuguang Dou, Xinyang Jiang, Cairong Zhao, Dongsheng Li
ICLR 2023 Spotlight
We propose a novel Human Co-parsing Guided Alignment (HCGA) framework that alternately trains the human co-parsing network and the ReID network, where the human co-parsing network is trained in a weakly supervised manner to obtain parsing results without any extra annotation.
Shuguang Dou, Cairong Zhao, Xinyang Jiang, Shanshan Zhang, Wei-Shi Zheng, Wangmeng Zuo
IEEE T-IP 2022 (CCF A)
One of the key challenges of X-ray security checks is detecting overlapped items in backpacks or suitcases in X-ray images. Most existing methods improve the robustness of models to the object-overlapping problem by enhancing low-level visual information such as colors and edges. However, this strategy ignores situations in which objects share visual cues with the background or overlap one another. Since these two cases rarely appear in existing datasets, we contribute a novel dataset, the Cutters and Liquid Containers X-ray Dataset (CLCXray), to support related research.
Cairong Zhao*, Liang Zhu*, Shuguang Dou, Weihong Deng, and Liang Wang (*Co-First Authors)
IEEE T-IFS 2022 (CCF A)
To address the occlusion problem, we propose a novel Incremental Generative Occlusion Adversarial Suppression (IGOAS) network.
Cairong Zhao*, Xinbi Lv*, Shuguang Dou*, Shanshan Zhang, Jun Wu, and Liang Wang (*Co-First Authors)
IEEE T-IP 2021 (CCF A)
The connection structure in the convolutional layers of most deep learning-based algorithms used for the classification of hyperspectral images (HSIs) has typically been in the forward direction. In this study, an end-to-end alternately updated spectral–spatial convolutional network (AUSSC) with a recurrent feedback structure is used to learn refined spectral and spatial features for HSI classification.
Wenju Wang, Shuguang Dou^, Sen Wang (^Corresponding Author and First Student Author)
Remote Sensing 2019
To reduce the training time and improve accuracy, in this paper we propose an end-to-end fast dense spectral–spatial convolution (FDSSC) framework for HSI classification.
Wenju Wang, Shuguang Dou^, Zhongmin Jiang and Liujie Sun (^Corresponding Author and First Student Author)
Remote Sensing 2018 (ESI Highly Cited Paper, 300+ citations)
YuanPeng Tu (https://yuanpengtu.github.io/) : The most abstract man in ViLL Lab.
Shuyang Feng : The most thoughtful man in ViLL Lab.
Qingsong Zhao : A stubborn person who knows his own path.
Junyao Gao : A sincere person in ViLL Lab.
Yubin Wang : The most trustworthy person in ViLL Lab.
Wenli Sun : My makeup artist in ViLL Lab.
Yan Li : Colleague at MSRA, a winner and a hard worker.
Longtao Tang : Colleague at MSRA and the person I admire most there.
Learning deep features from 3D point clouds for classification and retrieval (completed in December 2019).
Shuguang Dou
Master's Thesis, College of Communication and Art Design, University of Shanghai for Science and Technology; December 2019