Cheng Gong

Published: 2025-03-22


Research interests: neural network compression, heterogeneous computing, machine learning

Email: cheng-gong@nankai.edu.cn



Ph.D., Postdoctoral Researcher, Lecturer


Biography ——————————————————————————————

Cheng Gong, male, is a Lecturer at the School of Artificial Intelligence, Tiangong University. He holds a Ph.D. in Engineering and has completed postdoctoral research; his main research interests are neural network compression, heterogeneous computing, and machine learning. He received his B.Eng. (2016) and Ph.D. (2022) in Computer Science and Technology from Nankai University. He has published more than 10 papers in major academic journals and conferences at home and abroad, filed 2 Chinese invention patent applications, led 1 China Postdoctoral Science Foundation project, and participated in a number of national and provincial/ministerial-level projects.


Education and Work Experience ———————————————————————————

2025 – present: Lecturer, School of Artificial Intelligence, Tiangong University;

2022 – 2024: Postdoctoral Researcher, College of Software, Nankai University;

2016 – 2022: Ph.D., College of Computer Science, Nankai University;

2012 – 2016: B.Eng., College of Computer and Control Engineering, Nankai University.

Major Research Projects ————————————————————————————

[1] Adaptive Nonlinear Quantization and Acceleration for Deep Neural Networks, China Postdoctoral Science Foundation General Program, 80,000 RMB, Principal Investigator. (Completed)

Selected Publications ————————————————————————————

  1. Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang. Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks. IEEE Transactions on Parallel and Distributed Systems, 2021. (CCF-A)

  2. Cheng Gong, Yao Chen, Ye Lu, Tao Li, Cong Hao, Deming Chen. VecQ: Minimal loss DNN model compression with vectorized weight quantization. IEEE Transactions on Computers, 2020, 70(5): 696-710. (CCF-A)

  3. Cheng Gong, Yao Chen, Qiuyang Luo, Ye Lu, Tao Li, Yuzhi Zhang, Yufei Sun, and Le Zhang. Deep Feature Surgery: Towards Accurate and Efficient Multi-Exit Networks. In ECCV, 2024. (CCF-B, one of the three top computer vision conferences)

  4. Cheng Gong, Ye Lu, Su-Rong Dai, Qian Deng, Cheng-Kun Du, and Tao Li. AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks. Journal of Computer Science and Technology, 39(2):401–420, 2024. (CCF-B)

  5. Cheng Gong, Ye Lu, Su-Rong Dai, Fangxin Liu, Xinwei Chen, Tao Li. An Ultra-Low-Loss Quantization and Compression Method for Deep Neural Networks. Journal of Software, 2021, 32(8): 2391−2407. http://www.jos.org.cn/1000-9825/6189.htm. (CCF Chinese Rank A)

  6. Cheng Gong, Ye Lu, Chunying Song, Tao Li, Kai Wang. OSN: Onion-ring support neighbors for correspondence selection. Information Sciences, 2021, 560: 331-346. (SCI Q1, CCF-B)

  7. Cheng Gong, Haoshuai Zheng, Mengting Hu, Zheng Lin, Deng-Ping Fan, Yuzhi Zhang, and Tao Li. Minimize quantization output error with bias compensation. CAAI Artificial Intelligence Research, 2024.

  8. Cheng Gong, Tao Li, Ye Lu, Cong Hao, Xiaofan Zhang, Deming Chen, Yao Chen. µL2Q: An Ultra-Low Loss Quantization Method for DNN Compression. In International Joint Conference on Neural Networks (IJCNN), 2019: 1-8. (CCF-C)

  9. Deng-Ping Fan, Cheng Gong, Yang Cao, Bo Ren, Ming-Ming Cheng, Ali Borji. Enhanced-alignment measure for binary foreground map evaluation. In International Joint Conference on Artificial Intelligence (IJCAI), 2018: 698-704. (CCF-A)

  10. Yao Chen, Kai Zhang, Cheng Gong, Cong Hao, Xiaofan Zhang, Tao Li, Deming Chen. T-DLA: An open-source deep learning accelerator for ternarized DNN models on embedded FPGA. In IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2019: 13-18.

  11. Fangxin Liu, Kunpeng Xie, Cheng Gong, Shusheng Liu, Ye Lu, Tao Li. LHC: A Low-Power Heterogeneous Computing Method on Neural Network Accelerator. In IEEE International Conference on Parallel and Distributed Systems (ICPADS), 2019: 326-334. (CCF-C)