Personal Profile
Education and Work Experience:
Education:
2001-2005: Nanjing University of Aeronautics and Astronautics, Computer Science and Technology, B.S.;
2005-2007: Nanjing University of Aeronautics and Astronautics, Computer Science and Technology, M.S.;
2007-2011: Nanjing University of Aeronautics and Astronautics, Computer Science and Technology, Ph.D.;
Overseas Experience:
2013.1-2015.1: Postdoctoral researcher at the University of Western Ontario, Canada, in the Digital Imaging Group of the Department of Medical Biophysics and in Prof. Charles X. Ling's Data Mining and Business Intelligence Group, Department of Computer Science
2016.8-2017.8: Postdoctoral researcher in Prof. Heng Huang's group, University of Texas at Arlington, USA
2017.9-2018.7: Postdoctoral researcher in Prof. Heng Huang's group, University of Pittsburgh, USA
Work Experience:
2010-present: Professor, School of Computer and Software, Nanjing University of Information Science and Technology
Honorary Positions:
2020.7-present: Adjunct Professor, Department of Computer Science, University of Western Ontario, Canada (ranked around 200 worldwide); students can be jointly recruited with Academician Charles X. Ling
Professional Service:
IEEE Member;
Reviewer for IEEE Transactions on Neural Networks and Learning Systems, Machine Learning, Neural Networks, Information Sciences, and IEEE Transactions on Knowledge and Data Engineering
Research Areas and Directions:
My research interests are mainly in machine learning, business intelligence analytics, and medical image analysis, in particular optimization methods in machine learning (incremental learning for support vector machines, large-scale learning, model selection, sparse learning), cost-sensitive learning, learning with prior knowledge, and their applications to business intelligence and medical image analysis.
Student Training:
One major direction in current AI development is enabling computers to simulate the human ability to learn from experience. Machine learning studies how to endow machines with this ability, and it is now being applied in industry on an unprecedented scale. Our lab focuses on theoretical research into mainstream machine learning algorithms as well as their industrial applications.
The lab currently recruits undergraduate interns, master's students, and Ph.D. students; students interested in machine learning are welcome to contact me. Suitable candidates should have the following basic qualities:
1. Good moral character, honesty, and trustworthiness
2. Interest in machine learning and artificial intelligence
3. Willingness to work hard and stay focused on long-term research
Note: Strong programming, mathematics (linear algebra, statistics, optimization theory), and English skills are a plus, not a requirement.
Outstanding Ph.D., master's, and undergraduate students will be fully funded for overseas research visits.
Student Achievements:
• Congratulations to Hualin Zhang, who will begin Ph.D. studies at MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) in Fall 2023.
• Congratulations to Diyang Li, an NUIST computer science undergraduate, who will begin Ph.D. studies at Cornell University in Fall 2023.
• Congratulations to Hualin Zhang (third-year master's student), whose first-authored paper "Faster Gradient-Free Methods for Escaping Saddle Points" was accepted at ICLR 2023, a top conference in deep learning.
• Congratulations to Chenkang Zhang (third-year master's student), whose first-authored paper "Denoising Multi-Similarity Formulation: A Self-paced Curriculum-Driven Approach for Robust Metric Learning" was accepted at AAAI 2023, a top conference in artificial intelligence.
• Congratulations to Diyang Li (fourth-year undergraduate), whose first-authored paper "When Online Learning Meets ODE: Learning without Forgetting on Variable Feature Space" was accepted at AAAI 2023, a top conference in artificial intelligence.
• Congratulations to Zhou Zhai, whose first-authored paper "Faster Fair Machine via Transferring Fairness Constraints to Virtual Samples" was accepted at AAAI 2023, a top conference in artificial intelligence.
• Congratulations to Hualin Zhang (second-year master's student), whose first-authored paper "Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients" was accepted at NeurIPS 2022, a top conference in artificial intelligence.
• Congratulations to Diyang Li (third-year undergraduate) and Xiaohan Zhao (second-year undergraduate), whose jointly authored paper "GAGA: Deciphering Age-path of Generalized Self-paced Regularizer" was accepted at NeurIPS 2022, a top conference in artificial intelligence.
• Congratulations to Huimin Wu (second-year master's student), whose first-authored paper "Efficient Semi-Supervised Adversarial Training without Guessing Labels" was accepted at ICDM 2022, a top conference in data mining.
• Congratulations to Ziran Xiong (third-year master's student), whose first-authored paper "End-to-end Semi-Supervised Ordinal Regression AUC Maximization with Convolutional Kernel Networks" was accepted at KDD 2022, a top conference in data mining.
• Congratulations to Wanli Shi (second-year Ph.D. student), whose first-authored paper "Gradient-Free Method for Heavily Constrained Nonconvex Optimization" was accepted at ICML 2022, a top conference in machine learning.
• Congratulations to Diyang Li (second-year undergraduate), whose first-authored paper "Chunk Dynamic Updating for Group Lasso with ODEs" was accepted at AAAI 2022, a top conference in artificial intelligence.
• Congratulations to Chenkang Zhang (second-year master's student), whose first-authored paper "Balanced Self-Paced Learning for AUC Maximization" was accepted at AAAI 2022, a top conference in artificial intelligence.
• Congratulations to Xiyuan Wei (third-year undergraduate), whose paper "Black-Box Reductions for Zeroth-Order Gradient Algorithms to Achieve Lower Query Complexity" was accepted by the Journal of Machine Learning Research (JMLR), a top journal in machine learning.
• Congratulations to Wanli Shi (first-year Ph.D. student), whose first-authored paper "Improved Penalty Method via Doubly Stochastic Gradients for Bilevel Hyperparameter Optimization" was accepted at AAAI 2021, a top conference in artificial intelligence.
• Congratulations to Huimin Wu (first-year master's student), whose first-authored paper "Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients" was accepted at AAAI 2021, a top conference in artificial intelligence.
• Congratulations to Wanli Shi (second-year master's student), whose first-authored paper "Semi-Supervised Multi-Label Learning from Crowds via Deep Sequential Generative Model" was accepted in the KDD 2020 research track, a top conference in data mining.
• Congratulations to Wanli Shi (second-year master's student), whose first-authored paper "Quadruply Stochastic Gradient Method for Large Scale Nonlinear Semi-Supervised Ordinal Regression AUC Optimization" was accepted at AAAI 2020, a top conference in artificial intelligence.
• Congratulations to Zhou Zhai (first-year master's student), whose first-authored paper "Safe Sample Screening for Robust Support Vector Machine" was accepted at AAAI 2020, a top conference in artificial intelligence.
• Congratulations to Xiang Geng (second-year master's student) for publishing a paper at IJCAI 2019, a top conference in artificial intelligence, and successfully presenting the talk.
• Congratulations to Wanli Shi (first-year master's student) for publishing a paper at IJCAI 2019, a top conference in artificial intelligence, and successfully presenting the talk.
• Congratulations to Shuyang Yu (second-year undergraduate visiting student) and Kunpeng Ning (third-year undergraduate visiting student) for publishing a Research Track paper at KDD 2019, a top conference in data mining.
Research Achievements:
Major Research Projects in Recent Years
2016.1-2019.12: Research on Support Vector Machines for Crowdsourced Big Data, National Natural Science Foundation of China (NSFC) General Program, Principal Investigator
2013.1-2015.12: Research on Accurate Incremental Support Vector Machines, NSFC Young Scientists Fund, Principal Investigator
2012.2-2013.12: Research and Application of Accurate Incremental Support Vector Machines, NUIST Research Start-up Fund, Principal Investigator
2007.9-2008.7: Service-Oriented-Architecture-Based Civil Aviation Public Information Service Platform, National 863 Program Key Project, Participant
Representative Research Achievements
Published multiple papers in SCI Q1 journals in machine learning (e.g., IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks and Learning Systems), as well as papers at top machine learning conferences (NIPS/NeurIPS, ICML), top data mining conferences (KDD), and top artificial intelligence conferences (AAAI, IJCAI).
[80]Wanli Shi, Hongchang Gao, Bin Gu. Gradient-Free Method for Heavily Constrained Nonconvex Optimization. ICML 2022, (accepted)
[79]Alexander Gasnikov, Anton Novitskii, Vasilii Novitskii, Farshed Abdukhakimov, Dmitry Kamzolov, Aleksandr Beznosikov, Martin Takáč, Pavel Dvurechensky, Bin Gu. The power of first-order smooth optimization for black-box nonsmooth problems. ICML 2022, (accepted)
[78]Ziran Xiong, Wanli Shi, Bin Gu. End-to-end Semi-Supervised Ordinal Regression AUC Maximization with Convolutional Kernel Networks. KDD 2022, (accepted)
[77]Ziran Xiong, Charles X. Ling, Bin Gu. Kernel Error Path Algorithm. IEEE Transactions on Neural Networks and Learning Systems. (accepted)
[76]Haiyan Chen, Yizhen Jia, Jiaming Ge, Bin Gu. Incremental learning algorithm for large-scale semi-supervised ordinal regression. Neural Networks. (accepted)
[75]Bin Gu, Chenkang Zhang, Huan Xiong, Heng Huang. Balanced Self-Paced Learning for AUC Maximization. AAAI 2022.
[74]Junyi Li, Bin Gu, Heng Huang. A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse. AAAI 2022.
[73]Diyang Li, Bin Gu. Chunk Dynamic Updating for Group Lasso with ODEs. AAAI 2022.
[72]Bin Gu, Zhou Zhai, Xiang Li, Heng Huang. Finding Age Path of Self-Paced Learning. ICDM 2021.
[71]Qingsong Zhang, Bin Gu, Cheng Deng, Heng Huang. Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm. CIKM 2021.
[70]Qingsong Zhang, Bin Gu, Cheng Deng, Jian Pei, Heng Huang. AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization. KDD 2021.
[69]Bin Gu, Ziran Xiong, Xiang Li, Zhou Zhai, Guansheng Zheng. Kernel Path for ν-Support Vector Classification. IEEE Transactions on Neural Networks and Learning Systems.
[68]Bin Gu, Xiyuan Wei, Shangqian Gao, Ziran Xiong, Cheng Deng and Heng Huang. Black-Box Reductions for Zeroth-Order Gradient Algorithms to Achieve Lower Query Complexity. Journal of Machine Learning Research (JMLR).
[67]Bin Gu, Charles X. Ling. Generalized Error Path Algorithm. Pattern Recognition.
[66]Wanli Shi, Bin Gu, Xiang Li, Cheng Deng and Heng Huang. Triply Stochastic Gradient Method for Large-Scale Nonlinear Similar Unlabeled Classification. Machine Learning.
[65]Bin Gu, Ziran Xiong, Shuyang Yu, Guansheng Zheng. A Kernel Path Algorithm for General Parametric Quadratic Programming Problem. Pattern Recognition.
[64]Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng and Heng Huang. Privacy-Preserving Asynchronous Vertical Federated Learning Algorithms for Multi-Party Collaborative Learning. IEEE Transactions on Neural Networks and Learning Systems.
[63]Zhiyuan Dang, Bin Gu, Heng Huang. Large-Scale Kernel Method for Vertical Federated Learning. Federated Learning. Springer.
[62]Bin Gu, Zhiyuan Dang, Zhouyuan Huo, Cheng Deng and Heng Huang. Scaling Up Generalized Kernel Methods. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[61]Wanli Shi, Bin Gu, Heng Huang. Improved Penalty Method via Doubly Stochastic Gradients for Bilevel Hyperparameter Optimization. AAAI 2021.
[60]Zhouyuan Huo, Bin Gu, Heng Huang. Large Batch Optimization for Deep Learning Using New Complete Layer-Wise Adaptive Rate Scaling. AAAI 2021.
[59]Huimin Wu, Bin Gu, Zhengmian Hu, Heng Huang. Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. AAAI 2021.
[58]Qingsong Zhang, Bin Gu, Cheng Deng, Heng Huang. Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating. AAAI 2021.
[57]Zhiyuan Dang, Xiang Li, Bin Gu, Cheng Deng, Heng Huang. Large Scale Nonlinear AUC Maximization via Triply Stochastic Gradients. IEEE Transactions on Pattern Analysis and Machine Intelligence. (accepted)
[56]Bin Gu, Wenhan Xian, Zhouyuan Huo, Cheng Deng and Heng Huang. A Unified q-Memorization Framework for Asynchronous Stochastic Optimization. JMLR. (accepted)
[55]Bin Gu, Zhou Zhai, Cheng Deng, and Heng Huang. Efficient Active Learning by Querying Discriminative and Representative Samples and Fully Exploiting Unlabeled Data. IEEE Transactions on Neural Networks and Learning Systems. (accepted)
[54]Bin Gu, Xiang Geng, Xiang Li, Wanli Shi, Guansheng Zheng, Cheng Deng, and Heng Huang. Scalable Kernel Ordinal Regression via Doubly Stochastic Gradients. IEEE Transactions on Neural Networks and Learning Systems. (accepted)
[53]Bin Gu, Xiang Geng, Wanli Shi, Yingying Shan, Yufang Huang, Zhijie Wang, Guansheng Zheng. Solving Large-Scale Support Vector Ordinal Regression with Asynchronous Parallel Coordinate Descent Algorithms. Pattern Recognition. (accepted)
[52]Runxue Bao, Bin Gu, Heng Huang. Fast OSCAR and OWL with Safe Screening Rules. ICML 2020.
[51]Bin Gu, Zhiyuan Dang, Xiang Li and Heng Huang. Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data. KDD 2020.
[50]Wanli Shi, Victor S. Sheng, Xiang Li, Bin Gu. Semi-Supervised Multi-Label Learning from Crowds via Deep Sequential Generative Model. KDD 2020.
[49]Wanli Shi, Bin Gu, Xiang Li, Heng Huang. Quadruply Stochastic Gradient Method for Large Scale Nonlinear Semi-Supervised Ordinal Regression AUC Optimization. AAAI 2020.
[48]Zhou Zhai, Bin Gu, Xiang Li, Heng Huang. Safe Sample Screening for Robust Support Vector Machine. AAAI 2020.
[47]Runxue Bao, Bin Gu, Heng Huang. Efficient Approximate Solution Path Algorithm for Order Weight L_1-Norm with Accuracy Guarantee. ICDM 2019.
[46]Bin Gu, Xiang Geng, Xiang Li, Guansheng Zheng. Efficient Inexact Proximal Gradient Algorithms for Structured Sparsity-Inducing Norm. Neural Networks, 118 (2019): 352-362.
[45]Bin Gu, Wenhan Xian, Heng Huang. Asynchronous Stochastic Frank-Wolfe Algorithms for Non-convex Optimization. IJCAI 2019.
[44]Xiang Geng, Bin Gu, Xiang Li, Wanli Shi, Guansheng Zheng, Heng Huang. Scalable Semi-Supervised SVM via Triply Stochastic Gradients. IJCAI 2019.
[43]Wanli Shi, Bin Gu, Xiang Li, Xiang Geng, Heng Huang. Quadruply Stochastic Gradients for Large-Scale Nonlinear Semi-Supervised AUC Optimization. IJCAI 2019.
[42]Shuyang Yu, Bin Gu, Kunpeng Ning, Haiyan Chen, Jian Pei and Heng Huang. Tackle Balancing Constraint for Incremental Semi-Supervised Support Vector Learning. KDD 2019.
[41]Bin Gu, Yingying Shan,Xin Quan, Guansheng Zheng. Accelerating Sequential Minimal Optimization via Stochastic Sub-Gradient Descent. IEEE Transactions on Cybernetics. DOI: 10.1109/TCYB.2019.2893289
[40]Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng Huang. Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization. AAAI 2019.
[39]Bin Gu, Zhouyuan Huo, Heng Huang. Scalable and Efficient Pairwise Learning to Achieve Statistical Accuracy. AAAI 2019.
[38]Zhouyuan Huo, Bin Gu, Heng Huang. Training Neural Networks Using Features Replay. NIPS 2018.
[37]Bin Gu, Xin Quan, Yunhua Gu, Victor S. Sheng, Guansheng Zheng. Chunk Incremental Learning for Cost-Sensitive Hinge Loss Support Vector Machine. Pattern Recognition.
[36]Bin Gu, Zhouyuan Huo, Heng Huang. Faster Derivative-Free Stochastic Algorithm for Shared Memory Machines. ICML 2018.
[35]Zhouyuan Huo, Bin Gu, Qian Yang, Heng Huang. Decoupled Parallel Backpropagation with Convergence Guarantee. ICML 2018.
[34]Bin Gu, Xiao-Tong Yuan, Songcan Chen, Heng Huang. New Incremental Learning Algorithm for Semi-Supervised Support Vector Machine. KDD 2018.
[33]Bin Gu, Yingying Shan, Xiang Geng, Guansheng Zheng, Heng Huang. Accelerated Asynchronous Greedy Coordinate Descent Algorithm for SVMs. IJCAI 2018.
[32]Bin Gu, Xinwang Ju, Xiang Li, Guansheng Zheng, Heng Huang. Faster Training Algorithms for Structured Sparsity-Inducing Norm. IJCAI 2018.
[31]Bin Gu, Zhouyuan Huo, Heng Huang. Asynchronous Doubly Stochastic Group Regularized Learning. International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, pp. 1791-1800.
[30]Bin Gu, Victor S. Sheng. A Solution Path Algorithm for General Parametric Quadratic Programming Problem. IEEE Transactions on Neural Networks and Learning Systems.
[29]Bin Gu. A Regularization Path Algorithm for Support Vector Ordinal Regression. Neural Networks, 98 (2018): 114-121.
[28]Xiang Li, Huaimin Wang, Bin Gu, Charles X Ling. The convergence of linear classifiers on large sparse data. Neurocomputing, 2018, 273: 622-633.
[27]Bin Gu, De Wang, Zhouyuan Huo, Heng Huang. Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization. AAAI 2018.
[26]Zhouyuan Huo, Bin Gu, Heng Huang. Accelerated Method for Stochastic Composition Optimization with Nonsmooth Regularization, AAAI 2018
[25]Bin Gu, Xin Miao, Zhouyuan Huo, Heng Huang. Asynchronous Doubly Stochastic Sparse Kernel Learning, AAAI 2018
[24]Bin Gu, Guodong Liu, Heng Huang. Groups-Keeping Solution Path Algorithm for Sparse Regression with Automatic Feature Grouping. KDD 2017, pp. 185-193. (Oral Presentation)
[23]Xiang Li, Bin Gu, Shuang Ao, Huaiming Wang, Charles X. Ling. Triply Stochastic Gradients on Multiple Kernel Learning, UAI 2017
[22]Victor Sheng, Jing Zhang, Bin Gu, Xindong Wu. Majority Voting and Pairing with Multiple Noisy Labeling. IEEE Transactions on Knowledge and Data Engineering. 2017.
[21]Bin Gu, Victor S. Sheng, Keng Yeow Tay, Walter Romano, and Shuo Li. Cross Validation Through Two-dimensional Solution Surface for Cost-Sensitive SVM. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017, 39.6: 1103-1121.
[20]Bin Gu, Xinming Su, Victor S. Sheng. Structural Minimax Probability Machine. IEEE Transactions on Neural Networks and Learning Systems. 2017, 28.7: 1646-1656.
[19]Bin Gu, Victor S. Sheng. A Robust Regularization Path Algorithm for ν-Support Vector Classification. IEEE Transactions on Neural Networks and Learning Systems. 2017, 28.5: 1241-1248.
[18]Bin Gu, Yingying Shan, Victor S. Sheng, and Shuo Li. Sparse Regression with Output Correlation for Cardiac Ejection Fraction Estimation. Information Sciences. 2018, 423: 303-312.
[17]Bin Gu and Charles X. Ling. A New Generalized Error Path Algorithm for Model Selection. Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015.
[16]Bin Gu, Victor S. Sheng, and Shuo Li. Bi-parameter space partition for cost-sensitive SVM. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3532–3539, 2015.
[15]Xiang Li, Huaiming Wang, Bin Gu, Charles X. Ling. Data Sparseness in Linear SVM. IJCAI 2015:3628-3634.
[14]Bin Gu, Victor S. Sheng, Keng Yeow Tay, Walter Romano, and Shuo Li. Incremental Learning for ν-Support Vector Regression. Neural Networks. 67 (2015): 140-150.
[13]Bin Gu, Victor S. Sheng, Keng Yeow Tay, Walter Romano, and Shuo Li. Incremental Support Vector Learning for Ordinal Regression. IEEE Transactions on Neural Networks and Learning Systems, 26(7), pp. 1403 - 1416, 2015.
[12]Z. Wang, M. B. Salah, B. Gu, A. Islam, A. Goela, S. Li. Direct Estimation of Cardiac Biventricular Volumes With an Adapted Bayesian Formulation. IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, pp. 1251-1260, 2014.
[11]Victor S. Sheng, Bin Gu, Wei Fang, Jian Wu. Cost-sensitive Learning for Defect Escalation. Knowledge-Based Systems, Volume 66, August 2014, Pages 146-155.
[10]Bin Gu, Victor S. Sheng. Feasibility and Finite Convergence Analysis for Accurate On-line ν-Support Vector Learning. IEEE Transactions on Neural Networks and Learning Systems, 24(8):1304-1315, 2013.
[9]Bin Gu, Jian-Dong Wang, Guan-Sheng Zheng, Yue-Cheng Yu. Regularization Path for ν-Support Vector Classification. IEEE Transactions on Neural Networks and Learning Systems, 23(5): 800-811,2012.
[8]Bin Gu, Jian-Dong Wang, Yue-Cheng Yu, Guan-Sheng Zheng, Yu-Fan Huang, and Tao Xu. Accurate on-line ν-support vector learning. Neural Networks, 27(0):51–59, 2012.
[7]Bin Gu, Guan-Sheng Zheng, Jian-Dong Wang. Analysis for Incremental and Decremental Standard Support Vector Machine. Journal of Software, 24(7):1601-1613, 2013. (In Chinese)
[6]Bin Gu, Jian-Dong Wang. Effective ν-Path Algorithm for ν-Support Vector Regression. Journal of Software, 23(10): 2643-2654, 2012. (In Chinese)
[5]Bin Gu, Jian-Dong Wang, and Tao Li. Ordinal-Class Core Vector Machine. Journal of Computer Science and Technology, 2010, 25(4): 699-708.
[4]Bin Gu, Jian-Dong Wang, and Hai-yan Chen. On-line Off-line Ranking Support Vector Machine and Analysis. In Proceedings of International Joint Conference on Neural Networks (IJCNN’08), New York: IEEE Press, 2008.
[3]Bin Gu, Jian-Dong Wang. A Novel Feature Extraction Method for QAR Data. Journal of Sichuan University: Engineering Science Edition, 2011, 3(43): 113-117. (In Chinese)
[2]Bin Gu, Jian-Dong Wang. A Class of Methods for Calculating the Threshold of Local Outlier Factor. Journal of Chinese Computer Systems, 2008, 29(12): 2254-2257. (In Chinese)
[1]Tao Xu, Jian-Li Ding, Bin Gu, Jian-Dong Wang. Forecasting Warning Level of Flight Delays Based on Incremental Ranking Support Vector Machine. Acta Aeronautica et Astronautica Sinica, 2009, 30(7): 1256-1263. (In Chinese)
For more information, please visit https://ssl123141924ecb471a6e0c70732bd329da5f5.vpn.nuist.edu.cn/site/jsgubin/
Honors:
Other Academic Achievements:
Helped Hangzhou Anheng Information Technology Co., Ltd., the data security provider for the 2008 Beijing Olympics, develop a risk early-warning system for detecting unauthorized data export by operations and maintenance staff.