The 7th Nanhu International Young Scholars Forum (Session 26)
Published: 2022-11-28

Time: Wednesday, November 30, 2022, 8:10-17:00

Meeting: Tencent Meeting, ID 273-798-757

Host: Huazhong Agricultural University

Organizer: College of Science


Speaker 1: Zhang Shuxiong (张树雄), 8:10-8:50

Title: On large deviation probabilities for empirical distribution of branching random walks with heavy tails




Speaker 3: Yuan Peipei (袁佩佩), 9:30-10:10

Title: Correntropy-based Sparse Additive Machine: Generalization Analysis and Applications

Abstract: Sparse additive machines have shown competitive performance in variable selection and classification on high-dimensional data, owing to their representation flexibility and interpretability. However, existing methods often employ unbounded or non-smooth functions as surrogates for the 0-1 classification loss, which can degrade performance on data with non-Gaussian noise or outliers. To alleviate this problem, we propose a robust statistical learning method, the sparse additive machine with correntropy-induced loss (CSAM), which integrates the correntropy-induced loss, a data-dependent hypothesis space, and a sparse L(q,1)-norm regularizer into additive machines. In theory, we establish the generalization error bound and the variable-selection consistency of CSAM. In applications, experiments on both synthetic and real-world data sets consistently validate the effectiveness and robustness of CSAM. In addition, related work on the average top-k sparse additive machine and on fairness with additive models will be briefly introduced.
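For orientation, here is a minimal sketch of the type of objective involved (illustrative notation only, not necessarily the exact formulation of the talk): an additive classifier f(x) = \sum_{j=1}^{p} f_j(x_j) is fitted by minimizing a regularized empirical risk in which the usual convex surrogate is replaced by a bounded correntropy-induced loss,

\[
\min_{f=\sum_{j=1}^{p} f_j}\ \frac{1}{n}\sum_{i=1}^{n}\ell_{\sigma}\bigl(y_i f(x_i)\bigr)
\;+\;\lambda\,\Omega_{q,1}(f),
\qquad
\ell_{\sigma}(u)=1-\exp\!\Bigl(-\tfrac{(1-u)^{2}}{2\sigma^{2}}\Bigr),
\]

where \Omega_{q,1} denotes a sparsity-inducing L(q,1)-type penalty over the component functions f_j and \sigma controls the robustness of the loss; the boundedness of \ell_\sigma is what damps the influence of outliers and non-Gaussian noise.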


Speaker 4: Li Ge (李歌), 10:10-10:50

Title: Asymptotic behavior of two classes of dynamical systems with small random perturbations

Abstract: Random perturbations are ubiquitous in nature, and the asymptotic behavior of dynamical systems under random perturbations has long been a central and difficult topic in mathematics and engineering. When the perturbation is small in magnitude yet still affects the behavior of the system, studying dynamical systems under small random perturbations becomes particularly meaningful. This dissertation studies the asymptotic behavior of two classes of dynamical systems with small random perturbations: one class concerns the effect of the coupling strength and small noise intensity on synchronized systems, and the other concerns the effect of small mass and small noise intensity on second-order McKean-Vlasov stochastic systems, including their central limit theorems, large deviation principles, and moderate deviation principles.
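Schematically (in our notation; the precise equations are those of the talk), a second-order McKean-Vlasov stochastic system with small mass \varepsilon and small noise intensity \delta can be written as

\[
dX_t = V_t\,dt,
\qquad
\varepsilon\,dV_t = \bigl(b(X_t,\mu_t) - V_t\bigr)\,dt + \sqrt{\delta}\,\sigma(X_t,\mu_t)\,dW_t,
\qquad
\mu_t = \operatorname{Law}(X_t),
\]

and the limit theorems (central limit theorem, large and moderate deviation principles) describe the behavior of the system as \varepsilon and \delta tend to zero, possibly at related rates.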


Speaker 5: Zhang Yuyue (张玉越), 14:10-14:50

Title: Bifurcation analysis of a predator-prey model with Beddington-DeAngelis functional response and predator competition

Abstract: We consider a predator-prey model with Beddington-DeAngelis functional response and predator competition, which is a five-parameter family of planar vector fields. It is shown that, as the parameters vary, the model can undergo a focus-type degenerate Bogdanov-Takens bifurcation of codimension 3 and a Hopf bifurcation of codimension at least 2. Our theoretical results indicate that predator competition can produce richer dynamics, such as two limit cycles enclosing one or three hyperbolic positive equilibria, and three kinds of homoclinic orbits (homoclinic to a hyperbolic saddle, a saddle-node, or a neutral saddle). Moreover, there exists a threshold value m_0 for the predator capturing rate m: if the capturing rate is below or equal to this threshold, the predators always tend to extinction; if it is above the threshold, the predators and prey coexist, for all positive initial populations, in the form of multiple steady states or periodic oscillations. Finally, numerical simulations are presented to illustrate the theoretical results.
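For reference, the Beddington-DeAngelis functional response has the standard form a x / (1 + b x + c y), so a model of the kind described here (a schematic version with our parameter names, before any rescaling) reads

\[
\dot{x} = r x\Bigl(1-\frac{x}{K}\Bigr) - \frac{a x y}{1 + b x + c y},
\qquad
\dot{y} = y\Bigl(\frac{m\,a x}{1 + b x + c y} - d - q y\Bigr),
\]

where the term -q y^{2} in the predator equation models intraspecific predator competition and m plays the role of the capturing (conversion) rate appearing in the threshold condition above.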


Speaker 6: Li Mei (李梅), 14:50-15:30

Title: Differentially private decentralized learning framework

Abstract: Decentralized learning, which minimizes a finite sum of expected objective functions over a network of nodes, has been widely studied. However, local communication between neighbouring nodes in the network may leak private information. To address this challenge, we propose a general differentially private (DP) learning framework for decentralized data that applies to many non-smooth learning problems. We show that the proposed algorithm retains performance guarantees in terms of stability, generalization, and finite-sample performance, and we investigate the impact of local privacy-preserving computation on the global DP guarantee. Furthermore, we improve the utility-privacy trade-off by adopting a new class of noise-adding DP mechanisms based on generalized Gaussian distributions. Numerical results demonstrate the effectiveness of our algorithm and its advantage over state-of-the-art baselines in various decentralized settings.
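As background (a generic form, not necessarily the exact mechanism of the talk), a generalized Gaussian DP mechanism perturbs each locally shared quantity with noise drawn from a density

\[
p_{\alpha,\beta}(z)\;\propto\;\exp\!\Bigl(-\bigl(\tfrac{|z|}{\alpha}\bigr)^{\beta}\Bigr),
\qquad \alpha>0,\ \beta\geq 1,
\]

which recovers the Laplace mechanism at \beta=1 and the Gaussian mechanism at \beta=2; the extra shape parameter \beta offers additional freedom when trading utility against the privacy guarantee.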


Speaker 7: Wang Chao (王超), 15:30-16:10

Title: Boosting Methods Never Overfit on Separable Data

Abstract: Boosting algorithms have received great attention since they were proposed and have been widely used in finance, biology, pattern recognition, and other fields. A series of recent works has shown that the predictors obtained by boosting algorithms asymptotically converge to the max-margin predictor; as a consequence, these predictors are asymptotically non-overfitting. However, this does not answer whether they overfit after a finite number of iterations. In this paper, we show that the most commonly used boosting algorithm, Discrete AdaBoost, does not overfit on separable data sets. That is, when T ≤ m, the empirical risk and the generalization error decrease at the rate Õ(1/(γ²T)); once the number of iterations exceeds T ≈ m, the generalization error remains at a fixed level of Õ(1/(γ²m)), regardless of how large T is. Finally, numerical experiments on real-world data sets verify the theoretical results.
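In bounds of this type, m is presumably the number of training samples, T the number of boosting iterations, and γ the margin of the max-margin classifier on the separable training set; as a reminder (standard definition, not taken from the abstract), the normalized margin of a combined classifier f_T = \sum_{t=1}^{T}\alpha_t h_t is

\[
\gamma(f_T) \;=\; \min_{1\le i\le n}\ \frac{y_i \sum_{t=1}^{T}\alpha_t h_t(x_i)}{\sum_{t=1}^{T}|\alpha_t|},
\]

and γ then denotes the largest value of this quantity attainable by combinations of the base classifiers.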


Speaker 8: Luo Yuan (罗元), 16:10-16:50

Title: Newton-Raphson meets sparsity: Sparse learning via a novel penalty and a fast solver

Abstract: In machine learning and statistics, penalized regression methods are the main tools for variable selection (or feature selection) in high-dimensional sparse data analysis. Because the thresholding operators associated with commonly used penalties, such as the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP), are non-smooth, the classical Newton-Raphson algorithm cannot be applied. In this paper, we propose a cubic Hermite interpolation penalty (CHIP) whose thresholding operator is smooth. Theoretically, we establish non-asymptotic estimation error bounds for the global minimizer of CHIP-penalized high-dimensional linear regression, and we show that the estimated support coincides with the target support with high probability. We derive the Karush-Kuhn-Tucker (KKT) condition for the CHIP-penalized estimator and develop a support-detection-based Newton-Raphson (SDNR) algorithm to solve it. Simulation studies demonstrate that the proposed method performs well in a wide range of finite-sample situations, and a real-data example illustrates its application.
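Schematically (our notation; the exact CHIP construction is given in the paper), the estimator solves a penalized least-squares problem

\[
\min_{\beta\in\mathbb{R}^{p}}\ \frac{1}{2n}\,\|y - X\beta\|_{2}^{2} \;+\; \sum_{j=1}^{p} p_{\lambda}(|\beta_j|),
\]

where p_\lambda is the CHIP penalty, built by cubic Hermite interpolation so that the induced thresholding operator is smooth; it is this smoothness that allows the KKT system to be solved by Newton-Raphson iterations restricted to the detected support, which is the idea behind the SDNR solver.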