Lecture Announcement: Professor Liu Weidong, Shanghai Jiao Tong University

Published: 2019-06-11

Title: Quantile Regression Under Memory Constraint

Speaker: Professor Liu Weidong, Shanghai Jiao Tong University

Time: Thursday, June 13, 2019, 15:00–15:45

Venue: Room 615, Comprehensive Building

Speaker bio: Liu Weidong is a professor, doctoral supervisor, and vice dean of the School of Mathematical Sciences at Shanghai Jiao Tong University. He received his bachelor's degree from the Department of Mathematics at Zhejiang University in 2003 and his Ph.D. from Zhejiang University in 2008, and from 2008 to 2011 was a postdoctoral researcher at the Hong Kong University of Science and Technology and the Wharton School of the University of Pennsylvania. His honors include a National Top 100 Excellent Doctoral Dissertation Award (2010), a Silver Prize of the New World Mathematics Award, the Shanghai "Eastern Scholar" distinguished professorship (since 2011), the National Science Fund for Excellent Young Scholars (2013), the Shanghai "Dawn Scholar" title (2014), selection as a Young Top-Notch Talent of the National Ten Thousand Talents Program (2016), and the National Science Fund for Distinguished Young Scholars (2018). He has published more than 40 papers in top journals including the Annals of Statistics, the Journal of the American Statistical Association, and the Annals of Probability. His main research interests are statistical theory and machine learning.


Abstract: This paper studies the inference problem in quantile regression (QR) for a large sample size $n$ under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naïve divide-and-conquer approach, which splits the data into batches of size $m$, computes the local QR estimator for each batch, and then aggregates the estimators via averaging. However, this method only works when $n=o(m^2)$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregation. Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality of the obtained estimator and show that our estimator, with only a few rounds of aggregation, achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the dimensionality $p$ to go to infinity. The proposed method can also be applied to the QR problem in a distributed computing environment (e.g., a large-scale sensor network) or for real-time streaming data.
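The naïve divide-and-conquer baseline described in the abstract can be sketched as follows. This is an illustrative implementation, not the speaker's code: it casts each local quantile regression as a linear program (the standard check-loss formulation) via `scipy.optimize.linprog`, then averages the batch estimators; the function names `qr_fit` and `dc_qr` are my own.

```python
import numpy as np
from scipy.optimize import linprog

def qr_fit(X, y, tau):
    """Quantile regression at level tau via its linear-programming form:
    minimize sum_i rho_tau(y_i - x_i' b) over b, where rho_tau is the
    check loss. Variables: [b (free), u >= 0, v >= 0] with
    X b + u - v = y, so u and v are the positive/negative residual parts."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

def dc_qr(X, y, tau, m):
    """Naive divide-and-conquer: split the n samples into batches of size m,
    fit QR locally on each batch, and average the local estimators."""
    n = X.shape[0]
    estimates = [qr_fit(X[i:i + m], y[i:i + m], tau)
                 for i in range(0, n, m)]
    return np.mean(estimates, axis=0)
```

Each batch fit only needs $m$ rows in memory at a time, which is the point of the memory-constrained setting; the abstract's observation is that this simple averaging is only valid when $n = o(m^2)$, which motivates the iterative refinement scheme proposed in the talk.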

