2018 Yann LeCun

YANN LECUN
United States – 2018
CITATION
For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Yann LeCun spent his early life in France, growing up in the suburbs of Paris. (His name was originally Le Cun, but he dropped the space after discovering that Americans were confused and treated Le as his middle name.) His father was an engineer whose interests in electronics and mechanics were passed on to Yann during a boyhood of tinkering. As a teenager he enjoyed playing in a band as well as science and engineering. He remained in the region to study, earning the equivalent of a master’s degree from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique, one of France’s network of competitive and specialized non-university schools established to train the country’s future elite. His work there focused on microchip design and automation.

LeCun attributes his longstanding interest in machine intelligence to seeing the murderous mainframe HAL, whom he encountered as a young boy in the movie 2001: A Space Odyssey. He began independent research on machine learning as an undergraduate, making it the centerpiece of his Ph.D. work at the Sorbonne Université (then called Université Pierre et Marie Curie). LeCun’s research closely paralleled discoveries made independently by his co-awardee Geoffrey Hinton. Like Hinton he had been drawn to the then-unfashionable neural network approach to artificial intelligence, and like Hinton he discovered that the well-publicized limitations of simple neural networks could be overcome with what was later called the “back-propagation” algorithm, which can efficiently train “hidden” neurons in intermediate layers between the input and output nodes.
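To make that idea concrete, here is a minimal sketch (plain Python with NumPy; not LeCun’s original formulation) of back-propagation training a tiny two-layer network on the XOR problem, which no network without hidden neurons can solve. The output error is passed backwards through the chain rule to assign credit to the hidden units:

```python
# Minimal back-propagation sketch: a 2-4-1 sigmoid network learns XOR.
# Illustrative only; the hyperparameters (learning rate, steps) are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))    # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1)              # forward pass: hidden activations
    out = sigmoid(h @ W2)            # forward pass: output
    # Backward pass: the chain rule pushes the output error into each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out          # gradient-descent weight updates
    W1 -= 0.5 * X.T @ d_h

print(np.round(out, 2))              # should approach [[0], [1], [1], [0]]
```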

A workshop held in Les Houches in the French Alps in 1985 first brought LeCun into direct contact with the international research community working along these lines. It was there that he met Terry Sejnowski, a close collaborator of Hinton’s whose work on backpropagation was not yet published. A few months later, when Hinton was in Paris, he introduced himself to LeCun, which led to an invitation to a summer workshop at Carnegie Mellon and a post-doctoral year with Hinton’s new research group in Toronto. The collaboration endured: two decades later, in 2004, LeCun worked with Hinton to establish a program on Neural Computation and Adaptive Perception through the Canadian Institute for Advanced Research (CIFAR). Since 2014 he has co-directed the program, now renamed Learning in Machines & Brains, with his co-awardee Yoshua Bengio.

At the conclusion of the fellowship, in 1988, LeCun joined the staff of Bell Labs, a renowned center of computer science research. Its Adaptive Systems Research department, headed by Lawrence D. Jackel, focused on machine learning. Jackel was heavily involved in establishing the Neural Networks for Computing workshop series, later run by LeCun and renamed the “Learning Workshop”. It was held annually from 1986 to 2012 at the Snowbird resort in Utah. The invitation-only event brought together an interdisciplinary group of researchers to exchange ideas on the new techniques and learn how to apply them in their own work.

LeCun’s work at Bell Labs focused on neural network architectures and learning algorithms. His most far-reaching contribution was a new approach called the “convolutional neural network.” Many networks are designed to recognize visual patterns, but a simple learning model trained to respond to a feature in one location (say, the top left of an image) would not respond to the same feature in a different location. A convolutional network is designed so that a filter, or detector, is swept across the grid of input values. As a result, higher-level portions of the network are alerted to the pattern wherever it occurs in the image. This made training faster and reduced the overall size of networks, boosting their performance. The work was an extension of LeCun’s earlier achievements, because convolutional networks rely on backpropagation techniques to train their hidden layers.
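The weight-sharing idea fits in a few lines. The sketch below (plain NumPy; the conv2d helper and the 2×2 diagonal filter are our own illustrative choices) slides a single filter across an image grid, so the same motif produces the same response wherever it appears:

```python
# Convolution as weight sharing: one small filter is applied at every
# position of the input, so detection is independent of location.
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# The same diagonal motif placed in two corners of a 6x6 image:
img = np.zeros((6, 6))
img[0:2, 0:2] = np.eye(2)        # motif at the top left
img[4:6, 4:6] = np.eye(2)        # identical motif at the bottom right
feature_map = conv2d(img, np.eye(2))
print(feature_map)               # peak response of 2.0 at both locations
```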

As well as developing the convolutional approach, LeCun pioneered its application in “graph transformer networks” to recognize printed and handwritten text. This was used in a widely deployed system to read numbers written on checks, produced in the early 1990s in collaboration with Bengio, Léon Bottou and Patrick Haffner. At that time handwriting recognition was enormously challenging, despite an industry-wide push to make it work reliably in “slate” computers (the ancestors of today’s tablet systems). Automated check clearing was an important application, as millions of checks were processed daily. The job required very high accuracy but, unlike general handwriting analysis, required only digit recognition, which reduced the number of valid symbols. The technology was licensed by specialist providers of bank systems such as National Cash Register. LeCun suggests that at one point it was reading more than 10% of all the checks written in the US.

Check processing was carried out in centralized locations, which could be equipped with the powerful computers needed to run neural networks. Increases in computing power made it possible to build more complex networks and deploy convolutional approaches more widely. Today, for example, the technique is used on Android smartphones to power the speech recognition features of the Google Assistant, such as real-time transcription, and the camera-based translation features of the translation app.

His other main contribution at Bell Labs was the development of “Optimal Brain Damage” regularization methods. This evocatively named technique identifies ways to simplify neural networks by removing unnecessary connections. Done properly, this “brain damage” could produce simpler, faster networks that performed as well as or better than the full-size version.
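A hedged sketch of the idea: Optimal Brain Damage ranks each connection by a saliency estimate derived from a diagonal approximation of the Hessian of the loss, roughly s_i ≈ ½ h_ii w_i², and removes the lowest-saliency connections. In the illustrative Python below the Hessian diagonal is a supplied placeholder (a real implementation would estimate it with an extra backward pass), and the obd_prune helper is our own naming:

```python
# Optimal Brain Damage-style pruning sketch: zero out the connections
# whose removal is estimated to perturb the loss the least.
import numpy as np

def obd_prune(weights, hessian_diag, fraction):
    """Zero the given fraction of weights with the lowest saliency."""
    saliency = 0.5 * hessian_diag * weights ** 2
    k = int(fraction * weights.size)
    threshold = np.sort(saliency, axis=None)[k]
    mask = saliency >= threshold          # keep only salient connections
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, (4, 4))          # toy weight matrix
H = np.abs(rng.normal(1.0, 0.1, (4, 4)))  # placeholder Hessian diagonal
W_pruned, mask = obd_prune(W, H, fraction=0.5)
print(int(mask.sum()), "of", W.size, "connections kept")
```

After pruning, the surviving network would normally be retrained briefly so that the remaining weights compensate for the removed connections.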

In 1996 AT&T, which had failed to establish itself in the computer industry, spun off most of Bell Labs and its telecommunications hardware business into a new company, Lucent Technologies. LeCun stayed behind to run an AT&T Labs group focused on image processing research. His primary accomplishment there was the DjVu image compression technology, developed with Léon Bottou, Patrick Haffner, and Paul G. Howard. High-speed Internet access was still rare, so as a communications company AT&T stood to gain if large documents could be downloaded more quickly. LeCun’s algorithm compressed files more effectively than Adobe’s Acrobat software, but lacked the latter’s broad support. It was used extensively by the Internet Archive in the early 2000s.

LeCun left industrial research in 2003 for a faculty position as a professor of computer science at New York University’s Courant Institute of Mathematical Sciences, the leading center for applied mathematics research in the US. It has a strong presence in scientific computation and a particular focus on machine learning. He took the opportunity to return his research focus to neural networks. At NYU LeCun ran the Computational and Biological Learning Lab, which continued his work on algorithms for machine learning and applications in computer vision. He is still at NYU, though as his reputation has grown he has added several new titles and appointments. Most notable of these is the Silver endowed professorship awarded to LeCun in 2008, funded by a generous bequest from Polaroid co-founder Julius Silver to allow NYU to attract and retain top faculty.

LeCun retained his love of building things, with hobbies including constructing airplanes, electronic musical instruments, and robots. At NYU he combined this interest in robotics with his work on convolutional networks for computer vision, participating in DARPA-sponsored projects on autonomous navigation. His most important institutional initiative was his work in 2011 to create the NYU Center for Data Science, which he directed until 2014. The center offers undergraduate and graduate degrees and functions as a focal point for data science initiatives across the university.

By the early 2010s the leading technology companies were scrambling to deploy machine learning systems based on neural networks. Like other leading researchers LeCun was courted by the tech giants, and in December 2013 he was hired by Facebook to create FAIR (Facebook AI Research), which he led until 2018 from New York, splitting his time between NYU and FAIR. That made him the public face of AI at Facebook, broadening his role from a researcher famous within several fields to a tech industry leader frequently discussed in newspapers and magazines. In 2018 he stepped down from the director role and became Facebook’s Chief AI Scientist, focusing on strategy and scientific leadership.


