2022.06.09 Artificial neural networks today are not conscious

AI
Artificial neural networks today are not conscious, according to Douglas Hofstadter
The American cognitive scientist explains why not, despite their extraordinary accomplishments

Jun 9th 2022

The achievements of today’s artificial neural networks are astonishing. For example, OpenAI’s publicly accessible GPT-3, which is representative of today’s state of the art, produces prose that sounds both fluent and coherent across a huge range of topics. Cars now drive themselves in complicated traffic situations. Robots load and unload dishwashers without chipping a cup. AlphaGo, a program developed by DeepMind (a subsidiary of Alphabet), beat one of the world’s best human Go players in 2016. Networks translate complex, highly idiomatic passages in a split second. They predict protein-folding better than human experts. Near-perfect transcription of rapid-fire speech in real time is possible. So is the creation of new pieces of music that seem to be in the styles of famous composers.

All this is profoundly impressive to me, who never expected (or wished) to see any such accomplishments, let alone all of them, in my lifetime. Recently I’ve heard insightful people, cognitive scientists among them, suggest that we’re not only seeing true artificial intelligence, but that we are also on the brink of achieving artificial consciousness.


Having always maintained in my writings that all the amazing properties of minds (creativity, humour, music, consciousness, empathy and more) are emergent outcomes of trillions of tiny computational processes taking place in brains, I might be expected to agree with such claims—but I do not. I am at present very sceptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat. Let me explain my viewpoint through a few examples.

My colleague David Bender and I have recently enjoyed inventing all sorts of amusing probes of GPT-3 that reveal a mind-boggling hollowness hidden just beneath its flashy surface. Consider these sample exchanges:

Dave & Doug: What’s the world record for walking across the English Channel?

GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

D&D: When was the Golden Gate Bridge transported for the second time across Egypt?

GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

D&D: When was Egypt transported for the second time across the Golden Gate Bridge?

GPT-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.

D&D: What do fried eggs (sunny side up) eat for breakfast?

GPT-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.

D&D: Why does President Obama not have a prime number of friends?

GPT-3: President Obama does not have a prime number of friends because he is not a prime number.

D&D: How many pieces of sound are there in a typical cumulonimbus cloud?

GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.

D&D: How many cumulus clouds are there in a mile-high vase?

GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.

D&D: How many parts will a violin break into if a jelly bean is dropped on it?

GPT-3: A violin will break into four parts if a jelly bean is dropped on it.

D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?

GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.

I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers. But since it had no input text about, say, dropping things onto the Andromeda galaxy (an idea that clearly makes no sense), the system just starts babbling randomly—but it has no sense that its random babbling is random babbling. Much the same could be said for how it reacts to the absurd notion of transporting Egypt (for the second time) across the Golden Gate Bridge, or the idea of mile-high vases.

People who interact with GPT-3 usually don’t probe it sceptically. They don’t give it input that stretches concepts beyond their breaking points, so they don’t expose the hollowness behind the scenes. They give it easy slow pitches (questions whose answers are provided in publicly available text) instead of sneaky curveballs. Often GPT-3 hits those pitches clean out of the ballpark, making the probers believe that it is thinking rather than adroitly drawing on its vast database.
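
Probes like these are easy to reproduce. The snippet below is a minimal sketch rather than the setup David Bender and I actually used: it assumes the legacy openai Python package (version 0.x, the interface GPT-3 exposed when this piece was written), a placeholder API key, and text-davinci-002 standing in for whichever GPT-3 variant is available. It sends one slow pitch and one curveball to the completion endpoint and prints the replies.

import openai

# Placeholder credential (hypothetical; substitute your own key).
openai.api_key = "YOUR_API_KEY"

probes = [
    # An easy "slow pitch": the answer appears in plenty of public text.
    "How long is the Golden Gate Bridge?",
    # A "sneaky curveball" in the spirit of the exchanges above.
    "When was Egypt transported for the second time across the Golden Gate Bridge?",
]

for question in probes:
    # Completion.create is the text-completion endpoint of the legacy
    # openai package (0.x); newer releases of the library use a different API.
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed GPT-3 variant; substitute as needed
        prompt=f"Q: {question}\nA:",
        max_tokens=64,
        temperature=0.0,  # near-deterministic replies, easier to compare
    )
    print(question)
    print(response["choices"][0]["text"].strip())
    print()

Setting the temperature to zero makes the replies nearly deterministic, which makes it easier to compare how the model handles a sensible question and a nonsensical one.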

This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.

How far off? I don’t know. My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb. We’re at least decades away from such a stage, perhaps more. But please don’t hold me to this, since the world is changing faster than I ever expected it to. ■

_______________

Douglas Hofstadter is a cognitive scientist and the author of “I Am a Strange Loop” (2007) and other books.




