How the Enlightenment Ends
Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

By Henry A. Kissinger

Edmon de Haro
JUNE 2018 ISSUE
Three years ago, at a conference on transatlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session—it lay outside my usual concerns—but the beginning of the presentation held me in my seat.

The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.


The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.
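The self-play process the speaker described can be sketched in miniature. The program below is a toy illustration only, not AlphaGo's actual method (which combines deep neural networks with Monte Carlo tree search): it is given nothing but the rules of a simple subtraction game, plays innumerable games against itself, and refines its value estimates after each game, learning from its mistakes.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Toy game: 21 stones; players alternately remove 1-3 stones;
# whoever takes the last stone wins.
N_STONES = 21
MOVES = (1, 2, 3)

# value[n] ~ estimated probability that the player to move wins
# when n stones remain. Only the rules seed this table.
value = {n: 0.5 for n in range(N_STONES + 1)}
value[0] = 0.0  # no stones left: the player to move has already lost

def best_move(n, explore=0.0):
    """Pick the move that leaves the opponent the worst position."""
    legal = [m for m in MOVES if m <= n]
    if random.random() < explore:
        return random.choice(legal)  # occasional exploration
    return min(legal, key=lambda m: value[n - m])

def self_play_game(alpha=0.1, explore=0.1):
    """Play one game against itself, then nudge the value estimates
    of every visited state toward the observed outcome."""
    n, history = N_STONES, []
    while n > 0:
        history.append(n)
        n -= best_move(n, explore)
    # The side that made the last move won; walk backward through
    # the game, alternating win/loss between the two players.
    outcome = 1.0
    for state in reversed(history):
        value[state] += alpha * (outcome - value[state])
        outcome = 1.0 - outcome

for _ in range(20_000):
    self_play_game()

# Perfect play loses from any multiple of 4; the trained policy
# learns to leave its opponent exactly there.
print(best_move(21))  # takes 1, leaving 20 stones
```

Nothing here resembles Go's depth, but the shape is the same: rules in, trial-and-error self-play, and a policy that ends up stronger than the random play it started from.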

As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

Aware of my lack of technical competence in this field, I organized a number of informal dialogues on the subject, with the advice and cooperation of acquaintances in technology and the humanities. These discussions have caused my concerns to grow.


Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.


Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences.

As the internet and increased computing power have facilitated the accumulation and analysis of vast data, unprecedented vistas for human understanding have emerged. Perhaps most significant is the project of producing artificial intelligence—a technology capable of inventing and solving complex, seemingly abstract problems by processes that seem to replicate those of the human mind.

This goes far beyond automation as we have known it. Automation deals with means; it achieves prescribed objectives by rationalizing or mechanizing instruments for reaching them. AI, by contrast, deals with ends; it establishes its own objectives. To the extent that its achievements are in part shaped by itself, AI is inherently unstable. AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings. It makes strategic judgments about the future, some based on data received as code (for example, the rules of a game), and some based on data it gathers itself (for example, by playing 1 million iterations of a game).

The driverless car illustrates the difference between the actions of traditional human-controlled, software-powered computers and the universe AI seeks to navigate. Driving a car requires judgments in multiple situations impossible to anticipate and hence to program in advance. What would happen, to use a well-known hypothetical example, if such a car were obliged by circumstance to choose between killing a grandparent and killing a child? Whom would it choose? Why? Which factors among its options would it attempt to optimize? And could it explain its rationale? Challenged, its truthful answer would likely be, were it able to communicate: “I don’t know (because I am following mathematical, not human, principles),” or “You would not understand (because I have been trained to act in a certain way but not to explain it).” Yet driverless cars are likely to be prevalent on roads within a decade.

We must expect AI to make mistakes faster—and of greater magnitude—than humans do.
Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data.


Artificial intelligence will in time bring extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas. But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results. There are three areas of special concern:

First, that AI may achieve unintended results. Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context. A famous recent example was the AI chatbot called Tay, designed to generate friendly conversation in the language patterns of a 19-year-old girl. But the machine proved unable to define the imperatives of “friendly” and “reasonable” language installed by its instructors and instead became racist, sexist, and otherwise inflammatory in its responses. Some in the technology world claim that the experiment was ill-conceived and poorly executed, but it illustrates an underlying ambiguity: To what extent is it possible to enable AI to comprehend the context that informs its instructions? What medium could have helped Tay define for itself offensive, a word upon whose meaning humans do not universally agree? Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?


Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?


Edmon de Haro
Before AI began to play Go, the game had varied, layered purposes: A player sought not only to win, but also to learn new strategies potentially applicable to other of life’s dimensions. For its part, by contrast, AI knows only one purpose: to win. It “learns” not conceptually but mathematically, by marginal adjustments to its algorithms. So in learning to win Go by playing it differently than humans do, AI has changed both the game’s nature and its impact. Does this single-minded insistence on prevailing characterize all AI?


Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions. In certain fields—pattern recognition, big-data analysis, gaming—AI’s capacities already may exceed those of humans. If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI’s decision making surpass the explanatory powers of human language and reason? Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?


How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?

Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.

The implications of this evolution are shown by a recently designed program, AlphaZero, which plays chess at a level superior to chess masters and in a style not previously seen in chess history. On its own, in just a few hours of self-play, it achieved a level of skill that took human beings 1,500 years to attain. Only the basic rules of the game were provided to AlphaZero. Neither human beings nor human-generated data were part of its process of self-learning. If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices?


Typically, these questions are left to technologists and to the intelligentsia of related scientific fields. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. And governance, insofar as it deals with the subject, is more likely to investigate AI’s applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.

Henry A. Kissinger served as national security adviser and secretary of state to Presidents Richard Nixon and Gerald Ford.



启蒙运动如何结束
在哲学上、在思想上,在一切方面,人类社会都对人工智能的崛起毫无准备。

作者:亨利·A·基辛格

埃德蒙·德·哈罗
2018年6月号
三年前,在一次关于跨大西洋问题的会议上,人工智能的话题出现在议程上。我本打算跳过那个环节,因为它不在我通常关注的范围之内,但演讲的开头却把我牢牢地留在了座位上。

演讲者描述了一个很快将在围棋比赛中挑战国际冠军的计算机程序的工作原理。我对计算机能够掌握比国际象棋更复杂的围棋感到惊奇。在围棋中,每位棋手部署180或181枚棋子(取决于他或她选择的颜色),交替落在最初的空棋盘上;胜利属于通过更好的战略决策、更有效地控制地盘从而使对手动弹不得的一方。


演讲者坚持认为,这种能力是无法预先编程的。他说,他的机器是通过不断练习自我训练,从而学会了围棋。在只掌握围棋基本规则的前提下,计算机与自己进行了无数次对弈,从错误中学习并相应地完善自己的算法。在这个过程中,它的棋艺超越了其人类导师。而事实上,在这次演讲之后的几个月里,一个名为AlphaGo的人工智能程序果然决定性地击败了世界上最优秀的围棋选手。

当我听着演讲者赞颂这一技术进步时,我作为历史学家、并曾不时亲身从政的经验让我心生疑虑。自我学习的机器,也就是通过自身特有的过程获取知识,并将这些知识用于人类或许根本无从理解的目的的机器,将对历史产生什么影响?这些机器会学会相互交流吗?如何在新出现的选项中做出选择?人类历史是否有可能重蹈印加人的覆辙,面对一种他们无法理解、甚至令其敬畏的西班牙文化?我们是否正处于人类历史一个新阶段的边缘?

意识到自己在这一领域缺乏技术能力,我在科技界和人文领域一些熟人的建议与协助下,组织了若干关于这一主题的非正式对话。这些讨论使我的担忧与日俱增。


在此之前,最深刻地改变现代历史进程的技术进步是15世纪印刷术的发明,它使对经验知识的探求得以取代教会教义,使理性时代逐渐取代宗教时代。个人的洞察力和科学知识取代信仰,成为人类意识的主要准绳。信息在不断扩充的图书馆中被储存和系统化。理性时代孕育了塑造当代世界秩序的思想和行动。

但这一秩序现在正处于一场新的、甚至更彻底的技术革命的动荡之中,其后果我们还没有完全估计到,其高潮可能是一个依靠数据和算法驱动的机器、不受伦理或哲学规范约束的世界。

我们已然身处的互联网时代,预示了一些人工智能只会使之更加尖锐的问题。启蒙运动试图让传统真理接受获得解放的、善于分析的人类理性的检验。而互联网的目的,是通过积累和处理不断扩张的数据来确证知识。人类的认知失去了其个人特征。个人变成了数据,而数据成为主宰。


互联网用户看重的是检索和处理信息,而不是结合背景理解信息的意义或将其概念化。他们很少探究历史或哲学;通常,他们只索取与眼前实际需要相关的信息。在这个过程中,搜索引擎的算法获得了预测个体用户偏好的能力,从而能够将结果个性化,并将其提供给第三方用于政治或商业目的。真理变得相对。信息大有压倒智慧之势。

用户被社交媒体上大众的意见所淹没,无暇自省;事实上,许多技术爱好者正是利用互联网来逃避他们所惧怕的孤独。所有这些压力都在削弱一种毅力,即发展和坚守那些唯有走过一条孤独道路才能实现的信念所需的毅力,而这正是创造力的本质。

互联网技术对政治的影响尤为显著。针对微观群体精准发声的能力打破了以往关于优先事项的共识,使人们得以专注于特定的目标或怨愤。政治领导人被各种小众压力压得喘不过气,被剥夺了思考和审视全局的时间,供他们酝酿远见的空间随之收缩。

数字世界对速度的强调抑制了反思;它的激励机制使激进者而非深思者得势;它的价值观由亚群体的共识而非内省塑造。尽管成就斐然,但当它强加于人的负担压倒它带来的便利时,数字世界就有反噬自身的危险。

随着互联网和不断增强的计算能力促进了海量数据的积累与分析,人类理解力获得了前所未有的前景。其中最重要的,也许是创造人工智能的事业,这是一种能够通过似乎复制人类思维的过程,来发明和解决复杂、看似抽象的问题的技术。

这远远超出了我们所熟知的自动化。自动化处理的是手段:它通过使达成目标的工具合理化或机械化来实现规定的目标。相比之下,人工智能处理的是目的:它确立自己的目标。由于其成就部分由自身塑造,人工智能在本质上是不稳定的。人工智能系统通过其运作本身处于不断变化之中:它们获取并即时分析新数据,然后在分析的基础上设法改进自己。通过这一过程,人工智能发展出一种以前被认为是人类专属的能力。它对未来做出战略判断,有些基于以代码形式接收的数据(例如游戏规则),有些基于它自己收集的数据(例如,通过进行一百万次对局迭代)。

无人驾驶汽车说明了传统的由人类控制、由软件驱动的计算机的行为,与人工智能试图驾驭的世界之间的区别。驾驶汽车需要在无数无法预料、因而无法提前编程的情境中做出判断。用一个众所周知的假设性例子来说,如果这样一辆汽车因情势所迫,必须在撞死一位祖父母和撞死一个孩子之间做出选择,会发生什么?它会选择谁?为什么?在各种选项中,它会试图优化哪些因素?它能解释自己的理由吗?面对质问,假如它能够沟通,它如实的回答很可能是:"我不知道(因为我遵循的是数学原理,而不是人类原理)",或者"你不会明白(因为我被训练成以某种方式行动,却没有被训练去解释它)"。然而,无人驾驶汽车很可能在十年之内就会普遍行驶在道路上。

我们必须预期人工智能会比人类更快、更大规模地犯错。
迄今为止,人工智能研究一直局限于特定的活动领域,而现在则寻求实现能够在多个领域执行任务的"通用智能"人工智能。在一段可以预见的时间内,越来越大比例的人类活动将由人工智能算法驱动。但这些算法只是对观测数据的数学解释,并不解释产生这些数据的底层现实。矛盾的是,随着世界变得更加透明,它也将变得越来越神秘。那个新世界与我们已知的世界有何不同?我们将如何生活在其中?我们将如何管理人工智能、改进它,或者至少防止它造成伤害?而这一切最终汇成一个最不祥的忧虑:人工智能通过比人类更迅速、更决断地掌握某些能力,可能随着时间的推移,在把人类能力和人类境况本身变成数据的过程中将其削弱。


假以时日,人工智能将给医学、清洁能源供应、环境问题及许多其他领域带来非凡的益处。但正因为人工智能是对一个不断演变、尚未确定的未来做出判断,不确定性和模糊性是其结果所固有的。有三个领域尤其令人担忧:

第一,人工智能可能产生意料之外的结果。科幻小说曾想象人工智能反噬其创造者的情景。更有可能的危险是,由于天生缺乏语境,人工智能会误解人类的指令。近来一个著名的例子是名为Tay的人工智能聊天机器人,它被设计成以一个19岁女孩的语言模式进行友好交谈。但事实证明,这台机器无法把握设计者为它设定的"友好"与"理性"语言的要求,反而在应答中变得种族主义、性别歧视且充满煽动性。一些技术界人士声称这个实验构思不当、执行不力,但它揭示了一个深层的模糊之处:在多大程度上,我们能让人工智能理解支配其指令的语境?什么样的媒介能帮助Tay自行定义"冒犯"这个连人类自己都没有一致理解的词?我们能否在早期阶段发现并纠正一个行事超出我们预期框架的人工智能程序?还是说,任其自行其是的人工智能将不可避免地产生微小的偏差,而这些偏差会随着时间推移层层累积,酿成灾难性的偏离?


第二,在实现既定目标的过程中,人工智能可能改变人类的思维过程和人类的价值观。AlphaGo靠着在战略上前所未有的着法击败了世界围棋冠军,这些着法是人类未曾设想、至今也未能成功学会破解的。这些着法超出了人脑的能力吗?还是说,既然新的大师已经展示了它们,人类现在也能学会?


埃德蒙·德·哈罗
在人工智能开始下围棋之前,这项棋艺有着多样而富有层次的目的:棋手不仅求胜,也想学习或可应用于人生其他方面的新策略。相比之下,人工智能只知道一个目的:赢。它的"学习"不是概念性的,而是数学性的,即通过对算法的边际调整来实现。因此,通过以不同于人类的方式学会赢棋,人工智能已经改变了这项棋艺的性质及其影响。这种一心求胜的执念是所有人工智能的特征吗?


其他人工智能项目则致力于改变人类的思维方式,开发能够对人类的提问生成一系列答案的设备。除了事实性问题("外面气温多少?"),关于现实的本质或生命的意义的提问会引出更深层的问题。我们希望孩子们通过与不受约束的算法对话来学习价值观吗?我们是否应该通过限制人工智能对提问者的了解来保护隐私?如果是,又该如何实现这些目标?

如果人工智能以指数级快于人类的速度学习,我们就必须预期,它也会以指数级的速度加速人类通常赖以做出决策的试错过程:也就是比人类更快地犯下更大的错误。正如人工智能研究者常常建议的那样,通过在程序中加入要求结果"合乎道德"或"合理"的限制条款来约束这些错误,也许是不可能的:人类正是因为无法就这些术语的定义达成一致,才催生出一门门完整的学科。那么,人工智能应该因此成为这些术语的仲裁者吗?

第三,人工智能可能达到预期目标,却无法解释其结论背后的理由。在某些领域,如模式识别、大数据分析、博弈,人工智能的能力可能已经超过人类。如果其计算能力继续快速增长,人工智能也许很快就能以至少与人类略有不同、甚至很可能大不相同的方式对局势进行优化。但到那时,人工智能能否以人类可以理解的方式解释其行动为何是最优的?还是说,人工智能的决策将超出人类语言和理性的解释能力?纵观人类历史,各个文明都创造了解释周围世界的方式:中世纪靠宗教,启蒙时代靠理性,19世纪靠历史,20世纪靠意识形态。关于我们正在步入的世界,最困难却也最重要的问题是:如果人类意识自身的解释能力被人工智能超越,社会不再能以对自身有意义的语汇去解读其栖身的世界,人类的意识将变成什么?


在一个机器把人类经验简化为数学数据、再由其自身的记忆加以解释的世界里,意识该如何定义?谁为人工智能的行为负责?它们犯错时应如何认定责任?由人类设计的法律体系,能否跟上由在思考上胜过人类、并可能在行动上智胜人类的人工智能所产生的种种活动?

归根结底,"人工智能"这个词也许是个误称。诚然,这些机器能够解决以往只有人类认知才能攻克的复杂、看似抽象的问题。但它们独有的本领并不是迄今为止人们所设想和体验的那种思考,而是史无前例的记忆与计算。凭借在这些领域的固有优势,人工智能很可能赢下交给它的任何游戏。但对我们人类而言,游戏不只关乎输赢,还关乎思考。如果把一个数学过程当作思维过程,并且要么试图亲自模仿它,要么干脆只接受其结果,我们就有失去人类认知之本质的危险。

这一演进的含义,从最近设计的程序AlphaZero上可见一斑:它下国际象棋的水平超过了国际象棋大师,棋风也是国际象棋历史上前所未见的。仅凭自身,只经过短短几小时的自我对弈,它就达到了人类耗费1500年才企及的棋艺水平。提供给AlphaZero的只有这项棋艺的基本规则。无论人类还是人类生成的数据,都不是其自我学习过程的一部分。如果AlphaZero能够如此迅速地达到这种精熟,那么五年后的人工智能会走到哪一步?这对人类认知整体会有什么影响?在这个本质上是不断加速做出选择的过程中,伦理又扮演什么角色?


通常,这些问题被留给技术专家和相关科学领域的知识阶层。而曾帮助塑造以往世界秩序观念的哲学家及其他人文学者往往处于不利地位:他们要么缺乏对人工智能机制的了解,要么被其能力所震慑。相比之下,科学界被驱使着去探索其成就的技术可能性,技术界则专注于规模惊人的商业前景。这两个世界的动机都是去突破发现的极限,而不是去理解这些发现。至于治理层面,就其涉及这一主题而言,也更可能去研究人工智能在安全与情报方面的应用,而不是去探究它已经开始引发的人类境况的转变。

启蒙运动发端于由一种新技术传播开来的根本上属于哲学的洞见。我们这个时代正朝相反的方向行进:它催生了一种可能居于支配地位、却仍在寻找指导哲学的技术。其他国家已把人工智能列为重大国家工程。而美国作为一个国家,尚未系统地探究其全部范围、研究其影响,或展开终极学习的进程。这应当被列为国家的高度优先事项,尤其是从将人工智能与人文传统相联系的角度来看。

人工智能的开发者在政治和哲学上缺乏经验,正如我在技术上缺乏经验一样;他们应当问一问自己我在此提出的一些问题,以便把答案融入其工程工作之中。美国政府应考虑设立一个由杰出思想家组成的总统委员会,帮助制定国家愿景。有一点是确定无疑的:如果我们不尽快启动这项工作,要不了多久我们就会发现,我们开始得太晚了。

亨利·A·基辛格曾担任理查德·尼克松和杰拉尔德·福特总统的国家安全顾问和国务卿。