
Full Text of the Open Letter: Musk Calls for a Pause on Giant AI Experiments

工信头条 · 2023-03-30 10:21

Original title: Musk's Open Letter (Full Text)

Source: Future of Life / 半导体风向标

Pause Giant AI Experiments: An Open Letter

Publisher: Future of Life Institute

Date: March 29, 2023

Original link: https://futureoflife.org/open-letter/pause-giant-ai-experiments/


We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening; in recent months AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves:

Should we let machines flood our information channels with propaganda and untruth?

Should we automate away all the jobs, including the fulfilling ones?

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?

Should we risk loss of control of our civilization?

Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence notes that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions to cope with the dramatic economic and political disruptions that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and we can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Original English text of the open letter

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research, and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
