{"id":1117,"date":"2024-09-22T07:39:56","date_gmt":"2024-09-21T23:39:56","guid":{"rendered":"https:\/\/www.kafeizha.com\/?p=1117"},"modified":"2024-09-22T07:39:56","modified_gmt":"2024-09-21T23:39:56","slug":"%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd%e5%85%88%e9%a9%b1%e5%91%bc%e5%90%81%e4%bf%9d%e6%8a%a4%e3%80%8c%e7%81%be%e9%9a%be%e6%80%a7%e9%a3%8e%e9%99%a9%e3%80%8d","status":"publish","type":"post","link":"https:\/\/news.tomjun.com\/?p=1117","title":{"rendered":"\u4eba\u5de5\u667a\u80fd\u5148\u9a71\u547c\u5401\u4fdd\u62a4\u300c\u707e\u96be\u6027\u98ce\u9669\u300d"},"content":{"rendered":"<p><b>\u65b0\u95fb\u6765\u6e90\uff1a<\/b>www.nytimes.com<br \/> <b>\u539f\u6587\u5730\u5740\uff1a<\/b><font size=\"-1\"><a href=\"https:\/\/www.nytimes.com\/2024\/09\/16\/business\/china-ai-safety.html target=\"_blank\">A.I. Pioneers Call for Protections Against \u2018Catastrophic Risks\u2019<\/a><\/font><br \/> <b>\u65b0\u95fb\u65e5\u671f\uff1a<\/b>2024-09-16<\/p>\n<p> \u4eba\u5de5\u667a\u80fd\u5148\u9a71\u547c\u5401\u5236\u5b9a\u5168\u7403\u76d1\u7ba1\u673a\u5236<\/p>\n<p>\u5168\u7403\u591a\u540d\u4eba\u5de5\u667a\u80fd\u4e13\u5bb6\u5171\u540c\u53d1\u8868\u58f0\u660e\uff0c\u8981\u6c42\u5404\u56fd\u653f\u5e9c\u51fa\u53f0\u76d1\u7ba1\u63aa\u65bd\uff0c\u4ee5\u5e94\u5bf9\u5feb\u901f\u53d1\u5c55\u7684\u6280\u672f\u53ef\u80fd\u5e26\u6765\u7684\u707e\u96be\u6027\u98ce\u9669\u3002\u4ed6\u4eec\u8ba4\u4e3a\uff0c\u8fd9\u9879\u6280\u672f\u7684\u53d1\u5c55\u901f\u5ea6\u5982\u6b64\u4e4b\u5feb\uff0c\u5355\u9760\u4f01\u4e1a\u548c\u56fd\u5bb6\u96be\u4ee5\u51b3\u5b9a\u5982\u4f55\u76d1\u7763\u548c\u7ba1\u7406\u3002<\/p>\n<p>\u4f1a\u8bae\u53ec\u96c6\u4eba\u4e4b\u4e00\u7684 Yoshua Bengio \u8868\u793a\uff0c\u8fd9\u6b21\u6d3b\u52a8\u65e8\u5728\u4fc3\u8fdb\u56fd\u9645\u4ea4\u6d41\u4e0e\u5408\u4f5c\uff0c\u907f\u514d\u201c\u6f58\u591a\u62c9\u9b54\u76d2\u201d\u88ab\u6253\u5f00\u3002\u6765\u81ea\u4e2d\u56fd\u77e5\u540d\u4eba\u5de5\u667a\u80fd\u7814\u7a76\u673a\u6784\u7684 Fu Hongyu 
\u4e5f\u8868\u8fbe\u4e86\u7c7b\u4f3c\u89c2\u70b9\uff0c\u8ba4\u4e3a\u9700\u8981\u5171\u540c\u534f\u4f5c\u6765\u5236\u5b9a\u4eba\u5de5\u667a\u80fd\u76d1\u7ba1\u673a\u5236\u3002<\/p>\n<p>\u6b64\u6b21\u4f1a\u8bae\u4e0a\uff0cGeoffrey Hinton\u3001Andrew Yao \u548c\u591a\u4f4d\u4e2d\u56fd\u4eba\u5de5\u667a\u80fd\u9886\u57df\u7684\u77e5\u540d\u4eba\u58eb\u53c2\u52a0\u4e86\u8ba8\u8bba\u3002\u4ed6\u4eec\u5728\u4e2d\u56fd\u6587\u827a\u590d\u5174\u5bab\u6bbf\u4e3e\u884c\u4f1a\u8bae\uff0c\u63a2\u8ba8\u5982\u4f55\u907f\u514d\u4eba\u5de5\u667a\u80fd\u5e26\u6765\u7684\u707e\u96be\u6027\u7ed3\u679c\u3002<\/p>\n<p>\u968f\u7740\u4e2d\u7f8e\u4e24\u56fd\u5728\u79d1\u6280\u9886\u57df\u7684\u7ade\u4e89\u52a0\u5267\uff0c\u4eba\u5de5\u667a\u80fd\u7684\u5b89\u5168\u95ee\u9898\u6108\u53d1\u53d7\u5230\u5173\u6ce8\u3002\u6700\u65b0\u4e00\u6b21\u7684\u4f1a\u8bae\u7ed3\u679c\u663e\u793a\uff0c\u6765\u81ea\u5168\u7403 28 \u4e2a\u56fd\u5bb6\u7684\u4ee3\u8868\u5df2\u7b7e\u7f72\u4e86\u4e00\u4efd\u8054\u5408\u58f0\u660e\uff0c\u4ee5\u4fc3\u8fdb\u4eba\u5de5\u667a\u80fd\u7684\u5408\u4f5c\u4e0e\u534f\u4f5c\u3002 <\/p>\n<hr>\n<p> <b>\u539f\u6587\u6458\u8981\uff1a<\/b><\/p>\n<p> Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it.In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. 
technology could, within a matter of years, overtake the capabilities of its makers and that \u201closs of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.\u201d If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. \u201cIf we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?\u201d Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body. Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China\u2019s top tech companies. 
Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China\u2019s leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion. Their latest gathering in Venice took place at a building owned by the billionaire philanthropist Nicolas Berggruen. The president of the Berggruen Institute think tank, Dawn Nakagawa, participated in the meeting and signed the statement released on Monday. The meetings are a rare venue for engagement between Chinese and Western scientists at a time when the United States and China are locked in a tense competition for technological primacy. In recent months, Chinese companies have unveiled technology that rivals the leading American A.I. systems. Government officials in both China and the United States have made artificial intelligence a priority in the past year. In July, a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework. Last October, President Biden signed an executive order that required companies to report to the federal government about the risks that their A.I. systems could pose, like their ability to create weapons of mass destruction or potential to be used by terrorists. President Biden and China\u2019s leader, Xi Jinping, agreed when they met last year that officials from both countries should hold talks on A.I. safety. 
The first took place in Geneva in May. In a broader government initiative, representatives from 28 countries signed a declaration in Britain last November, agreeing to cooperate on evaluating the risks of artificial intelligence. They met again in Seoul in May. But these gatherings have stopped short of setting specific policy goals. Distrust between the United States and China adds to the difficulty of achieving alignment. \u201cBoth countries are hugely suspicious of each other\u2019s intentions,\u201d said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who was not part of the dialogue. \u201cThey\u2019re worried that if they pump the brakes because of safety concerns, that will allow the other to zoom ahead,\u201d Mr. Sheehan said. \u201cThat suspicion is just going to be baked in.\u201d The scientists who met in Venice this month said their conversations were important because scientific exchange is shrinking amid the competition between the two geopolitical superpowers. In an interview, Dr. Bengio, one of the founding members of the group, cited talks between American and Soviet scientists at the height of the Cold War that helped bring about coordination to avert nuclear catastrophe. In both cases, the scientists involved felt an obligation to help close the Pandora\u2019s box opened by their research. Technology is changing so quickly that it is difficult for individual companies and governments to decide how to approach it, and collaboration is crucial, said Fu Hongyu, the director of A.I. governance at Alibaba\u2019s research institute, AliResearch, who did not participate in the dialogue. \u201cIt\u2019s not like regulating a mature technology,\u201d Mr. Fu said. \u201cNobody knows what the future of A.I. 
looks like.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>News source: www.nytimes.com Original URL:<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[90],"tags":[1879,1160,1228,1880,1878],"class_list":["post-1117","post","type-post","status-publish","format-standard","hentry","category-90","tag-1879","tag-1160","tag-1228","tag-1880","tag-1878"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/posts\/1117","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1117"}],"vers
ion-history":[{"count":1,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/posts\/1117\/revisions"}],"predecessor-version":[{"id":1118,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=\/wp\/v2\/posts\/1117\/revisions\/1118"}],"wp:attachment":[{"href":"https:\/\/news.tomjun.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1117"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1117"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/news.tomjun.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1117"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}