Are AI-Generated Faces More Trustworthy?

Source: Featured Articles · Published: 2023-04-18

By Emily Willingham · Translated by Chen Xianyu

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.

One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.

The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.

A new study published in the Proceedings of the National Academy of Sciences of the United States of America provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces—and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.”

“We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

The synthetic faces for this study were developed in back-and-forth interactions between two neural networks, examples of a type known as generative adversarial networks. One of the networks, called a generator, produced an evolving series of synthetic faces like a student working progressively through rough drafts. The other network, known as a discriminator, trained on real images and then graded the generated output by comparing it with data on actual faces.

The generator began the exercise with random pixels. With feedback from the discriminator, it gradually produced increasingly realistic humanlike faces. Ultimately, the discriminator was unable to distinguish a real face from a fake one.
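The adversarial loop described above can be sketched in miniature. The toy below is our own illustration, not the study's method (the study used large image GANs): the “data” are just numbers near 4.0, the generator is a single parameter `theta`, and the discriminator is a one-variable logistic classifier. The same alternating updates drive `theta` toward the real distribution until the discriminator is back near chance.

```python
import math
import random

# Toy 1-D GAN sketch (illustrative only). "Real" samples cluster near 4.0;
# the generator's fakes are theta + noise. Alternating gradient steps pull
# theta toward the real mean, exactly the dynamic the article describes.
random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5
theta = 0.0          # generator parameter: fake sample = theta + noise
w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
LR_D, LR_G, STEPS, BATCH = 0.02, 0.02, 4000, 8

def sigmoid(z):
    z = max(-30.0, min(30.0, z))   # guard against math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def discriminate(x):
    return sigmoid(w * x + b)      # estimated probability that x is real

for step in range(STEPS):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    fakes = [theta + random.gauss(0.0, REAL_STD) for _ in range(BATCH)]

    # Discriminator step: raise D on real samples, lower it on fakes.
    gw = gb = 0.0
    for x in reals:
        d = discriminate(x)
        gw += -(1.0 - d) * x       # gradient of -log D(x) w.r.t. w
        gb += -(1.0 - d)
    for x in fakes:
        d = discriminate(x)
        gw += d * x                # gradient of -log(1 - D(x)) w.r.t. w
        gb += d
    w -= LR_D * gw / (2 * BATCH)
    b -= LR_D * gb / (2 * BATCH)

    # Generator step: shift theta so fakes score as "real" (-log D(fake)).
    gt = 0.0
    for x in fakes:
        gt += -(1.0 - discriminate(x)) * w
    theta -= LR_G * gt / BATCH

print(round(theta, 2))  # theta has drifted from 0 toward the real mean
```

With images, `theta` becomes millions of convolutional weights and the discriminator a deep classifier, but the alternating two-player update is the same.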

The networks trained on an array of real images representing Black, East Asian, South Asian and white faces of both men and women, in contrast with the more common use of white men’s faces in earlier research.

After compiling 400 real faces matched to 400 synthetic versions, the researchers asked 315 people to distinguish real from fake among a selection of 128 of the images. Another group of 219 participants got some training and feedback about how to spot fakes as they tried to distinguish the faces. Finally, a third group of 223 participants each rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy).

The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent. The second group failed to show dramatic improvement, receiving only about 59 percent, even with feedback about those participants’ choices. The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people.
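The gaps behind these headline numbers are easy to miss on the 1-to-7 scale. The quick arithmetic below uses only the figures reported in the article (the raw per-participant data are not available here) to put the accuracy and trust differences side by side:

```python
# Arithmetic on the reported results only; no raw study data are used.
chance = 50.0
untrained_acc = 48.2                  # group 1: no training
trained_acc = 59.0                    # group 2: with training and feedback
trust_fake, trust_real = 4.82, 4.48   # mean ratings on a 1-7 scale

print(f"Untrained vs. chance: {untrained_acc - chance:+.1f} points")
print(f"Gain from training:   {trained_acc - untrained_acc:+.1f} points")
gap = trust_fake - trust_real
print(f"Trust gap (fake - real): {gap:+.2f}, "
      f"about {100 * gap / 6:.1f}% of the 6-point scale range")
```

So untrained viewers sat slightly below chance, training bought roughly eleven points, and synthetic faces carried a trust edge of about a third of a scale point.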

The researchers were not expecting these results. “We initially thought that the synthetic faces would be less trustworthy than the real faces,” says study co-author Sophie Nightingale.

The uncanny valley idea is not completely retired. Study participants did overwhelmingly identify some of the fakes as fake. “We’re not saying that every single image generated is indistinguishable from a real face, but a significant number of them are,” Nightingale says.

The finding adds to concerns about the accessibility of technology that makes it possible for just about anyone to create deceptive still images. “Anyone can create synthetic content without specialized knowledge of Photoshop or CGI,” Nightingale says. Another concern is that such findings will create the impression that deepfakes will become completely undetectable, says Wael Abd-Almageed, founding director of the Visual Intelligence and Multimedia Analytics Laboratory at the University of Southern California, who was not involved in the study. He worries scientists might give up on trying to develop countermeasures to deepfakes, although he views keeping their detection on pace with their increasing realism as “simply yet another forensics problem.”

“The conversation that’s not happening enough in this research community is how to start proactively to improve these detection tools,” says Sam Gregory, director of programs strategy and innovation at WITNESS, a human rights organization that in part focuses on ways to distinguish deepfakes. Making tools for detection is important because people tend to overestimate their ability to spot fakes, he says, and “the public always has to understand when they’re being used maliciously.”

Gregory, who was not involved in the study, points out that its authors directly address these issues. They highlight three possible solutions, including creating durable watermarks for these generated images, “like embedding fingerprints so you can see that it came from a generative process,” he says.
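The fingerprint idea can be illustrated with a deliberately simple scheme of our own: write a fixed bit pattern into the least-significant bits of the first few pixel values at generation time, then check for it later. Real proposals are far more robust to cropping and recompression; this least-significant-bit version only shows the embed/verify round trip, and every name in it is hypothetical.

```python
# Minimal LSB watermark sketch (our illustration, not the authors' scheme).
# A generator would stamp FINGERPRINT into each image it emits; a detector
# later checks whether the low bits of the first pixels match the pattern.
FINGERPRINT = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical 8-bit generator ID

def embed(pixels):
    """Return a copy with the fingerprint in the low bits of the first pixels."""
    out = list(pixels)
    for i, bit in enumerate(FINGERPRINT):
        out[i] = (out[i] & ~1) | bit     # overwrite only the lowest bit
    return out

def verify(pixels):
    """True if the image carries the generator's fingerprint."""
    return [p & 1 for p in pixels[:len(FINGERPRINT)]] == FINGERPRINT

plain = [200, 13, 77, 45, 9, 128, 255, 62, 31]   # toy grayscale pixel values
synthetic = embed(plain)
print(verify(synthetic))   # True: the stamped image verifies
print(verify(plain))       # False here: low bits don't match the pattern
```

Changing each pixel by at most one gray level is invisible to the eye, which is what makes the low bit a convenient, if fragile, hiding place; production schemes spread the mark redundantly across the whole image so it survives edits.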

The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” they write. “If so, then we discourage the development of technology simply because it is possible.” ■
