| File Name | S22-Policy-Stoa-33-NEG-DeepFake.docx |
| File Size | 67.33 KB |
| Date Added | January 3, 2022 |
| Category | Policy (Stoa) |
| Author | Vance Trefethen |
Resolved: The United States federal government should substantially reform the use of Artificial Intelligence technology
Case Summary: The AFF plan regulates/restricts/bans the distribution of "deep fake" (DF) products. Deep fakes are pictures or videos that have been altered using AI to make it appear, falsely, that someone did or said things that never actually happened. The most common use of DF is embarrassing pornography, where a porn video is edited to substitute someone else's face. DF can also be used for political purposes, to deceive voters into thinking a political leader said things he never actually said.
There are multiple avenues of attack. You can argue that detection technology will advance enough for fakes to be detected and flagged. You can also argue that if it doesn't, any ban or regulation becomes unenforceable: if AI can make a DF that is indistinguishable from reality, no one could recognize a fake in order to prosecute whoever made it. To prosecute someone, you have to prove in a court of law that the product is fake, and how could you do that if fakes are impossible to distinguish from reality?
The harms are exaggerated anyway. Fake media is nothing new: John Adams and Thomas Jefferson both complained about the media deceiving people with false stories that led voters astray, and somehow our democracy survived. And the disadvantage of having the government get into the business of censoring media content, deciding what is true and what is not, is that we lose the vital protections of the First Amendment, which outweighs any benefit of the Plan.