File Name | S22-Policy-Stoa-65-NEG-SigDrone.docx
File Size | 58.68 KB
Date Added | February 28, 2022
Category | Archived, Policy (Stoa)
Author | Vance Trefethen
Resolved: The United States federal government should substantially reform the use of Artificial Intelligence technology
Case Summary: The AFF plan bans the use of AI-fueled “Signature” drone strikes. Under AFF’s theory, there are two types of weaponized drone strikes the US CIA and military use. “Personality” strikes involve careful intelligence and certainty that the target is who we think he is, and that he’s a legitimate bad guy. “Signature” strikes (allegedly) use AI to analyze the behavior, patterns, movements, etc. of a target, make an educated guess that this is a bad guy, and then the drone bombs him and anyone around him. Mistakes happen and innocent civilians get killed.
Listen carefully to how the AFF is running this case. Carefully pick the arguments that deal specifically with the way their case is written, and don’t run arguments from this brief that don’t apply.
The definition of AI may be an issue, depending on how AFF defines it and what evidence is introduced to prove that drones actually do use AI (and not just “software” or “metadata” or “algorithms” or some other vague term).
The AFF case may also involve issues surrounding the Authorization for Use of Military Force (AUMF) of 2001. This was the resolution passed by Congress in the wake of the 9/11/2001 terrorist attacks, authorizing the President to take any necessary military action against any and all bad guys who might have been involved. AFF may argue that drones (all drones) are used to strike all kinds of targets who had nothing to do with 9/11 and therefore are not authorized by the AUMF. But this is not a harm unless someone (besides a terrorist) gets harmed: there has to be some specific impact to “not complying with the AUMF,” or else it doesn’t matter.