File Name | S22-Policy-Stoa-43-NEG-PredictivePolicing.docx
File Size | 112.68 KB
Date Added | February 11, 2022
Category | Policy (Stoa)
Author | Vance Trefethen
Resolved: The United States federal government should substantially reform the use of Artificial Intelligence technology
Case Summary: The AFF plan bans local governments from using AI systems that predict where crimes are most likely to occur, which police use to focus their resources where they are most needed. The Affirmative harm will most likely be racial bias that the AI perpetuates, supposedly worse than human policing because, as we all know, there is no human racial bias anywhere today (ahem… <cough>). The test is not “Does AI have bias?” The question should be “Does AI have more bias than humans?” And the answer is no.
The most powerful Solvency argument is probably #2, where the evidence says each police department should study predictive policing for itself to determine its effectiveness. A blanket federal policy is bad because police departments implement it differently and local conditions vary. If it works in some places and not in others, the unsuccessful places shouldn’t stop the successful ones from using it. Winning the evidence battle over the studies will be a key Negative strategy: there are no empirical studies showing predictive policing has ever actually harmed anyone; two studies found it has no effect (and we have specific refutation of both – Shreveport and Chicago); and many studies show it’s beneficial.