| File Name | S22-Policy-Stoa-34-NEG-EmotionRecog.docx |
| File Size | 58.66 KB |
| Date Added | January 3, 2022 |
| Category | Policy (Stoa) |
| Author | Vance Trefethen |
Resolved: The United States federal government should substantially reform the use of Artificial Intelligence technology.
Case Summary: The AFF plan bans companies from using “Emotion Recognition Technology” (ERT) in interviewing and hiring job candidates, on the grounds that it is biased against minorities and disabled people, denying them equal opportunities. ERT uses AI to analyze facial expressions in a video of a job interview and reports on the candidate’s emotions.

The question should not be “Is AI accurately judging emotions?” but rather “Is AI judging emotions less accurately than humans?” The fact that AI makes mistakes doesn’t prove it’s bad, because humans make mistakes in judging facial expressions too. The AFF’s evidence about “1000 studies” saying facial analysis is flawed indicts human recognition as well.

Moreover, banning businesses from using AI to make hiring decisions violates a basic property right. Property owners should be able to exercise their God-given right to decide whom they let onto their property and whom they hire to work for them. Government taking that decision away from them violates human rights. Businesses that make mistakes and pass over well-qualified candidates will suffer their own punishment: other firms will hire those good candidates and out-compete them in the market. No need for government to do anything.