Harvard Law Clinic and Jenner & Block LLP Sue for Information Refugees International Requested on AI’s Role in Asylum Decisions
Two years after asking the Biden administration how it was using artificial intelligence (AI) in the process of adjudicating asylum claims, Refugees International is going to court to demand an answer. Today, the Harvard Immigration and Refugee Clinical Program and Jenner & Block LLP filed a lawsuit in DC District Court to obtain documents Refugees International requested through a Freedom of Information Act (FOIA) request in December 2022.
In 2022, the U.S. Department of Homeland Security (DHS) posted a list of new AI tools on its website. One was called Asylum Text Analytics (ATA), which “employs machine learning and data graphing techniques to identify plagiarism-based fraud in applications for asylum status and for withholding of removal by scanning the digitized narrative sections of the associated forms and looking for common language patterns.” As detailed in the complaint filed in court today, the only public information about the use of this tool appeared in a privacy impact assessment of the AI application DHS was using to run ATA. Further, over the last two years, DHS has inconsistently included ATA on the publicly available list of tools being used by United States Citizenship and Immigration Services (USCIS). USCIS has not held any engagements with stakeholders about its use of ATA or indicated which asylum applications have been screened by the application.
The FOIA request Refugees International filed sought records on how the AI technology was trained to detect fraud, how USCIS officers used the reports generated by ATA to adjudicate claims, and what impact the tool has had on different claimants. In particular, the request seeks to understand how the software affects the handling of applications submitted by asylum seekers who are unrepresented by counsel and who do not speak English, since many asylum seekers lack attorneys and all applications must be submitted in English. The use of AI may thus compound features of the U.S. asylum system that disadvantage meritorious claimants rather than improve the fairness and efficiency of adjudications.
“What looks like a common language pattern may be an attempt by a pro se asylum seeker to fill out an application to the best of their ability in a foreign language. Do asylum officers give them a chance to explain this when ATA flags their application as fraudulent?” asked Yael Schacher, Director for the Americas and Europe at Refugees International. “DHS has said it is being careful with its use of AI but has not been transparent about its use of a tool that may be determining whether someone is denied refuge and deported.”
“USCIS’s increased reliance on AI-based tools risks jeopardizing asylum seekers’ access to a life-saving legal status,” said Jessenia Class, clinical law student with the Harvard Immigration and Refugee Clinical Program. “Transparency into the AI technology’s function, accuracy rate, and impact is critical to ensuring the United States is adhering to its obligations to provide protection for vulnerable noncitizens. USCIS must comply with FOIA and release these records immediately.”