Ban AI tools in asylum decision-making

Deciding whether or not to grant someone refuge is one of the most life-changing decisions the state can make. It's vital that accurate information informs these decisions.

But this is at risk because the Home Office is using AI tools in asylum applications. This includes using ChatGPT to 'summarise' the highly personal interviews in which people explain why they are seeking sanctuary in the UK. AI is altering the record of these interviews, deciding what to keep in and what to leave out. Applicants aren't even told that AI is being used, so they can't check how it has changed the information they provided.

AI is not neutral. It can discriminate and make mistakes. It should not be used to change information that informs life-changing asylum assessments.

Ask your MP to take a stand against the use of AI tools in asylum assessments.

Help us meet our goal of 250 emails sent. Let's make sure every MP sees an email!

145 of 250 emails have been sent. Will you help us send another 105?

Any personal data you provide will be held in line with our privacy policy.

Write to your MP

We ask for a phone number because some MPs have stated that they require constituents to provide one. If you choose to give your phone number, it will be included along with your address in the email to your MP.
We ask for your full address because MPs will only respond to messages they know are from their own constituents.

What’s the problem?

The Home Office is using two AI tools. The Asylum Case Summarisation (ACS) tool uses ChatGPT-4 to summarise asylum interview transcripts. The Asylum Policy Search (APS) tool summarises Country Policy and Information Notes (CPINs), guidance documents, and Country of Origin Information (COI) reports.

The Home Office’s own evaluation revealed that 9% of the ACS AI-generated summaries were so flawed they had to be removed from the pilot.

It also found that 23% of caseworkers lacked full confidence in the tools’ outputs.

Despite this, the Home Office has pushed ahead and rolled out the use of these tools.

Even worse, individuals affected by these tools will NOT be informed that AI is being used in their cases, so applicants have no way of checking how the information they supplied has been changed. A legal opinion has now found that the Home Office's failure to tell people that AI tools are being used in their assessments is likely to be unlawful.

Bringing a legal challenge will take time – we need to stop their use now.

Governments have a history of testing novel technologies on vulnerable populations before rolling them out more widely. So it's in all of our interests to stop these tools now, before AI is used across government departments to inform life-changing decisions.

Write to your MP now.