US immigration body seeking automated social media monitoring tools

24 Nov 2017

ICE is seeking information from tech firms. Image: Jim Parkin/Shutterstock

ICE is asking big tech names to help it conduct social media surveillance of visa holders deemed ‘high-risk’.

US Immigration and Customs Enforcement (ICE) gave a presentation at a tech industry conference last week, laying out its plans to monitor US visa holders’ social media activity. It also put out a call to major tech firms for algorithms to aid the surveillance project.

ProPublica reported that the programme, previously called ‘Extreme Vetting’, has been renamed ‘Visa Lifecycle Vetting’, and that it demonstrates ICE’s plan to place an enormous number of people under its watch.

Companies that had representatives at the presentation in Arlington, Virginia, last week included Deloitte, Motorola, Accenture and Microsoft.

ICE gathering information from tech firms

Louis Rodi, deputy assistant director of ICE Homeland Security Investigations’ National Security Program, explained that the agency needs a predictive tool with “risk-based matrices” to flag dangers ostensibly posed by visa holders.

Rodi said the surveillance would be large-scale and continuous: “Everything we’re dealing with is in bulk, so we need batch-vetting capabilities for any of the processes that we have.”

Another ICE representative, Alysa Erichs, said that the agency wants to receive automated notifications about any visa holder’s activity that it could view as suspicious.

ICE spokesperson Carissa Cutrell said the agency has not yet begun building any such programme. She said: “The request for information on this initiative was simply that: an opportunity to gather information from industry professionals and other government agencies on current technological capabilities to determine the best way forward.”

A veneer of objectivity

Earlier in November, 54 AI experts wrote a letter to Elaine C Duke, acting secretary of the US Department of Homeland Security, raising concerns about the potentially dangerous consequences of an automated surveillance plan.

It said that an initiative such as this could replicate biases “under a veneer of objectivity”, citing the vague nature of concepts touted by the authorities, including “contribution to society”.

It continued: “Inevitably, because these characteristics are difficult (if not impossible) to define and measure, any algorithm will depend on ‘proxies’ that are more easily observed and may bear little or no relationship to the characteristics of interest.

“For example, developers could stipulate that a Facebook post criticising US foreign policy would identify a visa applicant as a threat to national interests.

“They could also treat income as a proxy for a person’s contributions to society, despite the fact that financial compensation fails to adequately capture people’s roles in their communities or the economy.”
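To make the experts’ proxy concern concrete, here is a purely hypothetical sketch in Python. The field names, thresholds and scoring logic are illustrative assumptions for this article only; they do not come from ICE, the letter or any real system.

```python
# Purely hypothetical illustration of the "proxy" problem the letter describes.
# Neither the fields nor the thresholds come from any real ICE specification.

def risk_score(applicant: dict) -> int:
    """Toy score built entirely from observable proxies."""
    score = 0
    # Proxy 1: a post criticising US foreign policy is treated as a threat
    # signal, even though political speech says nothing about actual risk.
    if applicant.get("criticises_foreign_policy"):
        score += 1
    # Proxy 2: income stands in for "contribution to society", even though pay
    # fails to capture a person's role in their community or the economy.
    if applicant.get("income", 0) < 30_000:
        score += 1
    return score

# An applicant flagged purely for low pay and political speech:
print(risk_score({"criticises_foreign_policy": True, "income": 25_000}))  # 2
```

The point of the sketch is that both inputs are easy to observe and trivial to score, yet neither has any demonstrated relationship to the “characteristics of interest” the system claims to measure.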

The letter added that accurate operation of such a system at large scale is not feasible. “As a result, even the most accurate possible model would generate a very large number of false positives – innocent individuals falsely identified as presenting a risk of crime or terrorism who would face serious repercussions not connected to their real level of risk.”
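The arithmetic behind that warning is the base rate fallacy. A minimal sketch with assumed figures – the population size, threat count and accuracy numbers below are illustrative, not from the letter:

```python
# Base rate arithmetic behind the false positive warning (all figures assumed).
population = 10_000_000   # visa holders under surveillance
true_threats = 100        # actual threats in that population
sensitivity = 0.99        # fraction of true threats correctly flagged
specificity = 0.99        # fraction of innocent people correctly cleared

true_positives = sensitivity * true_threats
false_positives = (1 - specificity) * (population - true_threats)
precision = true_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:,.0f}")    # ~99
print(f"False positives: {false_positives:,.0f}")   # ~99,999
print(f"Precision:       {precision:.2%}")          # ~0.10%
```

Under these assumptions, even a model that is 99pc accurate in both directions flags roughly a thousand innocent people for every genuine threat, because real threats are so rare in the monitored population.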

Last week, Reuters reported that a non-profit group had written to IBM about its attendance at a July meeting on vetting technologies. A spokesperson for the company said it “would not work on any project that runs counter to our company’s values, including our longstanding opposition to discrimination against anyone on the basis of race, gender, sexual orientation or religion”.

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects.
