Artificial intelligence (AI) is seeping into every sector, and that now includes border control. The U.S. Immigration and Customs Enforcement (ICE) agency is leveraging an AI-powered tool, Giant Oak Search Technology (GOST), to scan social media posts for content deemed “derogatory” to the U.S.
The revelation, first reported by 404 Media, has ignited concerns about privacy and the ethical implications of such surveillance.
GOST assists the agency by scrutinizing social media posts and determining their potential risk to the U.S., according to the report, which cited confidential documents. “The documents peel back the curtain on a powerful system, both in a technological and a policy sense—how information is processed and used to decide who is allowed to remain in the country and who is not,” 404 reported.
The system scores a person’s social media presence from one to 100 based on its perceived relevance to the GOST user’s mission.
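The documents do not explain how that score is computed. Purely as an illustration, and not a description of GOST itself, a naive keyword-based scorer mapped onto a one-to-100 scale might look like the sketch below; the keyword list, weights, and scaling are hypothetical.

```python
# Hypothetical sketch only -- not GOST's actual implementation.
# Tallies keyword weights across a person's posts and squashes the
# result into the 1-100 range described in the 404 Media report.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Placeholder "mission" keywords and weights; a real system would rely
# on far richer models and signals.
KEYWORD_WEIGHTS = {"keyword_a": 3.0, "keyword_b": 1.5}


def relevance_score(posts: list[Post]) -> int:
    """Return a 1-100 relevance score for a set of posts."""
    raw = 0.0
    for post in posts:
        tokens = post.text.lower().split()
        raw += sum(KEYWORD_WEIGHTS.get(tok, 0.0) for tok in tokens)
    # Average per post, then clamp to the 1-100 range.
    per_post = raw / len(posts) if posts else 0.0
    return max(1, min(100, int(round(per_post * 10)) + 1))


if __name__ == "__main__":
    sample = [Post("user", "keyword_a appears in this post")]
    print(relevance_score(sample))  # prints 31 for this toy input
```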
Social media reviews are not new; investigators have long used public posts to vet people considered potentially dangerous. But tools like GOST, which can process information far faster than any human, could further blur the line between homeland security and basic individual liberties.
In an October 26, 2023 post, Dan Whitfield (@DanWhitCongress) wrote: “The Maine mass shooter had a Twitter account. Elon Musk scrubbed it. Why? Here is a list of accounts that the mass shooter visited on Twitter,” attaching a screenshot (pic.twitter.com/pdPm2Pjpmh).
Patrick Toomey, Deputy Director of the ACLU’s National Security Project, voiced concerns about the government’s use of algorithms to scrutinize social media posts. “The government should not be using algorithms to scrutinize our social media posts and decide which of us is ‘risky,’” he told 404. “And agencies certainly shouldn’t be buying this kind of black box technology in secret without any accountability.”
The geopolitical relevance of AI has grown substantially this year. Beyond its use in political races, AI is playing a pivotal role in the Israel-Palestine conflict, with both sides leveraging the technology to enhance their position or attack the other.
The public remains highly skeptical of AI, however, especially where personal privacy is concerned. As reported by Decrypt, a Pew Research Center study found that 32% of Americans believe AI used in hiring and evaluating workers is more likely to harm than help job applicants and employees. A Reuters poll over the summer found that most Americans see AI as a threat to humanity.
The broader implications of such tools are hard to ignore. While AI offers efficiency and precision, it also poses a threat to individual privacy rights, a tension common to many technological advances.