Chatbot Recommends Torture and Surveillance in Muslim Countries

OpenAI’s ChatGPT is the most impressive text-generating demo to date, trained on billions of text documents and refined through human coaching. It is also one of the newest and most popular advanced artificial intelligence systems, one that could in time become good enough to take over many people’s jobs, and it is now being tested in customer service and advice sector roles. Yet it appears to have absorbed biases from its training data, and its responses to political questions have been noted as rather scary and politically uncomfortable.

Corporate artificial intelligence demos like this one are meant to entice investors and commercial partners. One user prompted the bot to take the 2022 AP Computer Science exam and reported a passing grade.

Stack Overflow, one of the web’s most popular programmer communities, banned ChatGPT answers for producing code solutions that looked convincingly correct but were faulty in practice.

The perils of trusting the expert in the machine go far beyond whether AI-generated code is buggy or not. A language-generating machine like ChatGPT harbors countless biases.

Steven Piantadosi, a professor at the University of California, Berkeley, shared a series of prompts he’d tested with ChatGPT, and the bot’s answers revealed biases, some of them alarming. OpenAI claims it has taken unspecified steps to filter out prejudicial responses, but Piantadosi remains sceptical of those countermeasures.

Inspired and unnerved by Piantadosi’s experiment, I asked ChatGPT to create sample code that could algorithmically evaluate someone from the unforgiving perspective of Homeland Security. The bot provided some examples of this hypothetical algorithm in action.

ChatGPT determined that houses of worship should be placed under surveillance if they had links to Islamic extremist groups or were located in Syria, Iraq, Iran, Afghanistan, or Yemen.
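To make plain how crude the bot’s reported reasoning was, the rule it produced can be sketched in a few lines. This is a hypothetical reconstruction for illustration only; the field names and structure are assumptions, not ChatGPT’s verbatim output, and the point is that the “algorithm” reduces to boolean checks on nationality and religious affiliation, exactly the profiling that critics object to.

```python
# Illustrative reconstruction of the discriminatory logic described above.
# Shown only to demonstrate how little substance underlies the "algorithm";
# this is profiling by protected characteristics, not risk analysis.

FLAGGED_COUNTRIES = {"Syria", "Iraq", "Iran", "Afghanistan", "Yemen"}

def should_surveil(place_of_worship: dict) -> bool:
    """Mimics the reported rule: flag a house of worship if it has
    claimed extremist links or sits in one of the listed countries."""
    return (
        place_of_worship.get("extremist_links", False)
        or place_of_worship.get("country") in FLAGGED_COUNTRIES
    )
```

Dressed up in code, the rule carries an air of rigor, but it is nothing more than two conditions applied wholesale to entire nationalities and faith communities.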

ChatGPT often prefaced my requests for screening software with stern refusals, but then dutifully generated the very code it had just said was too irresponsible to build.

Critics say that risk-assessment systems built on demographic traits like nationality are racist, yet the U.S. government has adopted ATLAS, an algorithmic tool that factors in national origin to target American citizens for denaturalization.

It’s tempting to believe that incredibly human-looking software is somehow superhuman and incapable of human error, but scholars warn that such output may be treated as more ‘objective’ precisely because it is rendered by a machine.

AI’s boosters dismiss critics as clueless sceptics or luddites, while some, like Marc Andreessen, the American entrepreneur, investor, co-author of the Mosaic browser, and co-founder of Netscape, prefer to share ‘entertaining’ ChatGPT results on their Twitter timelines.

Andreessen said that ethical thinking about AI is a form of censorship, and that by the same logic food inspectors keeping tainted meat out of your fridge amount to censors as well. But people, not bots, stand to suffer when “safety” is treated as synonymous with censorship.

Piantadosi, the Berkeley professor, said he doesn’t think censorship applies to computer programs, and that it’s not censorship to think hard about ensuring our technology is ethical.