US President Donald Trump signed an order directing federal contracts to AI models deemed free of ideological bias. Photo / Getty Images
United States President Donald Trump signed an executive order yesterday to steer federal contracts toward companies whose AI models are deemed free of ideological bias.
The order, issued as part of the Administration’s rollout of a wide-ranging “AI Action Plan”, takes aim at what Trump calls “woke AI” – chatbots, image generators and other tools whose outputs are perceived as exhibiting a liberal bias.
It specifically bars federal agencies from procuring AI models that promote diversity, equity and inclusion, or DEI.
“From now on,” Trump said, “the US government will deal only with AI that pursues truth, fairness, and strict impartiality.”
But what is “woke AI”, exactly, and how can tech companies avoid it?
Experts on the technology say the answer to both questions is murky. Some lawyers say the prospect of the Trump Administration shaping what AI chatbots can and can’t say raises First Amendment issues.
Experts warn the order raises First Amendment issues and question the feasibility of bias-free AI. Photo / Getty Images
“These are words that seem great – ‘free of ideological bias,’” said Rumman Chowdhury, executive director of the non-profit Humane Intelligence and former head of machine learning ethics at Twitter. “But it’s impossible to do in practice.”
The concern that popular AI tools exhibit a liberal skew took hold on the right in 2023, when examples circulated on social media of OpenAI’s ChatGPT endorsing affirmative action and transgender rights or refusing to compose a poem praising Trump.
It gained steam last year when Google’s Gemini image generator was found to be injecting ethnic diversity into inappropriate contexts – such as portraying black, Asian and Native American people in response to requests for images of Vikings, Nazis or America’s “Founding Fathers”.
Google apologised and reprogrammed the tool, saying the outputs were an inadvertent by-product of its effort to ensure that the product appealed to a range of users around the world.
ChatGPT and other AI tools can indeed exhibit a liberal bias in certain situations, said Fabio Motoki, a lecturer at the University of East Anglia.
In a study published last month, he and his co-authors found that OpenAI’s GPT-4 responded to political questionnaires by evincing views that aligned closely with those of the average Democrat.
But assessing a chatbot’s political leanings “is not straightforward”, he added.
On certain topics, such as the need for US military supremacy, OpenAI’s tools tend to produce writing and images that align more closely with Republican views.
And other research, including an analysis by the Washington Post, has found that AI image generators often reinforce ethnic, religious and gender stereotypes.
AI models exhibit all kinds of biases, experts say. It’s part of how they work.
Chatbots and image generators draw on vast quantities of data ingested from across the internet to predict the most likely or appropriate response to a user’s query.
So they might respond to one prompt by spouting misogynist tropes gleaned from an unsavoury anonymous forum – then respond to a different prompt by regurgitating DEI language scraped from corporate hiring documents.
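To make that concrete, here is a toy sketch of our own devising – not any production system – in which a generator picks each next word according to how often it followed the previous one in its training text. Any skew in the text carries straight through to the output.

```python
# Toy illustration: a word generator that samples the next word in
# proportion to how often it followed the previous word in its "training"
# corpus. The corpus below is invented for this example; its slant is
# deliberate, and the generator simply reproduces it.
import random
from collections import defaultdict

corpus = (
    "guns should be regulated . guns should be banned . "
    "guns should be protected . taxes should be lower ."
).split()

# Record every word that follows each word, keeping duplicates so that
# more frequent continuations are sampled more often.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("guns"))  # e.g. "guns should be banned ." - the skew is inherited
```

Real chatbots are vastly more sophisticated, but the basic dynamic – outputs mirroring the statistics of the training data – is the same.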
Trump’s AI plan: Federal contracts for bias-free models only. Photo / 123RF
Training an AI model to avoid such biases is notoriously tricky, Motoki said.
You could try to do it by limiting the training data, paying humans to rate its answers for neutrality, or writing explicit instructions into its code.
All three approaches come with limitations and have been known to backfire by making the model’s responses less useful or accurate.
“It’s very, very difficult to steer these models to do what we want,” he said.
Google’s Gemini blooper was one example.
Another came this year, when Elon Musk’s xAI instructed its Grok chatbot to prioritise “truth-seeking” over political correctness – leading it to spout racist and anti-Semitic conspiracy theories and, at one point, even refer to itself as “MechaHitler”.
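In practice, the “explicit instructions” approach Motoki describes usually takes the form of a system prompt prepended to every conversation. Here is a minimal sketch using OpenAI’s public chat API; the model name and the wording of the instruction are our own illustrative assumptions, not any vendor’s actual configuration.

```python
# Minimal sketch of instruction-based steering: a system prompt asking the
# model to stay neutral on contested topics. Assumes the openai package is
# installed and OPENAI_API_KEY is set; the prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "When asked about contested political topics, summarise the "
                "main positions fairly and do not endorse any of them."
            ),
        },
        {"role": "user", "content": "Should gun laws be stricter?"},
    ],
)
print(response.choices[0].message.content)
```

As the Grok episode shows, a model does not always follow such instructions in the way its authors intend – one reason Motoki calls steering so difficult.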
The Google Gemini app, an AI-based, multimodal chatbot developed by Google. Photo / Getty Images
Political neutrality, for an AI model, is simply “not a thing”, Chowdhury said. “It’s not real.”
For example, she said, if you ask a chatbot for its views on gun control, it could equivocate by echoing both Republican and Democratic talking points, or it might try to find the middle ground between the two. But the average AI user in Texas might see that answer as exhibiting a liberal bias, while a New Yorker might find it overly conservative.
And to a user in Malaysia or France, where strict gun control laws are taken for granted, the same answer would seem radical.
How the Trump Administration will decide which AI tools qualify as neutral is a key question, said Samir Jain, vice-president of policy at the non-profit Center for Democracy & Technology.
The executive order itself is not neutral, he said, because it rules out certain left-leaning viewpoints but not right-leaning viewpoints.
The order lists “critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism” as concepts that should not be incorporated into AI models.
“I suspect they would say anything providing information about transgender care would be ‘woke,’” Jain said. “But that’s inherently a point of view.”
Imposing that point of view on AI tools produced by private companies could run the risk of a First Amendment challenge, he said, depending on how it’s implemented.
“The Government can’t force particular types of speech or try to censor particular viewpoints, as a general matter,” Jain said.
However, the Administration does have some latitude to set standards for the products it purchases, provided any speech restrictions relate to the purposes for which those products are used.
Some analysts and advocates said they believe Trump’s executive order is less heavy-handed than they had feared.
Neil Chilson, head of AI policy at the right-leaning non-profit Abundance Institute, said the prospect of an overly prescriptive order on “woke AI” was the one element that had worried him in advance of Trump’s AI plan, which he generally supported.
After reading the order, he said those concerns were “overblown” and that he believes it “will be straightforward to comply with”.
Mackenzie Arnold, director of US policy at the Institute for Law and AI, a nonpartisan think-tank, said he was glad to see the order makes allowances for the technical difficulty of programming AI tools to be neutral and offers a path for companies to comply by disclosing their AI models’ instructions.
“While I don’t like the styling of the EO on ‘preventing woke AI’ in government, the actual text is pretty reasonable,” he said, adding that the big question is how the Administration will enforce it.
“If it focuses its efforts on these sensible disclosures, it’ll turn out okay,” he said. “If it veers into ideological pressure, that would be a big misstep and bad precedent.”