The National Institute of Standards and Technology (NIST) has released new guidelines for scientists collaborating with the US Artificial Intelligence Safety Institute (AISI), eliminating references to “AI safety,” “responsible AI,” and “AI fairness.” Instead, the updated instructions emphasize the need to focus on “reducing ideological bias to promote human flourishing and enhance economic competitiveness.”
This information was disclosed as part of a revised cooperative research and development agreement for AI Safety Institute consortium members, which was distributed in early March. The prior agreement had encouraged researchers to work on technical solutions for identifying and mitigating discriminatory model behavior related to gender, race, age, and economic inequality. Such biases matter because they can directly harm end users and disproportionately affect marginalized and economically disadvantaged communities.
The new agreement removes references to creating tools for “authenticating content and tracking its provenance” as well as “labeling synthetic content,” indicating a reduced focus on counteracting misinformation and deepfakes. It also emphasizes an “America first” approach, instructing one working group to develop testing tools aimed at bolstering the US’s position in the global AI landscape.
“The Trump administration has de-emphasized safety, fairness, misinformation mitigation, and responsibility as priorities for AI, which I think is quite revealing,” stated a researcher affiliated with the AI Safety Institute, who preferred to remain unnamed due to concerns about potential repercussions.
The researcher expresses concern that overlooking these critical issues may be detrimental to the general public, allowing algorithms that discriminate based on income or other demographic factors to remain unchecked. “Unless you’re among the tech elite, this could lead to a grim future for you and your loved ones. Prepare for AI systems to be unfair, discriminatory, unsafe, and recklessly implemented,” the researcher asserts.
“It’s unbelievable,” remarks another researcher who has previously collaborated with the AI Safety Institute. “What does it even mean for humans to flourish?”
Elon Musk, who is currently spearheading a contentious initiative aimed at reducing government expenditure and bureaucracy on behalf of President Trump, has openly criticized AI models developed by OpenAI and Google. In February, he shared a meme on X depicting Gemini and OpenAI as “racist” and “woke.” He frequently points to an instance where one of Google’s models debated the ethical implications of misgendering someone, even in the hypothetical scenario of preventing a nuclear catastrophe—a situation deemed highly implausible. In addition to Tesla and SpaceX, Musk operates xAI, an AI firm that competes directly with OpenAI and Google. An advisor to xAI recently developed an innovative method that could influence the political biases of large language models, as reported by WIRED.
A growing body of evidence indicates that political biases in AI models can affect both liberals and conservatives. For instance, a 2021 study of Twitter’s recommendation algorithm found that users were more frequently exposed to right-leaning viewpoints on the platform.
Since January, Musk’s so-called Department of Government Efficiency (DOGE) has been making sweeping changes within the US government, dismissing civil servants, freezing expenditures, and fostering an environment perceived as hostile toward those who might oppose the Trump administration’s objectives. Certain government agencies, including the Department of Education, have archived and deleted documents containing references to DEI. Recently, DOGE has also turned its attention to NIST, the agency that oversees AISI, leading to the termination of numerous employees.