Democratic members of the House Oversight Committee sent roughly two dozen inquiries early Wednesday, pressing the heads of federal agencies for information about plans to deploy AI technologies across the government amid ongoing cuts to the federal workforce.
The inquiries follow recent reporting by WIRED and the Washington Post on the efforts of Elon Musk’s so-called Department of Government Efficiency (DOGE) to automate government functions with a variety of proprietary AI tools and to access sensitive data.
“The citizens of the United States place their trust in the federal government to safeguard sensitive personal data related to health, financial matters, and other personal information with the expectation that it will not be improperly disclosed or misused without their approval,” the inquiries state, “including through the deployment of unauthorized and unaccountable third-party AI technologies.”
These inquiries, first obtained by WIRED, are signed by Gerald Connolly, a Democratic congressman from Virginia. Their primary aim is to compel the agencies to demonstrate that any planned use of AI complies with existing law and that adequate safeguards are in place to protect Americans’ private information. Democrats are also probing whether Musk stands to profit personally from AI deployments, given his ownership of xAI and the pivot toward robotics and AI at his struggling electric vehicle company, Tesla. Connolly further warns that Musk could exploit his access to sensitive government data for personal gain, using it to “supercharge” his proprietary AI model, referred to as “Grok.”
In his inquiries, Connolly emphasizes that federal agencies are bound by numerous statutory requirements in their use of AI software, citing in particular the Federal Risk and Authorization Management Program (FedRAMP), which standardizes the government’s approach to cloud-based services and ensures that AI tools are thoroughly assessed for security risks. He also cites the Advancing American AI Act, which requires federal agencies to “curate and maintain a database of their artificial intelligence applications” and to make those databases publicly accessible.
Documents obtained by WIRED last week show that DOGE operatives have deployed a proprietary chatbot called GSAi to about 1,500 federal workers at the General Services Administration (GSA), which oversees federal properties and provides IT services to many government agencies.
A memo acquired by WIRED reporters reveals that employees have been cautioned against sharing any controlled unclassified information with the software. Other federal bodies, including the Departments of Treasury and Health and Human Services, are also contemplating the use of a chatbot, although not specifically GSAi, as per documents reviewed by WIRED.
Additionally, WIRED has reported that the U.S. Army is actively employing software referred to as “CamoGPT” to examine its record systems for mentions of diversity, equity, inclusion, and accessibility (DEIA). An Army representative confirmed the tool’s existence but refrained from providing further details on its intended applications.
In the inquiries, Connolly points out that the Department of Education holds personally identifiable information on more than 43 million people tied to federal student aid programs. “Given the rapid and opaque actions of DOGE,” he argues, “I am incredibly apprehensive that the sensitive information of students, parents, spouses, family members, and all other borrowers is being managed by secretive members of the DOGE team for ambiguous reasons, accompanied by a lack of safeguards against disclosure or unethical use.” The Washington Post previously reported that DOGE had begun feeding sensitive data from Department of Education systems into AI software to analyze the agency’s spending.
Education Secretary Linda McMahon said Tuesday that she is moving ahead with plans to lay off more than a thousand employees at the department, on top of the hundreds who accepted DOGE “buyouts” last month. Nearly half of the department’s workforce has now been cut, which McMahon describes as the first step toward dissolving the agency.
“Utilizing AI to assess sensitive data presents significant risks that extend beyond improper disclosure,” Connolly warns, stating that “inputs used and the criteria chosen for analysis may be flawed, errors could be integrated through the AI software’s design, and employees may misinterpret AI recommendations, among other issues.”
He concludes: “In the absence of a clear justification for the use of AI, mechanisms to ensure proper data handling, and sufficient oversight and transparency, the deployment of AI poses significant dangers and could potentially infringe upon federal law.”