Texas Firm Faces Criminal Probe for Misleading US Voters with Joe Biden AI

The New Hampshire Department of Justice has identified the source behind misleading AI-generated robocalls featuring what sounded like United States President Joe Biden's voice, revealing the involvement of a Texas-based firm, Life Corporation, and an individual named Walter Monk.

Attorney General John Formella announced that the Attorney General's Office Election Law Unit had uncovered the origin of the automated messages, which sought to influence the 2024 presidential race by instructing New Hampshire voters not to participate in the Jan. 23 primary.

The robocalls, deemed misinformation by the state attorney general's office, utilized an AI deepfake tool to create deceptive audio content. AI deepfake tools leverage advanced algorithms to produce highly realistic but fabricated digital content, such as videos, audio recordings, or images.

The investigation, launched after reports of voter suppression calls surfaced in mid-January, was conducted in collaboration with state and federal partners, including the Anti-Robocall Multistate Litigation Task Force and the Federal Communications Commission Enforcement Bureau.

The Election Law Unit issued a cease-and-desist order to Life Corporation for violating New Hampshire statutes on bribery, intimidation, and voter suppression, demanding immediate compliance and reserving the right to pursue further enforcement actions.

Investigators traced the calls to Lingo Telecom, a Texas-based telecom provider. The U.S. Federal Communications Commission also issued a cease-and-desist letter to Lingo Telecom for its alleged support of illegal robocall traffic involving AI-generated voice cloning.

In response to growing concerns about deepfake technology and its potential for misuse, FCC Chairwoman Jessica Rosenworcel proposed classifying calls featuring AI-generated voices as illegal, making them subject to penalties under the Telephone Consumer Protection Act.

The proliferation of deepfakes has raised alarm bells globally, with organizations like the World Economic Forum and the Canadian Security Intelligence Service warning about the adverse impacts of AI-generated disinformation campaigns across digital platforms.

As investigations continue and regulatory measures are proposed, the incident underscores the urgent need for robust safeguards against the misuse of AI technology in influencing democratic processes.