Deputy Attorney General Monaco Warns Industries: “Fraud Using AI is Still Fraud”

On March 7, Deputy Attorney General Lisa Monaco delivered the keynote remarks at the American Bar Association’s (ABA) 39th National Institute on White Collar Crime.

She noted that artificial intelligence (AI) “holds great promise to improve our lives — but great peril when criminals use it to supercharge their illegal activities, including corporate crime.” She cautioned individuals and corporations that “[f]raud using AI is still fraud.”

AI and machine learning (ML) have enabled remarkable advancements in the financial technology (Fintech), banking and finance, and healthcare industries. The accumulation of mass data — large, complex, fast-moving, or weakly structured data — in these industries makes them ideal settings for AI-empowered innovation. In the finance and banking sectors, mass data empowers AI and ML to transform services, introducing automated trading, risk management, customer service via chatbots, and predictive analytics for future market trends. In Fintech, AI and ML have played a crucial role in developing cryptocurrencies, algorithmic trading, and blockchain technologies. Meanwhile, in healthcare, “[m]achine learning algorithms are used with large datasets such as genetic information, demographic data, or electronic health records to provide prediction of prognosis and optimal treatment strategy.”
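To make the healthcare example concrete, the sketch below shows, in rough outline, the kind of prognosis model the quoted passage describes: a classifier trained on patient-level features to predict an outcome probability. Everything here is hypothetical — the features and data are synthetic stand-ins for real electronic health records, and the code is not any particular vendor’s or hospital’s system.

```python
# Minimal, illustrative sketch of an ML prognosis model of the kind the
# passage describes. All data here is synthetic; no real clinical system,
# dataset, or vendor API is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=42)

# Synthetic stand-ins for EHR-style features: age, a lab value, and a
# comorbidity count.
n = 1_000
age = rng.normal(60, 12, n)
lab = rng.normal(1.0, 0.3, n)
comorbidities = rng.poisson(2, n)
X = np.column_stack([age, lab, comorbidities])

# Synthetic outcome: risk rises with age, lab abnormality, and comorbidity
# burden.
logits = 0.04 * (age - 60) + 0.8 * (lab - 1.0) + 0.5 * (comorbidities - 2)
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The predicted probabilities are the "output" that, per the article's later
# hypothetical, may inform (or induce) a provider's ultimate decision.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

The legal questions discussed below do not turn on the model’s internals; they turn on who profits from its output and how that output shapes clinical and billing decisions.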

As businesses embrace these AI technologies and a host of innovative applications, heavily regulated industries will face an increasingly complex landscape of liability, regulation, and enforcement. Deputy Attorney General Monaco’s recent remarks underscore the government’s demonstrated appetite to pursue liability against those who fraudulently oversell an AI application’s capabilities, or who exploit an application, or the data that drives it, to commit fraud. Recent enforcement actions in the Fintech industry are helpful illustrations.

For example, government authorities have pursued civil and criminal fraud charges against individuals and firms that make fraudulent statements about AI and ML capabilities to attract investors. Notably, the US Securities and Exchange Commission (SEC) charged Brian Sewell, the owner of an online crypto trading course, with allegedly misleading students into investing over $1 million in his hedge fund, which he claimed would use AI and ML. Instead, he held the funds as bitcoin until his digital wallet was hacked. Similarly, the US Department of Justice (DOJ) charged David Saffron and Vincent Mazzotta with allegedly inducing individuals to invest in trading programs by falsely promising AI-driven high-yield profits. According to the DOJ, instead of investing victims’ funds in cryptocurrency, the defendants allegedly diverted the money to personal luxury expenses.

In the healthcare industry — the leading source of False Claims Act cases — the government is conversely concerned with companies understating an AI application’s capabilities or exploiting AI applications to defraud patients and government healthcare programs. For instance, if a pharmaceutical manufacturer has a financial interest in ML-driven electronic medical records software, and the software’s output informs (or induces) the healthcare provider’s ultimate decision, is the Anti-Kickback Statute implicated? Or, if an AI/ML application suggests unnecessary or inappropriate healthcare devices or services and the government receives a claim for those services, has a False Claims Act violation occurred?

As the use of AI spreads, heavily regulated industries can expect to see more government oversight and enforcement. In fact, Deputy Attorney General Monaco stated, “Like a firearm, AI can enhance the danger of a crime,” and, accordingly, the DOJ will seek harsher penalties for offenses made more harmful by the misuse of AI. Further, in recent remarks before Yale Law School, the chair of the SEC promised that those who deploy AI to sell securities by fraud or misrepresentation should expect “war without quarter.” In February 2024, the FTC also proposed a new rule that would make it “unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation.”

Meanwhile, the federal government continues to deploy its own AI and ML innovations to enforce anti-fraud regulations and to detect and investigate fraud and abuse. Agencies like the IRS, FinCEN, HHS, and DOJ use AI and ML for traditionally laborious, error-prone tasks such as detecting fraud, tracing illegal drugs, and triaging tips received by the FBI. In fact, using an AI-empowered fraud detection process, the US Department of the Treasury reported that it recovered over $375 million in fiscal year 2023.
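For readers curious about the mechanics, government fraud screening of this kind is often described as anomaly detection: score every transaction and route the most unusual ones to human investigators. The sketch below is a hypothetical illustration of that general approach on synthetic payment data; it is not the Treasury’s actual system, features, or thresholds.

```python
# Hypothetical sketch of anomaly-based fraud screening of the general kind
# described in the Treasury's reporting. Synthetic payments only; this is
# not any government agency's actual system or data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Synthetic payment features: amount (log10 scale), payee account age in
# days, and payments-per-day velocity.
normal = np.column_stack([
    rng.normal(4.0, 0.5, 2_000),   # typical payment amounts
    rng.normal(900, 200, 2_000),   # established payee accounts
    rng.normal(1.0, 0.3, 2_000),   # routine payment velocity
])
suspect = np.column_stack([
    rng.normal(5.5, 0.2, 20),      # unusually large amounts
    rng.normal(30, 10, 20),        # brand-new payee accounts
    rng.normal(8.0, 1.0, 20),      # burst of rapid payments
])
payments = np.vstack([normal, suspect])

# Flag the most anomalous ~1% of payments for human review.
detector = IsolationForest(contamination=0.01, random_state=0).fit(payments)
flags = detector.predict(payments)  # -1 = flagged as anomalous
print(f"Flagged {int((flags == -1).sum())} of {len(payments)} payments for review")
```

The design point is that the model does not decide guilt; it prioritizes which payments a human investigator examines first, which is why such tooling scales enforcement without replacing it.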

In light of the government’s increased scrutiny of AI and ML in regulated industries, it is crucial that company leadership work closely with their innovators to ensure that the use of AI and ML does not run afoul of existing and emerging regulations. Industry leaders, general counsel, and compliance officers should be particularly wary if the findings of a Berkeley Research Group survey of healthcare professionals are a bellwether for other heavily regulated industries: 75% of healthcare professionals surveyed expected the use of AI to be widespread within three years, yet only 40% reported that their organizations plan to review regulatory guidance on AI.

In the coming years, the government and industry will undoubtedly continue to develop and deploy AI and ML to achieve better outcomes, detect criminal or fraudulent conduct, increase worker productivity, and maximize competitive advantages. However, as Deputy AG Monaco warned, companies’ compliance officers and general counsel must also be prepared to “manage AI-related risks as part of [the company’s] overall compliance efforts.” The future of AI and ML in regulated industries promises to be a dynamic and evolving landscape, and navigating it will require savvy legal, regulatory, and compliance know-how to avoid liability for AI-enabled fraud.
