Biden-era AI guidelines may be gone, but the SEC and FTC have made it clear they’re watching (for now) how public companies use and talk about their AI capabilities. Haynes Boone attorneys Alla Digilova, Eugene Goryunov and Alok Choksi map out how companies can stay compliant in this shifting landscape.
The past few years have seen a significant rise in the popularity and influence of AI technologies. Many public companies in the US have either already implemented or are actively exploring the adoption of AI in their business.
AI tools are rapidly changing the market landscape, promising significant technological progress. Most recently, the proliferation of generative AI (GenAI) tools, such as ChatGPT, has further heightened interest among companies and the general public in such technology.
Adoption of GenAI tools is not without risks, with irresponsible use having been tied to fraud, discrimination and disinformation and posing risks to national security. The risks of misinformation and fraud are especially pertinent in the context of public companies, which should take care to ensure safe adoption of AI in their business practices.
Regulatory overview
AI policies and priorities have recently become a highly contested regulatory topic in the US, with President Donald Trump’s series of newly issued executive orders indicating upcoming policy changes. On his first day in office, Trump revoked former President Joe Biden’s prior AI directives and policies, issued in 2023 and 2025.
Three days after his inauguration, on Jan. 23, Trump issued a new executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which sets forth a policy goal “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security” and calls for members of his administration to develop a new AI plan within 180 days.
Guided by a set of principles, the 2023 Biden order had instructed federal agencies and the National Institute of Standards and Technology (NIST) to develop guidelines and best practices that would govern how the US government uses AI. Consistent with the order’s directive, NIST published its “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” on July 26, 2024. The profile was designed to help organizations integrate trustworthiness considerations into the design, development, use and evaluation of GenAI systems. It also outlined the risks that are unique to GenAI, suggested corresponding actions to manage these risks and summarized operational considerations for effective risk management.
The 2025 order instructed the federal government, in collaboration with the private sector, to develop AI infrastructure within the US, with the goal of enabling the US government to continue harnessing AI in service of national-security missions while preventing the US from becoming dependent on other countries’ infrastructure to develop and operate powerful AI tools.
While White House moves play a major role in how AI develops in the US, other federal bodies also shape the landscape, and shakeups in their leadership could signal major changes in how these agencies approach enforcement of corporate AI use.
Public company considerations
Accuracy of disclosure
On Jan. 21, the day after his inauguration, Trump appointed SEC Commissioner Mark Uyeda as acting chair of the commission, while his nominee for SEC chair, Paul Atkins, awaits Senate confirmation. While speculation about what this change in leadership will mean for SEC enforcement actions involving AI is beyond the scope of this writing, the agency has previously placed a high priority on pursuing issuers or their employees who make materially inaccurate disclosures, with the Enforcement Division regularly investigating and recommending enforcement actions charging misconduct by issuers, auditors and their employees.
In remarks before the National Press Club on July 17, 2023, former SEC Chair Gary Gensler reiterated that “public companies making statements on AI opportunities and risks need to take care to ensure that material disclosures are accurate and don’t deceive investors.”
In March 2024, the SEC brought its first enforcement actions against two investment advisers for making false and misleading statements about their use of AI. Several SEC enforcement actions have followed since then. Most recently, on Jan. 14, 2025, the SEC announced charges against restaurant-technology company Presto Automation for misleading statements about its flagship AI product.
Separately, the Federal Trade Commission (FTC) is empowered to bring enforcement action against companies that mislead consumers about their use of automated tools. This becomes especially relevant in the context of GenAI tools like chatbots, where consumers can be misled about the nature of the interaction. Under the FTC Act, a company’s statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. The FTC has warned businesses to be careful not to overpromise what an algorithm can deliver in a rush to embrace new technology. Deceptive practices can lead to FTC enforcement action.
In light of the SEC’s and FTC’s past positions on AI and with an overall view of ensuring truthfulness of disclosure, public companies should carefully review their public filings to ensure accuracy and completeness.
With the number of public companies reporting some form of GenAI adoption on the rise, it is imperative for such companies to give an accurate depiction of their GenAI technology and not overstate the extent to which GenAI is utilized. This means avoiding embellished statements about the potential benefits of GenAI adoption, properly assessing and disclosing the risks such technology poses to the company’s overall business or industry, and not otherwise misleading investors.
Special attention should be paid to registration statements, such as Forms S-1/F-1 and S-4/F-4, and periodic public filings, such as Forms 10-K/20-F and 10-Q/6-K, with respect to the description of the company’s business and industry, forward-looking statements, risk factors, management’s discussion and analysis, corporate governance and financial statements. Similar considerations apply to proxy statements, as applicable. Relevant considerations in risk factors disclosure are described below.
Risk factors
As noted above, the need to ensure accuracy of disclosure in public filings requires special consideration by public companies with respect to adequately addressing potential risks of GenAI technology in their risk factors. Such risks range from general risks AI may pose to the market and the industry in which the company operates to the more company-tailored risks relating to how the company utilizes or plans to utilize GenAI in its business and operations and the steps the company takes or intends to take to mitigate such risks.
Risk factors related to GenAI can be grouped into several categories:
- Optics issues: Market acceptance of the technology; the rapidly evolving nature of the technology and competition; potential reputational risks to the company from breaches, misuse, etc.; and general ethical concerns raised by AI, such as job displacement, human rights issues and other concerns.
- Legal, compliance and regulatory issues: Regulatory uncertainty surrounding AI and emerging regulation; data security and privacy; unintended bias and discriminatory practices; and IP considerations.
- Technological issues: Failures of GenAI technology to function as designed; databases used to train GenAI; accuracy of output; general reliability of technology, including any supply chain vulnerabilities; and unintended use.
- Security issues: Cybersecurity; data breaches; fraud; and unpredictable disruptions.
Given the newness of GenAI and the emerging issues posed by its widespread adoption, the full scope of risks posed by such technology is presently difficult to predict. Nevertheless, public companies should make a good-faith effort to provide a balanced picture of the potential overall benefits GenAI can have for their business and operations, while also identifying and disclosing the relevant, applicable and specific risks.
Due diligence
Public companies should consider implementing record-keeping policies and procedures related to their implementation and use of GenAI technology. Robust record-keeping practices will facilitate due diligence investigations of the company in the event of a relevant offering or financing. In performing such due diligence, underwriters and investors can request information related, but not limited, to:
- Compliance with applicable rules and regulations, including privacy laws, applicable FTC rules, etc.
- Data security measures, including steps undertaken by the company to safeguard any information disclosed by customers to GenAI tools, such as ChatGPT. For example, a ChatGPT-based chatbot can ask customers for their name, email address and/or other personally identifiable information.
- IP rights held by the company with respect to GenAI tools. Presently, IP protections afforded to AI are still unclear. This can become relevant if a due diligence investigation reveals that a company cannot assert IP protection over its unique, internally engineered GenAI tools, impacting the transaction valuation and overall viability of the deal. Further, a due diligence investigation can focus on whether the underlying data sets used to train GenAI tools infringe on others’ IP rights.
- Risk management procedures, including any applicable policies or guidelines adopted by the company’s board to assess and mitigate the risks associated with GenAI.
- AI-related litigation.
- Overall disclosure of AI tools as used by the company to ensure that such statements do not contain any material misstatements or omissions.
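On the data-security item above, one concrete safeguard a company might document is scrubbing personally identifiable information (PII) from customer input before it reaches a third-party GenAI tool. The sketch below is a minimal, illustrative example only; the pattern set is an assumption and is nowhere near legally or technically sufficient on its own.

```python
import re

# Illustrative PII patterns; a real program would need a far broader,
# regularly reviewed set (names, addresses, account numbers, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with labeled placeholders before the text is
    logged or forwarded to an external GenAI API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact_pii("Contact me at jane.doe@example.com or 555-123-4567"))
```

A record-keeping policy could pair a control like this with logs showing when and how redaction was applied, giving underwriters and investors something concrete to review in diligence.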
Board considerations
With evolving regulatory requirements and technological advancements, the risk oversight role of public companies’ boards of directors is more vital and challenging than ever before. Given the growing utilization of GenAI technology by companies and the associated risks described above, public company directors should pay attention to the following:
- GenAI literacy: Public company boards should ensure their members have sufficient knowledge of GenAI technology and the role it plays in the company’s business, the business of the company’s competitors, as well as the broader industry and market in which the company operates. Where GenAI is particularly important to a company’s operations, directors should consider gaining or retaining relevant expertise, and/or establishing a dedicated AI committee responsible for AI oversight.
- Risk evaluation: Public company directors should examine the company’s systems and operations utilizing GenAI technology, including with respect to any third-party products used by the company and the extent of the company’s reliance on such technology. The board should also know what management position is ultimately responsible for implementation and continuous utilization of GenAI technology by the company and how those functions are performed. The board should then evaluate all attendant risks and other implications and examine the tools and processes implemented by the company to ensure the safety, accuracy and fairness of AI-driven processes and outcomes.
- Regulatory compliance framework: Directors of public companies should consider applicable rules and regulations governing their operations as relevant with respect to GenAI technology. If GenAI forms an integral part of the company’s systems and operation, the board of directors should develop a robust framework to ensure adequate compliance and risk mitigation strategies and implement a comprehensive response plan that can be utilized in case of any GenAI incidents.
Discriminatory practices
Integration of AI technology into public company operations can also inadvertently introduce bias, producing unfair outcomes. As a result, public companies can potentially run afoul of the following laws within the FTC’s enforcement jurisdiction:
- Section 5 of the FTC Act, which prohibits unfair or deceptive practices and can become implicated through use of biased algorithms.
- Fair Credit Reporting Act, which comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance or other benefits.
- Equal Credit Opportunity Act, which makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age or because a person receives public assistance.
In September 2024, the FTC announced that it was taking five law enforcement actions against operations that use AI hype or sell AI technology that can be used in deceptive and unfair ways. The cases are the latest in a string of recent FTC enforcement actions involving claims about AI. In its public announcement, the FTC reiterated its continued focus on claims surrounding AI technology made by companies: “The cases included in this sweep show that firms have seized on the hype surrounding AI and are using it to lure consumers into bogus schemes, and are also providing AI powered tools that can turbocharge deception.”
To help safeguard against potential unintended bias introduced by use of GenAI, the FTC recommends that AI developers and users, among other things: ensure their data sets are complete and not missing information from a particular population; test their algorithms to confirm they do not produce discriminatory outcomes; use transparent frameworks and independent standards to audit their data; and avoid exaggerating what an algorithm can do or whether it can deliver fair and unbiased results.
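The FTC’s recommendation to test algorithms for discriminatory outcomes can be made concrete with a simple screening metric. The sketch below applies the EEOC’s “four-fifths” rule of thumb to a model’s approval decisions; the group labels, data shape and 0.8 threshold are assumptions for illustration, and passing this screen does not by itself establish legal compliance.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs.
    Returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if every group's approval rate is at least `threshold` times
    the highest group's rate -- a common first-pass disparate-impact screen."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Group A approved 8/10 (80%); group B approved 5/10 (50%).
# 0.50 / 0.80 = 0.625 < 0.8, so this sample fails the screen.
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 5 + [("B", False)] * 5)
print(passes_four_fifths(sample))  # -> False
```

A screen like this is only a starting point; flagged disparities warrant deeper statistical and legal review rather than a simple pass/fail judgment.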
Conclusion
GenAI has the potential to revolutionize the way in which businesses innovate, operate and continue to grow. However, increased adoption of such technologies by public companies requires that management and boards of directors carefully weigh the overall benefits and risks associated with GenAI, ensure that disclosures in their public filings are complete and accurate, and put appropriate safeguards in place to protect against unintended negative outcomes.