Seyfarth Synopsis: In an April 8, 2020 post on the Business Blog of the Federal Trade Commission (“FTC”), the Director of the FTC Bureau of Consumer Protection, Andrew Smith, provided helpful guidance on businesses’ use of artificial intelligence technology in their decision-making. In particular, Smith emphasized (i) being transparent with consumers about how automated tools are used and how sensitive data is collected, (ii) explaining the reasoning behind algorithmic decisions to consumers, (iii) ensuring that decisions are fair and do not discriminate against protected classes, (iv) ensuring the accuracy of data used in algorithmic decision-making, and (v) holding oneself accountable.

As more and more businesses turn to artificial intelligence and algorithms to make decisions that impact consumers (such as whom to insure or to whom to extend credit), they face the serious risk that such decision-making will be challenged as biased, unfair, or otherwise in violation of consumer protection laws. Last month, the Director of the FTC Bureau of Consumer Protection, Andrew Smith, published a blog post[1] offering helpful tips and guidance that businesses should keep in mind when implementing artificial intelligence in any decision-making that can impact consumers.

As Smith explains, “while the sophistication of AI and machine learning technology is new, automated decision-making is not, and we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers.”[2]  In light of that experience, Smith discusses how “the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.”[3] In particular, businesses should consider:

  • Transparency. As Smith explains, though artificial intelligence often “operates in the background,” it can also be used to interact with consumers, such as when companies use “chatbots,” and businesses should be transparent about the nature of that interaction. In addition, businesses that make algorithmic decisions based on data collected from consumers should be transparent as to how that data is collected; secretly collecting sensitive data could give rise to an FTC action. Finally, the Fair Credit Reporting Act requires that certain notices be given to consumers in connection with the use of consumer information in automated decision-making on a number of subjects (e.g., credit eligibility, employment, insurance, and housing), and businesses should ensure they comply with any applicable requirements.[4]
  • Explaining Decisions. Businesses should also be transparent about their decision-making process. If a business denies consumers something of value based on algorithmic decision-making, it should be able to explain that decision to consumers. As Smith explains, “[t]his means that you must know what data is used in your model and how that data is used to arrive at a decision. And you must be able to explain that to the consumer.”[5] If an algorithm is used to assign “risk scores” to consumers, businesses may be required to disclose the key factors that affect that score (for example, there are a number of required disclosures in connection with credit scores). And if the terms of a deal might change based on automated tools, that fact should be disclosed to consumers as well.[6]
  • Ensuring Fairness. Businesses should be careful to ensure that their use of artificial intelligence does not discriminate against any protected class. That means businesses should look both at the data fed into an algorithm, such as whether a model considers protected characteristics “or proxies for such factors, such as census tract[,]” and at whether the outcome of an algorithmic decision has a disparate impact on protected classes (a simple illustrative screen appears after this list).[7] In addition, in the interest of fairness, businesses should give consumers an opportunity to correct or dispute any information used to make decisions about them.[8]
  • Accuracy of Data. Data used in algorithmic decision-making should be accurate and up-to-date. Businesses that provide data about their customers to others for use in automated decision-making (i.e., “furnishers” of data) should ensure that such data is accurate. Furthermore, any artificial intelligence models used by businesses should not only be developed and validated using accepted statistical principles and methodology, but should also be “periodically revalidated . . . and adjusted as necessary to maintain predictive ability.”[9]
  • Holding Oneself Accountable. Smith identifies four key questions that businesses should ask in order to hold themselves accountable when using algorithmic decision-making: (1) “How representative is your data set?” (2) “Does your data model account for biases?” (3) “How accurate are your predictions based on big data?” and (4) “Does your reliance on big data raise ethical or fairness concerns?” In addition, businesses should protect their algorithms from unauthorized use or abuse, and should consider engaging outside, objective observers to test the fairness and accuracy of their algorithms.
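By way of illustration only (the following example is ours, not part of the FTC’s guidance), one common way to screen model outcomes for the kind of disparate impact Smith describes is to compare favorable-outcome rates across groups, for instance against the familiar “four-fifths” (80%) benchmark used in other legal contexts. The short Python sketch below uses hypothetical decision data and group labels, and the 0.8 threshold is an assumption for demonstration purposes, not a legal standard endorsed in the post.

```python
# Illustrative only: a simple disparate-impact ("four-fifths rule") screen.
# The decision data, group labels, and 0.8 threshold are hypothetical
# assumptions for demonstration, not part of the FTC guidance summarized above.

from collections import defaultdict

# Hypothetical (group, approved?) outcomes produced by an automated decision tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, approved in decisions:
    counts[group]["total"] += 1
    counts[group]["approved"] += int(approved)

# Approval rate per group, with the highest rate as the comparison baseline.
rates = {group: c["approved"] / c["total"] for group, c in counts.items()}
highest_rate = max(rates.values())

# Flag any group whose approval rate falls below 80% of the highest group's rate.
for group, rate in sorted(rates.items()):
    ratio = rate / highest_rate
    status = "review for possible disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```

A screen like this is only a starting point; as Smith’s guidance makes clear, businesses should also examine the inputs to their models (including proxies for protected characteristics) and consider independent, objective testing of their algorithms.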

Key Takeaways

While artificial intelligence and algorithmic decision-making can help businesses operate efficiently and reduce costs, businesses should keep the above guidance in mind and take a proactive approach, both to protect themselves from litigation risk (whether in the form of consumer class actions or FTC enforcement actions) and as part of good corporate governance in a data-driven age.

[1] “Using Artificial Intelligence and Algorithms,” Business Blog, Federal Trade Commission (April 8, 2020), https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms?utm_source=govdelivery.

[2] Id.

[3] Id.

[4] Id.

[5] Id. (emphasis in original).

[6] Id.

[7] Id.

[8] Id.

[9] Id.