Artificial Intelligence (AI) is one of those high-risk, high-reward opportunities that is irresistible and inevitable. It inspires wonder-filled dreams and liability nightmares.
On a basic level, AI has been part of our lives for decades. Search engines used for research or for finding a good place to eat, online symptom checkers, and supply chain management tools all involve AI, and we tend to rely on their output.
Recently, more sophisticated AI systems have been programmed to learn and improve on their own. They are fed massive amounts of data, can read, write, and speak human languages, and can even create original content. These developments have raised the bar and our hopes. They have also raised concerns among governments and societies, not only about unintended consequences but also about the danger posed by AI's power in the wrong hands.
Putting aside for a moment the loftier moral and global promises and challenges of AI, it is now part of our world and our businesses. Companies of all sizes face the need to take advantage of AI's promise in a way that manages its precariousness.
Adopting an AI governance program is critical to leveraging AI's capabilities while minimizing liability risk.
Managing AI Risk
Errors in programming and data input are often human, and machines are not yet sufficiently aware or smart enough to recognize those errors. We can review machine output, but sometimes we ourselves are not sufficiently aware or smart enough to recognize its falsity. This difficulty in understanding AI, along with the pace of development, makes the situation fluid, uncertain, and ambiguous.
As a result, AI may seem overwhelming and paralyzing, but implementing a governance program to manage its risks is quite doable. As emphasized by government agencies and ongoing legal actions, many existing laws apply to AI, including to training data, inputs and outputs, and models and algorithms. We know the legal pitfalls developers and users can look for and work to avoid.
We also know the developing legal landscape. New laws and proposed regulations related to AI are coalescing around very similar principles:
- Accuracy and reliability
- Safety and accountability
- Fairness and non-discrimination
- Transparency and explainability
- Privacy and security
Similarly, accumulated risk management wisdom coalesces around a set of classic actions:
- Map AI systems
- Identify, measure and manage risks
- Periodically review and revise governance program
The last step, reviewing and revising the program, allows for the adjustment needed as technology, law, and our own knowledge develop.
Companies can reap the competitive and efficiency rewards of AI while managing its risk. Responsible and proactive businesses will begin immediately to create and implement a strong AI governance program. Machines and applications are proliferating at a pace that, over time, will make starting the task more difficult and may significantly increase liability risk.
How We Can Help
Frost Brown Todd stands ready to work with you and your business to create and document your AI governance program. Our knowledgeable attorneys will collaborate with you to perform any or all of the following tasks for your company:
- Initial Overview Review. Includes interviewing relevant staff, identifying and describing the AI applications used by the organization, and issuing a report setting forth organized steps to create and implement an AI governance program.
- Design of AI Governance Program. Includes starting with the initial report described above and working with identified staff to characterize the AI applications in more detail, analyze and prioritize risks in each case, and recommend risk management tools for avoidance, mitigation or acceptance of each risk.
- Implementation of AI Governance Program. Includes working with staff to implement the risk management decisions made, including drafting organizational policies and procedures. These policies and procedures can also be integrated with the organization's privacy, cybersecurity, and ESG policies and procedures.
In addition to helping you establish an AI governance program, Frost Brown Todd can provide contract and transaction support designed to further minimize AI risk. All contracts that involve AI, including M&A transactions, will benefit from appropriate diligence and specific AI-related provisions (e.g., reps and warranties, indemnification, and insurance).
For more information, contact any attorney with Frost Brown Todd’s Digital Assets & Technology practice group.