Bradley Merrill Thompson, Strategic Advisor with EBG Advisors and Member of the Firm at Epstein Becker Green, presents "Artificial Intelligence: Bias and Explainability in Algorithms" at the 2021 Convergence virtual conference, which runs from September 12 to 15, hosted by the Regulatory Affairs Professionals Society (RAPS).
This discussion grew out of a White House Executive Order encouraging the use of so-called trustworthy AI. The Executive Order specifically assigned NIST the task of defining what trustworthy AI looks like.
NIST has outlined at least seven qualities that make up trustworthy AI:
- reliable
- free from bias
- self-explaining
- robust
- privacy-protecting
- secure
- and an open-ended "anything else"
NIST has started the process by focusing first on explainability and bias, publishing a whitepaper on explainability that was open for public comment last fall.
It is important to have a national conversation about what exactly these goals mean. What constitutes an algorithm that is adequately explainable? What constitutes an algorithm that is adequately free of bias?
These issues are especially important for AI-based medical devices. Explainability, in fact, has a jurisdictional impact when it comes to clinical decision support software: if an algorithm is adequately explainable, FDA does not regulate it. FDA has also been very clear that it is focusing more and more on ensuring that algorithms are free of bias. During this session we will explore what that means specifically.
Learning Objectives:
- Understand design requirements for software in terms of its explainability
- Understand acceptable forms of training data for algorithms in terms of diversity and inclusion, and the avoidance of bias
- Understand the broader framework of what it takes to make artificial intelligence trustworthy
For more information and to register, please visit RAPS.org.