Tags: Artificial Intelligence, Best Practices and Critical Evaluation
Abstract:
There is growing concern about, and calls for, standard practices in evaluating Artificial Intelligence (AI) models and systems, especially those with ‘black-box’ style operation whose inner workings lack direct human interpretability. As the construction of intelligent systems continues to become cheaper and easier with commercially available products, the task of qualifying these systems has fallen to developers and providers who may lack the necessary industrial domain expertise and/or the motivation to evaluate these systems critically, in a manner that is objective, easily interpretable, and directly comparable to similar products. This talk will highlight some of the most common pitfalls and mistakes made when creating or deploying an AI solution. Even with little to no understanding of the inner workings of these ‘black-box’ architectures, many common tests and evaluation philosophies can identify potential problems with AI solutions early in the evaluation stage, before any deleterious effects take hold.
Recognizing and Avoiding Common Pitfalls of AI Development