Opaque black-box models feed scepticism towards AI among the general public and prevent us from fully exploiting AI's powerful possibilities. In the past, we often focussed on building the newest, most ingenious, and most exciting models. Nowadays, our attention has to shift towards building ethical models: models that we can explain and for which we can comfortably take full responsibility. The field of explainable artificial intelligence (XAI) is concerned with explaining (black-box) models. There are many solutions for explaining your model, but none of them is one-size-fits-all. Python, however, offers many ways to get started with XAI. In this talk, Nino van Halem will show the most frequently used libraries and frameworks for XAI in Python.
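To give a flavour of what such libraries make possible, here is a minimal sketch of one widely used model-agnostic XAI technique, permutation importance, using scikit-learn. The library, dataset, and model choice are illustrative assumptions on my part, not necessarily what the talk covers.

```python
# Illustrative sketch (assumed example, not from the talk): explaining a
# "black box" model with permutation importance via scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose internals are hard to inspect directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the held-out score drops -- a simple, model-agnostic explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)

# Rank features by the mean drop in score when they are shuffled.
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.4f}")
```

Because the technique only needs predictions and a score, it works with any fitted model; more specialised libraries (e.g. SHAP or LIME) build on similar model-agnostic ideas.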
About Nino van Halem
After completing his master's programme in Artificial Intelligence, Nino joined the Netherlands Forensic Institute (NFI) as a software developer and data scientist. After two years, he left the NFI to join the Rijks ICT Gilde (RIG), where he currently works as a data scientist. His interests include deep learning and explainable AI.