
How can companies get started with explainable AI?

Post by
R&D Team
Learn how companies can implement explainable AI quickly and without spending a fortune

Artificial intelligence has become a vital part of our lives. From image recognition systems to recommendation engines to credit scoring systems, AI affects us daily. With the recent launch of GPT-3, which has shown what AI is capable of, it is becoming increasingly difficult to tell AI and humans apart. The models we are building are growing more complex, but we are still far from understanding their decision rationale. OpenAI, the creator of GPT-3, stepped back from its open-source approach for fear that its powerful AI systems would be misused.

In light of such developments, we need to invest resources in making sure the AI systems we build are responsible, ethical, and understandable to the people who use them or are affected by their decisions.

This is even more relevant in today's business landscape. In a 2018 McKinsey survey, 47% of business executives reported that their company had incorporated at least one AI application into its business functions. Companies are continuously seeking ways to make their processes more efficient, and explainable AI offers the transparency and interpretability needed to deploy AI with confidence. With mounting concerns about security, transparency, and the biases that can undermine algorithmic decision making, business leaders must understand the systems behind human/AI collaboration to remain competitive in an increasingly AI-driven future. This raises a question: "How can I get started?"

Taking Accountability

Companies need to establish a clear set of guidelines aligned with their core values and regulatory constraints. Without explainability, companies are working with opaque black-box systems that provide little insight into how they reach their conclusions. The resulting risk of bias and lack of understanding create distrust among customers and stakeholders. Business leaders should recognize that the decisions their models make are ultimately their responsibility, and they should hold these technologies to a clear standard. For instance, Google publishes a set of guiding principles for its AI applications, including incorporating privacy design principles, avoiding reinforcing bias, and allowing for feedback. Ultimately, establishing such principles guides the explainability process in developing and maintaining reliable AI systems.

Training Employees

Once development and monitoring are in place, explainable AI is put to the test: can people work with this technology, and more importantly, will they understand it?

Employees are increasingly expected to collaborate with smart machines. Accenture found that 74% of federal workers believe it is important for them to develop skills to work with AI. It is therefore crucial for companies to train their workforce so that employees build trust in the technology and become well equipped to operate it. Employees should understand how AI will be integrated into operations and why they should develop the skills needed to turn AI insights into the best possible outcomes. If internal employees don't understand the AI systems, their business stakeholders and customers certainly won't either.

Explain, Debug & Validate Your Black-Box Models

When designing and implementing AI models, companies should keep explainability in mind from the start. Ante-hoc techniques build explainability into a model from the beginning of the design process, for example by choosing inherently interpretable models; post-hoc techniques, covered below, explain a black-box model after it has been trained.
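
As a concrete illustration of the ante-hoc approach, here is a minimal sketch of a model that is interpretable by design: a shallow decision tree whose complete decision logic can be printed and audited. The dataset and tree depth are illustrative choices, not a recommendation.

```python
# A shallow decision tree is interpretable by design: the whole model
# can be printed as human-readable threshold rules. The dataset and
# max_depth here are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction follows one of these rule paths, so the model's
# rationale is visible without any post-hoc explanation step.
print(export_text(tree, feature_names=list(X.columns)))
```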

Research in xAI has been advancing rapidly, spanning input attribution (LIME, SHAP, Integrated Gradients), concept testing and extraction (TCAV, Towards Automatic Concept-based Explanations), example influence and matching (ProtoDash, Attention-Based Prototypical Learning), and counterfactual explanations. You can find most of these techniques in explainX.
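
For post-hoc attribution, here is a minimal sketch using SHAP, one of the libraries named above; the model and dataset are illustrative stand-ins, not a prescribed setup.

```python
# Post-hoc input attribution with SHAP: decompose each prediction into
# additive per-feature contributions relative to a baseline value.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Rank features by mean absolute contribution across the dataset.
importance = abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```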

In addition to choosing the right algorithms, data scientists should train their models with sensitive variables such as gender, age, and race in mind. They are responsible for examining the data that feeds into their algorithms and validating its objectivity to account for potential biases, as sketched below. Additionally, companies can increase transparency by releasing parts of their code as open source or by building on existing peer-reviewed code. Ultimately, open-source code encourages quality assurance and scrutiny.
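
As one example of what such an examination can look like, the sketch below computes selection rates per group for a hypothetical set of model decisions and a protected attribute; a real audit would cover more attributes, metrics, and statistical tests.

```python
# A fairness spot-check on hypothetical data: compare the rate of
# positive decisions across groups of a protected attribute.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,   1,   0,   1,   0,   1,   1,   1],  # model outputs (illustrative)
})

# Selection rate per group; a large gap may signal disparate impact.
rates = df.groupby("gender")["approved"].mean()
print(rates)
print("Disparate impact ratio:", rates.min() / rates.max())
```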

These methods show that there are many ways to design AI systems that can explain the rationale behind their decisions. The most important part comes down to execution: data scientists need to figure out how to combine these techniques into a narrative and a process that effectively explains the model.

Active Monitoring

After setting up standards and integrating explainability, AI models should be monitored for long-term success. We suggest that model monitoring include active monitoring (tracking model behavior to identify anomalies), performance-drift detection (observing KPIs to spot degradation relative to training performance), operational bias review (detecting irregularities that may indicate bias), and model retraining (incorporating new data to account for changes); a sketch of drift detection follows below. All in all, it is not enough to develop explainable models: companies must also account for external and internal changes to ensure their models' continued success.
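
As a sketch of what performance-drift detection can look like in practice, assuming you log model scores from a reference window and a live window (the numbers below are simulated), a two-sample Kolmogorov-Smirnov test can flag when the score distribution has shifted enough to warrant retraining.

```python
# Drift detection sketch: compare the model's score distribution on a
# reference window against a live window with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_scores = rng.normal(0.60, 0.10, 1000)  # simulated history
live_scores = rng.normal(0.52, 0.10, 1000)       # simulated production

stat, p_value = ks_2samp(reference_scores, live_scores)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); consider retraining.")
else:
    print("No significant drift in the score distribution.")
```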

Explainable AI with ExplainX

ExplainX.ai is building an open-source explainable AI library that combines state-of-the-art explainability techniques to help data scientists explain, debug, and validate any black-box machine learning model. With it, businesses can continuously monitor and regulate their AI and ensure they are building responsible, explainable, and ethical systems.

To get started with explainable AI, visit our website and download our open-source explainable AI platform. Also, connect with us on LinkedIn, Twitter, and Slack to join a community of like-minded professionals passionate about building trustworthy and unbiased AI systems.




