
Mapping out a Trip to a Digital & Intelligent Future

By Anirban Nandi – Head of Analytics (Vice President) at Rakuten India

In the journey towards a digital and intelligent future, understanding the mechanisms that drive technological advancements is crucial. While terms like Artificial Intelligence (AI) and digitization have become common in product management, understanding their detailed and intricate workings remains a challenge for many. For a long time, complexity and simplicity did not go hand in hand, but that is changing, especially in the era of AI. The emergence of Explainable AI (XAI) offers insights into the otherwise opaque decisions made by AI systems.

Key Takeaways:

  • The use of AI has roots going back to the 1950s and before, when statistical models were first developed and ML research began.
  • Challenges with AI systems include a lack of explainability and trust, which hampers our ability to fully have faith in them; unclear security, privacy, and ethical regulations; and bias introduced through the input data.
  • Explainable AI or XAI is an emerging field in machine learning that aims to address how black box decisions of AI systems are made. It converts black-box decisions into white-box decisions. 
  • Different aspects of interpretable AI are fairness, accountability, and transparency.
  • Some common ML interpretability methods include SHAP, MAGIE, and counterfactual explanations.
  • The advantages of using interpretability tools/products include ease of use, flexibility, customizability, and comprehensive capabilities.
  • Some famous companies with interpretability tools/products include Mixpanel’s anomaly explanations, Tableau’s explain data, and Clearbrain.

    The History of AI

    We have been hearing the word AI a lot recently, but its history is long. It began even before the 1950s, when statistical models were discovered and ML research started. The AI winters, first in the 1970s and again towards the later part of the 1980s, were periods when much of the investment around AI stopped and many people lost belief in its power. Fortunately, from 2000 onwards AI has been booming, with supervised ML methods becoming widespread, and since 2010 deep learning has become very popular.

    Currently, AI is everywhere: augmented reality, virtual reality, automation, and more. It has permeated all the fields that we touch in our day-to-day lives. Yet even with this huge increase in the use of AI, we as humans find it difficult to comprehend how AI makes decisions.

    Challenges faced with AI systems

    Even with the huge increase in the use of AI, humans often find it difficult to understand how these systems make decisions. The main challenges faced when handling AI are:

    • Opacity around how AI systems make their decisions
    • A lack of explainability and trust, which hampers our ability to fully have faith in AI systems
    • Unclear security, privacy, and ethical regulations
    • Bias in AI systems introduced through the input data

    Explainable AI (XAI)

    Explainable AI is an emerging field in machine learning that aims to address how the black-box decisions of AI systems are made; it converts black-box decisions into white-box decisions. This area inspects and tries to understand the steps and models involved in making decisions. The digital and intelligent future has a lot to do with explainable AI, because most of the organizations in the world are driven by people. If you want to convert these people-driven organizations into data-driven organizations, explainable AI plays a very important role.

    XAI aims to produce more explainable models while maintaining a high level of learning performance (prediction accuracy), and to enable human users to understand, trust, and effectively manage the emerging generation of artificially intelligent partners.

    Why explain AI Systems?

    The need to explain AI systems may stem from four important reasons:

    1. Explain to Justify: XAI ensures that there is an auditable and provable way to defend algorithmic decisions as being fair and ethical, which leads to building trust.

    2. Explain to Control: Understanding more about system behavior provides greater visibility over unknown vulnerabilities and flaws, and helps to rapidly identify and correct errors, thus enabling control.

    3. Explain to Improve: Since users know why the system produced specific outputs, they will also know how to make it smarter. Thus, XAI could be the foundation for ongoing iteration and improvement between humans and machines.

    4. Explain to Discover: Asking for explanations is a helpful tool to learn new facts, gather information, and thus gain knowledge.

    Academic Publications for XAI/ML

    Explainable AI as a concept contains within it the narrower concept of interpretable machine learning. Academic publications around explainable AI have been growing significantly since 2005, with a particularly sharp rise in the last three years. Interest in interpretable ML algorithms, a subset of explainable AI, has risen even faster, because that is where data scientists, data analysts, data engineers, product managers, and software engineers are actually implementing these ideas.

    Different Aspects of Interpretable AI

    Interpretable ML revolves around trust, which in turn rests on three aspects: fairness, accountability, and transparency.

    1. Fairness: You need to make sure that predictions are made without discriminatory bias; in other words, we want unbiased predictions.

    2. Accountability: If we make a prediction, can we trace it back? You should be able to hold yourself, your model, and your algorithm accountable for any decision they make.

    3. Transparency: The final aspect is transparency, which means giving the business and the people affected full visibility into why certain predictions are made. For example, say you apply for a home loan at a bank and the loan gets rejected. With explainable AI, the bank can explain to the customer which parameters led to the rejection (a small sketch of this idea follows below).
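    To make the home-loan example concrete, below is a hypothetical Python sketch (synthetic data and invented feature names, not from this article) of how a transparent model can explain one decision: with logistic regression, each feature's contribution is simply its coefficient times the applicant's value, which can be reported back to the customer.

# Hypothetical sketch: per-feature contributions of a logistic regression
# to a single (synthetic) loan decision. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "credit_score", "existing_debt", "loan_amount"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic ground truth: approval is likelier with higher income and credit
# score, and lower existing debt and loan amount.
y = (X @ np.array([1.5, 2.0, -1.5, -1.0]) + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

applicant = scaler.transform(X[:1])[0]          # one applicant's scaled features
contributions = model.coef_[0] * applicant      # push toward approval (+) or rejection (-)
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")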

    Taxonomy of Interpretable ML

    Algorithms can be divided into several groups based on different properties:

    1. Scope related:

      • Global: Methods that explain the model across the full dataset
      • Local: Methods that deal with an instance or a group of instances

    2. Application related:

      • Certain explanation capabilities can only be provided for certain models.
      • Model specific/ model agnostic: Some methods apply specifically to the model while others can be used universally for all models.
      • Pre/Intrinsic/Post hoc: Pre methods are applied before model training, intrinsic methods are embedded into the model itself, while post hoc methods, the most common, are applied after the model has been trained.

    3. Algorithm related:

      • Feature importance: Methods that explain models based on the importance of individual features (see the sketch after this list).
      • Rule-based: Methods that output if-else rules to explain models and their decisions.
      • Counterfactuals: Methods that generate contrastive explanations to understand the effect of feature changes on the model.
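    To ground the taxonomy, here is a minimal Python sketch of one combination from it: a global, model-agnostic, post hoc, feature-importance method, using scikit-learn's permutation importance. The dataset and model are placeholders chosen for illustration.

# Minimal sketch: global, model-agnostic, post hoc feature importance via
# permutation importance. Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a global view of importance that works for any fitted model (post hoc).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")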

    Different ML Interpretability Methods

    1. SHAP

    SHAP, or SHapley Additive exPlanations, is a method to explain individual predictions based on game-theoretically optimal Shapley values, computing the contribution of each feature to the prediction.
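    A minimal Python sketch of this, assuming the open-source shap package is installed and using a placeholder dataset and tree model rather than the article's own example:

# Minimal sketch: Shapley-value contributions for one prediction using the
# `shap` package (assumed installed). Dataset and model are placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])   # contributions for one row

# Each value is that feature's contribution to this prediction, relative to
# the average prediction (explainer.expected_value).
print(dict(zip(X.columns, shap_values[0])))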

    2. MAGIE

    MAGIE (Model Agnostic Globally Interpretable Explanations) is an approach that learns if-then rules to explain the behavior of black-box machine learning models used to solve classification problems. The rules are learned by a genetic algorithm that tries different combinations of conditions to optimize a fitness function.

    It gives flexible output, letting you choose between different combinations of metrics and rules.
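    The sketch below is a toy illustration of that idea in Python, not MAGIE itself: candidate if-then conditions are scored against a black-box model's predictions with a simple fitness function (precision times coverage), and a crude random search stands in for the genetic algorithm's crossover and mutation. The dataset, model, and fitness definition are all illustrative assumptions.

# Toy illustration (not MAGIE itself): search for an if-then rule that mimics
# a black-box model, scoring candidates with a precision-times-coverage fitness.
import random
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
blackbox = RandomForestClassifier(random_state=0).fit(X, y)
preds = blackbox.predict(X)   # rules try to mimic the model, not the labels

def random_rule():
    # One candidate condition, e.g. ("worst radius", "<=", 16.8)
    feat = random.choice(list(X.columns))
    thr = float(np.quantile(X[feat], random.random()))
    return (feat, random.choice(["<=", ">"]), thr)

def fitness(rule, target_class=1):
    feat, op, thr = rule
    mask = (X[feat] <= thr).to_numpy() if op == "<=" else (X[feat] > thr).to_numpy()
    if mask.sum() == 0:
        return 0.0
    precision = (preds[mask] == target_class).mean()
    coverage = mask.mean()
    return precision * coverage   # precise *and* broad rules score highest

random.seed(0)
# A genetic algorithm would evolve rules over generations; here we simply
# keep the best of many random candidates.
best = max((random_rule() for _ in range(500)), key=fitness)
print("best rule:", best, "fitness:", round(fitness(best), 3))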

    3. Counterfactual Explanations

    These help us understand what would have happened to the model output if an input had been changed in a particular way. The method starts by finding multiple instances belonging to the opposite class that are nearest to the input observation, using a KD-tree. It then validates whether the features that would change are actually allowed to change. Finally, the nearest instance is optimized to obtain an observation with the fewest possible changes.
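    A minimal Python sketch of the first step described above, using a KD-tree from SciPy to find the nearest instance that the model assigns to the opposite class; the dataset and model are placeholders, and a full method would go on to restrict which features may change and minimize the edits.

# Minimal sketch: nearest opposite-class instance via a KD-tree, as a crude
# starting counterfactual for one observation. Data and model are placeholders.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                   # the observation to explain
pred = model.predict(x.reshape(1, -1))[0]

# Index only the instances the model assigns to the opposite class.
opposite = X[model.predict(X) != pred]
tree = cKDTree(opposite)

_, idx = tree.query(x)                     # nearest opposite-class instance
counterfactual = opposite[idx]
deltas = counterfactual - x
top = np.argsort(-np.abs(deltas))[:3]      # the largest required feature changes
print([(int(i), round(float(deltas[i]), 2)) for i in top])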

    Advantages of Using Interpretability Tools/Products

    1. Ease of Use

    Access state-of-the-art interpretability techniques through a platform or an API set along with rich visualizations. 

    2. Flexible and customizable

    Understand models using a wide range of explainers and techniques using interactive visuals. Choose your algorithms and easily experiment with combinations of algorithms.

    3. Comprehensive capabilities

    Explore model attributes such as performance and global and local feature importance, and compare multiple models simultaneously. Run a what-if analysis as you manipulate data and view the impact on the model (a small what-if sketch follows this section).
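    As a small illustration of the what-if idea mentioned above, the Python sketch below perturbs a single feature of one row and re-scores the model; the dataset, model, and the chosen feature ("bmi") are illustrative assumptions, not part of any particular tool.

# Minimal what-if sketch: change one feature of one row and compare the
# model's predictions before and after. Dataset and model are placeholders.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

row = X.iloc[[0]].copy()
baseline = model.predict(row)[0]

# "What if this patient's BMI were higher?" Edit the input and re-score.
row_whatif = row.copy()
row_whatif["bmi"] += 0.05          # features in this dataset are standardized
delta = model.predict(row_whatif)[0] - baseline
print(f"baseline prediction: {baseline:.1f}, change after the what-if: {delta:+.1f}")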

    Who can benefit from Interpretability Tools/Products?

    1. Data scientists 

    Understand models, debug or uncover issues, and explain your model to other stakeholders.

    2. Auditors 

    Validate a model before it is deployed and audit it post-deployment.

    3. Business Leaders

    Understand how models behave in order to provide transparency about predictions to customers.

    4. Researchers 

    Integrate with new interpretability techniques and compare against other algorithms.

    Some famous Companies with Interpretability Tools/Products

    1. Mixpanel’s Anomaly Explanations

    Mixpanel added explanations to its automatic anomaly detection product. Once it detects an anomaly, Mixpanel will dig through data to find only critical drivers so you can save time and act faster.

    2. Tableau’s Explain Data

    Tableau recently launched a new feature ‘Explain data’ to understand the ‘why’ behind a data point. The feature will evaluate hundreds of potential explanations and present a focused set of explanations so you avoid spending time chasing answers that aren’t there.

    3. Clearbrain

    A marketing optimization platform that uses causal analytics to extract the causal reasons behind user behavior. It effectively stands in for an analyst by automatically determining causes, suggesting possible actions, and estimating the benefit of each.

    Tips for Choosing Interpretability Methods for a Model

    While choosing interpretability methods for a model, attempt to answer the following questions first:

      • What is the user’s expertise level?
      • What’s your time limitation?
      • Do you need to understand the whole logic of a model, or do you only care about the reasons for a specific decision?

     

    Hence, to thrive in the digital future, it is crucial to grasp how AI works. Explainable AI acts as a guide, helping us understand complicated algorithms better. This clarity builds trust and transparency and sparks new ideas. As we move forward, embracing interpretability is key to creating a future where people and technology work together smoothly.

    About the Author:

    Anirban Nandi – Head of Analytics (Vice President) at Rakuten India

    Frequently Asked Questions

    What is AI in product management?

    AI in product management utilizes artificial intelligence, deep learning, or machine learning to solve product problems.

    Will AI replace product management?

    It is highly unlikely that AI will fully replace product management. AI is used to assist product managers and enhance aspects of product management, like data analytics and customer support. But it cannot completely replace a product manager.

    What are the challenges of AI product management?

    Challenges associated with AI product management include working with more stakeholders, the ambiguity of outcomes, difficulty in explaining the rationale behind the outcomes, addressing fairness and bias concerns, adapting to new infrastructure and tools, and selecting the right problems to solve with AI.

    How does AI product management differ from traditional product management?

    In traditional product management, product behavior is usually binary and predetermined. AI product management deals with probabilistic outcomes.

    What are some common ML interpretability methods?

    Some common ML interpretability methods include SHAP, MAGIE, and counterfactual explanations.

    What is Explainable AI?

    Explainable AI is an emerging field in machine learning that aims to address how black-box decisions of AI systems are made. It converts black-box decisions into white-box decisions. This area inspects and tries to understand the steps and models involved in making decisions.
