Posted as a LinkedIn article.
I recently attended AAAI-21 to present my work titled "Anomaly Attribution with Likelihood Compensation." The paper falls into the field of explainable AI (XAI), which is one of the hottest research areas in AI. IBM Research has been among the main contributors to the area for several years. My organization, led by Saska Mojsilovic, an IBM Fellow, represents XAI research at IBM.
Broadly speaking, the work aims to explain the behavior of a black-box AI model. This sounds like a familiar topic in XAI, but it turned out that the problem we wanted to solve could not be addressed by existing XAI methods. The task is to explain deviations between the predictions of a black-box model and actual observations, not to explain the black-box model itself, as the presentation slides illustrate.
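To make the problem setting concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the function name `responsibility_scores`, the squared-error objective, and the gradient-descent loop are stand-ins, not the paper's Likelihood Compensation algorithm, which works with a likelihood function rather than this deterministic proxy. The idea it conveys is the same: given a black-box regressor f and an anomalous pair (x, y), find a small input shift delta such that f(x + delta) is reconciled with y, and read each component of delta as a per-feature "responsibility" for the deviation.

```python
import numpy as np

def responsibility_scores(f, x, y, lam=0.1, lr=0.01, n_steps=500):
    """Toy anomaly attribution via input perturbation (illustrative only).

    f       : black-box regressor mapping a feature vector to a scalar
    x, y    : an observed input and its (possibly anomalous) outcome
    lam     : L2 regularization strength keeping the shift small
    Returns a shift `delta` with f(x + delta) ~= y; each component of
    `delta` acts as a per-feature responsibility for the deviation.
    """
    delta = np.zeros_like(x, dtype=float)
    eps = 1e-4
    for _ in range(n_steps):
        # Numerical gradient of the squared deviation w.r.t. delta.
        base = (f(x + delta) - y) ** 2
        grad = np.zeros_like(delta)
        for i in range(len(delta)):
            d = delta.copy()
            d[i] += eps
            grad[i] = ((f(x + d) - y) ** 2 - base) / eps
        # Gradient step on deviation plus L2 penalty on the shift.
        delta -= lr * (grad + 2.0 * lam * delta)
    return delta

# Hypothetical usage: a linear black box and an observation that
# deviates from its prediction.
f = lambda v: 2.0 * v[0] - 1.0 * v[1]
x = np.array([1.0, 1.0])   # f(x) = 1.0
y = 3.0                    # observed value deviates by 2.0
print(responsibility_scores(f, x, y))
```

The contrast with standard XAI methods is that the object being explained is the residual y - f(x), not f itself; attribution methods that explain only the model's prediction have nothing to say about why the observation departed from it.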
This problem came out of a collaboration with the IBM IoT Business Unit. The problem setting, "anomaly attribution," emerged almost directly from discussions with the BU. However, judging from their review comments, many paper reviewers seemed confused by the problem setting itself.
This is interesting. XAI researchers are supposed to be making AI friendlier and more accessible to humans, yet most of them do not seem strongly motivated by how AI technologies are actually used in the real world. This is most likely because they have never been involved in modeling real-world business problems.
I believe that my recent work is one meaningful attempt to shift the center of gravity of XAI research. Someone like me, with deep expertise in both industrial applications and AI algorithms, should contribute to bridging the huge gap between the two. Life is short; I hope I can find a good opportunity to make a difference.