This page introduces the projects I have been involved in to date. Since about 2003, I have engaged in many data mining projects at IBM Research across many different industries. Our activities are a unique example of basic research and real business creating a nice synergy.
Currently, I am part of Trusted AI, IBM Research AI, at the IBM T. J. Watson Research Center. As the name implies, the team’s mission is to make AI more trustworthy and thus practical. The current project aims to develop a new causal learning framework at the intersection of point processes, graphical causal modeling, and graph neural networks. We work closely with Prof. Rose Yu and her student Dongxia Wu from UC San Diego.
This is another project that falls into the category of condition-based maintenance of industrial systems. In this project, we took on a new type of data: stochastic events from computer systems (computer alerts, warnings, etc.). I developed a new causal analysis framework called the Hawkes-Granger model to answer the question of “who caused this?” for event sequences (see the figure below). We applied the model to event grouping for alert and warning events from IBM’s cloud system.
The details of the technology and the use-case scenarios are described in my NeurIPS paper. The technology has been delivered to the IBM Watson Core library.
Tsuyoshi Idé, Georgios Kollias, Dzung T. Phan, Naoki Abe, “Cardinality-Regularized Hawkes-Granger Model,” Advances in Neural Information Processing Systems 34 (NeurIPS 21, Dec 6-14, 2021, virtual), pp.2682-2694 [slides, poster].
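To give a flavor of how a Hawkes-type model answers “who caused this?”: in a multivariate Hawkes process, the intensity of each event type decomposes into a background rate plus kernel contributions from past events, and normalizing those contributions yields triggering probabilities for any given event. Below is a minimal sketch with made-up parameters and an exponential kernel; it is only an illustration of the general idea, not the cardinality-regularized estimator of the paper.

```python
import numpy as np

# Intensity of event type u at time t:
#   lambda_u(t) = mu[u] + sum over past events (t_i, u_i) of
#                 A[u, u_i] * beta * exp(-beta * (t - t_i))

def trigger_probabilities(events, target_idx, mu, A, beta):
    """For the event at target_idx, return the probability that the
    background rate (index 0) or each earlier event triggered it."""
    t, u = events[target_idx]
    contribs = [mu[u]]  # index 0 = background rate
    for t_i, u_i in events[:target_idx]:
        contribs.append(A[u, u_i] * beta * np.exp(-beta * (t - t_i)))
    contribs = np.array(contribs)
    return contribs / contribs.sum()

# Two event types; type 1 strongly excites type 0 (hypothetical numbers).
mu = np.array([0.1, 0.1])
A = np.array([[0.0, 0.8],
              [0.0, 0.0]])
events = [(0.0, 1), (0.5, 0)]  # a type-1 event, then a type-0 event
p = trigger_probabilities(events, 1, mu, A, beta=1.0)
print(p)  # p[1] >> p[0]: the earlier type-1 event is the likely cause
```

The same normalization trick underlies the branching-structure interpretation of Hawkes processes, which is what makes per-event causal attribution possible.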
Explainable AI for IoT (2018-2020)
We collaborated with the IBM Watson IoT business unit. On the Research side, I led a project to develop methods and algorithms that explain unusual events detected by a black-box prediction model y=f(x), with building energy management as the main application. I invented a fundamentally new approach to explaining detected anomalies, which, combined with an uncertainty quantification method, has been delivered to two IBM products: IBM TRIRIGA Building Insights and IBM Watson OpenScale.
My AAAI paper nicely summarizes the technology we developed:
Tsuyoshi Idé, Amit Dhurandhar, Jiri Navratil, Moninder Singh, Naoki Abe, “Anomaly Attribution with Likelihood Compensation,” In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 21, February 2-9, 2021, virtual), pp.4131-4138 [slides, poster, official version, code].
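As a rough illustration of the anomaly-attribution setting (a toy sketch only, loosely inspired by the idea of compensating a deviation, and not the likelihood-compensation algorithm of the paper): given a black-box model f and an anomalous observation (x, y), one can search for the smallest input correction delta that makes f(x + delta) consistent with y, and read delta off as a per-feature responsibility score. All function names and numbers below are hypothetical.

```python
import numpy as np

def attribute(f, x, y, lam=0.1, lr=0.05, steps=500, eps=1e-4):
    """Find delta minimizing (y - f(x+delta))^2 + lam*||delta||^2
    by gradient descent with numerical gradients."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x); d[i] = eps
            Lp = (y - f(x + delta + d))**2 + lam*np.sum((delta + d)**2)
            Lm = (y - f(x + delta - d))**2 + lam*np.sum((delta - d)**2)
            grad[i] = (Lp - Lm) / (2*eps)
        delta -= lr * grad
    return delta

# Hypothetical black-box model: f(x) = 2*x0 (feature 1 is ignored).
f = lambda x: 2.0 * x[0]
x = np.array([1.0, 1.0])
y = 4.0                   # observed value deviates from f(x) = 2
delta = attribute(f, x, y)
print(delta)              # most responsibility lands on feature 0
```

The ridge penalty keeps the correction minimal, so responsibility concentrates on the features that can actually explain the deviation.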
AI for Blockchain (2018)
In 2015, IBM published a white paper beautifully titled “Device Democracy,” which celebrated the profound impact of the general concept of Blockchain on the IT industry. I proposed new research and business directions to advance Blockchain from a mere transaction management system to a decentralized platform for value co-creation among participants, in which machine learning plays a critical role.
Although my business proposal did not work out internally, it gained significant traction in the research community. I gave an invited talk at the IEEE International Symposium on Blockchain in 2021 under the title “Decentralized Collaborative Learning with Probabilistic Data Protection.” The extended abstract was published as a full paper in the proceedings of the prestigious IEEE International Conference on Smart Data Services (SMDS 21).
Tsuyoshi Idé, Rudy Raymond, “Decentralized Collaborative Learning with Probabilistic Data Protection,” In Proceedings of the 2021 IEEE International Conference on Smart Data Services (SMDS 21, September 5-10, 2021, virtual), pp.234-243 [slides, IEEE Xplore].
The technical concept has also appeared in IJCAI 19, one of the world’s top AI conferences:
Tsuyoshi Idé, Rudy Raymond, Dzung T. Phan, “Efficient Protocol for Collaborative Dictionary Learning in Decentralized Networks,” Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 19, August 10-16, Macao, China), pp.2585-2591 [slides, poster].
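For intuition on the decentralized setting, here is a minimal sketch of consensus (gossip) averaging, the server-free communication primitive that underlies many collaborative-learning protocols: each node repeatedly averages its local statistic with its neighbors’, and all nodes converge to the global mean without any central coordinator. This is a generic illustration on a made-up four-node ring, not the dictionary-learning protocol of the papers above.

```python
import numpy as np

def gossip_average(values, adjacency, rounds=100):
    """Iterate x <- W x with doubly-stochastic Metropolis weights
    built from the communication graph."""
    x = np.array(values, dtype=float)
    n = len(x)
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    for _ in range(rounds):
        x = W @ x
    return x

# Ring of 4 nodes, each holding one local statistic.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x = gossip_average([1.0, 2.0, 3.0, 6.0], A)
print(x)  # every node approaches the global mean 3.0
```

In a data-protecting variant, each node would exchange a randomized or compressed statistic instead of the raw local value; that layer is omitted here.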
Smarter Manufacturing (2015-2017)
About a decade after I started the Sensor Data Analytics project, the world finally began realizing the huge potential of AI in the manufacturing industries. A lot of “new” business proposals full of fancy terms like Industry 4.0 and Smarter Manufacturing came out. Before Industry 4.0 became a buzzword, I had already done hundreds of customer engagements and invented many machine learning algorithms for specific business problems. I had even gained considerable experience in negotiating legal terms in customer engagements. This may be another example of how challenging it is to productize a new technology in a timely manner.
For a general overview of my activities, including a few projects I led in the US, see the presentation slides of my invited talk at an international conference:
Tsuyoshi Idé, “Recent advances in machine learning from industrial sensor data,” The 12th ICME International Conference on Complex Medical Engineering (CME 2018, September 6-8, 2018), Shimane, Japan [slides].
Service Delivery & Risk Analytics (2013-2014)
In 2013, I was appointed as the manager of Service Delivery & Risk Analytics at the IBM T. J. Watson Research Center, New York, USA. The major goal of my team was to improve the current practice of IT (information technology) service delivery using machine learning. Smarter IT service management was one of the three strategic focuses of Services Research in IBM Research at that time, and is in fact an area where machine learning can make a huge difference.
As the manager, I led two major initiatives. The first one concerned the solution design phase of IT service delivery. I developed a new algorithm for project risk prediction based on questionnaire data generated in IBM’s quality assurance process (see the figure), which can be viewed as among the earliest works in the field of AI explainability.
The algorithm leverages a psychometric approach called item response theory and opened a new door to questionnaire data analytics. For details, see the KAIS paper:
Tsuyoshi Idé and Amit Dhurandhar, “Supervised Item Response Models for Informative Prediction,” Knowledge and Information Systems, pp.1-23, 2016 [link, slides for related paper].
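For readers unfamiliar with item response theory, the core object is a per-item response curve: the probability that a respondent with latent ability theta answers item j positively, governed by the item’s discrimination a_j and difficulty b_j. Below is a minimal two-parameter logistic (2PL) sketch with made-up parameters; the paper builds a supervised prediction model on top of this family.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """P(response = 1 | ability theta) under the 2PL model:
    sigmoid(a * (theta - b)) for each item."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

a = np.array([1.5, 0.7])   # item discriminations (hypothetical)
b = np.array([0.0, 1.0])   # item difficulties (hypothetical)
probs = irt_2pl(0.0, a, b)
print(probs)  # item 1 at exactly 50%; item 2 below 50% (it is harder)
```

Fitting a and b from answer sheets is what turns raw questionnaire responses into an interpretable latent-trait model.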
The other initiative was about the service delivery phase. In collaboration with the team members, I developed a text mining approach to IT service tickets. See, e.g.,
Kuan-Yu Chen, Ee-Ea Jan, Tsuyoshi Idé, “Probabilistic Text Analytics Framework for Information Technology Service Desk Tickets,” Proceedings of the 14th IFIP/IEEE International Symposium on Integrated Network Management (IM 2015), 2015, pp.870-873.
Analytics & Optimization (2010-2013)
During 2010-2013, I was the head of the AI department (the Analytics & Optimization group) at IBM Research – Tokyo. The first thing I did was to define a strategic research agenda to which all team members were expected to contribute:
- Analysis of stochastic interacting systems
- Analysis of industrial dynamic systems
For the analysis of stochastic interacting systems, our ultimate goal was to establish a methodology for analyzing complex systems such as societies, cities, and enterprises. I started several new and exciting projects across different industries. Among them, Frugal Intelligent Transportation Systems for Kenya was one of the most successful projects and in fact gained a lot of media coverage. The key concept was “frugal innovation”: instead of relying on expensive social infrastructure in the traditional way, we built a full-fledged ITS based only on cheap Web cameras, empowered with sophisticated image analysis and network inference algorithms (see the figure below).
For more technical details, see:
Tsuyoshi Idé, Takayuki Katsuki, Tetsuro Morimura, and Robert Morris, “City-Wide Traffic Flow Estimation from Limited Number of Low Quality Cameras,” IEEE Transactions on Intelligent Transportation Systems, 18 (2017) 950-959 [link, slides for related paper].
Since establishing fully analytic models is not possible for complex systems, simulation technologies can be a powerful alternative. However, one critical issue is how to validate simulation results. To address this, I was interested in how simulation could be combined with optimization technologies. For example, we may want to optimize the model of individual agents in a multi-agent traffic simulation using sophisticated machine learning technologies, possibly through a method similar to Bayesian optimization. It was my great honor to have the opportunity to launch a new Strategic Initiative in this area in the Math department of Global IBM Research.
For the analysis of industrial dynamic systems, major research topics included sensor data analytics and production optimization, which are particularly important in the Japanese market. My own research, including anomaly detection and trajectory analytics, plays a critical role in real production systems, e.g., ClassNK’s ship maintenance system.
Sensor Data Analytics (2005-2013)
After wrapping up the Autonomic Computing project, I started a new project, Data Analytics for Quality Control, or simply Sensor Data Analytics, which aimed at improving product quality, mainly in the manufacturing industries, by taking full advantage of advanced analytics for sensor data. It started as a single-person project but eventually grew into a major corporate-wide initiative, thanks to many colleagues’ efforts.
One of the most important works in this period was the development of “proximity-based” anomaly detection algorithms. In particular, in the SDM paper below, I first introduced sparse graphical models in the context of correlational anomaly detection.
Tsuyoshi Idé et al., “Proximity-Based Anomaly Detection using Sparse Structure Learning,” Proceedings of 2009 SIAM International Conference on Data Mining (SDM 09), pp.97-108 [slides].
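The gist of proximity-based scoring with a sparse graphical model can be sketched as follows: fit a sparse precision matrix on normal data (here with scikit-learn’s GraphicalLasso as a stand-in for the structure-learning step), then score each variable of a test sample by the negative log of its Gaussian conditional density given all the other variables. The regularization strength and the toy data below are illustrative only, not the paper’s setup.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Normal data: variables 0 and 1 are strongly correlated, 2 is independent.
rng = np.random.default_rng(0)
normal = rng.multivariate_normal([0, 0, 0],
                                 [[1.0, 0.8, 0.0],
                                  [0.8, 1.0, 0.0],
                                  [0.0, 0.0, 1.0]], size=500)
model = GraphicalLasso(alpha=0.05).fit(normal)
P = model.precision_                    # sparse precision matrix
mu = normal.mean(axis=0)

def variable_scores(x):
    """Per-variable anomaly score: -log p(x_i | x_rest) under the
    fitted Gaussian graphical model."""
    scores = np.empty(len(x))
    for i in range(len(x)):
        # Gaussian conditional mean of x_i given the other variables.
        cond_mean = mu[i] - (P[i] @ (x - mu) - P[i, i]*(x[i] - mu[i])) / P[i, i]
        scores[i] = 0.5*np.log(2*np.pi/P[i, i]) + 0.5*P[i, i]*(x[i] - cond_mean)**2
    return scores

# Break the 0-1 correlation: variables 0 and 1 should score high, 2 low.
s = variable_scores(np.array([2.0, -2.0, 0.0]))
print(s)
```

The sparsity of the learned precision matrix is what localizes each score to a variable’s true neighbors, which is the key to interpretable per-sensor anomaly scores.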
The project was one of the world’s earliest systematic efforts toward data-driven management of the Internet of Things (IoT). When I initiated the project, utilizing data in the IoT domain simply meant developing a centralized database to store the attributes of numerous parts of production equipment. Almost a decade after my proposal and successful customer projects, many people finally started realizing how advanced analytics combined with sophisticated database systems can bring a revolutionary change to the way of doing business.
Automated Analysis Initiative (AAI; 2003-2004)
This project aimed to develop a general framework for analyzing sensor data, with particular attention to the automotive industry. Unlike the previous project, we had a clearly defined research agenda. I introduced the new notion of change-point correlation (see the SDM paper below). The method is designed to nicely handle the heterogeneity across different sensor signals, which is quite common in industrial physical systems involving many different physical quantities such as temperature and pressure. My work became a critical part of the framework, which was later productized as the IBM Parametric Analysis Center. The success of this attempt motivated my next project, Sensor Data Analytics.
Tsuyoshi Idé, “Knowledge Discovery from Heterogeneous Dynamic Systems using Change-Point Correlations,” Proceedings of 2005 SIAM International Conference on Data Mining (SDM 2005), pp.571-576 [slides].
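The idea of change-point correlation can be sketched in a few lines: convert each heterogeneous raw signal into a unit-free change-point score series, then correlate the score series rather than the raw values. The score below is a simple difference of sliding-window means chosen for brevity; the paper uses a more sophisticated, singular-spectrum-style score, but the normalization-and-correlate structure is the same.

```python
import numpy as np

def change_score(x, w=10):
    """Unit-free change score: |mean of next window - mean of past window|,
    normalized by the signal's overall scale."""
    s = np.zeros(len(x))
    for t in range(w, len(x) - w):
        s[t] = abs(x[t:t+w].mean() - x[t-w:t].mean())
    return s / (x.std() + 1e-12)

rng = np.random.default_rng(1)
n = 200
temp = 20.0 + 0.1*rng.standard_normal(n)     # temperature-like signal
pres = 1000.0 + 5.0*rng.standard_normal(n)   # pressure-like signal
temp[100:] += 1.0                            # simultaneous change at t=100
pres[100:] += 50.0

# Raw units differ wildly, but the change scores line up.
r = np.corrcoef(change_score(temp), change_score(pres))[0, 1]
print(r)
```

Because both score series are dimensionless, a temperature sensor and a pressure sensor become directly comparable, which is exactly the heterogeneity problem the method addresses.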
Autonomic Computing (2002-2003)
This project was a company-wide initiative aimed at handling the growing complexity of computer systems. Although I had just started in this new area, having moved from the totally different field of physics/optics, I managed to set a research agenda that would be useful in the domain: anomaly detection for system monitoring. The KDD paper below, my very first paper in computer science, was written in this project.
The paper is well known as one of the first works on subspace-based anomaly detection and has earned 300+ citations as of 2022. It is also one of the first works to leverage directional statistics for scoring anomalies.
Tsuyoshi Idé and Hisashi Kashima, “Eigenspace-based Anomaly Detection in Computer Systems,” Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2004), pp. 440-449 [slides].
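The essence of the eigenspace approach can be sketched as follows: normalize each activity vector to unit length so that only its direction matters, track the principal subspace of recent vectors via an SVD, and flag a new vector whose direction leans out of that subspace. The data and the subspace dimension below are made up for illustration.

```python
import numpy as np

def anomaly_score(history, x, k=1):
    """1 minus the squared norm of the projection of the unit vector x
    onto the top-k principal subspace of the columns of history."""
    U, _, _ = np.linalg.svd(history, full_matrices=False)
    x = x / np.linalg.norm(x)
    proj = U[:, :k].T @ x
    return 1.0 - float(proj @ proj)

# History: traffic always concentrated on services 0 and 1 in a 2:1 ratio.
H = np.array([[2.0, 2.1, 1.9],
              [1.0, 0.9, 1.1],
              [0.0, 0.0, 0.1]])
typical = anomaly_score(H, np.array([2.0, 1.0, 0.0]))
unusual = anomaly_score(H, np.array([0.0, 0.0, 1.0]))
print(typical, unusual)  # ~0 for the typical direction, ~1 for the anomaly
```

Since the score depends only on the angle to the principal subspace, it is insensitive to overall traffic volume, which is what makes the directional-statistics treatment natural.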
Collimated backlight (2000-2001)
The goal of this project was to develop a new type of backlighting system achieving the world’s highest light utilization. To that end, the team led by Dr. Yoichi Taira invented a novel backlighting approach named the “collimated backlight.” However, as a result of its ultra-efficient design, it turned out that maintaining luminance uniformity over the entire surface of the display was extremely hard. In particular, it suffered from visible Moiré patterns on the display, which are caused by optical interference between the light guide and the grid-like circuit pattern of LCDs.
Despite all kinds of desperate efforts, the team could not find any practical solution to the issue. The management gave me an opportunity to take on this challenge. It was a stretch for a new hire from theoretical physics. Fortunately, I managed to invent a novel approach that efficiently removes the Moiré patterns. The key idea was to use a particular irregular dot pattern for the light scatterers. The above figure compares the conventional method and our approach; the difference in uniformity is evident. One interesting observation was that mathematically defined random numbers do not necessarily look random to the human eye. My approach is based on a mathematical theory to control the level of irregularity, as well as a molecular dynamics simulation. See the presentation slides for details.
The invention was delivered to IBM’s Display Business Unit and became part of the ThinkPad A30p, which was the world’s first laptop PC equipped with a UXGA IPS display.
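To illustrate the point that mathematically defined random numbers do not look uniform to the eye, the sketch below compares uniform random points with a low-discrepancy (Halton) sequence, the family of point sets referred to in the papers below. The production dot patterns were further relaxed with a molecular dynamics simulation, which is not reproduced here.

```python
import numpy as np

def halton(n, base):
    """First n terms of the van der Corput sequence in the given base
    (one coordinate of a Halton point set)."""
    seq = []
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq.append(r)
    return np.array(seq)

def min_spacing(pts):
    """Smallest pairwise distance in a 2-D point set."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

n = 200
rng = np.random.default_rng(0)
random_pts = rng.random((n, 2))
halton_pts = np.column_stack([halton(n, 2), halton(n, 3)])
# Purely random dots tend to clump; the low-discrepancy set typically
# keeps a larger minimum spacing while still looking irregular.
print(min_spacing(random_pts), min_spacing(halton_pts))
```

Controlling this kind of even-but-irregular spacing is what suppresses both visible clumps and the periodic structure that would otherwise interfere with the LCD grid.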
- T. Idé, An essay on the development of a dot-pattern generation method (in Japanese)
- T. Idé et al., “Dot pattern generation technique using molecular dynamics,” Journal of the Optical Society of America A, 20 (2003) 242-255.
- T. Idé, et al., “Moire-Free Collimating Light Guide with Low-Discrepancy Dot Patterns,” Digest of Technical Papers (Society for Information Display, Boston, 2002), pp. 1232-1235 [slides].