This page introduces the projects I have been involved in to date. Since about 2003, I have engaged in many data mining projects at IBM Research across many different industries. Our activities are a rare example of basic research and real business creating genuine synergy.
Current project
Currently, I am part of Trusted AI, IBM Research AI, at IBM T. J. Watson Research Center. As the name implies, the team’s mission is to make AI more trustworthy and thus practical. My current research interests include privacy in AI (as a follow-up of the Blockchain project) and explainability in AI.
Previous projects
AI for Blockchain (2018)
In 2015, IBM published a white paper beautifully titled “Device Democracy,” which celebrated the profound impact of the general concept of Blockchain on the IT industry. I proposed new research and business directions to advance Blockchain from a mere transaction management system to a decentralized platform for value co-creation among participants, in which machine learning plays a critical role. Here is a summary. One of the research outcomes of that agenda has been published in IJCAI:
Tsuyoshi Idé, Rudy Raymond, Dzung T. Phan, “Efficient Protocol for Collaborative Dictionary Learning in Decentralized Networks,” Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 19, August 10-16, Macao, China), pp.2585-2591 [slides, poster].
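To give a flavor of what collaborative dictionary learning in a decentralized network can look like, here is a minimal, purely illustrative sketch: each node alternates a local dictionary update on its private data with a consensus averaging step over its neighbors, so raw data never leaves a node. This is not the protocol of the IJCAI paper; the ring topology, update rules, and all parameters below are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_samples, n_features, n_atoms = 4, 100, 20, 5
local_data = [rng.normal(size=(n_samples, n_features)) for _ in range(n_nodes)]
# ring topology: each node talks only to its two neighbors
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
# each node starts with its own random dictionary (atoms x features)
dicts = [rng.normal(size=(n_atoms, n_features)) for _ in range(n_nodes)]

for _ in range(50):
    # local step: alternating least squares on private data only
    # (a real protocol would use sparse coding, not dense least squares)
    for i in range(n_nodes):
        X, D = local_data[i], dicts[i]
        C = X @ D.T @ np.linalg.pinv(D @ D.T)         # codes for X ~ C D
        dicts[i] = np.linalg.pinv(C.T @ C) @ C.T @ X  # dictionary update
    # consensus step: average dictionaries with neighbors; no raw data moves
    dicts = [sum([dicts[i]] + [dicts[j] for j in neighbors[i]]) / 3.0
             for i in range(n_nodes)]
```

One subtlety the sketch ignores: naively averaging dictionaries assumes the atoms of different nodes are aligned, which a practical protocol has to take care of.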
Smarter Manufacturing (2015-2017)
About a decade after the launch of the Sensor Data Analytics project at IBM Research – Tokyo, the world finally began realizing the huge potential of AI in the manufacturing industries. I saw business proposals using fancy terms like Industry 4.0 and Smarter Manufacturing over and over again. It was an interesting sensation. Before coming to the US, I had led hundreds of customer engagements with major Japanese companies as a project leader and technical leader; I had invented many solutions to specific business problems using machine learning over a decade; I even had considerable experience negotiating the details of terms and conditions of legal contracts with clients' lawyers. Then, all of a sudden, everyone started talking about the potential of AI as if no one had ever realized it before. This may be a good example of how hard it is, even for the best companies like IBM, to productize a new technology for an emerging business field in a timely fashion.
For a general overview of my activities, including a few projects I led in the US, see the presentation slides of an invited talk:
Tsuyoshi Idé, “Recent advances in machine learning from industrial sensor data,” The 12th ICME International Conference on Complex Medical Engineering (CME 2018, September 6-8, 2018), Shimane, Japan [slides].
Service Delivery & Risk Analytics (2013-2014)
In 2013, I was appointed manager of Service Delivery & Risk Analytics at the IBM T. J. Watson Research Center, New York, USA. The major goal of my team was to improve the current practice of IT (information technology) service delivery using analytics. Smarter IT service management was one of the three strategic focuses of Services Research at IBM Research at that time, and it is in fact an area where analytics can make a huge difference.
As the manager, I led two major initiatives. The first concerned the solution design phase of IT service delivery. I developed a new, efficient, and interpretable algorithm for project risk prediction based on questionnaire data generated in IBM's quality assurance process (see the figure).

The algorithm leverages a psychometrics approach called item response theory, and opened a new door to questionnaire data analytics. For details, see the KAIS paper:
Tsuyoshi Idé and Amit Dhurandhar, “Supervised Item Response Models for Informative Prediction,” Knowledge and Information Systems, pp. 1-23, 2016 [link, slides for related paper].
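To illustrate the item response theory flavor behind the predictor: each binary questionnaire item j is modeled with a discrimination a_j and a difficulty b_j, and a project's latent risk θ is inferred from its answers via the two-parameter logistic model P(answer_j = 1) = σ(a_j(θ − b_j)). The sketch below is only the textbook 2PL model with toy numbers, not the supervised model of the KAIS paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, answers, a, b):
    """Negative log-likelihood of one project's answers under a 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(answers * np.log(p) + (1 - answers) * np.log(1 - p))

# toy item parameters (in practice, learned from labeled historical projects)
a = np.array([1.2, 0.8, 2.0, 1.5])   # discrimination per item
b = np.array([-0.5, 0.0, 0.7, 1.0])  # difficulty per item
answers = np.array([1, 1, 0, 0])     # one project's yes/no questionnaire

theta = minimize(neg_log_lik, x0=0.0, args=(answers, a, b)).x[0]
print(f"estimated latent risk score: {theta:.2f}")
```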
The other initiative concerned the service delivery phase. In collaboration with my team members, I developed a text mining approach to IT service tickets. See, e.g.,
Kuan-Yu Chen, Ee-Ea Jan, Tsuyoshi Idé, “Probabilistic Text Analytics Framework for Information Technology Service Desk Tickets,” Proceedings of the 14th IFIP/IEEE International Symposium on Integrated Network Management (IM 2015), 2015, pp. 870-873.
Analytics & Optimization (2010-2013)
From 2010 to 2013, I led Analytics & Optimization at IBM Research – Tokyo as the manager. I defined two new strategic research areas:
- Analysis of stochastic interacting systems
- Analysis of industrial dynamic systems
For the analysis of stochastic interacting systems, our ultimate goal was to establish a methodology for analyzing complex systems such as societies, cities, and enterprises. I started several new and exciting projects across different industries. Among them, Frugal Intelligent Transportation Systems for Kenya was one of the most successful and in fact attracted a lot of attention in the mainstream mass media. The key concept was “Frugal Innovation”: instead of relying on expensive social infrastructure in the traditional way, we built a full-fledged ITS based only on cheap Web cameras empowered with sophisticated image analysis and network inference algorithms (see the figure below).

For more technical details, see:
T. Idé, T. Katsuki, T. Morimura, and R. Morris, “City-Wide Traffic Flow Estimation from Limited Number of Low Quality Cameras,” IEEE Transactions on Intelligent Transportation Systems, 18 (2017), pp. 950-959 [link, slides for related paper].
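The core idea can be conveyed with a toy example: cameras observe only a few links, and flow conservation at each junction propagates that information to the rest of the road network. The least-squares formulation below is a deliberately simplified stand-in for the paper's method; the network, weights, and counts are all made up.

```python
import numpy as np

# toy road network: 4 junctions, 5 directed links (tail, head)
links = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n_nodes, n_links = 4, len(links)

# node-link incidence matrix: +1 where a link leaves a node, -1 where it enters
A = np.zeros((n_nodes, n_links))
for j, (tail, head) in enumerate(links):
    A[tail, j] += 1.0
    A[head, j] -= 1.0

observed = {0: 120.0, 2: 80.0}  # camera counts (vehicles/hour) on two links

# least-squares system: conservation (A f ~ 0), heavily weighted camera
# observations (f_j ~ count), and a tiny ridge term to keep it well-posed
M, y = [A], [np.zeros(n_nodes)]
for j, count in observed.items():
    row = np.zeros((1, n_links))
    row[0, j] = 10.0
    M.append(row)
    y.append(np.array([10.0 * count]))
M.append(1e-3 * np.eye(n_links))
y.append(np.zeros(n_links))

f, *_ = np.linalg.lstsq(np.vstack(M), np.concatenate(y), rcond=None)
print("estimated flow on each link:", np.round(f, 1))
```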
Since establishing fully analytic models of complex systems is hopeless, simulation technologies can be a powerful alternative. However, one critical issue is how to validate the simulation results. To address this, I was interested in how simulation could be combined with optimization technologies. For example, we may want to optimize the model of individual agents in a multi-agent traffic simulation using sophisticated machine learning, possibly through a method similar to Bayesian optimization. It was my great honor to have the opportunity to launch a new Strategic Initiative in this area in the Math department of Global IBM Research.
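As a concrete illustration of the simulation-plus-optimization idea, the sketch below calibrates a single simulator parameter with a standard Bayesian optimization loop (Gaussian process surrogate plus expected improvement). The "simulator" is a toy stand-in for a multi-agent traffic model; everything here is illustrative rather than what was actually deployed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(theta):
    """Toy 'simulator': discrepancy between simulated and observed traffic."""
    return (theta - 0.3) ** 2 + 0.05 * np.sin(20 * theta)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(3, 1))           # initial design points
y = np.array([simulate(x[0]) for x in X])

grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]              # most promising parameter
    X = np.vstack([X, x_next])
    y = np.append(y, simulate(x_next[0]))     # one more simulator run

print(f"calibrated parameter: {X[np.argmin(y)][0]:.3f}")
```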
For the analysis of industrial dynamic systems, major research topics included sensor data analytics and production optimization, which are particularly important in the Japanese market. My own research, including anomaly detection and trajectory analytics, plays a critical role in real production systems, e.g., ClassNK's ship maintenance system.
Sensor Data Analytics (2005-2013)
After wrapping up the Autonomic Computing project, I launched a new project, Data Analytics for Quality Control, or simply Sensor Data Analytics, which aimed at improving product quality, mainly in the manufacturing industries, by taking full advantage of advanced analytics for sensor data.
One of the most important pieces of work in this period was the development of practical anomaly detection algorithms. In particular, in the SDM paper below, I first introduced the sparse graphical model in the context of correlational anomaly detection.

Tsuyoshi Idé et al., “Proximity-Based Anomaly Detection using Sparse Structure Learning,” Proceedings of 2009 SIAM International Conference on Data Mining (SDM 09), pp.97-108 [slides].
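In the spirit of that paper, the following sketch learns a sparse Gaussian graphical model on healthy reference data with the graphical lasso and then scores each sensor by how poorly its neighbors explain it on test data. The per-variable score below is a simplified conditional log-loss rather than the exact score in the paper, and all data and parameters are synthetic.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 6))    # reference (healthy) period
X_test = rng.normal(size=(200, 6))
X_test[:, 2] += 3.0                  # inject a fault into sensor 2

model = GraphicalLasso(alpha=0.1).fit(X_ref)
P = model.precision_                 # sparse precision matrix

def per_variable_scores(X, P):
    """Score each variable by the Gaussian conditional log-loss x_i | rest."""
    scores = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        # for a zero-mean Gaussian: x_i | x_rest ~ N(-sum_{j!=i} P_ij x_j / P_ii, 1/P_ii)
        mean = -(X @ P[i] - X[:, i] * P[i, i]) / P[i, i]
        var = 1.0 / P[i, i]
        scores[i] = np.mean(0.5 * np.log(2 * np.pi * var)
                            + 0.5 * (X[:, i] - mean) ** 2 / var)
    return scores

print(np.round(per_variable_scores(X_test, P), 2))  # sensor 2 stands out
```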
I am proud that the project was one of the world's earliest systematic efforts towards data-driven management of the Internet of Things (IoT). When I initiated the project, utilizing data in the IoT domain simply meant developing a centralized database to store the attributes of numerous pieces of production equipment. Almost a decade after my proposal and successful customer projects, many people finally started noticing how advanced analytics combined with sophisticated database systems can revolutionize the way business is done.
Automated Analysis Initiative (AAI; 2003-2004)
At least at the Tokyo Research Lab, the Autonomic Computing project did not necessarily result in a remarkable success. In my opinion, the activities looked more like development work and lacked an original research agenda. This project, a joint effort between the Thomas J. Watson Research Center and IBM Research – Tokyo, aimed at developing a general framework for sensor data analysis, particularly in the automotive industry.
I introduced the new notion of change-point correlation (see the SDM paper below). My work became an important part of the framework, which was later productized as IBM Parametric Analysis Center. The change-point correlation method was designed to nicely handle the heterogeneity of different sensor data, which is quite common in industrial physical systems involving many different physical quantities such as temperature and pressure. The success of this attempt motivated my next project, Sensor Data Analytics.

Tsuyoshi Idé, “Knowledge Discovery from Heterogeneous Dynamic Systems using Change-Point Correlations,” Proceedings of 2005 SIAM International Conference on Data Mining (SDM 2005), pp.571-576 [slides].
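The gist of change-point correlation can be shown in a few lines: convert each raw signal into a change-point score series, then correlate the score series rather than the raw signals, which makes quantities with different units and dynamics directly comparable. The crude sliding-window score below is just a stand-in for the actual change-point score used in the paper.

```python
import numpy as np

def change_score(x, w=20):
    """Absolute difference of means between adjacent sliding windows."""
    s = np.zeros(len(x))
    for t in range(w, len(x) - w):
        s[t] = abs(x[t:t + w].mean() - x[t - w:t].mean())
    return s

rng = np.random.default_rng(0)
t = np.arange(500)
# two heterogeneous sensors with nearly simultaneous step changes
temperature = rng.normal(20.0, 0.5, 500) + 5.0 * (t > 250)
pressure = rng.normal(1000.0, 5.0, 500) - 50.0 * (t > 255)

s_temp, s_pres = change_score(temperature), change_score(pressure)
corr = np.corrcoef(s_temp, s_pres)[0, 1]
print(f"change-point correlation: {corr:.2f}")  # high despite different units
```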
Autonomic Computing (2002-2003)
This project was a company-wide initiative aimed at handling the growing complexity of computer systems. Although I had just started in this new area, having moved from the totally different field of LCD technologies, I set a research agenda that would be useful in the domain: anomaly detection for system monitoring. The KDD paper below, my very first paper in computer science, was written as part of this project.

The paper is well known as one of the first works on subspace-based anomaly detection and enjoys 150+ citations as of 2016. It is also one of the first works to leverage directional statistics for scoring anomalies.
Tsuyoshi Idé and Hisashi Kashima, “Eigenspace-based Anomaly Detection in Computer Systems,” Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2004), pp. 440-449 [slides].
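Here is a minimal sketch in the spirit of the method: treat each time step's measurement vector as a direction on the unit sphere, track the principal subspace of recent directions with an SVD, and score each new direction by how far it falls outside that subspace. The window size, subspace dimension, and injected anomaly are all illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, w, k = 300, 10, 50, 3          # time steps, sensors, window, subspace dim
X = rng.normal(size=(T, d)) + 5.0    # normal operation: one dominant direction
X[200:210] += np.array([8.0] + [0.0] * (d - 1))   # injected anomaly

U = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit direction vectors
scores = np.zeros(T)
for t in range(w, T):
    # principal k-dim subspace of the recent window (rows = past directions)
    _, _, Vt = np.linalg.svd(U[t - w:t], full_matrices=False)
    proj = Vt[:k] @ U[t]
    scores[t] = 1.0 - np.sum(proj ** 2)   # 1 - squared cosine to the subspace

print("peak anomaly score at t =", int(np.argmax(scores)))
```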
Collimated backlight (2000-2001)
The goal of this project was to develop a new type of backlighting system achieving the world's highest light utilization. To that end, the team, led by Dr. Yoichi Taira, invented a novel backlighting approach named “collimated backlight.” However, as a result of the ultra-efficient design, it turned out that maintaining luminance uniformity over the entire surface of the display was extremely hard. In particular, the display suffered from visible moiré patterns, caused by optical interference between the light guide and the grid-like circuit pattern of the LCD.

In spite of all kinds of desperate efforts, the team could not find any practical solution to the issue. I, a new hire from theoretical physics, invented a novel approach that efficiently removes the moiré patterns. The key idea was to use a special type of irregular dot pattern as light scatterers. The above figure compares the conventional approach and ours. One interesting observation was that mathematically defined random numbers do not necessarily look random to the human eye. My approach is based on a mathematical theory for controlling the level of irregularity, as well as a molecular dynamics simulation. See the presentation slides for details.
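The following toy sketch conveys the molecular-dynamics idea: start from a jittered grid of dots and let them repel one another, which destroys the grid's periodicity (and hence the moiré) while keeping the dot density visually uniform. The force law, step size, and boundary handling are illustrative assumptions, not the published design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_side = 20
g = (np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1)
     .reshape(-1, 2).astype(float))
# jittered grid of 400 dots in the unit square
pts = (g + rng.uniform(-0.3, 0.3, g.shape)) / n_side

for _ in range(50):
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pts))  # avoid self-term
    force = (diff / dist[..., None] ** 3).sum(axis=1)        # 1/r^2 repulsion
    pts = np.clip(pts + 2e-6 * force, 0.0, 1.0)              # small MD step

# pts now holds an aperiodic but visually uniform dot pattern
```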
The invention was successfully delivered to the Display Business Unit of IBM and shipped as part of the ThinkPad A30p, the world's first laptop PC equipped with a UXGA IPS display.

- T. Idé, An essay on the development of a dot-pattern generation method (in Japanese)
- T. Idé et al., “Dot pattern generation technique using molecular dynamics,” Journal of the Optical Society of America A, 20 (2003), pp. 242-255.
- T. Idé et al., “Moire-Free Collimating Light Guide with Low-Discrepancy Dot Patterns,” Digest of Technical Papers (Society for Information Display, Boston, 2002), pp. 1232-1235 [slides].