An explainable and sustainable deep learning approach for anomaly detection in time series data

In industrial settings, it is crucial to monitor manufacturing processes and identify abnormal behaviour as soon as it occurs. Suspicious events can indicate a potential problem, and if the issue remains undetected, or its cause is not handled in a timely and efficient way, major consequences can follow down the line that lead to significant costs for the company. Many researchers have focused on the challenge of identifying anomalies in data produced by IoT devices or in time series data. Most solutions, however, are domain specific and do not generalise well when the context of the data changes. In addition, data transmitted by sensors over time exhibit both intrametric and temporal dependencies, which grow more complex as the number of sensors increases. The relations between these sensors can be dynamic and may change over time as the underlying metrics drift. Current anomaly detection algorithms tend to focus on either temporal dependencies or intrametric dependencies alone. Moreover, most approaches focus on detection accuracy while ignoring the explainability of the detected anomaly, which is essential to allow the user to diagnose and pinpoint the root cause of the problem.

This research project proposes an anomaly detection model that is scalable, adaptable across domains, and able to outperform state-of-the-art methods. It also intends to develop an explainable approach that identifies the root cause of detected anomalies. Recently, large language models (LLMs) have gained considerable attention due to their strong performance in natural language processing (NLP), and many researchers have adapted the transformer architecture to other domains. This study aims to answer the question of how to utilise pre-trained LLMs for multivariate time series anomaly detection. By adapting and utilising pre-trained LLMs, the project will benefit from their architecture design and from transfer learning across models. In addition, the study seeks to build an explainable, deep-learning-based approach that traces the root cause of each anomaly event.
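To make the detection task concrete, the following is a minimal sketch of score-based anomaly detection on a multivariate time series. It is a hypothetical illustration of the general forecast-error principle (score each timestep by its deviation from a model's expectation), not the project's LLM-based method; the synthetic two-sensor data, window size, and threshold are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical example: two correlated sensor channels with an
# injected anomaly at t = 150 (not real project data).
rng = np.random.default_rng(0)
t = np.arange(300)
series = np.stack([np.sin(t / 10), np.cos(t / 10)], axis=1)
series += 0.05 * rng.standard_normal(series.shape)
series[150] += 2.0  # abrupt spike on both channels

# Score each timestep by its deviation from a short trailing-window
# mean. This captures only per-channel temporal dependency; modelling
# intrametric (cross-sensor) structure is what the learned approaches
# discussed above aim to add.
window = 10
scores = np.zeros(len(series))
for i in range(window, len(series)):
    prediction = series[i - window:i].mean(axis=0)
    scores[i] = np.linalg.norm(series[i] - prediction)

# Flag timesteps whose score is far above the typical level.
threshold = scores.mean() + 4 * scores.std()
anomalies = np.where(scores > threshold)[0]
print(anomalies)
```

A learned detector (e.g. a transformer) would replace the trailing-window mean with a model's reconstruction or forecast, but the scoring and thresholding logic stays the same shape.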

Publications

Reference

UDC/Navantia-DC1

Researcher

Maadh Hmosze

Research Host

University of A Coruña (UDC)/Navantia

PhD awarding institution/s

University of A Coruña (UDC) and RMIT University

Location

A Coruña (Spain)

RMIT and many of the REDI partners are HSR4R certified

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101034328.

Results reflect the author’s view only. The European Commission is not responsible for any use that may be made of the information it contains.