Accelerating Time Series Analysis via Near-Data-Processing Approaches

Reading date

2023-06-30

Authors

Fernández-Vega, Iván

Publisher

UMA Editorial

Abstract

The explosion of the Internet of Things and the Big Data era has resulted in the continuous generation of vast amounts of data, which are increasingly difficult to store and analyze. When such data points are ordered in time, the collection is known as a time series, a common data representation in almost every scientific discipline and business application. Time series analysis (TSA) splits a time series into subsequences of consecutive data points to extract valuable information. In this thesis, we characterize state-of-the-art TSA algorithms and identify their bottlenecks on commodity computing platforms. We observe that the performance and energy efficiency of TSA algorithms are heavily burdened by data movement. Based on this observation, we propose software and hardware solutions that accelerate time series analysis and make its computation as energy-efficient as possible. To this end, we provide four contributions: PhiTSA, NATSA, MATSA and TraTSA. PhiTSA optimizes and characterizes state-of-the-art TSA algorithms on a many-core Intel Xeon Phi KNL platform. NATSA is a novel Processing-Near-Memory accelerator for TSA that places custom floating-point processing units close to High-Bandwidth Memory, exploiting its memory channels and lower access latency. MATSA is a novel Processing-Using-Memory accelerator for TSA whose key idea is to exploit magneto-resistive memory crossbars to enable energy-efficient and fast time series computation in memory, while overcoming the endurance issues of other non-volatile memory technologies. Finally, TraTSA evaluates the benefits of applying Transprecision Computing to TSA, reducing the number of bits dedicated to floating-point operations.
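The abstract does not spell out the TSA kernel these accelerators target; a common subsequence-comparison kernel of this kind is the matrix profile, which records, for every subsequence, the distance to its nearest non-trivial match. As a minimal illustrative sketch (not code from the thesis, and quadratic rather than accelerated), a naive matrix profile in Python:

```python
import numpy as np

def naive_matrix_profile(ts, m):
    """Naive O(n^2) matrix profile: for each length-m subsequence of ts,
    the z-normalized Euclidean distance to its nearest non-trivial match."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    # z-normalize each subsequence (assumes no constant subsequences)
    zs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    excl = m // 2  # exclusion zone: ignore trivial matches overlapping i itself
    for i in range(n):
        for j in range(n):
            if abs(i - j) > excl:
                profile[i] = min(profile[i], np.linalg.norm(zs[i] - zs[j]))
    return profile

ts = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0, 5.0, 9.0])
profile = naive_matrix_profile(ts, m=3)
# the repeated motif [0, 1, 2] at positions 0 and 3 yields a near-zero distance
```

Every subsequence is compared against every other, so the kernel is dominated by streaming data through the distance computation, which is precisely the data-movement bottleneck the thesis attacks with near- and in-memory processing.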

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International