Real-Time Calibration Model for Low-Cost Sensor in Fine-Grained Time Series
Authors: Seokho Ahn, Hyungjin Kim, Sungbok Shin, Young-Duk Seo
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with real-world benchmarks show that TESLA outperforms the baseline deep learning and newly designed linear models in most cases, and the results demonstrate that TESLA efficiently balances calibration speed, energy usage, and accuracy. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Inha University, Incheon 22212, South Korea 2Team Aviz, Inria, Université Paris-Saclay, Saclay, France |
| Pseudocode | No | The paper describes the TESLA model architecture in detail in Section 4 and illustrates it in Figure 1, but it does not contain an explicitly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper mentions the use of 'Tensor Flow 2.14' and deployment on 'Arduino Nano 33 BLE Sense', implying implementation details, but it does not provide an explicit statement about open-sourcing its own code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Our experiment is based on a large-scale dataset suitable for calibration tasks (Van Poppel et al. 2023). This dataset includes sensor data collected from numerous sensors across three regions, Antwerp (Ant.), Oslo (Oslo), and Zagreb (Zag.), with three individual features: PM10, PM2.5, and PM1. ... The SensEURCity dataset (Van Poppel et al. 2023), specifically designed for calibration tasks, provides a robust foundation for addressing these challenges. |
| Dataset Splits | Yes | Our evaluation assumes that multiple sensors of the same type are used for training in a given space. Each sensor is uniquely identified by its name. We order the sensors alphabetically: the second-to-last sensor is used as the validation set, the last sensor as the test set, and all remaining sensors for training. This configuration is repeated across the regions (Ant., Oslo, and Zag.) and features (PM10, PM2.5, and PM1). |
| Hardware Specification | Yes | All experiments were conducted using a machine equipped with an AMD EPYC 7763 and an NVIDIA RTX A6000 Ada with TensorFlow 2.14... For evaluation on microcontrollers, the trained models were converted to FlatBuffers and deployed on an Arduino Nano 33 BLE Sense for evaluation. |
| Software Dependencies | Yes | All experiments were conducted using a machine equipped with an AMD EPYC 7763 and an NVIDIA RTX A6000 Ada with TensorFlow 2.14, which supports conversion to TensorFlow Lite for Microcontrollers. |
| Experiment Setup | Yes | All models were trained using a batch size of 32 and the Adam optimizer for 10 epochs with mean squared error objective function, following configurations commonly found in existing time series forecasting (Liu et al. 2024). ... For this reason, we set N = 360 as our optimal length for all models in our experiment. |
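
The per-sensor split rule and the stated hyperparameters can be sketched as follows. This is a minimal illustration, not the authors' code: the function and variable names (`split_by_sensor`, `TRAIN_CONFIG`) are hypothetical; only the split logic and the hyperparameter values come from the table above.

```python
def split_by_sensor(sensor_names):
    """Order sensor names alphabetically: second-to-last -> validation,
    last -> test, all remaining -> training (as described in the paper)."""
    ordered = sorted(sensor_names)
    if len(ordered) < 3:
        raise ValueError("need at least 3 sensors for a train/val/test split")
    return ordered[:-2], [ordered[-2]], [ordered[-1]]

# Hyperparameters stated in the experiment setup; keys are illustrative.
TRAIN_CONFIG = {
    "batch_size": 32,
    "optimizer": "adam",
    "epochs": 10,
    "loss": "mse",         # mean squared error objective
    "window_length": 360,  # N = 360, input length used for all models
}

train, val, test = split_by_sensor(["sensor_C", "sensor_A", "sensor_D", "sensor_B"])
print(train, val, test)  # ['sensor_A', 'sensor_B'] ['sensor_C'] ['sensor_D']
```

Because the split is keyed on sensor identity rather than on time, no readings from the held-out sensors leak into training, which matches the paper's assumption of calibrating a new sensor of the same type in the same space.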