Project
aeon
Summary
We seek to develop a public archive of time series data in the .ts format used by the aeon toolkit. There are multiple public data archives containing time series datasets for a variety of learning tasks (e.g. classification, regression, forecasting), which are critical for time series algorithmic research. The grant will fund the creation of an open central hub for these datasets in a standard format (integrating both existing and newly collected datasets from our developers), the development of tools that make maintaining the archive and proposing new datasets easier for contributors, and outreach to raise awareness of the new archive within the scientific computing and academic communities.
Submitter
Matthew Middlehurst
Project lead
@MatthewMiddlehurst
Community benefit
Like most fields of research, collecting data is a significant challenge for time series researchers. Often, they rely on existing benchmark datasets to evaluate and compare new methods. Fortunately, researchers have invested considerable time and effort in mining datasets and making them available to the community. However, these are usually provided in whichever format the contributing researcher prefers, and a persistent issue is the limited visibility and accessibility of such resources. Additionally, many of these resources remain static after their publication, making it difficult for outside contributors to propose new datasets.
Dataset archives are essential for stimulating algorithmic research. For example, time series classification datasets gained widespread recognition only after being consolidated into the well-known benchmark archive hosted by the University of California, Riverside (UCR). Similar challenges exist across other tasks such as regression, anomaly detection, and forecasting. These archives have been widely adopted by the research community and are still actively maintained at sites such as timeseriesclassification.com.
The UCR archive, in its current form since 2018, has supported numerous studies in time series classification, regression, and generative modeling. It has been cited in over 1,000 academic publications. Similarly, the UEA archive has significantly contributed to the community by offering multivariate benchmark datasets across a broad range of applications; it has been cited more than 500 times. The Monash Time Series Extrinsic Regression (TSER) archive is a more recent addition but has also become a key resource, expanding research beyond classification into regression tasks. It was recently expanded, is frequently used in benchmarks, and has been cited over 50 times. Finally, the Monash Time Series Forecasting (TSF) archive has supported the development of benchmark datasets for forecasting. With over 200 citations, it offers researchers a diverse set of applications for validating their time series forecasting models. These are just a few examples; many more collections of time series data exist beyond these, each using its own format. We include citation counts as a measure of academic usage, but they do not fully capture the usage and research influence of these archives.
This project builds upon the efforts of previous time series researchers and archivists by continuing to collect and unify time series datasets for a wide range of tasks. A key objective is to standardize these datasets into a common format, the .ts format used and promoted by the aeon toolkit. This unified format enables consistent inclusion of both data and metadata, facilitating reproducibility and ease of use. We aim to host this archive on open platforms with a peer review process, which, combined with a standard data format, will simplify the submission process for any researcher.
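To illustrate the .ts format (with invented values rather than a real dataset): header lines record the dataset's metadata, followed by the series themselves, one case per line with the class label after the final colon.

```
# Free-text comments describing the dataset, its source and licence
@problemName ToyExample
@timeStamps false
@missing false
@univariate true
@equalLength true
@seriesLength 4
@classLabel true 0 1
@data
1.0,2.0,3.0,4.0:0
2.5,2.1,1.8,1.2:1
```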
Through the aeon toolkit, we aim to go beyond simply hosting datasets online. We will provide seamless loading mechanisms, allowing researchers to integrate datasets and benchmark results directly into their workflows with minimal friction.
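As a minimal sketch of the kind of workflow we want to support, here is aeon's current classification loader in use (the final archive tools may expose different entry points):

```python
from aeon.datasets import load_classification

# Downloads the dataset if it is not already cached locally, then
# loads it as a (n_cases, n_channels, n_timepoints) numpy array
# together with the class labels.
X, y = load_classification("GunPoint")
print(X.shape, y.shape)
```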
We believe this work will benefit not only users of the aeon package, but also the broader academic and scientific communities working in time series analysis. By making high-quality datasets more accessible and usable, these archives will support the development of new benchmarking standards, stimulate further research, and encourage continued investment in the field. Time series data is present in a wide variety of domains, such as EEG and ECG in medical care, coordinate data in human activity recognition, and industrial sensor outputs. We aim to provide an archive which can be used for general algorithmic development to push the frontier in these applications, and which can be filtered to provide specific datasets, such as EEG recordings, for targeted model development.
Amount requested
$10,000
Execution plan
There are too many publicly available time series datasets for all of them to be integrated within a single short grant. We will focus this project on developing a framework, building an API, and unifying dataset archives for classification, clustering, and regression data, while ensuring the functionality is in place to integrate tasks such as forecasting and anomaly detection in the future. If time remains, we will begin work on including one of these tasks.
Of the grant, $9,600 will fund developers to work on the project for 400 hours at a rate of $24 per hour. This work will be carried out either full-time over three months or part-time over a longer period, depending on the status of our developers when the project begins. The remaining $400 will fund travel to events such as PyData. What funded developers will do:
1. Format and store datasets (back end) - 60 hours
Format our dataset collection in the .ts format, including appropriate metadata and header comments
Upload archive datasets to Zenodo, ensuring these records are well documented and attributed
Appropriately tag records to allow users to filter by learning task and series type
We estimate that there are 300+ classification, clustering, and regression datasets split among various public and in-development archives, not counting standalone datasets. These come in various formats: while some are already formatted as .ts, others are distributed as .csv or .npy files, for example. Some of the existing .ts files are incomplete, missing metadata and header documentation. The beginning of this project will unify these archives into a single location and format.
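To illustrate the kind of conversion involved, here is a minimal hand-rolled sketch that writes an equal-length univariate numpy array to a .ts file; the reworked aeon writers from step 3 would handle this properly, including multivariate and unequal-length data.

```python
import numpy as np

def write_univariate_ts(X, y, path, problem_name):
    """Write equal-length univariate series to a minimal .ts file.

    X: (n_cases, n_timepoints) array, y: array of class labels.
    """
    class_labels = " ".join(str(c) for c in np.unique(y))
    with open(path, "w") as f:
        f.write(f"@problemName {problem_name}\n")
        f.write("@timeStamps false\n@missing false\n@univariate true\n")
        f.write(f"@equalLength true\n@seriesLength {X.shape[1]}\n")
        f.write(f"@classLabel true {class_labels}\n@data\n")
        for series, label in zip(X, y):
            f.write(",".join(str(v) for v in series) + f":{label}\n")

X = np.array([[1.0, 2.0, 3.0], [2.0, 1.5, 0.5]])  # two toy cases
y = np.array([0, 1])
write_univariate_ts(X, y, "ToyExample.ts", "ToyExample")
```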
2. Develop a GitHub site (front end) - 120 hours
Design and develop a website to display the archive datasets. This must be openly hosted and easy to maintain; our ideal would be a GitHub-hosted website
All archive datasets should be listed as a collection, with options to filter, e.g. by learning task (classification or regression), size (univariate or multivariate, series length), and source domain (e.g. EEG, human activity recognition)
Allow selection of individual datasets to show further information on the data and a visualization of the series they contain
Iterate on design and functionality based on community feedback
3. Develop API and maintenance tools - 120 hours
Develop a standalone Python tool set for interacting with the archive, e.g. loading data, processing .ts files, visualization, and submitting/updating records (see the sketch after this list)
Develop CI to ensure that the tools remain functional and that archive datasets meet the required format
Rework the aeon datasets module, tidying and improving the current functions for writing and loading datasets
Curate a small selection of datasets to package in aeon as examples, replacing the current bulky set, which contains redundancies
Integrate the new archive and tools into aeon, allowing users to easily load any uploaded dataset into standard data types
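A sketch of how this tool set might look to users; every name below is a hypothetical placeholder, not a settled API.

```python
import tsarchive  # hypothetical package name

# Browse records by tag, mirroring the website filters.
records = tsarchive.search(task="classification", series="univariate")

# Download a dataset and load it into standard aeon data types.
X, y = tsarchive.load("GunPoint")

# Validate a local .ts file before proposing it for inclusion.
report = tsarchive.validate("MyNewDataset.ts")
```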
4. Benchmark aeon implementations - 40 hours
Run benchmarking experiments using aeon implementations on the new archive datasets
Develop a framework to easily add dataset results, improve reproducibility, and update performance metrics as aeon evolves
Store results and display them on the website's dataset pages (an illustrative record format is sketched below)
No grant funding is needed for compute, as we have developers with access to HPC facilities who will be able to run the necessary experiments.
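For example, results could be stored as one small, versioned record per estimator/dataset pair, so entries can be added or regenerated independently as aeon evolves. The record below is purely illustrative; the actual schema and storage format will be settled during the project.

```python
# Illustrative result record; the fields, naming, and storage format
# (e.g. JSON files in a results repository) are not yet decided.
result = {
    "dataset": "GunPoint",
    "estimator": "ROCKET",
    "aeon_version": "1.0.0",
    "resample": 0,
    "accuracy": 0.99,  # placeholder value
}
```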
5. Investigation and discussion on formats and future proofing - 20 hours
Review the file formats used for forecasting, anomaly detection, segmentation, and similar tasks, and discuss how these can be integrated
Assess the feasibility of expanding the .ts file format to allow for these types of data with prototype files and functions, as sketched below
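As a purely illustrative prototype of what such an extension might look like for anomaly detection (none of the tags or label syntax below exist in the current .ts specification):

```
# Hypothetical prototype: anomalous index ranges replace class labels
@problemName ToyAnomaly
@timeStamps false
@univariate true
@task anomalyDetection
@data
1.0,1.1,0.9,5.2,5.1,1.0,1.1:[3,4]
```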
6. Documentation and publication - 40 hours
Write usage and contribution guides on the Zenodo and GitHub pages, as well as an example Jupyter notebook for the aeon repository
Document maintenance tools for easy access by new archive maintainers and individual contributors
Contribute to writing a paper for the new unified archive
Help develop material for presenting the archive and utilities at conferences and events
Community engagement and communication
Throughout the project, developers will engage with the aeon community and the wider time series research audience on project direction.
Our plans for communicating this new archive are:
Promote the new archive on affiliated webpages, data archives and social media accounts
Write a paper for presentation at a conference or publication in a journal
Submit an application for a PyData talk
Showcase progress and advertise at a TSC tutorial given at DSAA 2025 and an aeon demo at ECML-PKDD 2025
Who will do this work
This grant will fund two aeon core developers to work on this project for 200 hours each. Two other developers are externally funded for related work and are willing to put time towards this project if dedicated project developers are present. We have also received expressions of interest in occasional unfunded contributions from other developers. All listed developers are experienced with the aeon toolkit, time series data, and Python coding.
Chris completed his PhD at the University of East Anglia and has done postdoctoral work at the University of Southampton. He has used the UCR time series archive throughout his research and is familiar with the .ts format. His web development skills will be key to creating a functional and accessible archive front end.
Sebastian is currently a PhD student affiliated with the Philipps-University of Marburg and the Hasso Plattner Institute. He is a developer and maintainer for the TimeEval anomaly detection archive, and his experience will help us future-proof our developments for adding new dataset types to the archive.
Ali Ismail-Fawaz (@hadifawaz1999) - Will co-lead and contribute without funding from this grant
Ali completed his PhD at the Université de Haute-Alsace and is currently a postdoc at the same institution. He has consistently used the UCR, UEA, and Monash TSER archives throughout his research and is well-versed in their formatting conventions and practical value to the research community. His expertise in working with 3D skeleton-based human motion data will contribute significantly to this project, particularly in the collection and standardization of such datasets.
Matthew Middlehurst (@MatthewMiddlehurst) - Will lead the project, provide guidance and contribute without funding from this grant
Matthew is a University of Southampton Research Fellow working on developing aeon and will soon take an academic post at the University of Bradford. He is a maintainer of the Southampton-hosted UCR and UEA archives (timeseriesclassification.com) and is familiar with the various time series archives used by researchers and with their maintainers.