Predicting real-time scientific experiments using transformer models and reinforcement learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)
232 Downloads (Pure)

Abstract

Life and physical sciences have always been quick to adopt the latest advances in machine learning to accelerate scientific discovery; examples include cell segmentation and cancer detection. Nevertheless, these exceptional results are based on mining previously created datasets to discover patterns or trends. Recent advances in AI have been demonstrated in real-time scenarios such as self-driving cars or playing video games, yet these new techniques have not seen widespread adoption in the life or physical sciences because experimentation can be slow. To tackle this limitation, this work aims to adapt generative learning algorithms to model scientific experiments and accelerate their discovery using in-silico simulations. We focus in particular on real-time experiments, aiming to model how they react to user inputs. To achieve this, we present an encoder-decoder architecture based on the Transformer model that simulates real-time scientific experimentation, predicts its future behaviour and allows it to be manipulated on a step-by-step basis. As a proof of concept, this architecture was trained to map a set of mechanical inputs to the oscillations generated by a chemical reaction. The model was then paired with a Reinforcement Learning controller to show how the simulated chemistry can be manipulated in real time towards user-defined behaviours. Our results demonstrate how generative learning can model real-time scientific experimentation, tracking how it changes over time as the user manipulates it, and how the trained models can be paired with optimisation algorithms to discover new phenomena beyond the physical limitations of lab experimentation. This work paves the way towards building surrogate systems where physical experimentation interacts with machine learning on a step-by-step basis.
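
As an illustration of the kind of architecture the abstract describes, the sketch below shows a minimal encoder-decoder Transformer (in PyTorch) that maps a trace of mechanical inputs to a step-by-step prediction of an oscillation signal. This is not the authors' implementation: the class name ExperimentSimulator, the dimensions, and the random input data are illustrative assumptions.

```python
# Minimal sketch, assuming an encoder-decoder Transformer that maps mechanical
# inputs (encoder side) to a chemical oscillation signal (decoder side).
# All names and sizes are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class ExperimentSimulator(nn.Module):
    def __init__(self, input_dim=4, signal_dim=1, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(input_dim, d_model)    # mechanical inputs -> model space
        self.signal_proj = nn.Linear(signal_dim, d_model)  # past oscillation values -> model space
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, signal_dim)         # predict the next oscillation value

    def forward(self, inputs, past_signal):
        # inputs:      (batch, T_in, input_dim)   user-controlled mechanical actions
        # past_signal: (batch, T_out, signal_dim) oscillations observed so far
        src = self.input_proj(inputs)
        tgt = self.signal_proj(past_signal)
        # Causal mask so each output step attends only to earlier oscillation values.
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.head(out)


# Step-by-step rollout: feed the model's own predictions back in as the decoder
# input, mimicking real-time simulation of the experiment.
model = ExperimentSimulator()
inputs = torch.randn(1, 32, 4)   # hypothetical trace of mechanical inputs
signal = torch.zeros(1, 1, 1)    # start from an empty oscillation history
with torch.no_grad():
    for _ in range(16):
        next_step = model(inputs, signal)[:, -1:, :]    # predict one step ahead
        signal = torch.cat([signal, next_step], dim=1)  # append it and continue
```

In the same spirit, a reinforcement learning controller could treat this rollout loop as its environment, choosing the next mechanical input at each step so that the predicted oscillations move towards a user-defined target behaviour.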
Original language: English
Title of host publication: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
Editors: M. Arif Wani, Ishwar K. Sethi, Weisong Shi, Guangzhi Qu, Daniela Stan Raicu, Ruoming Jin
Publisher: IEEE
Pages: 502-506
Number of pages: 5
ISBN (Electronic): 9781665443371
ISBN (Print): 9781665443388
DOIs
Publication status: Published - 25 Jan 2022
Event: 20th IEEE International Conference on Machine Learning and Applications - Online
Duration: 13 Dec 2021 - 15 Dec 2021
https://www.icmla-conference.org/icmla21/ (Link to conference website)

Conference

Conference: 20th IEEE International Conference on Machine Learning and Applications
Abbreviated title: ICMLA 2021
Period: 13/12/21 - 15/12/21

Keywords

  • machine learning in science
  • transformer model
  • reinforcement learning
  • chemistry modelling

ASJC Scopus subject areas

  • Artificial Intelligence
  • Safety, Risk, Reliability and Quality
  • Health Informatics
  • Computer Science Applications
