Deep Q-learning for the selection of optimal isocratic scouting runs in liquid chromatography

Alexander Kensert, Gilles Collaerts, Kyriakos Efthymiadis, Gert Desmet, Deirdre Cabooter

Research output: Contribution to journal › Article › peer-review

Abstract

An important challenge in chromatography is the development of adequate separation methods. Accurate retention models can significantly simplify and expedite this development for complex mixtures. The purpose of this study was to introduce reinforcement learning to chromatographic method development by training a double deep Q-learning algorithm to select optimal isocratic scouting runs for generating accurate retention models. The Neue-Kuss retention model was fitted to these scouting runs and then used to predict retention factors under both isocratic and gradient conditions. The quality of these predictions was assessed against experimental data points by computing a mean relative percentage error (MRPE) between the predicted and actual retention factors. By providing the reinforcement learning algorithm with a reward whenever the scouting runs led to accurate retention models and a penalty whenever the analysis time of a selected scouting run was too high (> 1 h), it was hypothesized that the algorithm would, over time, learn to select good scouting runs for compounds displaying a variety of characteristics. The reinforcement learning algorithm developed in this work was first trained on simulated data and then evaluated on experimental data for 57 small molecules, each run at 10 different fractions of organic modifier (0.05 to 0.90) and four different linear gradients. The results showed that the retention models obtained mostly via three isocratic scouting runs per compound (MRPE of 3.77% for isocratic runs and 1.93% for gradient runs) were comparable in performance to retention models obtained by fitting the Neue-Kuss model to all ten available isocratic data points (3.26% for isocratic runs and 4.97% for gradient runs) and to retention models obtained via a "chromatographer's selection" of three scouting runs (3.86% for isocratic runs and 6.66% for gradient runs). It was therefore concluded that the reinforcement learning algorithm learned to select optimal scouting runs for retention modeling, selecting three (out of ten) isocratic scouting runs per compound that were informative enough to capture the retention behavior of each compound.
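As a rough illustration of the retention-modeling step described in the abstract, the sketch below fits the three-parameter Neue-Kuss model to a handful of isocratic scouting runs and scores the predictions with the MRPE. The scouting fractions, true parameter values, noise level, and initial guesses are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def neue_kuss(phi, k0, s1, s2):
    # Neue-Kuss model: k(phi) = k0 * (1 + s2*phi)^2 * exp(-s1*phi / (1 + s2*phi))
    return k0 * (1.0 + s2 * phi) ** 2 * np.exp(-s1 * phi / (1.0 + s2 * phi))

def mrpe(k_pred, k_true):
    # Mean relative percentage error between predicted and measured retention factors.
    return 100.0 * np.mean(np.abs(k_pred - k_true) / k_true)

# Hypothetical scouting fractions and synthetic "measured" retention factors
# (a true parameter set plus 2% multiplicative noise stands in for experiments).
rng = np.random.default_rng(1)
phi_scout = np.array([0.20, 0.45, 0.70])
k_scout = neue_kuss(phi_scout, 50.0, 15.0, 1.5) * (1.0 + 0.02 * rng.normal(size=3))

# Fit the three model parameters to the scouting runs (p0 is an assumed initial guess).
(k0_hat, s1_hat, s2_hat), _ = curve_fit(
    neue_kuss, phi_scout, k_scout, p0=[10.0, 10.0, 1.0], maxfev=10000)

# Predict retention over the full range of fractions used in the study (0.05-0.90).
phi_all = np.linspace(0.05, 0.90, 10)
k_pred = neue_kuss(phi_all, k0_hat, s1_hat, s2_hat)
print(f"fitted parameters: k0={k0_hat:.1f}, S1={s1_hat:.1f}, S2={s2_hat:.2f}")
print(f"MRPE on scouting runs: "
      f"{mrpe(neue_kuss(phi_scout, k0_hat, s1_hat, s2_hat), k_scout):.2f}%")
```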
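The abstract names double deep Q-learning as the selection mechanism. A minimal sketch of its defining computation is given below: the online network picks the greedy next action, while a separate target network evaluates it, which curbs the value overestimation of vanilla DQN. The toy linear "networks", state dimension, and ten-action space (one action per candidate fraction of organic modifier) are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def double_dqn_targets(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    # Online network selects the greedy next action per state...
    next_actions = np.argmax(q_online(next_states), axis=1)
    # ...while the target network evaluates that action: the decoupling
    # of selection and evaluation that defines double DQN.
    next_q = q_target(next_states)[np.arange(len(next_states)), next_actions]
    # Bootstrap only for non-terminal transitions.
    return rewards + gamma * (1.0 - dones) * next_q

# Toy check: random linear Q-functions over 10 actions (one per candidate fraction).
rng = np.random.default_rng(0)
W_online, W_target = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
states = rng.normal(size=(5, 4))
targets = double_dqn_targets(lambda s: s @ W_online, lambda s: s @ W_target,
                             rewards=np.ones(5), next_states=states,
                             dones=np.zeros(5))
print(targets.shape)  # (5,)
```

In the setup described in the abstract, the reward would combine a bonus for scouting runs that yield accurate retention models with a penalty whenever a selected run exceeds one hour of analysis time; the target computation above is the generic double-DQN piece that such a reward signal feeds into.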
Original language: English
Article number: 461900
Journal: Journal of Chromatography A
Volume: 1638
DOIs
Publication status: Published - 8 Feb 2021

Bibliographical note

Copyright © 2021. Published by Elsevier B.V.

Keywords

  • Deep Q-learning
  • Machine learning
  • Method development
  • Reinforcement learning
  • Retention models
