Transfer Learning for Molecular Property Predictions from Small Data Sets
Abstract
Machine learning has emerged as a new tool in chemistry to bypass expensive experiments or quantum-chemical calculations, for example, in high-throughput screening applications. However, many machine learning studies rely on small data sets, making it difficult to efficiently implement powerful deep learning architectures such as message passing neural networks. In this study, we benchmark common machine learning models for the prediction of molecular properties on two small data sets, for which the best results are obtained with the message passing neural network PaiNN, as well as with SOAP molecular descriptors concatenated with a set of simple molecular descriptors tailored to gradient boosting with regression trees. To further improve the predictive capabilities of PaiNN, we present a transfer learning strategy that uses large data sets to pre-train the respective models and yields more accurate models after fine-tuning on the original data sets. The pre-training labels are obtained from computationally cheap ab initio or semi-empirical models, and both data sets are standardized to mean zero and standard deviation one to align the labels' distributions. This study covers two small chemistry data sets: the Harvard Organic Photovoltaics data set (HOPV; HOMO–LUMO gaps), for which excellent results are obtained, and the FreeSolv data set (solvation energies), for which this method is less successful, probably owing to a complex underlying learning task and the dissimilar methods used to obtain pre-training and fine-tuning labels. Finally, we find that for the HOPV data set, the final training results do not improve monotonically with the size of the pre-training data set; instead, pre-training with fewer data points can lead to more biased pre-trained models and higher accuracy after fine-tuning.
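The label alignment described above can be sketched as a simple z-score standardization applied independently to the pre-training and fine-tuning label sets. The helper below is a minimal illustration, not the paper's actual implementation; the function name and the example values are hypothetical.

```python
import numpy as np

def standardize_labels(pretrain_y, finetune_y):
    """Z-score each label set independently so their distributions align.

    Returns the standardized arrays together with the (mean, std) pairs,
    which are needed later to map model predictions back to the original
    label scale.
    """
    standardized, stats = [], []
    for y in (pretrain_y, finetune_y):
        y = np.asarray(y, dtype=float)
        mu, sigma = y.mean(), y.std()
        standardized.append((y - mu) / sigma)
        stats.append((mu, sigma))
    return standardized, stats

# Hypothetical example: cheap semi-empirical pre-training labels and a
# smaller set of higher-level fine-tuning labels on different scales.
pre = [1.0, 2.0, 3.0, 4.0]
fine = [0.1, 0.3, 0.5]
(pre_n, fine_n), stats = standardize_labels(pre, fine)
```

After standardization both label sets have mean zero and unit standard deviation, so the pre-trained output head does not have to relearn an offset and scale during fine-tuning.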
AI-Generated Overview
- Research Focus: The study investigates the use of transfer learning to enhance molecular property predictions from small data sets in machine learning, focusing on the Harvard Organic Photovoltaics (HOPV) and FreeSolv data sets.
- Methodology: The research benchmarks various machine learning models, including the message passing neural network PaiNN and gradient boosting with molecular descriptors, and implements a transfer learning approach that uses large data sets to pre-train models prior to fine-tuning on the small target data sets.
- Results: The PaiNN model achieved mean absolute errors (MAE) of 0.01 eV for HOPV and 0.56 kcal/mol for FreeSolv. The transfer learning strategy improved performance on HOPV, reducing the MAE from 7.6 meV to 6.1 meV after pre-training, while it did not yield improvements for FreeSolv.
- Key Contribution(s): This study contributes a transfer learning framework for machine learning in molecular property prediction, demonstrating that pre-training with closely aligned data sets can significantly improve model accuracy on small data sets.
- Significance: The findings highlight the potential of transfer learning to address the data limitations common in the molecular sciences, paving the way for more efficient predictions in chemical research and related fields.
- Broader Applications: The methodology can be applied to various fields requiring molecular property predictions, including drug discovery, materials science, and environmental studies, facilitating rapid advances in research and development.