Mardochée Réveil, PhD

A Framework of SO(3)-equivariant Non-linear Representation Learning and its Application to Electronic-Structure Hamiltonian Prediction

Shi Yin, Xinyang Pan, Fengyan Wang, Lixin He
5/9/2024

Abstract

We propose both a theoretical and a methodological framework to address a critical challenge in applying deep learning to physical systems: reconciling non-linear expressiveness with SO(3)-equivariance when predicting SO(3)-equivariant quantities. Inspired by covariant theory in physics, we present a solution by exploring the mathematical relationships between SO(3)-invariant and SO(3)-equivariant quantities and their representations. We first construct theoretical SO(3)-invariant quantities derived from the SO(3)-equivariant regression targets, and use these invariant quantities as supervisory labels to guide the learning of high-quality SO(3)-invariant features. Because SO(3)-invariance is preserved under non-linear operations, the encoding process for invariant features can make extensive use of non-linear mappings, fully capturing the non-linear patterns inherent in physical systems. Building on this, we propose a gradient-based mechanism that induces SO(3)-equivariant encodings of various degrees from the learned SO(3)-invariant features. This mechanism incorporates non-linear expressive capabilities into SO(3)-equivariant representations while, as we prove, preserving their equivariant properties, establishing a strong foundation for regressing complex SO(3)-equivariant targets. We apply our theory and method to electronic-structure Hamiltonian prediction. Experiments on eight benchmark databases covering multiple types of systems and challenging scenarios show substantial improvements over the state-of-the-art prediction accuracy of deep learning approaches: our method boosts Hamiltonian prediction accuracy by up to 40% and improves the accuracy of downstream physical quantities, such as occupied orbital energies, by up to 76%.
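
Two facts underpin this construction, and a compact sketch of each is given below. This is an illustration rather than the paper's own derivation; the components t^(ℓ), the Wigner D-matrices D^(ℓ)(R), and the scalar s are generic placeholders, not the authors' notation.

```latex
% (i) SO(3)-invariants from equivariant targets: because D^{(\ell)}(R) is unitary,
%     the norm of each degree-\ell irreducible component is unchanged by rotation,
\[
  \mathbf{t}^{(\ell)} \;\mapsto\; D^{(\ell)}(R)\,\mathbf{t}^{(\ell)}
  \quad\Longrightarrow\quad
  \bigl\|D^{(\ell)}(R)\,\mathbf{t}^{(\ell)}\bigr\|^{2}
  = \bigl\|\mathbf{t}^{(\ell)}\bigr\|^{2},
\]
% so such norms (and, more generally, full contractions of equivariant tensors)
% can serve as SO(3)-invariant supervisory labels.

% (ii) Gradients of invariants are equivariant: if s(R\mathbf{x}) = s(\mathbf{x})
%      for all R in SO(3), applying the chain rule to both sides gives
\[
  R^{\top}\,(\nabla s)(R\mathbf{x}) = (\nabla s)(\mathbf{x})
  \quad\Longleftrightarrow\quad
  (\nabla s)(R\mathbf{x}) = R\,(\nabla s)(\mathbf{x}),
\]
% i.e. differentiating a non-linear invariant scalar with respect to an
% equivariant input yields an equivariant quantity; the paper's gradient-based
% mechanism builds on this idea and extends it to higher degrees.
```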

AI-Generated Overview

  • Research Focus: The paper addresses the challenge of reconciling non-linear expressiveness with SO(3)-equivariance in deep learning models for predicting physical properties, specifically electronic-structure Hamiltonians.

  • Methodology: The authors propose a theoretical and methodological framework that constructs SO(3)-invariant quantities from the SO(3)-equivariant regression targets and uses them as supervisory labels to guide the learning of high-quality invariant features. A gradient-based mechanism then induces SO(3)-equivariant encodings with enhanced non-linear expressiveness from those invariant features (a minimal numerical illustration of this gradient mechanism appears after this overview).

  • Results: Experimental results demonstrate significant improvements in prediction accuracy for electronic-structure Hamiltonians on eight benchmark databases, with accuracy gains of up to 40% for Hamiltonian predictions and up to 76% for downstream physical quantities such as occupied orbital energies.

  • Key Contributions: The framework provides a solid theoretical foundation for achieving non-linear expressiveness while maintaining strict SO(3)-equivariance, broadening the class of regression tasks with rotational symmetries that deep learning models can address in physical systems.

  • Significance: This work contributes to both advancing deep learning techniques for modeling complex, symmetrical physical systems and addressing a significant gap in ensuring that neural networks conform to fundamental physical laws, which could enhance the accuracy and reliability of predictions in computational physics.

  • Broader Applications: The proposed framework is seen as beneficial not only in electronic structure calculations but also potentially applicable in other fields requiring equivariant representations, such as robotics, autonomous vehicles, and motion tracking systems, where maintaining symmetries under transformations is critical.
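
The gradient mechanism referenced above can be checked numerically in a few lines. The sketch below is my own illustration (not the authors' code) and uses JAX: a non-linear scalar of the rotation-invariant norm of a 3D feature is differentiated, and its gradient at a rotated input matches the rotated gradient of the original input.

```python
# Minimal numerical check: the gradient of an SO(3)-invariant scalar with
# respect to a 3D (degree-1) feature transforms equivariantly,
# i.e. (grad s)(R x) = R (grad s)(x).
import jax
import jax.numpy as jnp

def invariant_scalar(x):
    # Any non-linear function of the rotation-invariant quantity ||x||^2
    # is itself SO(3)-invariant.
    r2 = jnp.dot(x, x)
    return jnp.tanh(r2) + 0.5 * r2**2

def rotation_z(theta):
    # Rotation about the z-axis; any R in SO(3) works equally well here.
    c, s = jnp.cos(theta), jnp.sin(theta)
    return jnp.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])

x = jnp.array([0.3, -1.2, 0.7])
R = rotation_z(0.9)
grad_s = jax.grad(invariant_scalar)

lhs = grad_s(R @ x)   # gradient evaluated at the rotated input
rhs = R @ grad_s(x)   # rotation applied to the gradient at the original input
print(jnp.allclose(lhs, rhs, atol=1e-6))  # expected: True
```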

