
AI Models Can Learn Physics Symmetries Without Being Told

A new analysis reveals how unconstrained AI models, like those used in atomistic simulations and particle physics, can approximate fundamental physical symmetries through data alone, offering a path to more flexible and scalable scientific tools.

AI Research
March 28, 2026
4 min read

In the quest to build machine-learning models that can simulate the physical world, researchers have long assumed that strict adherence to symmetry principles, such as rotational invariance in molecular systems, must be hard-coded into the architecture. This approach, while successful, imposes rigid constraints that can limit model expressivity and computational efficiency. A new study challenges this paradigm by demonstrating that unconstrained models, which lack built-in symmetry guarantees, can learn these fundamental symmetries directly from data, achieving competitive performance with simpler, more scalable designs. This finding has broad implications for fields ranging from materials science to particle physics, where flexible AI tools could accelerate discovery without sacrificing physical accuracy.

The researchers, led by M. Domina, J. W. Abbott, P. Pegolo, F. Bigi, and M. Ceriotti from EPFL, developed rigorous metrics to quantify how well unconstrained models learn and preserve physical symmetries. They focused on two transformer-based models operating on decorated point clouds: a graph neural network for atomistic simulations (the Point-Edge Transformer, or PET) and a PointNet-style architecture for classifying particle trajectories in liquid argon detectors (PoLAr-MAE). By introducing metrics called Aα and Bα, the team measured the equivariance error of model outputs and the symmetry content of internal features across architectural layers and during training. Their analysis revealed that these models can approximate rotational symmetries to a high degree, with symmetry errors often being a small fraction of the overall model error, as shown in Figure 2 of the paper, where equivariance errors for energy, forces, and stress were significantly lower than absolute errors.

To assess symmetry learning, the researchers employed a methodology centered on group theory and Haar integration over symmetry groups like O(3), which includes rotations and inversions. The Aα metric quantifies how much a model's outputs violate equivariance conditions by computing the variance of back-transformed predictions over group orbits, as illustrated in Figure 1c of the paper. Meanwhile, the Bα metric decomposes internal features into contributions from different irreducible representations, akin to a spectral analysis of symmetry content. These metrics were applied to models trained with data augmentation, i.e., random rotations applied during training, to encourage symmetry learning without architectural constraints. The team tracked symmetry evolution from random initialization through full training, using character projection heatmaps to visualize how features like angular momentum channels develop over time, as detailed in Figure 3.
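
To make the orbit-variance idea concrete, here is a minimal NumPy sketch of an Aα-style equivariance check for a vector-valued output such as forces: sample random rotations, evaluate the model in each rotated frame, rotate the predictions back, and measure the spread over the orbit. The rotation sampler, the toy model, and the normalization are illustrative assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def random_rotation(rng):
    """Draw an approximately Haar-uniform random rotation matrix in SO(3)."""
    A = rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))   # fix column signs (Haar-uniform on O(3))
    if np.linalg.det(Q) < 0:      # restrict to proper rotations, det = +1
        Q[:, 0] = -Q[:, 0]
    return Q

def equivariance_error(model, positions, n_samples=64, seed=0):
    """Monte Carlo estimate of an A_alpha-style equivariance error for a
    vector-valued prediction (e.g. per-atom forces): evaluate the model on
    rotated coordinates, rotate each prediction back, and measure the spread
    over the rotation orbit.  A perfectly equivariant model gives zero."""
    rng = np.random.default_rng(seed)
    orbit = []
    for _ in range(n_samples):
        R = random_rotation(rng)
        pred = model(positions @ R.T)   # predict in the rotated frame
        orbit.append(pred @ R)          # rotate the prediction back
    orbit = np.stack(orbit)             # (n_samples, n_atoms, 3)
    return np.sqrt(np.mean((orbit - orbit.mean(axis=0)) ** 2))

# Toy usage: a deliberately non-equivariant "force model" broken by a fixed shift.
toy_model = lambda pos: pos + np.array([0.1, 0.0, 0.0])
atoms = np.random.default_rng(1).normal(size=(5, 3))
print(equivariance_error(toy_model, atoms))   # nonzero, revealing the violation
```

A scalar target such as the energy needs no back-transformation, so the same orbit-spread idea applies with the prediction used directly in place of the back-rotated forces.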

The results show that unconstrained models can effectively learn symmetries, though with notable dynamics and limitations. For the PET model trained on the MAD-1.5 dataset, equivariance errors for energy, forces, and stress were 10%, 31%, and 26% of the absolute errors, respectively, indicating high symmetry fidelity. Character decomposition revealed that internal features are dominated by low-angular-momentum components, with pseudo channels (involving inversion symmetry) being particularly weak, as seen in Figure 2. During training, the model exhibited sudden transitions in symmetry content, such as a drop in scalar character and a rise in vectorial character for forces after about 20 epochs, as shown in Figure 3b. However, the model struggled with high-angular-momentum and pseudo-scalar targets, requiring architectural modifications like solid spherical harmonics embeddings to improve learning, as demonstrated in Figure 7, where increasing the angular order of the inputs enabled successful learning of λ=8 electron density projections.
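
The character decomposition can likewise be sketched in a few lines. Assuming that rotating the input induces an approximately linear action on a feature vector, the weight of each angular-momentum channel l can be estimated by Monte Carlo integration of the SO(3) character against the overlap between original and rotated features. This is only an illustration of the idea (restricted to proper rotations, ignoring the inversion and pseudo channels), not the paper's exact Bα definition, and it reuses the random_rotation helper from the sketch above.

```python
import numpy as np

def so3_character(l, theta):
    """Character of the spin-l irreducible representation of SO(3) at angle theta."""
    theta = np.asarray(theta, dtype=float)
    small = np.abs(theta) < 1e-8
    safe = np.where(small, 1.0, theta)
    chi = np.sin((l + 0.5) * safe) / np.sin(0.5 * safe)
    return np.where(small, 2.0 * l + 1.0, chi)

def angular_content(feature_fn, x, l_max=3, n_samples=4096, seed=0):
    """Estimate how a feature vector distributes over angular-momentum channels
    l = 0..l_max: average (2l+1) * chi_l(R) * <f(x), f(Rx)> over random
    rotations R and normalize by ||f(x)||^2."""
    rng = np.random.default_rng(seed)
    f0 = feature_fn(x)
    weights = np.zeros(l_max + 1)
    for _ in range(n_samples):
        R = random_rotation(rng)   # helper from the previous sketch
        theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        overlap = np.dot(f0, feature_fn(x @ R.T))
        for l in range(l_max + 1):
            weights[l] += (2 * l + 1) * so3_character(l, theta) * overlap
    return weights / (n_samples * np.dot(f0, f0))

# Toy check: the centroid of a point cloud transforms as a pure vector (l = 1),
# so the estimated weights should concentrate on the l = 1 channel.
centroid = lambda pts: pts.mean(axis=0)
pts = np.random.default_rng(2).normal(size=(8, 3))
print(np.round(angular_content(centroid, pts), 2))
```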

This work has significant implications for the design and application of AI in scientific research. By showing that unconstrained models can learn symmetries, it opens the door to more flexible architectures that scale efficiently on modern hardware while maintaining physical consistency. For instance, in atomistic simulations, such models could accelerate drug or materials design by providing fast, accurate potentials without the computational overhead of strict equivariance. In particle physics, as analyzed with PoLAr-MAE, symmetry analysis can diagnose classification instabilities related to rotational invariance, suggesting ways to improve model robustness. The researchers also proposed a simple post-hoc purification protocol for readout layers, which reduced equivariance errors for stress components by half with minimal accuracy loss, offering a practical tool for model refinement.
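
The purification protocol itself acts on the readout layers and is not reproduced here. As a point of reference, a generic and much simpler post-hoc fix is test-time rotation averaging: predict in several randomly rotated frames, rotate the predictions back, and average, which suppresses residual equivariance error at the cost of extra forward passes. A minimal sketch, reusing random_rotation from the first example:

```python
import numpy as np

def symmetrized_forces(model, positions, n_rotations=16, seed=0):
    """Test-time rotation averaging (not the paper's readout purification):
    evaluate the model in several randomly rotated frames, rotate each force
    prediction back, and average.  The residual equivariance error shrinks as
    the number of sampled rotations grows."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(positions)
    for _ in range(n_rotations):
        R = random_rotation(rng)            # helper from the first sketch
        acc += model(positions @ R.T) @ R   # predict rotated, rotate back
    return acc / n_rotations
```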

Despite these advances, the study highlights key limitations. Unconstrained models may require longer training times to sample symmetry orbits that equivariant models encode by design, and they can struggle with targets involving high-angular-momentum or pseudo-symmetries unless specific inductive biases are injected. For example, the PET model failed to learn a geometric pseudo-scalar target without modified embeddings, as shown in Figure 6, indicating that some symmetry channels are poorly represented in standard architectures. Additionally, the analysis framework currently focuses on compact groups like O(3), with extensions to non-compact groups like the Lorentz group requiring task-dependent sampling measures. These limitations underscore the need for careful architectural choices and continued research into balancing expressivity with physical fidelity in AI models for science.
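
To illustrate the kind of inductive bias such modifications inject, the snippet below builds unnormalized real solid harmonics up to l = 2 from neighbour displacement vectors. The paper's solid spherical harmonics embeddings for PET go to higher angular order, so this is only a low-order sketch of the idea, not the modified architecture.

```python
import numpy as np

def solid_harmonics_l2(vectors):
    """Unnormalized real solid harmonics up to l = 2 for displacement vectors
    of shape (n, 3): one l=0 channel, three l=1 channels, five l=2 channels.
    Feeding such channels to a model gives it explicit access to
    higher-angular-momentum information about each neighbourhood."""
    x, y, z = vectors[:, 0], vectors[:, 1], vectors[:, 2]
    r2 = x * x + y * y + z * z
    return np.stack([
        np.ones_like(x),         # l = 0 (scalar)
        x, y, z,                 # l = 1 (vector)
        x * y, x * z, y * z,     # l = 2 (traceless-tensor components)
        x * x - y * y,
        3.0 * z * z - r2,
    ], axis=1)

# Example: embed the displacements from a central atom to its neighbours.
neighbours = np.random.default_rng(3).normal(size=(6, 3))
features = solid_harmonics_l2(neighbours)   # shape (6, 9)
```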

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn