arxiv:2605.09241

Sub-JEPA: Subspace Gaussian Regularization for Stable End-to-End World Models

Published on May 10 · Submitted by Kai Zhao on May 12
Abstract

Training of Joint-Embedding Predictive Architectures (JEPAs) is improved by applying Gaussian constraints in multiple random subspaces rather than in the original embedding space, achieving a better bias-variance balance and superior performance in continuous-control environments.

AI-generated summary

Joint-Embedding Predictive Architectures (JEPAs) provide a simple framework for learning world models by predicting future latent representations. However, JEPA training is subject to a bias-variance tradeoff. Without sufficient structural constraints, excessive representational variance causes the model to collapse to trivial solutions. The recent LeWorldModel (LeWM) shows that this issue can be alleviated by simply constraining latent embeddings with an isotropic Gaussian prior. However, latent representations inherently lie on low-dimensional manifolds within a high-dimensional ambient space, and enforcing an isotropic Gaussian prior directly in this ambient space introduces an overly strong bias. In this work, we propose Sub-JEPA, which seeks a favorable operating point on the bias-variance frontier by applying Gaussian constraints in multiple random subspaces rather than in the original embedding space. This design relaxes the global constraint while preserving its anti-collapse effect, leading to a better balance between training stability and representation flexibility. Extensive experiments across four continuous-control environments demonstrate that Sub-JEPA consistently outperforms LeWM with clear margins. Our method is simple yet effective, and serves as a strong baseline for future JEPA-based world model research. The code is available at https://github.com/intcomp/Sub-JEPA.

Community

Paper submitter

We're releasing Sub-JEPA 🌍

LeWM (from LeCun's group) is the first end-to-end trainable JEPA world model: it uses isotropic Gaussian regularization to prevent representation collapse. Clean and effective.
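For intuition, here is a minimal sketch of what an isotropic Gaussian constraint on a batch of embeddings can look like. LeWM's exact loss isn't given here, so this moment-matching penalty (batch mean pulled toward 0, batch covariance toward the identity) and the function name `isotropic_gaussian_reg` are assumptions for illustration, not the paper's formulation.

```python
import torch

def isotropic_gaussian_reg(z: torch.Tensor) -> torch.Tensor:
    """Moment-matching penalty pulling embeddings toward N(0, I).

    z: (batch, dim) latent embeddings, batch > 1. This is one common way
    to realize an isotropic Gaussian prior; it is not necessarily the
    exact regularizer used by LeWM.
    """
    mu = z.mean(dim=0)                      # (dim,) batch mean
    zc = z - mu                             # centered embeddings
    cov = zc.T @ zc / (z.shape[0] - 1)      # (dim, dim) sample covariance
    eye = torch.eye(z.shape[1], device=z.device)
    return mu.pow(2).sum() + (cov - eye).pow(2).sum()
```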

Our take: latent representations sit on low-dimensional manifolds, so enforcing a full-space Gaussian is too strong a bias.

We propose Subspace Gaussian Regularization: instead of constraining the full embedding space, we project latents into multiple orthogonal subspaces and apply Gaussian constraints there. Simple change, better inductive bias.
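A minimal sketch of that subspace variant, under the same assumptions as the penalty above. The subspace count, subspace dimension, and the choice of drawing fixed random orthonormal bases via QR are illustrative, not taken from the paper's code.

```python
import torch

def subspace_gaussian_reg(z: torch.Tensor, num_subspaces: int = 4,
                          sub_dim: int = 16, seed: int = 0) -> torch.Tensor:
    """Apply a Gaussian penalty in several random orthonormal subspaces.

    z: (batch, dim) latents. Each subspace is the column span of a
    (dim, sub_dim) orthonormal matrix obtained by QR of a fixed random
    Gaussian draw. `num_subspaces` and `sub_dim` are illustrative
    hyperparameters, not values from the paper.
    """
    def gaussian_penalty(x: torch.Tensor) -> torch.Tensor:
        # Same moment-matching penalty as the full-space sketch above.
        mu = x.mean(dim=0)
        xc = x - mu
        cov = xc.T @ xc / (x.shape[0] - 1)
        eye = torch.eye(x.shape[1], device=x.device)
        return mu.pow(2).sum() + (cov - eye).pow(2).sum()

    gen = torch.Generator().manual_seed(seed)   # fixed bases across steps
    dim = z.shape[1]
    total = z.new_zeros(())
    for _ in range(num_subspaces):
        g = torch.randn(dim, sub_dim, generator=gen)
        q, _ = torch.linalg.qr(g)               # orthonormal columns
        total = total + gaussian_penalty(z @ q.to(z.device))
    return total / num_subspaces
```

Constraining only sub_dim-dimensional projections leaves the remaining directions of the embedding space unconstrained, which is how the global prior gets relaxed while each subspace still resists collapse.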

Results on 4 continuous-control benchmarks consistently outperform LeWM, with gains that correlate with reductions in effective rank: the lower a task's intrinsic dimensionality, the larger the gain.
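Effective rank here presumably follows the standard entropy-based definition (Roy & Vetterli, 2007): the exponentiated entropy of the normalized singular-value spectrum. A sketch, assuming that definition:

```python
import torch

def effective_rank(z: torch.Tensor) -> torch.Tensor:
    """Entropy-based effective rank of a (batch, dim) embedding matrix.

    Normalizes the singular values of the centered matrix into a
    distribution p and returns exp(H(p)). Values near dim mean the
    representation spreads over many directions; small values indicate
    low intrinsic dimensionality.
    """
    zc = z - z.mean(dim=0)                    # center the batch
    s = torch.linalg.svdvals(zc)              # singular values
    p = s / s.sum()                           # normalized spectrum
    entropy = -(p * torch.log(p.clamp_min(1e-12))).sum()
    return torch.exp(entropy)
```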
