A Lightweight Library for Energy-Based Joint-Embedding Predictive Architectures
Paper • 2602.03604 • Published
FinJEPA is a JEPA-based world model for portfolio optimization over a separated action space of continuous portfolio weights, discrete trading signals, and a binary hedge flag (a minimal embedding sketch follows the table below). It combines the following components:
| Component | Source | Key Innovation |
|---|---|---|
| Time Series Encoder | TS-JEPA (Sennadir 2025) | 1D-CNN patch tokenizer + Transformer |
| Action Conditioning | JEPA-WMs (Terver 2025) | AdaLN + RoPE in predictor |
| Collapse Prevention | EB-JEPA (Terver 2026) | SIGReg + Inverse Dynamics Model |
| Multi-step Rollout | EB-JEPA | K-step autoregressive training |
| Planner | JEPA-WMs + EB-JEPA | CEM L2 cost / MPPI cumulative cost in latent space |
| TD Branch | TD-JEPA (Bagatella 2025) | Optional separate task encoder for zero-shot RL |
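The `[ActionEmbedder]` box in the architecture diagram below maps this factorized action into a single conditioning vector for the predictor. As a rough illustration only, here is a minimal PyTorch sketch; the layer sizes, the per-asset signal pooling, and the concatenate-then-project fusion are assumptions, not the library's actual implementation.

```python
import torch
import torch.nn as nn

class ActionEmbedder(nn.Module):
    """Embed a factorized portfolio action (weights, signals, hedge) into one vector.

    The three branches mirror the architecture diagram below; all layer sizes and
    the concatenate-then-project fusion are illustrative assumptions.
    """

    def __init__(self, n_assets: int, n_signal_classes: int = 3, embed_dim: int = 128):
        super().__init__()
        # Continuous portfolio weights: one value per asset.
        self.weight_proj = nn.Linear(n_assets, embed_dim)
        # Discrete per-asset trading signals (e.g. sell / hold / buy).
        self.signal_embed = nn.Embedding(n_signal_classes, embed_dim)
        # Binary hedge flag: 0 = unhedged, 1 = hedged.
        self.hedge_embed = nn.Embedding(2, embed_dim)
        # Fuse the three branches into a single action embedding.
        self.out_proj = nn.Sequential(
            nn.Linear(3 * embed_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, weights: torch.Tensor, signals: torch.Tensor, hedge: torch.Tensor):
        # weights: (B, n_assets) float, signals: (B, n_assets) long, hedge: (B,) long
        w = self.weight_proj(weights)                       # (B, D)
        s = self.signal_embed(signals).mean(dim=1)          # (B, D), pooled over assets
        h = self.hedge_embed(hedge)                         # (B, D)
        return self.out_proj(torch.cat([w, s, h], dim=-1))  # (B, D)
```

Under these assumptions, an `ActionEmbedder(n_assets=5)` called on a batch of weights, signals, and hedge flags returns a `(batch, 128)` tensor that the predictor can consume through its AdaLN conditioning.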
```
Financial Time Series (T, F)
        │
        ▼
[TimeSeriesTokenizer] ── 1D-CNN patches + position encoding
        │
        ├────► [Context Encoder] (student)
        │               │
        │               ▼
        │          [Predictor] ◄──── Action embedding (weights + signals)
        │        (AdaLN + RoPE)               │
        │               │                     │
        │               ▼                     ▼
        │       Predicted target        [ActionEmbedder]
        │          embeddings            ├── weights (continuous)
        │                                ├── signals (discrete)
        │                                └── hedge (binary)
        │
        └────► [Target Encoder] (teacher, EMA frozen)
                        │
                        ▼
           Ground truth target embeddings
```
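To make the data flow above concrete, the sketch below shows one JEPA training step under assumed module interfaces: the EMA-frozen target encoder supplies regression targets, and the student's action-conditioned predictions are pulled toward them. The function names, the smooth-L1 loss, and the 0.996 momentum are illustrative assumptions; per the component table, the actual library additionally applies SIGReg, an inverse dynamics model, and K-step autoregressive rollouts on top of this basic step.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.996):
    # Exponential moving average of student weights into the frozen teacher encoder.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def jepa_step(tokenizer, context_encoder, target_encoder, predictor,
              action_embedder, optimizer, batch):
    """One single-step JEPA update; all module interfaces are assumed for illustration."""
    x_ctx, x_tgt, weights, signals, hedge = batch

    # Student path: tokenize and encode the context window, then predict the
    # target embeddings conditioned on the embedded action.
    z_ctx = context_encoder(tokenizer(x_ctx))
    action = action_embedder(weights, signals, hedge)
    z_pred = predictor(z_ctx, action)        # AdaLN + RoPE conditioning happens inside

    # Teacher path: the EMA-frozen target encoder supplies ground-truth embeddings.
    with torch.no_grad():
        z_tgt = target_encoder(tokenizer(x_tgt))

    loss = F.smooth_l1_loss(z_pred, z_tgt)   # prediction loss in latent space
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(target_encoder, context_encoder)
    return loss.item()
```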
Fast training run:

```bash
python finjepa/run_training_fast.py
```

Full training on real data:
```bash
python finjepa/train.py --data_source hf \
    --dataset_name paperswithbacktest/Stocks-Daily-Price \
    --n_assets 5 --batch_size 128 --epochs 50 --push_to_hub
```
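At inference time, the component table lists a CEM planner that scores candidate actions by L2 cost in latent space. The sketch below illustrates that idea for a single-step horizon, optimizing only the continuous weight component while holding signals and hedge fixed; every name, shape, and hyperparameter here is an assumption rather than FinJEPA's actual planner.

```python
import torch

def cem_plan(predictor, action_embedder, z_ctx, z_goal, n_assets: int,
             n_iters: int = 5, pop: int = 256, n_elite: int = 32):
    """Cross-entropy-method search over portfolio weights, scored in latent space.

    z_ctx: (1, D) current context embedding; z_goal: (1, D) desired target embedding.
    Signals are fixed to "hold" and the hedge flag to 0 purely to keep the sketch short.
    """
    device = z_ctx.device
    mean = torch.zeros(n_assets, device=device)   # logits of the weight distribution
    std = torch.ones(n_assets, device=device)
    signals = torch.ones(pop, n_assets, dtype=torch.long, device=device)
    hedge = torch.zeros(pop, dtype=torch.long, device=device)

    for _ in range(n_iters):
        # Sample candidate weight logits and map them onto the simplex.
        logits = mean + std * torch.randn(pop, n_assets, device=device)
        weights = torch.softmax(logits, dim=-1)
        action = action_embedder(weights, signals, hedge)
        z_pred = predictor(z_ctx.expand(pop, -1), action)
        # L2 cost between predicted latent and goal latent; keep the cheapest elites.
        cost = (z_pred - z_goal).pow(2).sum(dim=-1)
        elite = logits[cost.topk(n_elite, largest=False).indices]
        # Refit the sampling distribution to the elite set.
        mean, std = elite.mean(dim=0), elite.std(dim=0) + 1e-6
    return torch.softmax(mean, dim=-1)  # final portfolio weights
```

MPPI, also listed in the table, would instead weight all sampled candidates by an exponential of their (cumulative, multi-step) costs rather than refitting to a hard elite set.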
This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.