Every block has a diagram showing what it does.
Input: Protein Backbone Data (NEW: full backbone)
v14 predicts all 4 backbone atoms per residue, not just Cα. Each residue is represented as [N, CA, C, O] — the minimal backbone that defines the protein chain geometry.
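A minimal numpy sketch of that layout (the array shapes and index names are illustrative assumptions, not the actual code):

```python
import numpy as np

# Assumed layout: (L residues, 4 backbone atoms, 3 coordinates),
# with atoms ordered [N, CA, C, O] as described above.
L = 8
backbone = np.zeros((L, 4, 3))
N, CA, C, O = 0, 1, 2, 3       # illustrative atom indices

ca_trace = backbone[:, CA]      # (L, 3): the Cα-only view that v13 predicted
```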
1. Frozen Protein Encoder
Pre-trained encoder converts amino acid sequence into per-residue features (128-dim) and a contact probability map (which residues are physically close). All weights frozen — no gradients flow back.
2. Pair Stack — Triangle Updates ×8
The pair stack builds a pairwise relationship matrix (B, L, L, 128). Each entry describes how two residues relate structurally. The triangle update is inspired by AlphaFold2: if residue i is near k, and k is near j, then i is likely near j. Eight rounds of this propagate information across the whole chain. Also includes contact map conditioning (gated injection from encoder) and OuterProductMean (d=32).
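The "i near k, k near j, so i near j" propagation can be sketched as one AlphaFold2-style outgoing-edge triangle multiplicative update. This is a simplification (hypothetical weight names, no layer norm), not the actual pair stack:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def triangle_update_outgoing(z, W_a, W_b, W_g, W_o):
    """One triangle multiplicative update over a (L, L, c) pair tensor.

    The einsum aggregates over the shared residue k: if edge (i, k) and
    edge (j, k) are both strong, entry (i, j) receives a large update.
    """
    a = z @ W_a                              # projected edges leaving i
    b = z @ W_b                              # projected edges leaving j
    upd = np.einsum('ikc,jkc->ijc', a, b)    # combine over the shared k
    gate = sigmoid(z @ W_g)                  # learned gate for stability
    return z + gate * (upd @ W_o)

rng = np.random.default_rng(0)
L, c = 6, 16
z = rng.standard_normal((L, L, c))
W_a, W_b, W_g, W_o = (0.1 * rng.standard_normal((c, c)) for _ in range(4))
z_out = triangle_update_outgoing(z, W_a, W_b, W_g, W_o)
```

Eight such rounds let information hop across the whole chain even between residues that are far apart in sequence.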
3. Diffusion — Add Noise, Learn to Denoise
Training: Take a known protein structure, add random Gaussian noise at a random timestep t (out of 1000), then train the model to predict the original clean structure from the noisy version. Generation: Start from pure noise, iteratively denoise using DDIM (50 steps) to generate new protein structures.
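The noising and DDIM steps above can be sketched in numpy. The linear beta schedule is an assumption; the model's clean-structure prediction stands in as `x0_pred`:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear schedule (assumption)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def add_noise(x0, t, rng):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

def ddim_step(xt, x0_pred, t, t_prev):
    """One deterministic DDIM update from t to t_prev (t_prev = -1 means clean)."""
    eps = (xt - np.sqrt(alpha_bar[t]) * x0_pred) / np.sqrt(1.0 - alpha_bar[t])
    ab_prev = alpha_bar[t_prev] if t_prev >= 0 else 1.0
    return np.sqrt(ab_prev) * x0_pred + np.sqrt(1.0 - ab_prev) * eps
```

With a perfect `x0_pred`, a single DDIM step recovers the clean sample exactly; in practice the prediction improves along the 50-step trajectory from pure noise.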
4. Frame Initialization — Peptide Plane (NEW)
Each residue gets a local coordinate frame (3 axes + origin at CA). The frame is built from the peptide plane defined by N, CA, C within the same residue. This is more stable than v13's approach of using 3 consecutive Cα atoms, which become nearly collinear at high noise levels and cause numerical failures. Also includes SNR-gated confidence (SLERP toward identity at low signal) and self-conditioning (50% of the time, a previous prediction is fed back as extra input).
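The frame construction is a Gram-Schmidt orthogonalization over N, CA, C, in the style of AlphaFold2's rigid frames. A sketch (the axis convention, x along CA→C, is an assumption):

```python
import numpy as np

def frame_from_peptide_plane(n, ca, c):
    """Build a residue frame from its own N, CA, C atoms.

    Returns R (3x3 rotation, columns = local axes) and the origin at CA.
    """
    e1 = c - ca
    e1 = e1 / np.linalg.norm(e1)       # x axis along CA->C
    v = n - ca
    e2 = v - (v @ e1) * e1             # remove the CA->C component
    e2 = e2 / np.linalg.norm(e2)       # y axis in the N-CA-C plane
    e3 = np.cross(e1, e2)              # z axis completes a right-handed frame
    return np.stack([e1, e2, e3], axis=1), ca
```

Because N, CA, C belong to one rigid residue, this stays well-conditioned even when the noisy chain is stretched out, whereas three consecutive Cα positions can degenerate to a line.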
5. IPA Denoiser — 8 Layers
The core denoiser. Invariant Point Attention (IPA) combines standard sequence attention with 3D geometric attention — queries and keys are actual 3D points positioned in each residue's local coordinate frame. This makes the attention SE(3)-invariant: the attention pattern is unchanged if you rotate or translate the whole structure, so the predicted frame updates transform consistently with it. After each layer, a frame update refines each residue's rotation and translation; 8 layers of refinement progressively sharpen the structure prediction.
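The invariance claim can be checked on a stripped-down piece of IPA: the distance-based attention bias between query/key points carried in local frames. A minimal numpy sketch (point counts and function names are illustrative, not the real layer):

```python
import numpy as np

def random_rotation(rng):
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0            # force a proper rotation (det = +1)
    return q

def point_attention_bias(R, t, q_pts, k_pts):
    """Distance bias between per-residue local points, computed in global coords.

    R: (L, 3, 3) frame rotations, t: (L, 3) origins,
    q_pts/k_pts: (L, P, 3) points expressed in each residue's local frame.
    """
    qg = np.einsum('lab,lpb->lpa', R, q_pts) + t[:, None, :]   # to global coords
    kg = np.einsum('lab,lpb->lpa', R, k_pts) + t[:, None, :]
    d2 = ((qg[:, None] - kg[None, :]) ** 2).sum(-1)            # (L, L, P)
    return -d2.sum(-1)                                         # (L, L) bias

rng = np.random.default_rng(0)
L, P = 5, 4
R = np.stack([random_rotation(rng) for _ in range(L)])
t = rng.standard_normal((L, 3))
q_pts, k_pts = rng.standard_normal((2, L, P, 3))

# Rigidly move the whole structure: frames become R0 @ R_l, origins R0 t_l + t0.
R0, t0 = random_rotation(rng), np.array([1.0, -2.0, 3.0])
bias_before = point_attention_bias(R, t, q_pts, k_pts)
bias_after = point_attention_bias(np.einsum('ab,lbc->lac', R0, R),
                                  t @ R0.T + t0, q_pts, k_pts)
```

Because the points ride along with their frames, every pairwise distance (and hence the bias) is unchanged by the global rigid motion.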
6. BackboneAtomHead — Place N, C, O (NEW)
The key innovation of v14. After the IPA denoiser predicts frames (R, t), the BackboneAtomHead places N, C, O atoms relative to CA using learned offsets from ideal bond geometry (N-CA = 1.458Å, CA-C = 1.523Å, C=O = 1.231Å). The MLP is zero-initialized, so it starts by producing ideal geometry and learns residue-specific corrections (proline kinks, glycine flexibility). CA is always exactly at the frame origin.
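A sketch of the placement step, assuming the frame's x axis runs along CA→C and y lies in the N-CA-C plane. The N-CA-C and CA-C-O angles are standard averages (assumptions, not stated above), and `delta` stands in for the zero-initialized MLP's output:

```python
import numpy as np

theta = np.deg2rad(110.9)  # N-CA-C angle: standard average (assumption)
phi = np.deg2rad(59.2)     # places O via a ~120.8 deg CA-C-O angle (assumption)

# Ideal local-frame offsets from CA for [N, CA, C, O], built from the bond
# lengths above: N-CA = 1.458 A, CA-C = 1.523 A, C=O = 1.231 A.
IDEAL_LOCAL = np.array([
    [1.458 * np.cos(theta), 1.458 * np.sin(theta), 0.0],       # N
    [0.0, 0.0, 0.0],                                           # CA = frame origin
    [1.523, 0.0, 0.0],                                         # C
    [1.523 + 1.231 * np.cos(phi), -1.231 * np.sin(phi), 0.0],  # O
])

def place_backbone(R, t, delta=None):
    """Map (ideal + learned-correction) local offsets to global coordinates.

    R: (L, 3, 3) frames, t: (L, 3) CA positions, delta: (L, 4, 3) per-residue
    corrections -- zero at init, mimicking the zero-initialized MLP.
    """
    local = np.broadcast_to(IDEAL_LOCAL, (len(R), 4, 3))
    if delta is not None:
        local = local + delta
    return np.einsum('lab,lkb->lka', R, local) + t[:, None, :]

L = 3
atoms = place_backbone(np.tile(np.eye(3), (L, 1, 1)),
                       np.arange(L * 3, dtype=float).reshape(L, 3))
```

With `delta=None` this reproduces pure ideal geometry; the learned corrections then account for residue-specific deviations like proline kinks.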
7. Loss Functions — 13 Terms
CA-level losses (inherited from v13b) — supervise Cα positions:
- FAPE (w = 1.0): frame-aligned point error — how well do predicted points match true points in each residue's local frame?
- Frame Rotation (w = 0.5): angular distance between predicted and true coordinate frames, 1 − cos(θ)
- Distance MSE (w = 1.0): MSE on all pairwise Cα distances
- Bond Geometry (w = 3.0, annealed 1→3): consecutive Cα-Cα distance vs the 3.8 Å target
- Chirality (w = 0.1): signed volume of Cα quartets — enforces correct handedness
- Angle (w = 0.5): MSE on Cα-Cα-Cα bond angles
- Clash (w = 0.1): penalizes atoms closer than 3.8 Å
- Aux Distance (w = 0.03): ordinal BCE on 32-bin distance predictions
- Rg (w = 0.5): MSE on log(radius of gyration) — ensures correct overall compactness
Backbone losses (NEW) — supervise N, C, O atom positions (scaled by bb_ramp: 0→1 over the first 5 epochs):
- BB FAPE (w = 1.0): FAPE over all 4 backbone atoms in local frames
- BB Bond (w = 2.0): MSE on N-CA, CA-C, C=O, C-N bond lengths vs ideal values
- BB Angle (w = 0.5): MSE on N-CA-C, CA-C-N, C-N-CA angles
- Omega (w = 0.5): peptide-bond planarity, 1 + cos(ω); trans (ω = 180°) → 0
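As one worked example from the list above, the Omega term can be computed from the four atoms that define the peptide dihedral. A sketch using a standard dihedral convention (the convention itself is an assumption, not taken from the source):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (radians) about the p1-p2 bond."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    b2u = b2 / np.linalg.norm(b2)
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    return np.arctan2(np.cross(n1, b2u) @ n2, n1 @ n2)

def omega_loss(ca_i, c_i, n_next, ca_next):
    """1 + cos(omega) over CA(i)-C(i)-N(i+1)-CA(i+1): 0 for trans, 2 for cis."""
    return 1.0 + np.cos(dihedral(ca_i, c_i, n_next, ca_next))

# A planar zigzag (trans, omega = 180 deg) scores ~0; folding CA(i+1) back to
# the same side as CA(i) (cis, omega = 0) scores ~2.
trans = omega_loss(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                   np.array([2.0, 1.0, 0.0]), np.array([3.0, 1.0, 0.0]))
cis = omega_loss(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                 np.array([2.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))
```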