Much of the content shared in the field (published papers, oral presentations, etc.) leans toward low-entropy, story-heavy narratives. My blogs instead aim to distill fundamental concepts and underlying mechanisms. Some posts include minimal working code, just enough to grasp the underlying ideas.

Technical Blogs

  • Generative Models
  • Equivariance
  • Neural PDE Solvers
  • Others

Flow Matching

- ODE-based Flow
- Interpolants
- Conditional Velocity (training sketch after this list)
- Marginal Velocity
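
As a companion to the bullets above, here is a minimal sketch of one flow-matching training step: sample a linear interpolant between noise and data, then regress the network onto the conditional velocity. The tiny `velocity_net` MLP and the 2-D toy data are placeholders for illustration, not the blog's actual code.

```python
# Minimal flow-matching training step (an illustrative sketch, not the blog's code).
# Assumes a velocity network v_theta(x, t); a tiny MLP on 2-D toy data stands in here.
import torch
import torch.nn as nn

velocity_net = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))

def flow_matching_loss(x1):
    """x1: a batch of data samples, shape (B, 2)."""
    x0 = torch.randn_like(x1)                     # sample from the noise prior
    t = torch.rand(x1.shape[0], 1)                # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                    # linear interpolant
    v_target = x1 - x0                            # conditional velocity along the interpolant
    v_pred = velocity_net(torch.cat([xt, t], dim=-1))
    return ((v_pred - v_target) ** 2).mean()      # regress onto the conditional velocity

loss = flow_matching_loss(torch.randn(128, 2))    # dummy 2-D data for illustration
loss.backward()
```

Although each regression target is a conditional velocity, minimizing this loss over all noise-data pairs drives the network toward the marginal velocity, the conditional expectation of the velocity given the current state and time.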

Mean Flow

- One-step Generation
- Jacobian-vector Product (sketch after this list)
- Reverse Training
- Reverse Integration
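
The sketch below shows how the MeanFlow training target can be assembled with a Jacobian-vector product via `torch.func.jvp`. The placeholder `MeanVelocityNet` stands in for a network u_theta(x, r, t) predicting the average velocity over [r, t]; sizes, names, and the toy data are illustrative assumptions, not the blog's code.

```python
# Sketch of the MeanFlow target construction with a Jacobian-vector product.
import torch
import torch.nn as nn
from torch.func import jvp

class MeanVelocityNet(nn.Module):
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 2, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x, r, t):
        return self.net(torch.cat([x, r, t], dim=-1))

u_theta = MeanVelocityNet()

x1 = torch.randn(128, 2)                     # data batch (placeholder)
x0 = torch.randn_like(x1)                    # prior noise
t = torch.rand(x1.shape[0], 1)
r = torch.rand(x1.shape[0], 1) * t           # ensure r <= t
xt = (1 - t) * x0 + t * x1                   # linear interpolant, as in flow matching
v = x1 - x0                                  # instantaneous (conditional) velocity

# Total derivative du/dt along the trajectory via a JVP with tangent (dx/dt, dr/dt, dt/dt) = (v, 0, 1).
u, dudt = jvp(u_theta, (xt, r, t), (v, torch.zeros_like(r), torch.ones_like(t)))

u_target = (v - (t - r) * dudt).detach()     # MeanFlow identity, with stop-gradient on the target
loss = ((u - u_target) ** 2).mean()
```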

Improved Mean Flow

- Mean Flow → Improved Mean Flow
- Mean Flow: Unstable Training
- Improved Mean Flow: Stable Training
- Training Objectives

Rectified Flow

- ODE-based Flow
- Straight Line Trajectories
- One-step Generation (sampling sketch after this list)
- Optimal Transport
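
The following sampling sketch illustrates the one-step idea: integrate dx/dt = v_theta(x, t) from noise at t = 0 to data at t = 1 with Euler steps. The placeholder `velocity_net` stands in for a trained rectified-flow model; with (nearly) straight trajectories, a single Euler step already lands close to the many-step solution.

```python
# Euler sampler for a learned flow ODE (illustrative sketch, untrained placeholder network).
import torch
import torch.nn as nn

velocity_net = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))  # placeholder

@torch.no_grad()
def euler_sample(x0, num_steps):
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * velocity_net(torch.cat([x, t], dim=-1))
    return x

x_many = euler_sample(torch.randn(16, 2), num_steps=100)  # standard multi-step ODE solve
x_one = euler_sample(torch.randn(16, 2), num_steps=1)     # one-step generation
```

Rectification (reflow) retrains the model on couplings generated by the model itself, which straightens the trajectories and makes the one-step sample increasingly accurate.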

Mean Flow Explained with High School Math

- High School Math
- Calculus I
- Calculus III

The Analytical Optimal Velocity in Flow Matching

- Derivation and Interpretation (key identity stated after this list)
- Continuity of Time-marginals
- Non-uniqueness of Velocity Fields
- Coupling of Distributions
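
For reference, here is a compact statement of the identity this post is about, written in standard flow-matching notation (the notation is mine, not necessarily the post's):

```latex
% Linear interpolant x_t = (1 - t)\,x_0 + t\,x_1 with x_0 \sim p_0 (noise), x_1 \sim p_1 (data).
% The minimizer of the flow-matching objective is the conditional expectation of the
% conditional velocity given the current state:
v^{*}(x, t) = \mathbb{E}\!\left[x_1 - x_0 \,\middle|\, x_t = x\right]
            = \int (x_1 - x_0)\, p(x_0, x_1 \mid x_t = x)\, \mathrm{d}x_0\, \mathrm{d}x_1 .
% It is not unique as a transport field: adding any w(x, t) with
% \nabla \cdot \bigl(p_t(x)\, w(x, t)\bigr) = 0 leaves every time-marginal p_t unchanged.
```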

Intro: Generative Models

- Generative model → Probability distribution
- Sampling → Generation
- Discriminative vs. Generative
- Broad applications beyond generation

Diffusion: Math and Derivation

- Adding Noise → Forward SDE
- Denoising → Reverse SDE
- Score, Probability Flow ODE (equations after this list)
- Fokker-Planck Equation
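
The three standard equations behind these bullets, in the notation of Anderson (1982) and Song et al. (2021), are included here as a quick reminder rather than as the post's full derivation:

```latex
% Forward (noising) SDE:
\mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
% Reverse-time (denoising) SDE, driven by the score \nabla_x \log p_t(x):
\mathrm{d}x = \bigl[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\bigr]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}
% Probability-flow ODE sharing the same time-marginals p_t:
\mathrm{d}x = \bigl[f(x, t) - \tfrac{1}{2}\, g(t)^2\,\nabla_x \log p_t(x)\bigr]\mathrm{d}t
```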

Group CNNs

- Lift the features to the group space (sketch after this list)
- Input transformation → Output transformation
- Used more often in computer vision
- Broad applications to many fields
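
A minimal sketch of the lifting idea for the four-fold rotation group C4 (an illustration, not the blog's code): correlate the input with all four 90-degree rotations of the filter, so the output carries an extra group axis with one response per rotation.

```python
# C4 lifting layer sketch: plain convolutions with rotated copies of the same filter.
import torch
import torch.nn.functional as F

def c4_lift(image, weight):
    """image: (B, C, H, W); weight: (C_out, C, k, k) -> output: (B, C_out, 4, H', W')."""
    outs = [F.conv2d(image, torch.rot90(weight, r, dims=(-2, -1))) for r in range(4)]
    return torch.stack(outs, dim=2)               # group axis: responses to the 4 rotations

feat = c4_lift(torch.randn(1, 3, 32, 32), torch.randn(8, 3, 3, 3))   # -> (1, 8, 4, 30, 30)
```

Rotating the input image then rotates the output feature maps spatially and cyclically shifts the group axis, which is exactly the input-transformation to output-transformation behavior listed above.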

Geometric GNNs

- Take 3D geometric graphs as inputs
- Ensure Euclidean (roto-translation) symmetries (sketch after this list)
- Used more often in 3D molecular learning
- Broader applications to any geometric point clouds
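
One simple way to realize the symmetry, sketched below: make messages depend only on pairwise distances and node features, so rotating or translating the input coordinates leaves the output features unchanged. This is an illustrative invariant message-passing layer on a fully connected graph, with made-up names and sizes; it is one instance of the broader design space the blog covers.

```python
# E(3)-invariant message passing on a 3-D point cloud (illustrative sketch).
import torch
import torch.nn as nn

class InvariantLayer(nn.Module):
    def __init__(self, d=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * d + 1, d), nn.SiLU(), nn.Linear(d, d))

    def forward(self, h, pos):
        # h: (N, d) node features; pos: (N, 3) coordinates
        dist2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1, keepdim=True)  # (N, N, 1)
        pairs = torch.cat([h[:, None].expand(-1, h.size(0), -1),
                           h[None, :].expand(h.size(0), -1, -1), dist2], dim=-1)
        return h + self.msg(pairs).sum(dim=1)      # aggregate distance-based (invariant) messages

layer = InvariantLayer()
out = layer(torch.randn(5, 32), torch.randn(5, 3))  # unchanged under roto-translations of pos
```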

Unconstrained Methods

- Offload the symmetry handling to the data (sketch after this list)
- The model can be of any architecture
- Does not constrain the features or operations
- Broad applications to any geometric data
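
One common instance of this idea, shown as an illustrative sketch rather than the blog's method: keep an arbitrary architecture and move the symmetry into the data by applying random 3-D rotations during training, so the network learns the symmetry from augmented samples.

```python
# Random-rotation augmentation for geometric data (illustrative sketch).
import torch

def random_rotations(batch_size):
    """Sample proper rotation matrices (det = +1) via QR of Gaussian matrices."""
    q, r = torch.linalg.qr(torch.randn(batch_size, 3, 3))
    q = q * torch.sign(torch.diagonal(r, dim1=-2, dim2=-1)).unsqueeze(-2)
    q[:, :, 0] *= torch.linalg.det(q).sign().unsqueeze(-1)   # flip a column if det = -1
    return q

def augment(pos):
    """pos: (B, N, 3) point clouds -> randomly rotated copies (labels stay the same)."""
    return torch.einsum("bij,bnj->bni", random_rotations(pos.shape[0]), pos)

augmented = augment(torch.randn(8, 50, 3))    # feed to any off-the-shelf architecture
```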

Operator Learning

- Learning mappings between function spaces
- Data-driven PDE solving
- Bridging physics and data
- Weather forecasting, fluid dynamics, etc.

PINN in 5 Minutes

- Solving PDEs with neural networks
- Physics constraints in the loss function (sketch after this list)
- Can be trained without data
- No explicit numerical discretization
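
A minimal working sketch of these points, on a toy problem of my choosing (solve u'(x) = -u(x) with u(0) = 1 on [0, 1], whose solution is exp(-x)): the loss penalizes the ODE residual at random collocation points plus the boundary condition, with no training data and no mesh.

```python
# Minimal PINN sketch: physics residual via autograd, no labeled data.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)        # random collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du_dx + u                              # physics constraint: u' + u = 0
    bc = net(torch.zeros(1, 1)) - 1.0                 # boundary condition: u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(x) approximates exp(-x) without any explicit numerical discretization.
```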

DeepONet

- Operator with learned basis functions (sketch after this list)
- No explicit discretization for the output
- Fixed sensor points for the input
- Based on universal approximation
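
A minimal sketch of the branch-trunk structure (names and sizes are illustrative, not the blog's code): the branch net encodes the input function sampled at fixed sensor points, the trunk net encodes each query coordinate, and the output is their inner product, so the trunk acts as a set of learned basis functions and the branch supplies their coefficients.

```python
# Minimal DeepONet sketch: branch (input-function) net and trunk (query-location) net.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, num_sensors=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(num_sensors, 128), nn.ReLU(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        # u_sensors: (B, num_sensors) input function at fixed sensors; y: (B, Q, 1) query points
        b = self.branch(u_sensors)                 # (B, p) learned coefficients
        t = self.trunk(y)                          # (B, Q, p) learned basis functions
        return torch.einsum("bp,bqp->bq", b, t) + self.bias   # (B, Q) output function values

model = DeepONet()
out = model(torch.randn(8, 100), torch.rand(8, 50, 1))   # -> shape (8, 50)
```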

Fourier Neural Operator

- Integral neural operator
- Learn directly in the frequency domain (sketch after this list)
- Global sinusoidal filters
- Fast and accurate
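
Below is a sketch of the core FNO block in 1-D (sizes and names are illustrative): take an FFT, apply a learned linear map to the lowest retained frequency modes, and transform back, which amounts to a global convolution with sinusoidal filters.

```python
# Minimal 1-D spectral convolution layer, the building block of an FNO (illustrative sketch).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (B, C, N) values on a uniform grid
        x_ft = torch.fft.rfft(x, dim=-1)       # frequency-domain coefficients, (B, C, N//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bcm,com->bom", x_ft[..., :self.modes], self.weight
        )                                      # mix channels per retained mode (global filter)
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

layer = SpectralConv1d()
y = layer(torch.randn(4, 32, 128))             # -> (4, 32, 128)
```

In a full FNO, several such layers are stacked with pointwise linear paths and nonlinearities in between.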

LLM Architecture

- Tokenizer
- Embedding
- Transformer Blocks
- LM Head (end-to-end sketch after this list)
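
The four stages above fit in a short end-to-end sketch. The toy character-level "tokenizer", the model name `TinyLM`, and all sizes are illustrative assumptions, not a description of any particular LLM.

```python
# Decoder-only language model sketch: tokenize -> embed -> transformer blocks -> LM head.
import torch
import torch.nn as nn

vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
def tokenize(text):                                   # 1. tokenizer: text -> token ids
    return torch.tensor([[vocab[c] for c in text]])

class TinyLM(nn.Module):
    def __init__(self, vocab_size=len(vocab), d=64, n_layers=2, n_heads=4, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d)    # 2. token embedding
        self.pos_emb = nn.Embedding(max_len, d)       #    (plus learned positions)
        block = nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)   # 3. transformer blocks
        self.lm_head = nn.Linear(d, vocab_size)       # 4. LM head: hidden states -> logits

    def forward(self, ids):
        B, T = ids.shape
        h = self.tok_emb(ids) + self.pos_emb(torch.arange(T))
        mask = nn.Transformer.generate_square_subsequent_mask(T)   # causal attention mask
        h = self.blocks(h, mask=mask)
        return self.lm_head(h)                        # (B, T, vocab_size) next-token logits

logits = TinyLM()(tokenize("hello world"))
```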

Contact