Reproduce Results
Commands are shown with --seed 0 for readability. To reproduce the reported averages, run seeds 0, 1, and 2 and average
the metrics (individual runs should land close to the reported numbers).
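The averaging step can be sketched as below. The metric name and the per-seed values are placeholders, not the repo's actual output format; substitute the numbers your three runs report.

```python
# Sketch: average one metric across the three seed runs.
# The values here are hypothetical placeholders.
from statistics import mean, stdev

per_seed_accuracy = {0: 0.912, 1: 0.905, 2: 0.918}

avg = mean(per_seed_accuracy.values())
spread = stdev(per_seed_accuracy.values())
print(f"accuracy: {avg:.3f} +/- {spread:.3f} over seeds {sorted(per_seed_accuracy)}")
```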
Environment
- Python: 3.10+
- PyTorch: 2.x with CUDA 11/12 (match your GPU driver)
- Hardware: 1x A100 or equivalent; adjust batch size for smaller GPUs.
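When shrinking the batch size to fit a smaller GPU, a common convention (an assumption on our part, not something this repo's scripts are documented to do) is to scale the learning rate linearly with the batch size:

```python
# Linear LR scaling rule — a common fine-tuning convention, assumed here,
# not a documented behavior of main_finetune.py.
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale the learning rate linearly with the batch size."""
    return base_lr * new_batch / base_batch

# e.g. halving the recipe's batch size of 256 (base LR 1e-3 is hypothetical):
print(scaled_lr(1e-3, 256, 128))  # -> 0.0005
```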
Install
requirements.txt pins the core dependencies for reproducibility, including default (CPU) PyTorch versions. If you need CUDA, reinstall torch/torchvision afterward using the wheels from pytorch.org to match your GPU driver.
git clone https://github.com/AhmedTarek62/wavesfm.git
cd wavesfm
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt
# If you need CUDA, reinstall torch/torchvision from pytorch.org after this step.
# DeepMIMO preprocessing dependency (only needed for DeepMIMO recipes):
pip install DeepMIMOv3
Recipe browser
POWDER (RFP) - LP
Dataset: POWDER (RFP) | Task: rfp | Finetuning: LP
Preprocess
python preprocessing/preprocess_rfp.py --data-path <POWDER_DIR> --output data/rfp.h5
Train
python main_finetune.py \
--task rfp \
--train-data data/rfp.h5 \
--val-split 0.2 \
--output-dir runs/v1.0/rfp/lp \
--finetune <WAVESFM_BASE_CKPT> \
--model vit_multi_small \
--use-conditional-ln \
--warmup-epochs 5 \
--num-workers 2 \
--seed 0 \
--smoothing 0.1 \
--epochs 10 --batch-size 256
Checkpoint: if you don't have the base checkpoint locally, replace --finetune <WAVESFM_BASE_CKPT> with --download-pretrained --hf-repo ahmedaboulfo/wavesfm --hf-file wavesfm-v1p0.pth.
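To launch all three seed runs for this recipe, a minimal wrapper like the following can build the commands. The per-seed --output-dir suffix is a hypothetical layout, and <WAVESFM_BASE_CKPT> is still a placeholder you must fill in (or swap for the --download-pretrained flags above).

```python
# Sketch: build the POWDER (RFP) LP train command for seeds 0, 1, 2.
# The seed-suffixed output dir is a hypothetical convention, not the repo's.
def build_cmd(seed: int) -> list[str]:
    return [
        "python", "main_finetune.py",
        "--task", "rfp",
        "--train-data", "data/rfp.h5",
        "--val-split", "0.2",
        "--output-dir", f"runs/v1.0/rfp/lp/seed{seed}",
        "--finetune", "<WAVESFM_BASE_CKPT>",  # placeholder: fill in before running
        "--model", "vit_multi_small",
        "--use-conditional-ln",
        "--warmup-epochs", "5",
        "--num-workers", "2",
        "--seed", str(seed),
        "--smoothing", "0.1",
        "--epochs", "10",
        "--batch-size", "256",
    ]

for seed in range(3):
    print(" ".join(build_cmd(seed)))
    # subprocess.run(build_cmd(seed), check=True)  # uncomment to actually launch
```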
Evaluate
python main_finetune.py \
--task rfp \
--train-data data/rfp.h5 \
--val-split 0.2 \
--finetune <FINETUNED_CKPT> \
--model vit_multi_small \
--use-conditional-ln \
--seed 0 \
--eval-only
Need dataset details? See the dataset pages.