selfclean_audio.ssl_adapt

Members

- LoraAdaptConfig
- ProjectionHead
- adapt_model_with_lora — Adapt the model on the target dataset using SimCLR and LoRA adapters.

class selfclean_audio.ssl_adapt.LoraAdaptConfig(
    enable: bool = False,
    r: int = 8,
    alpha: int = 16,
    dropout: float = 0.05,
    target_modules: tuple[str, ...] = ('q_proj', 'k_proj', 'v_proj', 'out_proj', 'fc1', 'fc2'),
    epochs: int = 1,
    lr: float = 0.0001,
    weight_decay: float = 0.0,
    temperature: float = 0.2,
    projection_dim: int = 256,
    max_steps: int | None = None,
    objective: str = 'infonce',
    vicreg_sim_coeff: float = 25.0,
    vicreg_var_coeff: float = 25.0,
    vicreg_cov_coeff: float = 1.0,
    sample_rate: int = 16000,
    strong_aug: bool = True,
    time_shift_max: float = 0.1,
    add_noise_snr_db: float = 15.0,
    tempo_min: float = 0.9,
    tempo_max: float = 1.1,
    pitch_semitones: float = 2.0,
    reverb_prob: float = 0.3,
    eq_prob: float = 0.4,
    time_mask_prob: float = 0.5,
    time_mask_max_ratio: float = 0.2,
    gradient_accumulation_steps: int = 1,
)
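As an illustration, a config that enables LoRA and switches the SSL objective to VICReg might be constructed as follows. This is a sketch assuming `selfclean_audio` is installed; the field names and defaults are taken from the signature above, and the non-default values shown are arbitrary example choices.

```python
from selfclean_audio.ssl_adapt import LoraAdaptConfig

# Enable LoRA adapters and use the VICReg objective instead of
# the default 'infonce'; all other fields keep their defaults.
cfg = LoraAdaptConfig(
    enable=True,
    r=16,                # LoRA rank (example value)
    alpha=32,            # LoRA scaling (example value)
    objective="vicreg",
    epochs=2,
    lr=5e-5,
)
```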
selfclean_audio.ssl_adapt.adapt_model_with_lora(model: Module, dataloader, device: device | str, cfg: LoraAdaptConfig)

Adapt the model on the target dataset using SimCLR and LoRA adapters.

The base model is frozen; only the LoRA adapters (if enabled) and a small projection head are trained. After adaptation, the projection head is discarded and the adapted LoRA weights remain active in the model.
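The contrastive objective used when `objective='infonce'` is the SimCLR-style NT-Xent loss: each clip is augmented into two views, projections of the same clip are pulled together, and all other projections in the batch act as negatives. A minimal NumPy sketch of the loss (illustration only, not the module's implementation; the `temperature` argument mirrors the config field of the same name):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.2):
    """InfoNCE / NT-Xent loss over a batch of positive pairs.

    z1, z2: (N, d) projections of two augmented views of the same clips.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is the other view of the same clip
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))
loss_mismatched = nt_xent_loss(z1, rng.normal(size=(8, 32)))
loss_matched = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 32)))
print(loss_matched < loss_mismatched)
```

Near-identical views yield a lower loss than unrelated ones, which is exactly what drives the adapters and projection head during adaptation.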