praveena2j/LAVViT

In this work, we present LAVViT: Latent Audio-Visual Vision Transformers for Speaker Verification

References

If you find this work useful in your research, please consider citing our work 📝 and giving a star 🌟:

@INPROCEEDINGS{10888977,
  author={Praveen, R. Gnana and Alam, Jahangir},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={LAVViT: Latent Audio-Visual Vision Transformers for Speaker Verification}, 
  year={2025},
}

There are three major blocks in this repository to reproduce the results of our paper: preprocessing, training, and inference. The code uses mixed-precision training (torch.cuda.amp). The dependencies and packages required to reproduce the environment of this repository are listed in the environment.yml file.
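As a rough illustration of how torch.cuda.amp is typically used, here is a minimal mixed-precision training sketch; the model, data loader, and loss below are placeholders, not the repository's actual training code.

import torch

def train_one_epoch(model, loader, criterion, optimizer, device="cuda"):
    scaler = torch.cuda.amp.GradScaler()              # scales the loss to avoid fp16 underflow
    model.train()
    for audio, video, labels in loader:               # hypothetical batch layout
        audio, video, labels = audio.to(device), video.to(device), labels.to(device)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():               # forward pass runs in mixed precision
            logits = model(audio, video)
            loss = criterion(logits, labels)
        scaler.scale(loss).backward()                 # backward on the scaled loss
        scaler.step(optimizer)                        # unscale gradients, then optimizer step
        scaler.update()                               # adjust the loss scale for the next step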

Creating the environment

Create an environment using the environment.yml file

conda env create -f environment.yml
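The environment name is defined inside environment.yml; activate it before running any of the scripts below (the name here is a placeholder):

conda activate <env-name>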

Text Files

The text files can be found here

train_list : Train list
val_trials : Validation trials list
val_list : Validation list
test_trials : VoxCeleb1-O trials list
test_list : VoxCeleb1-O list
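These files are assumed to follow the usual VoxCeleb convention: list files contain one utterance path per line, and trial files contain one trial per line as "label enrol_path test_path", with label 1 for a target trial and 0 for a non-target trial. A minimal parsing sketch under that assumption:

def load_trials(path):
    """Read a VoxCeleb-style trials file: one 'label enrol test' triple per line."""
    trials = []
    with open(path) as f:
        for line in f:
            label, enrol, test = line.strip().split()
            trials.append((int(label), enrol, test))
    return trials

# e.g. pairs = load_trials("test_trials.txt")   # hypothetical file name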

Table of contents

  • Preprocessing
  • Training
  • Inference

Preprocessing

Return to Table of Contents

Step One: Download the dataset

Return to Table of Contents

Please download the following:

  • The images of the VoxCeleb1 dataset can be downloaded here

Step Two: Preprocess the visual modality

Return to Table of Contents

  • The downloaded images are not properly aligned, so they are aligned using InsightFace. The preprocessing scripts are provided in the preprocessing folder.
  • Please note that it is important to compute the mean and standard deviation of the audio data (spectrograms) using the command 'sbatch run_mean.sh'; a sketch of this computation follows this list.
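A minimal sketch of the mean/standard-deviation computation over log-mel spectrograms, assuming torchaudio features; run_mean.sh wraps the repository's own script, and the transform settings and file list below are placeholders.

import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)  # placeholder settings

total, total_sq, count = 0.0, 0.0, 0
for wav_path in train_wav_paths:                   # hypothetical list of training wav files
    waveform, _ = torchaudio.load(wav_path)
    spec = torch.log(mel(waveform) + 1e-6)         # log-mel spectrogram
    total += spec.sum()
    total_sq += (spec ** 2).sum()
    count += spec.numel()

mean = total / count
std = (total_sq / count - mean ** 2).sqrt()
print(mean.item(), std.item())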

Training

Return to Table of Contents

  • sbatch run_train.sh

Inference

Return to Table of Contents

  • sbatch run_eval.sh
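Evaluation on the VoxCeleb1-O trials typically scores each trial with the cosine similarity between the two speaker embeddings and reports the equal error rate (EER). A minimal scoring sketch, assuming the embeddings have already been extracted (this is not the repository's exact evaluation code):

import numpy as np
from sklearn.metrics import roc_curve

def cosine_score(e1, e2):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-8))

def compute_eer(labels, scores):
    # EER: operating point where false-accept and false-reject rates are equal.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0

# labels: 0/1 targets from test_trials; scores: one cosine score per trial
# eer = compute_eer(labels, scores)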

👍 Acknowledgments

Our code is based on AVCleanse and [LAVISH](https://github.com/GenjiB/LAVISH).
