In this work, we present LAVViT: Latent Audio-Visual Vision Transformers for Speaker Verification
If you find this work useful in your research, please consider citing our work 📝 and giving a star 🌟:
```bibtex
@INPROCEEDINGS{10888977,
  author={Praveen, R. Gnana and Alam, Jahangir},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={LAVViT: Latent Audio-Visual Vision Transformers for Speaker Verification},
  year={2025},
}
```

There are three major blocks in this repository to reproduce the results of our paper. This code uses Mixed Precision Training (torch.cuda.amp). The dependencies and packages required to reproduce the environment of this repository can be found in the environment.yml file.
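As a reference for how mixed precision is typically wired up with torch.cuda.amp, here is a minimal sketch of an AMP training step; the model, optimizer, and loss below are placeholders, not the actual LAVViT training code:

```python
import torch

# Placeholder model/optimizer/loss; the real LAVViT components differ.
model = torch.nn.Linear(512, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()   # rescales the loss to avoid fp16 gradient underflow

def train_step(inputs, labels):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward pass runs in mixed precision
        loss = criterion(model(inputs), labels)
    scaler.scale(loss).backward()      # backward on the scaled loss
    scaler.step(optimizer)             # unscales gradients before the optimizer step
    scaler.update()                    # adapts the scale factor for the next step
    return loss.item()
```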
Create an environment using the environment.yml file:

```bash
conda env create -f environment.yml
```
The text files can be found here:

- train_list: Train list
- val_trials: Validation trials list
- val_list: Validation list
- test_trials: Vox1-O trials list
- test_list: Vox1-O list
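The exact file contents depend on the release, but VoxCeleb trial lists conventionally contain one trial per line: a label, an enrolment utterance, and a test utterance. A minimal sketch for reading such a file (the format and filename here are assumptions; check the actual lists):

```python
def load_trials(path):
    """Read a VoxCeleb-style trials file: '<label> <enrol> <test>' per line."""
    trials = []
    with open(path) as f:
        for line in f:
            label, enrol, test = line.strip().split()
            trials.append((int(label), enrol, test))  # 1 = same speaker, 0 = different
    return trials

pairs = load_trials("test_trials.txt")  # hypothetical filename
print(f"{len(pairs)} trials, {sum(l for l, _, _ in pairs)} target pairs")
```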
Please download the following:
- The images of the VoxCeleb1 dataset can be downloaded here
- The downloaded images are not properly aligned, so they are aligned using InsightFace; a minimal alignment sketch follows this list. The preprocessing scripts are provided in the preprocessing folder.
- Please note that it is important to compute the mean and standard deviation of the audio data (spectrograms) using the command `sbatch run_mean.sh`; a sketch of this computation also follows this list.
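For the alignment step, the preprocessing folder contains the authoritative scripts; the following is only a minimal sketch of 5-point landmark alignment with the insightface package (the input path and the 112x112 crop size are assumptions):

```python
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align

app = FaceAnalysis()                      # loads default detection + landmark models
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("frame.jpg")             # hypothetical input image
faces = app.get(img)                      # detect faces and 5-point landmarks
if faces:
    # Warp the face to a canonical 112x112 crop using the detected landmarks.
    aligned = face_align.norm_crop(img, landmark=faces[0].kps, image_size=112)
    cv2.imwrite("frame_aligned.jpg", aligned)
```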
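run_mean.sh wraps the repository's own statistics script; conceptually, it amounts to a single pass over the training spectrograms, as in this sketch (the file format and tensor shapes are assumptions):

```python
import torch

def compute_mean_std(spec_paths):
    """One-pass global mean/std over a list of saved spectrogram tensors."""
    total, total_sq, count = 0.0, 0.0, 0
    for path in spec_paths:
        spec = torch.load(path)          # assumed: a (freq_bins, time) tensor per file
        total += spec.sum()
        total_sq += (spec ** 2).sum()
        count += spec.numel()
    mean = total / count
    std = (total_sq / count - mean ** 2).sqrt()  # Var[x] = E[x^2] - (E[x])^2
    return mean, std
```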
To train and evaluate the model:

- Training: `sbatch run_train.sh`
- Evaluation: `sbatch run_eval.sh`
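Evaluation on Vox1-O is conventionally reported as the Equal Error Rate (EER) over cosine similarity scores. A minimal sketch of that scoring step (the embeddings dictionary stands in for the trained model's outputs):

```python
import numpy as np
from sklearn.metrics import roc_curve

def score_trials(trials, embeddings):
    """Cosine-score each (label, enrol, test) trial and return the EER in percent."""
    labels, scores = [], []
    for label, enrol, test in trials:
        a, b = embeddings[enrol], embeddings[test]
        scores.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        labels.append(label)
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    eer = fpr[np.nanargmin(np.abs(fnr - fpr))]  # operating point where FPR ~= FNR
    return eer * 100.0
```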
Our code is based on AVCleanse and [LAVISH](https://github.com/GenjiB/LAVISH).