
FP8 Megatron training question #6980

@chapter544


Hi,
I am running experiments with FP8 training using Megatron on H100s, but I still have some questions that I could not find answered in the documentation:

  1. Does FP8 Megatron training reduce GPU VRAM usage, so that I can increase the batch size or the sequence length?
  2. Does the repo currently support FP8 LoRA training, with the LoRA weights merged/exported directly into an FP8 checkpoint (without converting the checkpoint from FP8 to FP16 and then back from FP16 to FP8 for inference)?
  3. Does FP8 training give a speedup in your tests? I set up a quick training run using the example script llm.sh in the fp8 folder of the repo, but training was 1.5x-2x slower than FP16 (see the sketch after this list for the kind of FP8 path I have in mind).
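For reference, here is a minimal sketch of what I understand FP8 training to do under the hood: Megatron-style FP8 runs the linear-layer GEMMs through NVIDIA TransformerEngine's `fp8_autocast` with a delayed-scaling recipe. This is my own assumption for illustration, not the repo's llm.sh script.

```python
# Minimal FP8 forward/backward sketch with TransformerEngine (assumption,
# not the repo's actual training script).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Hybrid format: E4M3 for the forward pass, E5M2 for gradients,
# with delayed (history-based) amax scaling.
fp8_recipe = recipe.DelayedScaling(
    margin=0,
    fp8_format=recipe.Format.HYBRID,
    amax_history_len=16,
    amax_compute_algo="max",
)

# A single TE Linear layer standing in for a transformer block's GEMMs.
layer = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(128, 4096, device="cuda", requires_grad=True)

# GEMMs inside this context run in FP8 on H100 tensor cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

# Loss and backward run outside the autocast context, as in the TE examples.
out.float().sum().backward()
```

My (possibly wrong) understanding is that FP8 only pays off when the GEMMs are large enough to be compute-bound, and that with small hidden sizes the extra casting/scaling work can dominate, which might be related to the slowdown I see. Please correct me if the repo's FP8 path works differently.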

Thanks,
