
Conversation


@cncastillo cncastillo commented Sep 16, 2025

Fixes #583. Instead of using sim_params["Nblocks"] (now removed), a new variable sim_params["max_block_length"] is used to better control the block size.

Before, if you had a block with 11 time points and wanted to divide it into 3 blocks, you would get:

1:4, 5:8, 9:11

Now you define max_block_length = 5 and get:

1:5, 6:10, 11:11

This maximizes per-block GPU usage.
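The two splitting strategies can be sketched in a few lines of Julia (the helper names are illustrative, not KomaMRICore internals; `Iterators.partition` implements exactly the capped-length split):

```julia
# Old scheme: ask for Nblocks and derive a (rounded-up) chunk size from it.
old_split(N, Nblocks) = collect(Iterators.partition(1:N, cld(N, Nblocks)))

# New scheme: cap every block at max_block_length, so all blocks except
# possibly the last one are completely full.
new_split(N, max_block_length) = collect(Iterators.partition(1:N, max_block_length))

old_split(11, 3)  # [1:4, 5:8, 9:11]
new_split(11, 5)  # [1:5, 6:10, 11:11]
```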

This block-size limit is only relevant for precession blocks. If you also want to split RF blocks (at the cost of more kernel calls), use sim_params["max_rf_block_length"].

Another change is that the presence of an RF pulse is now detected with abs.(seqd.B1) .> 0.0 instead of comparing against a very small threshold like 1e-9.
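For example, the per-sample RF mask changes as follows (`B1` here is a stand-in for the discretized RF waveform, following the PR description):

```julia
B1 = [0.0 + 0.0im, 1e-12 + 0.0im, 0.5 + 0.0im]

rf_on_old = abs.(B1) .> 1e-9   # [false, false, true]: tiny samples ignored
rf_on_new = abs.(B1) .> 0.0    # [false, true, true]:  any nonzero B1 counts
```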

  • Update CPU-based methods
    • BlochCPU
    • BlochSimple
    • BlochDict
  • Update GPU-based methods
    • BlochGPU
    • BlochSimple
    • BlochDict

As this bug was related to some time-stepping issues, I also implemented a new experimental Magnus-based method that takes advantage of having information at both the current and the next time point.

Right now, this is only compatible with CPU-based simulations, and there are some issues that need to be fixed.
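As a rough illustration of the idea (a sketch, not the PR's actual implementation), a first-order Magnus step for dM/dt = A(t) M replaces the propagator with the exponential of the trapezoidal average of A at the current and next time point:

```julia
using LinearAlgebra  # for the matrix exponential `exp`

# First-order Magnus step for dM/dt = A(t) * M, averaging A over the
# current and next time point (illustrative only).
magnus_step(A_n, A_np1, dt, M) = exp(dt / 2 * (A_n + A_np1)) * M

# Example: free precession about z at angular frequency ω (no relaxation).
ω = 2π * 10.0
A(t) = [0.0 -ω 0.0; ω 0.0 0.0; 0.0 0.0 0.0]
dt = 1e-3
M1 = magnus_step(A(0.0), A(dt), dt, [1.0, 0.0, 0.0])
# For constant A this is an exact rotation: M1 ≈ [cos(ω*dt), sin(ω*dt), 0.0]
```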

If you want to try this PR, install it with:

```julia-repl
pkg> add https://github.com/JuliaHealth/KomaMRI.jl:KomaMRIBase#fix-sim-block-issue
pkg> add https://github.com/JuliaHealth/KomaMRI.jl:KomaMRICore#fix-sim-block-issue
pkg> add KomaMRI
```


Tooine commented Sep 17, 2025

Thank you for the bug investigation @cncastillo!

I do not understand what the problem was with the use of sim_params["Nblocks"] regarding #583. Could you briefly explain? In your example, why is it better to get 1:5, 6:10, 11:11 instead of 1:4, 5:8, 9:11? Thanks!


cncastillo commented Sep 17, 2025

Hi, I'm not really sure this is a fix yet. I think the problem is related to some blocks having length 1-2. The way blocks were split before was somewhat arbitrary (it depended on where in the arrays things happened to be split). Now most blocks will use the max GPU group size, and GPU memory usage is easier to control (by changing this new parameter), so it was a change I wanted to make anyway.

We can still get length 1-2 blocks, so I need to check this further. I also saw a similar error for a block_length of 2. For now I restricted the parameter to values greater than 5, but that will probably change.
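A minimal sketch of that restriction (a hypothetical helper, not the actual KomaMRICore code; the exact bound may change as noted above):

```julia
# Reject very short block lengths, which currently trigger the error
# described above (bound of 5 taken from the comment; it may change).
function check_block_length(max_block_length::Int)
    max_block_length > 5 ||
        throw(ArgumentError("max_block_length must be greater than 5"))
    return max_block_length
end
```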


codecov bot commented Sep 19, 2025

Codecov Report

❌ Patch coverage is 80.55556% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.94%. Comparing base (8b62195) to head (95f41ee).

| Files with missing lines | Patch % | Lines |
|---|---|---|
| KomaMRICore/src/simulation/SimulatorCore.jl | 80.55% | 7 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #612      +/-   ##
==========================================
- Coverage   90.08%   89.94%   -0.15%     
==========================================
  Files          56       56              
  Lines        3118     3122       +4     
==========================================
- Hits         2809     2808       -1     
- Misses        309      314       +5     
| Flag | Coverage Δ |
|---|---|
| base | 89.19% <ø> (ø) |
| core | 88.84% <80.55%> (-0.82%) ⬇️ |
| files | 93.60% <ø> (ø) |
| komamri | 88.21% <ø> (ø) |
| plots | 91.12% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| KomaMRICore/src/simulation/SimulatorCore.jl | 92.02% <80.55%> (-2.95%) ⬇️ |


@github-actions github-actions bot left a comment


KomaMRI Benchmarks

| Benchmark suite | Current: 95f41ee | Previous: 8b62195 | Ratio |
|---|---|---|---|
| MRI Lab/Bloch/CPU/2 thread(s) | 343531668 ns | 328508190 ns | 1.05 |
| MRI Lab/Bloch/CPU/4 thread(s) | 276532377 ns | 225624482 ns | 1.23 |
| MRI Lab/Bloch/CPU/8 thread(s) | 150655666 ns | 163355495 ns | 0.92 |
| MRI Lab/Bloch/CPU/1 thread(s) | 556857950.5 ns | 548280455 ns | 1.02 |
| MRI Lab/Bloch/GPU/CUDA | 22142265 ns | 22724945 ns | 0.97 |
| MRI Lab/Bloch/GPU/oneAPI | 75084170 ns | 77230670 ns | 0.97 |
| MRI Lab/Bloch/GPU/Metal | 89922270.5 ns | 96602916.5 ns | 0.93 |
| MRI Lab/Bloch/GPU/AMDGPU | 20495279 ns | 22289157 ns | 0.92 |
| Slice Selection 3D/Bloch/CPU/2 thread(s) | 1558788300 ns | 1558872240 ns | 1.00 |
| Slice Selection 3D/Bloch/CPU/4 thread(s) | 898089106 ns | 859609769 ns | 1.04 |
| Slice Selection 3D/Bloch/CPU/8 thread(s) | 652087225 ns | 522488232.5 ns | 1.25 |
| Slice Selection 3D/Bloch/CPU/1 thread(s) | 2984269895 ns | 3010268970 ns | 0.99 |
| Slice Selection 3D/Bloch/GPU/CUDA | 34803377 ns | 32524226 ns | 1.07 |
| Slice Selection 3D/Bloch/GPU/oneAPI | 136108698.5 ns | 123943367 ns | 1.10 |
| Slice Selection 3D/Bloch/GPU/Metal | 143251417 ns | 116308437.5 ns | 1.23 |
| Slice Selection 3D/Bloch/GPU/AMDGPU | 35018165.5 ns | 31118611 ns | 1.13 |

This comment was automatically generated by workflow using github-action-benchmark.

@cncastillo cncastillo force-pushed the fix-sim-block-issue branch from 95f41ee to c37c7d3 on October 6, 2025 23:26
@cncastillo cncastillo linked an issue Oct 13, 2025 that may be closed by this pull request
@cncastillo cncastillo changed the title Fix simulation block splitting issues Add Magnus-based methods & fix simulation block splitting issues Oct 13, 2025


Successfully merging this pull request may close these issues.

  • Troubles at beginning of blocks in BlochDict simulations
  • Add 2nd-order Runge Kutta for RF excitation
