@folkertdev (Contributor) commented Jan 27, 2026

The custom intrinsics for vld1_* optimize less well than a standard unaligned read.

https://rust.godbolt.org/z/8T6Kr63K4

That seems like something that should be fixed in LLVM: it should be able to eliminate this store to an alloca:

  %val1 = alloca [64 x i8], align 4
  call void @llvm.lifetime.start.p0(i64 64, ptr nonnull %val1)
  call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 4 dereferenceable(64) %val1, ptr noundef nonnull align 4 dereferenceable(64) %val, i64 64, i1 false)
  %0 = call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x4.v4f32.p0(ptr noundef nonnull %val1) #4

But for now we can fix it here. The only problem is how to test it. Maybe there is some clever way in the yml format, but the issue is that some vector sizes use ldp while others use ldr, and I don't currently see a way to encode that nicely. I have it on good authority (from an Arm engineer) that there is no reason to prefer ld1 over two ldps.

This was found in fearless_simd: @Shnatsel and I went on a bug hunt and eventually traced their weird codegen to a read that first dereferenced a reference to an array. There is some extra context in linebender/fearless_simd#185 (comment).

cc @adamgemmell @CrooseGit in case I'm missing anything, and in case we can find a proper way to test this (or we just accept testing for ld rather than specifying the exact instruction).

edit: also, I can't find anywhere whether the intrinsic assumes an aligned pointer or not, so maybe we should be using core::ptr::read instead?
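
For reference, here is a minimal sketch of the fix direction (not the exact patch: the function name vld1q_f32_x4_sketch is made up, while float32x4x4_t is the real core::arch::aarch64 type), assuming the intrinsic indeed permits unaligned pointers:

  use core::arch::aarch64::float32x4x4_t;

  // Implement the load as a plain unaligned read instead of calling the
  // llvm.aarch64.neon.ld1x4 builtin, so LLVM sees an ordinary memory access
  // that it already knows how to optimize.
  #[target_feature(enable = "neon")]
  pub unsafe fn vld1q_f32_x4_sketch(ptr: *const f32) -> float32x4x4_t {
      // read_unaligned makes no alignment assumption; if the intrinsic turns
      // out to be specified as requiring an aligned pointer, core::ptr::read
      // would be the stronger choice.
      core::ptr::read_unaligned(ptr as *const float32x4x4_t)
  }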

@rustbot (Collaborator) commented Jan 27, 2026

r? @sayantn

rustbot has assigned @sayantn.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@Shnatsel (Member)

I think it would be nice to add a comment to the .yaml documenting the rationale and linking to this PR

@adamgemmell (Contributor)

I'm happy just checking for LD; that seems enough for the quick optimisation sanity check the instruction assertion tests are there for. We have behavioural tests for these, and which load variant ends up used in the test shims (which just call one intrinsic each, so they're not a realistic example) isn't too important.

As far as I know these intrinsics support unaligned reads, so read_unaligned is correct.

The patch makes sense on its own, regardless of the bug, as part of #1659.

I haven't dug much into the missed optimisation you've shown, but the ref intrinsic examples seem to optimise much better if you avoid dereferencing the reference and just cast it straight to a pointer before passing it to the intrinsic. When implementing these intrinsics in Rust we've basically aimed to generate LLVM IR similar to clang's, to ensure we stay on the beaten path when it comes to optimising these intrinsics, which is why we preferred the LLVM builtins. Clearly read_unaligned optimises well in real code, though!

@folkertdev (Contributor, author)

> the ref intrinsic examples seem to optimise much better if you avoid dereferencing the reference and just cast it straight to a pointer before passing it to the intrinsic.

Sure, that would never create the intermediate alloca. But because of how the fearless_simd signatures fit together, they did dereference the array reference.
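
To make the two call shapes concrete, here is a hedged sketch (the function names are made up, vld1q_f32_x4 and float32x4x4_t are the real core::arch::aarch64 items, and the actual fearless_simd code is more involved):

  use core::arch::aarch64::{float32x4x4_t, vld1q_f32_x4};

  #[target_feature(enable = "neon")]
  unsafe fn load_via_deref(r: &[f32; 16]) -> float32x4x4_t {
      // Dereferencing first materializes a fresh stack copy, which is where
      // the alloca + memcpy in the IR above comes from.
      let copy = *r;
      vld1q_f32_x4(copy.as_ptr())
  }

  #[target_feature(enable = "neon")]
  unsafe fn load_via_cast(r: &[f32; 16]) -> float32x4x4_t {
      // Passing the reference straight through as a pointer avoids the
      // intermediate copy entirely.
      vld1q_f32_x4(r.as_ptr())
  }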

> Clearly read_unaligned optimises well in real code, though!

There would be generic optimizations for memcpy, I imagine. But yeah, we might now be missing some combines that are specific to the arm/aarch64 backend. At least in some earlier cases, the LLVM folks believed the arm backend should desugar the target-specific intrinsic to the generic one, so maybe that is the case here too.

@adamgemmell (Contributor)

Gotcha, yeah, that is weird. Here's a simpler example in C that does the exact same thing with the vector tuple, but not with the single-vector result. To me this matches your suggestion that there's something special about this load intrinsic that the optimiser can't reason about.

@adamgemmell (Contributor)

> We have behavioural tests for these,

Correction: we don't have behavioural tests for many of these; I think the f16 and _xN variants are the ones missing them.
