
Conversation


bkirwi (Contributor) commented Jan 30, 2026

Originally, Persist clients shared nothing aside from the underlying connection pool... if I had 20 clients for a particular shard, I'd have 20 different copies of state in memory being kept in sync. This was great for isolation, but wasteful of memory. Nowadays we keep just one copy in memory, but all 20 clients may still race to compare-and-set (CaS) it at once. This wastes requests against the backing store, since only one of those 20 CaS operations can succeed and the other 19 will fail. In particularly gnarly cases, this can get us into a durably bad state: almost all CaSs are failing and getting retried, which pushes the overall CaS rate up, which makes new CaSs even more likely to time out...

An obvious workaround is to limit each process to one outstanding CaS at a time. This is a little risky, though -- if we have a semaphore with a limit of 1, even one hung connection can cause all other clients to hang. If semaphore permits could time out, that would be ideal... but that's not how Tokio semaphores work. This PR implements its own small timeout-aware permit mechanism to solve that, and puts a flag around it so it can be tuned or disabled.
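For illustration, here's a minimal sketch of the shape of that mechanism, assuming a Tokio runtime: wrap acquisition of a single-permit semaphore in a timeout, and fall back to issuing the CaS without a permit if the wait takes too long. The `CasGate` type, field names, and timeout below are hypothetical, not the names used in this PR.

```rust
use std::sync::Arc;
use std::time::Duration;

use tokio::sync::Semaphore;
use tokio::time::timeout;

/// Hypothetical gate: at most one in-flight CaS per process, but a caller
/// never waits longer than `max_wait` for the permit, so a single hung
/// request can't wedge every other client in the process.
struct CasGate {
    semaphore: Arc<Semaphore>,
    max_wait: Duration,
    enabled: bool, // flag so the limiting can be tuned or disabled
}

impl CasGate {
    fn new(max_wait: Duration, enabled: bool) -> Self {
        CasGate {
            semaphore: Arc::new(Semaphore::new(1)),
            max_wait,
            enabled,
        }
    }

    async fn run<T, F, Fut>(&self, cas: F) -> T
    where
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = T>,
    {
        if !self.enabled {
            return cas().await;
        }
        // Try to grab the single permit, but give up after `max_wait`:
        // on timeout (or if the semaphore is closed) we proceed without a
        // permit rather than risk hanging behind a stuck request.
        let _permit = timeout(self.max_wait, self.semaphore.clone().acquire_owned())
            .await
            .ok()
            .and_then(|acquired| acquired.ok());
        cas().await
        // `_permit` (if any) is released here when it drops.
    }
}
```

The important property is that a timed-out waiter proceeds without a permit instead of erroring out, so under pathological conditions the limiter degrades back to today's behavior (concurrent CaSs) rather than introducing a new way to deadlock.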

Motivation

https://github.com/MaterializeInc/incidents-and-escalations/issues/324 most recently, but I'm certain it's come up before.

This is a little too strong, I think, but we'll see if it causes problems in testing.

A risk of a semaphore is that a single hung request holding a token ruins things for everyone. In general we'd much rather send concurrent requests -- which are possible anyway in a multi-process system -- than risk deadlock.
