
the Atomics page left me confused about "relaxed". What is it for? #309

makoConstruct opened this issue Sep 9, 2021 · 7 comments


Atomics

The example given is a counter: the counter would keep an accurate count, but no guarantee is given that another thread reading the counter will get the final, up-to-date count?

If there's no guarantee that the count being read is correct, from the perspective of a reader, why ever read from it? Why bother with it at all?
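A minimal sketch of how such a counter is typically used in practice (not taken from the page itself; the thread counts and totals here are made up): reads taken while the workers are still running may lag behind, but they are never torn, and once the workers have been joined the reader is guaranteed to see the exact total, because join() itself establishes the needed happens-before edge.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let workers: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..1_000 {
                    // Relaxed is enough here: the increment itself can never be lost.
                    COUNTER.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    // A concurrent Relaxed read may lag behind, but it is never torn or garbage.
    println!("progress so far: {}", COUNTER.load(Ordering::Relaxed));

    for w in workers {
        // join() establishes a happens-before edge with everything the worker
        // did, so the final read below is guaranteed to be exact.
        w.join().unwrap();
    }
    assert_eq!(COUNTER.load(Ordering::Relaxed), 4_000);
}
```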

@lambda-fairy

I'm no atomics expert. But I think the idea is that it will be correct, eventually. You might have to wait some time for it to settle first, but it'll get there.


makoConstruct commented Sep 15, 2021

I'd assume that, but I can't see any way of using it without some guarantee about when it will have settled. Even if it were just "guaranteed to become readable after 2 milliseconds", that would be really good to know; I could probably think of some uses for that. But with so few guarantees, I'm confused about what atomicity even means at that point.

I think the source of the confusion might be "They can be freely re-ordered and provide no happens-before relationship". It looks like they do provide a happens-before relationship, as long as you use Acquire?


atsuzaki commented Sep 26, 2021

@makoConstruct

The linked cppreference page has a good use-case example for it:

Typical use for relaxed memory ordering is incrementing counters, such as the reference counters of std::shared_ptr, since this only requires atomicity, but not ordering or synchronization (note that decrementing the shared_ptr counters requires acquire-release synchronization with the destructor)

Incrementing counters would still require atomicity, i.e. if two threads try to increment, you want to make sure that one thread increments only after the other is done, so that neither increment is accidentally swallowed in a data race. That would be disastrous in the case of the reference counters of shared_ptr (equivalent to Arc in Rust), as it may cause a use-after-free. But it does not require ordering, i.e. it doesn't matter which thread gets to increment first.

(I highly recommend that cppreference page for trying to understand the different memory ordering schemes, by the way)
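A rough sketch of the pattern that quote describes (hypothetical names, not the real shared_ptr or Arc code): the increment only needs atomicity, so Relaxed suffices, while the decrement that may lead to destruction uses Release, and the thread that drops the last reference issues an Acquire fence before freeing, so every other thread's use of the data happens-before the destruction.

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

struct RefCount {
    count: AtomicUsize,
}

impl RefCount {
    fn increment(&self) {
        // No ordering needed: the caller already holds a reference, so the
        // data can't be freed out from under us, and an increment can't be
        // lost because fetch_add is a single atomic read-modify-write.
        self.count.fetch_add(1, Ordering::Relaxed);
    }

    /// Returns true if the caller dropped the last reference and must free the data.
    fn decrement(&self) -> bool {
        if self.count.fetch_sub(1, Ordering::Release) != 1 {
            return false;
        }
        // Last reference: the Acquire fence makes every other thread's
        // accesses to the data visible before we destroy it.
        fence(Ordering::Acquire);
        true
    }
}
```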

@makoConstruct

Reading that, I still don't see why it's safe to use Relaxed there, since Arc::drop needs to be sure it has an up-to-date value; but apparently Relaxed is used in Rust's Arc: https://doc.rust-lang.org/src/alloc/sync.rs.html#1298. I should look at the code more tomorrow, I guess.

I notice that the cppreference page mentions that relaxed writes will be synchronized if the other thread does a release on a different variable and you then do an acquire on that same variable. So that's the first guarantee I've heard of. I don't think you could use that to make an Arc? But I trust it would have a use.
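That guarantee is release/acquire publication. A small sketch, with made-up names and values: the Relaxed write to DATA is performed before the Release store to READY, and any thread whose Acquire load observes that store is then guaranteed to see the write.

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);    // plain relaxed write
        READY.store(true, Ordering::Release); // "publish" it
    });

    let consumer = thread::spawn(|| {
        // Spin until the flag is observed.
        while !READY.load(Ordering::Acquire) {}
        // The Acquire load that saw `true` synchronizes-with the Release
        // store, so this read is guaranteed to see 42.
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}
```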


yvt commented Oct 6, 2021

There are no memory accesses that need to be synchronized by Arc::clone, hence the use of Relaxed ordering. (Note that Arc::clone alone doesn't grant access to the ref-counted object to any additional threads.)

The fetch_add in Arc::clone happens-before the fetch_sub operations in Arc::drop of both the original Arc and the new Arc. This happens-before relationship is implied from sequenced-before if the Arc is not moved between threads and synchronizes-with if it's sent between threads (supposing you didn't use a buggy sending mechanism). So Arc::drop won't miscount.

Also, although atomic orderings don't dictate when a side effect will be visible to another thread, there's a single modification order on each atomic object, and the N-th modification, if it happens, will reveal the side effect of the (N - 1)-th modification, i.e., the accumulation of all (N - 1) previous modifications.
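The modification-order point can be made concrete with a sketch (the thread and iteration counts are made up): because every fetch_add is one step in the single per-object modification order, each one observes the accumulated effect of all previous ones, so no two threads ever read back the same previous value and no increment is lost, even with Relaxed everywhere.

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let handles: Vec<_> = (0..8)
        .map(|_| {
            thread::spawn(|| {
                // Each fetch_add returns the value it replaced.
                (0..100)
                    .map(|_| COUNTER.fetch_add(1, Ordering::Relaxed))
                    .collect::<Vec<_>>()
            })
        })
        .collect();

    let mut seen = HashSet::new();
    for h in handles {
        for prev in h.join().unwrap() {
            // Every observed "previous value" is unique: 0..800, once each.
            assert!(seen.insert(prev));
        }
    }
    assert_eq!(COUNTER.load(Ordering::Relaxed), 800);
}
```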


ireina7 commented Mar 31, 2024

I'm also confused by this "relaxed" fetch_add. I don't think I can rely on the "relaxed" ordering of atomic operations for visibility among different threads, so apparently I'm missing something. Can anyone explain this better? How does the "relaxed" ordering guarantee that the counter is always correct?


ireina7 commented Mar 31, 2024

This happens-before relationship is implied from sequenced-before if the Arc is not moved between threads and synchronizes-with if it's sent between threads

Could you explain this sentence further? What does "sequenced-before" mean?
