
Understanding the Arc optimizations #16

Open
thunderbiscuit opened this issue Sep 13, 2024 · 13 comments

Comments

@thunderbiscuit
Member

We are currently using this optimization when using Arcs (example using TxOut):

impl From<TxOut> for BitcoinTxOut {
    fn from(tx_out: TxOut) -> Self {
        // If ours is the last strong reference, take the inner Amount out of
        // the Arc without cloning; otherwise copy it out (Amount is Copy).
        let value = match Arc::try_unwrap(tx_out.value) {
            Ok(val) => val.0,
            Err(arc) => arc.0,
        };

        // Same idea for the script: move it out if the Arc is unique,
        // clone it otherwise.
        let script_pubkey = match Arc::try_unwrap(tx_out.script_pubkey) {
            Ok(val) => val.0,
            Err(arc) => arc.0.clone(),
        };

        BitcoinTxOut {
            value,
            script_pubkey,
        }
    }
}

I'm trying to wrap my head around how this really works. Here is my rough understanding:

In the case where we have a TxOut type on the Kotlin side and there is only one reference to it, when we feed this type to a method that takes TxOut, it will de-Arc it and consume the inner value of the Amount type and provide it to the method that requires it.

What I'm still unclear on:

  • When the Rust method that consumes the TxOut type returns, isn't the memory de-allocated, therefore the value inside Amount cleared?
  • If that's the case but I still attempt to use the txOut variable created in Kotlin, will we have a problem? Am I miscounting the references here and the Arc will account for this somehow?

Kotlin example to show what I'm wondering about:

val txOut = wallet.giveMeATxOut()                 // some method that returns a TxOut object
importantInformation = wallet.analyzeTxOut(txOut) // the variable goes through the From<TxOut> pipeline and gets "de-Arc'ed"
println(txOut.value)                              // will this be a dangling pointer??
@thunderbiscuit
Member Author

I have attempted to recreate this in a Kotlin test on bdk-ffi. First I added this method on the wallet type:

pub fn use_tx_out(&self, tx_out: TxOut) -> String {
    println!("Reference count before try_unwrap: {}", Arc::strong_count(&tx_out.value));
    let _rust_tx_out: BitcoinTxOut = tx_out.into(); // runs the From<TxOut> conversion above
    "Your TxOut has been consumed".to_string()
}

And then used it in a test:

val output1: LocalOutput = wallet.listOutput().first()
val message = wallet.useTxOut(output1.txout)
println("Message: $message")
println("TxOut is: ${output1.txout.value.toSat()}")

The printed output is:

Reference count before try_unwrap: 2

ArcTest > testArcs() STANDARD_OUT
    Balance: 2024456
    Message: Your TxOut has been consumed
    TxOut is: 4200

So the test succeeds in printing the TxOut after consuming it in a method and we know why in this case: the reference count is 2. My question is now... why is the count at 2 here? ChatGPT seems to think it might be that uniffi holds an extra reference somewhere for you, but then... what's our optimization for? I'd love to clear that up for myself.

@rustaceanrob
Contributor

There may be two references out in this program, one for the collection after calling listOutput and another for calling first. Can we try this with constructing a TxOut directly?

@rustaceanrob
Contributor

Actually I may be putting the cart before the horse here. If you remove the print of the TxOut, can you see if the reference count is one?

@thunderbiscuit
Member Author

thunderbiscuit commented Sep 13, 2024

Hey good ideas.

  1. Creating the txOut directly:
val txOut = TxOut(value = Amount.fromSat(1000uL), scriptPubkey = wallet.revealNextAddress(KeychainKind.EXTERNAL).address.scriptPubkey())
val message = wallet.useTxOut(txOut)
println("Message: $message")
println("TxOut amount: ${txOut.value.toSat()}")

// Reference count before try_unwrap: 2
//     Balance: 2024456
//     Message: Your TxOut has been consumed
//     TxOut amount: 1000
  2. Removing the println:
val txOut = TxOut(value = Amount.fromSat(1000uL), scriptPubkey = wallet.revealNextAddress(KeychainKind.EXTERNAL).address.scriptPubkey())
val message = wallet.useTxOut(txOut)

// Reference count before try_unwrap: 2
//     Balance: 2024456

@rustaceanrob
Contributor

Welp, now I'm lost. @tnull may have a better understanding. Thanks for trying those out

@reez
Collaborator

reez commented Sep 14, 2024

Hey good ideas.

  1. Creating the txOut directly:
val txOut = TxOut(value = Amount.fromSat(1000uL), scriptPubkey = wallet.revealNextAddress(KeychainKind.EXTERNAL).address.scriptPubkey())
val message = wallet.useTxOut(txOut)
println("Message: $message")
println("TxOut amount: ${txOut.value.toSat()}")

// Reference count before try_unwrap: 2
//     Balance: 2024456
//     Message: Your TxOut has been consumed
//     TxOut amount: 1000
  2. Removing the println:
val txOut = TxOut(value = Amount.fromSat(1000uL), scriptPubkey = wallet.revealNextAddress(KeychainKind.EXTERNAL).address.scriptPubkey())
val message = wallet.useTxOut(txOut)

// Reference count before try_unwrap: 2
//     Balance: 2024456

My understanding is when we create a TxOut in Kotlin and pass it to Rust, uniffi manages the object across the FFI boundary by keeping an internal reference:

  • The variable txOut in Kotlin holds one reference to the TxOut.
  • uniffi holds an additional reference to manage the Rust object's lifetime safely.
    ... so even if it appears that there's only one reference, the Arc's count is 2, as the test shows.

@reez
Collaborator

reez commented Sep 14, 2024

but then... what's our optimization for? I'd love to clear that up for myself.

Yeah, if uniffi is consistently holding extra references then we might be cloning anyway which would negate the optimization, if I'm thinking about this correctly.

And if relying on specific reference counts opens up any potential risk, like deallocating something Kotlin still uses, this is where it gets hazier for me: I can't say with certainty how small or large that risk is, or how future changes would affect it. The example I was constructing in my head: if for any reason Arc::try_unwrap() succeeds (say the reference count drops to 1 because uniffi changed how it manages references), we might deallocate data that Kotlin still expects to be valid, which could cause crashes or undefined behavior on the Kotlin side.

@rustaceanrob
Contributor

we might deallocate data that Kotlin still expects to be valid which could cause crashes or undefined behavior on the kotlin side.

My understanding would be the reference count would actually just be 2 in this program. I am certainly not an expert in any of this though. I think we should actually open an issue on UniFFI to clarify this, using this example

@reez
Collaborator

reez commented Sep 14, 2024

we might deallocate data that Kotlin still expects to be valid which could cause crashes or undefined behavior on the kotlin side.

My understanding would be the reference count would actually just be 2 in this program. I am certainly not an expert in any of this though. I think we should actually open an issue on UniFFI to clarify this, using this example

As it stands now, yeah, I agree it would be 2. But what I was thinking through was: if something changed, say uniffi changed how it manages references, then there could be potential for it to be 1, and if we are relying on reference counts we might get into a situation where the Kotlin side expects data to be there but the Rust side has deallocated it. But maybe I'm misunderstanding your follow-up; if so, my bad!

Yeah, opening an issue on UniFFI is a good suggestion as well.

@tnull
Collaborator

tnull commented Sep 16, 2024

I think it's correct that uniffi will create the Arc, wrap it in std::mem::ManuallyDrop, and make sure the reference count is only decreased once the object is dropped by the target language:

This is the "arc to pointer" dance. Note that this has "leaked" the Arc<> reference out of Rust's ownership system and given it to the foreign-language code. The foreign-language code must pass that pointer back into Rust in order to free it, or our instance will leak.

When invoking a method on the instance:

  • The foreign-language code passes the raw pointer back to the Rust code, conceptually passing a "borrow" of the Arc<> to the Rust scaffolding.
  • The Rust side calls Arc::from_raw to convert the pointer into an Arc<>.
  • It wraps the Arc in std::mem::ManuallyDrop<>, which we never actually drop. This is because the Rust side is borrowing the Arc and shouldn't run the destructor and decrement the reference count.
  • The Arc<> is cloned and passed to the Rust code.

Finally, when the foreign-language code frees the instance, it passes the raw pointer to a special destructor function so that the Rust code can drop that initial reference (and if that happens to be the final reference, the Rust object will be dropped). This simply calls Arc::from_raw, then lets the value drop.

(See https://mozilla.github.io/uniffi-rs/0.27/internals/object_references.html)

So it's definitely not unsafe, it would merely avoid additional cloning in edge cases just before we'd drop the last Arc reference. But yeah, likely it wouldn't make a whole lot of difference in the common case.

Given the confusion around it, we should probably drop the optimization as it doesn't gain us much but seems to make reasoning about the code harder.

@thunderbiscuit
Member Author

Good response here: mozilla/uniffi-rs#2239 (comment)

@tnull
Collaborator

tnull commented Sep 20, 2024

Yep, this confirms what we thought. Should we then just drop the optimization and close this issue?

@thunderbiscuit
Member Author

thunderbiscuit commented Sep 23, 2024

Yeah sounds good. If I understand correctly, for all garbage-collected languages you'd never hit the Ok branch, because by definition the runtime still holds a reference to the object when you pass it in, and will only GC it in the future, so you don't have a way to arrive at that condition with 1 reference only.

As for Swift, there I'm not as clear on what she means by her answer, but it'd be fun to try it out with the test!

In any case I think it's safe to remove the optimization for now. 👍
