
adds second kuksa client in zenoh-kuksa-provider #10

Merged

Conversation

eriksven
Contributor

To avoid a deadlock between waiting on the Kuksa subscription and sending messages over the same client, a second client is initialized. In addition, some Clippy warnings are fixed.

@eriksven
Contributor Author

@max-grzanna what do you think?

@eriksven
Contributor Author

I ran into a deadlock where the task running publish_to_zenoh could not acquire the lock for the kuksa_client, because it was held by the other task running handling_zenoh_subscription. There is most likely a better solution, but my quick fix for now is to instantiate a second kuksa_client.

The approach of locking the client only when one actually wants to send something might fall short too, because the kuksa_client is occupied by waiting on the subscription in publish_to_zenoh (currently line 115 in main.rs). We obviously do not want handling_zenoh_subscription to have to wait for a new incoming message before it can acquire the lock and execute set_current_values when required.

@max-grzanna
Contributor

Instantiating a second client works for me. I can't think of a better option either.

@max-grzanna max-grzanna left a comment

LGTM

@lukasmittag lukasmittag left a comment

Minor comments

```diff
@@ -12,6 +12,8 @@
 ########################################################################


+[workspace]
```

not needed

```diff
@@ -123,7 +121,7 @@ async fn publish_to_zenoh(
     let vss_path = &entry.path;

     if let Some(publisher) = publishers.get(vss_path.as_str()) {
-        let buf = match datapoint_to_string(&datapoint) {
+        let buf = match datapoint_to_string(datapoint) {
```

please keep using the reference

@lukasmittag
Contributor

I think dropping the sub_client in the handle_zenoh_subscription should solve the issue already: https://fongyoong.github.io/easy_rust/Chapter_43.html. If that's not the case, please let me know.

@lukasmittag
Contributor

Okay, it might not solve it entirely, since you're blocking with a while loop after subscribing to the databroker. As you described, that's not what you want, so keep using two clients in this case. Nonetheless, some comments about the implementation.

```diff
@@ -182,7 +180,8 @@ async fn main() {

     let uri = kuksa::Uri::try_from(provider_config.kuksa.databroker_url.as_str())
         .expect("Invalid URI for Kuksa Databroker connection.");
-    let client = Arc::new(Mutex::new(kuksa::Client::new(uri)));
+    let client = Arc::new(Mutex::new(kuksa::Client::new(uri.clone())));
+    let actuation_client = kuksa::Client::new(uri);

     fetch_metadata(
```

since you await this function you do not need to have a

    let client = Arc::new(Mutex::new(kuksa::Client::new(uri.clone())));

@@ -182,7 +180,8 @@ async fn main() {

let uri = kuksa::Uri::try_from(provider_config.kuksa.databroker_url.as_str())
.expect("Invalid URI for Kuksa Databroker connection.");
let client = Arc::new(Mutex::new(kuksa::Client::new(uri)));
let client = Arc::new(Mutex::new(kuksa::Client::new(uri.clone())));
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

let client = kuksa::Client::new(uri.clone());

```diff
@@ -182,7 +180,8 @@ async fn main() {

     let uri = kuksa::Uri::try_from(provider_config.kuksa.databroker_url.as_str())
         .expect("Invalid URI for Kuksa Databroker connection.");
-    let client = Arc::new(Mutex::new(kuksa::Client::new(uri)));
+    let client = Arc::new(Mutex::new(kuksa::Client::new(uri.clone())));
+    let actuation_client = kuksa::Client::new(uri);

     fetch_metadata(
         client.clone(),
```

use a reference here instead

```diff
         }
     }
 }

 async fn publish_to_zenoh(
     provider_config: Arc<ProviderConfig>,
     session: Arc<Session>,
-    kuksa_client: Arc<Mutex<kuksa::Client>>,
+    mut kuksa_client: kuksa::Client,
```

use a reference. mut is not needed since you're not editing the client


I think it is necessary to have a mutable client, because the implementation of these functions requires a mutable reference to self:

```rust
pub async fn subscribe_target_values(
    &mut self,
    paths: Vec<&str>,
```

Correct me if I'm wrong.


Will check if mut is necessary there, but yes you're right. Nice catch!


I've checked it is necessary so all good here

```diff
@@ -55,8 +55,6 @@ async fn handling_zenoh_subscribtion(
 ) {
```

use a reference for kuksa_client

@eriksven
Contributor Author

@lukasmittag I pushed a new commit, could you please have another look at some point? Sorry for the delay, I was on a business trip this week.

```diff
         }
     }
 }

 async fn publish_to_zenoh(
     provider_config: Arc<ProviderConfig>,
     session: Arc<Session>,
-    kuksa_client: Arc<Mutex<kuksa::Client>>,
+    mut kuksa_client: kuksa::Client,
```

I've checked, it is necessary, so all good here

```diff
     paths: Vec<&str>,
     metadata_store: &super::metadata_store::MetadataStore,
-) {
-    let mut client = kuksa_client.lock().await;
+) -> Client {
```

you should not return anything here, right?

```diff
@@ -40,6 +39,8 @@ pub async fn fetch_metadata(
             },
         );
     }
+
+    kuksa_client
```

unnecessary to return this

```diff
-    fetch_metadata(
-        client.clone(),
+    client = fetch_metadata(
+        client,
```

should work if you give a reference or clone the client

@lukasmittag lukasmittag left a comment

Good enough to be merged. Note: TODO, check whether mut is really necessary in the Kuksa Rust lib.

@lukasmittag lukasmittag merged commit 1cb785f into eclipse-kuksa:main Nov 4, 2024
4 checks passed

3 participants