Support for VHOST_USER_GPU_SET_SOCKET, and the protocol on this socket #236

Open
mtjhrc opened this issue Mar 12, 2024 · 2 comments

@mtjhrc (Contributor) commented Mar 12, 2024

Hi,
@dorindabassey and I are working on a vhost-user-gpu implementation; see rust-vmm/vhost-device#598.

We aim to be compatible with QEMU's displays, which means supporting the VHOST_USER_GPU_SET_SOCKET message and the protocol spoken on that socket.
For the specification, see: https://www.qemu.org/docs/master/interop/vhost-user-gpu.html.

The VHOST_USER_GPU_SET_SOCKET message is currently defined in the FrontendReq enum:

    GPU_SET_SOCKET = 33,

but is not being handled here:

/// Main entry point for servicing requests received over the backend communication channel.
///
/// Receive and handle one incoming request message from the frontend. The caller needs to:
/// - serialize calls to this function
/// - decide what to do when an error happens
/// - optionally recover from failure
pub fn handle_request(&mut self) -> Result<()> {

The protocol on this socket is nearly the same as the normal vhost-user protocol, and the header structure is exactly the same as the existing VhostUserMsgHeader. The problem is the flags: only the REPLY flag is specified, and not even a VERSION flag is used.
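
For illustration, here is a rough sketch of that header as it could be declared in Rust. The layout (request, flags, size) and the single reply flag come from the spec linked above; the struct and constant names are hypothetical:

    /// Hypothetical header for messages on the GPU socket; the layout matches
    /// the existing VhostUserMsgHeader (request, flags, size, then payload).
    #[repr(C, packed)]
    struct GpuMsgHeader {
        request: u32, // message type
        flags: u32,   // only the REPLY bit is defined; no VERSION bits at all
        size: u32,    // size of the payload that follows the header
    }

    /// The only flag the vhost-user-gpu spec defines: the reply bit (bit 2),
    /// matching the reply bit of the main vhost-user protocol.
    const GPU_MSG_FLAG_REPLY: u32 = 0x4;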

I would like to implement this protocol, but my main problem is that I don't know whether adding new device-specific protocol code to this project is OK. I see that there are FS-specific things, but they might be removed (see #213).

I can think of 3 solutions with different levels of genericness:

  • creating a GpuBackend struct and passing it to a set_gpu_socket callback when VHOST_USER_GPU_SET_SOCKET is received, in a similar fashion to the existing Backend and set_backend_req_fd callback (a sketch of this option follows the list):

    Ok(FrontendReq::SET_BACKEND_REQ_FD) => {
        self.check_proto_feature(VhostUserProtocolFeatures::BACKEND_REQ)?;
        self.check_request_size(&hdr, size, hdr.get_size() as usize)?;
        let res = self.set_backend_req_fd(files);
        self.send_ack_message(&hdr, res)?;
    }

    fn set_backend_req_fd(&mut self, files: Option<Vec<File>>) -> Result<()> {
        let file = take_single_file(files).ok_or(Error::InvalidMessage)?;
        // SAFETY: Safe because we have ownership of the files that were
        // checked when received. We have to trust that they are Unix sockets
        // since we have no way to check this. If not, it will fail later.
        let sock = unsafe { UnixStream::from_raw_fd(file.into_raw_fd()) };
        let backend = Backend::from_stream(sock);
        self.backend.set_backend_req_fd(backend);
        Ok(())
    }

  • having a set_gpu_socket callback but passing it a UnixStream, letting the user parse the messages

    • here it would be nice to make some utilities from vhost-user public, such as Endpoint, so there is no code duplication
  • not handling VHOST_USER_GPU_SET_SOCKET in this project at all, but instead providing a fully generic way to handle custom messages from the frontend, i.e. messages that are not handled by the handle_request referenced above.
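
To make the first option concrete, here is a minimal sketch of how GPU_SET_SOCKET could be wired up, mirroring the SET_BACKEND_REQ_FD path quoted above. GpuBackend, set_gpu_socket, and the corresponding callback on the backend are all hypothetical names, not existing API:

    // Hypothetical match arm in handle_request():
    Ok(FrontendReq::GPU_SET_SOCKET) => {
        self.check_request_size(&hdr, size, hdr.get_size() as usize)?;
        let res = self.set_gpu_socket(files);
        self.send_ack_message(&hdr, res)?;
    }

    // Hypothetical helper, mirroring set_backend_req_fd():
    fn set_gpu_socket(&mut self, files: Option<Vec<File>>) -> Result<()> {
        let file = take_single_file(files).ok_or(Error::InvalidMessage)?;
        // SAFETY: as above, we own the fd and have to trust that it is a
        // Unix socket; if it is not, operations on it will fail later.
        let sock = unsafe { UnixStream::from_raw_fd(file.into_raw_fd()) };
        self.backend.set_gpu_socket(GpuBackend::from_stream(sock));
        Ok(())
    }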

I am not sure what the stance is on device-specific and/or non-standard features in this project; any suggestions?

@Ablu (Collaborator) commented Mar 13, 2024

To me it looks like providing the infrastructure to receive and send these messages could be useful to have here (or potentially in a new subcrate). Devices could then implement against that interface. So I think the first option sounds best.

@elmarco commented Mar 15, 2024

Just a quick note: having VHOST_USER_GPU_SET_SOCKET support is nice, but even better would be to have org.qemu.Display1 D-Bus support, to avoid a QEMU roundtrip when displaying with libmks, for example. The two are fairly similar, so it shouldn't be hard to have both anyway.
