io_uring support #468
Is it possible for LTFS to support io_uring? LTFS could then use Linux native async I/O and get a performance benefit.
Comments
Sorry, I cannot follow you. Can you describe your idea more specifically? I'm not sure what "async I/O" you are talking about... From your description "Linux native async IO", I guess it is I/O between LTFS and the tape drive. Is that correct? If so, I don't think it is reasonable for a tape device at all. In the tape drive world, the sequence of commands and data is the most important thing, because we cannot specify the address to write in a SCSI WRITE command. If we used async I/O, LTFS might misdetect the block number on tape. If you are talking about data transfer between an application and LTFS, LTFS uses async I/O in the …
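To illustrate the ordering concern: io_uring requests submitted independently may complete in any order, so two tape WRITEs could land out of sequence unless they are explicitly linked. A minimal liburing sketch, assuming liburing is available; the fd and buffers are hypothetical, and whether io_uring even works against a tape character device at all is a separate question:

```c
/* Sketch only: two io_uring write requests may complete in either
 * order, which is fatal for a tape device where block N must hit the
 * medium before block N+1. IOSQE_IO_LINK forces sequential execution,
 * but that also removes most of the concurrency benefit. */
#include <liburing.h>

int write_two_blocks(int fd, const char *blk0, const char *blk1, size_t len)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    int i, ret = io_uring_queue_init(4, &ring, 0);
    if (ret < 0)
        return ret;

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_write(sqe, fd, blk0, len, 0);
    sqe->flags |= IOSQE_IO_LINK;   /* without this, blk1 may land first */

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_write(sqe, fd, blk1, len, 0);

    io_uring_submit(&ring);
    for (i = 0; i < 2; i++) {      /* reap both completions */
        io_uring_wait_cqe(&ring, &cqe);
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
    return 0;
}
```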
OK, another request: we can see that FUSE supports io_uring (https://lwn.net/Articles/932079/). Is it possible for LTFS to support it? It could avoid the memory copy between user space and kernel space.
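For reference, this is roughly what "Linux native async I/O" via io_uring looks like from an application, as a minimal liburing sketch (the file name and buffer size are arbitrary). The shared submission/completion queues are where the syscall savings come from, and the FUSE work linked above aims to extend this model to the kernel/userspace FUSE transport:

```c
#include <fcntl.h>
#include <stdio.h>
#include <liburing.h>

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
        return 1;

    sqe = io_uring_get_sqe(&ring);           /* queue one async read */
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);                  /* one syscall submits it */

    io_uring_wait_cqe(&ring, &cqe);          /* wait for completion */
    printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return 0;
}
```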
I'm not sure I understand the description correctly, but it looks like the story in the description is completed entirely within FUSE, and no change is needed in LTFS. I think the only thing we need to do is enable this feature in FUSE (by an option? I don't know how to do it). But I don't know which part is improved by this feature. Do you have any idea?
I need more information, and I will look into the FUSE project in more detail about this; maybe we need to change the interfaces called from libfuse.
Oh, that reminds me: I don't know whether these calls can actually be used in LTFS. First of all, technically, it might be usable on the write side. But as I mentioned before, we need to construct a 512KB block for a tape write, so a zero-copy path from the application to the sg device is impossible. The only thing we can do is remove one copy within FUSE. In other words, the call tree is …
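For context, a hypothetical sketch of the write-side coalescing described above (the struct and function names are illustrative, not actual LTFS internals). Incoming write payloads of arbitrary size must be copied into a fixed 512 KiB block before a single SCSI WRITE is issued, so at least one user-space copy is unavoidable regardless of io_uring:

```c
#include <stddef.h>
#include <string.h>

#define TAPE_BLOCKSIZE (512 * 1024)

struct block_builder {
    unsigned char buf[TAPE_BLOCKSIZE];
    size_t used;
};

/* Returns bytes consumed; the caller issues one SCSI WRITE and resets
 * `used` when the block is full. */
static size_t block_append(struct block_builder *b,
                           const void *data, size_t len)
{
    size_t room = TAPE_BLOCKSIZE - b->used;
    size_t n = len < room ? len : room;
    memcpy(b->buf + b->used, data, n);  /* the unavoidable copy */
    b->used += n;
    return n;
}
```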
For the read side, it is really difficult to use because we have a one-block (512KB) cache. At this time, all the examples are connected to a file on an underlying host filesystem; I don't know whether we can hand over an internal memory block without care (I don't believe we can...). We need to study what happens in the FUSE layer on the read side.

Finally, I don't understand the benefit of these calls. At this time, we can already drive the tape drive at max speed, and multiple-file access (concurrent file access) in LTFS always degrades performance. I think the only benefit is reduced bus bandwidth usage and a little less temporary memory. On the other hand, these calls create a chance of data corruption. So I think the balance between effort and reward does not match. Of course, I can review any code if someone provides it with careful testing, but the priority is not high at this time in my mind.
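Again for context, a hypothetical sketch of the single-block read cache mentioned above (illustrative names only; `tape_read_block` is an assumed backend call, not a real LTFS function). A read of any byte range first faults the whole 512 KiB block into the cache and then copies the range out of it; handing FUSE or io_uring a pointer directly into this shared cache without lifetime control is exactly the risk described:

```c
#include <stddef.h>
#include <string.h>

#define TAPE_BLOCKSIZE (512 * 1024)

struct block_cache {
    unsigned char buf[TAPE_BLOCKSIZE];
    long block_no;          /* -1 when empty */
    size_t valid;           /* bytes valid in buf */
};

/* Hypothetical backend call that reads one whole block from the drive. */
extern int tape_read_block(long block_no, void *buf, size_t *valid);

static int read_cached(struct block_cache *c, long block_no,
                       size_t off, void *dst, size_t len)
{
    if (c->block_no != block_no) {          /* cache miss: refill */
        int rc = tape_read_block(block_no, c->buf, &c->valid);
        if (rc)
            return rc;
        c->block_no = block_no;
    }
    if (off >= c->valid)
        return 0;
    if (len > c->valid - off)
        len = c->valid - off;
    memcpy(dst, c->buf + off, len);         /* copy out of the shared cache */
    return (int)len;
}
```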
OK, I agree with you.