feat: Support memory freeing #95
Conversation
Do we want to include this change in the upcoming 0.6.0 release?
No, ideally this would follow in a minor release, so that we have had time to test thoroughly before releasing it.
@ielashi I saw that we have a new release. Can you please approve it now?
Hey @dragoljub-duric, I'd like to do some more testing given that we found a number of bugs previously in the PR, and given how important this code is, it's better to err on the side of caution.
Getting back to this PR now :) I left some early comments, still taking a closer look.
```rust
fn load_layout_v2(memory: M, header: Header) -> Self {
```
I've been finding this function generally difficult to read. I'll add some pointers below on how to make it more readable.
Done. PTAL.
@ielashi Can you please take another look?
@dragoljub-duric Thanks! From a code standpoint this looks good. The remaining concern I have is related to performance. Prior to this PR, memory operations were O(1), but with this change they would be O(# buckets used), correct? I don't yet know exactly how big of a deal this is, as we don't have benchmarks for that. Let's discuss more in our 1:1 about next steps.
At a high level, I think we should work on:
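Since the comment above notes the lack of benchmarks, here is a minimal sketch of how one might measure that cost with the criterion crate. It uses the crate's public `MemoryManager`/`Memory` API; the 128-pages-per-bucket figure reflects the memory manager's documented default, while the chosen bucket counts and the benchmark setup itself are assumptions, not part of this PR.

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager};
use ic_stable_structures::{DefaultMemoryImpl, Memory};

/// Measure reads near the end of a memory as the number of allocated
/// buckets grows, to probe the O(# buckets used) concern above.
fn read_with_many_buckets(c: &mut Criterion) {
    // A bucket is 128 Wasm pages (64 KiB each) by default.
    const PAGES_PER_BUCKET: u64 = 128;
    const PAGE_SIZE: u64 = 65536;
    for buckets in [1u64, 8, 32] {
        let mem_mgr = MemoryManager::init(DefaultMemoryImpl::default());
        let memory = mem_mgr.get(MemoryId::new(0));
        memory.grow(buckets * PAGES_PER_BUCKET);
        // Read the last 8 bytes, which live in the last bucket.
        let offset = buckets * PAGES_PER_BUCKET * PAGE_SIZE - 8;
        c.bench_function(&format!("read, {buckets} buckets used"), |b| {
            let mut buf = [0u8; 8];
            b.iter(|| memory.read(offset, &mut buf));
        });
    }
}

criterion_group!(benches, read_with_many_buckets);
criterion_main!(benches);
```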
We had a discussion offline and agreed that this approach may not be the best one. The design was intended to keep the memory manager header bounded to 64 KiB to make migration from V1 simple, but that came with the additional complexity of using 15 bits for bucket indexes, and also a performance degradation: growing and freeing become O(max number of buckets).

In the future, we'll likely need to expand the memory manager header beyond 64 KiB in order to, for example, support a larger number of buckets. If that's going to happen eventually, then there are simpler and more efficient ways to implement memory freeing. For example, we won't need to store the bucket IDs as 15 bits, and we can use linked lists to make growing and freeing O(# buckets), as opposed to O(max number of buckets) with the current approach.

There hasn't been a lot of pressure on us to deliver this feature, so we can park this for now, close the ticket, and revisit with a more comprehensive design in the future.
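For concreteness, here is a minimal sketch of the linked-list idea mentioned above, assuming an in-memory model where each bucket id can be chained to the next one. The `BucketAllocator` type, its fields, and the `u16` bucket ids are illustrative assumptions, not the crate's actual layout: allocation pops the free list in O(1), and freeing a memory is O(# buckets of that memory) rather than O(max number of buckets).

```rust
/// Hypothetical free-list sketch (not this PR's implementation).
struct BucketAllocator {
    /// next[i] is the bucket after bucket i on the free list, if any.
    next: Vec<Option<u16>>,
    /// Head of the free list of previously released buckets.
    free_head: Option<u16>,
    /// First bucket id that has never been handed out.
    unallocated: u16,
}

impl BucketAllocator {
    fn new(max_buckets: u16) -> Self {
        Self {
            next: vec![None; max_buckets as usize],
            free_head: None,
            unallocated: 0,
        }
    }

    /// O(1): reuse a freed bucket if possible, otherwise a fresh one.
    fn allocate(&mut self) -> Option<u16> {
        if let Some(b) = self.free_head {
            self.free_head = self.next[b as usize].take();
            Some(b)
        } else if (self.unallocated as usize) < self.next.len() {
            self.unallocated += 1;
            Some(self.unallocated - 1)
        } else {
            None // out of buckets
        }
    }

    /// O(k) for k freed buckets: push each one onto the free list.
    fn free(&mut self, buckets: impl IntoIterator<Item = u16>) {
        for b in buckets {
            self.next[b as usize] = self.free_head;
            self.free_head = Some(b);
        }
    }
}

fn main() {
    let mut alloc = BucketAllocator::new(4);
    let a = alloc.allocate().unwrap(); // bucket 0
    let b = alloc.allocate().unwrap(); // bucket 1
    alloc.free([a, b]);
    assert_eq!(alloc.allocate(), Some(b)); // freed buckets are reused
}
```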
Problem:
Stable structures do not support memory freeing.
Solution:
To free memory, we'd need to add the following API to the memory manager:
```rust
fn free(memory: MemoryId);
```
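For illustration only, a caller's view of the proposed API might look like the sketch below. Everything except `free` is the existing `MemoryManager` interface; `free` itself is the proposal above and does not exist in the released crate, so this snippet would not compile today.

```rust
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager};
use ic_stable_structures::{DefaultMemoryImpl, Memory};

fn main() {
    let mem_mgr = MemoryManager::init(DefaultMemoryImpl::default());

    // Grow a scratch memory; the manager allocates buckets behind it.
    let scratch = mem_mgr.get(MemoryId::new(5));
    scratch.grow(300);

    // Proposed call (hypothetical): hand the buckets backing
    // memory 5 back to the pool so other memories can reuse them.
    mem_mgr.free(MemoryId::new(5));
}
```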
Post Conditions:
To do:
Maybe, in the future, remove the external `BitVec` dependency and use bit operations instead.
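As a rough sketch of the "bit operations instead of `BitVec`" idea, a word-based bitset over `u64`s could track bucket occupancy as shown below. The type name and methods are hypothetical, not the crate's actual code.

```rust
/// Minimal bitset sketch using plain u64 words and bit operations.
struct BucketBitSet {
    words: Vec<u64>,
}

impl BucketBitSet {
    fn new(num_buckets: usize) -> Self {
        // Round up to whole 64-bit words.
        Self { words: vec![0; (num_buckets + 63) / 64] }
    }

    fn set(&mut self, i: usize) {
        self.words[i / 64] |= 1u64 << (i % 64);
    }

    fn clear(&mut self, i: usize) {
        self.words[i / 64] &= !(1u64 << (i % 64));
    }

    fn get(&self, i: usize) -> bool {
        self.words[i / 64] & (1u64 << (i % 64)) != 0
    }

    /// First unused bucket, if any, found word by word via trailing_ones.
    /// Note: padding bits in the last word count as "unused" here; a real
    /// implementation would cap the result at num_buckets.
    fn first_clear(&self) -> Option<usize> {
        for (w, word) in self.words.iter().enumerate() {
            if *word != u64::MAX {
                return Some(w * 64 + word.trailing_ones() as usize);
            }
        }
        None
    }
}

fn main() {
    let mut bits = BucketBitSet::new(128);
    bits.set(0);
    bits.set(1);
    assert_eq!(bits.first_clear(), Some(2));
    bits.clear(0);
    assert!(!bits.get(0));
}
```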