vm/mman: handle properly munmap of chunks larger than SIZE_PAGE #578
Conversation
Unit Test Results
7 723 tests +23, 7 008 ✅ +23, 37m 41s ⏱️ +24s
Results for commit 7fab743. ± Comparison against base commit 1f9ea40.
This pull request removes 1 and adds 24 tests. Note that renamed tests count towards both.
♻️ This comment has been updated with latest results.
force-pushed from 33e69a9 to 78b613a
force-pushed from 78b613a to 90e2626
force-pushed from 90e2626 to f9c1ba8
force-pushed from f9c1ba8 to 5a92597
force-pushed from 836e921 to 5d5ec9e
force-pushed from 3729873 to 57a576f
Perhaps we could use vm_mapEntrySplit in the case of the "in the middle" munmap.
force-pushed from 57a576f to 21ca374
The only thing that comes to my mind is changing this part to:

    s = map_alloc();
    /* This case is only possible if an unmapped region is in the middle of a single entry,
     * so there is no possibility of partially unmapping. */
    if (s == NULL) {
        return -ENOMEM;
    }
    vm_mapEntrySplit(proc, map, e, s, overlapStart);
    continue; /* Process in next iteration. */

What do you think about it?
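For context, here is a minimal standalone sketch of this split-then-retry idea. The list-based map, `entrySplit()`, and `unmap()` below are simplified stand-ins for the kernel's map-entry tree and `vm_mapEntrySplit()`, not the actual kernel API:

```c
/* Standalone sketch of the split-then-retry idea: when the unmapped range
 * lies strictly inside one entry, split the entry and let the next loop
 * iteration trim the resulting tail. Simplified stand-in, not kernel code. */
#include <stdio.h>
#include <stdlib.h>

typedef struct entry {
	size_t start;
	size_t size;
	struct entry *next;
} entry_t;

/* Split e at `at`: e keeps [start, at), s takes [at, end). */
static void entrySplit(entry_t *e, entry_t *s, size_t at)
{
	s->start = at;
	s->size = e->start + e->size - at;
	s->next = e->next;
	e->size = at - e->start;
	e->next = s;
}

/* Unmap [vaddr, vaddr + size) from a sorted list of entries. */
static int unmap(entry_t **list, size_t vaddr, size_t size)
{
	entry_t **pe = list, *e, *s;
	size_t end = vaddr + size;

	while ((e = *pe) != NULL) {
		size_t eend = e->start + e->size;

		if ((eend <= vaddr) || (e->start >= end)) {
			pe = &e->next; /* No overlap. */
		}
		else if ((vaddr <= e->start) && (end >= eend)) {
			*pe = e->next; /* Entry fully covered: remove it. */
			free(e);
		}
		else if ((vaddr > e->start) && (end < eend)) {
			/* Unmapped region is strictly inside a single entry:
			 * split and process the new tail in the next iteration. */
			if ((s = malloc(sizeof(*s))) == NULL) {
				return -1; /* -ENOMEM in the kernel. */
			}
			entrySplit(e, s, vaddr);
			pe = &e->next;
		}
		else if (vaddr <= e->start) {
			e->size = eend - end; /* Trim the head of the entry. */
			e->start = end;
			pe = &e->next;
		}
		else {
			e->size = vaddr - e->start; /* Trim the tail of the entry. */
			pe = &e->next;
		}
	}
	return 0;
}

int main(void)
{
	entry_t *head = malloc(sizeof(*head));

	if (head == NULL) {
		return 1;
	}
	head->start = 0x1000;
	head->size = 0x4000; /* Single entry covering [0x1000, 0x5000). */
	head->next = NULL;

	unmap(&head, 0x2000, 0x1000); /* Punch a hole in the middle. */

	for (entry_t *e = head; e != NULL; e = e->next) {
		printf("[%#zx, %#zx)\n", e->start, e->start + e->size);
	}
	/* Expected output: [0x1000, 0x2000) and [0x3000, 0x5000). */
	return 0;
}
```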
force-pushed from 21ca374 to 6bfdeca
Looks nice!
force-pushed from 6bfdeca to 38a0b69
force-pushed from 38a0b69 to 7fab743
LGTM
@agkaminski @damianloew https://github.com/phoenix-rtos/phoenix-rtos-kernel/actions/runs/11050286361/job/30697841405?pr=578
I guess it did pass on the rerun. @damianloew perhaps this timeout is somewhat on the edge?
Yeah, we had a similar problem some time ago and changed the value from 310 to 350. We will consider increasing the value further, probably to 400. However, I am not sure whether the higher values are caused by our system or by the GitHub machine on which the tests are executed.
JIRA: RTOS-893
Description
munmap mishandled segments larger than SIZE_PAGE; after such a call, the red-black tree of map entries was left in a corrupted state.
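For illustration, a minimal user-space reproduction sketch of the failure mode being fixed. This assumes a POSIX-style mmap/munmap interface; the sizes, flags, and observable symptoms are assumptions for illustration, not the project's actual test case:

```c
/* Hypothetical reproduction sketch: map a region larger than one page,
 * then unmap it in full. Before this fix, an munmap of such a multi-page
 * chunk could leave the kernel's map-entry tree corrupted, so subsequent
 * mmap/munmap calls would misbehave. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 4 * page; /* Larger than a single page (SIZE_PAGE). */

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Unmapping the whole multi-page chunk exercised the buggy path. */
	if (munmap(p, len) != 0) {
		perror("munmap");
		return 1;
	}

	puts("munmap of multi-page region succeeded");
	return 0;
}
```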
Motivation and Context
Types of changes
How Has This Been Tested?
Checklist:
Special treatment