Make literals chunk uncompressed on Erlang/OTP 28 #8967
Thanks for the suggestion! I've asked the OTP team for feedback on this idea.
Here is a WIP branch that no longer compresses the literal chunk: https://github.com/bjorng/otp/tree/bjorn/uncompressed-literals/GH-8967/rfc

I have calculated the total size of all stripped BEAM files in OTP with and without this branch. Note that an undocumented feature of the
@josevalim Do you have any benchmark to test whether this change would make any difference in load times? @pguyot You will still have to convert all literals from the external term format to the internal format. Do you think that not having to uncompress the literal chunk will noticeably decrease load times?
Here is something I used the last time around:

```elixir
defmodule Run do
  def run([path]) do
    entries =
      for beam <- Path.wildcard(Path.join(path, "**/*.beam")) do
        module = beam |> Path.basename() |> Path.rootname() |> String.to_atom()
        {module, File.read!(beam)}
      end

    IO.puts("Loading #{length(entries)}")

    # Note: :lists.foreach/2 takes the fun first, then the list
    :timer.tc(:lists, :foreach, [
      fn {module, binary} -> :erlang.prepare_loading(module, binary) end,
      entries
    ])
    |> elem(0)
    |> IO.inspect()

    # Run a second time to measure a warm pass
    :timer.tc(:lists, :foreach, [
      fn {module, binary} -> :erlang.prepare_loading(module, binary) end,
      entries
    ])
    |> elem(0)
    |> IO.inspect()
  end
end

Run.run(System.argv())
```

Which you can call as
True. If the literals are compressed (LitT), AtomVM uncompresses them when the module is loaded, allocating a copy in RAM. If they are not compressed (the LitU chunk that the PackBeam tool creates on the desktop), AtomVM just maps them, allocating no additional RAM at all. Then, when a literal is referenced, it is copied onto the heap, except for binaries (which we can do as long as modules are not unloaded in AtomVM).

Regarding the load times, I guess this question was not for AtomVM. Not having to uncompress the literal chunks reduces the load time, as we just mmap them instead of allocating and inflating them, but we're not measuring this on microcontrollers.
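To make the LitT/LitU distinction above concrete, here is a hedged Python sketch (not AtomVM or OTP source, just an illustration) that walks the IFF container of a .beam file and inflates a compressed LitT payload. It assumes the documented layout: a "FOR1" header followed by "BEAM", then 4-byte chunk names each followed by a 32-bit big-endian payload size with padding to a 4-byte boundary, and a LitT payload whose first 32-bit word is the uncompressed size followed by zlib data.

```python
import struct
import zlib


def read_chunks(beam: bytes) -> dict:
    """Return {chunk_name: payload} for every chunk in a BEAM container."""
    assert beam[0:4] == b"FOR1" and beam[8:12] == b"BEAM"
    chunks, pos = {}, 12
    while pos + 8 <= len(beam):
        name = beam[pos:pos + 4].decode("latin-1")
        (size,) = struct.unpack(">I", beam[pos + 4:pos + 8])
        chunks[name] = beam[pos + 8:pos + 8 + size]
        pos += 8 + size + (-size % 4)  # payloads are padded to 4 bytes
    return chunks


def inflate_litt(payload: bytes) -> bytes:
    """Decompress a LitT payload; its first word is the inflated size."""
    (uncompressed_size,) = struct.unpack(">I", payload[:4])
    data = zlib.decompress(payload[4:])
    assert len(data) == uncompressed_size
    return data
```

An uncompressed LitU payload would need no `inflate_litt` step at all, which is why AtomVM can map it directly from flash.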
Thanks! I couldn't find any difference in load times. I also calculated the size of the untouched BEAM files (that is, with all chunks present).
That slight size increase seems acceptable to me. Therefore, the positive aspects of this change slightly outweigh the negative ones.
See erlang#8967 and erlang#8940 for a discussion about why this is useful. This commit also makes it possible to read out the literal chunk decoded like so: `beam_lib:chunks(Beam, [literals])`. Closes erlang#8967
I've finished the implementation and published a pull request.
If compression is important, it can be enabled for the whole .beam.
See #8940 (comment).
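The whole-file alternative mentioned above is easy to sketch. This is a generic illustration, not OTP's mechanism: gzip stands in for whatever codec a build or deployment pipeline might use, and the point is only that compressing the entire .beam image can recover the savings lost by an uncompressed literal chunk.

```python
import gzip


def compression_savings(beam_bytes: bytes) -> tuple[int, int]:
    """Return (uncompressed_size, gzip_size) for a whole .beam image."""
    return len(beam_bytes), len(gzip.compress(beam_bytes))
```

Because compression then covers every chunk, including code and atom tables, it typically saves more than compressing the literal chunk alone.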
Also, to quote @pguyot from that thread: