
WIP: Dockerfile to build a buildroot-based cross-compile environment for vzlogger #486

Open
wants to merge 13 commits into base: master
Conversation

r00t-
Contributor

@r00t- r00t- commented Apr 10, 2021

see #478/comment

TODO:

  • add libsml ✔️
  • optimize image size. ✅
    the critical part is buildroot/output, which is ~300mb, it should be possible to get rid of everything else.
  • build static binary for easier deployment?

@r00t- r00t- force-pushed the vzlogger_buildroot_builder branch from 24d9026 to 6e70998 Compare April 10, 2021 01:58
@r00t-
Contributor Author

r00t- commented Apr 10, 2021

abbreviated example run, to give readers an idea of what this does:

$ time docker build - <vzlogger_buildroot_builder.Dockerfile                                                            
[...]
2021-04-10 01:56:15 (1.68 MB/s) - 'buildroot-2021.02.1.tar.bz2' saved [5904336/5904336]
+ echo BR2_PACKAGE_VZLOGGER=y
Step 12/16 : RUN        ./br_make_wrapper.bash source
Step 13/16 : RUN        ./br_make_wrapper.bash toolchain
Step 14/16 : RUN        ./br_make_wrapper.bash vzlogger-depends && rm -fr dl/vzlogger output/build/vzlogger-*
=== make vzlogger-depends ===
>>> host-cmake 3.15.5 Building
>>> host-pkgconf 1.6.3 Building
>>> json-c 0.15 Building
>>> libzlib 1.2.11 Building
>>> zlib  Building
>>> libopenssl 1.1.1k Building
>>> openssl  Building
>>> libcurl 7.76.0 Building
>>> host-libtool 2.4.6 Building
>>> host-autoconf 2.69 Building
>>> host-automake 1.15.1 Building
>>> libgpg-error 1.41 Building
>>> libgcrypt 1.9.2 Building
>>> libmicrohttpd 0.9.72 Building
>>> libunistring 0.9.10 Building
>>> util-linux 2.36.1 Building
Step 15/16 : RUN        rm -fr dl output/build/*/* && du -chx . | sort -h | tail -n 30 ; ( du -sh / 2>/dev/null || true )
307M    ./output
788M    /
Step 16/16 : RUN 	set -xe ; 	./br_make_wrapper.bash vzlogger-build ;	file output/build/vzlogger-*/src/vzlogger ; 	ls -l output/build/vzlogger-*/src/vzlogger ; 	rm -fr output/build/vzlogger-*
+ ./br_make_wrapper.bash vzlogger-build
=== make vzlogger-build ===
>>> vzlogger origin_master Downloading
>>> vzlogger origin_master Extracting
>>> vzlogger origin_master Patching
>>> vzlogger origin_master Configuring
>>> vzlogger origin_master Building
+ file output/build/vzlogger-origin_master/src/vzlogger
output/build/vzlogger-origin_master/src/vzlogger: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-uClibc.so.0, with debug_info, not stripped
+ ls -l output/build/vzlogger-origin_master/src/vzlogger
-rwxr-xr-x 1 root root 1424172 Apr 10 02:35 output/build/vzlogger-origin_master/src/vzlogger

real    12m46.154s

@r00t- r00t- force-pushed the vzlogger_buildroot_builder branch 3 times, most recently from d2b0fcf to 8e23794 Compare April 11, 2021 04:16
@r00t-
Contributor Author

r00t- commented Apr 11, 2021

using the builder pattern and some hacks, i managed to optimize the image down to:

$ time docker image save 71f41fecb48e | wc -c
576,486,912
$ time docker image save 71f41fecb48e | gzip -9 | wc -c
188,008,859

the individual layers are:

  • 120mb base debian image
  • 120mb debian updates + installed packages on the final image
  • 340mb buildroot tree
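For reference, the gzip step above shrinks the saved image to roughly a third of its raw size; the ratio can be computed directly from the two byte counts:

```shell
# compression ratio of the gzipped image save, from the byte counts above
raw=576486912
gz=188008859
awk -v r="$raw" -v g="$gz" 'BEGIN { printf "%.1f%% of raw size\n", 100 * g / r }'
# → 32.6% of raw size
```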

@r00t- r00t- force-pushed the vzlogger_buildroot_builder branch from 8e23794 to f471ca4 Compare April 11, 2021 06:30
@r00t-
Contributor Author

r00t- commented Apr 11, 2021

now with libsml and also cross-compiles the tests

@r00t- r00t- force-pushed the vzlogger_buildroot_builder branch from f471ca4 to 6a6e2bd Compare April 11, 2021 06:32
apt-get -y --force-yes upgrade ; \
apt-get -y --force-yes install \
# https://buildroot.org/downloads/manual/manual.html#requirement
make gcc g++ \
Contributor

Would it be better to use the gcc base image (https://hub.docker.com/_/gcc) and install fewer packages?

Contributor Author

that's what my comment above says;
i just hadn't done any research yet.
thanks for the suggestion of the 'gcc' image(s).

Contributor Author

@r00t- r00t- Apr 11, 2021

the gcc image itself does not make any sense to use,
because it's based on an image that already contains gcc 🤣,
which is then used to build gcc from source.
(and it does not even use a multi-stage build to remove the initial gcc.)
https://github.com/docker-library/gcc/blob/master/10/Dockerfile

but we might use the image that the gcc image is built on:
https://hub.docker.com/_/buildpack-deps/
https://github.com/docker-library/buildpack-deps/blob/master/debian/buster/Dockerfile
https://github.com/docker-library/buildpack-deps/blob/master/debian/buster/scm/Dockerfile
it has most of what we need. still not all, and also lots of stuff we don't need...
but unlikely to get a perfect match.
is there a best practice for installing deps at build-time vs. using pre-built images?

there is an existing official docker build that uses buildroot to build uClibc,
similar to what i do here,
but it doesn't seem to be something we can re-use:
https://github.com/docker-library/busybox/blob/master/stable/uclibc/Dockerfile.builder
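A sketch of the buildpack-deps idea mentioned above; the extra package list is an assumption (check the buildroot manual's requirements), not tested:

```dockerfile
# sketch, untested: start from buildpack-deps instead of plain debian,
# then add the few buildroot requirements it lacks
# (the exact package list here is an assumption)
FROM buildpack-deps:buster
RUN apt-get update && \
    apt-get install -y --no-install-recommends cpio rsync bc file && \
    rm -rf /var/lib/apt/lists/*
```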

# so we can see their size in a separate layer
RUN set -xe ; \
apt-get update ; \
apt-get -y --force-yes upgrade ; \
Contributor

The debian image is built often and released with the latest version of the packages. See https://hub.docker.com/_/debian?tab=tags&page=1&ordering=last_updated. Can we improve build speed by removing this step and ensuring that a pull of the debian image is done before the build?

Contributor Author

@r00t- r00t- Apr 11, 2021

is there a tag that simply always points to the latest pre-updated version?
(it's strange that 'latest' does not. i remember 'latest' being regularly updated at least for unstable.)

but also see above.

the only real issue is that if we need to install packages, that might pull in any amount of partially related updates.

Contributor

You should be able to run docker build --pull. This should ensure that the build is using the latest available image under this tag.

Contributor Author

i missed that --pull is needed to use the latest image release.
that should save some time by avoiding updates.

is there a best practice for this?
i.e., can we rely on the latest image being up-to-date, or should we run the update in any case?

Contributor

If it runs on the build server, it is always the latest version at build time. (Sorry for the delay, I was off for some days.)

# only for showing the type of the binary below
file \
; \
apt-get purge ; \
Contributor

The purge is not in the docker best practices (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/). Does it reduce the image size?

Contributor Author

@r00t- r00t- Apr 11, 2021

this deletes the downloaded packages;
recent images are configured to do this automatically,
and it does no harm if done redundantly.

set -xe ; \
\
# download and unpack buildroot
wget --progress=dot:mega https://buildroot.org/downloads/buildroot-2021.02.1.tar.bz2 ; \
Contributor

I think the buildroot version should be an ARG, so that testing another version is easy.

Contributor Author

@r00t- r00t- Apr 11, 2021

we could even pull buildroot from git.

i did not make this a parameter yet because i am unsure how much of the trickery below will break on updates anyway and need fixing.
i don't assume this would simply work.
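If the version were parameterized anyway, it might look like the following sketch (the ARG name is an assumption, and as noted above, newer buildroot versions may still break the later config tweaks):

```dockerfile
# sketch: make the buildroot version overridable at build time
ARG BUILDROOT_VERSION=2021.02.1
RUN wget --progress=dot:mega \
        "https://buildroot.org/downloads/buildroot-${BUILDROOT_VERSION}.tar.bz2" && \
    tar xjf "buildroot-${BUILDROOT_VERSION}.tar.bz2"
```

A different version could then be tried with `docker build --build-arg BUILDROOT_VERSION=... .` without editing the Dockerfile.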

@r00t-
Contributor Author

r00t- commented Apr 14, 2021

it turns out that buildroot can be globally configured to compile everything as static binaries using:

BR2_STATIC_LIBS=y
# BR2_SHARED_LIBS is not set
# BR2_SHARED_STATIC_LIBS is not set

but this fails for both libsml and vzlogger for various reasons.

also it turns out that linking vzlogger with uclibc is kinda futile, because libstdc++ links glibc anyway.

@r00t-
Contributor Author

r00t- commented Jan 18, 2023

giving up on static linking for now,
i changed this to just package everything up including the set of libraries,
and added a "docker buildx" Dockerfile to generate an arm image.
i uploaded a pre-built image (from vzlogger_buildroot_builder.Dockerfile) to dockerhub,
you can run this as:

$ docker buildx build --platform linux/arm64 --build-arg BUILDER=r000t/vzlogger-builder:latest https://raw.githubusercontent.com/r00t-/vzlogger/vzlogger_buildroot_builder/vzlogger_dockerx.Dockerfile

(i have not actually tested running the resulting image.)
(alternatively copy the contents of output/target to an arm system without using docker and chroot into it.)
it's currently hardcoded to build master, which is a little boring,
but after adding some parameters and/or code to inject the source, we could use this to cross-compile in a github action and then run the tests in qemu via docker buildx.
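One way the hardcoded master could be parameterized is sketched below; the ARG name and the clone step are assumptions for illustration, not the current implementation:

```dockerfile
# sketch: let the ref be chosen at build time instead of hardcoding master
ARG VZLOGGER_REF=master
RUN git clone --depth 1 --branch "${VZLOGGER_REF}" \
        https://github.com/volkszaehler/vzlogger.git
```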

@maxberger
Contributor

Do you want this one or #563? You should decide on one of them.

@r00t-
Contributor Author

r00t- commented Jan 18, 2023

@maxberger:
these do rather different things, and i think we can use both.

the alpine (or debian)-based images are "simple" traditional docker images built in the "usual way" from an OS image, and easy to handle for anybody, so i think we still want to offer this.
(those CAN be used to build foreign architectures, but only very inefficiently via qemu.)

while THIS has a dockerfile that generates a cross-compile environment, which is WAY more complex and initially slower than the traditional approach, but which we can store for re-use (see my test-image on dockerhub),
and can then use to cross-compile efficiently.

you might read this article on the approach:
https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
but sadly it focuses on creating multi-arch dockerfiles, and totally ignores the complexity of cross-compilation (by focusing on go, which makes this very easy).

my personal main target is to be able to efficiently build and run the unit-tests for multiple architectures, see the discussion above.

@maxberger
Contributor

Ok, so this one is for running tests, whereas the alpine one would be for distribution. Makes sense, thanks for the explanation!

@r00t-
Contributor Author

r00t- commented Jan 18, 2023

Ok, so this one is for running tests, whereas the alpine one would be for distribution. Makes sense, thanks for the explanation!

i'd rather say, this is for efficient cross-compilation,
which regular users do not need, but which is critical for running tests in the github action.

Contributor

@StefanSchoof StefanSchoof left a comment

I built the image on my machine. That takes quite a while. So we would need to build a lot of arm images with one instance to save some compute time.

I also think that maintaining this is a task that takes some effort. Modern docker with multi-platform support is good: it may need some compute time to build, but the tooling is simple. To balance this, in my opinion I would go with running the tests with docker qemu.

Comment on lines +55 to +63
echo '#!/bin/bash' ;\
echo 'f=/tmp/br_make_wrapper.$$' ; \
echo 'trap "rm -f $f" EXIT' ; \
echo 'echo "=== make $@ ===" >&2' ; \
echo 'make "$@" &> >(tee "$f" | grep --line-buffered ">>>") || {' ; \
echo ' e=$?' ; \
echo ' cat "$f" >&2' ; \
echo ' exit $e' ; \
echo '}' ; \
Contributor

Would it be easier to maintain if this file were a normal file and copied into the image with ADD?

Contributor Author

@r00t- r00t- Jan 20, 2023

i authored this particular script in 2020, and it has never needed any change since.

other than that, i inlined all the files in the dockerfile to avoid cluttering the repo with single-purpose files.
but i guess we should create a subdirectory for this anyway, and could then split them out.
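If the files were split out into a subdirectory, copying them in would be a one-liner per file; the paths below are assumptions for illustration:

```dockerfile
# sketch: copy the wrapper script in instead of echoing it line by line
COPY docker/buildroot/br_make_wrapper.bash /buildroot/br_make_wrapper.bash
RUN chmod +x /buildroot/br_make_wrapper.bash
```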

@r00t-
Contributor Author

r00t- commented Jan 20, 2023

I built the image on my machine. That takes quite a while. So we would need to build a lot of arm images with one instance to save some compute time.

the idea is that we can build this once, store it somewhere (like on dockerhub now) and use it almost indefinitely.
(it will only need an update to change the compiler or a system library, or to build it for a new architecture.)
(one issue might be that the image is somewhat large, which is why i tried to trim it as much as possible.)

I also think that maintaining this is a task that takes some effort.

yes, but cross-compiling is complex in most cases.
i must admit that the buildroot+docker solution is somewhat exotic, it just happens to be something i use for my day job, so i can maintain it relatively easily.
we could also research alternative cross-compilation solutions, but i guess those would be equally complex.

Modern docker with multi-platform support is good: it may need some compute time to build, but the tooling is simple. To balance this, in my opinion I would go with running the tests with docker qemu.

i don't understand if you are arguing for or against here.

even if the build of the buildroot image takes time, that only needs to be done once (see above),
and building the tests in qemu, which would be my main use case, can imho be considered prohibitively slow, as stated before.

execution of the tests, once built, would have to happen in qemu either way.

@r00t-
Contributor Author

r00t- commented Jan 21, 2023

previous discussion:
building and running the tests in qemu takes almost an hour:
#478 (comment)

@StefanSchoof
Contributor

Okay, I will try to run some docker builds of this and of the alpine image in the GitHub Agent in the next week, so we get some current durations.

I am totally happy if we get a community-managed alpine based Dockerfile in the master branch.

@narc-Ontakac2
Collaborator

narc-Ontakac2 commented Jan 21, 2023

Not sure if I am missing the point, but why don't you just use pbuilder? You need a base system which is created with

sudo pbuilder create --architecture armhf --distribution unstable --debootstrap qemu-debootstrap --basetgz /var/cache/pbuilder/unstable-armhf.tgz

Then you run debuild to create a dsc file.
Then you can build an armhf package:

sudo pbuilder build --architecture armhf --basetgz /var/cache/pbuilder/unstable-armhf.tgz vzlogger_0.3.5.dsc

This works, I did however not yet test the resulting package.

@narc-Ontakac2
Collaborator

The above is OK for doing a build, but not for developing. Is that it?

@r00t-
Contributor Author

r00t- commented Jan 21, 2023

@narc-Ontakac2:
i'm not familiar with pbuilder,
can pbuilder generate a cross-compiler for us?
(i see no mention of that, only of chroot.)

see the discussion in #478 ,
we would like to run the unit-tests on arm from the github action,
#478 contains code to do that using a native arm compiler inside qemu
but that's extremely inefficient.
this sets up a cross-compiler using buildroot;
it takes some time because buildroot builds it from source.
(but otoh we can have it build any architecture desired.)

@narc-Ontakac2
Collaborator

For running the tests pbuilder is oversized. It starts with a debian base system, installs all dependencies and then runs debuild (which will also run the tests). The main purpose is to check for forgotten build dependencies. It can run a qemu vm to run builds on foreign platforms.

One of the things I want to do is to run these builds (amd64, armhf, arm64) as a release action and publish the packages to a repository.

@r00t-
Contributor Author

r00t- commented Jan 22, 2023

One of the things I want to do is to run these builds (amd64, armhf, arm64) as a release action and publish the packages to a repository

as said in #478 , for occasional building of releases, using qemu is probably ok.

@StefanSchoof
Contributor

StefanSchoof commented Jan 27, 2023

Okay, I will try to run some docker builds of this and of the alpine image in the GitHub Agent in the next week, so we get some current durations.

I ran some builds of the alpine docker image:
https://github.com/StefanSchoof/vzlogger/actions/workflows/docker.yml

It took about 30 min for the majority of the runs to build a docker image for linux/arm/v6, linux/arm64 and linux/amd64, including tests for all platforms. One took 48 mins.

I did not find time to run a docker build for this image on github actions.

@StefanSchoof
Contributor

Building the buildroot image took 37 min on a github agent: https://github.com/StefanSchoof/vzlogger/actions/workflows/build_buildroot.yml
Building with this image vzlogger took 3 to 4 min: https://github.com/StefanSchoof/vzlogger/actions/workflows/docker_vzlogger.yml

@r00t-
Contributor Author

r00t- commented Feb 20, 2023

@StefanSchoof:
many thanks for writing those github actions,
i pulled your commits into my branch to avoid them getting lost on your master branch.

i'll look into updating the buildx action to actually build the requested branch (it's hardcoded to build master atm),
when i find the time.
