A multi-architecture container image for running Celery. This image precompiles dependencies such as gevent to speed up builds across all architectures.
Looking for the containers? Head over to the GitHub Container Registry!
- python-celery
Every tag in this repository supports these architectures:
- linux/amd64
- linux/arm64
- linux/arm/v7
All libraries are compiled in one image before being moved into the final published image. This keeps all of the build tools out of the published container layers.
This project uses the GitHub Container Registry to store images, which has no rate limiting on pulls (unlike Docker Hub).
Within 30 minutes of a new Celery release on PyPI, builds kick off for new containers. This means new versions can be used in hours, not days.
Containers are rebuilt weekly to pick up security patches from the upstream containers.
The Full Images use the base Python Docker images as their parent. These images are based on Debian and contain a variety of build tools.
To pull the latest full version:

```bash
docker pull ghcr.io/multi-py/python-celery:py3.12-latest
```
To include it in a Dockerfile instead:

```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-latest
```
The Slim Images use the base Python Slim Docker images as their parent. These images are very similar to the Full images but without the build tools, making them much smaller than their counterparts but more difficult to compile wheels on (see the sketch below for one workaround).
To pull the latest slim version:

```bash
docker pull ghcr.io/multi-py/python-celery:py3.12-slim-latest
```
To include it in a Dockerfile instead:

```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-slim-latest
```
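If you do need to compile a wheel on a slim image, one option is to install the build toolchain in a derived image and remove it afterwards. A sketch, where some-package stands in for a dependency that needs compiling:

```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-slim-5.4.0

# Install the compiler toolchain, build the wheel, then clean up to keep the image small.
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && pip install --no-cache-dir some-package \
 && apt-get purge -y build-essential \
 && rm -rf /var/lib/apt/lists/*
```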
The Alpine Images use the base Python Alpine Docker images as their parent. These images use Alpine as their operating system, with musl instead of glibc.
In theory these images are smaller than even the slim images, but the difference amounts to less than 30 MB. Additional Python libraries tend not to be well tested on Alpine, so these images should be used with care and testing until the ecosystem matures.
To pull the latest alpine version:

```bash
docker pull ghcr.io/multi-py/python-celery:py3.12-alpine-latest
```
To include it in a Dockerfile instead:

```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-alpine-latest
```
It's also possible to copy just the Python packages themselves. This is particularly useful when you want to use the precompiled libraries from multiple containers.
```dockerfile
FROM python:3.12
COPY --from=ghcr.io/multi-py/python-celery:py3.12-slim-latest /usr/local/lib/python3.12/site-packages/* /usr/local/lib/python3.12/site-packages/
```
By default the startup script checks for the following files and uses the first one it finds:
- /app/app/worker.py
- /app/worker.py
By default the Celery application should be inside that module in a variable named celery. Both the locations and the variable name can be changed via environment variables.
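As a sketch, a minimal /app/worker.py might look like this (the Redis broker URL is an example, not something the image requires):

```python
# /app/worker.py
from celery import Celery

# The startup script looks for a variable named `celery` in this module.
celery = Celery("worker", broker="redis://redis:6379/0")

@celery.task
def add(x, y):
    return x + y
```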
If you are using pip to install dependencies, your Dockerfile could look like this:
```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-5.4.0
COPY requirements /requirements
RUN pip install --no-cache-dir -r /requirements
COPY ./app app
```
When the container is launched it will run the script at /app/prestart.sh before starting the Celery service. This is an ideal place for things like database migrations.
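A sketch of a prestart script; the alembic migration command is only an example of what might go here:

```bash
#!/usr/bin/env bash
# /app/prestart.sh - runs once before the Celery worker starts.
set -e

# Example: apply database migrations (alembic is an assumption, not a requirement).
alembic upgrade head
```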
If the container is launched with the environment variable ENABLE_BEAT set, it will run the beat scheduler instead of the normal worker process. Only one scheduler should run at a time, otherwise duplicate tasks will run. The container running the scheduler will not process tasks, so a second container should be launched as a normal worker (see the sketch below).
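As a sketch, a deployment might pair one scheduler container with one or more workers. Here my-celery-app is a hypothetical image built from this base, as in the Dockerfile example above:

```bash
# Beat scheduler: creates scheduled tasks but does not process them.
docker run -d -e ENABLE_BEAT=true my-celery-app

# Normal worker: actually runs the tasks.
docker run -d my-celery-app
```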
This container runs the Celery defaults where appropriate, which includes using the prefork pool as the default option. The project also precompiles gevent to make switching between pool types easy.
Choosing the right pool is extremely important for efficient use of resources. As a starting point you can rely on these rules (a sketch follows the list):
- If the tasks are CPU bound (processing lots of data, generating images, running inference on CPU-based ML models), stick with the prefork pool and set CONCURRENCY to the number of CPUs. This runs one task per CPU at a time.
- If the tasks rely on external resources (filesystem reads, database calls, API requests), the gevent pool with a high CONCURRENCY (100 per CPU to start, then adjust based on how it performs) will work best. These tasks spend more time waiting than they do processing, so many more of them can run at a time.
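For example, an I/O-bound worker on a four-CPU machine might be launched like this. CONCURRENCY is documented below; the POOL variable name is an assumption based on the pool option described below, and my-celery-app is a hypothetical image built from this base:

```bash
# gevent pool with 100 greenlets per CPU on a 4-CPU host.
docker run -d \
  -e POOL=gevent \
  -e CONCURRENCY=400 \
  my-celery-app
```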
These variables are in addition to the environment variables defined by Celery itself.
When ENABLE_BEAT is set to true the container will start the beat scheduler instead of a normal worker. Only one container in each cluster should have beat enabled at a time, in order to prevent duplicate tasks from being created. Beat schedulers will not run tasks, so at least one additional container running as a normal worker needs to be launched.
Can be prefork, eventlet, gevent, solo, processes, or threads. As a simple rule, use prefork (the default) when your tasks are CPU heavy and gevent otherwise.
How many tasks to run at a time. For process-based pools this defines the number of processes; for the others it defines the number of threads or greenlets.
The prefetch multiplier tells Celery how many items in the queue to reserve for the current worker.
The Celery log level. Must be one of the following:
- critical
- error
- warning
- info
- debug
- trace
The Python module that Celery will import. This value is used to generate the APP_MODULE value.
The Python variable containing the Celery application inside the module. This value is used to generate the APP_MODULE value.
The Python module and variable that is passed to Celery. When used, the VARIABLE_NAME and MODULE_NAME environment variables are ignored.
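For example, if your application lives in /app/tasks.py in a variable named app, a derived image might set these values (a sketch; the file name and variable are examples):

```dockerfile
FROM ghcr.io/multi-py/python-celery:py3.12-5.4.0
ENV MODULE_NAME=tasks
ENV VARIABLE_NAME=app
COPY ./tasks.py /app/tasks.py
```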
Where to find the prestart script, if a developer adds one.
If RELOAD is set to true and any files in the /app directory change, Celery will be restarted, allowing for quick debugging. This comes at a performance cost, however, and should not be enabled on production machines. This functionality is not available on the linux/arm/v7 images.
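A sketch of a local debugging run, assuming your application code lives in ./app on the host so that edits there trigger the reload (the image tag is an example):

```bash
docker run -d \
  -e RELOAD=true \
  -v "$(pwd)/app:/app" \
  ghcr.io/multi-py/python-celery:py3.12-5.4.0
```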
When RELOAD is set, this value determines how long to wait for the worker to shut down gracefully before forcefully terminating it and reloading. Defaults to 30 seconds.
This project actively supports these Python versions:
- 3.12
- 3.11
- 3.10
- 3.9
- 3.8
Like the upstream Python containers themselves, a variety of image variants are supported.
The default container type; if you're not sure which container to use, start here. It has a variety of libraries and build tools installed, making it easy to extend.
This container is similar to Full but with far fewer libraries and tools installed by default. If you're looking for the smallest possible image with the most stability, this is your best bet.
This container is provided for those who wish to use Alpine. Alpine works a bit differently than the other image types, as it uses musl instead of glibc, and many libraries are not well tested under musl at this time.
If you get use out of these containers please consider sponsoring me on GitHub!
- Recommended Image:
ghcr.io/multi-py/python-celery:py3.12-5.4.0
- Slim Image:
ghcr.io/multi-py/python-celery:py3.12-slim-5.4.0
Tags are based on the package version, the Python version, and the upstream container the image is based on.
celery Version | Python Version | Full Container | Slim Container | Alpine Container |
---|---|---|---|---|
latest | 3.12 | py3.12-latest | py3.12-slim-latest | py3.12-alpine-latest |
latest | 3.11 | py3.11-latest | py3.11-slim-latest | py3.11-alpine-latest |
latest | 3.10 | py3.10-latest | py3.10-slim-latest | py3.10-alpine-latest |
latest | 3.9 | py3.9-latest | py3.9-slim-latest | py3.9-alpine-latest |
latest | 3.8 | py3.8-latest | py3.8-slim-latest | py3.8-alpine-latest |
5.4.0 | 3.12 | py3.12-5.4.0 | py3.12-slim-5.4.0 | py3.12-alpine-5.4.0 |
5.4.0 | 3.11 | py3.11-5.4.0 | py3.11-slim-5.4.0 | py3.11-alpine-5.4.0 |
5.4.0 | 3.10 | py3.10-5.4.0 | py3.10-slim-5.4.0 | py3.10-alpine-5.4.0 |
5.4.0 | 3.9 | py3.9-5.4.0 | py3.9-slim-5.4.0 | py3.9-alpine-5.4.0 |
5.4.0 | 3.8 | py3.8-5.4.0 | py3.8-slim-5.4.0 | py3.8-alpine-5.4.0 |
5.3.6 | 3.12 | py3.12-5.3.6 | py3.12-slim-5.3.6 | py3.12-alpine-5.3.6 |
5.3.6 | 3.11 | py3.11-5.3.6 | py3.11-slim-5.3.6 | py3.11-alpine-5.3.6 |
5.3.6 | 3.10 | py3.10-5.3.6 | py3.10-slim-5.3.6 | py3.10-alpine-5.3.6 |
5.3.6 | 3.9 | py3.9-5.3.6 | py3.9-slim-5.3.6 | py3.9-alpine-5.3.6 |
5.3.6 | 3.8 | py3.8-5.3.6 | py3.8-slim-5.3.6 | py3.8-alpine-5.3.6 |
5.3.5 | 3.12 | py3.12-5.3.5 | py3.12-slim-5.3.5 | py3.12-alpine-5.3.5 |
5.3.5 | 3.11 | py3.11-5.3.5 | py3.11-slim-5.3.5 | py3.11-alpine-5.3.5 |
5.3.5 | 3.10 | py3.10-5.3.5 | py3.10-slim-5.3.5 | py3.10-alpine-5.3.5 |
5.3.5 | 3.9 | py3.9-5.3.5 | py3.9-slim-5.3.5 | py3.9-alpine-5.3.5 |
5.3.5 | 3.8 | py3.8-5.3.5 | py3.8-slim-5.3.5 | py3.8-alpine-5.3.5 |
5.3.4 | 3.12 | py3.12-5.3.4 | py3.12-slim-5.3.4 | py3.12-alpine-5.3.4 |
5.3.4 | 3.11 | py3.11-5.3.4 | py3.11-slim-5.3.4 | py3.11-alpine-5.3.4 |
5.3.4 | 3.10 | py3.10-5.3.4 | py3.10-slim-5.3.4 | py3.10-alpine-5.3.4 |
5.3.4 | 3.9 | py3.9-5.3.4 | py3.9-slim-5.3.4 | py3.9-alpine-5.3.4 |
5.3.4 | 3.8 | py3.8-5.3.4 | py3.8-slim-5.3.4 | py3.8-alpine-5.3.4 |
5.3.1 | 3.12 | py3.12-5.3.1 | py3.12-slim-5.3.1 | py3.12-alpine-5.3.1 |
5.3.1 | 3.11 | py3.11-5.3.1 | py3.11-slim-5.3.1 | py3.11-alpine-5.3.1 |
5.3.1 | 3.10 | py3.10-5.3.1 | py3.10-slim-5.3.1 | py3.10-alpine-5.3.1 |
5.3.1 | 3.9 | py3.9-5.3.1 | py3.9-slim-5.3.1 | py3.9-alpine-5.3.1 |
5.3.1 | 3.8 | py3.8-5.3.1 | py3.8-slim-5.3.1 | py3.8-alpine-5.3.1 |
Older tags are left for historic purposes but do not receive updates. A full list of tags can be found on the package's registry page.