- `database` contains the MySQL database structure.
- `chronos` is cron-job.org's cron job execution daemon and is responsible for fetching the jobs.
- `protocol` contains the interface definitions for interaction between system nodes.
- `frontend` contains the web interface.
- `statuspage` contains the status page UI.
- `api` contains the server API used by the web interface and the status page UI.
chronos checks the MySQL database every minute to collect all jobs to execute. For every minute, a thread is spawned which processes all the jobs of that minute. The actual HTTP fetching is done using the excellent curl multi library, with libev providing the event loop. Together with the c-ares resolver, this allows for thousands of parallel HTTP requests.
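The per-minute flow can be sketched as follows. This is a minimal Python illustration of the idea, not chronos's actual C++ implementation; the `next_run` field and function names are invented for the example:

```python
import threading

def jobs_due(jobs, now_minute):
    """Return all jobs whose next scheduled run is at or before now.

    `jobs` is a list of dicts with a hypothetical `next_run` field
    holding a Unix timestamp rounded down to the minute.
    """
    return [job for job in jobs if job["next_run"] <= now_minute]

def process_minute(jobs, now_minute, execute):
    """Spawn one worker thread that processes all jobs due this minute."""
    batch = jobs_due(jobs, now_minute)
    worker = threading.Thread(target=lambda: [execute(job) for job in batch])
    worker.start()
    return worker

# Example: at minute 120, two of three jobs are due.
jobs = [
    {"id": 1, "next_run": 60},
    {"id": 2, "next_run": 120},
    {"id": 3, "next_run": 180},
]
executed = []
process_minute(jobs, 120, executed.append).join()
```

In the real daemon the batch is fed to the curl multi handle instead of a Python callback, so the thousands of requests of one minute share a single event loop rather than one thread each.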
cron-job.org supports storing the job results for the user's convenience. Storing the result data in a MySQL database can quickly lead to I/O bottlenecks and has the additional downside that cleaning up old entries is extremely expensive. To solve this issue, chronos stores the results in per-user, per-day SQLite databases. Cleaning up old entries is then as easy as deleting the corresponding day's database files.
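The idea can be illustrated with Python's built-in `sqlite3` module. This is only a sketch: the file naming scheme and table layout below are invented for illustration and are not chronos's actual ones:

```python
import glob
import os
import sqlite3
import tempfile

def result_db_path(base_dir, user_id, day):
    # Hypothetical naming scheme: one database file per user and day.
    return os.path.join(base_dir, f"results_{user_id}_{day}.sqlite3")

def store_result(base_dir, user_id, day, job_id, status):
    """Append a job result to the user's database for the given day."""
    db = sqlite3.connect(result_db_path(base_dir, user_id, day))
    db.execute("CREATE TABLE IF NOT EXISTS result (job_id INTEGER, status TEXT)")
    db.execute("INSERT INTO result (job_id, status) VALUES (?, ?)", (job_id, status))
    db.commit()
    db.close()

def cleanup_day(base_dir, day):
    """Expire old history by simply deleting that day's database files."""
    for path in glob.glob(os.path.join(base_dir, f"results_*_{day}.sqlite3")):
        os.remove(path)

# Example: write results for two days, then expire the older day.
base = tempfile.mkdtemp()
store_result(base, 42, "20240101", job_id=7, status="200 OK")
store_result(base, 42, "20240102", job_id=7, status="200 OK")
cleanup_day(base, "20240101")
remaining = glob.glob(os.path.join(base, "*.sqlite3"))
```

Because expiry is a plain file deletion, it costs O(1) per user and day instead of a large `DELETE` that MySQL would have to log and flush.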
The whole software is optimized for performance rather than for data integrity, i.e. when your server crashes or you have a power outage or hardware defect, the job history is most likely lost. Since this is volatile data anyway, this is not considered a big issue.
`chronos` can now run on multiple nodes. Each node requires its own MySQL server/database and stores its own jobs. The host running the web interface also manages the user database and the association between jobs and nodes. The web interface can create, delete, update and fetch jobs and job logs on the particular node via a Thrift-based protocol defined in the `protocol` folder.
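The job-to-node association can be pictured like this. The sketch below uses a plain Python stub in place of the real Thrift client, and the method names are hypothetical, not taken from the `protocol` definitions:

```python
class NodeClient:
    """Stand-in for the per-node Thrift client; one instance per chronos node."""
    def __init__(self, name):
        self.name = name
        self.jobs = {}  # this node's own jobs, as stored in its own MySQL DB

    def create_job(self, job_id, url):
        self.jobs[job_id] = url

    def delete_job(self, job_id):
        self.jobs.pop(job_id, None)

class WebInterface:
    """Manages the user database and the job -> node association."""
    def __init__(self, nodes):
        self.nodes = nodes    # node name -> NodeClient
        self.job_node = {}    # job id -> node name

    def create_job(self, job_id, url, node_name):
        self.nodes[node_name].create_job(job_id, url)
        self.job_node[job_id] = node_name

    def delete_job(self, job_id):
        # Look up which node owns the job, then forward the call to it.
        node_name = self.job_node.pop(job_id)
        self.nodes[node_name].delete_job(job_id)

# Example: the web interface routes a new job to node2.
nodes = {"node1": NodeClient("node1"), "node2": NodeClient("node2")}
web = WebInterface(nodes)
web.create_job(1, "https://example.com/ping", "node2")
```

The point of the association table is that only the web interface host needs to know where a job lives; each node stays ignorant of the others.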
In order to build chronos, you need development files of:
- curl (preferably with c-ares as resolver and libidn2 for IDN support)
- libev
- mysqlclient
- sqlite3
- thrift (compiler and libthrift)
To build, you need a C++14 compiler and cmake.
- Create and enter a build folder: `mkdir build && cd build`
- Run cmake: `cmake -DCMAKE_BUILD_TYPE=Release ..`
- Build the project: `make`
- Ensure you've imported the DB scheme from the `database` folder
- Customize `chronos.cfg` according to your system (especially add your MySQL login)
- Execute `./chronos /path/to/chronos.cfg`
The API is written in PHP and needs to be hosted on a webserver (cron-job.org uses nginx with php-fpm). It is used by the console and the status page UI.
- nginx with php-fpm (PHP 7)
- Optionally, a redis instance to support API call rate limiting
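API call rate limiting of this kind is commonly built as a fixed-window counter on top of redis's `INCR`/`EXPIRE`. The sketch below models that with a plain dict; it is an illustration of the general technique, not the API's actual implementation:

```python
import time

class FixedWindowRateLimiter:
    """Fixed-window counter, mimicking a redis INCR + EXPIRE per client key."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (client, window index) -> request count

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        # All requests within the same window share one counter key;
        # in redis this key would simply expire after `window` seconds.
        key = (client, int(now // self.window))
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key] <= self.limit

# Example: at most 3 calls per 60-second window and client.
limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)]
```

Using redis instead of in-process state is what lets several php-fpm workers share one set of counters.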
- Copy the `api/` folder to your webserver
- Create a copy of `config/config.inc.default.php` as `lib/config.inc.php` and customize it according to your environment
- When changing the thrift protocol, don't forget to re-compile the PHP glue code and copy it to `lib/protocol/`. When committing, include the updated PHP code. Currently, this is a manual step.
The frontend is written in JavaScript using React and material-ui. You need npm
to build it.
- Node.js
- Go to the `frontend/` folder
- Install all required dependencies by running `npm install`
- Create a copy of `src/utils/Config.default.js` as `src/utils/Config.js` and customize it according to your environment
- Run the web interface via `npm start`
The status page frontend is written in JavaScript using React and material-ui. You need npm
to build it.
- Node.js
- Go to the `statuspage/` folder
- Install all required dependencies by running `npm install`
- Create a copy of `src/utils/Config.default.js` as `src/utils/Config.js` and customize it according to your environment
- Run the status page UI via `npm start`
- We strongly recommend building curl with the c-ares resolver. Otherwise, every request might spawn its own thread for DNS resolution, and your machine will run out of resources very quickly.
- Before running chronos, ensure that the limit on open files/sockets is not set too low. You might want to run `ulimit -n 65536` or similar first.
- If data integrity is not important to you, we highly recommend setting `innodb_flush_log_at_trx_commit=0` and `innodb_flush_method=O_DIRECT` in your MySQL config for best performance. Otherwise, the update thread (which is responsible for storing the job results) might soon lag behind the actual job executions.
- Parts of the source are quite old, date from early stages of the project, and might require refactoring sooner or later.