Django Q keeps up to 250 successful task results in total by default.
Consider the following situation: two tasks are scheduled, task 1 runs every 10 minutes and task 2 runs every week. Because task 1 produces many results, the results of task 2 get removed from the database before task 2 runs again. As a result, task 2 appears in the admin as if the outcome of its last execution is unknown.
Solution 1 would be to increase save_limit to something like 15000 so that this never happens in practice.
A far better approach would be to have save_limit apply to each group of tasks independently, so that Django Q keeps up to 250 results for the group of task 1 and up to 250 results for the group of task 2.
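To illustrate the intended retention behavior, here is a minimal pure-Python sketch, deliberately independent of django-q's internals; the function name and the (group, result) tuple shape are hypothetical, not the library's actual API:

```python
from collections import defaultdict, deque

def prune_per_group(results, limit=250):
    """Keep only the newest `limit` results per group.

    `results` is a list of (group, result) tuples, oldest first.
    This is an illustrative sketch of per-group retention, not
    django-q's real pruning code.
    """
    kept = defaultdict(lambda: deque(maxlen=limit))
    for group, result in results:
        # A bounded deque silently drops the oldest entry once full,
        # so each group retains at most `limit` results.
        kept[group].append(result)
    return {group: list(items) for group, items in kept.items()}
```

With this behavior, 300 results from a frequent task would be trimmed to 250 while the single weekly result survives untouched, which is exactly the admin-visibility problem described above.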
Maybe we can implement this as a new configuration option, save_limit_per_group, which works alongside the existing save_limit. This way, existing setups won't be affected at all.
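If implemented, the option might slot into the existing Q_CLUSTER settings dict like this; note that save_limit is a real django-q setting (defaulting to 250), while save_limit_per_group is the proposed, hypothetical key:

```python
# Django settings.py sketch.
# `save_limit` exists in django-q today; `save_limit_per_group` is
# the PROPOSED (hypothetical) option from this issue.
Q_CLUSTER = {
    "name": "myproject",          # illustrative cluster name
    "save_limit": 250,            # existing global cap on successful results
    "save_limit_per_group": 250,  # proposed: cap applied per task group
}
```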
If this is a desirable feature, I'd look into the implementation myself.