
Build a run evaluation engine in Python #1

Open

PGijsbers opened this issue Sep 2, 2024 · 1 comment

Comments

@PGijsbers

The evaluation engine is a server-side component that handles multiple tasks. It is currently implemented in Java, and we want to rebuild it in Python, compartmentalised per function, so that it is easier to maintain and more accessible to new contributors. One of its tasks is evaluating run results.

So we want an engine which can take in any run result and compute a number of metrics over those results. It should be easily extendable to new task types and cover many (all?) of the currently available metrics - or at least ensure that metrics which share a name produce identical results. Ideally, there would be a base implementation that task-type-specific evaluation engines can inherit from.
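
A minimal sketch of what that base class might look like (all class names and metric keys here are hypothetical, and scikit-learn is assumed for the metric implementations):

```python
from abc import ABC, abstractmethod

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error


class EvaluationEngine(ABC):
    """Base class that task-type-specific engines inherit from."""

    @abstractmethod
    def evaluate(self, y_true: np.ndarray, y_pred: np.ndarray) -> dict[str, float]:
        """Compute all metrics for this task type from a run's predictions."""


class ClassificationEngine(EvaluationEngine):
    """Metrics for supervised classification runs."""

    def evaluate(self, y_true: np.ndarray, y_pred: np.ndarray) -> dict[str, float]:
        return {
            "predictive_accuracy": accuracy_score(y_true, y_pred),
            "f1_macro": f1_score(y_true, y_pred, average="macro"),
        }


class RegressionEngine(EvaluationEngine):
    """Metrics for supervised regression runs."""

    def evaluate(self, y_true: np.ndarray, y_pred: np.ndarray) -> dict[str, float]:
        return {"mean_absolute_error": mean_absolute_error(y_true, y_pred)}
```

Adding support for a new task type would then just mean adding a subclass, without touching the existing engines.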

@joaquinvanschoren

This is a nice standalone project, assuming we can build it on top of the Python API. It would make a lot of sense to sit together for an hour during the hackathon to design the overall architecture and agree on concrete next steps.
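
For example, a rough sketch of how such an engine could plug into the openml Python package (`get_run`/`get_task` are existing openml-python calls; the engine class and the prediction handling are the hypothetical sketch from above):

```python
import numpy as np
import openml

# Fetch a run and its associated task through the openml-python API.
run = openml.runs.get_run(10)  # any existing run id
task = openml.tasks.get_task(run.task_id)

# In a real engine, the run's prediction file and the task's ground
# truth/splits would be parsed into aligned arrays; toy arrays stand
# in for them here.
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])

engine = ClassificationEngine()
print(engine.evaluate(y_true, y_pred))
```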
