This is the inference backend for Tibava.
This is where the actual machine learning models are running.
For the time being there aren't any strict contribution guidelines, but more will follow.
./analyser_interface_python
contains the Protobuf definitions used for the communication between the backend and the inference server. Usually you won't be working on this directly.

./analyser_python
contains the gRPC server that listens for tasks from the backend and dispatches them to the plugin manager defined in ./inference_ray.
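To illustrate the dispatch step, here is a minimal sketch of a plugin manager that maps a task to a registered plugin. All names in it (`PluginManager`, `register`, `dispatch`, the `shot_detection` plugin) are hypothetical and only show the general pattern, not Tibava's actual API.

```python
# Hypothetical sketch: a registry-based plugin manager that a gRPC handler
# could forward incoming tasks to. Names are illustrative, not Tibava's API.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class PluginManager:
    # Maps a plugin name (as requested by the backend) to a callable plugin.
    plugins: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def register(self, name: str):
        # Decorator that registers a plugin under the given name.
        def wrapper(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            self.plugins[name] = fn
            return fn
        return wrapper

    def dispatch(self, task: dict) -> dict:
        # A task names the plugin it wants plus its parameters.
        plugin = self.plugins[task["plugin"]]
        return plugin(task.get("parameters", {}))


manager = PluginManager()


@manager.register("shot_detection")
def shot_detection(params: dict) -> dict:
    # Placeholder for a real ML model invocation.
    return {"shots": [], "threshold": params.get("threshold", 0.5)}


result = manager.dispatch(
    {"plugin": "shot_detection", "parameters": {"threshold": 0.8}}
)
```

In a real server the `dispatch` call would happen inside the gRPC service method, with the task decoded from its Protobuf message first.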
./data_python
contains many of the type definitions used in Tibava and their corresponding Protobuf definitions.

./inference_ray
contains the main Ray inference code; this is where the ML models of the plugins are executed. It is served with Ray Serve.

./utils_python
contains some helpers, such as plugin caching.
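As a rough idea of what a plugin cache can look like, here is a small sketch that keys cached results by plugin name and parameters. The names (`ResultCache`, `cache_key`) are hypothetical; the actual helpers in ./utils_python may work quite differently.

```python
# Hypothetical sketch of a plugin result cache, keyed by a deterministic
# hash of plugin name and parameters. Not Tibava's actual implementation.
import hashlib
import json


def cache_key(plugin: str, parameters: dict) -> str:
    # Serialize deterministically so identical tasks produce the same key.
    payload = json.dumps(
        {"plugin": plugin, "parameters": parameters}, sort_keys=True
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class ResultCache:
    def __init__(self) -> None:
        self._store: dict = {}

    def get(self, plugin: str, parameters: dict):
        # Returns None on a cache miss.
        return self._store.get(cache_key(plugin, parameters))

    def put(self, plugin: str, parameters: dict, result) -> None:
        self._store[cache_key(plugin, parameters)] = result


cache = ResultCache()
cache.put("shot_detection", {"threshold": 0.8}, {"shots": [0, 42]})
hit = cache.get("shot_detection", {"threshold": 0.8})
miss = cache.get("shot_detection", {"threshold": 0.5})
```

Hashing a sorted JSON serialization keeps the key stable across dict insertion orders, which matters when the same task arrives from different code paths.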