trustyai-explainability/trustyai-python.github.io
Python bindings to TrustyAI's explainability library.

Setup

PyPI

Install from PyPI with

pip install trustyai

Local

The minimum dependencies can be installed with

pip install -r requirements.txt

If running the examples or developing, also install the development dependencies:

pip install -r requirements-dev.txt

Docker

Alternatively, build a container image and run it with

$ docker build -f Dockerfile -t ruivieira/python-trustyai:latest .
$ docker run --rm -it -p 8888:8888 ruivieira/python-trustyai:latest

The Jupyter server will be available at localhost:8888.

Binder

You can also run the example Jupyter notebooks on mybinder.org.

Getting started

To get started, import the module and initialise it. For instance,

import trustyai

trustyai.init()

If the dependencies are not in the default dep sub-directory, or if you want to use a custom classpath, you can specify it with:

import trustyai

trustyai.init(path="/foo/bar/explainability-core-2.0.0-SNAPSHOT.jar")

To fetch all of the project's dependencies, run the deps.sh script; they will be stored locally under ./dep.

trustyai.init() must be the very first call, before any other TrustyAI method. After that, all other methods can be used, as shown in the examples.

Writing your model in Python

To code a model in Python you need to write it as a function that takes a Python list of PredictionInput and returns a (Python) list of PredictionOutput.

This function will then be passed as an argument to the Python PredictionProvider which will take care of wrapping it in a Java CompletableFuture for you. For instance,

from trustyai.model import PredictionProvider

def myModelFunction(inputs):
    # do something with the inputs; predictionOutput1 and
    # predictionOutput2 are placeholders for real PredictionOutput objects
    output = [predictionOutput1, predictionOutput2]
    return output

model = PredictionProvider(myModelFunction)

inputs = [predictionInput1, predictionInput2]

prediction = model.predictAsync(inputs).get()

For a working example, see the sumSkipModel in the LIME tests.
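To make the list-in/list-out contract concrete, here is a plain-Python sketch in the spirit of a sum model. Note that SimpleInput and SimpleOutput are hypothetical stand-ins for TrustyAI's PredictionInput and PredictionOutput, used only to illustrate the shape of the function you would hand to PredictionProvider; they are not part of the trustyai API.

```python
# Plain-Python sketch of the model contract: a function that takes a list
# of inputs and returns one output per input. SimpleInput/SimpleOutput are
# stand-ins for TrustyAI's PredictionInput/PredictionOutput classes.
from dataclasses import dataclass
from typing import List

@dataclass
class SimpleInput:
    values: List[float]

@dataclass
class SimpleOutput:
    value: float

def sum_model(inputs: List[SimpleInput]) -> List[SimpleOutput]:
    # One output per input: sum each input's feature values.
    return [SimpleOutput(sum(i.values)) for i in inputs]

outputs = sum_model([SimpleInput([1.0, 2.0]), SimpleInput([3.0, 4.0])])
print([o.value for o in outputs])  # → [3.0, 7.0]
```

The key point is the shape: the function receives the whole batch of inputs and returns a list of the same length, which is what lets PredictionProvider wrap it in a Java CompletableFuture.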

Examples

You can look at the tests for working examples.

There are also Jupyter notebooks available.