Commit

Initial version of vector model guide notebook
opcode81 committed Aug 13, 2024
1 parent be005ed commit d921324
Showing 1 changed file with 282 additions and 0 deletions.
282 changes: 282 additions & 0 deletions docs/1-supervised-learning/1-vector-models.ipynb
@@ -0,0 +1,282 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Models with Modular Data Pipelines"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"hide-cell"
]
},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2\n",
"\n",
"import sys; sys.path.append(\"../../src\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"hide-cell"
]
},
"outputs": [],
"source": [
"import sensai\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## VectorModel\n",
"\n",
"The backbone of supervised learning implementations is the `VectorModel` abstraction.\n",
"It is so named, because, in computer science, a *vector* corresponds to an array of data,\n",
"and vector models map such vectors to the desired outputs, i.e. regression targets or \n",
"classes.\n",
"\n",
"It is important to note that this does *not* limit vector models to tabular data, because the data within\n",
"a vector can take arbitrary forms (in contrast to vectors as they are defined in mathematics).\n",
"Every element of an input vector could itself be arbitrary\n",
"complex, and could, in the most general sense, be any kind of object.\n",
"\n",
"### The VectorModel Class Hierarchy\n",
"\n",
"`VectorModel` is an abstract base class.\n",
"From it, abstract base classes for classification (`VectorClassificationModel`) and regression (`VectorRegressionModel`) are derived. And we furthermore provide base classes for rule-based models, facilitating the implementation of models that do not require learning (`RuleBasedVectorClassificationModel`, `RuleBasedVectorRegressionModel`).\n",
"\n",
"These base classes are, in turn, specialised in order to provide direct access to model implementations based on widely used machine learning libraries such as scikit-learn, XGBoost, PyTorch, etc.\n",
"Use your IDE's hierarchy view to inspect them.\n",
"\n",
"<!-- TODO: hierarchical bullet item list with hierarchy (or maybe auto-generate?) -->\n",
"\n",
"### DataFrame-Based Interfaces\n",
"\n",
"Vector models use pandas DataFrames as the fundmental input and output data structures.\n",
"Every row in a data frame corresponds to a vector of data, and an entire data frame can thus be viewed as a dataset or batch of data. Data frames are a good base representation for input data because\n",
" * they provide rudimentary meta-data in the form of column names, avoiding ambiguity.\n",
" * they can contain arbitrarily complex data, yet in the simplest of cases, they can directly be mapped to a data matrix (2D array) of features that simple models can directly process.\n",
"\n",
"The `fit` and `predict` methods of `VectorModel` take data frames as input, and the latter furthermore returns its predictions as a data frame.\n",
"It is important to note that the DataFrame-based interface does not limit the scope of the models that can be applied, as one of the key principles of vector models is that they may define arbitrary model-specific transformations of the data originally contained in a data frame (e.g. a conversion from complex objects in data frames to one or more tensors for neural networks), as we shall see below.\n",
"\n",
"Here's the particularly simple Iris dataset for flower species classification, where the features are measurements of petals and sepals:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset = sensai.data.dataset.DataSetClassificationIris()\n",
"io_data = dataset.load_io_data()\n",
"io_data.to_df().sample(8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, `io_data` is an instance of `InputOutputData`, which contains two data frames, `inputs` and `outputs`.\n",
"The `to_df` method merges the two data frames into one for easier visualisation.\n",
"\n",
"Let's split the dataset and apply a model to it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load and split a dataset\n",
"splitter = sensai.data.DataSplitterFractional(0.8)\n",
"train_io_data, test_io_data = splitter.split(io_data)\n",
"\n",
"# train a model\n",
"model = sensai.sklearn.classification.SkLearnRandomForestVectorClassificationModel(\n",
" n_estimators=15)\n",
"model.fit_input_output_data(train_io_data)\n",
"\n",
"# make predictions\n",
"predictions = model.predict(test_io_data.inputs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `fit_input_output_data` method is just a convenience method to pass an `InputOutputData` instance instead of two data frames. It is equivalent to\n",
"\n",
"```python\n",
"model.fit(train_io_data.inputs, train_io_data.outputs)\n",
"```\n",
"\n",
"where the two data frames containing inputs and outputs are passed separately.\n",
"\n",
"Now let's compare the ground truth to some of the predictions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.concat((test_io_data.outputs, predictions), axis=1).sample(8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Implementing Custom Models\n",
"\n",
"It is straightforward to implement your own model. Simply subclass the appropriate base class depending on the type of model you want to implement.\n",
"\n",
"For example, let us implement a simple classifier where we always return the a priori probability of each class in the training data, ignoring the input data for predictions. For this case, we inherit from `VectorClassificationModel` and implement the two abstract methods it defines."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class PriorProbabilityVectorClassificationModel(sensai.VectorClassificationModel):\n",
" def _fit_classifier(self, x: pd.DataFrame, y: pd.DataFrame):\n",
" self._prior_probabilities = y.iloc[:, 0].value_counts(normalize=True).to_dict()\n",
"\n",
" def _predict_class_probabilities(self, x: pd.DataFrame) -> pd.DataFrame:\n",
" values = [self._prior_probabilities[cls] for cls in self.get_class_labels()]\n",
" return pd.DataFrame([values] * len(x), columns=self.get_class_labels(), index=x.index)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adapting a model implementation from another machine learning library is typically just a few lines. For models that adhere to the scikit-learn interfaces for learning and prediction, there are abstract base classes that make the adaptation particularly straightforward."
]
},
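{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a scikit-learn classifier without a predefined wrapper could be adapted along the following lines. This is a sketch, not a definitive implementation: it assumes that the abstract base class `AbstractSkLearnVectorClassificationModel` lives in the same module as the random forest model used above and that its constructor accepts the scikit-learn model constructor followed by the model's parameters; the wrapper class name is purely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"\n",
"# hypothetical wrapper: assumes the base class signature (model_constructor, **model_params)\n",
"class KNeighborsVectorClassificationModel(\n",
"        sensai.sklearn.classification.AbstractSkLearnVectorClassificationModel):\n",
"    def __init__(self, n_neighbors: int = 5):\n",
"        super().__init__(KNeighborsClassifier, n_neighbors=n_neighbors)"
]
},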
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configuration\n",
"\n",
"Apart from the parameters passed at construction, which are specific to the type of model in question, all vector models can be flexibly configured via methods that can be called post-construction.\n",
"These methods all have the `with_` prefix, indicating that they return the instance itself (akin to the builder pattern), allowing calls to be chained in a single statement.\n",
"\n",
"The most relevant such methods are:\n",
"\n",
"* `with_name` to name the model (for reporting purposes)\n",
"* `with_raw_input_transformer` for adding an initial input transformation\n",
"* `with_feature_generator` and `with_feature_collector` for specifying how to generate features from the input data\n",
"* `with_feature_transformers` for specifying how the generated features shall be transformed\n",
"\n",
"The latter three points are essential for defining modular input pipelines and will be addressed in detail below.\n",
"\n",
"All configured options are fully reflected in the model's string representation, which can be pretty-printed with the `pprint` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"str(model.with_name(\"RandomForest\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.pprint()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modular Pipelines\n",
"\n",
"A key principle of sensAI's vector models is that data pipelines \n",
"* can be **strongly associated with a model**. This is critically important of several heterogeneous models shall be applied to the same use case. Typically, every model has different requirements regarding the data it can process and the representation it requires to process it optimally.\n",
"* are to be **modular**, meaning that a pipeline can be composed from reusable and user-definable components.\n",
"\n",
"An input pipeline typically serves the purpose of answering the following questions:\n",
"\n",
"* **How shall the data be pre-processed?**\n",
"\n",
" It might be necessary to process the data before we can use it and extract data from it.\n",
" We may need to filter or clean the data;\n",
" we may need to establish a usable representation from raw data (e.g. convert a string-based representation of a date into a proper data structure);\n",
" or we may need to infer/impute missing data.\n",
"\n",
" The relevant abstraction for this task is `DataFrameTransformer`, which, as the name suggests, can arbitrarily transform a data frame.\n",
" All non-abstract class implementations have the prefix `DFT` in sensAI and thus are easily discovered through auto-completion.\n",
"\n",
" A `VectorModel` can be configured to apply a pre-processing transformation via method `with_raw_input_transformers`.\n",
"\n",
"* **What is the data used by the model?**\n",
"\n",
" The relevant abstraction is `FeatureGenerator`. Via `FeatureGenerator` instances, a model can define which set of features is to be used. Moreover, these instances can hold meta-data on the respective features, which can be leveraged for downstream representation. \n",
" In sensAI, the class names of all feature generator implementations use the prefix `FeatureGenerator`.\n",
"\n",
" A `VectorModel` can be configured to answer this question via method `with_feature_generator` (or `with_feature_collector`).\n",
"\n",
"* **How does that data need to be represented?**\n",
"\n",
" Different models can require different representations of the same data. For example, some models might require all features to be numeric, thus requiring categorical features to be encoded, while others might work better with the original representation.\n",
" Furthermore, some models might work better with numerical features normalised or scaled in a certain way while it makes no difference to others.\n",
" We can address these requirements by adding model-specific transformations.\n",
"\n",
" The relevant abstraction is, once again, `DataFrameTransformer`.\n",
"\n",
" A `VectorModel` can be configured to apply a transformation to its features via method `with_feature_transformers`.\n",
"\n",
"The three pipeline stages are applied in the order presented above, and all components are optional, i.e. if a model does not define any raw input transformers, then the original data remains unmodified. If a model defines no feature generator, then the set of features is given by the full input data frame, etc.\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "sensai",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
