Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Update Keras documentation #832

Open
wants to merge 4 commits into
base: main
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
6 changes: 3 additions & 3 deletions ci/posix.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -45,12 +45,12 @@ jobs:

- bash: |
source activate dask-ml-test
pip install tensorflow>=2.3.0
pip install scikeras>=0.1.8
pip install tensorflow>=2.4.0
pip install scikeras>=0.3.2
python -c "import tensorflow as tf; print('TF ' + tf.__version__)"
python -c "import scikeras; print('SciKeras ' + scikeras.__version__)"
displayName: "install Tensorflow and SciKeras"
condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
# condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

- script: |
source activate dask-ml-test
Expand Down
33 changes: 17 additions & 16 deletions docs/source/keras.rst
Original file line number Diff line number Diff line change
Expand Up @@ -14,8 +14,8 @@ these packages need to be installed:

.. code-block:: bash

$ pip install tensorflow>=2.3.0
$ pip install scikeras>=0.1.8
$ pip install tensorflow>=2.4.0
$ pip install scikeras>=0.3.2
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Why is this required? Dask-ML will still work with the removed versions, right?

Copy link
Author

@adriangb adriangb May 7, 2021

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Things should work generally, but some of the syntax in this tutorial may not. I think we should either update the versions, or remove them altogether (since not specifying a version usually gets you the latest version anyway).

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Dask-ML will not work with SciKeras v0.1.7. I think that version didn't have serialization (?).

We should make a note about the versioning. "The example below uses X. The usage with lower versions may be different than this example."

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Wasn't there also some issue about serialization of stateful optimizers like Adam?

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I'll add a note along the lines of #832 (comment)

Wasn't there also some issue about serialization of stateful optimizers like Adam?

Yeah, we fixed that in v0.3.0, which is another good reason to bump the "recommended" version numbers in these docs, although I don't think we want to mention that here right?

Copy link
Member

@stsievert stsievert May 7, 2021

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

we fixed [serialization] in v0.3.0

That's a really good reason to require SciKeras v0.3.0.


These are the minimum versions that Dask-ML requires to use Tensorflow/Keras.

Expand All @@ -36,24 +36,18 @@ normal way to create a `Keras Sequential model`_
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

def build_model(lr=0.01, momentum=0.9):
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

👍

def build_model():
layers = [Dense(512, input_shape=(784,), activation="relu"),
Dense(10, input_shape=(512,), activation="softmax")]
model = Sequential(layers)

opt = tf.keras.optimizers.SGD(
learning_rate=lr, momentum=momentum, nesterov=True,
)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
return model
return Sequential(layers)

Now, we can use the SciKeras to create a Scikit-learn compatible model:

.. code-block:: python

from scikeras.wrappers import KerasClassifier
niceties = dict(verbose=False)
model = KerasClassifier(build_fn=build_model, lr=0.1, momentum=0.9, **niceties)
model = KerasClassifier(build_model, loss="categorical_crossentropy", optimizer=tf.keras.optimizers.SGD, **niceties)

This model will work with all of Dask-ML: it can use NumPy arrays as inputs and
obeys the Scikit-learn API. For example, it's possible to use Dask-ML to do the
Expand All @@ -63,12 +57,19 @@ following:
:class:`~dask_ml.model_selection.HyperbandSearchCV`.
* Use Keras with Dask-ML's :class:`~dask_ml.wrappers.Incremental`.

If we want to tune ``lr`` and ``momentum``, SciKeras requires that we pass
``lr`` and ``momentum`` at initialization:
If we want to tune SGD's ``learning_rate`` and ``momentum``, SciKeras requires that we pass
``learning_rate`` and ``momentum`` at initialization:

.. code-block::
.. code-block:: python

model = KerasClassifier(build_fn=build_model, lr=None, momentum=None, **niceties)
model = KerasClassifier(
build_model,
loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD,
optimizer__learning_rate=0.1,
optimizer__momentum=0.9,
**niceties
)

.. _SciKeras: https://github.com/adriangb/scikeras

Expand Down Expand Up @@ -101,7 +102,7 @@ And let's perform the basic task of tuning our SGD implementation:
.. code-block:: python

from scipy.stats import loguniform, uniform
params = {"lr": loguniform(1e-3, 1e-1), "momentum": uniform(0, 1)}
params = {"optimizer__learning_rate": loguniform(1e-3, 1e-1), "optimizer__momentum": uniform(0, 1)}
X, y = get_mnist()

Now, the search can be run:
Expand Down
58 changes: 28 additions & 30 deletions tests/model_selection/test_keras.py
Original file line number Diff line number Diff line change
Expand Up @@ -18,29 +18,22 @@

pytestmark = [
pytest.mark.skipif(
version.parse(tf.__version__) < version.parse("2.3.0"),
version.parse(tf.__version__) < version.parse("2.4.0"),
reason="pickle support",
),
pytest.mark.skipif(
version.parse(scikeras.__version__) < version.parse("0.1.8"),
reason="partial_fit support",
version.parse(scikeras.__version__) < version.parse("0.3.2"),
reason="default parameter syntax",
),
]
except ImportError:
pytestmark = pytest.mark.skip(reason="Missing tensorflow or scikeras")


def _keras_build_fn(lr=0.01):
layers = [
Dense(512, input_shape=(784,), activation="relu"),
Dense(10, input_shape=(512,), activation="softmax"),
]

model = Sequential(layers)

opt = tf.keras.optimizers.SGD(learning_rate=lr)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
return model
def _keras_build_fn():
layers = [Dense(512, input_shape=(784,), activation="relu"),
Dense(10, input_shape=(512,), activation="softmax")]
return Sequential(layers)


@gen_cluster(client=True, Worker=Nanny, timeout=20)
Expand All @@ -51,23 +44,28 @@ def test_keras(c, s, a, b):
assert y.dtype == np.dtype("int64")

model = KerasClassifier(
model=_keras_build_fn, lr=0.01, verbose=False, loss="categorical_crossentropy",
_keras_build_fn,
verbose=False,
loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD,
optimizer__learning_rate=0.01,
)
params = {"lr": loguniform(1e-3, 1e-1)}
model.fit(X, y).score(X, y)
# params = {"optimizer__learning_rate": loguniform(1e-3, 1e-1)}

search = IncrementalSearchCV(
model, params, max_iter=3, n_initial_parameters=5, decay_rate=None
)
yield search.fit(X, y)
# search.fit(X, y)
# search = IncrementalSearchCV(
# model, params, max_iter=3, n_initial_parameters=5, decay_rate=None
# )
# yield search.fit(X, y)
# # search.fit(X, y)

assert search.best_score_ >= 0
# assert search.best_score_ >= 0

# Make sure the model trains, and scores aren't constant
scores = {
ident: [h["score"] for h in hist]
for ident, hist in search.model_history_.items()
}
assert all(len(hist) == 3 for hist in scores.values())
nuniq_scores = [pd.Series(v).nunique() for v in scores.values()]
assert max(nuniq_scores) > 1
# # Make sure the model trains, and scores aren't constant
# scores = {
# ident: [h["score"] for h in hist]
# for ident, hist in search.model_history_.items()
# }
# assert all(len(hist) == 3 for hist in scores.values())
# nuniq_scores = [pd.Series(v).nunique() for v in scores.values()]
# assert max(nuniq_scores) > 1