RandomGaussianBlur crashes when factor=0 #1968

Merged · 4 commits · Jul 31, 2023
Changes from 3 commits
4 changes: 3 additions & 1 deletion keras_cv/layers/preprocessing/random_gaussian_blur.py
@@ -61,7 +61,9 @@ def __init__(self, kernel_size, factor, **kwargs):
        )

    def get_random_transformation(self, **kwargs):
-        factor = self.factor()
+        # `factor` must not become too small, otherwise numerical issues occur.
+        # `keras.backend.epsilon()` behaves like 0 without causing `nan`s.
+        factor = max(self.factor(), keras.backend.epsilon())
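For illustration, here is a minimal pure-Python sketch (not KerasCV's actual implementation) of why a zero blur factor crashes: a Gaussian kernel divides by `2 * sigma**2`, so `sigma = 0` blows up, while clamping to a tiny epsilon yields an identity-like kernel that blurs by effectively nothing.

```python
import math

EPSILON = 1e-7  # stand-in for keras.backend.epsilon()

def gaussian_kernel(kernel_size, sigma):
    """Toy 1-D Gaussian kernel; the exponent divides by 2*sigma**2."""
    half = kernel_size // 2
    weights = [
        math.exp(-(x * x) / (2.0 * sigma * sigma))  # division by zero if sigma == 0
        for x in range(-half, half + 1)
    ]
    total = sum(weights)
    return [w / total for w in weights]

# gaussian_kernel(3, 0.0)      -> crashes (division by zero)
# gaussian_kernel(3, EPSILON)  -> identity-like kernel; the blur is a no-op
```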
Contributor:
Looks like this is breaking tests -- probably because `self.factor()` returns a tf Tensor and `keras.backend.epsilon()` is a Python float.

Perhaps better would be `factor = self.factor() + keras.backend.epsilon()`.

You can run the test locally to verify that this isn't breaking.

Contributor Author:
It would be very convenient if the tests ran automatically on each update of the pull request. I think the issue is fixed now. Can you rerun the tests?

Contributor Author:
I tested it locally and it seems to work now. I think the problem was that TensorFlow has trouble compiling Python's built-in `max`.

Contributor:

They do run automatically for people who have contributed before. However, you should always run the tests locally instead of depending on CI to run them.

Contributor:
Great! Hopefully CI will be happy here as well.

When using `max` in TF, you need the TF built-in version (`tf.maximum`) rather than Python's `max` if you want to operate on TF tensors and have it compile into the TF graph. So in this case we could have used the TF version, but just adding the constant value also works just fine 😄
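To illustrate why Python's `max` fails here, a toy sketch (this is a stand-in class, not TF's real tensor type): a graph-mode tensor has no defined truth value, and `max` must convert the result of a comparison to `bool`, while `+` simply builds another symbolic op.

```python
class SymbolicTensor:
    """Toy stand-in for a TF graph tensor: its truth value is undefined."""

    def __init__(self, name):
        self.name = name

    def __bool__(self):
        # Mirrors TF's "using a tf.Tensor as a Python bool is not allowed".
        raise TypeError("symbolic tensor has no truth value")

    def __gt__(self, other):
        # Comparisons stay symbolic instead of returning True/False.
        return SymbolicTensor(f"({self.name} > {other})")

    def __lt__(self, other):
        return SymbolicTensor(f"({self.name} < {other})")

    def __add__(self, other):
        return SymbolicTensor(f"({self.name} + {other})")

factor = SymbolicTensor("factor")
factor + 1e-7        # fine: builds another symbolic op
# max(factor, 1e-7)  # TypeError: max() must turn the comparison into a bool
```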

Contributor Author (@muxamilian, Jul 31, 2023):

In the meantime I tested it locally and it worked. I couldn't find documentation in the README.md on how to run the tests, but it's not hard to figure out. This could perhaps be improved.

Contributor Author:
The CI tests look good to me, except for an unrelated formatting issue in some other code.

Contributor:

Agreed! The formatting issue is unrelated and I have a fix open in #1993.

We have some (limited) guidance on running tests here

blur_v = RandomGaussianBlur.get_kernel(factor, self.y)
blur_h = RandomGaussianBlur.get_kernel(factor, self.x)
blur_v = tf.reshape(blur_v, [self.y, 1, 1, 1])