tft.apply_saved_model raises ValueError when running beam pipeline #231
@thisisandreeeee, as per the link, an `InvalidArgumentError` occurs when an operation receives an input tensor that has an invalid value or shape.
@arghyaganguly I think you're right, but I can't quite figure out what the input shape should be. When I call `predict` on the loaded model directly, it works:

```python
# this works
tf.keras.models.load_model(model_dir).predict(
    {f: tf.constant([np.random.uniform() for _ in range(5)]) for f in FEATURES}
)
```

This also seems to be the same shape as the input fed to the pipeline. The input shapes look the same to me, so I'm unclear as to why there's an `InvalidArgumentError`.
I believe `apply_saved_model` is incompatible with Keras. The following works, but is not practical as it would only work with Beam's DirectRunner:
Note that I'm using TFXIO because `CsvCoder.decode` is deprecated in the most recent version of TFT, and I'm also setting `force_tf_compat_v1=False` since you're running Keras-related code in your `preprocessing_fn`.
As a follow-up to Zohar's suggestion above, using `tft.make_and_track_object` [1] to create the Keras model and invoke it will allow doing that inside `inference_fn`, and hence allow using other Beam runners as well. Note this only works when TF2 behavior is not disabled and `force_tf_compat_v1=False`.
[1] https://www.tensorflow.org/tfx/transform/api_docs/python/tft/make_and_track_object
Closing this as there has been no update to the comment thread lately (awaiting response from the user). Please feel free to reopen based on the above comment trace. Thanks.
If we were to integrate this `inference_fn` in a `preprocessing_fn`, would that mean that the TFT output graph we would get would have internalized those tensor operations, and the dependency on the physical location of the saved model would no longer matter?
@varshaan I see within |
I would like to ensemble several pre-trained models within a single TF graph. I'd like to understand whether this is feasible using TensorFlow Transform, and I am planning to use the `tft.apply_saved_model` function to calculate some predictions before exporting the `transform_fn` to be used in the serving signature of some wrapper model. However, I am encountering a `ValueError` when attempting to perform inference on a simple toy model, and the stack trace isn't very informative.

Versions
Steps to reproduce
Create toy model
First, I create a toy classification model that takes two float features as input and returns the probability that the predicted label is positive/negative.
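The toy model code did not survive this scrape; a minimal stand-in matching the description (two float features in, one probability out) might look like the following, where the feature names `f1`/`f2` and the layer sizes are assumptions:

```python
import tensorflow as tf

FEATURES = ["f1", "f2"]


def build_toy_model():
    # Two scalar float features in, one sigmoid probability out.
    inputs = {f: tf.keras.Input(shape=(1,), name=f, dtype=tf.float32)
              for f in FEATURES}
    concat = tf.keras.layers.Concatenate()(list(inputs.values()))
    hidden = tf.keras.layers.Dense(4, activation="relu")(concat)
    output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
    model = tf.keras.Model(inputs=inputs, outputs=output)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```

Note the dict-of-named-inputs signature: this is the shape that a later `predict` call with a `{feature: tensor}` dict relies on.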
Run beam pipeline
Then, I construct a beam pipeline to run `tft.apply_saved_model` on a sample dataset. To create this dataset:

We can then proceed to write the inference function and execute it:
Stack trace
When the above pipeline is run, I encounter the following error:
I'm not too sure what the issue is. I'm guessing it's something silly like an incorrect input signature, but when I try to perform inference manually it works fine.