Hello. Could you please help me figure out the error I get when I run this training command?
```python
for epochs, parameters in train_schedule.items():
    print("")
    print("training layers {} until epoch {} with learning_rate {}".format(
        parameters["layers"], epochs, parameters["learning_rate"]))
    model.train(coco_train, coco_val,
                learning_rate=parameters["learning_rate"],
                epochs=epochs,
                layers=parameters["layers"])
```
After that, I got the following output and error:
```
training layers heads until epoch 1 with learning_rate 0.02

Starting at epoch 0. LR=0.02

Checkpoint Path: C:\Users\supha\Desktop\FIBO\intern\Onboard\siamese-mask-rcnn-master - Copy\logs\siamese_mrcnn_small_coco_example\siamese_mrcnn_{epoch:04d}.h5
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [11], in <cell line: 1>()
      2 print("")
      3 print("training layers {} until epoch {} with learning_rate {}".format(parameters["layers"],
      4     epochs,
      5     parameters["learning_rate"]))
----> 6 model.train(coco_train, coco_val,
      7     learning_rate=parameters["learning_rate"],
      8     epochs=epochs,
      9     layers=parameters["layers"])

File ~\Desktop\FIBO\intern\Onboard\siamese-mask-rcnn-master - Copy\lib\model.py:711, in SiameseMaskRCNN.train(self, train_dataset, val_dataset, learning_rate, epochs, layers, augmentation)
    709 modellib.log("Checkpoint Path: {}".format(self.checkpoint_path))
    710 self.set_trainable(layers)
--> 711 self.compile(learning_rate, self.config.LEARNING_MOMENTUM)
    713 # Work-around for Windows: Keras fails on Windows when using
    714 # multiprocessing workers. See discussion here:
    715 # https://github.com/matterport/Mask_RCNN/issues/13#issuecomment-353124009
    716 if os.name is 'nt':

File ~\Desktop\FIBO\intern\Onboard\siamese-mask-rcnn-master - Copy\lib\model.py:540, in SiameseMaskRCNN.compile(self, learning_rate, momentum)
    538 continue
    539 loss = (tf.reduce_mean(input_tensor=layer.output, keepdims=True) * self.config.LOSS_WEIGHTS.get(name, 1.))
--> 540 self.keras_model.add_loss(loss)
    542 # Add L2 Regularization
    543 # Skip gamma and beta weights of batch normalization layers.
    544 reg_losses = [
    545     keras.regularizers.l2(self.config.WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32)
    546     for w in self.keras_model.trainable_weights
    547     if 'gamma' not in w.name and 'beta' not in w.name]

File ~\Desktop\FIBO\intern\Onboard\.venv\lib\site-packages\keras\engine\base_layer_v1.py:1054, in Layer.add_loss(self, losses, inputs)
   1052 for symbolic_loss in symbolic_losses:
   1053     if getattr(self, '_is_graph_network', False):
-> 1054         self._graph_network_add_loss(symbolic_loss)
   1055     else:
   1056         # Possible a loss was added in a Layer's `build`.
   1057         self._losses.append(symbolic_loss)

File ~\Desktop\FIBO\intern\Onboard\.venv\lib\site-packages\keras\engine\functional.py:908, in Functional._graph_network_add_loss(self, symbolic_loss)
    906 new_nodes.extend(add_loss_layer.inbound_nodes)
    907 new_layers.append(add_loss_layer)
--> 908 self._insert_layers(new_layers, new_nodes)

File ~\Desktop\FIBO\intern\Onboard\.venv\lib\site-packages\keras\engine\functional.py:851, in Functional._insert_layers(self, layers, relevant_nodes)
    848 self._nodes_by_depth[depth].append(node)
    850 # Insert layers and update other layer attrs.
--> 851 layer_set = set(self._self_tracked_trackables)
    852 deferred_layers = []
    853 for layer in layers:

File ~\Desktop\FIBO\intern\Onboard\.venv\lib\site-packages\tensorflow\python\training\tracking\data_structures.py:668, in ListWrapper.__hash__(self)
    665 def __hash__(self):
    666     # List wrappers need to compare like regular lists, and so like regular
    667     # lists they don't belong in hash tables.
--> 668     raise TypeError("unhashable type: 'ListWrapper'")

TypeError: unhashable type: 'ListWrapper'
```
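For what it's worth, the final frame of the traceback shows why the `set(...)` call fails: TensorFlow's `ListWrapper` deliberately refuses to be hashed, mimicking plain-list semantics, so when one of the tracked objects is itself a `ListWrapper`, building the set raises. A minimal pure-Python sketch (the `ListWrapper` class below is a stand-in for illustration, not the real TF class):

```python
# Stand-in for tensorflow.python.training.tracking.data_structures.ListWrapper:
# it compares like a list, and like a list it raises on hashing.
class ListWrapper(list):
    def __hash__(self):
        raise TypeError("unhashable type: 'ListWrapper'")

# Keras's Functional._insert_layers builds `set(self._self_tracked_trackables)`;
# if one of the tracked entries is itself a ListWrapper, hashing it while
# constructing the set raises exactly the error from the traceback.
try:
    layer_set = set([ListWrapper(["conv1", "conv2"])])
except TypeError as err:
    print(err)  # unhashable type: 'ListWrapper'
```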
I am running this code with TensorFlow 2.5.0 and Keras 2.8.0. I tried changing a few things in the loss function code, but I still get the same error. Thanks for your help!
The project was built with TensorFlow 1 (I don't remember exactly which version) and Keras 2.1.6.
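Given that, one pragmatic first step is to fail fast when the installed versions don't match what the repo was built against, before any training starts. A hypothetical helper sketch (the function name is made up here, and the exact TF 1.x release is unknown, so the check only pins the major version; only Keras 2.1.6 is confirmed above):

```python
def is_compatible(tf_version, keras_version):
    """Return True when the installed versions match what this repo was
    built against: TensorFlow 1.x and Keras 2.1.x (the reply above pins
    Keras 2.1.6; the exact TF 1.x release is not recorded)."""
    tf_major = tf_version.split(".")[0]
    return tf_major == "1" and keras_version.startswith("2.1.")

# The environment reported in this issue fails the check:
print(is_compatible("2.5.0", "2.8.0"))   # False
# A TF 1.x environment (e.g. 1.15.0, chosen as an example) passes:
print(is_compatible("1.15.0", "2.1.6"))  # True
```

In practice you would feed it `tf.__version__` and `keras.__version__` at startup and raise early instead of hitting the `add_loss` traceback mid-training.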