
Printing pretrained torch model options #82

Open
engahmed1190 opened this issue Sep 26, 2017 · 4 comments

Comments

@engahmed1190

engahmed1190 commented Sep 26, 2017

I am trying to print the training options of a Torch model, using the provided model model.t7.

I am using this code to print the options:

require 'torch'
require 'nn'

require 'InstanceNormalization'


--[[
Prints the options that were used to train a feedforward model.
--]]


local cmd = torch.CmdLine()
cmd:option('-model', 'models/instance_norm/candy.t7')
local opt = cmd:parse(arg)

print('Loading model from ' .. opt.model)
local checkpoint = torch.load(opt.model)

for k, v in pairs(checkpoint.opt) do
  if type(v) == 'table' then
    v = table.concat(v, ',')
  end
  print(string.format('%s: %s', k, v))
end

but I am getting this error:

/src/torch/install/bin/luajit: print_options.lua:22: bad argument #1 to 'pairs' (table expected, got nil)
stack traceback:
	[C]: in function 'pairs'
	print_options.lua:22: in main chunk
	[C]: in function 'dofile'
	.../src/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00406670

@engahmed1190
Author

This is the model that you provided, for reference. It loaded perfectly fine, but the main issue is with the evaluate method.

nn.Sequential {
  [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> (18) -> (19) -> (20) -> (21) -> (22) -> (23) -> output]
  (0): TorchObject(nn.TVLoss, {'_type': 'torch.FloatTensor', 'strength': 0,
    'x_diff': [torch.FloatTensor of size 3x511x511],
    'gradInput': [torch.FloatTensor with no dimension],
    'y_diff': [torch.FloatTensor of size 3x511x511],
    'train': True, 'output': [torch.FloatTensor with no dimension]
  })
  (1): nn.SpatialReplicationPadding(4, 4, 4, 4)
  (2): nn.SpatialConvolution(3 -> 32, 9x9)
  (3): nn.InstanceNormalization
  (4): nn.ReLU
  (5): nn.SpatialConvolution(32 -> 64, 3x3, 2, 2, 1, 1)
  (6): nn.InstanceNormalization
  (7): nn.ReLU
  (8): nn.SpatialConvolution(64 -> 128, 3x3, 2, 2, 1, 1)
  (9): nn.InstanceNormalization
  (10): nn.ReLU
  (11): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (12): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (13): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (14): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (15): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (16): nn.SpatialFullConvolution(128 -> 64, 3x3, 2, 2, 1, 1, 1, 1)
  (17): nn.InstanceNormalization
  (18): nn.ReLU
  (19): nn.SpatialFullConvolution(64 -> 32, 3x3, 2, 2, 1, 1, 1, 1)
  (20): nn.InstanceNormalization
  (21): nn.ReLU
  (22): nn.SpatialReplicationPadding(1, 1, 1, 1)
  (23): nn.SpatialConvolution(32 -> 3, 3x3)
}

The model loads perfectly fine, but when I try to evaluate it, I get this error:

 Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 194, in convert
    print (model.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in evaluate
    self.applyToModules(lambda m: m.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 26, in applyToModules
    func(module)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in <lambda>
    self.applyToModules(lambda m: m.evaluate())
TypeError: 'NoneType' object is not callable

What is the main reason behind evaluate() raising this NoneType error?
Is there any other way that is equivalent to m.evaluate()?

For reference, this is also the model I am trying to evaluate.
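As a workaround for the question above: classes deserialized with `unknown_classes=True` come back as placeholder objects whose `evaluate` attribute is `None`, which is what makes `applyToModules(lambda m: m.evaluate())` blow up. A minimal sketch of an equivalent to `m.evaluate()` that flips the `train` flag directly instead of calling each module's method (this assumes, as legacy-nn containers do, that children are exposed through a `.modules` list; the function name is my own):

```python
def set_eval(module):
    # Equivalent in effect to m.evaluate(): set train = False on every
    # module in the tree, without calling each module's evaluate()
    # method (which is None for modules loaded as unknown classes).
    if hasattr(module, 'train'):
        module.train = False
    for sub in getattr(module, 'modules', None) or []:
        set_eval(sub)
```

Calling `set_eval(model)` before conversion should put the whole tree into inference mode without touching the missing `evaluate` methods.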

@DmitryUlyanov
Owner

The file convert-fast-neural-style.py does not belong to this repo, so I cannot help you...

@engahmed1190
Author

engahmed1190 commented Sep 30, 2017

I can provide it for you, but this is the part I am getting the error in.
This is the load function:

def load_torch_model(path):
    model = load_lua(path, unknown_classes=True)
    replace_module(
        model,
        lambda m: isinstance(m, TorchObject) and
        m.torch_typename() == 'nn.InstanceNormalization',
        create_instance_norm
    )
    replace_module(
        model,
        lambda m: isinstance(m, SpatialFullConvolution),
        fix_full_conv
    )
    return model
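`replace_module` itself is not shown in the snippet. A hypothetical reconstruction of what a helper with that call signature might look like, assuming containers expose their children through a `.modules` list (both the implementation and that assumption are mine, not from the script):

```python
def replace_module(model, predicate, replace_fn):
    # Hypothetical sketch: walk a container's .modules list and swap
    # any child matching `predicate` with the result of `replace_fn`.
    children = getattr(model, 'modules', None) or []
    for i, child in enumerate(children):
        if predicate(child):
            children[i] = replace_fn(child)
        else:
            replace_module(child, predicate, replace_fn)
    return model
```

Replacing in-place in the parent's list (rather than returning a new tree) matches how the loader above mutates `model` and then returns it.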

and this is the converter:

def convert_instance_norm(builder, name, layer, input_names, output_names):
    if not isinstance(layer, InstanceNormalization):
        raise TypeError('Unsupported type {}'.format(layer,))

    epsilon = layer.eps
    weight = layer.weight.numpy()
    bias = None
    if layer.bias is not None:
        bias = layer.bias.numpy()

    builder.add_batchnorm(
        name=name,
        channels=weight.shape[0],
        gamma=weight,
        beta=bias,
        compute_mean_var=True,
        instance_normalization=True,
        input_name=input_names[0],
        output_name=output_names[0],
        epsilon=epsilon
    )

    return output_names


and this is fix_full_conv:

def fix_full_conv(m):
    m.finput = None
    m.fgradInput = None
    return m

When I use the evaluate function I get the previous error. My final aim is to convert this model to Core ML.
Can you help?

@DmitryUlyanov
Owner

It seems you need to have instance norm defined in nn/legacy in pytorch. I have never tried to load torch models into pytorch, so I don't know how to do it.
