diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index c55d112d..17260a19 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -225,7 +225,7 @@

 Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x55f6ed50fb38>
+## Parent env: <environment: 0x55dfb6df1b30>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html
index deb9c3c5..678472df 100644
--- a/dev/articles/get_started.html
+++ b/dev/articles/get_started.html
@@ -236,7 +236,7 @@

 Loss
 #> clone: function (deep = FALSE, ..., replace_values = TRUE)
 #> Private:
 #> .__clone_r6__: function (deep = FALSE)
-#> Parent env: <environment: 0x556c58224810>
+#> Parent env: <environment: 0x5584a92d3ea8>
 #> Locked objects: FALSE
 #> Locked class: FALSE
 #> Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index b7599b72..0f82ec20 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -104,8 +104,8 @@

 A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> 0.9197 0.6295 -0.9055
-#> -2.5884 0.7595 1.2294
+#> -1.3766 -0.5136 0.3212
+#> -0.1381 0.5962 0.2744
 #> [ CPUFloatType{2,3} ]

 A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -117,8 +117,8 @@

 A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#> 0.6356 -0.0022 0.3491 -1.0918
-#> -1.5718 1.8213 -1.6298 0.2341
+#> 0.2026 -0.6605 0.1249 0.6521
+#> 0.4501 -0.2476 0.1827 0.3494
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
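The two hunks above only refresh randomly generated tensor values; the underlying calls are unchanged. As a reference, a minimal sketch of what the primer runs, using the torch package for R (values are random, so exact numbers will differ):

```r
library(torch)

input <- torch_randn(2, 3)   # 2 observations, 3 features
module_1 <- nn_linear(3, 4)  # nn_linear() instantiates an nn_module
output <- module_1(input)    # forward pass; result has shape {2, 4}
```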

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients
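Judging from the tensor shapes recorded in the surrounding hunks (3 input features, a 4-unit hidden layer, and {2,3} softmax output), the ingredients can be sketched as below. The variable names mirror those in the diff; the exact composition is an assumption:

```r
library(torch)

input <- torch_randn(2, 3)
module_1 <- nn_linear(3, 4)     # input -> 4-unit hidden layer
module_2 <- nn_linear(4, 3)     # hidden layer -> outputs
softmax <- nn_softmax(dim = 2)  # normalizes each row to probabilities

output <- softmax(module_2(module_1(input)))  # shape {2, 3}; rows sum to 1
```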

@@ -134,8 +134,8 @@

 A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#> 0.2569 0.4082 0.3350
-#> 0.3488 0.3890 0.2623
+#> 0.3464 0.1966 0.4570
+#> 0.3344 0.1942 0.4714
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

We will now continue with showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

 Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#> 0.6356 -0.0022 0.3491 -1.0918
-#> -1.5718 1.8213 -1.6298 0.2341
+#> 0.2026 -0.6605 0.1249 0.6521
+#> 0.4501 -0.2476 0.1827 0.3494
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
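The `po_module_1$train(list(input))` call in this hunk comes from wrapping an nn_module as a PipeOp. A rough sketch of how such a wrapper might be obtained with mlr3torch; the `po("module", ...)` id and argument names here are assumptions, not confirmed by the diff:

```r
library(mlr3torch)
library(torch)

input <- torch_randn(2, 3)
module_1 <- nn_linear(3, 4)

# wrap the torch module as a PipeOp (constructor arguments are assumptions)
po_module_1 <- po("module", id = "module_1", module = module_1)
output <- po_module_1$train(list(input))[[1]]  # shape {2, 4}
```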

 Note we only use the $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -196,8 +196,8 @@

 Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#> 0.2569 0.4082 0.3350
-#> 0.3488 0.3890 0.2623
+#> 0.3464 0.1966 0.4570
+#> 0.3344 0.1942 0.4714
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
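The `module_graph` used here is a Graph of such module PipeOps. Using the `%>>%` composition operator from mlr3pipelines, it could be assembled roughly as follows; the PipeOp ids and the `po("module", ...)` arguments are assumptions for illustration:

```r
library(torch)
library(mlr3torch)
library(mlr3pipelines)

input <- torch_randn(2, 3)
po_module_1 <- po("module", id = "module_1", module = nn_linear(3, 4))
po_module_2 <- po("module", id = "module_2", module = nn_linear(4, 3))
po_softmax <- po("module", id = "softmax", module = nn_softmax(dim = 2))

# chain the PipeOps into a Graph and run a forward pass through it
module_graph <- po_module_1 %>>% po_module_2 %>>% po_softmax
output <- module_graph$train(input)[[1]]  # shape {2, 3}
```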

 While this object allows us to easily perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2569  0.4082  0.3350
-#>  0.3488  0.3890  0.2623
+#>  0.3464  0.1966  0.4570
+#>  0.3344  0.1942  0.4714
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

 small_module(input)
 #> torch_tensor
-#> 0.2935 0.0055 0.1483 0.6004
-#> 0.3216 2.2073 -0.5215 -0.5771
+#> 0.2228 -0.2205 -0.0451 0.1615
+#> 0.2096 0.2013 -0.4630 -0.3194
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

 Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> 1.7664 4.3770 -3.4375 -0.1343
-#> 1.7621 4.0245 -3.1681 -0.1304
-#> 1.6213 4.0500 -3.1483 -0.0796
+#> 2.5036 3.6613 -0.2827 -0.3227
+#> 2.2548 3.3180 -0.4614 -0.4464
+#> 2.2830 3.3477 -0.3081 -0.3362
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

 The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -690,14 +689,14 @@

 Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#> 2.1381 -1.4562 -2.3068 -3.0385
-#> 2.0602 -1.6112 -2.1434 -2.9874
+#> 0.7996 -2.4717 -1.8110 -2.8502
+#> 0.5982 -2.1502 -1.6179 -2.5826
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#> -0.4947 -0.6350 0.1133 -0.4768 0.0452
-#> -0.4947 -0.6350 0.1133 -0.4768 0.0452
+#> 0.2892 0.8388 0.0890 -0.1588 0.4280
+#> 0.2892 0.8388 0.0890 -0.1588 0.4280
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that the po("nn_merge_cat") concatenates these, as expected:

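The concatenation itself is plain torch: two tensors with a matching batch dimension are joined along dimension 2, so a {2,4} and a {2,5} tensor yield {2,9}, as in the nn_merge_cat hunk. A standalone sketch (tensor values here are random placeholders, not the vignette's):

```r
library(torch)

a <- torch_randn(2, 4)  # like the "linear1" output
b <- torch_randn(2, 5)  # like the "linear3" output
dim(torch_cat(list(a, b), dim = 2))  # 2 9
```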
@@ -705,8 +704,8 @@

 Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#> 2.1381 -1.4562 -2.3068 -3.0385 -0.4947 -0.6350 0.1133 -0.4768 0.0452
-#> 2.0602 -1.6112 -2.1434 -2.9874 -0.4947 -0.6350 0.1133 -0.4768 0.0452
+#> 0.7996 -2.4717 -1.8110 -2.8502 0.2892 0.8388 0.0890 -0.1588 0.4280
+#> 0.5982 -2.1502 -1.6179 -2.5826 0.2892 0.8388 0.0890 -0.1588 0.4280
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index dc7f345e..cf3a7b7d 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index eb93e89f..e80dce3e 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index eaee34f8..5ed37234 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index 6f29b401..5946ff1c 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index aa4af5c2..2922de58 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index c59a49d7..c2c7967c 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -387,7 +387,7 @@

 Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.563486.x.output
+#> * pointer: nop.09428b.x.output
 #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: