create ipynb rst
KindXiaoming committed Aug 17, 2024
1 parent 0ad7d67 commit 7e8897c
Showing 469 changed files with 24,872 additions and 27,984 deletions.
12 changes: 12 additions & 0 deletions docs/.ipynb_checkpoints/community-checkpoint.rst
@@ -0,0 +1,12 @@
.. _examples:

Examples
--------

.. toctree::
:maxdepth: 1

Community/Community_1_physics_informed_kan.rst
Community/Community_2_protein_sequence_classification.rst


14 changes: 9 additions & 5 deletions docs/.ipynb_checkpoints/demos-checkpoint.rst
@@ -1,3 +1,5 @@
.. _api-demo:

API Demos
---------

@@ -6,11 +8,13 @@ API Demos

API_demo/API_1_indexing.rst
API_demo/API_2_plotting.rst
API_demo/API_3_grid.rst
API_demo/API_4_extract_activations.rst
API_demo/API_5_initialization_hyperparameter.rst
API_demo/API_3_extract_activations.rst
API_demo/API_4_initialization.rst
API_demo/API_5_grid.rst
API_demo/API_6_training_hyperparameter.rst
API_demo/API_7_pruning.rst
API_demo/API_8_checkpoint.rst
API_demo/API_8_regularization.rst
API_demo/API_9_video.rst
API_demo/API_10_device.rst
API_demo/API_11_create_dataset.rst
API_demo/API_12_checkpoint_save_load_model.rst
24 changes: 24 additions & 0 deletions docs/.ipynb_checkpoints/examples-checkpoint.rst
@@ -0,0 +1,24 @@
.. _examples:

Examples
--------

.. toctree::
:maxdepth: 1

Example/Example_1_function_fitting.rst
Example/Example_3_classfication.rst
Example/Example_4_classfication.rst
Example/Example_5_special_functions.rst
Example/Example_6_PDE_interpretation.rst
Example/Example_7_PDE_accuracy.rst
Example/Example_8_continual_learning.rst
Example/Example_9_singularity.rst
Example/Example_10_relativity-addition.rst
Example/Example_11_encouraing_linear.rst
Example/Example_12_unsupervised_learning.rst
Example/Example_13_phase_transition.rst
Example/Example_14_knot_supervised.rst
Example/Example_15_knot_unsupervised.rst


23 changes: 23 additions & 0 deletions docs/.ipynb_checkpoints/interp-checkpoint.rst
@@ -0,0 +1,23 @@
.. _examples:

Examples
--------

.. toctree::
:maxdepth: 1

Interp/Interp_1_Hello, MultKAN.rst
Interp/Interp_2_Advanced MultKAN.rst
Interp/Interp_3_KAN_Compiler.rst
Interp/Interp_4_feature_attribution.rst
Interp/Interp_5_test_symmetry.rst
Interp/Interp_6_test_symmetry_NN.rst
Interp/Interp_8_adding_auxillary_variables.rst
Interp/Interp_9_different_plotting_metrics.rst
Interp/Interp_10_hessian.rst
Interp/Interp_10A_swap.rst
Interp/Interp_10B_swap.rst
Interp/Interp_11_sparse_init.rst



18 changes: 18 additions & 0 deletions docs/.ipynb_checkpoints/physics-checkpoint.rst
@@ -0,0 +1,18 @@
.. _examples:

Examples
--------

.. toctree::
:maxdepth: 1

Physics/Physics_1_Lagrangian.rst
Physics/Physics_2A_conservation_law.rst
Physics/Physics_2B_conservation_law_2D.rst
Physics/Physics_3_blackhole.rst
Physics/Physics_4A_constitutive_laws_P11.rst
Physics/Physics_4B_constitutive_laws_P12_with_prior.rst
Physics/Physics_4C_constitutive_laws_P12_without_prior.rst



@@ -0,0 +1,231 @@
API 6: Training Hyperparameters
===============================

Regularization helps interpretability by making KANs sparser, but this
may require some hyperparameter tuning. Let's see how hyperparameters
affect training.

Load KAN and create_dataset

.. code:: ipython3

    from kan import *
    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print(device)

    f = lambda x: torch.exp(torch.sin(torch.pi*x[:,[0]]) + x[:,[1]]**2)
    dataset = create_dataset(f, n_var=2, device=device)
    dataset['train_input'].shape, dataset['train_label'].shape

.. parsed-literal::

    cuda

.. parsed-literal::

    (torch.Size([1000, 2]), torch.Size([1000, 1]))
Default setup

.. code:: ipython3

    # train the model
    model = KAN(width=[2,5,1], grid=5, k=3, seed=1, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 3.34e-02 | test_loss: 3.29e-02 | reg: 4.93e+00 | : 100%|█| 20/20 [00:05<00:00, 3.73it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_4_3.png


Parameter 1: :math:`\lambda`, overall penalty strength.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Previously we used :math:`\lambda=0.01`; now we try different values of :math:`\lambda`.

:math:`\lambda=0`

.. code:: ipython3

    # train the model
    model = KAN(width=[2,5,1], grid=5, k=3, seed=1, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=0.00);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 5.51e-03 | test_loss: 6.14e-03 | reg: 1.52e+01 | : 100%|█| 20/20 [00:03<00:00, 5.84it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_7_3.png


:math:`\lambda=1`

.. code:: ipython3

    # train the model
    model = KAN(width=[2,5,1], grid=5, k=3, seed=0, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=1.0);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 1.70e+00 | test_loss: 1.73e+00 | reg: 1.08e+01 | : 100%|█| 20/20 [00:04<00:00, 4.59it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_9_3.png
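
The three runs above differ only in ``lamb``. One way to replay them systematically is to build one set of ``fit`` keyword arguments per value; this is a standalone sketch (``sweep_lambda`` is a hypothetical helper, not part of the pykan API):

```python
# Sketch: one fit(...) kwargs dict per lambda value. Each config would be
# replayed as
#   model = KAN(width=[2,5,1], grid=5, k=3, seed=1, device=device)
#   model.fit(dataset, **cfg)
# Larger lamb penalizes more, giving a sparser (but less accurate) KAN.
def sweep_lambda(lambdas, steps=20, opt="LBFGS"):
    return [dict(opt=opt, steps=steps, lamb=lam) for lam in lambdas]

configs = sweep_lambda([0.0, 0.01, 1.0])
print([cfg["lamb"] for cfg in configs])  # [0.0, 0.01, 1.0]
```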


Parameter 2: (relative) penalty strength of entropy :math:`\lambda_{\rm ent}`.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The absolute magnitude is :math:`\lambda\lambda_{\rm ent}`. Previously
we set :math:`\lambda=0.01` and :math:`\lambda_{\rm ent}=2.0` (default).
Below we fix :math:`\lambda=0.01` and vary :math:`\lambda_{\rm ent}`.
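
To make the scaling concrete, here is the arithmetic as a standalone sketch (plain Python, not a pykan call): the entropy term enters the total penalty with absolute weight :math:`\lambda\lambda_{\rm ent}`.

```python
# The entropy penalty's absolute weight is lamb * lamb_entropy.
lamb = 0.01          # overall penalty strength used in the runs above
lamb_entropy = 2.0   # default relative entropy weight

effective_entropy_weight = lamb * lamb_entropy
print(effective_entropy_weight)  # 0.02
```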

:math:`\lambda_{\rm ent}=0.0`

.. code:: ipython3

    # train the model
    model = KAN(width=[2,5,1], grid=5, k=3, seed=1, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=0.0);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 4.20e-02 | test_loss: 4.50e-02 | reg: 2.57e+00 | : 100%|█| 20/20 [00:04<00:00, 4.68it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_12_3.png


:math:`\lambda_{\rm ent}=10.0`

.. code:: ipython3

    # train the model
    model = KAN(width=[2,5,1], grid=5, k=3, seed=1, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=10.0);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 7.83e-02 | test_loss: 7.74e-02 | reg: 1.54e+01 | : 100%|█| 20/20 [00:05<00:00, 3.77it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_14_3.png


Parameter 3: seed.
~~~~~~~~~~~~~~~~~~

Previously we used seed = 1. Below we vary the seed.

:math:`{\rm seed} = 42`

.. code:: ipython3

    model = KAN(width=[2,5,1], grid=3, k=3, seed=42, device=device)
    model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01);
    model.plot()

.. parsed-literal::

    checkpoint directory created: ./model
    saving model version 0.0

.. parsed-literal::

    | train_loss: 5.67e-02 | test_loss: 5.72e-02 | reg: 5.81e+00 | : 100%|█| 20/20 [00:04<00:00, 4.81it

.. parsed-literal::

    saving model version 0.1
.. image:: API_6_training_hyperparameter_files/API_6_training_hyperparameter_17_3.png
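
Different seeds change the random initialization, and hence which sparse subnetwork the regularizer settles on. A seed comparison can be sketched as a loop (hypothetical; each iteration would re-run the training cell above):

```python
# Sketch: configs for re-running training with several seeds. The
# commented lines show how each config would be used with pykan.
seeds = [0, 1, 42]
runs = [dict(width=[2, 5, 1], grid=3, k=3, seed=s) for s in seeds]
# for cfg in runs:
#     model = KAN(device=device, **cfg)
#     model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01)
#     model.plot()
print([cfg["seed"] for cfg in runs])  # [0, 1, 42]
```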


File renamed without changes.
