
Lengyel-Epstein Pattern Classification

  • This is my first machine learning project in the scientific computing lab.
  • We aim to provide insight into the classification of pattern images generated from Turing models, using machine learning with feature engineering.
  • This insight is applied to image data generated from the PDEs, using both a neural network (NN) and clustering methods such as k-means and agglomerative clustering.
  • Jan. 15, 2020 ~ Jul. 30, 2020

     

Seoyoung Oh and Seunggyu Lee. "Extracting Insights of Classification for Turing Pattern with Feature Engineering."

Abstract: Data classification and clustering are among the most common applications of machine learning. In this paper, we aim to provide insight into the classification of Turing pattern images, which have high nonlinearity, with feature engineering using machine learning without a multi-layered algorithm. For given image data $X$ whose pixel values are defined in $[-1, 1]$, $X - X^3$ and $\nabla X$ are more meaningful features than $X$ for representing the interface and bulk regions of a complex pattern image. Therefore, we use $X - X^3$ and $\nabla X$ in the neural network and clustering algorithms for classification. The results validate the feasibility of the proposed approach.
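As a rough illustration of this feature engineering (not code from the repository), the two features can be computed from a pattern image with NumPy; the array `X` below is a hypothetical 2D pattern with pixel values in [-1, 1].

```python
import numpy as np

# Hypothetical 64x64 pattern image with pixel values in [-1, 1].
X = np.random.uniform(-1.0, 1.0, size=(64, 64))

# X - X^3 vanishes at X = -1, 0, 1 and peaks in between,
# so it emphasizes the interface between the two bulk phases.
interface_feature = X - X**3

# |grad X| is large where the pattern changes rapidly (interfaces)
# and small in the nearly flat bulk regions.
gy, gx = np.gradient(X)
gradient_feature = np.sqrt(gx**2 + gy**2)
```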

     

Dataset

The Lengyel–Epstein (LE) model was developed to describe the CIMA chemical reaction:

$$ \frac{\partial u}{\partial t} = D_u \Delta u + k_1 \Big( v - \frac{uv}{1 + v^2} \Big), $$

$$ \frac{\partial v}{\partial t} = D_v \Delta v - k_2 \Big( v + \frac{4uv}{1 + v^2} \Big). $$
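As a minimal sketch of how such patterns can be generated numerically (the repository uses MATLAB; this is an illustrative Python version with assumed parameters, grid, and periodic boundaries, not the paper's setup), explicit Euler time stepping of the system above could look like:

```python
import numpy as np

def laplacian(f, h):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

# Illustrative (not the paper's) parameters and grid.
Du, Dv, k1, k2 = 1.0, 10.0, 9.0, 0.4
n, h, dt, steps = 128, 1.0, 0.01, 20000

rng = np.random.default_rng(0)
u = rng.uniform(-0.1, 0.1, (n, n))
v = rng.uniform(-0.1, 0.1, (n, n))

for _ in range(steps):
    ruv = u * v / (1.0 + v**2)                         # common reaction term
    u += dt * (Du * laplacian(u, h) + k1 * (v - ruv))
    v += dt * (Dv * laplacian(v, h) - k2 * (v + 4.0 * ruv))
```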

  • Patterns in the Lengyel-Epstein model are created with MATLAB | Code

  • We choose dissimilar patterns for classification.

     

Process

1. Chemical prepattern and reaction-diffusion models for pigmentation | Presentation
: Lengyel-Epstein equation in 1D with MATLAB

2. Creating the pattern images (2D) based on the Lengyel-Epstein model with MATLAB | Presentation
: Classify 3 dissimilar patterns through a neural network

3. Gradient Descent | Presentation

4. Single-Layer Neural Network with Softmax | Presentation

5. CNN (Convolutional Neural Network) | Presentation
: The CNN performs very well, but I want to improve the performance of the single-layer neural network.

6. Gradient feature in affine layer | Presentation

7. Adam regularization | Presentation

8. PCA (Principal Component Analysis) | Presentation
: I don't think this helps reduce over-fitting.
I realized that the amount of training data can affect accuracy.

9. Compare 10 cases | Presentation
: When classifying dissimilar images, the accuracy of the 9th model was good, but not for similar images.

10. Classification of all patterns | Presentation
: I think the similar patterns are mathematically the same, which is why the 9th model cannot distinguish them.

11. Complex Patterns in a Simple System | Presentation
: Pearson's classification of Gray–Scott system parameter values

12. k-means clustering | Presentation

13. Recall | Presentation
: Recall the whole process

14. Visualization | Presentation
: We use three features (mean of $X$, mean of $\nabla X$, variance of $X - X^3$) to visualize all patterns in two and three dimensions; a minimal sketch follows this list.
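As a rough sketch of step 14 (not the repository's code), the three per-pattern summary features can be computed as below; each pattern is assumed to be a 2D NumPy array with values in [-1, 1], and the mean of $\nabla X$ is taken here as the mean gradient magnitude.

```python
import numpy as np

def summary_features(X):
    """Return (mean X, mean |grad X|, var(X - X^3)) for one pattern image."""
    gy, gx = np.gradient(X)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return np.array([X.mean(), grad_mag.mean(), (X - X**3).var()])

# `patterns` is a hypothetical list of pattern images; stack one
# 3-component feature vector per pattern for 2D/3D scatter plots.
patterns = [np.random.uniform(-1, 1, (64, 64)) for _ in range(5)]
features = np.stack([summary_features(X) for X in patterns])
print(features.shape)  # (5, 3)
```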

     

Architecture

1. Classification | Code

  • A single-layer neural network (NN) is used for the classification of the patterns.
    We train an NN with a softmax output layer comprising 3 output nodes. The forward propagation is written as one of the following (a minimal sketch follows the equations):

$$ A = \sigma (W X + b), $$

$$ A = \sigma (W \nabla X + b), $$

$$ A = \sigma (W (X - X^3) + b), $$

$$ A = \sigma (W_1 X + W_2 \nabla X + b), $$

$$ A = \sigma (W_1 X + W_2 (X - X^3) + b), $$

$$ A = \sigma (W_1 \nabla X + W_2 (X - X^3) + b). $$
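A minimal NumPy sketch of the last variant above (softmax over 3 classes on two feature inputs); the shapes, initialization, and placeholder features are illustrative assumptions, not the repository's code:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)     # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

d, m, c = 64 * 64, 32, 3                     # flattened image size, batch size, classes
X = np.random.uniform(-1, 1, (d, m))         # batch of flattened patterns
grad_X = np.random.randn(d, m)               # placeholder for precomputed gradient features
cubic = X - X**3                             # X - X^3 features

W1 = 0.01 * np.random.randn(c, d)            # weights for the gradient features
W2 = 0.01 * np.random.randn(c, d)            # weights for the X - X^3 features
b = np.zeros((c, 1))

A = softmax(W1 @ grad_X + W2 @ cubic + b)    # A = sigma(W1 ∇X + W2 (X - X^3) + b)
print(A.shape)                               # (3, 32); each column sums to 1
```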

     

Results

1. Classification with Dissimilar patterns | Presentation

| Features               | Gradient Descent | Adam |
|------------------------|------------------|------|
| $X$                    | 0.36             | 0.42 |
| $\nabla X$             | 0.40             | 0.36 |
| $X-X^3$                | 0.44             | 0.39 |
| $X$ and $\nabla X$     | 0.42             | 0.44 |
| $X$ and $X-X^3$        | 0.40             | 0.36 |
| $\nabla X$ and $X-X^3$ | 0.90             | 0.90 |

2. k-means and Agglomerative Clustering

  • Feature Selection for clustering | Presentation | Code

  • Visualization of all 36 patterns as 40 points per pattern in a three-dimensional space.

  • Visualization of all 36 patterns in a three-dimensional space, with points in the same cluster shown in the same color.
    One point per pattern is visualized. | Code

  • K-means clustering and agglomerative clustering (a minimal sketch follows below)
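A rough sketch (not the repository's code) of clustering the patterns by their 3-component feature vectors with scikit-learn; the `features` matrix and the cluster count are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# Hypothetical (n_patterns, 3) feature matrix:
# columns are mean X, mean |grad X|, var(X - X^3).
features = np.random.rand(36, 3)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
agglo_labels = AgglomerativeClustering(n_clusters=3).fit_predict(features)

print(kmeans_labels)
print(agglo_labels)
```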


Info

  • Authors: Seoyoung Oh, Seunggyu Lee
  • Journal: KSIAM
  • Year: 2020