
# Interferometric Neural Networks (INNs)

All the results from the attached notebooks are reported in the paper:

Arun Sehrawat, Interferometric Neural Networks, arXiv:2310.16742 [quant-ph] (2023).


In this paper, we introduce INNs, depicted below in hierarchical order: the top INN is made of the INNs shown in the middle, and each middle INN is made of the INNs shown at the bottom.

*(Figure: hierarchical structure of the INNs.)*


(1) A Quadratic Unconstrained Binary Optimization (QUBO) problem is a combinatorial optimization problem specified by a real square matrix $Q$. In the Jupyter Notebook QUBO_with_INN, we generated two sets of QUBO instances, with the $Q$ matrix sampled from a discrete uniform distribution in one set and from the standard normal distribution in the other. Each set contains one thousand instances with 17 binary variables (qubits). For each problem instance, we obtained two solutions: (1) a global optimum through an exhaustive search over all $2^{17}=131072$ possibilities and (2) a solution from the bottom INN. We then computed the optimality gap between the two solutions. The INN yields the global optimum with success rates of 95% and 89% for the uniform and normal distributions, respectively, and in both cases the optimality gaps remain under 2%.
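As a point of reference for the exhaustive-search baseline, here is a minimal sketch (not the notebook's code; the function names `qubo_value` and `brute_force_qubo` are illustrative) of finding the global minimum of $x^{T} Q x$ over all binary vectors $x$:

```python
# Sketch (assumption, not the paper's implementation): brute-force search for
# the global minimum of a QUBO objective x^T Q x over binary vectors x.
from itertools import product

def qubo_value(Q, x):
    """Evaluate x^T Q x for a binary tuple x and a square matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Enumerate all 2^n binary assignments; return (best_value, best_x)."""
    n = len(Q)
    best_value, best_x = None, None
    for x in product((0, 1), repeat=n):
        v = qubo_value(Q, x)
        if best_value is None or v < best_value:
            best_value, best_x = v, x
    return best_value, best_x

# Tiny 3-variable example (upper-triangular Q).
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
value, x = brute_force_qubo(Q)  # global optimum: value -2 at x = (1, 0, 1)
```

For $n=17$ this enumerates $2^{17}=131072$ candidates, which is still tractable; the cost doubles with every additional variable, which is what motivates heuristic solvers such as the INN.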

*(Figure: optimality gaps.)*


(2) In the Jupyter Notebook Binary_Image_classification_with_INN, we employed the bottom INN for binary image classification on the MNIST dataset. To create a 2-class classification problem, we gathered all images of two specific digits, denoted $a$ and $b$, labeled as the negative and positive class, respectively. After training the INN, we evaluated its performance with accuracy and $\text{f}_1$ scores on the test set. The results are reported for every pair $a,b=0,\cdots,9$ with $a\neq b$.
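The subset construction described above can be sketched as follows (a hypothetical helper, not taken from the notebook; `make_binary_subset` and the toy data are illustrative):

```python
# Hypothetical helper (assumption, not the notebook's code): given (image, digit)
# pairs, keep only digits a and b and relabel them 0 (negative) / 1 (positive).
def make_binary_subset(samples, a, b):
    subset = []
    for image, digit in samples:
        if digit == a:
            subset.append((image, 0))   # negative class
        elif digit == b:
            subset.append((image, 1))   # positive class
    return subset                        # all other digits are dropped

# Toy example with placeholder "images" (string identifiers here).
samples = [("img0", 3), ("img1", 5), ("img2", 7), ("img3", 3)]
pairs = make_binary_subset(samples, a=3, b=7)
# pairs == [("img0", 0), ("img2", 1), ("img3", 0)]
```

Running this construction for every ordered pair $(a,b)$ with $a\neq b$ yields the 90 binary problems whose scores fill the matrices below.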

*(Figures: accuracy and $\text{f}_1$ score matrices.)*


(3) In the Jupyter Notebook Image_classification_with_INN, we employed the top INN for image classification on both the MNIST and FashionMNIST datasets, each with $C=10$ classes. As the number of layers increases, the INN's performance, measured by accuracy and average $\text{f}_1$, improves on both datasets. In summary, we attained accuracies and average $\text{f}_1$ scores of 93% and 83% on the MNIST and FashionMNIST datasets, respectively.
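For reference, the "average $\text{f}_1$" over $C$ classes is the macro average of the per-class $\text{f}_1 = 2\,\text{TP}/(2\,\text{TP}+\text{FP}+\text{FN})$ scores. A minimal sketch (an assumption about the metric's definition, not the notebook's code; `macro_f1` is an illustrative name):

```python
# Sketch: macro-averaged F1 over C classes from true and predicted labels.
def macro_f1(y_true, y_pred, num_classes):
    scores = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / num_classes     # unweighted mean over classes

# Toy example with C = 3 classes.
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 1, 1, 0, 1, 2]
score = macro_f1(y_true, y_pred, 3)
```

Unlike plain accuracy, the macro average weights every class equally, so a model that ignores a rare class is penalized accordingly.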

*(Figure: accuracy and average $\text{f}_1$ versus the number of layers.)*


(4) In the paper, we also introduce IGANs, made of the top INNs, for image generation. Separate IGANs were trained on the MNIST and CelebA datasets; their complete implementation is given in the notebook IGAN. These IGANs successfully generate images of the digits 0 to 9 and of human faces.

*(Figure: generated face images.)*