Members: Michał Bączyk, Stephen DiAdamo, Sean Mcilvane, Eraraya Ricardo Muten, Ankit Khandelwal, Andrei Tomut, and Renata Wong.
Our Final Presentation Notebook, which clearly presents all the methods and results, can be found here.
This repository is composed of two parts:
- Testing autoencoding strategies for anomaly detection on two datasets: breast cancer and credit card fraud.
- Testing how entanglement resources can be added to autoencoders to enhance their encoding and decoding ability.
The preliminary results for the breast cancer and credit card fraud datasets are presented in Use-case_Cancer_detection and Use-case_Fraud_detection, respectively. We follow the approach developed in https://arxiv.org/pdf/2112.08869.pdf, but explore more approaches than the original anomaly detection paper: we also implement an enhanced autoencoder and a patch autoencoder that had not previously been used for anomaly detection, together with some improvements of our own (for example, connecting the encoder circuits).
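For readers new to the technique, the sketch below shows the core training idea the use-case notebooks build on: a variational encoder is trained on normal samples so that the designated "trash" qubits end up close to |0⟩, and a new sample whose trash qubits deviate from |0⟩ is flagged as an anomaly. The qubit counts, the angle embedding, and the `StronglyEntanglingLayers` ansatz below are illustrative assumptions, not the exact circuits from our notebooks.

```python
# Minimal sketch of trash-qubit training for QAE-based anomaly detection.
# Sizes, embedding, and ansatz are assumptions made for illustration only.
import pennylane as qml
from pennylane import numpy as np

n_data = 4   # qubits carrying the input features
n_trash = 2  # qubits that the encoder should drive to |0>
dev = qml.device("default.qubit", wires=n_data)

@qml.qnode(dev)
def trash_z(features, weights):
    # Encode classical features as rotation angles, then apply the encoder.
    qml.AngleEmbedding(features, wires=range(n_data), rotation="Y")
    qml.StronglyEntanglingLayers(weights, wires=range(n_data))
    # <Z> on each trash qubit (<Z> = 1 exactly when the qubit is in |0>).
    return [qml.expval(qml.PauliZ(w)) for w in range(n_data - n_trash, n_data)]

def cost(weights, batch):
    # Train on "normal" samples only: push every trash qubit towards |0>.
    loss = 0.0
    for x in batch:
        loss = loss + sum((1.0 - z) / 2 for z in trash_z(x, weights))
    return loss / len(batch)

def anomaly_score(weights, x):
    # Large score = trash qubits far from |0> = likely anomaly.
    return sum((1.0 - z) / 2 for z in trash_z(x, weights))

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_data)
weights = np.random.uniform(0, np.pi, size=shape, requires_grad=True)
normal_batch = np.random.uniform(0, np.pi, size=(8, n_data), requires_grad=False)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(25):
    weights = opt.step(lambda w: cost(w, normal_batch), weights)

print("score on a 'normal' sample:", anomaly_score(weights, normal_batch[0]))
```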
Our results show high accuracy for the experiments on the MNIST data sets, and for cancer detection we observe an improvement using our version of the patch encoder. To speed up training on the financial data, we used JAX to parallelize the optimization step (a minimal sketch of this pattern follows the results below). To read more about our results, please check the notebooks listed in the repository structure below; a summary follows:
Cancer detection:
- Compression accuracy: 0.8609424846861528
- Classification (split: 0.845):
  - benign classification accuracy: 0.9230769230769231
  - malignant classification accuracy: 0.9098360655737705
  - total accuracy: 0.9157175398633257

Fraud detection (simulator; the hardware run is in the Braket notebook listed below):
- Compression accuracy: 0.9106666637654454
- Classification (split: 0.75):
  - fraud classification accuracy: 0.83
  - legal classification accuracy: 0.93
  - total accuracy: 0.88
With E1 (split: 0.67):
- class 1 classification accuracy: 1.0
- class 0 classification accuracy: 0.9943820224719101
- total accuracy: 0.9972222222222222

With E2 (split: 0.67):
- class 1 classification accuracy: 1.0
- class 0 classification accuracy: 0.9943820224719101
- total accuracy: 0.9972222222222222

With E3 (split: 0.53):
- class 1 classification accuracy: 1.0
- class 0 classification accuracy: 0.949438202247191
- total accuracy: 0.975
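As mentioned above, JAX was used to speed up training on the financial data. Below is a minimal sketch of that pattern, assuming a recent PennyLane with the JAX interface: the circuit evaluation is batched with `jax.vmap` and compiled with `jax.jit`. The toy circuit and sizes are placeholders, not the actual fraud-detection autoencoder.

```python
# Hedged sketch of batching and compiling the cost evaluation with JAX.
# The circuit is a stand-in; only the vmap/jit pattern is the point.
import jax
import jax.numpy as jnp
import pennylane as qml

n_wires = 3
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev, interface="jax")
def trash_z(x, weights):
    qml.AngleEmbedding(x, wires=range(n_wires), rotation="Y")
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    return qml.expval(qml.PauliZ(n_wires - 1))  # a single trash qubit

# vmap evaluates the whole batch in one traced call; jit compiles it with XLA.
batched_trash_z = jax.vmap(trash_z, in_axes=(0, None))

@jax.jit
def cost(weights, batch):
    # Push the trash qubit towards |0> (i.e. <Z> -> 1) over the whole batch.
    return jnp.mean((1.0 - batched_trash_z(batch, weights)) / 2)

grad_fn = jax.jit(jax.grad(cost))

key_w, key_b = jax.random.split(jax.random.PRNGKey(0))
shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_wires)
weights = jax.random.uniform(key_w, shape, minval=0.0, maxval=jnp.pi)
batch = jax.random.uniform(key_b, (16, n_wires), minval=0.0, maxval=jnp.pi)

for _ in range(20):
    weights = weights - 0.1 * grad_fn(weights, batch)
print("batch cost:", cost(weights, batch))
```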
Repository structure:

```
├── EAQAE approaches
│   │   3 conceptually different approaches showing how entanglement might be used as a resource
│   │   in the training of QAEs (a toy sketch follows the tree).
│   │
│   ├── EAQAE 3-1-3; entangling directly encoder and decoder qubits; training both encoder and decoder.ipynb
│   ├── EAQAE 4-1-4; 2 EPR pairs shared.ipynb
│   └── EAQAE 4-1-4; entangling directly encoder and decoder qubits.ipynb
│
├── MNIST_benchmark
│   │   Our experiments with the MNIST data set, used for benchmarking and for comparison with the
│   │   implementation from the original paper.
│   │
│   ├── mnist_JAX
│   ├── six_one_six
│   ├── six_three_six
│   ├── six_two_six
│   └── results: mnist_JAX/digits data.xlsx
│
├── Use-case_Cancer_detection
│   │   We applied the quantum autoencoder for anomaly detection to identify the benign cells in the Kaggle
│   │   dataset: https://www.kaggle.com/uciml/breast-cancer-wisconsin-data/discussion
│   │
│   ├── best results: Cancer_encoder_e5-SelectedFeautures-ent.ipynb
│   ├── hardware results: e5_real-ent.ipynb  (noise degrades the results on real hardware; with more time we could train on hardware and mitigate the errors)
│   └── hardware results: e5_real.ipynb      (maybe during an internship ;))
│
├── Use-case_Fraud_detection
│   │   We applied the quantum autoencoder for anomaly detection to the Kaggle credit card transaction
│   │   dataset (https://www.kaggle.com/mlg-ulb/creditcardfraud) to spot fraudulent transactions, with decent results.
│   │
│   ├── best results: BEST_fraud_detection; QuantumCreditFraud-best_pre_Braket.ipynb
│   └── hardware results: QuantumCreditFraud_BraketResults.ipynb
│
├── qencode
│   │   This module keeps all the pieces of an autoencoder easy to connect with each other via the
│   │   QubitsArrangement class. It also provides a range of initializer, encoder, and decoder circuits
│   │   implemented with PennyLane (the design idea is sketched after the tree).
│
├── LICENSE
│
├── requirements.txt
│
└── README.md  <- project README
```
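To make the entanglement-as-a-resource idea behind the EAQAE notebooks more concrete, here is a minimal, hedged sketch: an EPR (Bell) pair is prepared across one encoder-side and one decoder-side ancilla before the encoder and decoder ansätze are applied, so the two halves of the autoencoder share entanglement. The register layout, the `StronglyEntanglingLayers` ansatz, and the trash-qubit cost proxy are illustrative assumptions, not the exact circuits used in the notebooks.

```python
# Toy sketch of pre-shared entanglement between encoder and decoder registers.
# All structural choices (wire layout, ansatz, cost proxy) are assumptions
# made for illustration only.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=9)

def epr_pair(a, b):
    # Prepare |Phi+> = (|00> + |11>)/sqrt(2) across wires a and b.
    qml.Hadamard(wires=a)
    qml.CNOT(wires=[a, b])

@qml.qnode(dev)
def eaqae_4_1_4(features, enc_weights, dec_weights):
    epr_pair(4, 5)                                    # shared resource
    qml.AngleEmbedding(features, wires=[0, 1, 2, 3], rotation="Y")
    # Encoder: compress the 4 data qubits towards the latent qubit (wire 0),
    # with access to its half of the EPR pair (wire 4).
    qml.StronglyEntanglingLayers(enc_weights, wires=[0, 1, 2, 3, 4])
    # Decoder: rebuild on wires [0, 6, 7, 8] from the latent qubit, fresh |0>
    # qubits, and its half of the EPR pair (wire 5).
    qml.StronglyEntanglingLayers(dec_weights, wires=[0, 6, 7, 8, 5])
    # Standard QAE training proxy: drive the trash qubits (1, 2, 3) to |0>.
    return [qml.expval(qml.PauliZ(w)) for w in [1, 2, 3]]

enc_shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=5)
dec_shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=5)
print(qml.draw(eaqae_4_1_4)(
    np.random.uniform(0, np.pi, size=4),
    np.random.uniform(0, np.pi, size=enc_shape),
    np.random.uniform(0, np.pi, size=dec_shape),
))
```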
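The `qencode` module is the glue between these experiments. Purely as an illustration of that design idea (the class and attribute names below are invented for this sketch and are not the actual `qencode` API), a wire-arrangement object lets initializer, encoder, and decoder circuits agree on which wires play which role without hard-coding indices:

```python
# Hypothetical illustration of the wire-arrangement idea; NOT the qencode API.
from dataclasses import dataclass

@dataclass
class WireArrangement:
    n_input: int      # qubits carrying the embedded features
    n_latent: int     # qubits kept after compression
    n_auxiliary: int  # extra qubits (e.g. shared EPR halves)

    @property
    def input_wires(self):
        return list(range(self.n_input))

    @property
    def latent_wires(self):
        return self.input_wires[: self.n_latent]

    @property
    def trash_wires(self):
        return self.input_wires[self.n_latent:]

    @property
    def auxiliary_wires(self):
        return list(range(self.n_input, self.n_input + self.n_auxiliary))

# An encoder circuit can act on arr.input_wires + arr.auxiliary_wires, while
# the cost function only ever needs arr.trash_wires.
arr = WireArrangement(n_input=4, n_latent=1, n_auxiliary=2)
print(arr.latent_wires, arr.trash_wires, arr.auxiliary_wires)
```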
Project done during QHack 2022.
Submitted to the following challenges:
- Bio-QML Challenge
- Quantum Finance Challenge
- Amazon Braket Challenge
- IBM Qiskit Challenge
- Hybrid Algorithms Challenge