- AI for IoT and Mobile
- Attacks and Defenses
- Federated Learning
- GAN and VAE
- Interpretability and Attacks to New Scenario
- Multimodal
- SGX, TrustZone and Crypto
- Survey
- Other links
- 2016, ICLR, Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2017, SenSys, DeepIoT: Compressing Deep Neural Network Structures for Sensing Systems with a Compressor-Critic Framework
- 2018, ECCV, AMC: AutoML for Model Compression and Acceleration on Mobile Devices
- 2018, ICLR, Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- 2018, SenSys, FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
- 2019, arXiv, A Programmable Approach to Model Compression
- 2019, arXiv, Neural Network Distiller: A Python Package For DNN Compression Research
- 2019, BigComp, Towards Robust Compressed Convolutional Neural Networks
- 2019, CVPR, FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
- 2019, CVPR, HAQ: Hardware-Aware Automated Quantization with Mixed Precision
- 2019, CVPR, MnasNet: Platform-Aware Neural Architecture Search for Mobile
- 2019, NeurIPS, Positive-Unlabeled Compression on the Cloud
- 2015, NeurIPS, Distilling the Knowledge in a Neural Network
- 2019, arXiv, Adversarially Robust Distillation
- 2020, AAAI, Adversarially Robust Distillation
- 2020, AAAI, Ultrafast Video Attention Prediction with Coupled Knowledge Distillation
- 2019, arXiv, Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
- 2017, arXiv, Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning
- 2017, arXiv, To prune, or not to prune: exploring the efficacy of pruning for model compression
- 2018, arXiv, Dynamic Channel Pruning: Feature Boosting and Suppression
- 2019, arXiv, Adversarial Neural Pruning with Latent Vulnerability Suppression
- 2019, arXiv, Localization-aware Channel Pruning for Object Detection
- 2019, arXiv, Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning
- 2019, arXiv, Pruning from Scratch
- 2019, arXiv, Selective Brain Damage: Measuring the Disparate Impact of Model Pruning
- 2019, arXiv, Structured Pruning of Large Language Models
- 2019, arXiv, Towards Compact and Robust Deep Neural Networks
- 2019, ICCV, Adversarial Robustness vs. Model Compression, or Both?
- 2019, ICLR, Rethinking the Value of Network Pruning
- 2019, ICLR, The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
- 2019, ICONIP, Self-Adaptive Network Pruning
- 2019, NeurIPS, Network Pruning via Transformable Architecture Search
- 2020, AAAI, AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates
- 2020, AAAI, PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices
- 2020, ASPLOS, PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning
- 2020, ICLR, Comparing Rewinding and Fine-tuning in Neural Network Pruning
- 2018, arXiv, Combinatorial Attacks on Binarized Neural Networks
- 2018, CVPR, Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
- 2018, ICLR, Attacking Binarized Neural Networks
- 2019, arXiv, Defensive Quantization: When Efficiency Meets Robustness
- 2019, arXiv, Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
- 2019, arXiv, Model Compression with Adversarial Robustness: A Unified Optimization Framework
- 2019, ICLR, Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
- 2020, MLSys, Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
- 2020, MLSys, Searching for Winograd-aware Quantized Networks
- 2020, MLSys, Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks
- 2016, SenSys, Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables
- 2017, arXiv, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017, ICLR, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and less than 0.5MB model size
- 2018, ECCV, PIRM Challenge on Perceptual Image Enhancement on Smartphones: Report
- 2018, MobiCom, FoggyCache: Cross-Device Approximate Computation Reuse
- 2019, arXiv, Characterizing the Deep Neural Networks Inference Performance of Mobile Applications
- 2019, arXiv, Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices
- 2019, arXiv, On-Device Neural Net Inference with Mobile GPUs
- 2019, HPCA, Machine Learning at Facebook: Understanding Inference at the Edge
- 2019, WWW, A First Look at Deep Learning Apps on Smartphones
- 2016, Euro S&P, The Limitations of Deep Learning in Adversarial Settings
- 2016, ICLR, Adversarial Manipulation of Deep Representations
- 2017, AISec, Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
- 2017, AsiaCCS, Practical Black-Box Attacks against Machine Learning
- 2017, S&P, Towards Evaluating the Robustness of Neural Networks
- 2018, arXiv, Are adversarial examples inevitable?
- 2018, CVPR, Robust Physical-World Attacks on Deep Learning Visual Classification
- 2018, ICLR, Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
- 2018, ICML, Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- 2018, KDD, Adversarial Attacks on Neural Networks for Graph Data
- 2018, USENIX, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning
- 2019, arXiv, Adversarial Examples Are a Natural Consequence of Test Error in Noise
- 2019, arXiv, Adversarial Examples Are Not Bugs, They Are Features
- 2019, arXiv, Batch Normalization is a Cause of Adversarial Vulnerability
- 2019, arXiv, WITCHcraft: Efficient PGD attacks with random step size
- 2019, CVPR, Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
- 2019, GECCO, GenAttack: practical black-box attacks with gradient-free optimization
- 2019, ICML, Revisiting Adversarial Risk
- 2019, NDSS, TEXTBUGGER: Generating Adversarial Text Against Real-world Applications
- 2020, arXiv, Feature Purification: How Adversarial Training Performs Robust Deep Learning
- 2020, arXiv, On Adaptive Attacks to Adversarial Example Defenses
- 2020, arXiv, Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
- 2020, arXiv, Towards Feature Space Adversarial Attack
- 2020, CCS, A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
- 2020, CCS, Text Captcha Is Dead: A Large Scale Deployment and Empirical Study
- 2020, CVPR, Adversarial Examples Improve Image Recognition
- 2020, CVPR, High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
- 2020, ECCV, Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search
- 2020, ICLR, A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
- 2020, ICML, Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
- 2020, S&P, Intriguing Properties of Adversarial ML Attacks in the Problem Space
- 2020, USENIX, Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning
- 2020, USENIX, Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
- 2020, USENIX, Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries
- 2015, arXiv, Analysis of classifiers' robustness to adversarial perturbations
- 2016, ICLR, Learning with a Strong Adversary
- 2016, S&P, Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- 2017, arXiv, Extending Defensive Distillation
- 2017, arXiv, The Space of Transferable Adversarial Examples
- 2017, ICLR, Adversarial Machine Learning at Scale
- 2018, arXiv, Gradient Adversarial Training of Neural Networks
- 2018, arXiv, The Taboo Trap: Behavioural Detection of Adversarial Samples
- 2018, CVPR, Partial Transfer Learning with Selective Adversarial Networks
- 2018, ICLR, mixup: Beyond Empirical Risk Minimization
- 2018, ICLR, Stochastic Activation Pruning for Robust Adversarial Defense
- 2018, ICLR, Thermometer Encoding: One Hot Way To Resist Adversarial Examples
- 2018, ICML, An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks
- 2019, arXiv, Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
- 2019, arXiv, Defending Against Misclassification Attacks in Transfer Learning
- 2019, arXiv, Scaleable input gradient regularization for adversarial robustness
- 2019, arXiv, Sitatapatra: Blocking the Transfer of Adversarial Samples
- 2019, arXiv, Stateful Detection of Black-Box Adversarial Attacks
- 2019, arXiv, Towards Compact and Robust Deep Neural Networks
- 2019, arXiv, Transfer of Adversarial Robustness Between Perturbation Types
- 2019, arXiv, Using Honeypots to Catch Adversarial Attacks on Neural Networks
- 2019, arXiv, Using Pre-Training Can Improve Model Robustness and Uncertainty
- 2019, arXiv, What it Thinks is Important is Important: Robustness Transfers through Input Gradients
- 2019, CVPR, Adversarial Defense by Stratified Convolutional Sparse Coding
- 2019, CVPR, Disentangling Adversarial Robustness and Generalization
- 2019, CVPR, Feature Denoising for Improving Adversarial Robustness
- 2019, NDSS, NIC: Detecting Adversarial Samples with Neural Network Invariant Checking
- 2019, USENIX, Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features
- 2020, arXiv, Adversarially-Trained Deep Nets Transfer Better
- 2020, arXiv, One Neuron to Fool Them All
- 2020, arXiv, Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
- 2020, CCS, DeepDyve: Dynamic Verification for Deep Neural Networks
- 2020, USENIX, TEXTSHIELD: Robust Text Classification Based on Multimodal Embedding and Neural Machine Translation
- 2016, CVPR, Improving the Robustness of Deep Neural Networks via Stability Training
- 2016, NeurIPS, Measuring neural net robustness with constraints
- 2017, CAV, Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
- 2017, ICML, Parseval Networks: Improving Robustness to Adversarial Examples
- 2018, arXiv, A Dual Approach to Scalable Verification of Deep Networks
- 2018, arXiv, Adversarial Logit Pairing
- 2018, arXiv, Adversarially Robust Training through Structured Gradient Regularization
- 2018, arXiv, MixTrain: Scalable Training of Verifiably Robust Neural Networks
- 2018, ICLR Workshop, Attacking the Madry Defense Model with L1-based Adversarial Examples
- 2018, ICLR, Certified Defenses against Adversarial Examples
- 2018, ICLR, Ensemble Adversarial Training: Attacks and Defenses
- 2018, ICML, Provable defenses against adversarial examples via the convex outer adversarial polytope
- 2018, NeurIPS SECML, Evaluating and Understanding the Robustness of Adversarial Logit Pairing
- 2018, NeurIPS SECML, Logit Pairing Methods Can Fool Gradient-Based Attacks
- 2018, NeurIPS, Adversarially Robust Generalization Requires More Data
- 2018, NeurIPS, Sparse DNNs with Improved Adversarial Robustness
- 2018, S&P, AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
- 2019, arXiv, Adversarial Robustness May Be at Odds With Simplicity
- 2019, arXiv, Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets
- 2019, arXiv, On Evaluating Adversarial Robustness
- 2019, arXiv, On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks
- 2019, arXiv, Towards Deep Learning Models Resistant to Adversarial Attacks
- 2019, arXiv, Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin
- 2019, CVPR, Adversarial Defense by Stratified Convolutional Sparse Coding
- 2019, CVPR, Robustness via curvature regularization, and vice versa
- 2019, ICCV, Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
- 2019, ICLR, Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
- 2019, ICLR, Deep Anomaly Detection with Outlier Exposure
- 2019, ICLR, L2-Nonexpansive Neural Networks
- 2019, ICLR, Robustness May Be at Odds with Accuracy
- 2019, ICLR, Towards the first adversarially robust neural network model on MNIST
- 2019, ICLR, Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
- 2019, ICML, Improving Adversarial Robustness via Promoting Ensemble Diversity
- 2019, ICML, Theoretically Principled Trade-off between Robustness and Accuracy
- 2019, ICML, Using Pre-Training Can Improve Model Robustness and Uncertainty
- 2019, NeurIPS, A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
- 2019, NeurIPS, Adversarial Robustness through Local Linearization
- 2019, NeurIPS, Adversarial Training and Robustness for Multiple Perturbations
- 2019, NeurIPS, Adversarial Training for Free!
- 2019, NeurIPS, Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
- 2019, NeurIPS, Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
- 2019, NeurIPS, Lower Bounds on Adversarial Robustness from Optimal Transport
- 2019, NeurIPS, Metric learning for adversarial robustness
- 2019, NeurIPS, Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
- 2019, NeurIPS, You only propagate once: Painless adversarial training using maximal principle
- 2019, PMLR, Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers
- 2020, AAAI, Universal Adversarial Training
- 2020, arXiv, Adversarial Training and Provable Robustness: A Tale of Two Objectives
- 2020, arXiv, Do Adversarially Robust ImageNet Models Transfer Better?
- 2020, arXiv, Improving the Adversarial Robustness of Transfer Learning via Noisy Feature Distillation
- 2020, arXiv, Smooth Adversarial Training
- 2020, arXiv, Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
- 2020, CVPR, Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
- 2020, CVPR, Benchmarking Adversarial Robustness on Image Classification
- 2020, ICLR, Adversarial Training and Provable Defenses: Bridging the Gap
- 2020, ICLR, Adversarially robust transfer learning
- 2020, ICLR, Fast is better than free: Revisiting adversarial training
- 2020, ICLR, Intriguing properties of adversarial training at scale
- 2020, ICLR, Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness
- 2020, ICLR, Towards Stable and Efficient Training of Verifiably Robust Neural Networks
- 2020, ICML, Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
- 2020, NeurIPS, Boosting Adversarial Training with Hypersphere Embedding
- 2020, NeurIPS, Understanding and Improving Fast Adversarial Training
- 2017, arXiv, Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
- 2017, CNS, Backdoor Attacks against Learning Systems
- 2018, CCS, Model-Reuse Attacks on Deep Learning Systems
- 2018, CoRR, Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
- 2018, NDSS, Trojaning Attack on Neural Networks
- 2019, Access, BadNets: Evaluating Backdooring Attacks on Deep Neural Networks
- 2019, arXiv, Bypassing Backdoor Detection Algorithms in Deep Learning
- 2019, arXiv, Invisible Backdoor Attacks Against Deep Neural Networks
- 2019, arXiv, Label-Consistent Backdoor Attacks
- 2019, arXiv, Programmable Neural Network Trojan for Pre-Trained Feature Extractor
- 2019, CCS, Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
- 2019, Thesis, Exploring the Landscape of Backdoor Attacks on Deep Neural Network Models
- 2020, AAAI, Hidden Trigger Backdoor Attacks
- 2020, arXiv, Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models
- 2020, arXiv, Clean-Label Backdoor Attacks on Video Recognition Models
- 2020, arXiv, Piracy Resistant Watermarks for Deep Neural Networks
- 2020, CCS, Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features
- 2020, KDD, An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
- 2017, ICCD, Neural Trojans
- 2018, arXiv, Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
- 2018, arXiv, SentiNet: Detecting Physical Attacks Against Deep Learning Systems
- 2018, NeurIPS, Spectral Signatures in Backdoor Attacks
- 2018, RAID, Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
- 2019, ACSAC, STRIP: a defence against trojan attacks on deep neural networks
- 2019, arXiv, NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
- 2019, arXiv, TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
- 2019, CCS, ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
- 2019, NeurIPS, Defending Neural Backdoors via Generative Distribution Modeling
- 2019, IJCAI, DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
- 2019, S&P, Neural cleanse: Identifying and mitigating backdoor attacks in neural networks
- 2020, CVPR, Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
- 2017, S&P, Membership Inference Attacks Against Machine Learning Models
- 2019, CCS, Privacy Risks of Securing Machine Learning Models against Adversarial Examples
- 2020, NDSS, CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples
- 2020, USENIX, Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
- 2018, arXiv, PRIVADO: Practical and Secure DNN Inference with Enclaves
- 2018, arXiv, YerbaBuena: Securing Deep Learning Inference Data via Enclave-based Ternary Model Partitioning
- 2019, CCS, MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
- 2019, MobiCom, Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX
- 2019, S&P, Certified Robustness to Adversarial Examples with Differential Privacy
- 2019, SOSP, Privacy accounting and quality control in the sage differentially private ML platform
- 2018, NeurIPS, Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- 2018, USENIX, When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
- 2020, S&P, Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning
- 2018, arXiv, How To Backdoor Federated Learning
- 2018, arXiv, Mitigating Sybils in Federated Learning Poisoning
- 2019, arXiv, Can You Really Backdoor Federated Learning?
- 2019, arXiv, Deep Leakage from Gradients
- 2019, arXiv, On Safeguarding Privacy and Security in the Framework of Federated Learning
- 2019, ICLR, Analyzing Federated Learning through an Adversarial Lens
- 2014, ICLR, Auto-Encoding Variational Bayes
- 2014, NeurIPS, Generative Adversarial Nets
- 2016, ICLR, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
- 2016, ICML, Autoencoding beyond pixels using a learned similarity metric
- 2016, NeurIPS, InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
- 2017, arXiv, BEGAN: Boundary Equilibrium Generative Adversarial Networks
- 2017, NeurIPS, Improved Training of Wasserstein GANs
- 2017, ICML, Wasserstein GAN
- 2020, CVPR, CNN-generated images are surprisingly easy to spot... for now
- 2018, arXiv, How To Backdoor Federated Learning
- 2018, arXiv, Mitigating Sybils in Federated Learning Poisoning
- 2019, arXiv, Can You Really Backdoor Federated Learning?
- 2019, arXiv, Deep Leakage from Gradients
- 2019, arXiv, On Safeguarding Privacy and Security in the Framework of Federated Learning
- 2019, ICLR, Analyzing Federated Learning through an Adversarial Lens
- 2020, USENIX, Interpretable Deep Learning under Fire
- 2016, CCS, Deep Learning with Differential Privacy
- 2020, NDSS, Adversarial Classification Under Differential Privacy
- 2020, S&P, The Value of Collaboration in Convex Machine Learning with Differential Privacy
- 2019, arXiv, Supervised Multimodal Bitransformers for Classifying Images and Text
- 2019, arXiv, VisualBERT: A Simple and Performant Baseline for Vision and Language
- 2019, TACL, Trick Me If You Can: Human-in-the-Loop Generation of Adversarial Examples for Question Answering
- 2020, AAAI, Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
- 2018, arXiv, PRIVADO: Practical and Secure DNN Inference with Enclaves
- 2018, arXiv, StreamBox-TZ: Secure Stream Analytics at the Edge with TrustZone
- 2018, arXiv, YerbaBuena: Securing Deep Learning Inference Data via Enclave-based Ternary Model Partitioning
- 2019, arXiv, Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices
- 2019, arXiv, Let the Cloud Watch Over Your IoT File Systems
- 2019, MobiCom, Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX
- 2020, arXiv, CrypTFlow: Secure TensorFlow Inference
- 2017, arXiv, A Survey of Model Compression and Acceleration for Deep Neural Networks
- 2018, arXiv, A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security
- 2018, ECCV, AI Benchmark: Running Deep Neural Networks on Android Smartphones
- 2019, arXiv, A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection
- 2019, arXiv, Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
- 2019, arXiv, AI Benchmark: All About Deep Learning on Smartphones in 2019
- 2019, arXiv, Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
- 2019, ICLR, Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
- 2019, TNNLS, Adversarial Examples: Attacks and Defenses for Deep Learning
- 2020, arXiv, A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications