Edge Intelligence Workshop 2022

19–20 September 2022, Montréal, Québec, Canada

Scientific Program

Monday, September 19
08:00-08:30 Breakfast (Registration and Poster Setup)
08:30-09:30 Diana Marculescu Keynote: "When Sustainability Meets Machine Learning: Efficient Learning from Cloud to Edge"
09:30-10:00 James Clark What is lost when networks are compressed?
10:00-10:30 Coffee Break and Posters
10:30-11:00 Vahid Partovi Nia Edge implementation of deep models
11:00-11:30 Mark Coates Efficient Bayesian Network Architecture Search for Graph Neural Networks
11:30-12:00 Ehsan Saboori Running 2-bit Quantized CNN Models on ARM CPUs
12:00-13:00 Lunch
13:00-14:00 Evgeni Gousev Keynote: "tinyML: ultimate energy efficient machine learning solution for edgeAI"
14:00-14:30 Muthucumaru Maheswaran JAMScript: A Programming Language for Edge-Oriented Mobile Internet of Things
14:30-15:00 Shahrokh Valaee Cooperative Location Estimation using Federated Learning
15:00-15:30 Coffee Break and Posters
15:30-16:00 Rachel E. Bouserhal Hearables and their potential as a tool for early disease detection
16:00-16:30 Dounia Lakhmiri A Stochastic Proximal Method for Nonsmooth Regularized Finite Sum Optimization
16:30-17:00 Masoud Asgharian Causal Discovery, Independence of Mechanism and Input Assumption, and Selection Bias
17:00-17:30 Michael Rabbat Asynchronous Federated Learning at Scale


Tuesday, September 20
08:00-08:30 Breakfast (Registration and Poster Setup)
08:30-09:30 Wen Tong Keynote: "Machine Learning Based Post-Shannon Cognition Communications"
09:30-10:00 Brett Meyer Transforming Intelligence for the Edge: Challenges and Opportunities in Modeling, Optimization, and Deployment
10:00-10:30 Coffee Break and Posters
10:30-11:00 Yuanhao Yu Challenges for Edge Device Machine Learning Platform
11:00-11:30 Naoya Onizawa Fast-Converging Simulated Annealing for Ising Models Based on Integral Stochastic Computing
11:30-12:00 Ghouthi Boukli Hacene DNN Quantization and Acceleration for Training and Inference
12:00-13:00 Lunch
13:00-14:00 Song Han Keynote: "Efficient AI Computing with Sparsity"
14:00-14:30 Christophe Dubach Very High-Level Synthesis of Neural Network Accelerators for FPGAs
14:30-15:00 François Leduc-Primeau Building Energy-Efficient AI Chips by Exploiting Energy-Reliability Tradeoffs
15:00-15:30 Coffee Break and Posters
15:30-16:00 Yvon Savaria Edge Intelligence: Applications, Lessons Learned, and Platforms
16:00-16:30 Andreas Moshovos Boosting Machine Learning Innovation: Computing Systems that Learn and Adapt
16:30-17:00 Pascal Poupart Uncertainty Aware Federated Learning
17:00-17:30 Sarath Chandar TBD


Board Number Poster Title
Monday, September 19
1 Weighted Group L0-norm Constraint for Sparse Training
2 NAS plus Pipeline for High Throughput Edge Inference BERT
3 Generalizing ProxConnect on Vision Transformer Binarization
4 An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications
5 GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks
6 A Decomposition Method Supporting Many Factorization Structures
7 Retention of Domain Adaptability in Compressed Neural Networks
8 Sharpness-Aware Training for Accurate Inference on Noisy DNN Accelerators
9 On the Importance of Integrating Curriculum Design for Teacher Assistant-based Knowledge Distillation
10 Towards Finding Efficient Students via Blockwise Neural Architecture Search and Knowledge Distillation
11 Quasi-convex floating-point optimization
12 Standard Deviation-Based Quantization for Deep Neural Networks
13 S^3: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks
14 Inspecting the Role of Pretrained Transformers in Federated Learning
15 Quantized One-dimensional Stacked CNN for Seizure Forecasting with Wearables
16 BERT Inference Energy Predictor for Efficient Hardware-aware NAS
17 Speeding up ResNet Architecture with Layer-Targeted Low-Rank Decomposition
18 Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization
Tuesday, September 20
1 Limited-Memory Stochastic Partitioned Quasi-Newton Training
2 A Short Study on Compressing Decoder-Based Language Models
3 Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers
4 Quadratic Regularization Optimizer in Low Precision for Deep Neural Networks: Implementation and Numerical Experience
5 Gradient Distribution Theory for the Exploding and Vanishing Gradient Problem
6 Mixed representation integer fine-tuning of transformer-based models
7 How Robust is Robust wav2vec 2.0 for Edge Applications?: An Exploration into the Effects of Quantization and Model Pruning on “In-the-Wild” Speech Recognition
8 ARMCL BERT: Novel Quantizable BERT Implementation for ARM SoCs
9 Kronecker Decomposition for GPT Compression
10 Dyadic Integer-only BERT
11 Learning Gaussian Restricted Boltzmann Machine using tensorial decompositions
12 Persona-Controlled Dialogue Prompting
13 Toward Training Neural Networks with a Multi-Precision Quadratic Regularization Algorithm
14 iRNN: Integer-only Recurrent Neural Network
15 Latency and Accuracy Predictors for Efficient BERT Hardware-aware NAS
16 Rational SoftMax
17 Partially-Random Initialization: A Smoking Gun for Binarization Hypothesis of BERT