Publications

This page is out of date. For a list of our recent publications, please check out

Conference Publications
(as of September 22, 2021)

Lower Bounds for Shared-Memory Leader Election under Bounded Write Contention
Dan Alistarh, Rati Gelashvili, Giorgi Nadiradze.
In the International Symposium on Distributed Computing (DISC 2021).
Best Paper Award.
[full version]

Comparison Dynamics in Population Protocols
Dan Alistarh, Martin Toepfer, Przemyslaw Uznanski.
In the ACM Symposium on Principles of Distributed Computing (PODC 2021).
[full version]

A Scalable Concurrent Algorithm for Dynamic Connectivity
Alexander Fedorov, Nikita Koval, Dan Alistarh.
In the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2021).
[full version]

Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh.
In the International Conference on Machine Learning (ICML 2021).
[full version]

New Bounds for Distributed Mean Estimation and Variance Reduction
Peter Davies, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, Dan Alistarh.
In the International Conference on Learning Representations (ICLR 2021).
[full version]

Byzantine-Resilient Non-Convex Stochastic Gradient Descent
Dan Alistarh, Zeyuan Allen-Zhu, Faeze Ebrahimian, Jerry Li.
In the International Conference on Learning Representations (ICLR 2021).
[full version]

Elastic Consistency: A Practical Consistency Model for Distributed SGD
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh.
In the AAAI Conference on Artificial Intelligence (AAAI 2021).
[full version]

Asynchronous Optimization for Efficient Training of Deep Networks with Guarantees
Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh.
In the AAAI Conference on Artificial Intelligence (AAAI 2021).
[full version]

Scalable Belief Propagation via Relaxed Scheduling
Vitaly Aksenov, Dan Alistarh, Janne Korhonen.
In Neural Information Processing Systems (NeurIPS 2020).

Adaptive Gradient Quantization for Data-Parallel SGD
Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel Roy, Ali Ramezani-Kebrya.
In Neural Information Processing Systems (NeurIPS 2020).

WoodFisher: Efficient Second-order Approximation for Neural Network Compression
Sidak Pal Singh, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2020).

The Splay-List: A Distribution-Adaptive Concurrent Skip-List
Vitaly Aksenov, Dan Alistarh, Aleksandra Drozdova, Amirkeivan Mohtashami.
In the International Symposium on Distributed Computing (DISC 2020).
Invited to the journal Special Issue dedicated to DISC 2020.

Compressive Sensing using IHT with Low-Precision Data Representation
Merve Gürel, Kaan Kara, Alen Stojanov, Tyler Smith, Thomas Lemmin,
Dan Alistarh, Markus Püschel, Ce Zhang.
IEEE Transactions on Signal Processing, 68, 2020.

Dynamic Averaging Load Balancing on Cycles
Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour.
In the International Colloquium on Automata, Languages, and Programming (ICALP 2020).

On the Sample Complexity of Adversarial Multi-Source PAC Learning
Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph Lampert.
In the International Conference on Machine Learning (ICML 2020).

Inducing and Exploiting Activation Sparsity for Fast Inference in Deep Neural Networks
Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin,
William Leiserson, Sage Moore, Nir Shavit, Dan Alistarh.
In the International Conference on Machine Learning (ICML 2020).

Memory Tagging: Minimalist Synchronization for Scalable Concurrent Data Structures
Dan Alistarh, Trevor Brown, Nandini Singhal.
In the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2020).

Non-Blocking Concurrent Interpolation Search
with Trevor Brown and Aleksandar Prokopec
in PPoPP 2020. Best Paper Award.
[full version]

Taming Unbalanced Training Workloads in Deep Learning with Partial Collectives
with Shigang Li, Tal Ben-Nun, Salvatore di Girolamo, and Torsten Hoefler
in PPoPP 2020.

In Search of the Fastest Concurrent Union-Find Algorithm
with Alexander Fedorov and Nikita Koval
in OPODIS 2019. Best Paper Award.

Why Extension-Based Proofs Fail
with James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu
in STOC 2019.

SparCML: High-Performance Sparse Communication for Machine Learning
with Cedric Renggli, Saleh Ashkboos, Mehdi Aghagholzadeh, and Torsten Hoefler
In Supercomputing (SC) 2019.

Distributed Learning over Unreliable Networks
with Chen Yu, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, and Ce Zhang
in ICML 2019.

Powerset Convolutional Neural Networks
with Chris Wendler and Markus Püschel
in NeurIPS 2019.

Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers
with Giorgi Nadiradze and Nikita Koval
in SPAA 2019.

Scalable FIFO Channels for Programming via Communicating Sequential Processes
with Nikita Koval and Roman Elizarov
in Euro-Par 2019. 

Dan Alistarh, Zeyuan Allen-Zhu, Jerry Li:
Byzantine Stochastic Gradient Descent
NeurIPS 2018: 4618-4628

Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, Cédric Renggli:
The Convergence of Sparsified Gradient Methods
NeurIPS 2018: 5977-5987

Dan Alistarh, Christopher De Sa, Nikola Konstantinov:
The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
PODC 2018: 169-178

Dan Alistarh, Trevor Brown, Justin Kopinsky, Giorgi Nadiradze:
Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms 
PODC 2018: 377-386

Dan Alistarh:
A Brief Tutorial on Distributed and Concurrent Machine Learning
PODC 2018: 487-488

Dan Alistarh, James Aspnes, Rati Gelashvili
Space-Optimal Majority in Population Protocols
SODA 2018: 2221-2239

Dan Alistarh, Trevor Brown, Justin Kopinsky, Jerry Zheng Li, Giorgi Nadiradze
Distributionally Linearizable Data Structures
SPAA 2018: 133-142

Dan Alistarh, Syed Kamran Haider, Raphael Kübler, Giorgi Nadiradze:
The Transactional Conflict Problem
SPAA 2018: 383-392

Alen Stojanov, Tyler Michael Smith, Dan Alistarh, Markus Püschel:
Fast Quantized Arithmetic on x86: Trading Compute for Data Movement
SiPS 2018: 349-354
[blog] [code]

Demjan Grubic, Leo Tam, Dan Alistarh, Ce Zhang:
Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study
EDBT 2018: 145-156

QSGD: Communication-Efficient SGD via Randomized Quantization
with Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnovic.
In NIPS 2017. Spotlight Paper. Invited for presentation at NVIDIA GTC.
[PDF]

Robust Detection in Leak-Prone Population Protocols
with Bartlomiej Dudek, Adrian Kosowski, David Soloveichik, Przemyslaw Uznanski.
In DNA 2017: 155-171. Invited to the Special Issue of Natural Computing for DNA23. 
[PDF]

Forkscan: Conservative Memory Reclamation for Modern Operating Systems
with William M. Leiserson, Alexander Matveev, Nir Shavit.
In EuroSys 2017: 483-498
[PDF]

FPGA-Accelerated Dense Linear Machine Learning
with Kaan Kara, Gustavo Alonso, Onur Mutlu, Ce Zhang.
In FCCM 2017: 160-167
[PDF] (paywall; please ask for full text)

ZipML: Training Linear Models with End-to-End Low Precision
with Hantian Zhang, Jerry Li, Kaan Kara, Ji Liu, Ce Zhang.
In ICML 2017: 4035-4043
[PDF]

The Power of Choice in Priority Scheduling
with Justin Kopinsky, Jerry Li, Giorgi Nadiradze.
In PODC 2017: 283-292
[PDF]

Time-Space Trade-offs in Population Protocols
with James Aspnes, David Eisenstat, Rati Gelashvili, Ronald L. Rivest.
In SODA 2017: 2560-2579.
[PDF]

Lease/Release: Architectural Support for Scaling Contended Data Structures
with William Hasenplaugh and Kamran Haider.
In Proceedings of the 2016 Symposium on Principles and Practice of Parallel Programming (PPoPP 2016).
Invited to the Special Issue of Transactions on Parallel Computing for PPoPP 2016.
[PDF]

Streaming Min-Max Hypergraph Partitioning
with Jenny Iglesias and Milan Vojnovic.
In Neural Information Processing Systems (NIPS 2015).
[PDF]

Non-Blocking Algorithms under Stochastic Schedulers
with Thomas Sauerwald and Milan Vojnovic.
In the 34th ACM Symposium on Principles of Distributed Computing (PODC 2015).
[PDF]

Fast and Exact Majority in Population Protocols
with Rati Gelashvili and Milan Vojnovic.
In the 34th ACM Symposium on Principles of Distributed Computing (PODC 2015).
[PDF]

How to Elect a Leader Faster than a Tournament
with Rati Gelashvili and Adrian Vladu.
In the 34th ACM Symposium on Principles of Distributed Computing (PODC 2015).
[PDF]

ThreadScan: Automatic and Scalable Memory Reclamation
with William Leiserson, Alexander Matveev, and Nir Shavit.
In the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2015).
Invited to the Special Issue of Transactions on Parallel Computing dedicated to SPAA 2015.
[PDF]

The SprayList: A Scalable Relaxed Priority Queue
with Justin Kopinsky, Jerry Li, and Nir Shavit.
In the Symposium on Principles and Practice of Parallel Programming (PPoPP 2015).
Best Artefact Award.
[PDF]

Inherent Limitations of Hybrid Transactional Memory
with Justin Kopinsky, Petr Kuznetsov, Srivatsan Ravi, and Nir Shavit.
In the 29th International Symposium on Distributed Computing (DISC 2015).
[PDF]

Polylogarithmic-Time Leader Election in Population Protocols
with Rati Gelashvili.
In the International Colloquium on Automata, Languages, and Programming (ICALP 2015), Track C.
[PDF]

Communication-Efficient Randomized Consensus
with James Aspnes, Valerie King and Jared Saia.
In Proceedings of the 28th International Symposium on Distributed Computing (DISC 2014).
[PDF]

Balls-into-Leaves: Sub-logarithmic Renaming in Synchronous Message-Passing Systems
with Oksana Denysyuk, Luis Rodrigues and Nir Shavit.
In Proceedings of the 33rd ACM Symposium on Principles of Distributed Computing (PODC 2014).
Invited to the Special Issue of Distributed Computing dedicated to PODC 2014.
[preliminary version]

Are Lock-Free Concurrent Algorithms Practically Wait-Free?
with Keren Censor-Hillel and Nir Shavit.
In Proceedings of the 46th ACM Symposium on Theory of Computing (STOC 2014).
[PDF]

The LevelArray: A Fast, Practical Long-Lived Renaming Algorithm
with Justin Kopinsky, Alexander Matveev, and Nir Shavit.
In Proceedings of the 34th International Conference on Distributed Computing Systems (ICDCS 2014).
[PDF]

StackTrack: An Automated Transactional Approach to Concurrent Memory Reclamation
with Patrick Eugster, Maurice Herlihy, Alexander Matveev, and Nir Shavit.
In Proceedings of the European Conference on Computer Systems (EuroSys 2014).
[PDF]

Dynamic Task Allocation in Asynchronous Shared Memory
with James Aspnes, Michael A. Bender, Rati Gelashvili, and Seth Gilbert.
In Proceedings of SODA 2014, Portland, Oregon, 2014.
[full version]

Firefly Synchronization with Asynchronous Wake-up
with Alejandro Cornejo, Mohsen Ghaffari, and Nancy Lynch.
[preliminary version] (presented at BDA 2013)

Randomized Loose Renaming in O(log log n) Time
with James Aspnes, George Giakkoupis, and Philipp Woelfel.
In Proceedings of PODC 2013, Montreal, Canada, 2013.
[full version]

How to Allocate Tasks Asynchronously
with Michael A. Bender, Seth Gilbert, and Rachid Guerraoui.
In Proceedings of FOCS 2012, New Brunswick, New Jersey, 2012.
[full version]

Early Deciding Synchronous Renaming in O(log f) Rounds or Less
with Hagit Attiya, Rachid Guerraoui, and Corentin Travers.
In Proceedings of SIROCCO 2012, Reykjavik, Iceland, 2012.
[full version]

On the Cost of Composing Shared-Memory Algorithms
with Rachid Guerraoui, Petr Kuznetsov, and Giuliano Losa.
In Proceedings of SPAA 2012, Pittsburgh, PA, 2012.
[full version]

The Complexity of Renaming
with James Aspnes, Seth Gilbert, and Rachid Guerraoui.
In Proceedings of FOCS 2011, Palm Springs, California, 2011.
[detailed record] [bibtex] [full text]

Sub-Logarithmic Test-and-Set Against a Weak Adversary.
with James Aspnes.
In Proceedings of DISC 2011, Rome, Italy, 2011.
[detailed record] [bibtex] [full text]

Optimal-Time Adaptive Strong Renaming, with Applications to Counting
with James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Morteza Zadimoghaddam.
In Proceedings of PODC 2011, San Jose, California, 2011.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Corentin Travers.
Generating Fast Indulgent Algorithms.
In Proceedings of ICDCN 2011, Bangalore, India, January 2011.
Best Paper Award.
[detailed record] [bibtex] [full text]

Dan Alistarh, Ioannis Avramopoulos, Petr Kuznetsov, and Gilles Tredan.
Routing Attacks as a Viable Threat: Can Software Systems Protect Themselves?
Presented at HotDep 2010, Vancouver, Canada, October 2010.
[detailed record] [bibtex] [full text]

Dan Alistarh, Hagit Attiya, Seth Gilbert, Andrei Giurgiu, and Rachid Guerraoui.
Fast Randomized Test-and-Set and Renaming.
In Proceedings of DISC 2010, Boston, Massachusetts, USA, September 2010.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Morteza Zadimoghaddam.
How Efficient Can Gossip Be? (On the Cost of Resilient Information Exchange).
In Proceedings of ICALP 2010, Bordeaux, France, July 2010.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Zarko Milosevic, and Calvin Newport.
Securing Every Bit: Authenticated Broadcast in Radio Networks.
In Proceedings of SPAA 2010, Santorini, Greece, June 2010.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Corentin Travers.
Of Choices, Failures and Asynchrony: The Many Faces of Set Agreement.
In Proceedings of ISAAC 2009, Hawaii, USA, December 2009.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Corentin Travers.
How to Solve Consensus in the Smallest Window of Synchrony.
In Proceedings of DISC 2008, Arcachon, France, September 2008.
[detailed record] [bibtex] [full text]

Journal Publications and Technical Reports

Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste.
Accepted to the Journal of Machine Learning Research (JMLR), 2021. To appear.
Basis for a Tutorial at the International Conference on Machine Learning (ICML), 2021.
[preliminary version]

NUQSGD: Provably Communication-Efficient Data-Parallel SGD via Non-uniform Quantization
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitaly Aksenov, Dan Alistarh, Daniel Roy.
In the Journal of Machine Learning Research (JMLR), 22(114), 2021.

Dan Alistarh, James Aspnes, Seth Gilbert, and Rachid Guerraoui.
The Complexity of Renaming. In Journal of the ACM 61(3): 18 (2014)
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers.
Generating Fast Indulgent Algorithms. 
In Theory of Computing Systems 51(4): 404-424 (2012)
[detailed record]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Corentin Travers.
Of Choices, Failures and Asynchrony: The Many Faces of Set Agreement.
In Algorithmica 62(1-2): 595-629 (2012)
[detailed record] [bibtex] [full text]

Dan Alistarh, Hagit Attiya, Andrei Giurgiu, Seth Gilbert, and Rachid Guerraoui.
Fast Randomized Test-and-Set and Renaming.
Technical Report, 2010.
[detailed record] [bibtex] [full text]

Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Morteza Zadimoghaddam.
How Efficient Can Gossip Be? (On the Cost of Resilient Information Exchange).
Technical Report, 2010.
[detailed record] [bibtex] [full text]

Theses and Book chapters

Randomized versus Deterministic Implementations of Concurrent Data Structures.
PhD thesis, EPFL, Lausanne, 2012.
with Rachid Guerraoui (Dir.).
[detailed record]

Distributed Algorithms (chapter)
In the Computing Handbook, 3rd ed., Vol. 1, Chapter 16, 2014.
with Rachid Guerraoui.