Keynote Speakers

Hagit Attiya, Technion, Israel

Bio: Hagit Attiya is a professor of Computer Science at the Technion, Israel Institute of Technology, where she holds the Harry W. Labov and Charlotte Ullman Labov Academic Chair. She is the editor-in-chief of Springer's journal Distributed Computing. She won the 2011 Edsger W. Dijkstra Prize in Distributed Computing and is a fellow of the ACM. Attiya received all her academic degrees, in Computer Science, from the Hebrew University of Jerusalem, and was a post-doctoral fellow at MIT.

Preserving Hyperproperties when Using Concurrent Objects

Abstract: Linearizability, a consistency condition for concurrent objects, is known to preserve trace properties. This suffices for modular usage of concurrent objects in applications, deriving their safety properties from the abstract objects they implement. However, other desirable properties, like average complexity and information leakage, are not trace properties. These *hyperproperties* are not preserved by linearizable concurrent objects, especially when randomization is used. This talk will discuss formal ways to specify concurrent objects that preserve hyperproperties, and their relation to verification methods such as forward/backward simulation. We will show that certain concurrent objects cannot satisfy such specifications, and describe ways to mitigate these limitations.
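
To make the distinction concrete, here is a minimal sketch (illustrative only, not from the talk; the traces, step counts, and bounds are invented): a trace property such as safety can be checked one execution at a time, while a hyperproperty such as bounded average complexity is a predicate over the whole set of executions and cannot be decided from any single trace.

```python
# Minimal sketch: trace property vs. hyperproperty.
# All data below is hypothetical, for illustration only.
from statistics import mean

# Each trace is a list of (operation, step_count) events from one execution.
traces = [
    [("enqueue", 2), ("dequeue", 3)],
    [("enqueue", 2), ("dequeue", 9)],
]

def safety_holds(trace):
    """Trace property: checked on each execution in isolation."""
    return all(steps <= 10 for _, steps in trace)

def average_complexity_holds(trace_set, bound=4.0):
    """Hyperproperty: a predicate over the SET of executions;
    no single trace determines the answer by itself."""
    all_steps = [steps for trace in trace_set for _, steps in trace]
    return mean(all_steps) <= bound

print(all(safety_holds(t) for t in traces))  # per-trace check
print(average_complexity_holds(traces))      # whole-set check
```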


Maurice Herlihy, Brown University, USA

Bio: Maurice Herlihy is the An Wang Professor of Computer Science at Brown University. He has an A.B. in Mathematics from Harvard University and a Ph.D. in Computer Science from MIT. He has served on the faculty of Carnegie Mellon University and the staff of DEC Cambridge Research Lab. He is the recipient of the 2003 Dijkstra Prize in Distributed Computing, the 2004 Gödel Prize in theoretical computer science, the 2008 ISCA Influential Paper Award, the 2012 Edsger W. Dijkstra Prize, and the 2013 Wallace McDowell Award. He received a 2012 Fulbright Distinguished Chair in the Natural Sciences and Engineering Lecturing Fellowship, and he is a fellow of the ACM, a fellow of the National Academy of Inventors, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences.

Correctness Conditions for Cross-Chain Deals

Abstract: Modern distributed data management systems face a new challenge: how can autonomous, mutually distrusting parties cooperate safely and effectively? Addressing this challenge raises questions familiar from classical distributed systems: how to combine multiple steps into a single atomic action, how to recover from failures, and how to synchronize concurrent access to data. Nevertheless, each of these issues requires rethinking when participants are autonomous and potentially adversarial.
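
As one concrete (and hedged) illustration of the atomicity question, the sketch below models a hashed-timelock escrow, a classical building block for atomic cross-chain swaps; it is not the correctness framework of the talk, and the class and parameter names are invented. A party can claim the escrowed funds only by revealing a hash preimage before a deadline; after the deadline, the depositor can recover them, which is how the mechanism copes with a counterparty that fails or walks away.

```python
# Hedged sketch of a hashed-timelock escrow (illustrative names throughout).
import hashlib
import time

class HashedTimelockEscrow:
    """Funds are claimable only with the hash preimage before the deadline;
    afterwards the depositor may reclaim them. Two such escrows on two
    chains, sharing one hashlock, form a basic atomic swap between
    mutually distrusting parties."""

    def __init__(self, hashlock: bytes, deadline: float, amount: int):
        self.hashlock = hashlock  # sha256 digest of a secret
        self.deadline = deadline  # unix timestamp
        self.amount = amount
        self.settled = False

    def claim(self, preimage: bytes) -> int:
        """Counterparty claims by revealing the secret in time."""
        if self.settled or time.time() >= self.deadline:
            raise RuntimeError("escrow expired or already settled")
        if hashlib.sha256(preimage).digest() != self.hashlock:
            raise ValueError("wrong preimage")
        self.settled = True
        return self.amount

    def refund(self) -> int:
        """Depositor recovers the funds if the deal fell through."""
        if self.settled or time.time() < self.deadline:
            raise RuntimeError("refund not yet available")
        self.settled = True
        return self.amount

# Usage: Alice escrows 100 units under sha256(secret); Bob's claim reveals
# the secret, which in turn lets Alice claim the matching escrow elsewhere.
secret = b"s3cret"
escrow = HashedTimelockEscrow(hashlib.sha256(secret).digest(),
                              time.time() + 3600, amount=100)
print(escrow.claim(secret))  # -> 100
```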


Nitin Vaidya, Georgetown University, USA

Bio: Nitin Vaidya is the McDevitt Chair of Computer Science at Georgetown University. He received his Ph.D. from the University of Massachusetts at Amherst. He previously served as a Professor and Associate Head of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He has co-authored papers that received awards at several conferences, including SSS 2015, ACM MobiHoc 2007, and ACM MobiCom 1998. He is a fellow of the IEEE. He has served as the Chair of the Steering Committee for the ACM PODC conference, as the Editor-in-Chief of the IEEE Transactions on Mobile Computing, and as the Editor-in-Chief of the ACM SIGMOBILE publication MC2R.

Security and Privacy for Distributed Optimization and Learning

Abstract: Consider a network of agents wherein each agent has a private cost function. In the context of distributed machine learning, an agent’s private cost function may represent the “loss function” corresponding to the agent’s local data. The objective is to identify parameters that minimize the total cost over all the agents. In machine learning for classification, the cost function is designed so that minimizing it yields model parameters that achieve high classification accuracy. Similar optimization problems arise in other applications as well. Our work addresses privacy and security of distributed optimization, with applications to machine learning. In privacy-preserving machine learning, the goal is to optimize the model parameters correctly while preserving the privacy of each agent’s local data. In security, the goal is to identify the model parameters correctly while tolerating adversarial agents that may supply incorrect information. When a large number of agents participate in distributed optimization, security compromise or failure of some of the agents becomes increasingly likely. The talk will provide intuition behind the design and correctness of the algorithms.
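
To make the setting concrete: the goal is to minimize the sum of the agents’ private cost functions. The sketch below is an invented illustration (not the speaker’s algorithm): each agent reports the gradient of its own quadratic cost, and the aggregator applies a coordinate-wise trimmed mean, one standard way to keep a bounded number of adversarial gradient reports from corrupting the update.

```python
# Hedged sketch: gradient descent on a sum of private costs, with a
# trimmed-mean aggregator tolerating up to f adversarial reports.
# The quadratic costs and all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(size=(5, 2))  # agent i's private optimum a_i
# Private cost f_i(x) = ||x - a_i||^2, so grad f_i(x) = 2 * (x - a_i);
# the minimizer of the total cost is the mean of the a_i.

def trimmed_mean(vectors, f):
    """Drop the f largest and f smallest values in each coordinate,
    then average the rest (a standard robust aggregation rule)."""
    v = np.sort(np.stack(vectors), axis=0)
    return v[f:len(vectors) - f].mean(axis=0)

x = np.zeros(2)
for _ in range(200):
    grads = [2.0 * (x - a) for a in targets]  # honest gradient reports
    grads.append(np.array([1e6, -1e6]))       # one adversarial report
    x -= 0.05 * trimmed_mean(grads, f=1)      # survives 1 bad agent

# x ends near the honest agents' optimum despite the adversarial report.
print(x, targets.mean(axis=0))
```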