Distributed algorithms are algorithms designed to run on multiple processors, without tight centralized control.
In general, they are harder to design and harder to understand than single-processor sequential algorithms. Distributed algorithms are used in many practical systems, ranging from large computer networks to multiprocessor shared-memory systems. They also have a rich theory, which forms the subject matter for this course.
The core of the material will consist of basic distributed algorithms and impossibility results, as covered in Prof. Lynch’s book Distributed Algorithms. This will be supplemented by some updated material on topics such as self-stabilization, wait-free computability, and failure detectors, and some new material on scalable shared-memory concurrent programming.
Course Curriculum
- Course overview. Synchronous networks
- Leader election in rings
- Spanning trees. Minimum spanning trees
- Fault-tolerant consensus
- Number-of-processor bounds for Byzantine agreement
- k-set-agreement
- Asynchronous distributed computing
- Non-fault-tolerant algorithms for asynchronous networks
- Spanning trees
- Synchronizers
- Time, clocks, and the ordering of events
- Stable property detection
- Asynchronous shared-memory systems
- More mutual exclusion algorithms
- Shared-memory multiprocessors
- Impossibility of consensus in asynchronous, fault-prone, shared-memory systems
- Atomic objects
- Atomic snapshot algorithms
- List algorithms
- Transactional memory
- Wait-free computability
- Wait-free vs. f-fault-tolerant atomic objects. Boosting fault-tolerance
- Asynchronous network model vs. asynchronous shared-memory model
- Self-stabilizing algorithms
- Timing-based systems