This module describes diffusing computations, algorithms by which an agent can learn about the structure of the network in which it operates. For example, an agent can use a diffusing computation to determine the number of agents in the network or to determine whether the system is deadlocked.

Data Structures in Distributed Algorithms
The module shows how data structures play critical roles in distributed algorithms just as they do in sequential algorithms. The algorithm maintains the invariants that define the data structure --- in this case a tree --- even though the structure is modified concurrently by multiple agents.

Nondeterministic Iteration in Sequential and Distributed Algorithms
This module shows how nondeterministic iteration is used in exactly the same way for reasoning about sequential and distributed algorithms. Whether the algorithm operates across multiple agents and channels or is a sequential program is immaterial to reasoning about its correctness.

Initially, the system has a single active agent, called the initiator, and all channels are empty. The computation terminates exactly when all agents are idle and all channels are empty.
The computation may never terminate. Our first problem is to design an algorithm that enables the initiator to determine that the computation has terminated if it terminates. Later, we will extend this algorithm to enable the initiator to learn about the network.
In this system, for every channel from an agent \(x\) to an agent \(y\), there is a channel from \(y\) to \(x\). For any pair \(x, y\) of agents there exists at most one channel from \(x\) to \(y\), and at most one channel from \(y\) to \(x\).
An agent \(y\) sends an ack (acknowledgment) along channel \((y, x)\) after receiving a message along channel \((x, y)\). An ack is different from a message; so acks aren't acked.
Initially all channels are empty: there are no messages or acks in transit along channels. Let \(x.num\_unacked\) be the number of \(x\)'s unacknowledged messages, i.e., the number of messages that \(x\) has sent minus the number of acks that \(x\) has received. We can prove the invariant that there are no messages in any of \(x\)'s outgoing channels when \(x.num\_unacked = 0\).
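A small worked example may make the accounting concrete. The sketch below is illustrative only --- the variable names and the single outgoing channel are assumptions for this example --- and it records why \(x.num\_unacked = 0\) implies that none of \(x\)'s messages are in transit: an ack is sent only after the corresponding message is received, so the acks \(x\) has received never exceed the messages of \(x\) that have been delivered.

    # Illustrative accounting for one agent x and one outgoing channel.
    # All names here are invented for this example.
    sent = 0            # messages x has sent
    delivered = 0       # messages of x that the neighbor has received
    acks_received = 0   # acks that have come back to x

    def num_unacked():
        return sent - acks_received     # x's unacknowledged messages

    def in_transit():
        return sent - delivered         # x's messages still in the channel

    sent += 1                           # x sends a message
    assert num_unacked() == 1 and in_transit() == 1
    delivered += 1                      # the neighbor receives it
    acks_received += 1                  # the ack eventually reaches x
    # acks_received <= delivered always, so num_unacked() >= in_transit();
    # in particular num_unacked() == 0 forces in_transit() == 0.
    assert num_unacked() == 0 and in_transit() == 0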
We will prove the following invariants.
\( (x.parent = null) \quad \equiv \quad x.idle \wedge (x.num\_unacked = 0) \)
\( x.parent \neq null \quad \Rightarrow \quad x.parent.num\_unacked > 0 \)
\( initiator.idle \wedge (initiator.num\_unacked = 0) \quad \Rightarrow \quad \) the underlying computation has terminated
So, the initiator detects that the computation has terminated when the initiator is idle and has no unacknowledged messages. Next we give an algorithm that maintains the above invariants.
0. Initially:
       x.parent = null
       x.idle = True
       x.num_unacked = 0

1. When x sends a message:
       x.num_unacked = x.num_unacked + 1

2. When x receives a message from y:
       x.idle = False
       if x.parent == null:
           x.parent = y
       else:
           send ack to y

3. When x receives an ack:
       x.num_unacked = x.num_unacked - 1
       if (x.num_unacked == 0) and x.idle:
           send ack to x.parent
           x.parent = null

4. When x becomes idle:
       x.idle = True
       if x.num_unacked == 0:
           send ack to x.parent
           x.parent = null
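To make the actions concrete, here is a minimal executable sketch in Python. It is not part of the module's algorithm text: the toy model assumes that messages are delivered immediately (by a direct method call) while acks wait in a shared list, pending_acks, until a driver delivers them; the names Agent, send_message, and so on are invented for this sketch.

    # A minimal, illustrative Python sketch of actions 0-4 above.
    # Assumption: messages are delivered immediately; acks wait in pending_acks
    # until some driver delivers them. All names are invented for this sketch.
    pending_acks = []                       # acks in transit: (sender, receiver) pairs

    class Agent:
        def __init__(self, name):           # action 0: initial state
            self.name = name
            self.parent = None
            self.idle = True
            self.num_unacked = 0

        def send_message(self, other):       # action 1
            self.num_unacked += 1
            other.receive_message(self)      # toy model: immediate delivery

        def receive_message(self, sender):   # action 2
            self.idle = False
            if self.parent is None:
                self.parent = sender
            else:
                pending_acks.append((self, sender))   # immediate ack

        def receive_ack(self):               # action 3
            self.num_unacked -= 1
            if self.num_unacked == 0 and self.idle:
                pending_acks.append((self, self.parent))
                self.parent = None

        def become_idle(self):               # action 4
            self.idle = True
            if self.num_unacked == 0:
                pending_acks.append((self, self.parent))
                self.parent = None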
The Initiator
The program for the initiator is the same except that initially the initiator is active. Also, to keep the exposition uniform for the initiator and the other agents, we assume that the initiator has a parent which is not one of the agents of the network. We call the initiator's parent \(external\). This agent plays no role other than to keep the proofs of the invariants identical for the initiator and the other agents.

    initiator.parent = external
    initiator.idle = False
    initiator.num_unacked = 0
    external.num_unacked = 1
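Continuing the sketch above, the initiator can be modeled with the same Agent class, with a dummy external agent serving only as an ack counter. The short scenario below is purely illustrative; it shows the initiator detecting termination once the last ack reaches it.

    # Illustrative setup and run, continuing the earlier sketch.
    external = Agent("external")          # dummy parent; runs no actions
    external.num_unacked = 1              # accounts for the initiator's activity

    initiator = Agent("initiator")
    initiator.parent = external
    initiator.idle = False                # the initiator starts out active

    a = Agent("a")

    initiator.send_message(a)             # a becomes active with parent = initiator
    a.become_idle()                       # a finishes: action 4 sends its ack
    initiator.become_idle()               # the initiator finishes its own work

    while pending_acks:                   # toy driver: deliver acks in order
        sender, receiver = pending_acks.pop(0)
        if receiver is external:
            external.num_unacked -= 1     # external only counts; it runs no actions
        else:
            receiver.receive_ack()

    # Detection: the initiator is idle with no unacknowledged messages.
    assert initiator.idle and initiator.num_unacked == 0
    assert initiator.parent is None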
Safety
The proof that the invariants are satisfied is carried out by showing that they hold initially and then verifying that each of the four actions maintains the invariants. The verification step is straightforward, if a bit laborious.
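The laborious part can also be spot-checked mechanically. The sketch below, which reuses the toy Agent model from earlier and is only a testing aid rather than part of the proof, evaluates the three invariants on a snapshot of the agents; the dummy external agent is excluded, since it is not a network agent.

    # Illustrative invariant check over the toy model's Agent objects.
    # 'agents' lists the network agents (not the dummy external agent);
    # 'terminated' says whether the underlying computation has terminated.
    def check_invariants(agents, initiator, terminated):
        for x in agents:
            # Invariant 1: parent is null exactly when x is idle
            # with no unacknowledged messages.
            assert (x.parent is None) == (x.idle and x.num_unacked == 0)
            # Invariant 2: an agent with a parent keeps that parent engaged.
            if x.parent is not None:
                assert x.parent.num_unacked > 0
        # Invariant 3: an idle initiator with nothing unacknowledged
        # means the underlying computation has terminated.
        if initiator.idle and initiator.num_unacked == 0:
            assert terminated

    # For example, after the toy run above has delivered all acks:
    # check_invariants([initiator, a], initiator, terminated=True)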
Progress

If all agents are idle and there are no messages in channels, then the underlying computation has terminated. We will prove that if the underlying computation does terminate, then the detection algorithm terminates as well, i.e., the tree vanishes, and at that point the initiator detects that the computation has terminated. After the underlying computation terminates, the only action that executes is action 3: receiving an ack.
The variant function is the following graph. A vertex \(x\) is in the graph exactly when \(x.parent\) is not null or there is an ack in transit along the channel from \(x\) to its parent; in either case the graph has an edge from \(x\) to that parent. Define a partial order \(\leq\) on graphs by \(G \leq G'\) when \(G\) is a subgraph of \(G'\). This graph is a tree because the only pending acks are from a vertex to its parent (action 3).
Next we prove that, while the variant function is not the empty graph, some action can execute and every execution of an action reduces the variant function. First, while the tree is not empty there is an ack in transit in some channel: consider a leaf \(y\) of the tree. If \(y.num\_unacked > 0\) then, since the underlying computation has terminated, every message \(y\) sent has been received; the receiver of an unacknowledged message of \(y\) either still has \(y\) as its parent --- impossible, because \(y\) is a leaf --- or has sent \(y\) an ack that has not yet been delivered. If \(y.num\_unacked = 0\) then \(y\) is idle, so by the first invariant \(y.parent = null\), and \(y\) is in the graph only because the ack it sent to its former parent is still in transit. Second, when all acks are delivered to \(y\), it sends an ack to its parent; and when \(y\) receives an ack from a child \(x\), the edge \((x, y)\) is deleted from the tree. Therefore every execution of action 3 reduces the variant function, and the tree eventually becomes the empty graph.
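The variant function can also be written down concretely. In the toy model used earlier, the sketch below (again with invented names) assembles the graph from the parent pointers and the acks still in transit; recomputing it after each delivery of an ack shows it shrinking, under the subgraph order, down to the empty graph.

    # Illustrative computation of the variant graph in the toy model.
    # An edge (x, y) means either x.parent is y, or an ack from x to y is
    # still in pending_acks; as argued above, after the underlying computation
    # terminates the in-transit acks go from a vertex to its parent.
    def variant_graph(agents):
        edges = set()
        for x in agents:
            if x.parent is not None:
                edges.add((x.name, x.parent.name))
        for sender, receiver in pending_acks:
            edges.add((sender.name, receiver.name))
        vertices = {v for edge in edges for v in edge}
        return vertices, edges

    # Delivering an ack removes the corresponding edge (and possibly its
    # endpoints), so variant_graph(...) shrinks with each execution of
    # action 3 until it returns the empty graph.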
K. Mani Chandy, Emeritus Simon Ramo Professor, California Institute of Technology