Modeling Adversaries with TLA+

We’ll start by writing a very simple TLA+ spec for the machine, then compose it with a spec of the world. If the world only kicks things out of alignment a finite number of times, say one million, then the machine should still be stable. In the TLA+ formulation, we can think of an adversary as an agent in the system who can take a superset of the actions everybody else can.
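A minimal sketch of this idea in TLA+ (all names here — `x`, `MaxKicks`, `Stabilize`, `Kick` — are illustrative, not from the original): the machine repeatedly drives a value back toward alignment, while the world may kick it away, but only a bounded number of times.

```tla
---- MODULE BoundedAdversary ----
EXTENDS Naturals

CONSTANT MaxKicks  \* e.g. one million: the finite bound on perturbations

VARIABLES x, kicks
vars == <<x, kicks>>

Init == x = 0 /\ kicks = 0

\* The machine's normal behavior: drive x back toward 0
Stabilize ==
  /\ x' = IF x > 0 THEN x - 1 ELSE 0
  /\ UNCHANGED kicks

\* The world (adversary) may knock x out of alignment, at most MaxKicks times
Kick ==
  /\ kicks < MaxKicks
  /\ kicks' = kicks + 1
  /\ x' \in 1..10

Next == Stabilize \/ Kick

\* Weak fairness on Stabilize: the machine always eventually makes progress
Spec == Init /\ [][Next]_vars /\ WF_vars(Stabilize)

\* Stability: eventually x returns to alignment and stays there
EventuallyStable == <>[](x = 0)
====
```

Because `Kick` can fire at most `MaxKicks` times and `Stabilize` is weakly fair, every behavior eventually runs out of kicks and converges to `x = 0`, so TLC should verify `EventuallyStable` against `Spec`.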

Source: www.hillelwayne.com