Monitoring Data-Dependent Temporal Patterns

· Agnishom Chattopadhyay

Programs and cyber-physical systems have implementations consisting of multiple complex parts. Such systems are often safety critical, and the developer is required to ensure that the system behaves correctly. There are many tools at their disposal to tackle this problem including unit tests, property tests, simulations, model checking, formal verification, etc. But all of this should start with specifications.

The strength of formalized specifications is that their semantics are rigorously defined. Having a well-defined semantics is the first step towards reasoning about the behavior of systems, and enables stakeholders and designers of the system to communicate their requirements and expectations. Another way formal specifications can be utilized is through automated monitoring tools, which check whether the system’s behavior adheres to the specifications. In some cases, it may be possible to use a model of the program to ascertain that all execution paths behave correctly.

A formalism for our specifications can be judged along two axes: expressivity and algorithmic efficiency. It is desirable that the formalism allow us to express a large variety of properties, ideally succinctly and intuitively. On the other hand, we also want automated tools which can interpret these specifications and use them for monitoring streams of data or proving the correctness of associated systems. When choosing a formalism, one has to balance its expressivity and succinctness against the degree to which tools and algorithms can handle it efficiently.

Temporal Logic

We are building SpecForge at Imiron. SpecForge is intended to be a framework for authoring, organizing and monitoring formal specifications based on temporal logic.

Temporal Logic was initially introduced by [Pnueli77] to describe program behavior. Since then, it has been used to express properties in various domains including hardware specification [EisnerF18], hazardous traffic scenarios for autonomous vehicles [Reimann24] and the behavior of insulin pumps [CairoliFPS19].

In this formalism, one uses logical and temporal operators together to describe the ordering of events in time. Given a formula $\varphi$, a trace $\sigma$, and some time $t$, we write $\sigma, t \models \varphi$ to say that the formula $\varphi$ holds at time $t$ in the trace $\sigma$. We omit $\sigma$ if it is obvious from the context. Some of the commonly used temporal operators are illustrated in this diagram.

  • $\text{eventually } \varphi$ means that $\varphi$ will hold at some point in the future.
  • $\text{always } \varphi$ means that $\varphi$ holds throughout the future.
  • $\varphi \text{ until } \psi$ means that
    • $\psi$ will hold at some point in the future, and
    • until that point, $\varphi$ holds throughout.

One could also use the past version of these operators, such as $\text{past } \varphi$, $\text{historically } \varphi$, and $\varphi \text{ since } \psi$.
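To make the semantics concrete, here is a small Python sketch (not taken from any of the cited tools; the names are illustrative) that evaluates these operators over a finite trace, modeled as a list of sets of atomic propositions:

```python
# Future and past temporal operators over a finite trace.
# A trace is a list of sets of atomic propositions; formulas are
# functions of (trace, position) returning a Boolean.

def eventually(trace, t, phi):
    """phi holds at some position t' >= t."""
    return any(phi(trace, u) for u in range(t, len(trace)))

def always(trace, t, phi):
    """phi holds at every position t' >= t."""
    return all(phi(trace, u) for u in range(t, len(trace)))

def until(trace, t, phi, psi):
    """psi holds at some u >= t, and phi holds at every position before u."""
    return any(psi(trace, u) and all(phi(trace, v) for v in range(t, u))
               for u in range(t, len(trace)))

def since(trace, t, phi, psi):
    """Past dual of until: psi held at some u <= t, and phi held since then."""
    return any(psi(trace, u) and all(phi(trace, v) for v in range(u + 1, t + 1))
               for u in range(t, -1, -1))

# Atomic propositions.
req = lambda tr, t: "request" in tr[t]
resp = lambda tr, t: "response" in tr[t]
locked = lambda tr, t: "locked" in tr[t]

trace = [{"request", "locked"}, {"locked"}, {"response", "locked"}, set()]
print(until(trace, 0, locked, resp))   # locked until response, from t = 0
```

On this trace, "locked until response" holds at position 0, while "always locked" does not, since the lock is released at the last position.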

The past versions are especially useful in the context of online monitoring, where the monitor must report violations in real time as the trace is being generated, without knowledge of future events.

Additionally, one can annotate temporal operators with intervals to indicate a range of time within which the satisfying (or violating) witness must be found.

With these operators, one can express requirements containing propositions which must respect a temporal order. Consider this scenario, for example.

When a request is made,

  • the request must be eventually served, i.e., responded to, and
  • the response must be eventually sent within 10 time units of the request, and
  • the system must be locked until the response is sent, and
  • once the response is sent, the system will eventually be unlocked

One can express this requirement in temporal logic as follows:

$$\text{request} \Rightarrow \Big( \text{locked } \text{ until } [0, 10] \Big( \text{response} \land \text{eventually } \neg \text{locked} \Big) \Big)$$

First Order Temporal Logic

The formula above assumes that the input data consists of atomic propositions $\text{request}$, $\text{response}$, and $\text{locked}$. However, for a realistic system, it is likely that there are multiple resources which could be requested, and whose locks are managed independently. One could express this as a conjunction of multiple formulas, one for each resource.

$$ \begin{matrix} & \text{request}_a \Rightarrow \Big( \text{locked}_a \text{ until } \Big( \text{response}_a \land \text{eventually } \neg \text{locked}_a \Big) \Big) \\ \land & \text{request}_b \Rightarrow \Big( \text{locked}_b \text{ until } \Big( \text{response}_b \land \text{eventually } \neg \text{locked}_b \Big) \Big) \\ \land & \text{request}_c \Rightarrow \Big( \text{locked}_c \text{ until } \Big( \text{response}_c \land \text{eventually } \neg \text{locked}_c \Big) \Big) \\ \land & \ldots \end{matrix} $$

This approach is suboptimal for a number of reasons. The number of resources may not be known beforehand, or it may be very large. The data might not be organized in terms of individual atomic propositions per resource, necessitating additional preprocessing. Most importantly, this obscures the underlying relationship between the requests and the responses.

To address these issues, we move from temporal logic over propositions to first order temporal logic. Here, the term first order refers to the presence of variables, which range over values extracted from the data and can be used within the formulas. In such a formalism, we could express the above requirement as follows:

$$\forall r : \text{resource} \cdot \Big [ \text{request}(r) \Rightarrow \Big( \text{locked}(r) \text{ until } \big( \text{response}(r) \land \text{eventually } \neg \text{locked}(r) \big) \Big) \Big ]$$

Example: Invalidated Iterators

Let us consider a more complex example taken from the excellent survey [HavelundRTZ18] involving iterator invalidation in the runtime environment of some programming language.

  • Frozen collections can be created from mutable maps.
  • Iterators can be created from collections.
  • When the map associated with a collection is modified, all iterators created from that collection are invalidated.
  • Invalidated iterators cannot be used.
$$ \forall i : \text{iterator} \cdot \text{next}(i) \Rightarrow \exists (m : \text{map}, c : \text{collection}) \cdot \Big [ \neg \text{update}(m) \text{ since } \big( \text{mkIter}(c, i) \land \text{past } \text{collect}(m, c) \big) \Big ] $$

The figure below illustrates the timeline of events for this property.

Another way of specifying this formula would be to specify it as a conjunction of two formulas.

  • When an iterator is used, it must have been created via mkIter from some collection, which in turn was created via collect from some map. $$ \forall i : \text{iterator} \cdot \text{next}(i) \Rightarrow \exists (m : \text{map}, c : \text{collection}) \cdot \Big [ \text{past } \Big( \text{mkIter}(c, i) \land \text{past } \text{collect}(m, c) \Big) \Big ] $$
  • It can never be the case that there was an update between the creation of the iterator and its use. $$ \neg \exists (m : \text{map}, c : \text{collection}, i : \text{iterator}) \cdot \Big ( \text{next}(i) \land \text{past} \Big ( \text{update}(m) \land \text{past } \Big( \text{mkIter}(c, i) \land \text{past } \text{collect}(m, c) \Big ) \Big ) \Big ) $$

The second formula forbids the timeline illustrated below.

Example: GDPR Compliance

In [ArfeltBD19], we see an effort towards formalizing parts of the GDPR data protection regulation using first order temporal logic. This involves predicates such as

  • \(\text{consent}(\texttt{subject\_id}, \texttt{data\_id})\) meaning that the user has given consent for processing the data with the given id,
  • \(\text{legal\_grounds}(\texttt{data\_id})\) meaning that there are legal grounds for processing the data with the given id,
  • \(\text{use}(\texttt{data\_id})\) meaning that the data with the given id is being used,
  • \(\text{deletion\_request}(\texttt{subject\_id}, \texttt{data\_id})\) meaning that the user with the given id has requested deletion of the data with the given id.

One of the requirements states that data can only be used if the user has given consent or there are legal grounds for using it. This is expressed as follows:

$$ \forall d : \text{data\_id} \cdot \text{use}(d) \Rightarrow \Big( \exists s : \text{subject\_id} \cdot \Big( \text{ past } \text{consent}(s, d) \Big) \lor \text{legal\_grounds}(d) \Big) $$

When the user requests deletion of some data which they had previously given consent for, the data must eventually be deleted, and hence should not be used anymore.

$$ \forall s : \text{subject\_id}, d : \text{data\_id} \cdot \Big [ \text{deletion\_request}(s, d) \land \text{ past } \text{consent}(s, d) \Rightarrow \text{eventually } \Big( \text{always } \neg \text{use}(d) \Big) \Big ] $$
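As a concrete illustration, the first of these GDPR rules can be checked online with a set-based sketch. The event encoding below is illustrative; this is not how the monitors in [ArfeltBD19] are implemented.

```python
# Online check of the rule "use(d) requires a past consent(s, d) for
# some subject s, or legal_grounds(d)". Event names are illustrative.

def monitor(events):
    """events: tuples like ("consent", s, d), ("legal_grounds", d),
    ("use", d). Returns the data ids used in violation of the rule."""
    consented = set()      # data ids for which some subject gave consent
    legal = set()          # data ids with legal grounds
    violations = []
    for ev in events:
        if ev[0] == "consent":
            consented.add(ev[2])
        elif ev[0] == "legal_grounds":
            legal.add(ev[1])
        elif ev[0] == "use":
            d = ev[1]
            if d not in consented and d not in legal:
                violations.append(d)
    return violations

log = [("consent", "alice", "d1"), ("use", "d1"), ("use", "d2"),
       ("legal_grounds", "d3"), ("use", "d3")]
print(monitor(log))   # only "d2" lacks both consent and legal grounds
```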

Event Automata

Automata (state machines) and temporal logic share a close relationship. Automata are usually more expressive than temporal logic, meaning that properties expressed using temporal logic can also be expressed using automata, but not vice versa.

Event Automata (and their quantified counterpart), described by [BarringerFHRR12], allow the description of temporal patterns directly using automata.

  • The transitions on the automata are labeled with events carrying variables.
  • When an event is received, the automaton checks if the event matches a transition from the current state. If not, the automaton waits at the current state.
  • The variables may appear unquantified, existentially quantified or universally quantified.
    • For unquantified variables, the binding of the variables may vary during the run of the automaton.
    • When a variable is existentially quantified, the automaton can guess a specific valuation for the variable at the beginning of the run, but must match this specific valuation throughout the run. This is equivalent to instantiating all occurrences of the variable with a non-deterministically chosen value at the beginning of the run.
    • When a variable is universally quantified, it is equivalent to running multiple instantiated copies of the automaton in parallel, one for each possible valuation of the variable. The trace must be accepted by all of these copies.
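The universal case can be sketched as one instantiated automaton copy per observed value. The Python fragment below (illustrative names, not the QEA tool's syntax) checks "every request(r) is eventually followed by response(r)" this way:

```python
# Universal quantification by parallel instantiation: one two-state
# automaton copy per value of r. State "idle" is accepting, state
# "waiting" is rejecting; copies wait in place on non-matching events.

def run_qea(events):
    """events: list of ("request", r) / ("response", r) tuples.
    Returns True iff every instantiated copy ends in an accepting state."""
    state = {}                       # r -> "idle" | "waiting"
    for name, r in events:
        state.setdefault(r, "idle")  # spawn a copy on first sight of r
        if name == "request":
            state[r] = "waiting"
        elif name == "response" and state[r] == "waiting":
            state[r] = "idle"
    return all(s == "idle" for s in state.values())

trace = [("request", "a"), ("request", "b"), ("response", "a")]
print(run_qea(trace))   # the copy for "b" is still waiting
```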

The two images above show the Quantified Event Automata for the iterator invalidation and the request-response examples respectively. In the first example, the automaton lays out the sequence of events that would lead it to a rejecting state. In the second example, the automaton is forced into the non-accepting state once a request is received, and is stuck there until the corresponding obligations are fulfilled.

If the automaton is given additional registers to store values, it can define more complex properties. For example (taken from [RegerCR15]), here is an automaton that checks that the $\text{next}$ method on an iterator is not called more than the size of the underlying collection.
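A minimal sketch of the register idea follows, assuming hypothetical events that carry the collection size at iterator creation (this is not [RegerCR15]'s syntax):

```python
# One counter register per iterator, initialized from the size of the
# underlying collection and decremented on each next call.

def check_next_bound(events):
    """events: ("iterator", i, size) creates iterator i over a collection
    of the given size; ("next", i) calls next on it. Returns True iff
    next is never called more times than the collection's size."""
    remaining = {}                    # register: i -> allowed next calls
    for ev in events:
        if ev[0] == "iterator":
            _, i, size = ev
            remaining[i] = size
        elif ev[0] == "next":
            _, i = ev
            if remaining.get(i, 0) == 0:
                return False          # reject: one next call too many
            remaining[i] -= 1
    return True

print(check_next_bound([("iterator", "it", 2), ("next", "it"),
                        ("next", "it"), ("next", "it")]))
```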

Value Freezing

So far in this article, we have mostly studied formulas which depend on values bound to discrete events. Another way for formulas to depend on the input is when the input is a time-varying signal and the formula directly refers to the value of the signal at a certain point in time.

Signal Temporal Logic (STL) refers to the extension of temporal logic where the atomic propositions are replaced with predicates (typically linear inequalities) over real-valued signals. A freeze quantification [BrimDSV14] can be added to STL to introduce a variable whose value depends on the signal at the time of evaluation.

We say that $\sigma, t \models \text{freeze } x := f(\ldots) \text{ in } \varphi(x)$ if $\sigma, t \models \varphi(v)$ where $v$ is the value of the expression $f(\ldots)$ evaluated on the signal $\sigma$ at time $t$. Since $\varphi$ may be a temporal formula, this means that the value $v$ can be ‘accessed’ at a different time in the trace. This can be used to express interesting properties comparing the values of the signal at different times.

  • Suppose we have a frozen value $x_0$. Consider the formulas $\text{willExceed}(x_0)$ and $\text{willRecede}(x_0)$ defined as follows. $$\begin{align} \text{willExceed}(x_0) = & \text{ eventually } [0, 10]\, (x > x_0 + \delta) \\ \text{willRecede}(x_0) = & \text{ eventually } [0, 10]\, (x < x_0 - \delta) \end{align}$$ Informally, the formula $\text{willExceed}(x_0) \land \text{willRecede}(x_0)$ says that within the next 10 time units, the signal $x$ must go above $x_0 + \delta$ and below $x_0 - \delta$.
  • Now, consider the formula $$\psi = \text{always } \Big( \text{eventually } [0, 10] \text{ freeze } x_0 := x \text{ in } \text{willExceed}(x_0) \land \text{willRecede}(x_0) \Big)$$

This refers to a certain kind of oscillatory behavior of the signal $x$. There is always a point in the near future where the signal $x$ can be frozen as $x_0$, and within the next 10 time units, the signal $x$ must go above $x_0 + \delta$ and also below $x_0 - \delta$.
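Over a uniformly sampled signal, the frozen-value formulas can be sketched directly. This is a rough discrete approximation; $\delta$ and the horizon of 10 samples are parameters:

```python
# Discrete-time sketch of willExceed, willRecede, and the freeze step.
# The signal x is a list of samples; the horizon [0, 10] is in samples.

def will_exceed(x, t, x0, delta, horizon=10):
    """eventually[0, horizon] (x > x0 + delta)."""
    return any(v > x0 + delta for v in x[t:t + horizon + 1])

def will_recede(x, t, x0, delta, horizon=10):
    """eventually[0, horizon] (x < x0 - delta)."""
    return any(v < x0 - delta for v in x[t:t + horizon + 1])

def oscillates(x, t, delta):
    """eventually[0, 10] freeze x0 := x in (willExceed and willRecede)."""
    return any(will_exceed(x, u, x[u], delta) and
               will_recede(x, u, x[u], delta)
               for u in range(t, min(t + 11, len(x))))

signal = [0, 2, -2, 0, 2, -2, 0]
print(oscillates(signal, 0, delta=1))
```

On a finite trace, the outer always of $\psi$ would additionally need a convention for positions near the end, where the 10-sample window is truncated.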

Quantified Signal Temporal Logic (QSTL) [BakhirkinFHN18] uses a set of time variables and a set of value variables, and resembles a first-order logic more closely. Instead of using a freeze quantifier, one can directly refer to the value of the signal using a time variable in QSTL. The survey of [BoufaiedJBBP21] presents a taxonomy of the different properties that different variants of signal temporal logic can express.

Complex Event Recognition

A related domain is Complex Event Recognition (CER), where the goal is not necessarily to just check if the trace adheres to the specification, but to extract occurrences of events so that they can be used for further downstream processing.

In [BucchiGQRV22], we see an automata-based approach towards CER. The following example is in the CEQL language, proposed in their work.

SELECT * FROM Stock
WHERE SELL as msft; SELL as intel; SELL as amzn
FILTER msft[name="MSFT"] AND msft[price > 100]
    AND intel[name="INTC"]
    AND amzn[name="AMZN"] AND amzn[price < 2000]

Here, there is a stream of events named Stock, containing BUY and SELL events. This query extracts (ordered, but not necessarily consecutive) occurrences of SELL events for the stocks MSFT, INTC and AMZN, with some additional constraints: the MSFT sale must have had a price greater than 100, and the AMZN sale a price less than 2000.

Here, the WHERE clause carries a regular expression like pattern over the events, which also binds variables to the matched events. The FILTER clause then uses these variables to express additional constraints on the matched events. Here is another example using a few other features from this language.

SELECT MAX * FROM Stock
WHERE (BUY OR SELL) as l; (BUY OR SELL)+ as m; (BUY OR SELL) as h
FILTER l[price < 100] AND m[price >= 100]
    AND m[price <= 2000] AND h[price > 2000]
PARTITION BY [name]

This query searches for a sequence of stock events which are ordered by increasing price. The PARTITION BY clause indicates that the selected events must belong to the same stock. The + operator indicates iteration, meaning that the middle sequence m may contain multiple events. The MAX modifier indicates that we are interested in the longest matching sequence.
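The ordered-but-not-necessarily-consecutive matching used by the first query can be sketched as a single pass over the stream. This is a greedy simplification: the CORE engine of [BucchiGQRV22] enumerates all matches rather than returning just one.

```python
# Greedy sketch of the first CEQL query: find SELL events for MSFT,
# INTC, AMZN in that order, subject to the per-event price filters.

def match_query(stream):
    """stream: list of dicts with keys "type", "name", "price".
    Returns the first matching triple of events, or None."""
    wanted = [("MSFT", lambda p: p > 100),     # msft[price > 100]
              ("INTC", lambda p: True),        # no price constraint
              ("AMZN", lambda p: p < 2000)]    # amzn[price < 2000]
    found, k = [], 0
    for ev in stream:
        if k < len(wanted) and ev["type"] == "SELL":
            name, ok = wanted[k]
            if ev["name"] == name and ok(ev["price"]):
                found.append(ev)
                k += 1
    return found if k == len(wanted) else None
```

Because each slot here carries a fixed name and filter, taking the earliest valid candidate never rules out a later match; richer patterns (disjunction, iteration) need genuine nondeterminism.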

A related notion is that of register automata and their regular expression counterparts: regular expressions with memory, binding or equality [LibkinTV15]. While they have been studied in the context of querying graph databases ([LibkinTV13]), they have not received much attention in the domain of runtime verification. These expressions bind values using a nondeterministic semantics and lack universal quantifiers, making it impossible for them to express properties which require checking a condition over all possible instantiations of a variable.

Stream Processing and Rule-Based Monitoring

In a framework such as RTLOLA (based on techniques from [BozzelliS14]), one could express properties more directly using stream processing primitives. For example, to express the invalidated iterator property, one could define some auxiliary streams as follows:

$$ \begin{align*} \text{collected}_0 &= \emptyset \\ \text{collected}_{t + 1} &= \text{collected}_t \cup \{ (m, c) \mid t \models \text{collect}(m, c) \} \\ \text{notupdated}_0 &= \emptyset \\ \text{notupdated}_{t + 1} &= \Big( \text{notupdated}_t \cup \{ (m, i) \mid (m, c) \in \text{collected}_t, t \models \text{mkIter}(c, i) \} \Big) \setminus \{ (m, i) \mid t \models \text{update}(m) \} \end{align*} $$

With these definitions, the main property can be expressed as $\text{next} \subseteq \pi_i(\text{notupdated})$. The interesting observation here is that a handful of relational algebra operations suffice to express these intermediate streams. First, we notice that $\text{collected}$ is an aggregation of certain tuples. Then, $\text{notupdated}$ can be understood as a join between $\text{collected}$ and the mkIter events, followed by a set difference with the update events. Finally, a projection on the $\text{notupdated}$ stream expresses the main property.
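These stream equations translate almost directly into code. The sketch below accumulates the notupdated relation across ticks and checks the main property at every instant; the event encoding is illustrative, not RTLOLA syntax.

```python
# One tick of the stream equations, with relations as sets of tuples.

def step(collected, notupdated, events):
    """events: the events at time t, as tuples ("collect", m, c),
    ("mkIter", c, i), ("update", m). Returns the updated relations."""
    made = {(e[1], e[2]) for e in events if e[0] == "mkIter"}    # (c, i)
    updated = {e[1] for e in events if e[0] == "update"}
    # join collected (m, c) with mkIter events (c, i) to get (m, i) pairs
    new_pairs = {(m, i) for (m, c) in collected for (c2, i) in made if c == c2}
    notupdated = {(m, i) for (m, i) in (notupdated | new_pairs)
                  if m not in updated}
    collected = collected | {(e[1], e[2]) for e in events if e[0] == "collect"}
    return collected, notupdated

def violations(notupdated, events):
    """The main property: every next(i) must find i in pi_i(notupdated)."""
    safe = {i for (_, i) in notupdated}
    return [e[1] for e in events if e[0] == "next" and e[1] not in safe]

trace = [[("collect", "m", "c")], [("mkIter", "c", "i")], [("next", "i")],
         [("update", "m")], [("next", "i")]]
collected, notupdated, bad = set(), set(), []
for events in trace:
    bad += violations(notupdated, events)     # check against notupdated_t
    collected, notupdated = step(collected, notupdated, events)
print(bad)   # only the next("i") after update("m") is flagged
```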

In rule-based frameworks such as Logfire (presented in [Havelund15]) (no connection with Pydantic Logfire), one can express such relations using so-called rules.

facts collected, iterators, updatedIterators

r1: collect(m, c) => collected(m, c)
r2: collected(m, c), mkIter(c, i) => iterators(m, i)
r3: iterators(m, i), update(m) => updatedIterators(i)
r4: updatedIterators(i), next(i) => error

Logfire uses a variant of the RETE algorithm to maintain these relations.
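Here is a naive forward-chaining sketch of these four rules, re-evaluated per event; RETE would instead maintain the intermediate matches incrementally.

```python
# Naive evaluation of rules r1-r4 over a flat event list.

def process(events):
    """Returns the iterators on which next was called after invalidation."""
    collected, iterators, updated_iters, errors = set(), set(), set(), []
    for ev in events:
        if ev[0] == "collect":                            # r1
            collected.add((ev[1], ev[2]))
        elif ev[0] == "mkIter":                           # r2
            c, i = ev[1], ev[2]
            iterators |= {(m, i) for (m, c2) in collected if c2 == c}
        elif ev[0] == "update":                           # r3
            u = ev[1]
            updated_iters |= {i for (m, i) in iterators if m == u}
        elif ev[0] == "next" and ev[1] in updated_iters:  # r4
            errors.append(ev[1])
    return errors

print(process([("collect", "m", "c"), ("mkIter", "c", "i"),
               ("update", "m"), ("next", "i")]))
```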

Monitoring

With standard (propositional) temporal logic, the challenge of monitoring can be understood as tracking witnesses across time. For example, in order to know if $t \models \text{eventually } \varphi$ holds, a witness $t' \geq t$ for $t' \models \varphi$ is needed. When first order variables are introduced, an additional layer of complexity is added to the witnesses, coming from the parametrizing variables. Consider the subformula $\text{safeIterator}(i)$ from our example involving invalidated iterators. Monitoring this formula not only requires us to observe the presence (or absence) of collect, mkIter and update events which are relevant to $i$, but also to determine the bindings for the existentially quantified variables $m$ and $c$.

$$\text{safeIterator}(i) = \exists (m : \text{map}, c : \text{collection}) \cdot \Big [ \neg \text{update}(m) \text{ since } \big( \text{mkIter}(c, i) \land \text{past } \text{collect}(m, c) \big) \Big ]$$

Propositional temporal logic formulas consist of subformulas which are also propositional. This means that monitoring each subformula over the input signal produces a signal over Boolean values. In first-order temporal logic, subformulas may contain free variables. Consider the time-varying semantic object $\llbracket \varphi \rrbracket$ associated with a formula $\varphi(x)$, defined as $\llbracket \varphi \rrbracket_t = \{ v \mid t \models \varphi(v) \}$. The conjunction $\varphi(x) \land \psi(x)$ and the disjunction $\varphi(x) \lor \psi(x)$ correspond to the intersection and union of the associated sets, $\llbracket \varphi \rrbracket \cap \llbracket \psi \rrbracket$ and $\llbracket \varphi \rrbracket \cup \llbracket \psi \rrbracket$, respectively. With more than one variable, more complexity arises. The semantic object associated with $\alpha(x, y)$ is a set of pairs, or equivalently, a relation over the variables $x$ and $y$. While $\alpha(x, y) \lor \beta(x, y)$ still corresponds to the union $\llbracket \alpha \rrbracket \cup \llbracket \beta \rrbracket$, the formula $\varphi(x, y) \land \psi(y, z)$ requires a join-like operation over the relations.
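The join-like operation can be seen in a couple of lines: with relations as sets of tuples, a conjunction over different variable sets joins on the shared variable.

```python
# Relations as sets of tuples; conjunction over (x, y) and (y, z)
# becomes a natural join on the shared variable y.

phi = {("a", 1), ("b", 2)}          # [[phi]] over variables (x, y)
psi = {(1, "u"), (3, "v")}          # [[psi]] over variables (y, z)

join = {(x, y, z) for (x, y) in phi for (y2, z) in psi if y == y2}
print(join)   # only the tuples agreeing on y survive
```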

Temporal databases use data structures that can efficiently track such time-varying relations. [BasinKMP08] discusses the use of temporal databases for monitoring first order temporal logic specifications, which resulted in the MonPoly monitoring tool. Verimon+, a version of MonPoly formally verified using the Isabelle proof assistant, was later presented in [BasinDHKRST20].

The tool DejaVu takes a different approach towards maintaining time-varying relations as described in [HavelundPU20]. When it observes a new parameter for any of its events, it is stored in a dictionary, and given an encoding using a bitvector. The relations are then represented using Binary Decision Diagrams (BDDs) over these bitvectors. The benefit of this approach is that BDDs have been engineered to efficiently represent and manipulate Boolean functions.
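The encoding step can be sketched as follows (without an actual BDD library; real implementations store and combine the resulting bit strings as BDDs):

```python
# First-come-first-served encoding of observed values as fixed-width
# bitvectors, in the style of DejaVu's dictionary. WIDTH bounds the
# number of distinct values to 2**WIDTH in this sketch.

WIDTH = 4
seen = {}                            # value -> integer code

def encode(value):
    """Assign the next integer code on first sight; return its bits."""
    if value not in seen:
        seen[value] = len(seen)
    code = seen[value]
    return tuple((code >> b) & 1 for b in range(WIDTH))

print(encode("it1"))   # first value, code 0
print(encode("it2"))   # second value, code 1
print(encode("it1"))   # same value, same bitvector
```

A relation over such values is then a set of bit strings, and the Boolean operations on relations become Boolean operations on the BDDs representing those sets.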

In most of the approaches discussed above, the atomic formulas consist of discrete events parametrized with some data. A more complex scenario arises when the atomic formulas consist of other relations or function symbols. For example, in the context of STL with value freezing, the atomic formulas are linear inequalities over real-valued signals. This means that the associated relations are unions of polyhedra. Monitoring these formulas would require specialized data structures for representing and manipulating such geometric objects.

Conclusion

Logical formalisms can be valuable for expressing well-defined specifications, which can serve as a basis for communication between designers and engineers, as well as for the use of automated tools. Many interesting properties become expressible once the language provides a mechanism for specifying a dependency on the input data. We have looked at extensions of temporal logic and various other formalisms which allow expressing such dependencies.

Two contexts in which data-dependent temporal patterns arise are on traces of parametric events, i.e., data-carrying events, or over real-valued signals. We discussed First-Order Temporal Logic, which is usually used in the context of events. In this logic, there are first-order variables which denote values in the domain of the parameters of the events. A different approach is Value Freezing, usually used in the context of signals. Instead of directly quantifying over values, it freezes the current value of the signal for comparison with a different part of the signal in a subformula. When such extensions of temporal logic are used, the algorithmic problem of monitoring traces can become more challenging.

Using state machines directly may be preferred over temporal logic in certain situations. Quantified Event Automata are an automata-based formalism that allows expressing such properties more directly. One could also use formalisms dedicated to stream processing to express some of these notions. Sometimes, the user is interested in extracting the fragments of the input data that match the specification. Query languages in the domain of complex event recognition provide a framework for dealing with this scenario.

At the time of writing, the most idiomatic way to use SpecForge is to express properties about real-valued signals. In the near future, we plan to extend SpecForge to support events, which may have a payload. This would make the formalisms for expressing data-dependent temporal patterns in this article directly relevant. We wish to support value freezing in real-valued signals, as well as some of the constructs we discussed above based on first order temporal logic for event-based traces.

References

  • [ArfeltBD19] Arfelt, E., Basin, D., Debois, S. (2019). Monitoring the GDPR. In: Sako, K., Schneider, S., Ryan, P. (eds) Computer Security – ESORICS 2019. ESORICS 2019. Lecture Notes in Computer Science, vol 11735. Springer, Cham.
  • [BakhirkinFHN18] Alexey Bakhirkin, Thomas Ferrère, Thomas Henzinger, Dejan Nickovic. The First-Order Logic of Signals. International Conference on Embedded Software (EMSOFT), Sep 2018, Torino, Italy.
  • [BarringerFHRR12] Barringer, H., Falcone, Y., Havelund, K., Reger, G., Rydeheard, D. (2012). Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors. In: Giannakopoulou, D., Méry, D. (eds) FM 2012: Formal Methods. FM 2012. Lecture Notes in Computer Science, vol 7436. Springer, Berlin, Heidelberg.
  • [BasinDHKRST20] Basin, D., Dardinier, T., Heimes, L., Krstić, S., Raszyk, M., Schneider, J., & Traytel, D. (2020, June). A formally verified, optimized monitor for metric first-order dynamic logic. In International Joint Conference on Automated Reasoning (pp. 432-453). Cham: Springer International Publishing.
  • [BasinKMP08] Basin, D., Klaedtke, F., Müller, S., & Pfitzmann, B. (2008). Runtime monitoring of metric first-order temporal properties. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (2008) (pp. 49-60). Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
  • [BoufaiedJBBP21] Boufaied, C., Jukss, M., Bianculli, D., Briand, L. C., & Parache, Y. I. (2021). Signal-based properties of cyber-physical systems: Taxonomy and logic-based characterization. Journal of Systems and Software, 174, 110881.
  • [BozzelliS14] Bozzelli, L., Sánchez, C. (2014). Foundations of Boolean Stream Runtime Verification. In: Bonakdarpour, B., Smolka, S.A. (eds) Runtime Verification. RV 2014.
  • [BrimDSV14] Brim, L., Dluhoš, P., Šafránek, D., & Vejpustek, T. (2014). STL⁎: Extending signal temporal logic with signal-value freezing operator. Information and computation, 236, 52-67.
  • [BucchiGQRV22] Bucchi, M., Grez, A., Quintana, A., Riveros, C., & Vansummeren, S. (2022). CORE: a complex event recognition engine. Proceedings of the VLDB Endowment, 15(9), 1951-1964.
  • [CairoliFPS19] F. Cairoli, G. Fenu, F. A. Pellegrino and E. Salvato, “Model Predictive Control of glucose concentration based on Signal Temporal Logic specifications,” 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France, 2019, pp. 714-719
  • [EisnerF18] Eisner, C., Fisman, D. (2018). Functional Specification of Hardware via Temporal Logic. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham.
  • [Havelund15] Havelund, K. Rule-based runtime verification revisited. Int J Softw Tools Technol Transfer 17, 143–170 (2015).
  • [HavelundPU20] Havelund, K., Peled, D., & Ulus, D. (2020). First-order temporal logic monitoring with BDDs. Formal Methods in System Design, 56(1), 1-21.
  • [HavelundRTZ18] Havelund, K., Reger, G., Thoma, D., Zălinescu, E. (2018). Monitoring Events that Carry Data. In: Bartocci, E., Falcone, Y. (eds) Lectures on Runtime Verification. Lecture Notes in Computer Science, vol 10457. Springer, Cham.
  • [LibkinTV13] Libkin, L., Tan, T., Vrgoč, D. (2013). Regular Expressions with Binding over Data Words for Querying Graph Databases. In: Béal, MP., Carton, O. (eds) Developments in Language Theory. DLT 2013. Lecture Notes in Computer Science, vol 7907. Springer, Berlin, Heidelberg.
  • [LibkinTV15] Libkin, L., Tan, T., & Vrgoč, D. (2015). Regular expressions for data words. Journal of Computer and System Sciences, 81(7), 1278-1297.
  • [Pnueli77] A. Pnueli, “The temporal logic of programs,” in 18th Annual Symposium on Foundations of Computer Science (sfcs 1977), pp. 46–57, 1977.
  • [RegerCR15] Reger, G., Cruz, H.C., Rydeheard, D. (2015). MarQ: Monitoring at Runtime with QEA. In: Baier, C., Tinelli, C. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2015. Lecture Notes in Computer Science, vol 9035. Springer, Berlin, Heidelberg.
  • [Reimann24] Jesse Reimann, Nico Mansion, James Haydon, Benjamin Bray, Agnishom Chattopadhyay, Sota Sato, Masaki Waga, Étienne André, Ichiro Hasuo, Naoki Ueda, and Yosuke Yokoyama. 2024. Temporal Logic Formalisation of ISO 34502 Critical Scenarios: Modular Construction with the RSS Safety Distance. In Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing (SAC ‘24). Association for Computing Machinery, New York, NY, USA, 186–195.