Keynote Speakers

Neural Certificates

Speaker: Thomas A. Henzinger, IST Austria

Abstract: Symbolic datatypes have proved central to abstract reasoning about dynamical systems. Successful examples are BDDs and SAT for reasoning about finite-state systems, and polyhedra and SMT for reasoning about discrete dynamical systems over certain infinite state spaces. Neural networks provide a natural symbolic datatype over continuous state spaces. They have the added benefit that, in addition to supporting important operations, they can be trained effectively.

We advocate the use of neural networks for multiple purposes when reasoning about continuous state spaces; in particular, they can be used to represent both deterministic dynamics – most importantly, controllers – and correctness certificates. A correctness certificate is a succinct witness for a desired property of a dynamical system. For example, invariants and barrier functions are certificates for safety; Lyapunov functions and supermartingales are certificates for progress. Notably, stochastic barrier functions and supermartingales can handle the presence of uncertainty (noise) in the system dynamics.
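
To make the certificate conditions concrete, the following minimal sketch checks a Lyapunov-style progress certificate V on sampled transitions of a discrete-time system x' = f(x): V must be positive away from the target and strictly decrease along transitions. The dynamics, the certificate, and the margins are all hypothetical stand-ins (a linear contraction and a quadratic V rather than trained networks), and a sound verifier would replace the sampling with exhaustive reasoning, e.g. via SMT.

```python
# Sampling-based check of a Lyapunov-style certificate (illustrative only).
# f, V, and all thresholds are hypothetical stand-ins; a sound verifier
# would establish the two conditions over the whole state space, e.g. SMT.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical stable discrete-time dynamics: a linear contraction.
    A = np.array([[0.9, 0.2], [-0.1, 0.8]])
    return A @ x

def V(x):
    # Stand-in for a trained neural certificate; here simply quadratic.
    return float(x @ x)

def check_certificate(n_samples=10_000, radius=5.0, eps=1e-6):
    for _ in range(n_samples):
        x = rng.uniform(-radius, radius, size=2)
        if np.linalg.norm(x) < 1e-3:           # skip the target region
            continue
        if V(x) <= 0:                          # V positive away from target
            return False, x
        if V(f(x)) - V(x) >= -eps * V(x):      # V decreases on this sample
            return False, x
    return True, None

ok, cex = check_certificate()
print("holds on all samples" if ok else f"violating state: {cex}")
```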

Established techniques from machine learning and from formal reasoning about neural networks (such as SMT, abstract interpretation, and counterexample-guided refinement) can be used for the joint synthesis of controllers and corresponding correctness certificates – i.e., for the synthesis of guaranteed-to-be-correct controllers – where both the controllers and their certificates are represented, learned, and verified as neural networks.
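
Schematically, such a joint synthesis can be organised as a counterexample-guided loop between a learner and a verifier. The skeleton below is an illustrative sketch under assumed interfaces (train_step, verify, and the result object are hypothetical), not the authors' implementation: training would typically minimise a loss penalising violations of the certificate conditions, and verification would use SMT or abstract interpretation over the two networks.

```python
# Counterexample-guided synthesis loop (illustrative skeleton).
# `train_step`, `verify`, and the result object are assumed interfaces.

def synthesize(controller, certificate, train_step, verify, max_iters=100):
    dataset = []                          # counterexamples found so far
    for _ in range(max_iters):
        # Learner: adjust both networks to satisfy the certificate
        # conditions on the states collected in the dataset.
        controller, certificate = train_step(controller, certificate, dataset)
        # Verifier: check the conditions over the entire state space.
        result = verify(controller, certificate)
        if result.valid:
            return controller, certificate     # certified controller
        # Refine: keep the violating states and train on them as well.
        dataset.extend(result.counterexamples)
    raise RuntimeError("no certified controller found within budget")
```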

This talk includes joint work with Krishnendu Chatterjee, Mathias Lechner, and Djordje Zikelic.

Is Your AI-Based System Five-Star Sustainable?

Speaker: Ina K. Schieferdecker, TU Berlin, Germany

Abstract: Artificial Intelligence (AI) is one of the most powerful tools available to humanity. While some debate the potential dangers of AI in general and consider a moratorium on AI development and applications necessary, others propose concepts such as AI for Sustainability and Sustainable AI to harness the potential of AI to address humanity’s most pressing challenges, including combating climate change and environmental degradation – in short, Sustainable Intelligence. To ground the still-heated debate, this paper delves deeper into the state of the art of Sustainable Intelligence, in particular how to qualitatively and quantitatively assess the environmental impact of AI-based systems and how to minimise it. It also discusses selected research topics that need further exploration, and concludes with a Rising Star system for Sustainable Intelligence: an open and standardised way to share knowledge on environmental and societal goals for AI-based systems and to report and optimise their qualitative indicators and quantitative metrics.
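
As a toy illustration of the kind of quantitative metric such open, standardised reporting could cover, the sketch below estimates the operational carbon footprint of a training run from power draw, runtime, and grid carbon intensity. All numbers are hypothetical placeholders, not figures from the talk.

```python
# Toy estimate of the operational CO2 footprint of a training run.
# All inputs are hypothetical placeholders; real reporting would use
# measured energy and the local grid's actual carbon intensity.

def training_co2_kg(gpu_count: int,
                    gpu_power_watts: float,
                    hours: float,
                    pue: float = 1.5,                 # datacentre overhead
                    grid_kg_co2_per_kwh: float = 0.4) -> float:
    energy_kwh = gpu_count * gpu_power_watts * hours / 1000.0 * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 8 GPUs drawing 300 W each for 72 hours.
print(f"{training_co2_kg(8, 300.0, 72.0):.1f} kg CO2e")   # ~103.7 kg CO2e
```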

Certainty vs. Intelligence

Speaker: Edward A. Lee, UC Berkeley, US

Abstract: Mathematical models can yield certainty, as can probabilistic models where the probabilities degenerate. The field of formal methods emphasizes developing such certainty about engineering designs. In safety-critical systems, such certainty is highly valued and, in some cases, even required by regulatory bodies. But achieving reasonable performance in sufficiently complex environments appears to require the use of AI technologies, which resist such certainty. This talk suggests that certainty and intelligence may be fundamentally incompatible. First, Bayes' theorem shows, rather trivially, that certainty implies an inability to learn when presented with new data. A more subtle issue, however, is that logic and mathematics, necessary for certainty, may be a result of intelligence rather than the foundations of intelligence. This paper makes the case that intelligence is an evolved form of prediction, that logic and mathematics were not discovered but rather were invented because of their predictive value, and that the certainty they can give us cannot be about systems that exhibit intelligence.
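
The Bayes-theorem observation can be made precise in one line: a degenerate prior cannot be moved by any data. A minimal derivation, spelling out the step the abstract calls trivial, under the standard assumption that the observed data D has positive probability:

```latex
% If the prior is degenerate, P(H) = 1 and P(\neg H) = 0, then by the law
% of total probability P(D) = P(D \mid H)P(H) + P(D \mid \neg H)P(\neg H)
% = P(D \mid H), so for any data D with P(D) > 0:
\[
  P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D)}
              \;=\; \frac{P(D \mid H) \cdot 1}{P(D \mid H)} \;=\; 1 .
\]
```

Whatever is observed, the posterior equals the prior: certainty leaves nothing to learn.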

Verification Meets Causality

Speaker: Christel Baier, TU Dresden, Germany

Abstract: The early success story of the model checking approach relies fundamentally on two features. First, the algorithms provide a push-button technology: as soon as the model and specification have been generated, one obtains a result in a fully automated way. Second, if the algorithm terminates with a negative result, it can return counterexamples to the specification. Counterexamples are the first instance of what we call an explication: a mathematical concept that in some way sheds light on why the model checker has returned its result. While counterexamples are single execution traces violating the specification, they provide little insight into what causes the violation. To enhance system transparency, crisper explications for the satisfaction or violation of properties are needed. The talk presents an overview of techniques in this direction, using formal notions of causality and responsibility to explicate verification results for transition systems and Markovian models.
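
To make the notion of a counterexample concrete, here is a minimal explicit-state invariant checker (an illustrative sketch, not from the talk): a breadth-first search that, on a violation, returns the execution trace from an initial state to the violating state.

```python
# Minimal explicit-state invariant checker (illustrative sketch).
# On a violation it returns a counterexample: the execution trace from
# an initial state to a state where the invariant fails.
from collections import deque

def check_invariant(initials, successors, invariant):
    parent = {s: None for s in initials}
    queue = deque(initials)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            trace = [state]                    # reconstruct the trace
            while parent[trace[-1]] is not None:
                trace.append(parent[trace[-1]])
            return list(reversed(trace))       # the counterexample
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                                # invariant holds everywhere

# Toy example: a counter that must stay below 4 but can reach it.
trace = check_invariant(
    initials=[0],
    successors=lambda n: [n + 1] if n < 10 else [],
    invariant=lambda n: n < 4,
)
print("counterexample trace:", trace)          # [0, 1, 2, 3, 4]
```

The trace shows that the property fails but, as the abstract notes, not why it fails; causality-based explications aim to fill exactly that gap.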