Keynote Speakers

Speaker: Thomas Henzinger, Institute of Science and Technology Austria, Austria

Is Your AI-based System Five Star Sustainable?

Speaker: Ina K. Schieferdecker, TU Berlin, Germany

Abstract: Artificial Intelligence (AI) is one of the most powerful tools available to humanity. While some debate the potential dangers of AI in general and consider a moratorium on AI development and applications necessary, others propose concepts such as AI for Sustainability and Sustainable AI to harness the potential of AI to address humanity’s most pressing challenges, including climate change and environmental degradation – in short, Sustainable Intelligence. To ground the still heated debate, this paper delves deeper into the state of the art of Sustainable Intelligence, in particular how to qualitatively and quantitatively explore the environmental impact of AI-based systems and how to minimise it. It also discusses selected research topics that need to be explored further and concludes with a Five Star system for Sustainable Intelligence to share knowledge on environmental and societal goals for AI-based systems and to report and optimise their qualitative indicators and quantitative metrics in an open and standardised way.
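
To make the quantitative side concrete, a common first-order metric for the environmental impact of an AI-based system is the operational carbon footprint of a workload: energy consumed multiplied by the carbon intensity of the electricity grid. The sketch below is a minimal illustration of that calculation only, not the methodology of the talk; all numeric figures in it are hypothetical placeholders.

    # Minimal sketch: first-order operational carbon estimate for an AI workload.
    # The formula (energy x grid carbon intensity) is standard; every number
    # below is a hypothetical placeholder, not a value from the talk.

    def operational_co2_kg(avg_power_watts: float,
                           runtime_hours: float,
                           pue: float,
                           grid_intensity_kg_per_kwh: float) -> float:
        """Estimate CO2-equivalent emissions in kilograms.

        avg_power_watts: mean power draw of the hardware during the run
        runtime_hours:   wall-clock duration of the run
        pue:             data-centre Power Usage Effectiveness (>= 1.0)
        grid_intensity_kg_per_kwh: kg CO2e per kWh of grid electricity
        """
        energy_kwh = (avg_power_watts / 1000.0) * runtime_hours * pue
        return energy_kwh * grid_intensity_kg_per_kwh

    # Example: a hypothetical 8-GPU fine-tuning run (~350 W per GPU).
    print(operational_co2_kg(avg_power_watts=2800,
                             runtime_hours=24,
                             pue=1.4,
                             grid_intensity_kg_per_kwh=0.4))

Reporting such a figure alongside accuracy metrics, in an open and standardised way, is the kind of quantitative disclosure the abstract calls for.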

Certainty vs. Intelligence

Speaker: Edward A. Lee, UC Berkeley, US

Abstract: Mathematical models can yield certainty, as can probabilistic models where the probabilities degenerate (i.e., collapse to zero or one). The field of formal methods emphasizes developing such certainty about engineering designs. In safety-critical systems, such certainty is highly valued and, in some cases, even required by regulatory bodies. But achieving reasonable performance in sufficiently complex environments appears to require the use of AI technologies, which resist such certainty. This talk suggests that certainty and intelligence may be fundamentally incompatible. First, Bayes' theorem shows, rather trivially, that certainty implies an inability to learn when presented with new data. A more subtle issue, however, is that logic and mathematics, necessary for certainty, may be a result of intelligence rather than the foundations of intelligence. This paper makes the case that intelligence is an evolved form of prediction, that logic and mathematics were not discovered but rather invented because of their predictive value, and that the certainty they can give us cannot be about systems that exhibit intelligence.
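
The Bayesian point can be made in one line. As a sketch of the argument (not taken verbatim from the talk), suppose an agent is certain of a hypothesis H, so that P(H) = 1 and hence P(\lnot H) = 0. Then for any new data D with P(D) > 0,

    P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \lnot H)\,P(\lnot H)} \;=\; \frac{P(D \mid H)}{P(D \mid H)} \;=\; 1,

so no evidence can move the posterior: probabilities of one (or zero) are fixed points of Bayesian updating, which is exactly the inability to learn that the abstract refers to.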

Verification Meets Causality

Speaker: Christel Baier, TU Dresden, Germany

Abstract: The early success story of the model checking approach relies fundamentally on two features. First, the algorithms provide a push-button technology: as soon as the model and specification have been generated, one obtains a result in a fully automated way. Second, if the algorithm terminates with a negative result, it can produce counterexamples to the specification. Counterexamples are the first instances of what we call explications: mathematical concepts that in some way shed light on why the model checker has returned its result. However, while counterexamples are single instances of execution traces violating the specification, they provide little insight into what causes the specification violation. To enhance system transparency, crisper explications for the satisfaction or violation of properties are needed. The talk presents an overview of techniques that go in this direction by using formal notions of causality and responsibility to explicate verification results for transition systems and Markovian models.
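
To illustrate the counterexample feature in its simplest form (a generic sketch, not one of the causality-based techniques presented in the talk; the toy system and all names in it are made up), the following checks an invariant on an explicit-state transition system and, on a negative result, returns a violating execution trace:

    from collections import deque

    # Minimal sketch: invariant checking on an explicit-state transition system.
    # A negative result comes with a counterexample trace, i.e. a path from the
    # initial state to a state violating the invariant. The system below is a
    # made-up toy example, not one from the talk.

    TRANSITIONS = {          # state -> successor states
        "init":    ["idle"],
        "idle":    ["working", "idle"],
        "working": ["done", "error"],
        "done":    ["idle"],
        "error":   [],       # the error state is a deadlock
    }

    def check_invariant(initial, transitions, invariant):
        """Breadth-first search for a reachable state violating `invariant`.

        Returns None if the invariant holds on all reachable states, otherwise
        the shortest counterexample trace as a list of states.
        """
        queue = deque([[initial]])
        visited = {initial}
        while queue:
            trace = queue.popleft()
            state = trace[-1]
            if not invariant(state):
                return trace                  # counterexample found
            for succ in transitions.get(state, []):
                if succ not in visited:
                    visited.add(succ)
                    queue.append(trace + [succ])
        return None                           # invariant holds

    # Safety property: the error state is never reached.
    cex = check_invariant("init", TRANSITIONS, lambda s: s != "error")
    print(cex)   # ['init', 'idle', 'working', 'error']

The returned trace witnesses the violation but, as the abstract notes, does not by itself explain which transitions are causally responsible for it; that gap is what the causality- and responsibility-based explications address.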