Tracks (AISoLA)

AI Assisted Programming

Organizers:

  • Wolfgang Ahrendt (Chalmers University of Technology, SE)
  • Bernhard Aichernig (TU Graz, AT)
  • Klaus Havelund (NASA Jet Propulsion Laboratory, US)

Neural program synthesis using large language models (LLMs) trained on open-source code is quickly becoming a popular addition to the software developer’s toolbox. Services such as ChatGPT and GitHub Copilot, with their integrations into popular IDEs, can generate code in many different programming languages from natural-language requirements. This opens up fascinating new perspectives, such as increased productivity and making programming accessible to non-experts. However, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output given their training data, and there are countless examples of coherent but erroneous results. Even alert users fall victim to automation bias: the well-studied tendency of humans to over-rely on computer-generated suggestions. Software development is no exception.
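
As a minimal illustration of “coherent but erroneous” output, consider the hypothetical assistant suggestion sketched below (the function and test values are invented for illustration, not taken from any particular tool): a leap-year check that looks idiomatic yet silently drops a corner case, together with a few reference tests that expose it.

```python
# Hypothetical assistant suggestion: reads plausibly, but omits the
# 400-year rule of the Gregorian calendar (2000 and 1600 are leap years).
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and year % 100 != 0

# A handful of reference cases is enough to expose the error.
for year, expected in [(2024, True), (1900, False), (2000, True), (1600, True)]:
    actual = is_leap_year(year)
    print(f"is_leap_year({year}) = {actual}, expected {expected}:",
          "ok" if actual == expected else "WRONG")
```

The point is not this particular bug but the workflow it motivates: generated code should be confronted with specifications, tests, or verification rather than accepted on plausibility.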

This track is devoted to discussions and exchange of ideas on questions like:

  • What are the capabilities of this technology when it comes to software development?
  • What are the limitations?
  • What are the challenges and research areas that need to be addressed?
  • How can we harness the rising power of code co-piloting while achieving a high level of correctness, safety, and security?
  • What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?
  • What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?
  • Can quality assurance methods and technologies themselves profit from the new power of LLMs?

Topics of relevance to this track include the interplay of LLMs with the following areas:

  • Program synthesis
  • Formal specification and verification
  • Model driven development
  • Static analysis
  • Testing
  • Monitoring
  • Documentation
  • Requirements engineering
  • Code explanation
  • Library explanation
  • Coding tutorials

Submission:
https://equinocs.springernature.com/service/aiap2

Digital Humanities

Organizers:

  • Ciara Breathnach (University of Limerick, IE)
  • Tiziana Margaria (University of Limerick, IE)

We are in the middle of an AI and IT revolution and at a point of digital cultural heritage data saturation, but humanities scholarship is struggling to keep pace. In this track we discuss the challenges faced by both the computing and the historical sciences, in order to outline a roadmap that addresses, on the one hand, some of the most pressing issues of data access, preservation, conservation, harmonisation across national datasets, and governance, and, on the other, the opportunities and threats that AI and machine learning bring to rigorous data analytics. We concentrate primarily on written and printed documents rather than on pictures and images. We stress the importance of collaboration across disciplinary boundaries and cultures, so that mutual respect and equal partnerships are fostered from the outset and better practices can ensue.

In the track we welcome contributions that address these and other related topics:

  • Advances brought by modern software development, AI, ML and data analytics to the transcription of documents and sources
  • Tools and platforms that address the divide between physical, analog, or digital sources and the level of dataset curation needed for modern analytics
  • Design for accessibility and interoperability of datasets, including corpora and thesauri
  • Tools and techniques for machine understanding of form-based documents, recognition of digits, codes, and handwriting, and other semantically structured data
  • Knowledge representation for better analysis of semi-structured data from relevant domains (diaries, registers, reports, etc.)
  • Specific needs arising from the study of minority languages and populations, disadvantaged groups and any other rare or less documented phenomena and groups
  • Challenges relative to the conservation, publication, curation, and governance of data as open access artefacts
  • Challenges relative to initial and continuing education and curricular or extracurricular professional formation in the digital humanities professions
  • Spatial digital humanities

Submission:
https://equinocs.springernature.com/service/digihum

Health Care - Approaches Using Formal Methods and AI

Organizers:

  • Martin Leucker (University of Lübeck, DE)
  • Violet Kai Pun (Western Norway University of Applied Sciences, NO)

To ensure high-quality healthcare within given financial constraints, digitalization of the healthcare sector is mandatory. This digitalization is implemented either with conventional software development or with techniques from artificial intelligence, and it faces two important challenges: first, healthcare is a safety-critical domain and requires formal methods to ensure that systems work as required; second, the use of artificial intelligence in safety-critical domains is still not fully understood.

Formal methods build on precise mathematical modelling and analysis to verify a system’s correctness. They comprise static and dynamic analysis techniques such as model checking, theorem proving, and runtime verification, to mention the most prominent ones. Their theoretical foundations have been developed over the past decades, but their application in various domains remains a challenge.
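
As a small, self-contained illustration of one such technique, the sketch below implements a runtime-verification monitor for a hypothetical hospital-workflow safety property; the event names, the property, and the trace are illustrative assumptions, not part of any concrete system.

```python
# Runtime-verification sketch (illustrative): monitor the safety property
# "every 'administer' event for a patient is preceded by a 'prescribe'
# event for the same patient" over a finite event trace.
def monitor(trace):
    prescribed = set()   # patients that currently have a prescription
    violations = []
    for i, (event, patient) in enumerate(trace):
        if event == "prescribe":
            prescribed.add(patient)
        elif event == "administer" and patient not in prescribed:
            violations.append((i, patient))   # property violated at step i
    return violations

trace = [("prescribe", "p1"), ("administer", "p1"), ("administer", "p2")]
print(monitor(trace))   # [(2, 'p2')] -- p2 was never prescribed anything
```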

AI in healthcare is transforming the field by improving diagnostics, aiding in medical imaging analysis, personalizing treatment, and supporting clinical decision-making. It enables faster and more accurate analysis of medical data, enhances drug discovery, and assists in robot-assisted surgeries. AI also contributes to predictive analytics, virtual assistants, wearable devices, and clinical decision support. However, it is important to remember that AI is a tool to support healthcare professionals rather than replace them, and ethical considerations and data privacy are crucial in its implementation.

This track is devoted to discussions and exchange of ideas on questions like:

  • Formal modelling and optimization of hospital workflows
  • Validation and Clinical Implementation: How can algorithms be rigorously tested and integrated into clinical workflows?
  • Robustness and Reliability: How can systems be made robust, reliable, and adaptable to changing patient populations and data quality?
  • Human-AI Collaboration: How can systems effectively collaborate with healthcare professionals?
  • Long-term Impact and Cost-effectiveness: What is the long-term impact and cost-effectiveness of digitalization in healthcare?
  • Explainability and Interpretability: How can AI algorithms be made transparent and understandable to healthcare providers and patients?
  • Data Quality and Integration: How can diverse healthcare data sources be integrated while ensuring data quality and interoperability?
  • Ethical and Legal Considerations: What ethical and legal frameworks should be established to address privacy, consent, bias, and responsible AI use?
  • Regulatory and Policy Frameworks: What regulatory and policy frameworks are needed for the development and deployment of AI in healthcare?

These research questions drive efforts to address technical, ethical, legal, and societal challenges to maximize the benefits of digital solutions in healthcare.

Submission:
https://equinocs.springernature.com/service/hc-fmai

Models for Trustworthy Autonomous Systems

Organizers:

  • Ellen Enkel (University of Duisburg-Essen, DE)
  • Nils Jansen (Ruhr University Bochum, DE)
  • Mohammad Reza Mousavi (King’s College London, GB)
  • Kristin Yvonne Rozier (Iowa State University, US)

The term “model” has different meanings in different communities, e.g., in psychology, computer science, and human-computer interaction. Well-defined models and specifications are the bottleneck of rigorous analysis techniques in practice: they are often non-existent or outdated. This track discusses our understanding of models and model learning. The constructed models capture various aspects of system behaviour, which are inherently heterogeneous in contemporary autonomous systems. Once these models are in place, they can be used to address further challenges concerning autonomous systems, such as validation and verification, transparency and trust, and explanation.

The track will bring together experts from a diverse range of disciplines, such as artificial intelligence, formal methods, psychology, software and systems engineering, and human-computer interaction, as well as others dealing with autonomous systems. The goal is to consolidate these understandings of models in order to address three grand challenges in trustworthiness and trust: (1) understanding and analysing the dynamic relationship of trustworthiness and trust, (2) understanding mental models and trust, and (3) rigorous and model-based measures for trustworthiness and calibrated trust.

The topics of interest include but are not limited to:

  • Rigorous theory and analysis of concepts such as “trustworthy AI”, “trusted AI”, and “responsible AI”.
  • Ethical and value-centred design of AI-enabled systems.
  • Processes and techniques for Human-in-the-Loop AI and human oversight in AI decision-making.
  • Models of transparency, explainability, and traceability for AI-enabled systems, and their effect in societal risk mitigation.
  • Validation and verification of AI models within larger decision contexts regarding acceptability, trustworthiness, and trust.

Submission:
https://equinocs.springernature.com/service/mtas

Responsible and Trusted AI: An Interdisciplinary Perspective

Organizers:

  • Kevin Baum (German Research Center for Artificial Intelligence, DE)
  • Thorsten Helfer (Saarland University, DE)
  • Markus Langer (University of Freiburg, DE)
  • Eva Schmidt (TU Dortmund University, DE)
  • Andreas Sesing-Wagenpfeil (Saarland University, DE)

As AI systems continue to permeate various sectors and aspects of our lives, it is crucial to ensure their responsible development and use. This raises a variety of questions that go beyond purely technical aspects. Following the successful introduction of a similar track at AISoLA 2023, this interdisciplinary conference track aims to bring together research from a broad range of fields to tackle these questions. Researchers from philosophy, law, psychology, economics, sociology, political science and other relevant fields, including informatics, are welcome to share their recent research and to explore the societal implications of developing and employing AI systems.

The topics of interest include but are not limited to:

  • The clear analysis of terms like “trustworthy AI”, “trusted AI”, “responsible AI” and similar buzzwords
  • Identifying and balancing the (individual and societal) benefits and risks of AI systems
  • Ethical design and value alignment of AI systems
  • Addressing AI-related fears, misconceptions, and biases
  • Understanding the legal consequences of using AI systems
  • Robust legal frameworks to support responsible AI adoption, including methods for and challenges to compliance with the future EU AI Act
  • Assessing the impact of AI on the labor market and workforce
  • AI for social good: opportunities and challenges in driving economic growth and addressing societal issues
  • (Attribution of) responsibility and accountability in AI contexts
  • The role of human oversight in AI decision-making
  • Legal responsibility (e.g. civil liability) in case of malfunctioning AI systems
  • Ownership and authorship in the context of generative AI
  • Intellectual property rights, copyright, and revenue sharing for AI-created works
  • The impact of AI on education, grading, and academic integrity
  • Bias, discrimination, and algorithmic fairness: from detection to mitigation
  • The role of transparency, explainability, and traceability in societal risk mitigation
  • Holistic Assessment of AI models
  • Evaluating AI models within larger decision contexts regarding acceptability, trustworthiness, and trust
  • Evaluating normative choices in AI system design and deployment

By exploring these topics, this track will contribute to a deeper and more holistic understanding of AI and its societal implications. It aims to promote the interdisciplinary debate surrounding the development of responsible, justifiably (or appropriately) trusted, and beneficial AI technologies while addressing the general societal challenges they pose in real-world applications. The track is part of the conference AISoLA, which provides an open forum for discussions about recent progress in machine learning and its implications. AISoLA invites researchers with backgrounds in computer science, philosophy, psychology, law, economics, and the social sciences to participate in an interdisciplinary exchange of ideas and to establish new collaborations.

For questions, please contact eva.schmidt@tu-dortmund.de

If you would like to contribute to the proceedings, please submit a paper (full paper: 12-15 pages; short paper: 6-11 pages). If you would like to contribute to the post-proceedings, please submit an abstract (500 words).

Submission:
https://equinocs.springernature.com/service/RTAI-IP

This track is co-organized by the Centre for European Research in Trusted Artificial Intelligence (CERTAIN).

Safe Autonomous Vehicles

Organizers:

  • Falk Howar (TU Dortmund University, DE)
  • Hardi Hungar (German Aerospace Center and Carl von Ossietzky University Oldenburg, DE)

In the domain of autonomous vehicles, the integration of advanced AI technologies is pivotal for addressing current challenges and unlocking new potential. AI’s role extends beyond its established capabilities in environment perception to emerging technologies such as vision-language-action models. These models offer a nuanced approach to vehicle interaction, enabling the interpretation of complex scenarios and commands through the analysis of natural language and video data. Despite these technological advances, managing risk and ensuring safety in these AI-driven systems remains a primary concern.

We invite researchers, practitioners, and experts to submit their original research contributions on the safety of AI-based autonomous vehicles to AISoLA. We particularly encourage submissions of contributions that invite discussion on the basis of new findings.

Topics of interest include, but are not limited to:

  • Integration and implications of LLMs and similar models in autonomous vehicle navigation and interaction.
  • Advanced safety verification and validation techniques tailored for contemporary AI-driven autonomous vehicles.
  • Application of formal methods in assuring the safety and reliability of AI-based autonomous systems.
  • Exploration of human-AI collaboration, building trust and understanding in autonomous vehicle operations.
  • Enhancing the robustness and resilience of AI algorithms to thrive in uncertain and dynamically changing environments.
  • Ethical considerations, responsible decision-making, and AI ethics in the context of autonomous driving.
  • Architectural innovations in safety-critical systems for AI-based autonomous vehicles.
  • Leveraging data-driven approaches for comprehensive safety assurance and risk analysis in autonomous driving practices.
  • Evolving safety standards, regulations, and certification processes tailored to the unique demands of AI-based autonomous vehicles.
  • Testing, simulation, and validation methodologies to ensure the reliability of autonomous vehicle systems.
  • Security and privacy concerns specific to the deployment and operation of AI-based autonomous vehicles.

Submission:
https://equinocs.springernature.com/service/sav

Verification and Learning for Assured Autonomy

Organizers:

  • Devdatt Dubhashi (Chalmers University of Technology, SE)
  • Raúl Pardo (IT University Copenhagen, DK)
  • Gerardo Schneider (University of Gothenburg, SE)
  • Hazem Torfah (Chalmers University of Technology, SE)

In recent years, there has been a paradigm shift in the design of cyber-physical systems (CPS) towards using learning-enabled components to perform challenging tasks in perception, prediction, planning, and control. This transformation has created a gap between the implementation of this emerging class of learning-enabled cyber-physical systems and the guarantees that one can provide on their safety and reliability. Closing this gap requires closer interaction between different research communities and a fundamental revision of how formal methods and machine learning theory can be combined in the analysis of such systems.

The goal of this track is to foster the exchange of ideas on assured autonomy between researchers from the formal methods, control, and AI communities. We welcome contributions related to the design and verification of autonomous CPS. Topics of relevance include, but are not limited to:

  • Specification formalisms for learning-enabled systems
  • Statistical (PAC) learning theory
  • Bandits and online optimization
  • Contract-based design
  • Correct-by-design control
  • Runtime verification and enforcement
  • Environment modeling and simulation-based analysis
  • Probabilistic programming and reasoning
  • Real vs sim validation
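
To make the statistical and simulation-based topics above concrete, the sketch below estimates the failure probability of a toy stochastic braking controller from Monte Carlo simulations and attaches a one-sided Hoeffding confidence bound; the dynamics, policy, and numbers are invented for illustration.

```python
import math
import random

# Toy stochastic system (illustrative): a braking policy should keep the
# final distance to an obstacle positive despite sensor noise.
def run_is_safe(rng):
    distance, speed, dt = 50.0, 10.0, 0.1
    for _ in range(100):
        noisy = distance + rng.gauss(0.0, 2.0)    # noisy distance sensor
        if noisy < 30.0:                          # simple braking policy
            speed = max(speed - 0.2, 0.0)
        distance -= speed * dt
    return distance > 0.0

rng = random.Random(42)
n = 10_000
failures = sum(not run_is_safe(rng) for _ in range(n))
p_hat = failures / n
delta = 1e-3                                      # confidence level 1 - delta
# One-sided Hoeffding bound: P(true_p > p_hat + eps) <= exp(-2 * n * eps^2).
eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
print(f"estimated failure probability: {p_hat:.4f}")
print(f"with probability >= {1 - delta}: true probability <= {p_hat + eps:.4f}")
```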

Submission:
https://equinocs.springernature.com/service/vlaa

Verification for Neuro-Symbolic Artificial Intelligence (VNSAI)

Organizers:

  • Taylor Johnson (Vanderbilt University, US)
  • Daniel Neider (TU Dortmund University, DE)

Neuro-symbolic artificial intelligence (AI) is a burgeoning field that combines the strengths of neuro-computational and symbolic models. Neural networks, inspired by the brain’s structure and function, are used for learning and processing complex patterns in data. Symbolic reasoning, in contrast, uses logical rules and representations for generalization, reasoning, and explanation. This integration allows neuro-symbolic AI systems to learn from large amounts of data, reason about abstract concepts, and provide explainable solutions to complex problems.
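
A minimal sketch of this division of labour, with a stubbed-out network in place of real perception and made-up classes, scores, and rules:

```python
# Neuro-symbolic sketch: a (stubbed) neural layer yields soft scores;
# a symbolic rule layer turns them into an auditable, explainable decision.
def neural_perception(image):
    # Stand-in for a trained network; returns class confidences.
    return {"person": 0.92, "car": 0.15, "red_light": 0.88}

RULES = [                       # human-readable knowledge over discrete facts
    (("person", "red_light"), "stop"),
    (("car",), "yield"),
]

def decide(image, threshold=0.5):
    scores = neural_perception(image)
    facts = {c for c, p in scores.items() if p >= threshold}  # neural -> symbolic
    for premises, action in RULES:
        if all(p in facts for p in premises):
            return action, premises          # the firing rule is the explanation
    return "proceed", ()

action, why = decide(image=None)
print(action, "because", why)    # stop because ('person', 'red_light')
```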

This track focuses on developing rigorous methodologies for analyzing and reasoning about neuro-symbolic AI systems, including their formal verification. We aim to foster collaboration among researchers across various related areas, including neural control of continuous systems, neural ordinary and partial differential equations (ODEs/PDEs), automated analysis of neuro-symbolic AI, and neuro-symbolic verification. Another topic of interest is the generation of symbolic explanations for neural networks and other AI systems.
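
As a glimpse of what such formal verification can look like for the neural part, the sketch below runs interval bound propagation (a standard sound-but-incomplete technique) through a tiny hand-wired ReLU network to check a toy output property; the weights, input box, and property are invented for illustration.

```python
import numpy as np

# Interval bound propagation (IBP): push an input box [lo, hi] through the
# network, obtaining a sound (over-approximate) box around all outputs.
def affine_bounds(W, b, lo, hi):
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius           # worst-case spread of an affine map
    return c - r, c + r

def ibp(layers, lo, hi):
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(layers) - 1:      # ReLU on hidden layers; it is monotone
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hand-wired 2-2-1 ReLU network (illustrative weights, not a trained model).
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.1])),
    (np.array([[1.0, 1.0]]), np.array([-0.2])),
]
lo, hi = ibp(layers, np.array([0.0, 0.0]), np.array([0.1, 0.1]))
# Property: the output stays below 1.0 for every input in the box.
print(f"output in [{lo[0]:.3f}, {hi[0]:.3f}]; property holds: {hi[0] < 1.0}")
```

Because IBP over-approximates, a “holds” verdict is a proof for the whole input box, while a “fails” verdict may be spurious; this soundness/completeness trade-off is exactly what verification tools and competition benchmarks probe.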

Building on last year’s related track on Safety Verification of Deep Neural Networks (SVDNN), which focused more on purely neural systems, we also invite submissions of benchmarks for future iterations of the International Verification of Neural Networks Competition (VNN-COMP), for the category on Artificial Intelligence and Neural Network Control Systems (AINNCS) of the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP), and as input for the Verification of Neural Networks standard (VNN-LIB). We welcome the submission of benchmark systems, networks, and specifications to support developing and evaluating novel verification techniques for neural networks, neuro-symbolic AI, and related systems.

Submission:
https://equinocs.springernature.com/service/vnsai

Doctoral Symposium

Organizers:

  • Sven Jörges (University of Applied Sciences and Arts, DE)
  • Steven Smyth (TU Dortmund University, DE)
  • Salim Saay (University of Limerick, IE)

The ISoLA Doctoral Symposium is a forum for young researchers who are looking for valuable scientific feedback and networking opportunities. Master’s and PhD students are invited to present their research ideas and projects, discuss them with the scientific community, and establish collaborations in their field of research.

The Doctoral Symposium invites contributions on all topics in the scope of ISoLA, including (but not limited to):

  • Deduction and model-checking
  • System specification, construction and transformation
  • Program analysis and verification
  • Composition and refinement
  • Testing and test-case generation
  • Software system certification
  • System maintenance and evolution
  • Hybrid and safety-critical systems
  • Model-based testing and automata learning
  • Applications, case studies and experience reports of formal methods in practice

The Doctoral Symposium will be held as a combination of short presentations and posters. All participants will give a short presentation about their work in a special session during the conference. Additionally, there is the opportunity to display posters, which can be presented and discussed during ISoLA’s generous coffee breaks.

Submission:
https://equinocs.springernature.com/service/docsyn