Industrial AI for Safety-Critical Systems

Presentation Abstracts

DNV: 

Talk 1: Trustworthy Industrial AI: Why and How?

As AI is increasingly adopted in safety-critical operations, trustworthiness becomes essential. This includes key principles such as safety, robustness, transparency, and explainability, all critical for managing risk and enabling responsible AI deployment. In this presentation, we explore how these principles are interconnected, why they matter in industrial contexts, and share insights into how we address them in practice at DNV.

Talk 2: AI for Assurance and Assurance of AI: Building Confidence in Industrial Systems

As artificial intelligence becomes integral to industrial systems, ensuring trust, safety, and reliability is more critical than ever. This presentation explores the evolving relationship between AI and assurance, highlighting the importance of building confidence in both the technologies we develop and the systems they support. It invites stakeholders across industries to rethink how we manage risk and foster trust in an increasingly intelligent world.

Talk 3: Bayesian surrogate modeling and optimization of extreme response calculation

Engineers often need to understand long-term behaviour of complex models in stochastic settings, such as when performing extreme response calculations (e.g., Ultimate Limit State). However, the high computational cost of running such models often makes directly calculating the value of interest infeasible, so approximate methods are required. Bayesian Optimisation (BO) is one such approach: it offers a robust way to measure and reduce the uncertainty introduced by approximation. In this presentation, we describe the challenges we encountered in applying BO to extreme response problems and how we overcame them. We explain the requirements for surrogate models to be useful in extreme value analysis, present strategies to efficiently reduce uncertainty, and introduce software we developed to make BO easier to apply in this context.
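As a rough illustration of the kind of surrogate-assisted loop described above, the sketch below fits a Gaussian-process surrogate to a handful of expensive model runs and uses an upper-confidence-bound rule to pick the next point to evaluate. The expensive_model function and the one-dimensional search space are hypothetical placeholders; this is not the DNV software referred to in the abstract.

    # Minimal sketch of surrogate-assisted search for a large response value.
    # "expensive_model" is a hypothetical stand-in for a costly simulator.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_model(x):
        # placeholder for a long-running simulation of the response
        return np.sin(3.0 * x) + 0.5 * x

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 2.0, size=(5, 1))            # initial design points
    y = np.array([expensive_model(x[0]) for x in X])

    kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

    for _ in range(15):                               # budget of extra model runs
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        ucb = mean + 2.0 * std                        # optimistic bound on the response
        x_next = candidates[np.argmax(ucb)]           # where a larger response may hide
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_model(x_next[0]))

    print("largest response found:", y.max())

In an extreme-response setting the same loop would target a long-term quantile rather than a simple maximum, and the acquisition rule would be adapted to reduce the uncertainty of that quantity specifically.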

 

Equinor: Hands-on risk assessment for generative AI in drilling and well intervention.

As artificial intelligence (AI) applications spread into operational areas with heightened safety requirements, the importance of adequate risk assessment increases. So far, there has been a lack of hands-on approaches for identifying risks in human-AI interaction in operations. In this presentation, we demonstrate a new qualitative method for risk assessment, applied during the development of an AI assistant for drilling engineers. Through observations and interviews with experts and end-users, we identified AI shortcomings, their likelihood, and potential consequences. Based on our findings, we discuss mitigating measures as well as their feasibility and effectiveness.

 

Kongsberg Maritime: Applications of AI in autonomous marine operations.

Increasing automation of marine operations has driven innovation at Kongsberg Maritime over several decades. Today, artificial intelligence is playing an increasingly important role in enabling further advances in ship autonomy. AI’s ability to generalize from data allows us to deploy complex systems where coding the rules explicitly would be impractical. At the same time, large neural networks are inherently opaque: it is difficult to know precisely what the model has learned from the data and what it has not, and any errors or gaps in the training data add to this uncertainty. The main areas of application are currently computer vision, where AI is especially useful for detecting objects that do not transmit AIS data (such as small vessels, logs, or floating debris), and time-series prediction for forecasting the movement of objects (currently limited to AIS-equipped ships). Conversely, in vessel control the system’s desired behavior can be described precisely using mathematical models, and conventional rule-based methods remain more appropriate than neural networks.

For the foreseeable future, we will continue using AI alongside existing methods such as model predictive control (MPC). AI will enhance the safety of marine operations by providing assistance and redundancy, but it will not be allowed to make independent decisions. It should be noted that given the remarkable pace of development in AI, the foreseeable future may well prove remarkably brief.

 

 

Havtil: Artificial Intelligence: Safety and responsibility

The Norwegian Ocean Industry Authority (Havtil) takes a risk-based approach to following up activities. This entails directing efforts towards the issues where the risk is highest, particularly with regard to major accident potential. Given the industry's rapid technological development, we are strengthening our follow-up of companies' own processes for managing AI risks. To promote responsible use of AI in the industry, Havtil has chosen AI and risk as its main issue for 2025.

 

UiO:

Talk 1: Morten Dæhlen, TRUST – The Norwegian Centre for Trustworthy AI

Abstract: TRUST aims to deliver ground-breaking transdisciplinary research and innovation that makes AI accurate, interpretable, aligned and inclusive, safe, sustainable, and well-governed, with the capacity to achieve all of this at scale, and thus trustworthy. To do so, TRUST is organized into 14 research areas aligned with problems in 15 action clusters, with partners in academia, industry and civil society. Examples will be given in the talk.

 

Talk 2: Fred Espen Benth, Structure-informed learning and interpretability: the case of energy markets

We argue for using structural information when designing neural networks for approximating functions, showcasing pricing in energy markets and learning solution maps of partial differential equations. In both cases, the mathematical (and financial) context provides knowledge that can be used in structure-informed learning. We exploit the universality of neural networks defined on spaces of functions (infinite-dimensional spaces). The talk is based on joint work with Nils Detering (Duesseldorf) and Luca Galimberti (King's College London).
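As a rough, generic illustration of networks defined on spaces of functions, the sketch below uses a DeepONet-style branch/trunk split: the branch network encodes an input function sampled at fixed sensor points, and the trunk network encodes the point where the output function is evaluated. This is a standard operator-learning layout written for illustration only, not the specific structure-informed architecture developed in the joint work cited above.

    # Minimal DeepONet-style sketch of a network acting on functions.
    import torch
    import torch.nn as nn

    class OperatorNet(nn.Module):
        def __init__(self, n_sensors=32, width=64):
            super().__init__()
            # branch: encodes the input function from its samples at fixed sensors
            self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                        nn.Linear(width, width))
            # trunk: encodes the location where the output function is evaluated
            self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                       nn.Linear(width, width))

        def forward(self, u_sensors, x_eval):
            # u_sensors: (batch, n_sensors) samples of the input function
            # x_eval:    (batch, 1) evaluation locations
            return (self.branch(u_sensors) * self.trunk(x_eval)).sum(dim=-1)

    net = OperatorNet()
    u = torch.randn(8, 32)        # 8 input functions sampled at 32 sensors
    x = torch.rand(8, 1)          # one evaluation point per function
    print(net(u, x).shape)        # torch.Size([8])

Structure-informed variants would constrain or parameterize such a network using the known mathematical form of the pricing or PDE solution map, rather than leaving both sub-networks fully generic.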

SINTEF (AID):

Artificial Intelligence (AI) for decision-making has the potential to reshape how we address critical challenges across diverse sectors, through both AI-enhanced human decisions and autonomous AI systems. Decisions underpin our society, from balancing energy supply and demand to managing healthcare resources or ensuring the production and distribution of essential goods. AI for decision-making can improve resource use, cut costs, and enhance safety. However, taking optimal decisions while accounting for performance, uncertainties, risks, consequences, and the need for fairness and transparency remains a challenge. The core originality of AID's fundamental research lies in a holistic framework for enhancing AI-based decisions, where the learning happens on the decision objectives rather than on descriptive models. Such a framework is essential for POs to leverage AI effectively, enabling them to solve societal challenges, foster positive change, and promote Norway's competitiveness. AID brings together 9 Norwegian academic and applied institutions covering complementary aspects of decision-making, 13+ international research collaborators, 9 public entities, 5 companies with public mandates, and 36 Norwegian businesses.
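To make the contrast concrete, the toy sketch below compares a model trained on prediction error with one trained directly on a downstream decision objective (an asymmetric, newsvendor-style cost). The cost function, the linear demand model, and all numbers are illustrative assumptions, not part of the AID research programme.

    # Toy contrast: fit to prediction error vs. learn on the decision objective.
    import torch

    torch.manual_seed(0)
    x = torch.rand(200, 1)                               # observed context
    demand = 10.0 * x.squeeze() + torch.randn(200)       # noisy true demand

    def decision_cost(order, demand, under=5.0, over=1.0):
        # running short is assumed five times as costly as over-ordering
        shortfall = torch.relu(demand - order)
        surplus = torch.relu(order - demand)
        return (under * shortfall + over * surplus).mean()

    for objective in ("prediction", "decision"):
        w = torch.zeros(1, requires_grad=True)           # slope of a linear policy
        opt = torch.optim.SGD([w], lr=0.05)
        for _ in range(500):
            order = (x * w).squeeze()                     # decision = w * context
            loss = ((order - demand) ** 2).mean() if objective == "prediction" \
                else decision_cost(order, demand)
            opt.zero_grad(); loss.backward(); opt.step()
        realized = decision_cost((x * w).squeeze(), demand)
        print(f"{objective}-trained policy -> realized decision cost {realized.item():.2f}")

The policy trained on the decision objective accepts a worse demand forecast in exchange for a lower realized cost, which is the kind of trade-off a decision-centric framework is designed to exploit.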

 

NASA:

Industry session: NASA Langley experiences on modeling and managing uncertainty

Abstract: Accurate Uncertainty Quantification (UQ) is instrumental for effective system identification, robustness analysis, verification, and robust design. This presentation gives a brief overview of some key methodologies developed and used at NASA Langley to systematically address UQ for dynamic systems. It is organized in three complementary parts. The first part focuses on the use of set deformations for robustness analysis and robust design, and their application to the control verification and tuning of a model reference adaptive controller. The second part introduces a strategy for modeling multi-dimensional data sets having possibly strong parameter dependencies. Lastly, we present lessons learned from responses to a few UQ challenge problems we posed in the last few years.

 

Research session: Risk-aware Data-driven Decision Making under Uncertainty.

This talk introduces strategies for the robust data-driven design of systems subject to multiple requirements. The data, which might correspond to realizations of aleatory uncertainties and/or changing operating conditions, are called scenarios. Data overfitting is prevented by ensuring that the requirements are met when such scenarios are perturbed from their nominal values. The feasible decision space is expanded by eliminating a given number of optimally chosen outliers from the set of scenarios, and by replacing constraints on the worst-case perturbations with chance constraints. These relaxations are used to trade off a lower cost against improved robustness to uncertainty. For instance, we can pursue a riskier design that attains a lower cost in exchange for violating the constraints for a few scenarios, or we might seek a conservative design that satisfies the constraints with an acceptably high probability for as many perturbed scenarios as possible. Furthermore, we will present risk-based and risk-free formulations to control the effects that outlier elimination has on the resulting robust design.
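As a toy illustration of this cost-versus-robustness trade-off, the sketch below searches over candidate designs and relaxes the scenario constraints by allowing a chosen number of violations. The linear cost, the coverage constraint, and the sampled scenarios are hypothetical stand-ins, not the formulations used at NASA Langley.

    # Toy scenario relaxation: lower cost by tolerating a few violated scenarios.
    import numpy as np

    rng = np.random.default_rng(1)
    scenarios = rng.normal(loc=1.0, scale=0.3, size=50)   # sampled uncertain demands
    designs = np.linspace(0.0, 3.0, 301)                   # candidate design values
    # requirement g(x, s) = s - x <= 0: the design x must cover the scenario demand s
    # the cost of a design is taken to be its value, so smaller designs are cheaper

    for allowed_violations in (0, 2, 5):
        feasible = [x for x in designs
                    if np.sum(scenarios - x > 0.0) <= allowed_violations]
        best = min(feasible)
        rate = np.mean(scenarios - best > 0.0)
        print(f"allowing {allowed_violations} violated scenarios -> "
              f"design {best:.2f}, empirical violation rate {rate:.2f}")

Chance constraints play the same role in the continuous setting: instead of discarding a fixed number of scenarios, the design is required to satisfy the requirement with a prescribed probability.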

 

University of York: The BIG Argument for AI Safety Cases

We present our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems (https://arxiv.org/abs/2503.11705). The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Firstly, it is balanced by addressing safety alongside other critical ethical issues such as privacy and equity, acknowledging complexities and trade-offs in the broader societal impact of AI. Secondly, it is integrated by bringing together the social, ethical and technical aspects of safety assurance in a way that is traceable and accountable. Thirdly, it is grounded in long-established safety norms and practices, such as being sensitive to context and maintaining risk proportionality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundational model, the BIG argument insists on a systematic treatment of safety. Further, it places a particular focus on the novel hazardous behaviours emerging from the advanced capabilities of frontier AI models and the open contexts in which they are rapidly being deployed. These complex issues are considered within a wider AI safety case, approaching assurance from both technical and sociotechnical perspectives. Examples illustrating the use of the BIG argument are provided throughout the talk.