Why AI literacy matters now: The urgent compliance challenge for organizations
Introduction
As organizations across Europe accelerate their adoption of artificial intelligence (AI), a new regulatory reality is emerging. AI is not only reshaping industries, processes, and daily life but also introducing significant responsibilities for organizations. The ability to engage with AI systems in an informed and responsible manner has become a critical skill, particularly considering the potential risks these systems pose to people. Recognizing this need, the European Union has enacted the EU AI Act (2024/1689).
The AI Act entered into force on 1 August 2024, with its provisions becoming applicable in phases through 2025–2027. Notably, Article 4 of the AI Act, which mandates AI literacy measures and underpins the concept of ‘AI awareness’ discussed in this article, has applied since 2 February 2025. This provision places a clear and immediate obligation on both providers and deployers of AI systems, requiring them to ensure that everyone involved, including staff and contractors, attains a sufficient level of AI literacy.
As organizations continue to develop and deploy AI systems at scale, they must recognize that AI literacy is no longer a “nice to have”: it is now a legal requirement. Delaying its implementation can have critical consequences for an organization’s operations.
So, what does this mean in practice, and why are so many organizations struggling to comply? In this article, we examine the key challenges organizations face when implementing measures to meet Article 4 requirements and provide practical guidance for navigating this regulatory obligation.
What happens if organizations don’t get AI literacy right?
Consider an organization eager to deploy an innovative AI-powered service. In the rush to launch, essential staff training and clear communication with users about the system’s functionality and potential risks are neglected. Scenarios like this arise easily because organizations face several practical hurdles on the way to compliance with Article 4’s AI literacy requirements. These may include:
- Unclear expectations: The Act requires “sufficient” AI literacy, but what this means in practice is often ambiguous. Organizations struggle to determine what level of knowledge is enough for different roles.
- Resource constraints: Many lack the internal expertise, time, or budget to develop and maintain effective, up-to-date training for all relevant staff.
- Keeping pace: AI technology and regulations evolve rapidly, making it difficult to ensure training remains current and relevant.
- Integration with existing policies and risk management: AI literacy must be embedded within broader compliance, data protection, and cybersecurity frameworks, which can be complex and siloed.
- User trust and transparency: Without clear communication and training, users may not understand how AI impacts them, leading to mistrust or resistance.
Inadequate AI literacy may also lead to poor decision-making, increased vulnerability to cyber threats, and missed opportunities for innovation and growth. Ultimately, ensuring robust AI literacy is not just about compliance — it is a strategic imperative for sustainable success, resilience, and responsible leadership in the age of artificial intelligence. By proactively investing in AI awareness and training, organizations can safeguard their future and build lasting trust with customers, employees, and regulators alike.
Background on AI literacy
AI literacy refers to the “skills, knowledge and understanding” needed to work with AI systems (Article 3(56) of the AI Act). It enables staff and others operating or using these systems on behalf of providers and deployers to make informed decisions about their use. It also helps them recognize opportunities, understand risks, and anticipate potential harms, regardless of their technical background.
The AI Act places significant emphasis on AI literacy, requiring both providers and deployers of AI systems to ensure that all relevant parties possess sufficient awareness and understanding of such systems. For the definitions of provider and deployer, read our article Introduction to the EU’s AI Act: What you should know.
AI literacy is not only about understanding how AI systems function but also about ensuring that users possess the knowledge necessary to navigate the risks and opportunities arising from AI, and can make informed decisions about the deployment of AI systems. The EU AI Act addresses this explicitly: organizations must use their best efforts to ensure that their staff and representatives achieve a sufficient level of AI literacy, regardless of the risk category of the AI systems involved.
Approaching AI literacy in practice
It is not sufficient merely to inform users that an AI system exists and how to operate it. While such information is essential, it must be complemented by broader, transferable knowledge about what AI is, what it is capable of, and where its limitations lie, including training on ethical aspects.
Making AI-related policies easily accessible, such as through the organization’s intranet or internal communications, forms an important foundation for awareness and transparency. However, to truly support responsible and compliant AI use, these resources must be complemented by comprehensive and regularly updated training programmes. Such programmes help ensure that employees not only understand the policies but also know how to apply them in real-world contexts.
The target level of AI literacy is contextual — the training should take into account employees’ prior knowledge and background, including technical expertise, experience, education, and any previous AI training. A basic curriculum should be offered to all employees, followed by specialized, role-based training sessions.
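As a purely illustrative sketch, this tiered approach can be recorded in a simple training matrix. The Python snippet below is a hypothetical example; the role names, module names, and the assign_training helper are our own inventions for illustration, not anything prescribed by the AI Act:

```python
# Illustrative sketch of a tiered AI-literacy training plan.
# Roles, modules, and structure are hypothetical examples only.

BASIC_CURRICULUM = [
    "What AI is and how it works",
    "Capabilities and limitations of AI systems",
    "Risks, ethics, and responsible use",
]

# Role-based modules layered on top of the basic curriculum.
ROLE_SPECIFIC_MODULES = {
    "developer": ["Testing and documenting AI systems"],
    "hr": ["AI in recruitment: bias and transparency"],
    "legal": ["Article 4 obligations and risk categories"],
}

def assign_training(role: str) -> list[str]:
    """Everyone receives the basic curriculum; role-based modules come on top."""
    return BASIC_CURRICULUM + ROLE_SPECIFIC_MODULES.get(role, [])

print(assign_training("hr"))
# Basic curriculum plus 'AI in recruitment: bias and transparency'
```

The point of such a matrix is simply to make the baseline-plus-specialization logic explicit and auditable; the actual content and depth of each module should follow from the contextual assessment described above.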
As a final point, training should be tailored to specific, recognized use cases so that different organizational groups receive relevant guidance for their particular AI applications. To that end, maintaining a ‘use case library’ can be a useful way to verify and document those use cases.
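To make the idea concrete, a use case library can be as simple as one structured record per AI application. The following is a minimal, hypothetical sketch; the field names are our own and should be adapted to your governance framework:

```python
# Hypothetical sketch of a 'use case library' entry for documenting AI use.
# Field names are illustrative, not taken from the AI Act.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIUseCase:
    name: str                        # e.g. "Customer support chatbot"
    owner: str                       # accountable business unit or person
    description: str                 # what the system does and for whom
    affected_groups: List[str]       # staff, customers, or other affected persons
    known_risks: List[str] = field(default_factory=list)
    training_completed: bool = False # has role-based training been delivered?
    last_reviewed: Optional[date] = None

# Example entry in the library
library = [
    AIUseCase(
        name="CV screening assistant",
        owner="HR",
        description="Ranks incoming applications for recruiters to review.",
        affected_groups=["job applicants", "recruiters"],
        known_risks=["bias in ranking", "over-reliance by reviewers"],
    )
]
```

Even a lightweight record like this gives each use case a named owner and a documented risk picture, which makes it far easier to target training and to demonstrate compliance efforts later.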
By integrating these fundamental measures, organizations can promote ethical AI use, mitigate risks, and align AI systems with strategic objectives. It is important to acknowledge, however, that AI literacy is only one piece of the puzzle: many other measures and procedures must also be implemented to ensure compliance with the regulatory requirements.
How DNV Cyber can support your organization in complying with the AI literacy requirements
We at DNV Cyber help organizations meet their obligations under the AI literacy requirements of the EU AI Act. We support organizations in understanding what AI literacy entails, why it matters, and how to fulfil the related responsibilities.
Beyond the general AI literacy requirements, our services include support for implementing awareness-raising activities, such as training programmes, communications, and internal policies, that align with other regulatory expectations such as privacy and cybersecurity. Leveraging the expertise of our multidisciplinary team of legal, cybersecurity, and awareness and training specialists, we collaborate with you to design awareness services tailored to your organization’s specific needs and objectives.