Artificial intelligence has moved from a research field to a geopolitical force in just a few years. As AI becomes part of everything from business strategy to national security, a new question emerges: who is really in control of the technology?
The Pentagon calls one AI model “best in class”. Days later, the same technology is described as a potential security risk. When companies such as Anthropic and OpenAI find themselves at the centre of political tensions and government contracts, artificial intelligence becomes more than innovation. It becomes geopolitics.
AI outpaces the law
The EU AI Act, the European Union’s first comprehensive regulatory framework for artificial intelligence, entered into force in August 2024 and will be gradually implemented through 2026–2027. At the same time, the adoption of AI is accelerating rapidly across both the public and private sectors.
- AI is being embraced faster than we are able to regulate it, says Dr. Piet Delport, Associate Professor and Programme Lead of Digital Assurance & Security Management at Noroff University College.
The question is therefore no longer whether we should use AI – but how we should govern it.
When technology changes the rules
Artificial intelligence is already influencing everything from healthcare and finance to defence and intelligence operations. According to the McKinsey Global Survey on AI, around 70 percent of organisations worldwide have adopted AI in at least one business function. Adoption on that scale illustrates how quickly the technology is becoming embedded in core processes across industries.
At the same time, cyber security communities report a growing number of vulnerabilities related to AI systems, including weaknesses in training data, model manipulation, and the misuse of AI tools in cyberattacks.
AI represents more than efficiency gains. It also introduces new attack surfaces, new dependencies, and new regulatory challenges. In this landscape, the need for clear governance, risk awareness, and responsible oversight becomes increasingly pressing.
What is AI governance – and why does it matter?
When organisations implement artificial intelligence, the challenge is not only technological. It is also about how they manage risk, responsibility, and compliance.
- The primary purpose of governance is to create a stable foundation for safe and manageable growth, Delport explains.
The challenge is that technology often evolves faster than organisational control mechanisms.
- When AI is introduced outside established governance structures, we risk undermining the very stability we aim to create.
This is why many organisations are now working more systematically with Governance, Risk & Compliance (GRC) frameworks related to artificial intelligence.
“We need to be enthusiastic, but careful”
Public debates about AI often focus on technological competition, geopolitical tensions, and large government contracts. From a security leadership perspective, the discussion is just as much about accountability and control.
Have we properly assessed the risks?
Are we compliant with current regulations?
Do we have sufficient oversight?
Who is accountable if something goes wrong?
- AI is an extraordinary technology, but governance is lagging behind, says Delport.
He believes organisations need clearer regulation, strategic prioritisation, and strong leadership commitment.
- This is not just a technology question – it is a leadership question.
The students who will govern the future
Although AI evolves rapidly, the Digital Assurance & Security Management bachelor programme is built on principles that remain relevant as the technology changes.
Students learn how organisations can establish effective governance structures, conduct risk assessments, and navigate increasingly complex regulatory environments.
Through realistic case studies, they analyse scenarios where new technologies introduce new types of risk.
- We had to ask questions that leadership might not have considered. It felt very realistic, and a bit intimidating, one student explains after completing such a case project.
According to Delport, that is exactly the point.
- We train students to become the generation that will design regulatory frameworks, conduct risk assessments, and ensure responsible use of AI. They need to be able to pause and ask: how does this change our risk landscape?
AI still needs human judgement
Even though artificial intelligence can automate analysis and decision processes, it cannot take ethical responsibility.
It cannot independently interpret regulatory grey areas. It cannot assess long-term societal consequences. And it cannot be held accountable if something goes wrong.
This is where security leadership becomes essential.
Demand for expertise in information security, risk management, and compliance is growing as AI becomes embedded across industries. Roles such as Risk Manager, Compliance Officer, Security Manager, and Chief Information Security Officer (CISO) are becoming increasingly critical for organisations seeking to combine innovation with responsible oversight.
The future is governed – not just coded
As artificial intelligence becomes integrated into everything from business strategy to military intelligence, the key question is no longer only how the technology works.
It is also about who defines the rules.
Understanding AI therefore requires more than technical knowledge. It requires insight into risk, regulation, accountability, and societal impact.
In the years ahead, artificial intelligence will shape everything from corporate governance to national security. The real question is not only who develops the technology – but who governs how it is used.
Learn more about the Digital Assurance & Security Management programme