I stumbled upon yet another article [1] about the existential risks of “human-level AI”. This time it was an article in The Guardian that commented on a report titled AI Safety Index [2] on the state of risk management in the biggest AI firms. The author of the article drew the following conclusion from the report:
Artificial intelligence companies are “fundamentally unprepared” for the consequences of creating systems with human-level intellectual performance [my emphasis], according to a leading AI safety group.
This is in line with the popular belief that the threat from AI-based systems is caused by their achieving “superintelligence” or “artificial general intelligence” (AGI). The Economist summarizes the same popular concern in a July 21, 2023 article (which actually downplays the risks of AI-based systems):
The concern is that computers endowed with superhuman intelligence [my emphasis] might destroy most or all human life.
What all these articles get wrong (though the report does not, even if it isn’t entirely pedagogical about it) is the assumption that the threat is caused by (super-)intelligence. Failing to define what exactly is meant by “intelligence” adds to the confusion.
AIs are machines
We humans have a tendency to anthropomorphize, to attribute human traits, emotions, or intentions to non-human entities.
Jean Piaget observed that young children tend to believe that inanimate objects, like toys, clouds, or even a rock that hit them, have feelings, intentions, and can act on their own.
When we grown-ups use terms like “intelligent,” “understands,” “learns,” “thinks,” and “decides,” we are intuitively attributing human-like cognitive abilities and internal states to AI systems. We imply that the AI has a mind, consciousness, or agency akin to a human’s.
Humans are notoriously hard to analyze, so anthropomorphizing AI algorithms doesn’t lead to particularly constructive engineering and risk management strategies. I posit that when discussing AI safety we should drop all anthropomorphisms, including the word “intelligence”. We should instead take a dispassionate systems engineering stance and analyze the AI through systems engineering concepts such as stakeholder needs, system requirements, system design, risk management, verification, and validation. For more on systems, see this post. (It may analogously sometimes be productive to “mechanomorphize” humans. See this post.)
Redefining intelligence
To get rid of the (anthropomorphic) word “intelligence” (as in artificial intelligence), let’s try to give it a non-anthropomorphic definition. A collection of such definitions can be found in [7], from which the authors have distilled this commonly used high-level definition:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
This definition doesn’t distinguish between different kinds of agents, organic or computer-based.
Referring back to the “superintelligence” scare described at the beginning of this post, it is important to note how the definition above uses the terms ability and goals. Intelligence, according to this definition, is ability. Goals, what the agent is programmed (by humans or by evolution) to achieve, are orthogonal to ability. Goals belong to a different ontological category.
Intelligence, defined as “ability to achieve goals across environments,” is inert without intent. Risk stems from the goal structure, including explicit programming, reward modeling, or emergent incentives emanating from user interactions, not from the ability itself. A well‑aligned super‑capable system might pose negligible risk. Conversely, weak systems with misaligned objectives can be more dangerous.
The goals that the AI is programmed and prompted to pursue must therefore be the starting point for any AI risk analysis. Just as intelligent evil humans can often do more harm than stupid evil humans1, an able AI with the wrong (“misaligned”) goals can probably do more harm than a simple AI. But good or evil, opportunity or threat, depends entirely on the goal(s) we give the AI.
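The ability/goal split can be made concrete with a toy sketch. All names below are illustrative inventions, not from any real AI framework: the same generic planner (the “ability”) paired with different goal functions produces entirely different behavior.

```python
def restock(state):
    return {**state, "stock": state["stock"] + 1}

def sell(state):
    return {**state, "stock": state["stock"] - 1,
            "revenue": state["revenue"] + 1}

def plan(goal, actions, state, horizon=5):
    """Greedy planner: at each step, take the action whose outcome the
    goal function scores highest. The planner itself is goal-agnostic."""
    trace = []
    for _ in range(horizon):
        best = max(actions, key=lambda act: goal(act(state)))
        state = best(state)
        trace.append(best.__name__)
    return trace

# Same ability, same action set, two different goals:
print(plan(lambda s: s["revenue"], [restock, sell], {"stock": 3, "revenue": 0}))
# → ['sell', 'sell', 'sell', 'sell', 'sell']
print(plan(lambda s: s["stock"], [restock, sell], {"stock": 3, "revenue": 0}))
# → ['restock', 'restock', 'restock', 'restock', 'restock']
```

Nothing about the planner changed between the two runs; only the goal did. That is the sense in which risk lives in the goal structure rather than in the ability.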
The systems engineering approach
Accepting that an AI application is nothing but a complex system, let’s translate the terms ability and goals to systems engineering terms and explore how systems engineering can be used to develop useful and safe AI applications.
A system is generally defined as an arrangement of interacting elements that together realize functions; it is characterized by a number of quality attributes such as performance and availability. Examples of systems are automobiles, excavators, information systems such as ERP systems, space stations, living organisms, the climate, and artificial intelligence applications.

Systems engineering is the art and science of developing a system that helps satisfy some stakeholder needs and, to that end, satisfies a number of system requirements and constraints. Systems engineering is an iterative, structured process roughly consisting of:
- Identifying the system mission, lifecycle processes, system stakeholders, stakeholder needs, and system requirements
- Proposing system design solutions
- Analyzing the proposed solutions with respect to their ability to fulfill the system requirements and stakeholder needs and modifying the design until a satisfactory solution is found
- Identifying and controlling risks. This may be part of the system analysis referred to above
- Verifying the system with respect to the defined system requirements
- Validating the system in real (or near-real) use
For a deep dive into systems engineering, see [8].
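The iterative loop above can be sketched in a few lines of code. This is a purely illustrative sketch: the requirement numbers and the `propose` refinement heuristic are invented, and requirements are modeled simply as predicates over a candidate design.

```python
requirements = [
    lambda d: d["range_km"] >= 400,    # system requirement (invented numbers)
    lambda d: d["mass_kg"] <= 2000,    # design constraint (invented numbers)
]

def propose(previous):
    """Propose an initial design, or refine the previous iteration's design."""
    if previous is None:
        return {"range_km": 300, "mass_kg": 2200}
    return {"range_km": previous["range_km"] + 60,
            "mass_kg": previous["mass_kg"] - 150}

def engineer(requirements, propose, max_iterations=10):
    design = None
    for _ in range(max_iterations):
        design = propose(design)                      # propose a design solution
        if all(req(design) for req in requirements):  # analyze against requirements
            return design                             # satisfactory solution found
    raise RuntimeError("no satisfactory design within the iteration budget")

print(engineer(requirements, propose))   # → {'range_km': 420, 'mass_kg': 1900}
```

In real systems engineering the “analyze” step is of course far richer than predicate checking (trade studies, simulation, risk analysis), but the propose-analyze-modify loop has this shape.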
Risk management
Almost all systems come with risks. Engineers and society usually arrive at an acceptable trade-off between benefits and risks. We have, for instance, accepted the risk of traffic accidents to get the benefit of mobility.
All AI “scares” alluded to in the articles referred to above can be formulated as risks. Misalignment is caused either by an incorrect or detrimental requirement or by a design flaw. We can thus reduce all AI safety discourse to a discussion about managing risks in the context of systems engineering.
Risk management is a systems engineering (sub-)discipline for identifying and quantifying risks emanating from, or associated with the use of, a system (for risk management terminology, see the bottom of this post), analyzing their causes, and identifying risk controls for mitigating risks to an acceptable level. Risk management, properly adapted to the system of interest and performed according to best practices, is the method for controlling the risks associated with any system, including AI applications.

Risk management is mandated through regulations in all mature industries whose products can harm people or the environment, including medical devices, health care, drugs, aerospace, construction, and automobiles.
Risk management starts with a risk analysis which is typically part of the system analysis step in the iterative systems engineering process outlined above. Risk mitigations are typically expressed as (verifiable) system requirements that are used as input in the next design iteration.
For meaningful risk analysis to be possible, the following must hold:
- The system boundaries and interfaces are well-defined.
- The operational scenarios of the system are well-defined, i.e., we know how the system is used or how the system otherwise interacts with the environment outside the system (for instance other systems).
- The architecture, technologies, and inner behavior of the system are well known or at least bounded.
- We have some means to estimate the severity and probability of each identified harm.
- A multi-disciplinary team that can both analyze the system and quantify the harms is available. For risk management of, for instance, medical devices, this means that both engineers and doctors need to be on the risk analysis team.
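As a sketch of how such an analysis can be recorded, the entry below follows the hazard, hazardous situation, and harm chain defined in the terminology section at the end of this post. The 1-5 scales, the acceptance threshold, and the medical example are invented for illustration, not taken from any standard.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    """One row in an (illustrative) risk register."""
    hazard: str                # potential source of harm
    hazardous_situation: str   # circumstance of exposure to the hazard
    harm: str                  # resulting injury or damage
    severity: int              # 1 (negligible) .. 5 (catastrophic)
    probability: int           # 1 (improbable) .. 5 (frequent)

    @property
    def risk(self) -> int:
        # Risk as the combination (here: multiplication) of the
        # probability of occurrence of harm and its severity.
        return self.severity * self.probability

ACCEPTABLE_RISK = 6   # invented acceptance threshold

item = RiskItem(
    hazard="hallucinated drug dosage in model output",
    hazardous_situation="clinician copies the dosage into a prescription",
    harm="patient overdose",
    severity=5,
    probability=2,
)
print(item.risk)                       # → 10
print(item.risk <= ACCEPTABLE_RISK)    # → False: a risk control is required
```

A risk control for the entry above might be an independent dosage-range check, which lowers the probability term and would then be fed back into the next design iteration as a verifiable system requirement.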
It is worth emphasizing that talking about the risks of a general-purpose AI is meaningless; it would be much like talking about the risks of electricity. All risks are relative to the application and how it is used. It is therefore misguided to talk about “regulating AI”. Regulations should target the application and should, for instance, be different for a medical application and a consumer chatbot.
Examples of risks
Risk management is implemented through a series of risk analysis sessions. It is often useful to focus on one type of harm per session, as each type of harm requires a different set of subject matter experts. Quantifying harms to humans and harms to the environment, for instance, requires different types of expertise.
Some examples of harms associated with AI applications include:
Environment: Carbon emissions, loss of fertile land (for data centers).
Human uplift: Results of amplification of malicious or dangerous human activity, including weapon design, surveillance, cybercrime, misinformation, etc.
Misaligned AI: Risks arising from incorrect, unintended, or adversarial goals, whether introduced by accident (specification error) or by design.
Society: Societal upheaval caused by AI replacing humans. Energy shortages. Algorithm addiction [5].
Summary
Anthropomorphizing AI applications is not productive. AI applications are systems and should be designed using systems engineering.
AI risks depend entirely on the application that contains an AI component. Talking about (general) “AI risks” is not productive.
It should of course be recognized that an agentic AI application can be very complex and can have very complex interactions with its environment. Analyzing the risks of AI applications can therefore be very difficult and labor-intensive.
As long as we stick to best-practice systems engineering and risk management principles, adapted to the technology used, we should be fine. We build the systems; the systems don’t build us.
Terminology
Algorithm: A mathematical or procedural method, in the domain of AI implemented as a computer program. Examples: convolutional neural network, transformer.
Model: A specific, trained AI algorithm. Example: GPT-4o.
Inference engine: A software/hardware component that takes an AI model and new data, performs computation, and produces some output results (predictions). Example: TensorFlow.
Component: A system element (part of a system) that implements a coherent set of functions and is often replaceable by a similar component. A component may or may not include AI. Examples: inference engine with a model, diesel engine, user interface.
System: An integrated set of system elements such as subsystems and components that accomplish a defined objective. Some system functions may be (partially) realized by an AI model together with an inference engine (AI component). Examples: ChatGPT, Inify histopathology laboratory, aircraft [6].
AI application: A system in which important functionality is realized by an AI component. Examples: ChatGPT, GitHub Copilot, autonomous vehicle.
Harm: Physical injury or damage to the health of people, or damage to property or the environment.
Hazard: Potential source of harm.
Hazardous situation: Circumstance in which people, property, or the environment are exposed to one or more hazard(s).
Risk: Combination (usually multiplication) of the probability of occurrence of harm and the severity of that harm. Reduction of risk is the objective of risk management. A risk can be reduced by lowering the severity of the harm, the probability of the harm, or both. The severity depends on the nature of the hazardous situation.
Risk control: Process in which decisions are made and measures implemented by which risks are reduced to or maintained within specified levels. Risk control measures are also called risk mitigations.
Fault: Condition or defect in a system which may lead to an error. Synonyms: defect, bug.
Error: Manifestation of a fault as unexpected or unwanted behavior of a system. An error may lead to a failure.
Failure: Situation in which a system (or part of a system) is not performing its intended function due to an error. The failure is characterized by its failure mode, i.e., the specific manner in which the failure occurs. A failure is a symptom of an error and may or may not lead to a hazard.
Failure mode: The specific manner or way in which a failure occurs.
Links
[1] AI firms ‘unprepared’ for dangers of building human-level systems, report warns. The Guardian. July 17, 2025.
[2] AI Safety Index. Future of Life Institute. July 17, 2025.
[3] Agentic Misalignment: How LLMs could be insider threats. Jun 21, 2025. Anthropic.
[5] Social Media Addiction Statistics Worldwide. Magnet ABA Therapy (providing autism health care). May 26, 2025.
[6] Helsing AI agent successfully completes Saab Gripen E test flight. Helsing press release. June 11, 2025.
[7] A Collection of Definitions of Intelligence. Shane Legg, Marcus Hutter. arXiv.
[8] INCOSE Systems Engineering Handbook. INCOSE. 2025.
- This statement is contested by some, perhaps a little tongue in cheek. Follow this link for a case stating that it is the stupid people, not the intelligent people, that are the most dangerous people. The article uses a specific definition of stupid though. (Don’t miss the timely prediction at the end of the article.) ↩︎