Artificial Intelligence
Artificial Intelligence systems are rapidly being integrated into sectors and industries worldwide, bringing both benefits and risks. The term AI has become one of the most overused buzzwords of recent years. Below, we narrow the focus to the specific applications that raise the most concern and risk.
1
Up Front: As the Organisation for Economic Co-operation and Development (OECD) highlights, AI is increasingly changing how people learn, work, play, interact, and live. Governments and private-sector organizations alike are rapidly adopting diverse AI systems to improve processes and enhance capabilities, which also introduces a distinct set of risks, including unintended consequences and collateral damage. AI has the potential to significantly reshape economies, transform warfare, accelerate scientific R&D, shift geopolitics, and alter how humans interact with the world. A hallmark of the Fourth Industrial Revolution (4IR), AI is considered a disruptive technology because of its broad influence across domains and the degree to which it transforms how things are done.
2
3
Background Source: Emeritus
What is AI?
Currently lacking a universal definition, AI is best described as a technology that enables machines to simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy. Per the OECD, an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. AI systems vary in their levels of autonomy and adaptability after deployment.
4
5
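To make the OECD definition above concrete, the sketch below shows a minimal, hypothetical "AI system" in the narrowest sense: a machine-based system that, for an explicit objective, infers from the input it receives how to generate an output (here, a prediction that could inform a decision). The scenario, the data, and the use of the scikit-learn library are assumptions made for illustration only.

```python
# A toy, illustrative "AI system" per the definition above: it learns an
# objective from example data, then infers an output (a prediction) from
# new input. The scenario and numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours_of_daylight, cloud_cover_pct] -> took_umbrella (1 = yes)
X_train = [[14, 10], [9, 80], [12, 55], [8, 90], [15, 5], [10, 70]]
y_train = [0, 1, 1, 1, 0, 1]

model = LogisticRegression()            # the "AI model" component of the system
model.fit(X_train, y_train)             # the objective is made explicit through training

new_input = [[11, 65]]                  # input received after deployment
prediction = model.predict(new_input)   # inferred output that can influence a decision
print("Recommend carrying an umbrella:", bool(prediction[0]))
```

Even this tiny example contains the ingredients the OECD framework later unpacks: data and input, a model, and a task whose output affects a decision in the system's environment.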
In this video, Jeff Crume, a cybersecurity professional at IBM, explains what AI is, gives a brief history, and covers commonly associated terms like machine learning, deep learning, and generative AI.

Image Source: UK MOD
AI is often compared to disruptive technologies like the combustion engine, electricity, or the internet. AI is not just a single tool, like ChatGPT or Claude AI, but a collection of systems, methods, and applications, each with its own development path and implications. The diagram to the left illustrates the overlap between AI, machine learning, and data science.
6
Information and Image Source: OECD Framework for Classification of AI Systems
To the left is the OECD's framework for classifying AI systems, along with each system's characteristics and the key actors involved. The framework breaks AI systems down into five main dimensions (with sub-dimensions detailed in the policy paper) to support analysis of the policy considerations tied to specific applications. It is a valuable tool not only for policymakers and practitioners but also for the general public to develop a common understanding of AI, its far-reaching effects, its interactions, and its complexity. A simple illustrative sketch follows the list below.
- People & Planet: centers on developing human-centric, trustworthy AI that benefits individuals, society, and the environment. It considers users, impacted stakeholders, and how AI systems affect human rights, well-being, and work.
- Economic Context: the sector and setting where an AI system is used, focusing on the type of organization and its purpose. It applies mainly to specific, real-world AI systems rather than general ones.
- Data & Input: the data and expert knowledge used to help an AI model understand its environment, including people, the planet, and the economic context. Expert input usually comes from human knowledge turned into rules.
- AI Model: a digital representation of part or all of an AI system's external environment, such as the processes, objects, people, ideas, or interactions that take place in that environment. Core characteristics include the model's technical type, how it is built (expert knowledge, machine learning, or both), and how it is used.
- Task & Output: what the AI system does, the results it produces, and how those results affect the surrounding context.
7
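As a rough illustration of how the framework's five dimensions could be applied, the sketch below records a hypothetical AI system (an imagined hospital triage chatbot) along each dimension. The dimension names follow the OECD framework described above; the example system, field names, and values are invented for illustration and are not drawn from the policy paper.

```python
# A minimal, hypothetical sketch of recording an AI system against the
# OECD's five classification dimensions. The example system (a hospital
# triage chatbot) and all field values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AISystemClassification:
    people_and_planet: dict = field(default_factory=dict)  # users, impacted stakeholders, rights and well-being
    economic_context: dict = field(default_factory=dict)   # sector, deploying organization, purpose
    data_and_input: dict = field(default_factory=dict)     # data sources and expert knowledge
    ai_model: dict = field(default_factory=dict)           # model type, how it is built and used
    task_and_output: dict = field(default_factory=dict)    # task performed, outputs, downstream effects

# Hypothetical classification of the imagined triage chatbot.
triage_chatbot = AISystemClassification(
    people_and_planet={"users": "patients", "impacted": "nurses, clinicians, insurers"},
    economic_context={"sector": "healthcare", "deployer": "hospital network"},
    data_and_input={"data": "historical triage notes", "expert_rules": "clinical guidelines"},
    ai_model={"type": "large language model", "built_via": "machine learning"},
    task_and_output={"task": "recommendation", "output": "urgency score shown to nursing staff"},
)

print(triage_chatbot.task_and_output["output"])
```

Recording systems this way is only one possible convention; the OECD paper itself defines the sub-dimensions and criteria in far more detail.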
AI Adoption
Now that we have a basic grasp of AI’s dimensions and what it entails, the image on the right highlights AI use cases across major industries. As shown, AI systems are being applied in diverse ways across all sectors and industries. Governments, private corporations, and individuals are racing to develop and adopt AI systems. AI is being integrated into systems, processes, and functions at varying levels, but sometimes with limited transparency, unclear ethical guidelines, or even malicious intent.

Image Source: Leeway Hertz
8
So What?
For all of AI's potential benefits, there are serious downsides, particularly in how state and non-state actors leverage it. Gaps in regulations, guidelines, and enforcement pose a significant problem. One sector missing from the graphic above is the defense sector. Nations are increasingly integrating AI into military, surveillance, weapons, command-and-control, and intelligence systems, platforms, and processes to gain a strategic edge. Certain countries, like China and Russia, leverage AI applications in their malign influence operations, computer network exploitation, and computer network attacks. What is unique about AI is that it does not pose just one or two issues; its wide application brings a range of expected and unexpected risks and concerns.
The Risks of AI
Image Source: Tufts
This illustration from Tufts University highlights the top concerns surrounding AI, including development, employment, governance, and technological dominance and leadership.
- While AI could replace certain jobs and tasks, it could also create new opportunities and transform future work environments and dynamics.
- AI systems can inherit biases from their training data, and if used without transparency or human oversight, they can cause significant harm.
- AI-enabled surveillance systems, novel devices such as smart glasses, and the data used to train models pose serious risks to privacy and personal information.
- Deepfakes, disinformation, trolls, and other AI-enabled tools and applications can be used to manipulate public opinion, spread false narratives, commit crimes, and erode trust in governments and institutions.
- AI algorithms can be developed to influence human behavior, decisions, and beliefs through platforms like chatbots, large language models (LLMs), advertisements, and social media content.
- State and non-state actors are integrating AI into military equipment for various applications and purposes. One major concern is the use of lethal autonomous weapons systems (LAWS) with no human oversight, which can result in human collateral damage.
- Beyond the ethical issues observed in these examples, there are also concerns about tech companies and governments engaging in ethics washing: claiming to follow ethical AI guidelines in order to gain public trust or avoid regulation.
- Lastly, the race to lead in AI could drive fierce competition, where safety, ethics, and long-term sustainability are overlooked.
​
There are many other risks and concerns, such as environmental and energy issues, as AI systems increase demand for data centers, which in turn drives up energy use. Similarly, the raw materials and hardware needed to run AI systems raise concerns around resource extraction, supply chains, and sustainability. There are also risks tied to biotechnology and quantum computing, including the potential integration of humans and machines, as seen in applications like Neuralink.
8
As noted by Prof. Anthony Aguirre of the Future of Life Institute, the world’s largest corporations and great power nations are racing to develop a specific type of AI: Artificial General Intelligence. AGI is a theoretical AI system that matches or exceeds almost all human capabilities. This is driven by the belief that whoever develops the most advanced AI will gain a massive competitive advantage across all sectors and domains, particularly economic, military, scientific, and technological (0:26-0:51).
Artificial Superintelligence (ASI) is a more advanced form of AI that, like AGI, remains theoretical; it represents a state in which AI surpasses human intelligence in every respect. IBM Master Inventor Martin Keen explains what ASI is, the building blocks needed to reach it, and the potential benefits and risks of this level of technology.
9
Strategic Competition
AI research and development have become a strategic priority for great powers, particularly China, Russia, and the United States. Echoing Russian President Putin's assertion, many believe that whoever leads in AI will achieve technological dominance and global influence. AI dominance, in parallel with quantum computing, is seen by nations as a path to demonstrating leadership, dominating markets, shaping economies, and influencing international rules and norms.
However, AI is more than just algorithms; the technology, expertise, and supply chains, including raw materials and semiconductors, are not controlled by any single country. The United States leads in semiconductor design, R&D, and intellectual property, while Taiwan dominates advanced semiconductor manufacturing. Beijing is heavily subsidizing its semiconductor industry, gradually narrowing the gap with firms like TSMC (Taiwan) and Nvidia (U.S.). More concerning, one of the main goals of the Chinese Communist Party and President Xi is reunification with Taiwan, which puts regional stability and global supply chains at risk.
​
Another threat to global stability is the use of AI and cyber capabilities to conduct information operations, including influence campaigns, cyber espionage, and intrusion activities. China, Russia, and North Korea are leveraging AI to erode public trust in governments and institutions, amplify societal tensions, manipulate behavior and sentiment, infiltrate institutions, steal intellectual property, and control narratives.
11

"Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
- Russian President Vladimir Putin, 2017
10
Military: AI is already shaping, and will increasingly influence, how wars are fought in the future. The rise of AI in modern warfare is set to reshape relations among major powers like the United States, China, and Russia. While AI-powered military equipment already exists, concerns remain over ethical and moral implications, gaps in international laws, the use of LAWS without human oversight, and the growing role of private industry in state warfare.
13
In 2023, the Senate Intelligence Committee held a hearing regarding threats posed by AI. The hearing, led by Senator Mark Warner, featured panelists Dr. Benjamin Jensen, Dr. Jeffrey Ding, and Yann LeCun. The three panelists highlighted several critical concerns, risks, and vulnerabilities related to AI, many of which have been covered here. It is a discussion well worth watching for anyone with two hours to spare.
One key takeaway from Dr. Ding's arguments is that decision-makers, intelligence professionals, and subject matter experts often assess technological leadership primarily through innovation capacity while overlooking diffusion capacity. Success in AI would not depend on who makes the next breakthrough first but on who leads the secondary wave of adoption across all sectors and throughout the economy (20:48-21:48).
12
Initiatives and Policies
In 2025, the Trump administration revoked former President Biden's executive order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The replacement order, titled Removing Barriers to American Leadership in Artificial Intelligence, calls for developing AI systems free from ideological bias or engineered social agendas. The new EO, which is pending the approval and release of an action plan, directs policy that sustains America's global AI dominance, promotes human flourishing, boosts economic competitiveness, and strengthens national security. The White House Office of Management and Budget released two memos with initial guidance to accelerate federal AI adoption through innovation, governance, public engagement, and efficient acquisition.
14
While the new administration has proposed $500 billion in AI infrastructure investments, NVIDIA's CEO argues that the U.S. must ease export controls to accelerate the global diffusion of American AI technology. He adds that China is close behind the United States, and that any new policies or actions (e.g., tariffs) by the Trump administration should enable, support, and accelerate onshore manufacturing.
15
A recent hearing held by the Subcommittee on Economic Growth, Energy Policy, and Regulatory Affairs, titled "America's AI Moonshot: The Economics of AI, Data Centers, and Power Consumption," discussed the requirements, initiatives, and policies needed to advance AI, repeatedly linking its progress to national security and warning that falling behind in AI and energy infrastructure could weaken U.S. global competitiveness and defense readiness. Other key takeaways included the importance of strong public-private collaboration, the need to improve energy infrastructure to meet AI demands, and recognition of AI's role in transforming the economy by creating new industries and jobs.
16
The private sector primarily drives AI innovation. To maintain global leadership, government policies and regulations must support the private sector in R&D, manufacturing, and market competitiveness, keeping companies incentivized to advance AI while also protecting the public from threats posed by malicious actors and from the unintended consequences of AI systems. Similarly, international collaboration is crucial to shaping and upholding global norms and guidelines for the safe, ethical, and responsible development of AI.
International Strategies
The Organisation for Economic Co-operation and Development (OECD) has provided policy recommendations for over 60 years and, since 2016, has offered expertise and guidance on AI policy. In 2024, the OECD and the Global Partnership on AI (GPAI) formed an integrated partnership to advance a shared agenda for implementing human-centric, safe, secure, and trustworthy AI, as outlined in the OECD AI Principles. The OECD offers extensive resources and research papers on AI across various sectors, which serve as a foundation for countries and organizations to develop their own AI policies, regulations, and governance frameworks. It also hosts a repository of AI policy initiatives from over 69 countries, serving as a platform for collaboration and knowledge sharing.
Two notable policy documents from the OECD worth highlighting are the Framework for the Classification of AI Systems and the Sectoral Taxonomy of AI Intensity. These provide a strong foundation for classifying AI systems and understanding the role and implications of AI diffusion across sectors. These efforts promote shared understanding and the responsible use and development of AI.
17



In 2024, NATO released a revised AI strategy to accelerate the adoption of AI technologies across the alliance. Building on the original 2021 strategy, the update reflects recent advancements such as generative AI and AI-enabled information tools. Key takeaways from the policy include the establishment of a comprehensive Testing, Evaluation, Verification & Validation (TEV&V) framework, the need to defend against adversarial uses of AI such as disinformation and election interference, the promotion of interoperability and standards, and the importance of addressing emerging risks and trends.
18
Image Source: NATO
Countries and institutions must continue collaborating to develop and integrate AI responsibly across diverse sectors. Collaboration enables countries to pool resources, share data, and jointly address global challenges. Cooperation among like-minded countries helps align norms and guidelines and is essential to reaffirming key principles of openness, democracy, freedom of expression, ethical guidelines, and human rights. Adversaries will continue leveraging AI in malign influence campaigns and cyberattacks to manipulate societies, fuel societal divisions, and erode trust in governments and international institutions. AI technology is still in its infancy, yet as a disruptive innovation it continues to be integrated into, and to transform, many aspects of human life.
19
References
1. OECD (2022). "OECD Framework for the Classification of AI Systems", OECD Digital Economy Papers, No. 323, OECD Publishing, Paris, p. 3. https://doi.org/10.1787/cb6d9eca-en
2. OECD (2022). "OECD Framework for the Classification of AI Systems", OECD Digital Economy Papers, No. 323, OECD Publishing, Paris, p. 6. https://doi.org/10.1787/cb6d9eca-en
3. Păvăloaia, V.D., & Necula, S.-C. (2023). Artificial Intelligence as a Disruptive Technology—A Systematic Literature Review. Electronics, 12(5), 1102. https://doi.org/10.3390/electronics12051102
4. Stryker, C., & Kavlakoglu, E. (2024). What is artificial intelligence (AI)? IBM. https://www.ibm.com/think/topics/artificial-intelligence
5. OECD (n.d.). OECD AI Principles overview. OECD. https://oecd.ai/en/ai-principles
6. Black, J., Eken, M., Parakilas, J., Dee, S., Ellis, C., Suman-Chauhan, K., Bain, R.J., Fine, H., Chiara Aquilino, M., Lebret, M., et al. (2024). Strategic Competition in the Age of AI. RAND. https://www.rand.org/pubs/research_reports/RRA3295-1.html
7. OECD (2022). OECD Framework for the Classification of AI Systems. OECD. https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html
8. Future of Life Institute (2025, April). We Can’t Stop AI – Here’s What To Do Instead [Keep The Future Human] (Video). YouTube. https://www.youtube.com/watch?v=27KDl2uPiL8&t=30s
9. IBM Technology (2024, September). What is Artificial Superintelligence (ASI)? (Video). YouTube. https://www.youtube.com/watch?v=PjqGbEE7EYc&t=121s
10. RT (2017). 'Whoever leads in AI will rule the world': Putin to Russian children on Knowledge Day. RT. https://www.rt.com/news/401731-ai-rule-world-putin/
11. Lo, K. (2025). China’s chipmakers are catching up to Nvidia and TSMC. Here’s how they compare. Rest of World. https://restofworld.org/2025/china-chipmakers-nvidia-tsmc-gap/
12. Humble, K. (2024). War, Artificial Intelligence, and the Future of Conflict. Georgetown Journal of International Affairs. https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/
13. Forbes Breaking News (2023, September). Mark Warner Leads Senate Intelligence Committee Hearing On Threats Posed By Artificial Intelligence (Video). YouTube. https://www.youtube.com/watch?v=-lD4ypDHfX4
14. The White House (2025). Removing Barriers to American Leadership in Artificial Intelligence. The White House. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
15. Bloomberg Podcasts (2025, April). China AI Capability Is Close Behind the US: Nvidia CEO (Video). YouTube. https://www.youtube.com/watch?v=EV7qSW3HFxI&list=RDNSEV7qSW3HFxI&start_radio=1
16. GOP Oversight (2025, April). America’s AI Moonshot: The Economics of AI, Data Centers, and Power Consumption (Video). YouTube. https://www.youtube.com/watch?v=kl6Ut2zfwNc
17. OECD (n.d.). Background. OECD. Retrieved 03 May 2025 from https://oecd.ai/en/about/background
18. NATO (2024). Summary of NATO's revised Artificial Intelligence (AI) strategy. NATO. https://www.nato.int/cps/en/natohq/official_texts_227237.htm
19. Kerry, C., Meltzer, J.P., Renda, A., Engler, A., & Fanni, R. (2021). Strengthening International Cooperation on AI. The Brookings Institution. https://www.brookings.edu/articles/strengthening-international-cooperation-on-ai/