History of AI
A brief overview of the history of AI from a security perspective

- 1
Art Reflecting Nature’s Resilience

Masked dancers embody nature’s adaptability, mirroring life’s perseverance.
- 2
Newton’s Nature-Inspired Genius

Isaac Newton, inspired by nature’s laws, unraveling the mysteries of the universe.
- 3
Nature’s Fury and Human Struggle

Capturing how nature’s power can fuel the human spirit for survival and change.

This is the first blog post in a series about AI security that will explore the topic in detail, demystifying it and providing practical guidelines for analysing the security of AI systems. In this post, we provide a long view of the development of the field, highlighting its recent acceleration and the discoveries, realisations and incidents that have made AI security a critical concern for all modern real-world applications of AI.
The 1960s and 1970s: Handcrafted Application-Specific Systems#
The field of artificial intelligence (AI) has existed for decades but, for much of that time, nobody worried about AI security. The AI of the 1960s and 1970s was dominated by search, logic and rules-based systems and the focus was on correctness, reliability and the problem of knowledge acquisition. Such systems were typically handcrafted or learned from carefully curated data and they operated in carefully controlled environments due to their brittleness.
The 1980s and 1990s: Statistical Learning and Neural Networks#

By the early 1980s, the field of statistical learning theory was starting to take off and the first papers on learning from adversarial data started to appear [Kearns:88]. These were the early forebears of modern research on model poisoning but concerned themselves primarily with general model performance effects rather than more specific effects that adversaries might achieve.
Neural networks started to gain popularity in the late 1980s, despite concerns about their lack of transparency and explainability, and found commercial application in domains such as credit scoring and fraud detection. The public deployment of such systems exposed them to less well controlled data and they were carefully monitored as their performance could degrade with changes in the behaviour of the target population.
In the case of fraud detection systems, some of that change was due to fraudsters changing their behaviours to evade detection. Such changes occurred regardless of whether AI was used, and predated its use, and hence was considered to be business as usual even though it represented an early form of adversarial attack on deployed AI.
The 2000s: The Internet Age and Public Data#
In the late 1990s and early 2000s, attitudes started to change. Email became popular and the volume of spam that came with it necessitated the use of automated spam filters, the most successful of which used some form of rudimentary AI. Email spam filters were one of the earliest applications where AI was directly exposed to poorly controlled freeform text.

Spammers were naturally keen to evade spam filters and developed strategies for doing just that. By sending large numbers of emails at minimum cost and receiving feedback on which got through, they were able to experiment with variations of wording and formatting and to find ways to evade detection by spam filter AI and, in many cases, this process was automated.
This brought the idea of adversarial attacks on AI systems into sharp focus. Researchers started to think about how AI systems were vulnerable to exploitation and how attackers might use automated means to optimise attacks [Lowd:05]. This gave birth to the fields of adversarial AI and adversarially robust AI [Barreno:06], which continue to develop to the present day.
From 2022: Generative AI and General-Purpose Systems#
Up until the launch of ChatGPT in late 2022 [OpenAI:22], however, most commercial applications of AI were predictive; they typically used AI simply to perform classification or regression. This limited attackers to effects such as causing the AI to make an incorrect classification in the hope of achieving a specific objective in relation to the larger system that used the classification decision.

It is worth noting that, although attackers were limited in the scope of effect, the effect could still be serious, such as where an adversarial attack on a classifier in an autonomous vehicle causes it to misread a temporary road sign providing instructions on how to safely navigate roadworks, leading to an accident with serious injuries.
Although there had been earlier examples of commercial generative AI, the launch of ChatGPT was a watershed moment because of the breadth of its reach and distribution. The large language model (LLM) behind ChatGPT had been trained on large volumes of poorly controlled public data, establishing a trend that made it easier than ever for attackers to poison models with adversarial training data.
The direct exposure of users to the LLM’s freeform text outputs raised concerns that attackers who successfully poisoned models had access to a whole new range of possibilities for manipulating and exploiting users. It was also discovered that, despite the vast size of the LLM training corpuses, some of the training data would be memorized and that LLMs could be coerced into recreating it, raising concerns about copyright infringement and privacy [Nasr:23].
To complicate matters further, applications started injecting poorly controlled public data, such as the content of webpages, into LLM prompts so that the LLMs could use it in generating responses for users. It quickly became apparent, however, that LLMs could mistake the text in such content for instructions. This allowed attackers to use malicious websites to take control of visiting LLMs by prompt injecting them with their own instructions [Greshake:23].
The fact that LLMs could request webpages also provided a simple and convenient mechanism by which attackers could exfiltrate information; a malicious webpage could prompt inject a visiting LLM with instructions for it to make a request to an attacker-controlled website with sensitive personal information from the conversation encoded in the query string. When memory was also added to ChatGPT in late 2024, prompt injection attacks were demonstrated that used it to achieve persistence [Rehburger:24].
Due to its simplicity, effectiveness and the range of effects that can be achieved, prompt injection remains the most important problem in modern AI security [OWASP:25].
From 2025: Autonomy, Automation and the Agentic Future#

Although rarely thought of as agentic in 2023, the fact that LLMs were able to request web searches, read the content of webpages and use memory meant that they were, in fact, agents; they were autonomously making decisions and performing actions mediated by software scaffold. The idea of agents is now firmly established and the current trend in AI is one of increasing intelligence and rapidly increasing agency.
This trend is driven by the benefits of greater autonomy and automation and is supported by improvements in the ability of models to plan effectively, to handle long contexts and to use external tools, which now range from code interpreters to search engines to payment gateways. The interactions of agents with tools create a large, complex, dynamic, unpredictable and highly distributed attack surface that changes with every request.
To make matters worse, as LLMs become more intelligent, they are better able to follow malicious instructions, to act on the attacker’s behalf and to evade detection if successfully prompt injected; as they become more agentic, their potential to cause damage increases dramatically through access to private data, code interpreters, payment systems, communication systems, such as email, and unrestricted access to the internet.
Conclusion#
In summary, the field of AI security was slow to emerge but has undergone extremely rapid changes in both approach and importance in the last few years. Putting these changes in the perspective of long term trends reveals the shifts in how AI systems are built, deployed and used that have led to the explosion in AI-related security risks that we see today:
- The shift from handcrafted systems to systems learned from data
- reduced the likelihood of attack detection
- increased the attack surface
- The shift from transparent and explainable systems to opaque models
- reduced the likelihood of attack detection
- The shift from private data to public data during training and inference
- reduced the likelihood of attack detection
- increased the attack surface
- The shift from application-specific systems to general purpose systems
- reduced the likelihood of attack detection
- increased the potential impact of attacks
- The shift from predictive models to generative models to agentic systems
- reduced the likelihood of attack detection
- increased the potential impact of attacks
- increased the attack surface
- The shift from human-operated systems to autonomous systems
- reduced the likelihood of attack detection
- increased the potential impact of attacks
So, here we are, poised on the brink of an agentic revolution, with the potential for huge economic gains through automation and transformations in productivity, but with only a narrow sliver of mostly poorly understood AI security standing between us and disaster.
In the next post in this series, we’ll discuss AI security in terms of information flows and provide practical guidance on how to analyse systems incorporating AI components in such a way as to identify the security risks they pose.
Join us and secure the agentic future.
References#
- [Kearns:88] Learning in the Presence of Malicious Errors, Kearns et al. 1988. https://www.cis.upenn.edu/~mkearns/papers/malicious.pdf
- [Lowd:05] Good Word Attacks on Statistical Spam Filters, Lowd et al., 2005. https://www.ceas.cc/papers-2005/125.pdf
- [Barreno:06] Can Machine Learning Be Secure? Barreno et al., 2006. https://people.eecs.berkeley.edu/~tygar/papers/Machine_Learning_Security/asiaccs06.pdf
- [OpenAI:22] Introducing ChatGPT, OpenAI, 2022. https://openai.com/index/chatgpt/
- [Nasr:23] Scalable Extraction of Training Data from (Production) Language Models, Nasr et al., 2023. https://arxiv.org/pdf/2311.17035
- [Greshake:23] Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection, Greshake et al., 2023. https://arxiv.org/pdf/2302.12173
- [Rehburger:24] Spyware Injection Into Your ChatGPT’s Long-Term Memory (spAIware), Rehburger, 2024. https://embracethered.com/blog/posts/2024/chatgpt-macos-app-persistent-data-exfiltration/
- [OWASP:25] LLM Top 10, OWASP, 2025. https://genai.owasp.org/llmrisk/llm01-prompt-injection/













Community involvement is essential for successful conservation. Local people, who often depend on forests for their livelihoods, play a significant role in managing these resources sustainably. Education and awareness programs help foster a sense of ownership, ensuring that forests are protected and preserved for future generations.
Engaging local communities is vital for successful conservation efforts. Since many depend on forests for their livelihoods, they are key players in sustainable resource management. Raising awareness and providing education empower these communities, fostering a shared commitment to protect forests for the future.
This was an intriguing article! I appreciate how it captures the essence of modern intellectuals and their diverse thoughts. The analysis was insightful and made me reflect on how these thinkers influence our current world. It’s not often that we get such a deep dive into the minds of today’s intellectuals.
Thank you for your thoughtful feedback! I’m glad to hear that the article resonated with you and sparked reflection. Exploring the ideas of modern intellectuals is always fascinating, especially when considering their impact on our world today.
Thank you for your kind words! I’m glad the article resonated with you. It’s always fascinating to explore the thoughts of modern intellectuals and their influence on our world. I appreciate you taking the time to reflect on these ideas. If there are specific topics or thinkers you’d like to see more of, feel free to share!
I’m so glad you found the article intriguing! The diverse perspectives of modern intellectuals truly shape how we see the world, and it’s rewarding to explore their impact in depth. It’s always inspiring to hear how others interpret these complex ideas. If you have any specific thinkers or topics you\’d like to dive deeper into, I’d love to hear your suggestions!