
Artificial Insecurity: access and availability in the age of AI

In the final part of our blog series on the dodgy digital security practices underlying advanced artificial intelligence (AI) tools, we explore how LLMs impact the availability of tools and systems. Catch up with parts one and two.

There is a now-legendary scene in Stanley Kubrick’s film 2001: A Space Odyssey, where the AI computer system, HAL, responds to the human crew’s pleas to be allowed back inside the spacecraft it controls with this famous refusal: “I’m sorry, Dave. I’m afraid I can’t do that.” In our era of large language models (LLMs) and agentic AI, what was once the stuff of science fiction is becoming reality (sort of), with severe repercussions for the accessibility and availability of the systems and data you depend on. Let’s dive in.

// How do AI tools impact availability? 

After confidentiality and integrity, availability (the idea that systems, networks, and applications are accessible and functioning as they should, when they should) is the third element in the CIA triad lens we’ve been using to examine the digital security pitfalls of LLMs. A distributed denial-of-service (DDoS) attack, where an “attacker floods a server with internet traffic to prevent users from accessing connected online services and sites,” is a classic example of an attack on availability. LLMs and LLM-based tools can certainly be the targets of such attacks, but we are increasingly seeing cases where LLMs themselves instigate disruptions to the availability of data or services.

// When the AI agent eats your homework 

Disruptions of availability can have significant personal and professional repercussions. In one notable example of a poorly designed LLM disrupting availability, a professor in Germany who was using ChatGPT as an academic assistant and file storage system found that, after he changed a “data consent” setting, all his chats were permanently deleted, his project folders emptied, and two years’ worth of data irreversibly lost. You may also experience a disruption to availability if the LLM you are using suddenly runs out of “working memory” capacity (otherwise known as “context window overflow”), resulting in the partial or complete loss of your custom settings, personalizations, and presets, potentially impacting any part of your workflow that relies on the LLM.
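To see how easily an overflowing context window can quietly erase the setup you rely on, here is a minimal sketch in Python of one common strategy: dropping the oldest messages first when a token budget is exceeded. The toy tokenizer, the `fit_to_window` helper, and the budget are all hypothetical, for illustration only, and do not reflect any real product’s implementation.

```python
# A minimal sketch of "context window overflow": when the token budget is
# exceeded, the oldest messages are dropped first, and that is usually
# where your custom instructions and presets live. Everything here
# (tokenizer, budget, helper names) is hypothetical and illustrative.

MAX_TOKENS = 50  # toy context window; real models allow far more

def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(message.split())

def fit_to_window(history: list[str]) -> list[str]:
    # Walk backward from the newest message, keeping what fits the budget.
    kept, used = [], 0
    for message in reversed(history):
        cost = count_tokens(message)
        if used + cost > MAX_TOKENS:
            break  # everything older than this point is silently discarded
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = ["SYSTEM: always answer in formal German"]  # the user's preset
history += [f"USER: question number {i} about my project" for i in range(10)]

window = fit_to_window(history)
print("Preset survived:", any(m.startswith("SYSTEM") for m in window))  # False
```

Nothing in a scheme like this warns you when the preset falls out of the window; your settings simply stop applying mid-conversation.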

Meanwhile, an increasing number of people are forming intense, personal, and even intimate bonds with LLM-based tools, with some controversial cases linked to OpenAI’s GPT-4o model. As a reminder, ChatGPT can be configured to run on different models, such as GPT-3.5, GPT-4o, etc., each with different performance attributes. After OpenAI announced that they were sunsetting GPT-4o, they faced enormous backlash from people who had bonded with the model and who found its supposedly superior successors, GPT-5.1 and 5.2, a sorry replacement. In this sense, OpenAI’s update disrupted the availability of these people’s systems: the replacement on offer lacked the emotional connection they had come to expect.

As mentioned above, there have been multiple cases of “AI agents” — LLMs capable of executing commands on our behalf — instigating disruptions to availability, causing havoc by wiping data or radically altering complex systems without authorization. According to security technologist Bruce Schneier, the problem with AI agents is that “fast, smart, and secure are the desired attributes, but you can only get two” of the three at once. Your AI agent may be smart, because it has full access to your data, devices, and passwords, and it may be fast, because it isn’t required to constantly ask for permission before taking action. But the downside is that you then have no meaningful control over what the agent does on your behalf, and no way to prevent catastrophic errors.

Businesses face availability risks, too. If your business integrates a cloud-based enterprise model such as Gemini into its core processes, you could see disruptions whenever the underlying infrastructure fails or geopolitical decisions cut off access. And since AI tools are specifically being rolled out as a way to replace workers, when they fail, there may be no human backup to pick up the slack.

On a technical level, there are clear ways to limit risk, starting with implementing hard brakes to prevent AI agents from making irreversible changes to file systems. This should be a basic design feature, rather than a reluctantly implemented afterthought. Proponents of LLMs would do well to adopt a “slow but steady” approach, prioritizing the development and deployment of AI systems that interrupt themselves to seek human advice in high-stakes contexts, rather than recklessly building systems with unbridled autonomy.
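As a rough illustration of what such a hard brake could look like, here is a minimal Python sketch in which irreversible operations pause and ask a human first, deliberately trading some of Schneier’s “fast” for “secure.” The action names and guard structure are our own invention, not the API of any real agent framework:

```python
# A minimal "hard brake": destructive actions an AI agent proposes must be
# explicitly approved by a human before they run. All names here are
# hypothetical and illustrative, not drawn from any real agent framework.
import shutil
from pathlib import Path

IRREVERSIBLE = {"delete_folder"}  # actions the agent may never take alone

def confirm(description: str) -> bool:
    # Interrupt the agent and ask the human operator to approve.
    answer = input(f"Agent wants to: {description}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_action(name: str, target: Path) -> None:
    if name in IRREVERSIBLE and not confirm(f"{name} on {target}"):
        print(f"Blocked: {name} on {target}")
        return
    if name == "delete_folder":
        shutil.rmtree(target, ignore_errors=True)  # irreversible, gated above
    elif name == "read_file":
        print(target.read_text())  # reversible, so it runs without asking

# The agent proposes; the guard decides which actions need a human.
run_action("delete_folder", Path("two_years_of_research"))
```

The point of the design is that the slow path is the default: the agent must stop and surrender control at exactly the moments where a mistake cannot be undone.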

// What happens when AI says “no”

Beyond disruptions to data and services, there is one availability issue unique to LLMs: when they refuse to do what they’re asked, following in HAL’s footsteps. If you’ve ever asked an LLM-based chatbot about a sensitive topic, you may well have been met with a polite but firm boilerplate refusal along the lines of “As an AI chatbot, I cannot respond to that question.”

An example of this type of refusal came when Google’s Bard allegedly refused to answer any questions about Palestine and Israel, responding to queries that mentioned keywords such as “Gaza” or “the IDF” with the stock phrase, “I’m a language model and don’t have the capacity to help with that.” An LLM refusing to answer any question on a specific topic is arguably an availability issue, because the system cannot then be used as intended. LLMs are increasingly promoted as a means to easily search the web, consult sources, or conduct research, but if these systems can arbitrarily and opaquely refuse to provide specific information, they cannot be considered reliable, available gateways to information.
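To make concrete how blunt a keyword-triggered refusal can be, here is a minimal Python sketch of such a filter. The blocklist, the stock phrase handling, and the function names are purely illustrative; the actual mechanism inside Bard or any other model is not public:

```python
# A minimal sketch of a keyword-triggered refusal layer. It fires on
# keywords alone, with no regard for the user's intent, which is why it
# makes a whole topic unavailable. Everything here is hypothetical; real
# refusal mechanisms are not publicly documented.

BLOCKED_KEYWORDS = {"gaza", "idf"}  # illustrative blocklist
STOCK_REFUSAL = "I'm a language model and don't have the capacity to help with that."

def answer_with_model(prompt: str) -> str:
    return f"[model answer to: {prompt}]"  # stand-in for real generation

def respond(prompt: str) -> str:
    words = prompt.lower().split()
    if any(word.strip("?.,!") in BLOCKED_KEYWORDS for word in words):
        return STOCK_REFUSAL  # topic-wide refusal, whatever the question
    return answer_with_model(prompt)

print(respond("What is the population of Gaza?"))   # stock refusal
print(respond("What is the population of Cairo?"))  # answered normally
```

Even a filter this crude is invisible from the outside: the user sees only the stock phrase, with no indication of which keyword tripped it or why.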

There are, of course, legitimate safety reasons why an LLM might be trained not to provide information in response to specific prompts related to, for instance, self-harm, nudification, or illegal activities. However, much of the work underpinning these refusal mechanisms lacks a human rights lens, and is instead being done under the banner of “alignment research,” meaning that human rights practitioners’ expertise is not being brought to bear on many of the same problems that have long plagued automated content moderation. Not only do LLM refusal mechanisms lack transparency, they are also rarely foolproof, with copious research demonstrating how easily safeguards can be overridden or circumvented.

The issue of censorship in LLMs made headlines with the release of the Chinese LLM, DeepSeek R1, and the DeepSeek chatbot app, which notably refused to answer questions about Tiananmen Square and other topics, and displayed a strong pro-China bias in other answers. However, because DeepSeek’s R1 model was partially open-sourced, software company Perplexity AI was able to retrain the model to remove its censorship and pro-China bias. By contrast, flagship models from Google and OpenAI are not open source to any degree, meaning we have no oversight or influence over any biases, refusal mechanisms, or censorship they may contain.

Indeed, the role (and definition) of open source AI models is a key question in current debates, especially in Global Majority countries and the European Union. The broad argument of open source proponents is that open models can drive innovation without solidifying dependencies on a few big players, allowing for a level of customization and freedom in how people use AI that would not exist under today’s corporate hegemony. Of course, we must remain wary of what some have called “open washing,” whereby companies “strategically co-opt terms like ‘open’ and ‘open source’ while in fact shielding their models almost entirely from scientific and regulatory scrutiny.” It’s important to recognize that some open source approaches could fail to challenge, or could even solidify, the infrastructural power of existing giants.

// Putting human agency before AI agents

Looking at the bigger picture, we should be clear that the “bigger is better” AI paradigm being pushed today is a gamble. AI enthusiasts swear that pouring trillions of dollars into this approach will pay off (although it’s not clear for whom), carrying us toward a techno-utopia where massive, centralized AI models can effortlessly perform any and all tasks in a way that surpasses human intelligence. But as we’ve examined in this blog series, there are many reasons to be wary of the current direction of travel: the flipside of this vision is a world where AI-driven mass surveillance is pervasive, where knowledge and truth are undermined by inescapable AI slop, and where access to our core systems and services depends on fickle tools.

Without denying the truly remarkable capabilities that cutting-edge models are displaying, we can question the economic incentives driving the particular form in which they are being deployed, and uncouple those achievements from the quasi-religious frame in which they are often placed. We can treat LLMs as a normal technology, sometimes useful and sometimes not, rather than as a step toward superintelligence.

We can also explore ideas proposed by AI experts such as Timnit Gebru, who has suggested that small, narrowly focused machine learning models are sometimes the best tool for the job, and Martin Tisne, who calls for open, resilient, and non-aligned AI that “prioritizes the public’s demand for open, privacy-respecting technologies that do not lock them in, but rather empower them, as users.” Such approaches could be especially impactful in Global Majority regions, where discussions about so-called AI sovereignty are less about dominance or an AI arms race, and more about maintaining agency over the development of a new technology.

It’s time to stop conflating the profit incentives of a handful of companies with the future of AI, and to embed solid, human-centered security practices and digital rights safeguards into AI development. Without them, this technology will only deliver on its perils, not its promise.