Why I don't use AI

December 16, 2025 [Tech]

(Balaam, A.J. (2025). In F. Buontempo, editor, Overload 190 (2025).)

I choose to avoid using "AI" (by which I mean Large Language Models1). Here's why:

Environmental impact

Between 2010 and 2020, the energy used by data centres around the world rose only slightly2, but since then energy use has risen sharply3, driven by the expansion of AI4. Compounding the problem, because these data centres are using more energy than was predicted or provided for by existing generation, the carbon intensity of the electricity they use is much higher than average (48% higher in the US, according to one study4).

Driven by AI, data centres are predicted to double their energy use by 20305. In Ireland in 2025, data centres use almost a fifth of the electricity supply6, despite the rise in use of electric vehicles over recent years.

Unless we change course, this is not going to slow down or become sustainable. Quoting Sam Altman: "You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future."7

This rapacious appetite for more data and more computation is hard-wired into the AI movement. It is driven by a belief that Sam Altman expresses like this on his blog: "it appears that you can spend arbitrary amounts of money and get continuous and predictable gains"8. The movement is built on the idea that if we just consume more and more resources, we will achieve greater and greater success. As long as the AI companies are driven by this belief, we can never expect them even to attempt to curb their energy use.

Data centres are often harmful to the local area, and are often sited in areas of existing social deprivation. They consume both energy and water that could otherwise be used by people, and cause pollution and energy shortages9.

For more detail on the environmental impacts of AI, I recommend (perhaps surprisingly) the Teen Vogue article "ChatGPT Is Everywhere — Why Aren't We Talking About Its Environmental Costs?"10.

Exploitation of workers

The AI companies don't like to talk about it, but their models only work when provided with vast amounts of human-created data. This data is not simply passively scraped from the Internet: the models are built on the work of millions of people who actively classify images and rate answers, shaping them to produce results that look and sound safe and reasonable11.

Most of the people involved are very poorly paid12. Many of them are traumatised by horrific images and speech that they are asked to classify13.

Workers paid between $1.32 and $2 per hour in Kenya (a wage described as "an insult") talk about their work like this: "You’re reading this content, day in, day out, over the course of days and weeks and months, it seeps into your brain and you can’t get rid of it."14

Biased and dangerous results

Despite wide acknowledgement among experts that AI produces unreliable results, many people are being encouraged to trust its output as if it were accurate and safe.

Researchers have found that recent AI models confidently express judgements that are plain wrong, making mistakes about basic economic ideas like interest rates15, or inventing conversations about patients' medical data16.

Even more concerningly, people are treating AI models as trustworthy conversation partners. This is done with full encouragement from the AI companies, despite the real risks involved. In 2023, Character AI founder Noam Shazeer said of AI17: "It’s going to be super, super helpful to a lot of people who are lonely or depressed." In fact, one of Character AI's chat bots played an alarming role in the suicide of a teenager17. The parents of another teenager say a chat bot explicitly encouraged him to commit suicide before he did18. There is a growing number of reports of chat bots guiding people down "delusional spirals" that can have devastating mental health consequences19.

It is clear from all these examples, and from the many more court cases currently in progress, that it is impossible to control the words spewing from these models. Given the racial slurs included in the most widely-used training dataset20, it is not surprising that they occasionally lose the plot, as Grok once did when it produced racist rants and named itself "MechaHitler"21.

Throughout all of this unreliability, the popular AI models very reliably convey total confidence in their latest answer, even when it contradicts the previous one.

It is pure wishful thinking to say that AI models can replace human judgement in any area. If people treat AI as a trustworthy oracle or a trustworthy companion, this wishful thinking is actively harmful22.

Unfair use of creative work

The leading models are trained on all the data the AI companies can get their hands on, regardless of license. This includes proprietary material from news and information sites, art galleries and personal web sites, all protected by traditional copyright arrangements. These sites (and indeed printed books and other offline materials) are published under a legal framework that allows their content to be searched and indexed without being reproduced. Many commercial web sites depend on visitors being directed to them so that they can earn advertising income.

Meanwhile, a huge amount of material is available online in free and open source form, especially but not exclusively the enormous corpus of source code that is used to train AI coding models. The bargain for this material is different: authors require attribution for re-use, and may add further conditions such as "share-alike" clauses that require derived works to be released under the same terms. AI models break this legally-enforced bargain by reproducing derived or straightforwardly copied works with no attribution and no correct license.

Directing visitors to web sites is not a benevolent or coincidental side effect of search engines; it is a self-sustaining bargain: you allow me to index your content, and in exchange I direct users to your site. If this bargain breaks down, web site creators lose their source of income and many web sites will disappear23. Complying with license terms is not optional: it is a requirement for using any material, including free and open source content.

These bargains are enforced by copyright law. The invention of AI did not change anything about this bargain, except that it obfuscated the copying of copyrighted material24, and convinced governments that enforcing the law would block the promised economic miracle of AI25.

Other reasons

This article is primarily concerned with ethical reasons for avoiding AI usage, but there are plenty of other reasons too.

Despite the hype, it is clear that AI can perform some tasks effectively, for example making very convincing fake videos. Even so, I choose to avoid these tools where I can, for the reasons above.

Conclusion

I believe that AI is a force that is doing real harm in our world, and is concentrating wealth and power in the hands of those who are already wealthy and powerful enough. If you agree, let's work together as professionals to help our companies, organisations and friends to be skeptical of its benefits, and mindful of its problems, when we make decisions about how and where to use it.

If you'd like to hear more AI-skeptical viewpoints, thecon.ai30 is a good place to start. The article "I Am An AI Hater"31 by Anthony Moser was the research starting point for this article, and is recommended if you'd like a less emotionally constrained view along similar lines.

(With thanks to the Overload reviewers for suggesting several extra references.)