The energy demands from current AI data centres are relatively minor on a global scale, but energy production for AI can have significant local impacts.
xAI’s Colossus Data Center in Memphis has one of the worst environmental records. It runs on gas turbines, reportedly without standard air-pollution controls, emitting gases that are harmful to human health.
So it’s perhaps surprising that Anthropic has signed up to use this particular data centre. Multiple AI commentators have questioned the necessity of the move (here and here).
Anthropic creates and serves some of the highest-performing AI models, and it also markets itself as the (more?) ethical choice for language models.
Growing use of Anthropic’s leading AI platform, Claude, saw the company reduce usage limits last week: its current data centres couldn’t cope with demand for its services. (Full disclosure: Claude is one of my favourite tools.)
The new deal with xAI for access to the Colossus Data Center has allowed Anthropic to raise usage limits for its subscribers again.
Colossus itself seems to have been developed outside normal environmental regulation. It doesn’t have to be this way: other countries are taking a more cautious approach to data centres and regulating their environmental impacts.
It is important to set good environmental standards at the ground level, so that problems don’t compound in the coming years as AI’s electricity and resource demands grow.
You can see Anthropic’s business reasoning: the company is growing rapidly and needs new data centres to match supply with demand.
But the move goes against Anthropic’s stated ethos. Its webpage announces “AI research and products that put safety at the frontier”, and its pages also talk about benefits to humanity.
To date, Anthropic has shown itself to be a leader in pushing for better cybersecurity outcomes from AI tools, for instance by releasing new models to cybersecurity researchers before making them generally available. It has also pushed back against military applications of its products.
This latest move suggests that Anthropic’s definition of safety does not extend to the environmental and human-health implications of its products.
It is increasingly difficult to make informed choices as a consumer in the tech space. As consumers, we need to watch how AI companies act, rather than rely on who they say they are.
Actions speak louder than words.