VCreaTek

By Shabana Chowdhury Ali, Thought Leader in Strategic Communications, 22 April 2026

AI is being positioned as one of sustainability’s greatest tools. But the harder question, the one we keep deferring, is whether the act of scaling intelligence is itself something we know how to sustain. 

I did not begin my exploration of AI with doubt. Like most people working at the intersection of data and operations, I was drawn to what it could solve: better forecasts, smarter systems, less waste. The promise felt intuitive. If better decisions can be made at scale, surely inefficiency can be reduced at scale. 

It was easy, in those early months, to believe that AI would naturally align with sustainability. That the two ambitions would reinforce each other almost by default. 

The deeper I went, the more that belief started to feel incomplete. Not wrong. But incomplete in a way that mattered. 

The story we are telling ourselves

The dominant narrative around AI and sustainability has become familiar. Data centres are expanding. Energy consumption is rising. The carbon footprint of training large models is measurable, documented, and debated. 

All of that is true. But this framing may be too narrow, and in its narrowness, it risks becoming a distraction from the harder question underneath it. 

Energy is what we can see. It is what we can measure. It is what we can regulate. But it may not be the real problem. It may be the outcome of something deeper: how we design, scale, and behave in relation to intelligence itself. 

The misdiagnosis at the heart of the conversation

We are treating AI sustainability as an energy problem. But it may be more accurately understood as a systems problem, or more specifically, a problem of how we think about the relationship between capability and necessity. 

AI becomes wasteful when we assume data is infinite. When we treat the effort behind it as invisible. When we prioritize what is possible over what is purposeful. When we automate not because a process needed it, but because the option existed. 

In that sense, the ‘energy’ conversation may be addressing the symptoms while leaving the cause largely untouched.

The rebound effect, applied to intelligence

One of the strongest arguments in favour of AI is efficiency, and it is a genuinely valid one. AI reduces time spent on repetitive work. It improves accuracy. It accelerates decision-making in ways that would otherwise require significant human and resource investment. 

But efficiency has a historical pattern that we tend to overlook. When something becomes faster and cheaper, we use more of it. Not less. 

Economists call this the rebound effect. We saw it with energy-efficient appliances that led to larger homes. With faster broadband that produced more data consumption, not less. With cheaper flights that expanded air travel beyond anything previous generations could have imagined.
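The rebound effect can be made concrete with a little arithmetic. The sketch below uses purely hypothetical figures (a 4x efficiency gain met by 6x usage growth) to show how a per-unit improvement can still raise total consumption; the numbers are illustrative, not sourced measurements.

```python
# Illustrative rebound-effect arithmetic. All figures are hypothetical.
# The point: when per-unit cost falls but usage grows faster, total
# consumption rises despite the efficiency gain.

def total_consumption(cost_per_use: float, uses: float) -> float:
    """Total resource consumption for a given per-use cost and volume."""
    return cost_per_use * uses

# Baseline: 1.0 energy units per query, 1,000 queries.
before = total_consumption(1.0, 1_000)      # 1000.0 units

# After a 4x efficiency gain, each query costs 0.25 units,
# but cheaper queries invite 6x the usage.
after = total_consumption(0.25, 6_000)      # 1500.0 units

# What consumption would have been if usage had stayed flat.
expected = total_consumption(0.25, 1_000)   # 250.0 units

# Rebound: the share of the expected savings erased by usage growth.
# A value above 100% means total consumption went up ("backfire").
rebound = (after - expected) / (before - expected)

print(f"before={before}, after={after}, rebound={rebound:.0%}")
```

Here the expected savings were 750 units, yet total consumption rose by 500, a rebound above 100 percent: the efficiency gain did not reduce the footprint, it relocated and enlarged it.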

The core tension

If intelligence becomes cheaper and more accessible, we will use more intelligence: running multiple prompts instead of refining one, using large models for simple tasks, storing data that will never be revisited, automating processes that did not require automation in the first place. None of this feels significant in the moment. But at scale, small choices become structural realities. 

The question is not whether AI can be efficient. It clearly can. The question is whether efficiency, without intentionality, automatically reduces our overall footprint, or whether it simply shifts where the consumption happens.

Infrastructure amplifies patterns

There is a subtle shift worth naming. AI is no longer something most organisations reach for occasionally. It is becoming infrastructure: the layer beneath how we work, decide, communicate, and operate. And infrastructure does something very specific. It amplifies patterns. 

If our underlying patterns of decision-making are thoughtful and targeted, AI scales that thoughtfulness. If our patterns are wasteful or undisciplined, it scales that too, only faster, and in ways that are structurally harder to reverse. 

The concern is not that AI is inherently unsustainable. It is that the infrastructure layer absorbs our existing habits without correcting them. And by the time those habits are visible at scale, they are already embedded in the systems we depend on. 

The invisible cost problem

Part of what makes this difficult to address is that the cost of AI does not feel immediate. We do not see the infrastructure or the effort that powers it. There is no meter running when we send a prompt, run a pipeline, or store another dataset we may never return to. 

So, usage feels frictionless. And anything that feels frictionless tends toward overuse, not because people are careless, but because the cost is not visible at the point of use. The impact is distributed. The feedback is delayed. The consequences remain abstract until they are no longer avoidable. 

This is one of the central challenges of building responsible AI practice: the feedback loops that would normally discipline overconsumption are either absent or poorly designed. 

Responsibility needs a broader frame

When most organizations talk about responsible AI, the focus falls on ethics: bias, fairness, accountability, and transparency. These are critical, and the field has made genuine progress on them. 

But responsibility may need to extend further, to include how efficiently we use AI, whether every use case is necessary, and what it means to treat resource consumption as a first-order concern rather than an afterthought. 

This is not a call for austerity or a rejection of AI’s genuine potential. It is an argument for a different kind of intentionality. One that asks not only what AI can do, but what it should do, what it costs, and to what end. 

Sustainability shaping AI, not just the other way around

We often ask how AI can help sustainability. On Earth Day, it seems equally worth asking how sustainability principles can shape how we build and use AI. 

Not as a constraint, but as a design logic. Choosing targeted efficiency over indiscriminate scale. Treating data quality as a priority rather than data quantity as a default. Proving value at a small scope before expanding it. Resisting the pull to automate simply because it is technically possible. 

These are not engineering decisions. They are cultural ones. And cultures, unlike codebases, do not come with version control. 

I am still early in understanding where all of this leads. But one thing has become harder to ignore: the sustainability question around AI is not only about what it consumes in megawatts. It is about what it enables us to consume, in decisions made without scrutiny, in processes automated without examination, in scale pursued without sufficient question. 

The promise of AI is real. So is the risk that we treat intelligence the way we have treated every abundant resource before it: used freely, expanded constantly, questioned too little and too late. 

The difference this time, perhaps, is that we have the chance to ask the question before the cost becomes irreversible. 
That is not a small thing. 
The question is whether we will. 

Disclaimer: The stories and opinions shared here are meant to inform and inspire. They reflect individual experiences and viewpoints, not necessarily those of VCreaTek. While every effort is made to ensure accuracy, VCreaTek is not responsible for any errors or outcomes arising from the use of this information.