
Module 6  ·  Track 2: Operational Practice

Software Engineering
and Application Sustainability

Infrastructure decisions affect how efficiently you run a workload. Software decisions determine whether the workload needs to exist at all, and what it costs to execute when it does.

Duration: 30–35 minutes
Track: Operational Practice

What you will take from this module

Enterprise architecture as a sustainability lever.

Before code, before infrastructure: the application portfolio. Every application that continues running because nobody made a deliberate decision to retire it is consuming resources for no business return.

The TIME model applied to carbon: Tolerate, Invest, Migrate, Eliminate. The Eliminate category is where the highest sustainability return per decision lives, and it is almost always underused.
Tolerate

Runs as-is, no investment

Application remains in service at current efficiency. Measure its intensity. If intensity is high and usage is low, tolerance has a cost.

Invest

Active improvement

The service is worth improving. Sustainability criteria enter the investment case: compute per transaction, data retention policy, architecture pattern.

Migrate

Move to more efficient platform

Often delivers efficiency gains. Cloud-native, serverless, or managed services that eliminate the overhead of bespoke infrastructure.

Eliminate

Retire: the highest-leverage decision

Every application retired removes its entire footprint: run cost, licence cost, support overhead, energy, and embodied hardware. Retirement is the most impactful decision portfolio governance can make.
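The four TIME categories above can be sketched as a simple portfolio classifier. This is an illustrative sketch only: the two normalised scores, the field names, and the 0.5 thresholds are hypothetical, not part of the TIME model itself.

```python
# Sketch: classifying an application portfolio into TIME quadrants.
# Scores are normalised 0-1; thresholds are illustrative, not a standard.

def classify_time(business_value, carbon_intensity,
                  value_threshold=0.5, intensity_threshold=0.5):
    """Assign a TIME category from a business-value score and a
    carbon-intensity score."""
    if business_value < value_threshold:
        # Low value: retire if inefficient; tolerate (for now) if cheap to run.
        return "Eliminate" if carbon_intensity >= intensity_threshold else "Tolerate"
    # High value: invest in place if already efficient,
    # otherwise migrate to a more efficient platform.
    return "Invest" if carbon_intensity < intensity_threshold else "Migrate"

portfolio = {
    "legacy-reporting": (0.2, 0.9),   # low value, high intensity
    "order-service":    (0.9, 0.3),   # high value, efficient
    "batch-etl":        (0.8, 0.8),   # high value, inefficient
    "internal-wiki":    (0.3, 0.1),   # low value, cheap to run
}
for app, (value, intensity) in portfolio.items():
    print(app, "->", classify_time(value, intensity))
```

The point of even a toy classifier like this is that it forces the Eliminate quadrant to exist: every application gets a category, and "nobody decided" stops being a valid state.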

Software Carbon Intensity: the engineering metric.

Intensity, not total emissions, is the metric engineering teams can directly influence. Total emissions grow with the service. That is expected. Intensity shows whether engineering decisions are improving or degrading the sustainability of each unit of work.

SCI formula: interactive worked example

SCI = (E × I) ÷ R
  • E: energy consumed (kWh). Example: 2.4 kWh per run.
  • I: grid carbon intensity (g CO₂/kWh). Example: 168 g/kWh (London).
  • R: functional unit (transactions, calls, sessions). Example: 50,000 records.
  • SCI: carbon per functional unit. Result: 8.06 g CO₂ per 1,000 records.

Set an intensity target

Reduce to 6g CO₂ per 1,000 records within two quarters. That is a specific, ownable engineering goal, independent of how many records the pipeline processes.
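The worked example can be reproduced in a few lines. This is a sketch of the simplified formula used in this module (E × I ÷ R, reported per 1,000 functional units); the function name is illustrative.

```python
# Sketch of the SCI formula from the worked example above.

def sci_per_thousand(energy_kwh, grid_intensity_g_per_kwh, functional_units):
    """SCI = (E x I) / R, reported per 1,000 functional units."""
    grams_total = energy_kwh * grid_intensity_g_per_kwh
    return grams_total / functional_units * 1000

# Worked example: 2.4 kWh, 168 g/kWh (London), 50,000 records.
sci = sci_per_thousand(2.4, 168, 50_000)
print(round(sci, 2))  # 8.06 g CO2 per 1,000 records
```

Because R is in the denominator, the metric stays comparable as the pipeline grows: processing twice the records at the same total energy halves the SCI, which is exactly the signal an intensity target rewards.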

[Interactive element: an SCI live calculator lets you adjust E (2.4 kWh), I (168 g/kWh, London), and R (50,000 records) and watch SCI = (E × I) ÷ R update in real time, in g CO₂ per 1,000 records.]

Software is where a large share of digital waste gets designed in. Not because developers do not care, and not because engineering teams are doing a bad job. Most delivery environments are set up to optimise for feature throughput, delivery speed, resilience, and user growth. Resource demand gets treated as somebody else's problem.

Software doesn't just run on infrastructure. It creates demand for infrastructure. It determines how much compute is needed, how much memory is reserved, how much data is stored, how often information moves, how long environments stay alive, and how quickly capacity gets consumed.

If you take one thing from this module: sustainable software is not mainly about elegant code. It is about controlling unnecessary demand across the full lifecycle of an application.

Section 3

Product decisions create technical demand.

Before a single line of code is written, product and service decisions are already shaping sustainability outcomes. Every feature creates demand: compute, storage, network traffic, device activity, third-party processing, support effort, operational complexity. A product team that keeps adding capability without discipline around value density will gradually build a service that is heavier, noisier, and more expensive to run.

Ask uncomfortable questions early

  • Do users actually need this feature?
  • How often will they really use it?
  • Would a lighter outcome deliver the same value?

Good teams get used to asking those questions. Weak teams avoid them because they sound awkward.

Section 4

Architecture choices matter, but not in simplistic ways.

People want easy rules here. There aren't many. Cloud-native doesn't automatically mean efficient. Serverless isn't always better. Microservices aren't inherently greener than a modular monolith. Event-driven design can reduce waste in one context and create overhead in another.

The real question is not which architecture sounds most modern. It's: which architecture delivers the required outcome with the least unnecessary demand, and the least long-term operational waste?

Patterns that help when applied properly

  • Stateless services improve scaling efficiency.
  • Asynchronous processing reduces blocking and over-provisioning.
  • Event-based triggers cut idle runtime on bursty workloads.
  • Good caching reduces repeated compute and network traffic.
  • Tiered storage reduces waste where not all data needs the same performance profile.
  • Workload scheduling moves flexible jobs into sensible execution windows.
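The last pattern in the list, workload scheduling, can be sketched as picking the lowest-carbon execution window for a flexible job. The hourly forecast values here are made up; a real implementation would pull them from a grid-intensity API for its region.

```python
# Sketch: scheduling a flexible batch job into the lowest-carbon window.
# Forecast values (g CO2/kWh per hour) are illustrative.

def best_window(forecast, duration_h):
    """Return the start hour whose contiguous window of duration_h hours
    has the lowest average grid intensity."""
    hours = sorted(forecast)
    best_start, best_avg = None, float("inf")
    for i in range(len(hours) - duration_h + 1):
        window = hours[i:i + duration_h]
        avg = sum(forecast[h] for h in window) / duration_h
        if avg < best_avg:
            best_start, best_avg = window[0], avg
    return best_start

# Hypothetical forecast: overnight wind pushes intensity down.
forecast = {0: 180, 1: 160, 2: 120, 3: 95, 4: 90, 5: 110, 6: 170, 7: 210}
print(best_window(forecast, duration_h=2))  # 3 (hours 3-4 average 92.5)
```

Only jobs that are genuinely time-flexible qualify; the gain comes from moving work the business never needed at a fixed hour in the first place.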

Traps that look modern and aren't

  • Microservices that multiply network calls, telemetry overhead, and idle baseline far beyond expectation.
  • Over-engineered resilience leaving capacity running permanently for risks that don't justify the overhead.
  • Multi-region duplication for low-criticality services where single-region would suffice.
  • Designing for future scale so early it becomes designing for current waste.

Sustainable architecture is disciplined architecture: the minimum complexity needed for the actual service, not the imagined keynote version.

Section 5

Software efficiency is broader than CPU efficiency.

When people hear "software efficiency" they usually think CPU cycles. That's too narrow. Software efficiency sits across at least six dimensions, and a service can look efficient in one while being awful in another.

01 · Compute

How much processing is required to complete the task?

02 · Memory

How much working memory is reserved, and how much infrastructure is being held open because memory footprints are bloated?

03 · Storage

How much data is retained, duplicated, backed up, versioned, and forgotten?

04 · Network

How much data moves between services, regions, vendors, devices, and users?

05 · Device-side

Especially for web and mobile services: how much work are you making end-user devices do?

06 · Human

How much operational effort is needed to keep the service running, support it, patch it, and explain it?

Simplistic claims about efficiency are usually not worth much. Be specific about which dimension you're talking about, and which ones you're ignoring.

Section 6

Data is one of the biggest hidden problems.

Bad software is often bad data discipline made operational. Most organisations are far better at storing data than deleting it. From a sustainability point of view, the questions are straightforward.

The questions worth asking

  • What data do we actually need?
  • How long do we need it for? At what resolution?
  • How often should it be collected?
  • How many times is it copied?
  • Who owns deletion rules?
  • What percentage of stored data is genuinely used?

Common waste patterns, easy to find

  • Verbose logs kept forever.
  • Telemetry captured because the platform default allows it.
  • Duplicate datasets across teams.
  • Full copies of production-like data in dev and test.
  • Backups retained long beyond real business need.
  • Analytics pipelines fed by data nobody ever acts on.

Storage looks cheap. It isn't.

Data drives storage, replication, backup, transfer, access control, and processing demand. The lazy organisational habit is to keep everything because the marginal storage cost looks low. It is not cheap once you include the rest of the chain, and it stays on the books indefinitely.
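Finding deletion candidates does not require sophisticated tooling; a last-accessed sweep surfaces most of the waste patterns listed above. This sketch uses hypothetical dataset records and an illustrative one-year staleness threshold.

```python
# Sketch: flagging stored datasets that are candidates for deletion review.
# Dataset records and the staleness threshold are illustrative.
from datetime import date, timedelta

def deletion_candidates(datasets, today, stale_after_days=365):
    """Return datasets not accessed within the stale window, largest first."""
    cutoff = today - timedelta(days=stale_after_days)
    stale = [d for d in datasets if d["last_accessed"] < cutoff]
    return sorted(stale, key=lambda d: d["size_gb"], reverse=True)

datasets = [
    {"name": "prod-orders",      "size_gb": 800,  "last_accessed": date(2024, 5, 1)},
    {"name": "2019-clickstream", "size_gb": 3200, "last_accessed": date(2020, 1, 15)},
    {"name": "test-copy-v2",     "size_gb": 950,  "last_accessed": date(2022, 8, 3)},
]
for d in deletion_candidates(datasets, today=date(2024, 6, 1)):
    print(d["name"], d["size_gb"], "GB")
```

The output is a review list, not an automatic delete: the point is to give the deletion-rule owner something concrete to decide on.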

Section 7

Development and test environments are often a mess.

Most sustainability conversations focus on production. That misses a significant source of avoidable waste. Development, testing, QA, training environments, build runners, oversized pipelines, old snapshots, long-lived non-production databases, and environments that were supposed to be temporary but quietly became permanent. All of them consume resources continuously.

In many organisations nobody owns that waste clearly enough for it to get cleaned up.

Blunt questions worth asking

  • Which non-production environments could shut down overnight and at weekends?
  • Why does test need a full copy of production-scale data?
  • Which "temporary" environments have quietly become permanent?
  • Who owns each non-production environment, and when was that checked?

This is not glamorous work. It is operational discipline, which is exactly why it matters.
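The scale of the non-production opportunity is easy to estimate. This sketch assumes an illustrative schedule of 12 business hours on weekdays; the numbers are hypothetical, but the arithmetic is the argument.

```python
# Sketch: hours saved by auto-shutdown of a non-production environment
# outside business hours. The schedule is illustrative.

def weekly_runtime_saved(business_hours_per_day=12, business_days=5):
    always_on = 24 * 7                          # 168 hours per week
    scheduled = business_hours_per_day * business_days
    saved = always_on - scheduled
    return saved, saved / always_on

saved, fraction = weekly_runtime_saved()
print(f"{saved} h/week saved ({fraction:.0%})")  # 108 h/week saved (64%)
```

Roughly two thirds of an always-on environment's runtime falls outside a generous business-hours window, which is why auto-shutdown is usually the first non-production lever worth pulling.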

Section 8

Code-level practices still matter. Keep them in proportion.

Code quality is part of the picture, not the whole picture. Don't let the conversation collapse into the idea that sustainable software is mainly about writing low-level efficient code. In most enterprise environments, architecture, demand shaping, data growth, and runtime discipline will matter more than micro-level language differences.

That said, code-level practices still matter.

Back-end hygiene

  • Avoid wasteful loops and unnecessary polling.
  • Reduce repeated queries and duplicate processing.
  • Use batching where it makes sense.
  • Paginate and filter properly; avoid over-fetching.
  • Compress and cache intelligently.
  • Keep payloads lean.
  • Treat retries carefully; fail fast where appropriate.
  • Avoid background jobs that exist only because nobody redesigned the process.
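The batching and over-fetching points above come down to call counts. This sketch contrasts per-item fetches with one batched fetch; `fetch_price` and `fetch_prices_bulk` are hypothetical stand-ins for any network or database call.

```python
# Sketch: per-item calls versus one batched call. The fetch functions are
# hypothetical; the call counter stands in for network round trips.

def total_cost_per_item(item_ids, fetch_price):
    # N round trips: one per item (the classic N+1 shape).
    return sum(fetch_price(i) for i in item_ids)

def total_cost_batched(item_ids, fetch_prices_bulk):
    # One round trip for the whole batch.
    return sum(fetch_prices_bulk(item_ids).values())

calls = {"n": 0}
prices = {1: 10, 2: 15, 3: 20}

def fetch_price(i):
    calls["n"] += 1
    return prices[i]

def fetch_prices_bulk(ids):
    calls["n"] += 1
    return {i: prices[i] for i in ids}

assert total_cost_per_item([1, 2, 3], fetch_price) == 45      # 3 round trips
assert total_cost_batched([1, 2, 3], fetch_prices_bulk) == 45 # 1 round trip
print(calls["n"])  # 4 round trips total; 3 of them were avoidable
```

Same answer, a quarter of the round trips; at production volumes that difference is compute, network, and latency, every request, forever.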

Front-end discipline

Sustainable software is not only a back-end topic. Bloated pages, unnecessary video, oversized images, badly managed JavaScript bundles, and aggressive refresh behaviour all create repeated demand on networks and user devices, at scale, across every session.

  • Right-size images. Prefer modern formats.
  • Audit and tree-shake JS bundles; lazy-load what isn't needed on first paint.
  • Cache static assets aggressively at the CDN.
  • Question auto-refresh, auto-play, and polling loops that run by default.

Section 9

Observability needs discipline too.

Teams rightly want strong observability. But observability itself has a footprint. Logs, traces, metrics, dashboards, exports, long retention periods, duplicated telemetry pipelines, high-cardinality events, and capture-everything habits all create significant demand.

The answer is not weak observability. The answer is purposeful observability.

Purposeful observability, in practice

  • Collect what you actually use.
  • Retain what you genuinely need, for as long as you genuinely need it.
  • Tier telemetry by service criticality.
  • Reduce cardinality that does not add insight.

Otherwise you end up burning compute to watch compute burn compute.
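One common shape of purposeful observability is sampling: keep every error trace, keep only a fraction of the healthy majority. This is an illustrative head-based sampling sketch, not any particular collector's API; the 5% rate is hypothetical.

```python
# Sketch: head-based sampling of high-volume traces, keeping all errors.
# The sampling rate is illustrative; real systems configure this in the
# telemetry collector.
import random

def keep_trace(trace, sample_rate=0.05, rng=random.random):
    # Always keep error traces; sample the healthy majority.
    if trace["status"] == "error":
        return True
    return rng() < sample_rate

random.seed(42)
traces = [{"status": "ok"}] * 1000 + [{"status": "error"}] * 5
kept = [t for t in traces if keep_trace(t)]
print(len(kept))  # roughly 5% of the ok traces, plus all 5 errors
```

The signal you actually investigate (errors) survives intact, while the storage and processing footprint of routine traffic drops by an order of magnitude.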

Section 10

Runtime operations: where intent meets reality.

Once the application is live, sustainability becomes an operational question. Design intent doesn't matter if runtime behaviour tells a different story.

The questions a well-run service should answer

This is where software engineering and GreenOps should meet: not in a separate annual report, but in service reviews, architecture reviews, engineering governance, and normal delivery conversations.

A well-run service should be able to explain what demand it creates, what drives that demand, what its main efficiency metric is, what its biggest waste patterns are, and what the next optimisation move should be. If it cannot, the service is being run with a blind spot.

Section 11

AI features need special scrutiny.

Software sustainability training that skips AI is incomplete. For most organisations, the practical issue is not training a foundation model from scratch. It is embedding AI features into products and workflows without enough discipline around whether the extra computational demand is justified.

Questions to hold against every AI feature

  • Does this need AI at all?
  • Could a smaller model do the job?
  • Are prompts and context windows bloated by default?
  • Is model invocation visible in product economics?

Too much current AI delivery still behaves as if compute is an infinite free good. It isn't.
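One practical form of that discipline is routing: send each request to the cheapest option that can handle it before reaching for a large model. The tiers, task kinds, and relative cost numbers in this sketch are all hypothetical.

```python
# Sketch: routing requests to the cheapest capable handler.
# Tiers and relative compute costs are illustrative.

def route(task):
    """Return (handler, relative_compute_cost) for a task."""
    if task["kind"] == "exact_lookup":
        return ("rules", 1)          # deterministic logic, near-zero cost
    if task["kind"] == "classify" and task.get("labels", 0) <= 10:
        return ("small-model", 20)   # small fine-tuned model
    return ("large-model", 400)      # frontier model, last resort

tasks = [
    {"kind": "exact_lookup"},
    {"kind": "classify", "labels": 5},
    {"kind": "open_ended_drafting"},
]
for t in tasks:
    print(t["kind"], "->", route(t))
```

Even a crude router makes the cost gradient explicit, which is the opposite of treating every request as a free call to the biggest model available.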

Section 12

How to measure this credibly.

This module is not a carbon accounting lesson. But software teams do need a usable measurement frame. At minimum, three categories of metric should be working together: demand, service efficiency, and business-normalised.

01 · Demand metrics

The raw consumption signals.

  • CPU-hours, memory consumption
  • Storage growth, network transfer
  • Request volumes, environment uptime
  • Inference counts, token volume

02 · Service efficiency metrics

Infrastructure use per unit of delivery.

  • Transactions per unit of infrastructure
  • Utilisation rates, percentage idle time
  • Storage per active user
  • Build minutes per release
  • Inference calls per business action

03 · Business-normalised metrics

The conversation leadership will hear.

  • Energy / emissions / resource use per transaction
  • …per API call, per customer journey
  • …per report produced, per active user
  • …per unit of useful business output
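The three lenses fit together arithmetically: business-normalised metrics are just demand metrics divided by delivery volume. This sketch uses made-up demand and delivery figures to show the derivation; all field names and numbers are illustrative.

```python
# Sketch: deriving the three metric lenses from raw demand data.
# All figures and field names are illustrative.

demand = {
    "cpu_hours": 1200,
    "storage_gb": 5000,
    "kwh": 900,               # metered or estimated energy
    "grid_g_per_kwh": 200,    # regional grid intensity
}
delivery = {"transactions": 3_000_000, "active_users": 40_000}

# 1. Demand metric: raw consumption, e.g. CPU-hours.
cpu_hours = demand["cpu_hours"]

# 2. Service efficiency metric: infrastructure per unit of delivery.
storage_per_user_gb = demand["storage_gb"] / delivery["active_users"]

# 3. Business-normalised metric: emissions per transaction.
grams_co2 = demand["kwh"] * demand["grid_g_per_kwh"]
g_per_transaction = grams_co2 / delivery["transactions"]

print(cpu_hours)                    # 1200 CPU-hours (raw demand)
print(storage_per_user_gb)          # 0.125 GB per active user
print(round(g_per_transaction, 3))  # 0.06 g CO2 per transaction
```

Each lens answers a different question from the same data: engineers act on the first two, leadership hears the third.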

The point isn't perfect precision from day one. The point is to stop talking about software sustainability in vague adjectives.

Section 13

What good looks like, by role.

Sustainability can't live in a central team and expect to change software. It has to show up in the decisions each role already owns.

Product owners

Challenge feature demand. Define useful service-level unit metrics. Be willing to de-scope low-value functionality when the consumption cost doesn't match the business return.

Architects

Reduce duplication. Prevent unjustified complexity. Make demand trade-offs explicit in design reviews, not an afterthought after the pattern is chosen.

Developers

Treat efficiency, data minimisation, and unnecessary processing as quality concerns, not optional extras for when there is time. The same standard you'd apply to a bug or a security concern.

Platform and SRE

Control non-production waste. Right-size the runtime. Automate shutdown. Expose efficiency signals in engineering tooling where they influence decisions.

Leadership

Require services to justify resource demand the same way they justify cost, resilience, and risk. Make "what does this cost in compute, storage, and carbon?" a standard question in service reviews, not a sustainability team's awkward add-on.

Section 14

The simple test.

A mature software organisation should be able to answer five blunt questions for any service of material importance.

  1. Why does this service exist?
  2. What demand does it create?
  3. What share of that demand is actually valuable?
  4. What is the biggest avoidable waste in the service today?
  5. Who owns reducing it?

If you cannot answer those questions, you don't yet have software sustainability. You have a good intention and a blind spot.

Software is one of the most powerful leverage points in digital sustainability because it shapes demand before the infrastructure bill ever arrives. That is the opportunity. The risk is that many organisations still treat software sustainability as a niche technical concern, or worse, as a coding-style conversation.

It is much bigger than that. It is portfolio discipline, product discipline, architecture discipline, data discipline, delivery discipline, and operational discipline. In other words, it is just engineering done properly, with fewer excuses and better visibility of the consequences.

⏸ Pause & Reflect

Take two to three minutes. Hold these against a service you know well.

  1. Which application in your organisation is most likely to be consuming material compute or storage for questionable business value?
  2. Where are your biggest software-created waste patterns today: portfolio duplication, oversized environments, unnecessary data retention, chatty architecture, poor front-end design, or AI overuse?
  3. If you had to reduce software-driven infrastructure demand by 10% in the next quarter, what would you stop, simplify, retire, or redesign first?

Knowledge Check · Module 6 · Q1

Which action usually delivers the largest sustainability gain at portfolio level?


✓ Correct: Option B

Retiring low-value or redundant applications usually removes demand entirely, which is often more impactful than incremental optimisation of applications that should not exist in the first place. The greenest application is the one you stop running, not the one you tune for a 5% efficiency gain over six months. Every retirement removes compute, storage, licensing, support effort, patching cycles, and operational overhead in a single decision.

Knowledge Check · Module 6 · Q2

True or false: sustainable software is mainly a matter of choosing a more efficient programming language.


✓ Correct: Option B

Language choice does have a measurable effect. Research consistently finds compiled, statically typed languages consume less energy per task than interpreted ones used without care. But in most enterprise environments the larger levers are further up the stack: architecture that does not over-fetch or over-replicate, product scope that does not build features with low value density, data discipline that does not retain everything forever, and operational discipline that does not leave non-production environments running 24/7.

Rewriting a service in Rust while the company runs 40 never-retired applications is the wrong place to start.

Knowledge Check · Module 6 · Q3

Which of the following is most likely to create hidden software-driven waste?


✓ Correct: Option C

Long-lived non-production environments, duplicate datasets cloned from production, and excessive telemetry retention are among the most common hidden sources of software-driven waste. None of them are visible in a production architecture diagram. All of them compound continuously. And in many organisations, nobody owns them clearly enough for cleanup to happen without a push.

The other options listed describe discipline patterns that reduce waste, not create it.

Knowledge Check · Module 6 · Q4: Short answer

Name one example of a software design choice that increases infrastructure demand without creating proportional business value. Hold your answer in mind, then reveal the reference response.


Credible answers include:

  • Excessive polling where an event-driven pattern would serve the purpose.
  • Over-engineered microservice decomposition creating a permanent idle baseline far larger than the problem requires.
  • Storing detailed telemetry nobody queries, retained at maximum duration by default.
  • Invoking a heavyweight AI model for trivial tasks that rules-based logic or a small fine-tuned model would handle.
  • Keeping inactive services alive indefinitely because nobody has explicit ownership of retirement.
  • Collecting more user data than the service genuinely needs, driving downstream storage, replication, and governance overhead.
  • Multi-region duplication of low-criticality services that could operate cleanly in a single region.

The strongest answers link the design choice to a specific demand type (compute, storage, network, telemetry) and to why the business value doesn't justify it.

The test

Five questions that separate mature from aspirational.

Before closing this module, apply these five questions to any service in your portfolio. If you cannot answer them, you have a good intention and a blind spot.

01

Why does this service exist?

What business outcome does it deliver? If nobody can state the purpose in one sentence, the service may be running on inertia rather than value.

02

What demand does it create?

How much compute, storage, network, and downstream infrastructure does this service consume? Is that demand visible to anyone with the authority to change it?

03

What share of that demand is valuable?

Not all demand is justified. Features used by 2% of users, data retained beyond any regulatory or business need, environments left running permanently: these are demand without proportional value.

04

What is the biggest avoidable waste today?

Every service has one. It might be idle non-production environments, excessive data retention, over-provisioned compute, or a feature nobody uses. Name it.

05

Who owns reducing it?

If nobody owns the waste, nobody will remove it. Waste reduction needs a named owner, a target, and a review cadence. Without all three, it stays on the backlog forever.

The discipline test

A team that can answer all five questions for its top services has operational maturity. A team that cannot answer any of them has good intentions and no mechanism. Most teams sit somewhere in between, and the value of these questions is in making the gaps visible rather than comfortable.

Module 6: Key Takeaways

Software creates demand for infrastructure; it doesn't just run on it.

Every feature, every default, every retention policy, every AI call determines how much compute, storage, and network the organisation has to run. Sustainable software is about controlling unnecessary demand across an application's full lifecycle, not just writing elegant code.

Start at the portfolio. Eliminate before you optimise.

TIME: Tolerate, Invest, Migrate, Eliminate. The greenest application is often the one you stop running, not the one you spend six months tuning. Portfolio discipline is the biggest single lever most organisations never pull.

Product decisions create technical demand.

Every feature is a demand decision. Low value density compounds into heavy services that cost more to run forever. Get comfortable asking whether users actually need this, how often they will use it, and whether a lighter outcome would do.

Architecture is disciplined, not fashionable.

Cloud-native, serverless, microservices: none of these are automatically efficient. The right architecture is the minimum complexity that delivers the actual service, not the imagined keynote version. Guard against idle baselines, over-resilience, and premature future-proofing.

Efficiency sits across six dimensions, not just CPU.

Compute, memory, storage, network, device-side, and human. A service can look efficient in one dimension and be appalling in another. Be specific about which dimension your claims cover.

Data discipline is a software discipline.

Verbose logs kept forever, duplicate datasets across teams, production-like data in dev, backups retained long beyond need. These aggregate into large, continuous overhead. Data is not weightless, even when the marginal cost looks low.

Non-production environments are a huge blind spot.

Dev, test, QA, CI, old snapshots, long-lived "temporary" environments. Operational hygiene here (auto-shutdown, synthetic data, right-sized pipelines) releases substantial headroom without touching production.

Observability is a cost centre, not a free good.

Collect what you use, retain what you need, tier by criticality, reduce cardinality that does not add insight. Otherwise you end up burning compute to watch compute burn compute.

AI in product is where inference quietly scales.

Does this need AI at all? Could a smaller model work? Are prompts and context windows bloated? Is model invocation visible in product economics? Govern it at product level, before habits set.

Measure with three lenses, not one.

Demand metrics show raw consumption. Service efficiency metrics show infrastructure-per-delivery. Business-normalised metrics connect to the conversation leadership actually hears. Stop talking about software sustainability in vague adjectives.

Good looks different by role, and it has to show up in all of them.

Product owners challenge demand. Architects prevent duplication. Developers treat efficiency as quality. Platform teams control non-prod. Leadership requires services to justify resource demand the way they justify cost and risk.

Five questions separate mature from aspirational.

Why does this service exist? What demand does it create? What share is actually valuable? What is the biggest avoidable waste today? Who owns reducing it? If you cannot answer, you have a good intention and a blind spot.

Software creates demand. But much of what software demands is physical: servers, storage, network equipment, end-user devices. Module 7 shifts the focus from how applications consume infrastructure to how organisations buy it. Procurement is where lifecycle thinking, embodied carbon, and supplier evidence become central. The decisions made at purchase determine more than anything that happens at the plug socket afterward.
