Section 1
Role-based accountability.
Governance means role-based accountability, decision forums that matter, measurable KPIs, and evidence that progress is real. Without all four, sustainability becomes a communications function.
The accountability test
If a CIO or CTO cannot speak to the carbon intensity of top services, supplier evidence coverage, and waste removal progress without consulting a sustainability manager, the governance is not yet embedded. Technology leadership owns outcomes, not just targets.
| Role | Owns | Accountable for |
|---|---|---|
| Technology leadership (CIO/CTO) | Programme outcomes | Carbon intensity of top services; supplier evidence coverage; waste removal progress |
| Business unit leaders | Demand footprint | Growth that drives IT consumption; data volume; device estate |
| Application owners | Service intensity | Carbon per transaction; data retention; AI governance for the service |
| Cloud / platform leads | Infrastructure efficiency | Idle percentage; rightsizing; region carbon intensity; tagging coverage |
| Data centre / facilities leads | Facility metrics | PUE, WUE, CUE at site level; renewable sourcing evidence; utilisation |
| Procurement | Supply chain evidence | Vendor evidence register; assurance tiers; circular economy criteria |
Section 2
Greenwashing and greenhushing: both are risks.
Most discussions about sustainability credibility focus on greenwashing. Greenhushing, the deliberate suppression of sustainability performance data, is the less visible but equally problematic failure mode.
Overclaiming sustainability performance
Unsupported or misleading claims about environmental performance. Claims that cannot survive scrutiny, whether in annual reports, marketing materials, or regulatory disclosures. Legal exposure is increasing in multiple jurisdictions.
GreenOps response: precision. Explicit confidence levels, clear distinction between activity-based and spend-based data, honest methodology documentation.
Suppressing legitimate sustainability data
Deliberately choosing not to disclose sustainability performance, often from fear of scrutiny, or because the numbers are worse than the narrative. Under CSRD and mandatory disclosure frameworks, silence is not a safe position.
GreenOps response: the same precision. Report what you know with stated confidence levels. Acknowledge gaps with a plan to address them. Honest incompleteness is defensible. Concealment is not.
Section 3
Maturity is multi-dimensional: assess all five.
A maturity model is most useful when it is multi-dimensional. You can have excellent measurement and weak governance, or strong procurement criteria and almost no operational practice. Mapping all dimensions prevents the illusion of overall maturity from a single strong area.
Multi-dimensional maturity self-assessment: score your organisation on each dimension, then compare your profile to the typical pattern
Most organisations find measurement is furthest ahead, governance and operations lag behind it, and supplier management trails both.
Measurement & Data Quality
What you measure, how accurately, and with what confidence level. Activity-based or spend-based. Coverage of Scope 1, 2, 3. Confidence documentation.
Governance & Accountability
Role-based ownership, decision forums, KPI review cadence. Whether sustainability is in the operating model or the sustainability team's drawer.
Operational Practices
What you actually do: rightsizing, lifecycle extension, dark data remediation, carbon-aware scheduling. The discipline behind the metrics.
Supplier Management
Evidence quality, assurance tiers, vendor scorecard, circular economy criteria, ITAD governance. Almost always trails the other dimensions.
Strategy & Innovation
Roadmap maturity, connection to corporate goals, horizon planning, and whether the programme is driving improvement or managing a baseline.
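The five-dimension profile comparison can be sketched as a small script. The dimension names come from this module; the 1-to-5 scoring scale and the imbalance threshold are illustrative assumptions, not part of any standard model.

```python
# Sketch of a five-dimension maturity self-assessment.
# Scores use an assumed 1-5 scale; the imbalance threshold of 2
# is an illustrative choice, not a standard.

DIMENSIONS = [
    "Measurement & Data Quality",
    "Governance & Accountability",
    "Operational Practices",
    "Supplier Management",
    "Strategy & Innovation",
]

def assess(scores: dict) -> dict:
    """Return the profile plus a flag for single-dimension illusions."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Unscored dimensions: {missing}")
    spread = max(scores.values()) - min(scores.values())
    return {
        "profile": scores,
        "spread": spread,
        # A large spread means one strong dimension may be masking weak ones.
        "imbalanced": spread >= 2,
        "weakest": min(scores, key=scores.get),
    }

# The typical pattern described above: measurement furthest ahead,
# supplier management trailing everything else.
result = assess({
    "Measurement & Data Quality": 4,
    "Governance & Accountability": 2,
    "Operational Practices": 2,
    "Supplier Management": 1,
    "Strategy & Innovation": 2,
})
print(result["weakest"], result["imbalanced"])
```

Scoring all five at once is the point: a single high score with a wide spread is exactly the illusion the model is designed to expose.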
Section 4
Three governance artefacts that convert awareness into accountability.
Without governance artefacts, sustainability programmes produce reports rather than action. Three specific artefacts are most effective at converting awareness into operational accountability.
Top-Ten Service Register
The ten services with the highest carbon footprint, with named owner, current intensity metric, trend direction, and next review date. Reviewed at the same cadence as cost and performance. Creates named accountability where diffuse responsibility existed before.
Vendor Evidence Register
Every material supplier, with evidence type held, assurance tier, date, and gap status. Makes the programme's Scope 3 coverage visible and auditable. Drives vendor engagement cadence and creates a clear upgrade path from self-declared to assured data.
Waste Removal Backlog
Identified waste items, estimated financial and carbon value, named owner, and current status. Managed as a live backlog, not a one-time report. Treated with the same seriousness as technical debt. Connects sustainability to operational delivery rather than strategy documents.
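As one illustration of the backlog as a live operating instrument, the waste removal backlog can be held as a structured list and prioritised by combined value. The field names, the example items, and the internal carbon price used to put both values on one scale are all assumptions, not part of the module.

```python
from dataclasses import dataclass

@dataclass
class WasteItem:
    """One entry in the waste removal backlog (assumed schema)."""
    description: str
    owner: str              # named owner, never "the team"
    annual_cost_gbp: float  # estimated financial value of removal
    annual_kg_co2e: float   # estimated carbon value of removal
    status: str             # e.g. "identified", "in progress", "removed"

def prioritise(backlog, carbon_price_gbp_per_kg=0.1):
    """Rank open items by combined financial and carbon value.

    The carbon price is an illustrative internal figure used only
    to express both values in one unit for ranking."""
    open_items = [i for i in backlog if i.status != "removed"]
    return sorted(
        open_items,
        key=lambda i: i.annual_cost_gbp + i.annual_kg_co2e * carbon_price_gbp_per_kg,
        reverse=True,
    )

# Hypothetical backlog entries for illustration.
backlog = [
    WasteItem("Idle dev VMs outside working hours", "Platform lead", 42_000, 18_000, "identified"),
    WasteItem("Orphaned storage snapshots", "Cloud lead", 9_000, 3_500, "in progress"),
    WasteItem("Duplicate log retention", "App owner", 15_000, 6_000, "removed"),
]
top = prioritise(backlog)[0]
print(top.description)
```

Because removed items drop out of the ranking, the same structure serves both as a prioritisation tool and as the evidence trail of waste actually eliminated.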
Section 5
Social sustainability through ITAD and the hardware lifecycle.
Hardware disposal creates social risks alongside environmental ones. ITAD (IT Asset Disposition) is not only an environmental management process. It is a governance and social responsibility requirement. Done poorly, it exposes workers and communities to hazardous substances and perpetuates supply chain opacity.
Audit trail from collection to final state
Document every step: device collection, transfer between processors, sanitisation method and certification, routing decision, final outcome. This audit trail prevents devices reaching unregulated channels and creates evidence for ESG disclosure and regulatory compliance.
NIST 800-88 or equivalent
Require sanitisation certificates to recognised standards. Verify that data removal has been performed to specification, not just claimed. The cost of sanitisation is small compared to the liability of devices leaving the supply chain with accessible data.
Social responsibility in supply chains
IT asset disposition creates documented social risks: devices processed in unregulated informal recycling operations expose workers to lead, mercury, and other hazardous substances. Procurement must ask where devices actually go after collection, verify downstream processing commitments, and maintain evidence that processing meets environmental and worker safety standards.
Hardware lifecycle trade-offs: when to extend, when to refresh
The sustainability decision is not "extend or refresh?" for the estate as a whole. It is "which devices to extend, and which to refresh?" Extending device life appears sustainable but carries hidden costs. Early refresh appears wasteful but can be justified. The decision requires understanding functional units, not just carbon intensity.
Extend when: device is still fit for purpose
Extending a device that performs its function well reduces manufacturing emissions. Not justified when: device performance constrains productivity, power consumption is significantly higher than current generation, or the operational overhead exceeds replacement cost.
Refresh when: power or performance gap is material
A modern device may consume 30-50% less energy per transaction than a five-year-old device. The manufacturing carbon payback period is typically 1-2 years. Refresh is justified when performance or power efficiency limits are reached, or when operational overhead (support, remediation, downtime) exceeds the environmental cost of replacement.
Decision framework: Measure the functional unit clearly: work completed per device per year, power consumed per transaction, support cost per device. Compare: current device power consumption and support cost versus new device manufacturing carbon and power savings over expected life. The spreadsheet answer is almost always more defensible than the intuitive answer.
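The payback comparison in the framework above reduces to simple arithmetic. The embodied-carbon figure, annual energy saving, and grid intensity below are illustrative assumptions for a server refresh, not benchmarks.

```python
def carbon_payback_years(embodied_kg_co2e, annual_kwh_saved, grid_kg_co2e_per_kwh):
    """Years for a refresh's operational carbon savings to repay
    the new device's manufacturing (embodied) carbon."""
    annual_saving_kg = annual_kwh_saved * grid_kg_co2e_per_kwh
    if annual_saving_kg <= 0:
        return float("inf")  # no energy saving: refresh never pays back on carbon
    return embodied_kg_co2e / annual_saving_kg

# Illustrative server refresh: ~1300 kgCO2e embodied carbon, new machine
# draws ~1750 kWh/yr less, grid at 0.4 kgCO2e/kWh (all assumed figures).
payback = carbon_payback_years(1300, 1750, 0.4)
print(f"{payback:.1f} years")
```

With these inputs the payback lands just under two years, consistent with the typical 1-2 year range cited above; on a low-carbon grid or for a device with a small power gap, the same arithmetic can push payback out past the device's useful life, which is exactly when extension wins.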
Governance discipline
Hardware lifecycle decisions are often made by individual teams with no visibility to the enterprise pattern. This creates inconsistency: some teams extend devices until failure, others refresh on a four-year cycle. Centralised tracking of device age, power profile, and replacement schedule ensures decisions are consistent and evidence-based, not inherited from the previous IT leader's preferences.
Section 6
Three procurement jobs.
Procurement is not one conversation at contract award. It is three distinct jobs, each with different evidence requirements and different failure modes. Most organisations do the first reasonably well, the second poorly, and the third not at all.
Challenge demand before purchase
Before any RFP is issued, ask whether the demand is necessary. Can existing capacity serve this need? Is the specification justified by actual workload, or by habit? Most procurement waste is locked in before a supplier is ever engaged, because nobody questioned the requirement.
Challenge claims before acceptance
Supplier sustainability claims must be tested before they are accepted into your evidence base. What is self-declared? What is independently reviewed? What is assured? The difference between a supplier narrative and product-level evidence is the difference between a marketing document and a decision-ready input.
Maintain governance after award
The contract is where leverage begins, not where it ends. If sustainability criteria carry weight at selection but none at renewal, suppliers learn to treat them as a box-ticking exercise. Governance after award means evidence reviews at defined intervals, escalation when commitments are not met, and sustainability as a material factor in the renewal decision.
Where most programmes fail
Job 1 (challenge demand) is where the largest savings live, but it requires procurement to push back on internal stakeholders before any supplier engagement. Job 3 (governance after award) is where credibility is sustained, but it requires operational discipline that most procurement functions are not resourced to deliver. Strengthening all three jobs, not just the middle one, is what turns procurement from an administrative function into a control point.
Section 7
Eight evidence-quality questions.
When a supplier presents sustainability data, these eight questions determine whether the evidence is decision-ready or decoration. Ask them consistently, across every material supplier, and document the answers in the vendor evidence register.
What is the boundary?
Does the data cover the product you buy, the service you consume, or the supplier's entire corporate estate? Corporate-level averages rarely reflect the specific product or service footprint.
What is the methodology?
Is the calculation based on measured consumption, modelled estimates, or spend-based proxies? Each has a different confidence level. Ask for the methodology document, not just the number.
What is the data source?
Metered telemetry, engineering estimates, or financial allocation? The answer determines whether the number can improve with operational action or only with better modelling.
What assurance has been applied?
Self-declared, independently reviewed, or third-party assured? Each level has a different weight in your reporting. Self-declared data from a supplier is not evidence. It is a claim.
What period does it cover?
Last quarter, last year, or an unspecified historical period? Stale data masks trends. Require the reporting period and the publication date.
What is excluded?
Every carbon footprint has exclusions. The question is whether they are stated and justified, or hidden. Common exclusions: employee commuting, upstream logistics, end-of-life processing. Ask what is out of scope and why.
What allocation method is used?
How is the footprint allocated to the product or service you consume? Physical allocation (energy per server-hour) is more defensible than economic allocation (revenue share). The allocation method determines whether efficiency improvements actually show up in the numbers.
Can the evidence be compared year-on-year?
If the methodology or boundary changes between reporting periods, year-on-year comparisons are meaningless. Require consistency or, where methodology changes, a restated baseline for comparison.
The practical discipline
No supplier will score perfectly on all eight questions immediately. The point is not to reject every supplier that falls short. The point is to document the current state honestly, identify the gaps, and create a clear upgrade path. A vendor evidence register that tracks these eight answers for every material supplier converts procurement from a one-time qualification exercise into a continuous improvement process.
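The eight answers can be tracked per supplier as one structured record in the register, with the gaps derived rather than hand-maintained. The field names, enumerations, and gap rules below are illustrative assumptions for a register schema, not a standard.

```python
from dataclasses import dataclass

# Assumed assurance tiers, weakest to strongest.
ASSURANCE_TIERS = ("self-declared", "independently reviewed", "third-party assured")

@dataclass
class EvidenceRecord:
    """Answers to the eight evidence-quality questions for one supplier."""
    supplier: str
    boundary: str             # "product", "service", or "corporate estate"
    methodology: str          # "measured", "modelled", or "spend-based"
    data_source: str          # "metered", "engineering estimate", "financial allocation"
    assurance: str            # one of ASSURANCE_TIERS
    period: str               # reporting period covered, e.g. "FY2024"
    exclusions_stated: bool   # are exclusions listed and justified?
    allocation: str           # "physical" or "economic"
    comparable_yoy: bool      # consistent methodology across periods?

    def gaps(self) -> list:
        """List the questions this evidence does not yet answer well."""
        out = []
        if self.boundary == "corporate estate":
            out.append("boundary: corporate average, not product-level")
        if self.assurance == "self-declared":
            out.append("assurance: claim, not evidence")
        if not self.exclusions_stated:
            out.append("exclusions: not stated")
        if not self.comparable_yoy:
            out.append("comparability: no restated baseline")
        return out

# Hypothetical supplier record for illustration.
record = EvidenceRecord(
    supplier="ExampleCloud",
    boundary="service",
    methodology="modelled",
    data_source="engineering estimate",
    assurance="self-declared",
    period="FY2024",
    exclusions_stated=True,
    allocation="physical",
    comparable_yoy=True,
)
print(record.gaps())
```

Deriving the gap list from the recorded answers keeps the upgrade path explicit: each gap names the next conversation to have with that supplier.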
Knowledge Check · Module 8 · Q1
An organisation uses a maturity model to assess its sustainable IT programme. It scores highly on measurement and data quality, but has not assessed governance, operational practices, supplier management, or innovation. What is the risk of this approach?
✓ Correct: Option B
Measurement is almost always the most developed dimension in a sustainability programme, because it was usually the starting point. But strong measurement does not imply strong governance, operational discipline, supplier management, or strategy. An organisation can know exactly what its footprint is and be doing almost nothing to change it.
The multi-dimensional model prevents this illusion. Assessing all five dimensions reveals where genuine progress exists and where significant gaps remain invisible, gaps that would be exploited by any serious external scrutiny.
Knowledge Check · Module 8 · Q2
A CTO is asked by the board to confirm that the organisation's digital sustainability programme is on track. She delegates the response to the sustainability manager. What does this delegation reveal about the governance model?
✓ Correct: Option B
Technology leadership owns outcomes, not just targets. A CTO who cannot speak to the carbon intensity of top services, the coverage of the vendor evidence register, and the progress of the waste removal backlog without deferring to the sustainability manager has not embedded the programme into their operating model.
This is a meaningful governance signal, not a minor administrative question. If the CTO does not own the answers, it is likely that the programme operates as a parallel function rather than as an integrated discipline within technology governance.
⏸ Pause & Reflect
Take 5–10 minutes. Specificity matters more than completeness.
Module 8: Key Takeaways
If a CIO or CTO cannot speak to carbon intensity of top services, supplier evidence coverage, and waste removal progress without consulting a sustainability manager, the governance is not yet embedded.
The GreenOps response to both is the same: precision. Explicit confidence levels, clear distinction between activity-based and spend-based data, and honest methodology documentation.
Measurement is almost always more developed than governance. Supplier management trails both. Mapping all five dimensions prevents the illusion of overall maturity from a single strong area.
Top-ten service register, vendor evidence register, and waste removal backlog. Managed as live operating instruments, not periodic reports.
Challenge demand before purchase, challenge claims before acceptance, maintain governance after award. Most organisations only do the middle one, and not well enough.
Boundary, methodology, data source, assurance, period, exclusions, allocation, and year-on-year comparability. Ask them consistently and document the answers.
What comes next
Track 2 ends here. You now have the operational vocabulary and the discipline frameworks for infrastructure, software, hardware, procurement, and governance. In Module 9, the course moves into Track 3: Leadership. The first question shifts from "how do we operate well?" to "what are we required to disclose, and what capability do we need to do it credibly?" Regulation is not a separate topic. It is the external pressure that tests whether everything you have built in Track 2 actually holds.