Organizations are entering the next era of intelligent, data-driven digital technology. As they do, it’s becoming increasingly important for them to understand how their products and applications affect society and the environment.

Whatever stage an enterprise has reached on its cloud transformation journey – from modernizing infrastructure and adopting hybrid cloud to moving toward a cloud-first approach or building a multi-cloud strategy – understanding how to integrate sustainability into the design, development and release management of cloud applications should be a core part of its strategy for delivering sustainable products.

Moving to the cloud certainly helps to save energy and compute resources. But without a clear sustainability strategy and an understanding of how to design and develop cloud applications with sustainability in mind, it is difficult to build genuinely sustainable applications.

While designing and creating sustainable products is a vast topic, my focus here is to provide a point of view on how to design and deliver sustainable applications using cloud technology.

Building a sustainable cloud strategy

Building a sustainable cloud strategy is about how you design, develop and deliver energy-efficient cloud applications.

In my view, the strategy can be broken down into the following areas:

  • Unified measurement of carbon emissions
  • Carbon-aware workloads
  • Leverage serverless technology and managed services
  • Leverage cloud services responsibly
  • Data storage and lifecycle
  • Network, infrastructure, and edge computing

Unified Measurement of Carbon Emissions – Design for build once, deploy and measure anywhere

“Build once, deploy, and measure anywhere” is a set of design principles for making enterprise applications cloud-agnostic so that they can be deployed on virtually any cloud platform. The real value of this approach is developing software applications whose carbon footprint can be measured consistently in any environment and optimized where needed. Each software application can be measured independently or collectively as part of a larger application.

Enterprises should break down their applications into loosely coupled services, which can be packaged into containers, where functionalities are exposed as APIs. For communication between services (internally as well as externally), technologies like service mesh can be used to transparently address cross-cutting areas such as service telemetry, security, and API throttling by including the service mesh container (as a sidecar proxy pattern) along with service containers.

Enterprises can opt for a managed Kubernetes service or an equivalent managed container service for deploying, managing, and scaling containers. As part of the deployment, CPU and memory requirements can be defined for each container, and the platform then schedules workloads and compute resources based on actual usage.
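As an illustration of that right-sizing step, here is a minimal Python sketch. The `rightsize` helper and its headroom rule are my own; only the `requests`/`limits` field names follow the Kubernetes resource model:

```python
def rightsize(observed_cpu_millicores, observed_mem_mib, headroom=1.2):
    """Derive container resource requests from observed peak usage plus
    headroom, so the scheduler can pack nodes tightly without throttling.
    Returns a dict shaped like a Kubernetes resources block."""
    cpu = round(max(observed_cpu_millicores) * headroom)
    mem = round(max(observed_mem_mib) * headroom)
    return {"requests": {"cpu": f"{cpu}m", "memory": f"{mem}Mi"},
            "limits":   {"cpu": f"{cpu * 2}m", "memory": f"{mem}Mi"}}

# Observed peaks from monitoring: 250 millicores CPU, 420 MiB memory
print(rightsize([120, 180, 250], [300, 310, 420]))
```

Requests derived from actual usage (rather than generous guesses) let the platform bin-pack more containers per node, which is exactly where the energy saving comes from.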

Serverless technology can also be used (see Serverless section) to save compute resources based on specific use cases. Network design and data transfer also play an essential role (see Network section).

Runtime data (CPU, RAM, API latency, network transfer, etc.) can then be aggregated across all these approaches to generate an application’s digital carbon footprint. The enterprise can use this value as a baseline and integrate the measurement into its CI/CD process to track changes in future development. Standards like the Green Software Foundation’s Software Carbon Intensity (SCI) specification can be used to calculate the carbon intensity of a software application, and the focus should be on creating energy-efficient applications.
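The SCI specification expresses this as a rate: (E × I + M) per R, i.e. operational energy times grid carbon intensity, plus amortized embodied emissions, divided by a functional unit such as API calls served. A minimal sketch of the calculation (the helper name and the example numbers are illustrative):

```python
def software_carbon_intensity(energy_kwh, grid_intensity_gco2_per_kwh,
                              embodied_gco2, functional_units):
    """SCI = ((E * I) + M) / R, following the Green Software Foundation spec.

    E: operational energy consumed by the software (kWh)
    I: location-based grid carbon intensity (gCO2e/kWh)
    M: embodied emissions amortized over this workload (gCO2e)
    R: functional unit, e.g. number of API calls served
    """
    operational = energy_kwh * grid_intensity_gco2_per_kwh
    return (operational + embodied_gco2) / functional_units

# Illustrative: 1.2 kWh on a 400 gCO2e/kWh grid, 50 g embodied, 10,000 calls
score = software_carbon_intensity(1.2, 400, 50, 10_000)
print(f"{score:.4f} gCO2e per API call")
```

Because SCI is a rate rather than a total, it stays comparable as the application is redeployed to a different region or scaled up, which is what makes it useful as a CI/CD baseline.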

Once the enterprise has these measurements, it can start optimizing its code by changing the implementation (or moving toward energy-efficient languages) and deploying new versions of containers – without changing the service contract or altering existing applications. For new developments, it should evaluate which design patterns and optimizations to adopt to deliver green code (I’ll be covering key aspects of green code in a future blog.)

This strategy not only helps an enterprise measure and optimize its sustainability impact uniformly; it also keeps the enterprise agile and cloud-neutral, able to deploy applications on any cloud vendor based on its green cloud offerings.

Carbon-aware workloads

Carbon-aware computing, in simple terms, means choosing where and when to run workloads so they can leverage clean energy. Cloud vendors are becoming increasingly transparent and now publish which regions have lower carbon footprints. If, due to data and regulatory requirements, an enterprise cannot select a preferred low-carbon region, it can still deploy its development and pre-production workloads (which don’t run against actual client data) in low-carbon regions. Workloads like batch jobs, data pipelines, stress/performance testing, or training machine-learning models are all candidates to run in lower-carbon regions.

So, carefully planning where and when to run each of their application components (once containerized) is an important decision for enterprises to include in their CI/CD processes. Automated deployment pipelines can be built to factor in all of these parameters during application deployment.
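A pipeline step that factors in published grid intensity might look like this minimal sketch. The `pick_deployment_region` helper is my own, and the intensity figures are illustrative, not real vendor data:

```python
def pick_deployment_region(candidates, grid_intensity, allowed=None):
    """Pick the candidate region with the lowest grid carbon intensity.

    candidates:     regions the workload could run in
    grid_intensity: region -> gCO2e/kWh, e.g. from a vendor's published
                    data or a carbon-intensity API
    allowed:        optional regulatory allow-list; None means unrestricted
    """
    eligible = [r for r in candidates if allowed is None or r in allowed]
    if not eligible:
        raise ValueError("no eligible region satisfies the constraints")
    return min(eligible, key=lambda r: grid_intensity[r])

regions = ["eu-north", "us-east", "ap-south"]
intensity = {"eu-north": 30, "us-east": 380, "ap-south": 700}  # illustrative
print(pick_deployment_region(regions, intensity))
# With a data-residency constraint, pick the greenest *permitted* region:
print(pick_deployment_region(regions, intensity, allowed={"us-east", "ap-south"}))
```

The same lookup can drive "when" as well as "where": a batch job can be deferred until the forecast intensity for its region drops below a threshold.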

Leverage serverless technology and managed services

Serverless is an important capability offered by all leading cloud vendors. In a serverless environment, the infrastructure and underlying resource optimization are handled by the cloud vendor. The application need not run all the time; based on invocations (request/response) or triggers, an appropriate amount of compute is allocated to serve each request.

So when designing cloud applications, enterprises should strategize which components can leverage serverless technology – sending push notifications once a day, proof-of-technology work, development instances, data pipelines, ML training or inference – and thereby use compute resources effectively.
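The shape of such a component is a small, stateless handler invoked by a trigger rather than an always-on process. A minimal Lambda-style sketch (the daily-digest use case and the `handler` signature are illustrative):

```python
def handler(event, context=None):
    """Hypothetical daily-digest handler, fired once a day by a scheduler.

    Compute is allocated only while this function runs; between
    invocations the application consumes no compute at all.
    """
    user_ids = event.get("user_ids", [])
    # ... build and send one digest per user (omitted) ...
    return {"sent": len(user_ids)}

# A scheduled trigger would supply the event payload:
print(handler({"user_ids": ["u1", "u2", "u3"]}))
```

The sustainability gain comes from scale-to-zero: a digest job that runs for seconds a day no longer justifies a VM that idles for the other 23+ hours.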

Managed services offered by cloud vendors let you configure, operate and scale the services you need, leaving the management, patching, optimization, upgrades and scaling of the underlying hardware to the cloud vendor.

Managed services and their infrastructure are shared across projects, which improves utilization across cloud projects. So leveraging managed services minimizes your carbon impact as a shared responsibility, including the embodied carbon and lifecycle of the infrastructure.

Leverage cloud services responsibly

Cloud puts virtually unlimited computing at your disposal, along with a host of services to enable application development. Understanding which services to use for specific requirements, how to use and optimize those services effectively, and what infrastructure to run applications on are all important criteria that affect an application’s carbon footprint.

For example, training a machine-learning (ML) or deep-learning model typically consumes massive amounts of energy. However, training a model for longer does not necessarily improve its accuracy beyond a certain point (or yields only marginal improvement). So knowing when to draw the line between accuracy, error rate, and recall becomes important. Techniques like transfer learning should be adopted wherever applicable to minimize both training and inference costs.
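"Knowing when to draw the line" is essentially an early-stopping rule. A minimal sketch (the `should_stop_training` helper, its patience window, and the score values are illustrative):

```python
def should_stop_training(val_scores, patience=3, min_delta=0.001):
    """Stop when the last `patience` epochs improved the validation score
    by less than `min_delta` over the best score seen before them.

    val_scores: per-epoch validation metric (higher is better)
    """
    if len(val_scores) <= patience:
        return False  # not enough history to judge a plateau
    best_before = max(val_scores[:-patience])
    recent_best = max(val_scores[-patience:])
    return recent_best - best_before < min_delta

# Accuracy has plateaued: three epochs of sub-0.001 improvement
scores = [0.71, 0.78, 0.82, 0.831, 0.8312, 0.8313, 0.8311]
print(should_stop_training(scores))
```

Every epoch skipped after the plateau is energy saved for an accuracy gain nobody would notice; most ML frameworks ship an equivalent early-stopping callback.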

Second, the choice of hardware environment – CPU, GPU or TPU, and whether the code has been optimized to leverage hardware libraries – must be considered during ML model development. For example, if an enterprise is running a generative adversarial network (GAN) for computational creativity involving video and audio files, a GPU or TPU is usually the better alternative for training that model quickly and efficiently. Cloud vendors also offer energy-efficient VMs and specialized custom chips that should be evaluated against application requirements.

Another example is push notifications delivered to consumers’ smartphones. It’s been estimated that billions of notifications are sent every day – yet it’s unclear what percentage of them are actually viewed or acted upon. Apart from compute resources, network bandwidth is consumed to deliver these notifications. A priority must therefore be a design approach that ensures only relevant and personalized notifications are delivered. Applications should filter out irrelevant notifications and limit sending the same information to a consumer over multiple channels (e.g. email, SMS, WhatsApp, etc.)
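A filtering step like the one described might be sketched as follows. The `plan_notifications` helper and its data shapes are my own illustration:

```python
def plan_notifications(events, preferences, seen):
    """Filter notifications so each consumer receives a given message at
    most once, on one preferred channel, and only for opted-in topics.

    events:      list of (user, topic, message) tuples to consider sending
    preferences: user -> (set of opted-in topics, preferred channel)
    seen:        set of (user, message) pairs already delivered anywhere
    """
    plan = []
    for user, topic, message in events:
        topics, channel = preferences.get(user, (set(), None))
        if topic not in topics or (user, message) in seen:
            continue  # irrelevant to this user, or a cross-channel duplicate
        seen.add((user, message))
        plan.append((user, channel, message))
    return plan

prefs = {"alice": ({"orders"}, "email"), "bob": ({"promos"}, "sms")}
events = [("alice", "orders", "shipped"),
          ("alice", "orders", "shipped"),  # duplicate -> dropped
          ("alice", "promos", "sale"),     # not opted in -> dropped
          ("bob", "promos", "sale")]
print(plan_notifications(events, prefs, set()))
```

Of four candidate sends, only two leave the system; every dropped notification saves compute at the sender and network and device energy at the receiver.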

Data storage and lifecycle

As the source of actionable insights, data is the key asset of every business. Today, we’re seeing breakneck growth in data volumes (structured and unstructured), with more and more being generated across blogs, social media, transactional systems, analytics data, devices, learning systems, medical journals, videos and so on.

Enterprises will also have accumulated huge quantities of historical data for their audit and compliance requirements. There is also a widespread perception that large diverse datasets are needed to build ML models, so data accumulation is a never-ending process.

Such large varieties and volumes of data mean that more compute resources are needed to build data pipelines and support downstream processing (e.g. creating an ML model or running analytics).

To do the subject justice, a discussion of how to build a large, scalable, resource-optimized, data-aware green computing pipeline (based on various use cases) will require a separate blog. Briefly, however, the point is that without a clear, defined strategy for data storage and lifecycle management, enterprises will keep accumulating data and training systems indefinitely, significantly increasing their carbon footprint.

In summary, sustainable development demands careful evaluation and implementation of all data practices – from labelling data, understanding what data is in use and what should be archived or purged, and performing incremental backups, to assessing data relevancy, choosing energy-efficient storage options for long-term retention, and weighing the impact of creating ML models with limited representative data.
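The archive/purge part of such a strategy reduces to a simple tiering policy evaluated per object. A minimal sketch (the `storage_action` helper, its thresholds, and the seven-year retention default are illustrative, not a recommendation):

```python
def storage_action(age_days, last_access_days, retention_days=2555):
    """Decide a lifecycle action for a stored data object.

    Illustrative policy: purge anything past its retention period
    (~7 years here), move cold data (untouched for 90+ days) to an
    archival storage class, and keep everything else in hot storage.
    """
    if age_days > retention_days:
        return "purge"
    if last_access_days > 90:
        return "archive"  # e.g. a cold/archival storage class
    return "keep"

print(storage_action(age_days=3000, last_access_days=10))   # past retention
print(storage_action(age_days=400, last_access_days=200))   # cold data
print(storage_action(age_days=30, last_access_days=2))      # hot data
```

Cloud object stores can apply rules like these automatically via lifecycle configurations, so the policy runs without any always-on compute of its own.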

Network, infrastructure, and edge computing

Setting up a cloud infrastructure usually starts with implementing network connectivity. Different environments can be set up to isolate key resources like development and production.

The way a network infrastructure is set up and communicates can significantly impact how data is transferred between environments. Some cloud providers use their own internal fiber-optic cables by default to transfer data between virtual private clouds; others require a VPC link to be created to route traffic internally without traversing the public network.

Some cloud providers have also laid their own underwater fiber-optic cables, which consume less energy and have less environmental impact than copper-wire cables. Knowing these details can help enterprises plan their infrastructure and connectivity components and leverage existing green technology.

If an enterprise plans to connect its on-premises environment to the cloud, it can evaluate the various connectivity options offered by cloud vendors to significantly reduce network latency while being energy efficient.

Not all applications need to reside on a public cloud, and certain classes of applications (especially with 5G) require data and compute to be located closer to end-users/systems to meet ultra-low-latency requirements (e.g. low-latency streaming video applications, 3D and virtualized experiences, industrial automation, smart cities, the Metaverse, etc.)

This is where edge computing infrastructures are needed, with compute and data co-located closer to end-users or systems. With edge computing, network transfer and bandwidth between edge instances and devices/systems are also significantly reduced compared to the cloud – because there’s no need to transfer data to and store it in the cloud, less processing power is needed and energy is conserved.

While edge computing helps in areas around network and data storage, the key point is to create optimized applications that can run on resource-constrained systems and to leverage existing physical servers for edge computing. Circularity and embodied emissions should also be factored into your carbon-emission reduction targets.

Because connected devices play a key role too, it’s not just about software optimization but also about choosing the right hardware so that the enterprise can leverage hardware optimization features during application development.

Also, during data transfer, appropriate compression techniques should be adopted based on the type and size of the data being transferred.
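A minimal sketch of that decision, using gzip and a size threshold as the illustrative rule (the `maybe_compress` helper and its content-type list are my own):

```python
import gzip

def maybe_compress(payload, content_type, min_size=1024):
    """Compress before transfer only when it is likely to pay off.

    Skips media formats that are already compressed (recompressing them
    wastes CPU energy for little gain) and payloads too small to matter.
    Returns (bytes_to_send, was_compressed).
    """
    already_compressed = content_type in {"image/jpeg", "video/mp4",
                                          "application/zip"}
    if already_compressed or len(payload) < min_size:
        return payload, False
    return gzip.compress(payload), True

data = b"sensor-reading,42.0;" * 200  # repetitive telemetry compresses well
packed, compressed = maybe_compress(data, "text/csv")
print(compressed, len(data), "->", len(packed))
```

The trade-off is CPU energy at the endpoints against network energy in transit; for repetitive text and telemetry the network saving usually dominates, while for a JPEG it never does.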


To sum up, it’s now essential to incorporate sustainability into your core strategy for designing, developing and deploying cloud applications. In follow-on blogs in this series, I’ll deep dive into other design issues and the concept of green architecture patterns for sustainable cloud applications and the move towards greenOps.

Tags : green software

The author Navveen