Reading the book - Building Green Software
Have you heard about the book Building Green Software? In this article, I go through my notes and share the most useful points I took from the book. Read to the end, because I've prepared some interesting bonus content!

Listen to me read this post here (not an AI-generated voice!) or subscribe to the feed in your podcast app.
Some of you know, others probably don't, but I'm quite the nerd when it comes to the things that are important to me and close to my heart. Reading Building Green Software was one of those things.
In this article, I'll write about the most important things I learned from it AND, as a bonus, I'll attach all the questions and answers I collected while reading it.
This will not be one of those catchy, clickbaity articles. It is meant to spark curiosity, and interest you to possibly read the book yourself. I found it full of insights, and it is a great starting point if you want to learn more about green software.
What is this book about?
This book - Building Green Software - is, as the title says, about building green software: software that causes minimal carbon emissions while running. It looks at the three core principles of green computing:
- Energy efficiency - use less energy to do the same job.
- Hardware efficiency - use less hardware to do the same job.
- Carbon awareness - adjust operational and runtime aspects of an application based on the current carbon emissions.
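Carbon awareness is the least intuitive of the three principles, so here is a minimal sketch of the idea: a flexible batch job is shifted to the hour with the lowest forecast grid carbon intensity. The forecast numbers are made up for illustration; a real system would query a grid carbon-intensity service.

```python
# Minimal sketch of carbon awareness: run a flexible batch job at the
# hour with the lowest forecast grid carbon intensity (gCO2eq/kWh).
# The forecast values below are hypothetical, not real data.

def pick_greenest_hour(forecast: dict) -> int:
    """Return the hour with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

# Hypothetical hourly forecast: lowest at night, when wind dominates the mix.
forecast = {0: 120.0, 6: 210.0, 12: 340.0, 18: 290.0}

best_hour = pick_greenest_hour(forecast)
print(best_hour)  # 0 - schedule the batch job for midnight
```

The same demand-shifting idea applies in space as well as time: a carbon-aware scheduler can also pick the region whose grid is currently greenest.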
Beyond the principles above, the book also covers networking emissions; what we can do about the greenness of ML, AI, and LLMs; how we can measure and monitor emissions; the benefits of these approaches for all of us; and the green software maturity matrix - a way to assess how mature you are when it comes to green software.
Who wrote it?
This book was written by three authors - Anne Currie, Sarah Hsu, and Sara Bergman.
Anne Currie is a techie with 30 years of experience and a writer. She writes about tech, climate, ethics, AI and surveillance.
Sarah Hsu is a Google Site Reliability Engineer and a strong advocate for green and sustainable software. She is a regular speaker and writer on the subject, and the chair of the Green Software Course project at the Green Software Foundation.
Sara Bergman is a Senior Software Engineer working in the Microsoft ecosystem. She is an advocate for green software and speaks about it publicly at conferences, podcasts, and meetups. She is also a contributor to the Green Software Foundation.
What are some important points I noted?
As I mentioned above, I think this book is a great start if you want to dive into the ecosystem of building sustainable software. There are many things I've noted reading this book, and I wanted to share here the ones that I find the most important.
This book has also been an inspiration for a couple of blog posts I've written so far, and I've used various concepts in some presentations I held on the topic of green software.
What is the difference between climate change and global warming?
Climate change is the change in the Earth's local, regional, and global climate, driven by long-term variations in weather patterns. The climate has always changed throughout Earth's history, but the recent change has been faster than the usual cycles.
Global warming, on the other hand, is the continuous warming of Earth's surface and oceans, since the preindustrial age.
So, climate change is a normal Earth process that happens in cycles. What is not normal is the current speed of change, caused by global warming. The two terms are often used interchangeably, but they describe different things.
What is efficient code, and what are some common design patterns for code efficiency?
Efficient code is code that doesn't do more work than necessary to achieve the designed functionality. Common design patterns to improve code efficiency are:
- Avoid too many layers, so we don't double up on the work done by our platform and create wasteful layers.
- Be mindful when using microservices - send fewer, larger messages using RPC rather than JSON-based communication, and carefully plan the architecture and inter-service calls.
- Replace inefficient services and libraries - use performance profiling to find the bottlenecks (slow services and libraries).
- Don't do or save too much - don't implement features you don't need, or save the data you don't need or use.
- Leverage client devices - use devices to the fullest and make them last as long as possible.
- Manage Machine Learning - reduce data collection and time for model training, and train models on green energy.
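The "don't do more work than necessary" idea can be sketched in a few lines: cache the result of an expensive pure computation instead of redoing it on every call. The function below is a hypothetical stand-in for any costly operation.

```python
# Sketch of avoiding unnecessary work: memoize an expensive pure function
# so the real computation runs once, not on every call.
from functools import lru_cache

calls = 0  # counts how often the real work actually runs

@lru_cache(maxsize=None)
def expensive_lookup(key: str) -> str:
    global calls
    calls += 1            # only incremented on a cache miss
    return key.upper()    # stand-in for a costly computation or I/O call

for _ in range(1000):
    expensive_lookup("carbon")   # 999 of these 1000 calls hit the cache

print(calls)  # 1 - the work ran once, not 1000 times
```

Fewer redundant computations means less CPU time, which translates directly into less energy for the same result.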
What is operational efficiency and which techniques can be used to improve it?
Operational efficiency means achieving the same functional results from an application or service while using fewer hardware resources - servers, disks, and CPUs. Some techniques that can be used to improve operational efficiency are:
- Turn things off when not used or hardly used (e.g. test or dev systems during the weekend).
- Don't over-provision - use approaches like rightsizing, auto-scaling, and burstable cloud instances; it is okay to scale up, but scale down as well.
- Cut the cost bills by inspecting your cloud provider's tools - cheaper is almost always greener.
- Use containerized microservices only where they won't add unnecessary complexity or over-provisioning (e.g. avoid a Kubernetes cluster for a simple SPA).
- When running in the cloud - choose instances that give the most flexibility, pre-optimised instance types (e.g. managed DBs), or spot instances.
- Embrace multitenancy - from shared VMs to managed Platforms.
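The first technique, turning things off, is easy to automate. Here is a minimal sketch that decides whether an environment should be up at a given time; the environment names and business hours are assumptions for illustration.

```python
# Sketch of "turn things off when not used": keep dev/test environments
# running only during working hours on weekdays. Hours and environment
# names are illustrative assumptions.
from datetime import datetime

def should_be_running(env: str, now: datetime) -> bool:
    if env == "production":
        return True                   # production stays on around the clock
    workday = now.weekday() < 5       # Monday=0 ... Friday=4
    office_hours = 8 <= now.hour < 18
    return workday and office_hours   # dev/test only during work hours

# 2024-06-08 is a Saturday, so the dev environment should be off.
print(should_be_running("dev", datetime(2024, 6, 8, 12)))  # False
```

A scheduled job (a cron task or a cloud scheduler) could call a check like this and stop or start the instances accordingly.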
What is Jevons paradox?
Improving the efficiency of doing something makes us do it even more. For example, if we improve the energy efficiency of data centres, we'll want more of them, and end up consuming more energy than we did initially.
What are some ways to make deployment of ML models greener?
We could decrease the size of the model in use - deployment becomes cheaper, and smaller devices can run the model.
There are also several Machine Learning techniques that can make our models greener.
Quantization
This is an ML optimisation technique that reduces the computational load and memory footprint of neural networks without significantly impacting model accuracy. It involves converting 32-bit floating point data to a lower precision such as 8-bit integers, performing the critical operations at that precision, and finally converting the lower-precision output back to 32-bit floating point.
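That precision round trip can be illustrated with a toy example: map floating point weights into the 8-bit integer range with a scale factor, then convert back. Real frameworks do this far more carefully; this sketch only shows the core idea and the small precision loss involved.

```python
# Toy sketch of quantization: floats -> int8 range [-128, 127] -> floats.
# Not a real framework implementation, just the precision round trip.

def quantize(values, scale):
    """Map floats to the int8 range using a fixed scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(ints, scale):
    """Map int8 values back to floats."""
    return [i * scale for i in ints]

weights = [0.91, -0.42, 0.07, -1.25]
scale = max(abs(w) for w in weights) / 127  # fit the largest weight in range

q = quantize(weights, scale)        # the cheap integer operations happen here
restored = dequantize(q, scale)     # back to higher precision at the end

print(q)
print([round(r, 3) for r in restored])  # close to the originals, small error
```

Integer arithmetic is cheaper than floating point and the quantized weights take a quarter of the memory, which is where the energy saving comes from.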
Knowledge Distillation
This is the technique of transferring the "knowledge" from a large, complex model (the "teacher") into a smaller, more efficient model (the "student"). The goal is to train the student model to mimic the behaviour and replicate the performance of the teacher.
Model Pruning
Pruning is a technique for "removing" weights in a neural network - setting them to zero. We can do this randomly, or we can remove the least important ones.
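The "less important" variant is often done by magnitude: the weights with the smallest absolute values are assumed to contribute least and are zeroed out. A minimal sketch, with an arbitrary example weight list:

```python
# Sketch of magnitude pruning: zero out the fraction of weights with the
# smallest absolute values, keeping the most "important" ones.

def prune_by_magnitude(weights, fraction):
    """Set the smallest `fraction` of weights (by |value|) to zero."""
    n_prune = int(len(weights) * fraction)
    # indices of the n_prune smallest-magnitude weights
    to_zero = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:n_prune]
    pruned = list(weights)
    for i in to_zero:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.3, -0.02, 0.6, 0.01]
print(prune_by_magnitude(w, 0.5))  # -> [0.9, 0.0, 0.3, 0.0, 0.6, 0.0]
```

The resulting sparse weights can be stored and executed more cheaply, at the cost of a (usually small) accuracy drop that fine-tuning can recover.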
A great article on model compression and optimisation techniques, covering all three mentioned above, can be found at the link below.

Summary
If you want to start your journey in digital sustainability - this is the book for you. If you want to brush up your knowledge on greening IT - this is the book for you. If you are curious about how we can help build sustainable IT for the future - this is the book for you!
Below is the long-anticipated link to the book. Enjoy!