
data centers requires less networking, so we shaved $600 million off our carrier fees. Eliminating 100 instances of SAP saved about $100 million in licenses. A remaining $1 billion-plus of savings came from rationalizing our application portfolio from 7,000 applications to 1,800, and reducing IT headcount from 20,000 to 2,000.

Just Getting Started

At this point, we were pretty happy with our progress. We were running the company’s IT with six shiny new data centers, and costs seemed to be at a manageable level. We didn’t realize it at the time, but we were only getting started.

Within 18 months of building the new data centers, we began running out of capacity. We decided to plug the gap with EcoPODs, the modular, containerized data centers we manufacture, each supporting about a megawatt of capacity. We set up an EcoPOD next to each data center and planned to add two a year for the next five years. They were an expense, but far less costly than new data centers.

Our team was happy with our strategy, but our leader was not. Meg Whitman, HPE’s CEO at the time, questioned us about our plan and its metrics, such as CPU utilization. At the time our utilization was about 10 percent, just below the industry average. She said she would like to see it in the 80 percent range. We had work to do.

When we looked at our environment, we saw that we had about 10,000 virtual machines (VMs) that essentially were not being used. The reason? Developers were hoarding them. On average, it took 21 days to get a VM approved. Developers didn’t want to wait that long. They wanted capacity on hand to start developing immediately, as they can today on a PaaS. So, they ordered extras – dozens of them. This got us thinking in terms of a larger transformation.

Taking the Next Step

The first thing we did was set up a cloud-like system we referred to as highly automated platform provisioning. It was not really a cloud. There were no APIs, just automation. Developers could go to a portal and order up cores, storage, memory, an operating system, middleware, databases and load balancing. Twenty minutes later, they would have an environment.

This helped us do a better job of managing our IT environment. We identified VMs that were overprovisioned, used automation tools and drove our utilization up to 30 percent. We were able to eliminate the use of EcoPODs and shrink the number of data centers down to four.

The next step was to move to the cloud. We started by creating an OpenStack cloud for cloud native development projects and then started brokering workloads to Azure. The positive response was immediate. People were tired of the old way of relying on on-premises resources, so we put together a project to move the majority of our workloads to the cloud.

Our plan called for distributing workloads into four main buckets. The first would house about 10 percent of our applications – traditional IT resources, such as SAP HANA appliances and IBM mainframes, which would have to remain on premises. The remainder would go to the OpenStack cloud (10 percent), to the public cloud (60 percent) and to SaaS applications (20 percent). In the end, we moved far more workloads to OpenStack (about 50 percent) and far fewer to the public cloud (10 percent). The problem was we didn’t have a good plan in place to manage costs throughout the process.
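In hindsight, even a simple "savings harvest" check would have gone a long way. The short Python sketch below is purely illustrative (the data model, names and dates are assumptions, not tooling we actually ran): it pairs each migration with the on-premises VMs it was supposed to retire, then flags any still running past their decommission date.

# Illustrative only: a minimal "savings harvest" tracker. All names,
# fields and dates are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Migration:
    workload: str                 # migrated application
    decommission_by: date         # date the old VMs should be gone
    onprem_vms: list = field(default_factory=list)  # VMs still running on premises

def overdue_vms(migrations, today):
    """Return (workload, vm) pairs still running past their decommission date."""
    return [(m.workload, vm)
            for m in migrations if m.decommission_by < today
            for vm in m.onprem_vms]

# Example data: one migration fully retired its VMs, one did not.
migrations = [
    Migration("payroll", date(2018, 3, 1)),
    Migration("inventory", date(2018, 3, 1), onprem_vms=["inv-vm-01", "inv-vm-02"]),
]
for workload, vm in overdue_vms(migrations, today=date(2018, 6, 1)):
    print(f"{workload}: {vm} is still running; savings not yet harvested")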
Public cloud costs were swelling, and we were not shutting off VMs quickly enough to harvest the savings that would let us move fast on public cloud deployments. We got scared and scaled back our cloud efforts.

Understanding the “Why”

This is something CTP’s business model could have helped with. HPE had set a goal to move 60 percent of our workloads to the public cloud, but we did not consider the many factors involved in making migration decisions. CTP helps customers understand the “why.” Our problem was that we had no clear idea of why we should move workloads into certain buckets at certain times. We just wanted to do it.
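As a purely illustrative example of what a structured “why” might look like, the sketch below scores a workload against a few sample criteria and suggests one of the four buckets described above. The criteria, names and rules are assumptions made for the sake of illustration, not CTP’s or HPE’s actual methodology.

# Illustrative only: a toy assessment that suggests a landing bucket and the
# one-line "why" behind it. Criteria and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    must_stay_onprem: bool     # e.g. appliance or mainframe dependency
    has_saas_equivalent: bool  # a commercial SaaS offering could replace it
    cloud_ready: bool          # rearchitected or easily rehosted
    residency_ok: bool         # no regulatory bar to the public cloud

def suggest_bucket(w: Workload) -> str:
    if w.must_stay_onprem:
        return f"{w.name}: traditional IT (on premises), hard infrastructure dependency"
    if w.has_saas_equivalent:
        return f"{w.name}: SaaS, buy it rather than run it ourselves"
    if w.cloud_ready and w.residency_ok:
        return f"{w.name}: public cloud, elastic and unconstrained"
    return f"{w.name}: OpenStack private cloud, keep control while we modernize"

for w in [
    Workload("ERP appliance", True, False, False, False),
    Workload("CRM", False, True, False, True),
    Workload("customer portal", False, False, True, True),
    Workload("legacy reporting", False, False, False, False),
]:
    print(suggest_bucket(w))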