scheduler shuts down the machines at 7:00:01 PM, you will now be charged for that 1 second instead of the full hour. Let’s further assume that ALL of these development virtual machines are a good size (c4.4xlarge – 16 vCPUs and 30GB RAM) that costs you $0.796/hour. In this scenario, you will see daily savings of $557.
Here’s the formula:

700 Instances × $0.796/Hr × 1 Hour Savings = $557/Day
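If you’d rather check that math in code, here’s a minimal Python sketch; the instance count, hourly rate, and the single saved hour per shutdown are simply the assumptions above:

# Daily/weekly/annual savings from per-second billing on scheduled shutdowns.
# All figures are the article's assumptions, not measured prices.
INSTANCES = 700          # development VMs on the 7AM-7PM schedule
RATE_PER_HOUR = 0.796    # c4.4xlarge on-demand price, $/hour
HOURS_SAVED = 1          # the final partial hour no longer billed in full

daily_savings = INSTANCES * RATE_PER_HOUR * HOURS_SAVED
weekly_savings = daily_savings * 5        # weekdays only
annual_savings = weekly_savings * 52

print(f"Daily:  ${daily_savings:,.0f}")   # $557
print(f"Weekly: ${weekly_savings:,.0f}")  # $2,786
print(f"Annual: ${annual_savings:,.0f}")  # ~$145K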
That adds up to weekly savings of $2,786, and annual savings of $145K. Wow, that’s a lot of money, but let’s dig in a little further. For the numbers above to be true, you would be using all your servers in a pay-as-you-go model (without prepaying for reserved instances). If you prepaid, your hourly costs would have been much lower, and so would your potential savings. In addition, prepaying suggests that you expect to use the servers most of the time and not start/stop them a whole lot. If you’re starting and stopping them as described above, something is wrong with your architecture and business model, and you need to rethink your approach.
But let’s say the architecture and business models are
fine, and you truly need to do what was described. In
that case, the actual cost of those servers to you is:
700 Servers × 12 Hours (7AM-7PM) × 5 Days/Wk × 52 Weeks × $0.796/Hr = $1,738,500
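A quick sanity check in Python, using the same assumed figures (the small gap from $1,738,500 is just rounding):

# Annual on-demand cost of the 12-hour weekday schedule, and the share
# of that spend the scheduler savings represent.
SERVERS = 700
HOURS_PER_DAY = 12        # 7AM-7PM
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 52
RATE_PER_HOUR = 0.796     # c4.4xlarge on-demand price, $/hour

annual_cost = SERVERS * HOURS_PER_DAY * DAYS_PER_WEEK * WEEKS_PER_YEAR * RATE_PER_HOUR
annual_savings = SERVERS * RATE_PER_HOUR * DAYS_PER_WEEK * WEEKS_PER_YEAR  # 1 hr saved/day

print(f"Annual spend:  ${annual_cost:,.0f}")                  # $1,738,464
print(f"Savings share: {annual_savings / annual_cost:.1%}")   # 8.3%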
Now, that’s a BIG number, and the savings amount to 8.3% of your total compute spend! But if you’re spending $1.7M a year on simply renting servers from a public cloud vendor, something else is wrong. You either have not done a proper TCO analysis of your cloud usage, did not architect your applications and environments to take advantage of cloud capabilities, or simply are not paying attention to where the money goes. If any of those are true, I don’t think you’re the type of company that cares about a “mere” $145K in savings; it doesn’t look like it makes that much of a difference to you. By the way, your true cloud bill is most likely much higher, because we haven’t even talked about the cost of storage, networking, APIs, etc.
Use Case 2: Now let’s look at a more realistic scenario than the one above. Something like web servers that need to handle spikes in traffic, or applications that periodically spin up a series of servers to perform high-intensity computations. Let’s examine the bursty web server scenario. Your company has a very cool website that’s incredibly popular. You have 300 web servers (c4.large – 2 vCPUs with 3.75GB RAM) powering the website, behind a load balancer and an auto-scaling group set up to account for spikes. Each server costs you $0.10/hour. Every day, there’s a spike in activity in the morning and then for a couple of hours in the afternoon, during which you need to increase capacity by 40%. That’s actually a lot, but let’s go with it for argument’s sake. Let’s also use the above comparison, where shutting down within 1 second saves you the whole hour. So your annual savings now are:
120 Servers (300 servers × 40%) × 2 Spikes/Day (1 hr savings in the AM shutdown and 1 hr in the PM) × $0.10/Hr × 365 Days/Yr = $8,760
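Here’s the same burst-savings arithmetic as a small Python sketch (again, all assumed figures from the text):

# Annual savings from per-second billing on the two daily scale-down events.
BURST_SERVERS = 300 * 0.40    # 120 extra servers per spike
SPIKES_PER_DAY = 2            # one in the morning, one in the afternoon
RATE_PER_HOUR = 0.10          # c4.large on-demand price, $/hour
HOURS_SAVED = 1               # final partial hour no longer billed in full

annual_savings = BURST_SERVERS * SPIKES_PER_DAY * HOURS_SAVED * RATE_PER_HOUR * 365
print(f"${annual_savings:,.0f}")  # $8,760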
That’s a decent chunk of change that should definitely come in handy for other needs.
Now, let’s calculate your total spend:
300 Regular Servers × 24 Hrs/Day × 365 Days/Yr × $0.10/Hr = $262,800

120 Scaled-up Servers × 2 Hrs/Spike × 2 Spikes/Day × 365 Days/Yr × $0.10/Hr = $17,520

Total: $280,320
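And the total-spend math in the same style of Python sketch:

# Total annual spend: the always-on baseline plus the burst capacity.
baseline = 300 * 24 * 365 * 0.10    # 300 servers, around the clock
burst = 120 * 2 * 2 * 365 * 0.10    # 120 servers, 2 hrs/spike, 2 spikes/day
total = baseline + burst

print(f"Baseline: ${baseline:,.0f}")  # $262,800
print(f"Burst:    ${burst:,.0f}")     # $17,520
print(f"Total:    ${total:,.0f}")     # $280,320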