The Public Cloud Market and Operational Logistics

by Sebastien Mirolo on Sun, 6 Oct 2013

Whenever you run your own fixed-size compute cluster, you are contending with two issues: utilization and time-in-queue. Utilization is the extent to which your installed compute capacity is actually used, expressed as a percentage. Time-in-queue is the delay between submission of a job and the start of its execution - obviously, the shorter the better.
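
The two metrics can be computed directly from a job log. A minimal sketch, where the cluster size, observation window, and job records are all invented for illustration:

```python
# Hypothetical sketch: computing utilization and average time-in-queue
# for a fixed-size cluster from a list of job records. The cluster size,
# window, and job data below are illustrative assumptions.

CLUSTER_SIZE = 100    # machines in the cluster
WINDOW_HOURS = 24.0   # observation window

# (submit_hour, start_hour, end_hour) per job; one machine per job
jobs = [
    (0.0, 0.0, 6.0),
    (1.0, 1.5, 9.5),
    (2.0, 4.0, 10.0),
]

# Utilization: busy machine-hours over total available machine-hours.
busy_machine_hours = sum(end - start for _, start, end in jobs)
utilization = busy_machine_hours / (CLUSTER_SIZE * WINDOW_HOURS)

# Time-in-queue: delay between submission and start of execution.
queue_delays = [start - submit for submit, start, _ in jobs]
avg_queue = sum(queue_delays) / len(queue_delays)

print(f"utilization: {utilization:.1%}")
print(f"average time-in-queue: {avg_queue:.2f} hours")
```

The tension is visible even in this toy data: the cluster sits mostly idle, yet jobs still wait in the queue whenever submissions cluster together.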

As an IT manager, you have to decide how many machines to buy, when to buy them, and how long to keep them in production. There is no good answer when you are dealing with loads that can jump a hundredfold for a few hours, as is typically the case in integrated circuit verification. It is not only your problem but also your Chief Financial Officer's, since buying a data center is often a capital expenditure.

The public compute cloud (which I distinguish from the public storage cloud) solves both problems when it comes to Verilog simulation. First, capital expenditure (capex) becomes operational expenditure (opex), with the associated accounting implications. Second, with access to thousands more machines than you could ever justify owning, utilization is no longer a problem - or more accurately, it is the cloud provider's concern, not yours.
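
The utilization point can be made with back-of-the-envelope arithmetic. A sketch, with every dollar figure invented for illustration: idle hours on an owned machine still cost money, so the effective cost per useful machine-hour rises as utilization falls.

```python
# Back-of-the-envelope sketch (all figures hypothetical): comparing the
# effective hourly cost of an owned machine at a given utilization with
# an on-demand cloud rate you only pay while a job actually runs.

OWNED_COST_PER_HOUR = 0.25   # assumed: amortized capex + power + admin
CLOUD_RATE_PER_HOUR = 1.00   # assumed: on-demand rate

def effective_owned_rate(utilization):
    """Cost per *useful* machine-hour: idle hours still cost money."""
    return OWNED_COST_PER_HOUR / utilization

for u in (0.10, 0.25, 0.50):
    print(f"{u:.0%} utilization -> ${effective_owned_rate(u):.2f} per useful hour")
```

With these assumed figures, owning only beats renting above 25% utilization - exactly the threshold a bursty verification load makes hard to sustain.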

When it comes to running Electronic Design Automation tools, the public cloud is a market in the same sense as a commodities market. Amazon already lets you bid on spot instances. I wouldn't be surprised to see futures, hedgers, and speculators appear around public cloud computing at some point.
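
The basic spot mechanic is simple to sketch: your instance keeps running while the market price stays at or below your bid, and is reclaimed once the price exceeds it. A toy model, with the price history entirely invented:

```python
# Illustrative sketch (prices invented): the core mechanic behind bidding
# on spot instances - you keep the machine while the hourly market price
# stays at or below your bid, and lose it the first hour it doesn't.

def hours_before_interruption(price_history, bid):
    """Count consecutive hours the instance survives at the given bid."""
    hours = 0
    for price in price_history:
        if price > bid:
            break  # market price exceeded the bid: instance reclaimed
        hours += 1
    return hours

prices = [0.12, 0.15, 0.40, 0.13]  # hypothetical $/hour spot prices
print(hours_before_interruption(prices, bid=0.20))
```

For long Verilog regressions this is exactly why market-style thinking matters: the bid you choose trades price against the risk of losing the run partway through.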

When price and time-to-result replace utilization and time-in-queue as your metrics, it also becomes more natural to think of your IT and tools as an operational logistics problem. Processor companies already have strong logistics know-how (they make their money selling manufactured chips, after all), so extending those processes to cover front-end design IT procurement is a compelling proposition.