November 22, 2014

OK, I still get mad when someone uses the word DevOps for processes where nothing about the Ops part is actually going on, and none of the developers care, or want to understand, that things like networking, CPU context switches and drivers exist. Then they are suddenly angry that the software they developed on their laptops, sharing the same Layer 2 domain, no longer works when they put it in two separate cloud zones.
But this post isn't about that. It's about our increased reliance on external tools, workflows and providers. In becoming more agile we seem to be sacrificing a lot of our knowledge, without even thinking about the business risks.

EaaP - Ecosystem as a Problem

We rely increasingly on things that make developers' lives easier, like one-click installs, repository providers and various clouds. Every one of them has its risks.

If your cloud provider shuts down, your business dies

Have you ever thought about what will happen when your cloud provider decides to shut down? They wouldn't do that, or would they? For an example, go read the AWS Agreement, section 11. For your convenience, a quote (as of 2014-11-22):
WE AND OUR AFFILIATES OR LICENSORS WILL NOT BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR EXEMPLARY DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, GOODWILL, USE, OR DATA), EVEN IF A PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. FURTHER, NEITHER WE NOR ANY OF OUR AFFILIATES OR LICENSORS WILL BE RESPONSIBLE FOR ANY COMPENSATION, REIMBURSEMENT, OR DAMAGES ARISING IN CONNECTION WITH: (A) YOUR INABILITY TO USE THE SERVICES, INCLUDING AS A RESULT OF ANY [...] (II) OUR DISCONTINUATION OF ANY OR ALL OF THE SERVICE OFFERINGS
Of course this will never happen. Just keep believing it...

If the team building your deployment workflow decides to quit

All at once.
Yes, you have made your developers' lives easier: they just commit code to the repository, click one button, and magic happens. Congratulations, you have just replaced your former operations team with the ecosystem team. You can proudly use the term DevOps. The end result is mostly the same: if they quit all at once, or your upper management decides they are doing nothing (because everything works and no shiny features are shown) and are therefore just a money sink, you have at best a month to replace them and get the new people up to speed. 90% of your developers won't have any idea what's going on behind the scenes. I once had a situation where a developer asked what a mountpoint is, and it wasn't a junior dev... So good luck.

Your external monitoring tools shut down

There's also a pretty good chance the latest and greatest monitoring company out there with the shiny tools will go out of business. Or be bought by a bigger corporation and subsequently shut down within a year. Maybe you will get a month's notice, six months if you were really lucky. Then you have to find a new company and adapt the tools, checks, etc. - it takes a lot of time. If you were using on-site monitoring tools, even if the product is no longer made you can usually keep it running until you find a replacement - no deadlines here (short of having time-based licenses; those are bad for you, although I have to admit they are way cheaper).
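The basics of such a check are not magic, which is part of the point: a self-hosted probe like the sketch below keeps running no matter what happens to an external vendor. This is a minimal illustration, not any particular product's check format; the URL and output convention are placeholders I made up.

```shell
# Hedged sketch: a minimal self-hosted HTTP health check.
# check_http prints OK or CRITICAL (Nagios-style wording, purely
# illustrative) and returns a non-zero status on failure.
check_http() {
    url=$1
    # -f: fail on HTTP error codes; -sS: quiet but show errors;
    # -m 5: give up after 5 seconds
    if curl -fsS -m 5 -o /dev/null "$url"; then
        echo "OK $url"
    else
        echo "CRITICAL $url"
        return 2
    fi
}

# typical use, e.g. from cron (hypothetical URL):
# check_http https://app.example.com/health
```

Wire a handful of these into cron and a mail alias and you have a crude but vendor-proof fallback while you shop for a replacement.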

Your external repository provider dies

This may not seem like that big of an issue: modern tools keep the full repository history on every developer's machine. So what's the catch? Your deployment tools rely on automated integration between that repository, test runners, clouds, etc. It may not be easy to replace them quickly enough to deploy critical patches, vulnerability fixes and the like to your environment. The problem is similar to the deployment-workflow one: no one in your company really knows what's going on under the hood of these systems well enough to reimplement a similar flow. But if you're lucky you can survive this with some loss...
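At minimum, the history itself is cheap to protect. A sketch of the idea, assuming git (the remote URL and paths are placeholders): keep a bare mirror on hardware you control and refresh it from cron, so a provider shutdown can't take your repository with it. The integrations are harder to replace, but this buys you the raw material.

```shell
# Hedged sketch: maintain a full bare mirror of a hosted repository.
# mirror_repo clones on the first run and fetches updates afterwards.
mirror_repo() {
    src=$1; dest=$2
    if [ ! -d "$dest" ]; then
        # first run: --mirror copies all refs into a bare repository
        git clone --quiet --mirror "$src" "$dest"
    else
        # later runs (e.g. nightly cron): fetch new refs, drop deleted ones
        git -C "$dest" remote update --prune
    fi
}

# typical use (hypothetical URL and path):
# mirror_repo https://git.example.com/team/app.git /srv/mirrors/app.git
```

Note this only covers the repository contents; issues, pull requests and CI configuration living in the provider's database need their own export strategy.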