
May 23

Change in IT is Inevitable, Part 3

Hopefully you can now appreciate why Amazon Web Services, Google Cloud Platform, Microsoft Azure, and other cloud vendors are doing so well. They have developed services that are quick and easy to consume. Customers don't have to wait on IT approvals, hardware procurement, or any other IT processes that slow down requests for resources. This is something we need to think about as we start to map out what makes sense to deliver internally versus externally.

Why does it take traditional IT so long to deliver services? For years IT has spent time developing processes to create stability, manage capacity, and improve productivity. Those same processes have created silos and slowed delivery times. I have heard IT teams acknowledge that it can take 120-plus days to get a server with the application installed. Ask yourself: what are you doing to be part of the solution and not the problem?

Think about the lifecycle of these systems and the process to deliver them. In many cases we already have these documented and just need to think about how we streamline and automate the delivery. Look at how many templates are in your environment: can they be consolidated down to an easy-to-manage standard? Many teams I have talked to feel that 80% of their environments could be blueprinted as part of a standardization effort, and the rest would be snowflakes. Essentially, create blueprints for the most commonly requested infrastructure components (a base operating system, IIS or Apache, SQL Server or Oracle, etc.); this creates a base on which other stacks can be built, and from there you can offer different sizes and delivery locations.
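To make the standardization idea concrete, here is a minimal sketch of how a blueprint catalog might be generated from a small set of standard building blocks. The component names, sizes, and the Windows-only rule for IIS are illustrative assumptions, not taken from any specific tool.

```python
# Hypothetical sketch: composing a service catalog from standard building
# blocks (base OS, middleware, database) plus sizes and delivery locations.
from itertools import product

base_os = ["rhel7", "windows2016"]          # standardized OS templates
middleware = ["none", "iis", "apache"]      # web/middleware layer
database = ["none", "sql_server", "oracle"]  # database layer
sizes = ["small", "medium", "large"]        # t-shirt sizing
locations = ["on_prem", "cloud"]            # delivery location

def build_catalog():
    """Enumerate valid blueprint combinations."""
    catalog = []
    for os_, mw, db in product(base_os, middleware, database):
        # Illustrative constraint: IIS only runs on Windows,
        # so filter out invalid stacks up front.
        if mw == "iis" and os_ != "windows2016":
            continue
        for size, loc in product(sizes, locations):
            catalog.append({"os": os_, "middleware": mw, "database": db,
                            "size": size, "location": loc})
    return catalog

catalog = build_catalog()
print(len(catalog))  # -> 90 standardized offerings from a handful of parts
```

The point is that a handful of standardized parts yields a large, consistent catalog; the 20% "snowflakes" stay outside it and are handled as exceptions.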

As we start to integrate these solutions across different platforms, a key item to consider is the method used to connect them. If you are not familiar with REST APIs, you may want to add them to your portfolio. REST provides a fundamental building block for many solutions. Check your existing solutions to validate that they provide a common method of integration.
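As a simple illustration of what a REST integration looks like, here is a sketch that composes (but does not send) a JSON-over-HTTP provisioning request using only the Python standard library. The endpoint path and payload fields are assumptions for illustration, not a real vendor API.

```python
# Hypothetical sketch: building a REST call to a provisioning API.
# The URL path and payload fields below are made up for illustration.
import json
import urllib.request

def build_provision_request(base_url, blueprint, size, location):
    """Build (but do not send) a POST request to create a server."""
    payload = {"blueprint": blueprint, "size": size, "location": location}
    return urllib.request.Request(
        url=base_url + "/api/v1/servers",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request("https://automation.example.com",
                              "rhel7-apache", "medium", "on_prem")
print(req.method, req.full_url)
```

Because nearly every platform speaks HTTP and JSON, this same pattern can glue together tools for Windows, Linux, middleware, and cloud providers without a product-specific agent on each side.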

Additionally, look at what solutions you are already utilizing. I see many teams use one product for Windows, another for Linux, a different one for middleware, plus scripts, manual installation, and the list goes on. A great place to start is an inventory of the tools you have; then develop the requirements to fill in the gaps. Utilizing multiple tools may add value but increases complexity, so validate that there is value to support your decision. Rationalize tools and processes as you move to a more automated approach, because if you automate bad processes you may not realize the value. This is a great whiteboarding exercise; just make sure to include the appropriate team members.

Some may argue: why bother building internally going forward? Move to the cloud; it solves everything. Maybe, but in many cases it can create as many issues as it solves. Think about how many years it took to perfect change management, asset tracking, procurement, budgeting, and so on, let alone the technical aspects of operationalizing support, disaster recovery, and other things we take for granted today in the datacenter.

I will not tell you to move to the cloud or not, because at the end of the day every solution is different and you need to look at the requirements. There will be services that different cloud providers excel at, and it may be cost effective to run those workloads there. For instance, I hear about businesses running big data in the cloud because storage is cheaper there and the analytics can be run at off-peak times, which in some cases may be valuable to the business and reduce costs. One thing we should look at is the tipping point from a cost perspective: when and where are these workloads cost effective?
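The tipping point can be modeled crudely: on-prem typically carries a fixed monthly overhead (facilities, staff, amortized hardware) plus a per-TB cost, while cloud storage is mostly pay-as-you-go. Below is a sketch of that break-even math; every dollar figure is a made-up placeholder, so substitute your own numbers.

```python
# Hypothetical sketch: break-even capacity between on-prem and cloud storage.
# All dollar figures are placeholders -- plug in your real costs.

def break_even_tb(fixed_on_prem, per_tb_on_prem, per_tb_cloud):
    """Capacity (TB) above which on-prem becomes cheaper per month.

    On-prem cost  = fixed_on_prem + tb * per_tb_on_prem
    Cloud cost    = tb * per_tb_cloud
    Below the break-even point, cloud's pay-as-you-go pricing wins
    because on-prem pays its fixed overhead regardless of usage.
    """
    if per_tb_cloud <= per_tb_on_prem:
        raise ValueError("cloud is cheaper at every scale in this model")
    return fixed_on_prem / (per_tb_cloud - per_tb_on_prem)

# Placeholder numbers: $2,000/month fixed on-prem overhead,
# $15/TB/month on-prem variable vs $25/TB/month in the cloud.
print(break_even_tb(2000, 15, 25))  # -> 200.0 TB
```

A model this simple ignores egress fees, growth, and burst analytics workloads, but even a rough number like this gives the business a concrete threshold to discuss rather than a gut feeling.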

In many cases you can develop similar offerings that can be delivered internally. Think about a PaaS solution that you develop as an offering; this type of solution could be delivered both internally and externally. What about containers? Are they something that would enable the development team? Again, it depends on the requirements and what you are trying to deliver.

If you already have vSphere in your datacenter, layering a service catalog on top to deliver Infrastructure as a Service could also be an option, not to mention the multi-cloud integrations that could provide a single mechanism to ensure that IT is the broker of all services (including things like SaaS that are very popular). As you mature in delivering these types of multi-cloud services, consider how you develop your requirements: will the services live in one particular cloud or across clouds, and how would you move them in and out, secure them, and manage them? From a PaaS perspective, some solutions are only hosted in the cloud; do you have a requirement to host on premises? What processes have you developed that can be reused to ensure policies are consistent? As other software mechanisms like SDN mature, can they deliver to your external cloud? These are things to think about; it is not a one-size-fits-all solution, but it can be done.

At the end of the day, just ask: what is the value to the business, is it cost effective, does the solution meet the requirements, and does it make sense?
