For most organizations, the question is no longer whether to move to the cloud. Migrating workloads to the cloud has become an inevitable requirement for the vast majority of companies regardless of size, and if your organization isn't doing so already, it is simply a question of when and how.
The agility, redundancy and elasticity that cloud platforms offer for scaling compute resources present a compelling case for moving to the cloud sooner rather than later, not to mention factors such as the reduced capital expenses of maintaining a robust on-premises infrastructure. The finance sector is usually a little behind in embracing major changes in IT operations because of its strict regulatory compliance requirements and the complexity of application portfolios built up through mergers and acquisitions. These constraints are especially common among insurance companies, yet several major insurers have made significant investments to modernize their traditional applications and migrate them to the cloud for efficiency, agility and cost reasons.
To support a business that constantly strives to stay competitive and to reduce time to market, most application development efforts are planned in an Agile fashion, with short sprints that produce usable deliverables within a few weeks. When development teams go Agile, all associated services must operate in a similar manner. Without agility in infrastructure provisioning and other support services, it is simply impossible for development teams to produce deliverables at high velocity. A common complaint from application development teams concerns delays by infrastructure teams in provisioning servers: even today, provisioning a virtual machine (VM) takes more than three weeks in many companies. This is due to factors such as work backlogs, complex approval processes, compliance requirements, lack of automation and other organization-specific issues. IaaS and PaaS offerings from cloud leaders such as Microsoft Azure and AWS address these issues to a great extent. Even companies that are not ready to migrate their existing applications to the cloud should seriously consider IaaS and PaaS from well-established providers for new applications being developed.
Several factors can influence IT leaders' decisions on their cloud strategy, and some of the areas that certainly cannot be overlooked are discussed here. The purpose of this article is to provide a general high-level overview, especially for companies that have hesitated to migrate their workloads to the cloud. It should be noted that we are barely scratching the surface in this discussion.
Complexities in the Application Stack and Infrastructure
It is not uncommon for organizations to be overwhelmed by complexities and issues in their current infrastructure setup and to get bogged down in day-to-day operational challenges. These ongoing issues can create a false impression of the actual effort involved in migrating to the cloud, to the point where one may wonder whether it is an achievable target state at all. It is important to map out the current infrastructure that supports the business applications and have a clear view of the complexities, dependencies and licenses in play. The application servers, database servers, storage arrays, enterprise messaging bus, document management systems and other key components that work together to power your key business applications all need to be taken into consideration. This will not only help identify the applications best suited for the cloud, but will also help you pick the right cloud provider for your needs. Note that IaaS and PaaS are converging to the point where it is becoming difficult to separate one from the other; most leading providers offer these services as a package.
It is better to categorize your applications based on their level of complexity and shape your cloud strategy accordingly. Standalone applications that require only a web server, application server and database server are ideal candidates for the first phase of your migration efforts. With the right architecture in the cloud, such applications gain better availability, load balancing and performance, with less maintenance overhead.
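The categorization above can be sketched as a simple scoring exercise. This is a minimal, illustrative sketch only; the scoring criteria, thresholds and application names are hypothetical, and a real assessment would weigh many more factors.

```python
# Hypothetical sketch: bucketing an application portfolio into migration
# waves by counting factors that complicate a cloud migration.

def complexity_score(app):
    """Count the complicating factors for one application."""
    return (app["external_dependencies"]    # messaging bus, document mgmt, etc.
            + app["licensed_components"]    # licenses that may not transfer
            + app["compliance_constraints"])  # data-residency or regulatory rules

def plan_waves(portfolio):
    """Standalone apps (score 0) go in the first migration wave."""
    waves = {"wave_1": [], "wave_2": [], "defer": []}
    for app in portfolio:
        score = complexity_score(app)
        if score == 0:
            waves["wave_1"].append(app["name"])
        elif score <= 2:
            waves["wave_2"].append(app["name"])
        else:
            waves["defer"].append(app["name"])
    return waves

# Hypothetical portfolio entries for illustration.
portfolio = [
    {"name": "marketing-site", "external_dependencies": 0,
     "licensed_components": 0, "compliance_constraints": 0},
    {"name": "claims-portal", "external_dependencies": 1,
     "licensed_components": 1, "compliance_constraints": 1},
]
print(plan_waves(portfolio))
```

The point is not the scoring function itself but the discipline: making the complexity of each application explicit before deciding its migration wave.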
Amazon Elastic Compute Cloud (EC2) offers 99.99 percent availability, and its Auto Scaling feature grows or shrinks the fleet automatically based on demand to maximize performance and minimize cost. Similarly, Microsoft Azure offers Virtual Machine Scale Sets that can be configured to scale your applications automatically. If the requirement is to develop brand-new applications for the cloud, the options are endless; for example, .NET-based web applications hosted on Azure can be built to run as Docker containers.
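The intuition behind such auto-scaling can be illustrated with target-tracking logic: grow or shrink the fleet so that average utilization lands near a target. This is a hypothetical sketch of the idea, not the actual algorithm used by EC2 Auto Scaling or VM Scale Sets; the target, bounds and metrics are made-up values.

```python
# Illustrative target-tracking scaling decision: size the fleet so the
# observed average CPU would land near the target, within min/max bounds.
import math

def desired_capacity(current_instances, avg_cpu_percent,
                     target_cpu=50.0, min_size=2, max_size=10):
    """Return the fleet size implied by the current load."""
    if current_instances == 0:
        return min_size
    # Capacity needed so that the same total load averages out to the target.
    needed = math.ceil(current_instances * avg_cpu_percent / target_cpu)
    return max(min_size, min(max_size, needed))

print(desired_capacity(4, 80))  # load above target -> scale out
print(desired_capacity(4, 20))  # load below target -> scale in
```

In practice, the cloud provider evaluates rules like this continuously against metrics you choose, so the fleet tracks demand without manual intervention.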
Containers and the management of clusters of container nodes have come a long way since their inception. Docker Swarm and Kubernetes have drastically simplified the management, load balancing and scaling of containerized applications. Containers are not only for new applications; they are also a great solution for modernizing legacy applications and migrating them to the cloud.
Identity and access control used to be prevalent challenges, but cloud providers have greatly simplified identity management services such as application authentication, user management and single sign-on, providing rich features to support organizations' information security and compliance requirements. Microsoft Azure offers Azure Active Directory, which supports several protocols for authenticating users, such as WS-Federation, SAML-P and OpenID Connect, as well as multi-factor authentication. For new applications developed in Visual Studio 2017, it is extremely simple to integrate authentication and authorization workflows with Azure AD using these protocols.
Databases play a vital role in the decision-making process when it comes to migrating to the cloud. Senior IT managers are usually most concerned about the security of data, while technical personnel worry about storage, performance, recoverability and availability. Application developers and DBAs end up spending a great deal of time fixing code or application configuration even when databases are simply migrated from one server to another within the same data center. Cloud solutions address such concerns. Microsoft Azure, for instance, lets you migrate your on-premises SQL Server databases to Azure SQL Database Managed Instance, which provides native virtual network (VNet) support; you don't have to change your applications as a result of the migration. Managed Instance is quite promising and is certainly a great solution for migrating large numbers of on-premises SQL Server databases to Azure. Similarly, AWS offers managed database services for both relational (Amazon RDS) and NoSQL (Amazon DynamoDB) workloads. Amazon EC2 integrates with Amazon RDS and Amazon VPC to provide a complete, secure solution for compute, query processing and storage, covering a wide range of high-performance web applications, database servers and batch-processing needs.
Cloud providers offer data centers in several regions around the world. It requires a major investment for any organization, both financially and operationally, to match the redundancy and recoverability provided by major cloud providers. Even organizations that have a highly virtualized infrastructure setup have physical infrastructure for some of their mission-critical applications. The reasons for having such a setup could be argued, but the costs associated with setting up a disaster recovery environment for such infrastructure would be significant. To ensure comparable performance and capacity, the disaster recovery environment is typically a replica of the production environment that is underutilized most of the time, consuming significant investment.
Planning a disaster recovery site involves several factors: compute capacity and scalability (servers and storage), networking (bandwidth, load balancers, switches, routers), identity management (directory services, authentication, authorization), the facility hosting the DR site (location, floor space, power, air conditioning) and operational aspects (personnel, monitoring, alerting, break-fix and remote-hands support). Cloud providers let you choose the location of your DR site anywhere in the world. Depending on your contract with the provider, you can either quickly deploy compute resources to your cloud DR site when needed or use automated processes to switch over to an already running replica of your application environment.
AWS offers numerous features to address each of the above-mentioned aspects of setting up a disaster recovery site. Amazon Machine Images (AMIs), for example, let you preconfigure images with the operating system and your application stack. You can also configure your AMIs so that the recovery process launches these images automatically in the event of a disaster, saving a significant amount of time.
Microsoft also provides highly dependable DR alternatives on Azure. You can use Azure as your secondary site for DR purposes, or eliminate the need for your own data centers entirely by picking different Azure regions as primary and secondary sites. Azure supports both active-passive and active-active topologies. In the active-passive topology, all user traffic flows to the primary region while the secondary region stands ready to take over at any point, with the database running in both regions and a synchronization mechanism keeping them in step. In the active-active topology, cloud services and databases run in both the primary and secondary regions, and user traffic is distributed across both. In the event of a failure in the primary region, all user traffic is directed to the secondary simply by pointing DNS away from the primary, and Azure's auto-scaling features allow compute resources in the secondary region to scale up as needed.
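The DNS-based failover described above boils down to a health check driving an endpoint decision. This is a minimal sketch of that logic under hypothetical assumptions; the region endpoints and health-check interface are stand-ins for what a service such as Azure Traffic Manager or Amazon Route 53 would provide.

```python
# Illustrative active-passive failover routing: all traffic goes to the
# primary region unless its health check fails, in which case DNS resolves
# to the secondary. Endpoint names are hypothetical.

PRIMARY = "app-eastus.example.com"
SECONDARY = "app-westus.example.com"

def resolve_endpoint(primary_healthy):
    """Return the endpoint DNS should hand out to users right now."""
    return PRIMARY if primary_healthy else SECONDARY

print(resolve_endpoint(True))   # normal operation -> primary region
print(resolve_endpoint(False))  # primary outage  -> secondary region
```

Managed DNS traffic-routing services implement exactly this pattern for you, running the health checks and updating resolution automatically, which is why failover on the cloud is largely a configuration exercise rather than a build.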
Any discussion of migrating to the cloud would be incomplete without highlighting the significance of a solid DevOps practice to complement the cloud strategy. From a technical standpoint, companies that settled on Linux environments used to have a slight edge over those that went the Microsoft route, especially regarding open source solutions and technologies that support DevOps and cloud initiatives. That edge is highly arguable now, as Microsoft has not only embraced several major open source tools but also enabled better integration of leading open source products such as Docker, Jenkins and Puppet with ALM tools such as Microsoft TFS 2017.
Depending on an organization's goals for its cloud strategy, the required level of DevOps maturity needs to be analyzed. DevOps can be viewed from both infrastructure and application development perspectives. If your goal is to use cloud services only for IaaS, it is extremely valuable to develop strong, infrastructure-focused DevOps capabilities, practices, processes, tools and automation. Through proper scripting and integration with IT service management (ITSM) tools such as ServiceNow, the entire process, from requesting infrastructure through having it provisioned with all patches and hardening applied, can be fully automated using tools such as Chef or scripting via PowerShell. Features such as metered billing and quotas, already offered by cloud providers, can be integrated with business units' budgets, approvals and other processes to establish a well-managed, fully automated infrastructure provisioning process.
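The end-to-end provisioning flow described above can be modeled as a pipeline of gated stages. This is an illustrative sketch only; the stage names and the hand-offs to tools like ServiceNow, Chef or PowerShell DSC are hypothetical placeholders for real integrations.

```python
# Hypothetical sketch of an automated provisioning workflow: each stage
# runs in order, and a failure stops the pipeline with a record of where.

def run_pipeline(request, stages):
    """Run each (name, check) stage; stop and report on the first failure."""
    log = []
    for name, stage in stages:
        ok = stage(request)
        log.append((name, "ok" if ok else "failed"))
        if not ok:
            break
    return log

stages = [
    ("approve",   lambda r: r["budget_approved"]),  # e.g. ITSM approval step
    ("provision", lambda r: True),  # e.g. call the cloud provider's API
    ("patch",     lambda r: True),  # e.g. hand off to Chef
    ("harden",    lambda r: True),  # e.g. apply a PowerShell DSC baseline
]
print(run_pipeline({"budget_approved": True}, stages))
```

The value of wiring the stages together this way is auditability: every request leaves a trail showing which gate it passed or failed, which is exactly what compliance reviews ask for.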
Organizations considering the cloud for application development are at a definite advantage compared to those developing on-premises. Microsoft, for example, offers IaaS at a highly competitive price and provides Visual Studio Team Services (VSTS), the cloud version of Team Foundation Server (TFS), on Azure. The PaaS services offered by Azure are also quite extensive and integrate with a variety of open source tools. With this combination, developers have a complete end-to-end solution for all their requirements without losing time waiting for environments to be set up. DevOps can play a significant role here. With a fully integrated continuous integration (CI) framework, developers commit code to a managed Git repository in the cloud, and VSTS kicks off an automated pipeline that handles the build, automated testing, code quality control, building container images, spinning up the infrastructure for an environment, and deploying code or containers to that environment, all under a fully integrated change control process with quality gates. Developers can focus on writing code and enjoy the freedom of deploying to a transient environment on the fly to test their changes before they go through other phases of testing. In highly competitive business environments, this makes multiple production deployments in a single day possible if need be.
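The quality gates mentioned above are just explicit promotion criteria evaluated against each build. This is a minimal sketch, assuming hypothetical build metadata and threshold values; real pipelines (VSTS/Azure Pipelines, Jenkins) express the same idea in their own configuration.

```python
# Illustrative CI quality gate: a build is promoted to deployment only if
# all tests pass and code coverage clears the bar. Thresholds are made up.

def passes_quality_gate(build, min_coverage=80.0):
    """Promote a build only when tests and coverage meet the criteria."""
    return build["tests_failed"] == 0 and build["coverage"] >= min_coverage

good = {"tests_failed": 0, "coverage": 91.5}
bad  = {"tests_failed": 2, "coverage": 91.5}
print(passes_quality_gate(good), passes_quality_gate(bad))
```

Encoding the gate in the pipeline rather than in a manual review is what makes multiple deployments per day safe: every candidate build is held to the same bar automatically.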
Organizations with a high level of maturity in DevOps are the ones most likely to leverage cloud to its fullest potential. Similarly, those that are striving to attain this level of maturity are likely to get there sooner by moving to the cloud. Developing the expertise, building the process framework, and implementing the right DevOps tools will make it easier to solidify your cloud strategy.
Compliance & Security
Several organizations have refrained from migrating to the cloud mainly because of the strict compliance and security requirements mandated by their industries. The companies still hesitating to embrace the cloud are largely those skeptical about the security of their data there. Irrespective of the cloud provider or the strict SLAs and contractual terms you may have in place, you remain accountable for the data you own. It is therefore extremely important to understand the security setup and the features your cloud provider offers to protect your valuable data in the cloud.
As the owner of your organization's data, you need to know where your data will physically reside, what encryption will be applied at rest and in transit, who will have physical access to it, how it will be segregated from other tenants' data in a multi-tenant environment, what recovery processes exist, and which regulatory compliance certifications the provider has obtained, among many other questions. Security experts highly recommend classifying your data by risk level and devising a strategy accordingly; you may decide to host your most critical data on-premises and use the cloud for other types of data.
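The risk-based classification recommended above can be made concrete as a simple ruleset mapping data attributes to a placement decision. This is an illustrative sketch; the tiers, rules and attribute names are hypothetical, and a real policy would be far richer.

```python
# Hypothetical sketch of risk-based data placement: classify each dataset
# into a tier, and let the tier decide whether it may move to the cloud.

PLACEMENT = {
    "high":   "on-premises",
    "medium": "cloud (encrypted, restricted region)",
    "low":    "cloud",
}

def classify(dataset):
    """Derive a risk tier from simple data attributes."""
    if dataset["contains_pii"] and dataset["regulated"]:
        return "high"
    if dataset["contains_pii"] or dataset["regulated"]:
        return "medium"
    return "low"

def placement(dataset):
    return PLACEMENT[classify(dataset)]

print(placement({"contains_pii": True,  "regulated": True}))
print(placement({"contains_pii": False, "regulated": False}))
```

Writing the policy down as rules, even informally, forces the conversation about which attributes actually drive risk before any data leaves the building.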
Microsoft offers a comprehensive checklist and reports to help you assess the risk to your organization in moving to the cloud. Both AWS and Microsoft are certified by almost all major regulatory bodies with regard to data security and compliance. However, as outlined earlier, while your provider can take responsibility for hosting your data in the most secure manner, you as the owner of the data remain accountable for its integrity and security. This is an area that requires thorough analysis before you decide which workloads to migrate to the cloud.
To Make the Long Story Short …
While savings on capital expenses are a given, they are not the prime motivator for most organizations to migrate to the cloud. As the Gartner report "Cloud Computing Primer for 2018" states, the role the cloud plays in this decade goes well beyond setting expectations and providing a platform for better economic models. Digital initiatives such as artificial intelligence (AI), the internet of things (IoT) and other technological innovations and disruptions are fueled by the digital platforms the cloud offers. Agility, scalability and flexibility, meanwhile, are certainly among the key drivers encouraging companies to migrate. Furthermore, reducing time to market is better accomplished by moving development workloads to the cloud: the combination of IaaS, PaaS and a solid DevOps practice to automate deployment of the infrastructure and application stack makes the cloud far superior to running application development on-premises. Companies that haven't invested much in DevOps should seriously consider doing so before it is too late.
Containers, together with the tools and practices for orchestrating, load balancing and scaling microservices, add a completely different dimension to how cloud services can be leveraged. They enable seamless, repeatable, low-defect deployments, and when combined with automation to spin up transient environments, the benefits of running containers on fully automated cloud IaaS are difficult to match in an on-premises setup.
The auto-scaling features offered by cloud providers make the cloud a far superior alternative to hosting internally. The elasticity of the cloud allows compute power, storage, load balancing and several other infrastructure services to scale as needed, without massive hardware investments.
Cloud providers offer solid disaster recovery options and support. Implementing active-passive or active-active primary and secondary sites is far more straightforward on AWS or Azure than on-premises.
Compliance and security are also important factors that can drive cloud strategy. Decision makers must evaluate the risk factors and then decide which types of data can reside in the cloud and which should stay internal.
NetworkComputing provides interesting statistics on public, private and hybrid IaaS adoption in its article "IaaS Cloud Adoption Trends," which draws on research from Interop ITX and InformationWeek on how organizations are moving and managing workloads in the cloud. Per their findings, companies expect to deliver well over half (67 percent) of their IT and business services through the cloud in the next year, and 31 percent of the companies surveyed plan to allocate more than half of their budget to cloud initiatives.
With a clear strategy, budgets, categorization of applications, prioritization and staff with the right expertise, you can be sure that your journey in moving your workloads to the cloud will be a pleasant one.