Cloud Migration Strategies

Let’s look at the technology landscape that is helping enterprises scale, adopt, and modernize as they migrate to Google Cloud. We’ll review migration strategies and the tools that can help you get there.

Cloud Migration Approach

Enterprises often weigh migrations for an extended period of time as they consider moving their workloads to a cloud environment. There is also a difference in approach between small companies and large companies, as they have different needs and migration paths. Let’s dissect this process.

Why Move to the Cloud?

The first question to ask during the cloud migration process is: why? Why are we going down this path? What is the business or technical goal you are looking to accomplish? Here are some common reasons why you might be moving to the cloud. You may want to:

  • Develop faster
  • Decommission workloads
  • Consolidate systems
  • Move VMs to the cloud
  • Modernize by transforming into containers
  • Utilize the efficiencies and scaling the cloud offers

These answers will vary greatly based on the size of your business and your business goals. Once the answer has been framed, you can begin to chart a clearer path from where your company is today to where you want to be.

What Are You Moving to the Cloud?

The next question to ask yourself is what? What do you have right now? For example, you will want to put together:

  • A catalog of all the applications you have
  • A list of the workloads you are thinking about moving
  • Your network and security requirements

This will help you build out your migration strategy. Many businesses think they will need to build a very complicated system diagram showing all the connections taking place, or perhaps schematics showing how all of their applications will work in the cloud. However, for many large organizations that have grown organically, this will not be feasible. As the “what” phase starts, sometimes all that is needed is to sit down with a napkin or a sheet of paper and write a general overview of the apps, workloads, and other assets you are looking to move. Later, when you need to expand this list, you can work with your various groups and lines of business to find the other resources and applications you care about.

What Dependencies Do You Have?

As we expand this further, we need to get more specific about our dependencies and start making lists of things such as:

  • Dependencies on application stacks
  • Databases and message brokers
  • Underlying infrastructure
  • Firewall and security rules
  • Source code repositories

Oftentimes, the “gotchas” happen because your business has grown organically, and business units pop out of the woodwork as the migration is moving along, saying things like, “I’m actually keeping my source code over here and not in the official repository.” Overall, the more comprehensive your evaluation is ahead of time, the fewer headaches you will have during the migration.
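
To make this kind of evaluation concrete, it can help to capture the inventory in one structured place rather than in scattered documents and spreadsheets. The sketch below is one minimal way to do that, not a prescribed tool: the workload names, the dependency fields, and the idea of exporting the catalog as JSON are all illustrative assumptions.

```python
# A minimal sketch of a migration inventory. The workload names, dependency
# fields, and example values below are illustrative assumptions, not a
# required schema.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class Workload:
    name: str                                                 # an app or service to evaluate
    app_stack: List[str] = field(default_factory=list)        # languages, frameworks
    data_stores: List[str] = field(default_factory=list)      # databases, message brokers
    infrastructure: List[str] = field(default_factory=list)   # VMs, load balancers, storage
    firewall_rules: List[str] = field(default_factory=list)   # security dependencies
    source_repo: str = "unknown"                              # where the code actually lives

inventory = [
    Workload(
        name="order-service",                                 # hypothetical workload
        app_stack=["Java 11", "Spring Boot"],
        data_stores=["PostgreSQL", "RabbitMQ"],
        infrastructure=["on-prem VM cluster"],
        firewall_rules=["allow 8443 from internal load balancer"],
        source_repo="official repository",
    ),
    Workload(
        name="legacy-reporting",                              # hypothetical workload
        app_stack=["PHP (LAMP)"],
        data_stores=["MySQL"],
        # source_repo left as "unknown" (exactly the kind of gotcha to surface early)
    ),
]

# Flag workloads whose source code location has not been confirmed yet.
for w in inventory:
    if w.source_repo == "unknown":
        print(f"Follow up with the owners of {w.name}: source repository not confirmed")

# Export the catalog so other groups and lines of business can review and extend it.
print(json.dumps([asdict(w) for w in inventory], indent=2))
```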

Does Moving Everything to the Cloud Always Make Sense?

Sometimes there are cases where moving something to the cloud is not practical or might not be technically feasible in the near term. For example, maybe you have licenses that you can’t move to the cloud, your technology stack may not be virtualizable, you rely on third-party frameworks, or you have mainframes that need to stay independent. In these cases, it is OK to say NO! Rather, you want to focus on what can be moved to the cloud. The last thing you want to do is force something into the cloud that doesn’t belong. Finally, you may also find that there is an interim path for one of your services that might not place it directly in the cloud, but still aligns with your strategy. For example, if you are shutting down a datacenter whose design is too complicated to migrate to the cloud, you could move it to a co-location facility. This would allow you to gain some of the benefits, such as being closer to a cloud entry point, or getting the high throughput or low latency that you were looking for.

Choosing a Migration Path

There are a lot of ways to approach a cloud migration, such as an all-in lift-and-shift, a hybrid approach, or a mix of private and public cloud. The answer will depend on what you are looking to accomplish and where you are coming from. If you are coming from legacy applications and hardware, you will likely have a much different migration path than if you are already cloud-native and just looking to scale. There could also be a scenario where you have an aggressive deadline to shut down a datacenter and do not have time to modernize. In this case, you would likely want to lift-and-shift your datacenter to the cloud and worry about modernization or containerization strategies later on, once the dust has settled.

Application Containerization Strategy

For developers, containers provide a lot of freedom to package an app with all of its dependencies into an easy-to-move unit.

One of the major decisions during a cloud migration is whether to containerize your applications rather than bringing them back up in their own virtual machines. How do we know if an application is a good candidate for containerization? For example, you might have Dev/Test applications, multi-tier stacks, LAMP applications, or Java or web applications running on-premises. How do we know if these are good to containerize? There are a few questions we should ask.

  1. Is the app pre-packaged as a stand-alone binary or JAR file?
    • Stand-alone binaries, such as EXE or JAR files, are easy to containerize. Java and JAR files are especially flexible because the JRE can stay within the container.
  2. Is the platform on which your app is built available in a containerized version or package yet?
  3. Are any of your 3rd party apps available in a container version yet?
  4. Is the app stateless? (The short sketch after this list illustrates the difference.)
  5. Is your application already part of a continuous integration/continuous deployment (CI/CD) pipeline?
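
To make the statelessness question above more concrete, here is a hedged sketch contrasting a handler that keeps state on the container’s local disk with one that hands state off to an external store. The shopping-cart example, the file path, and the use of Redis are purely illustrative assumptions, not part of any particular product.

```python
# Illustrative only: the cart example, file path, and Redis host are assumptions.
import json
import os

def save_cart_stateful(user_id: str, cart: dict) -> None:
    # Writes to the container's local filesystem. If the container is
    # rescheduled or scaled out, this data is lost or inconsistent,
    # a sign the app is not yet a good containerization candidate.
    with open(f"/var/data/carts/{user_id}.json", "w") as f:
        json.dump(cart, f)

def save_cart_stateless(user_id: str, cart: dict) -> None:
    # Hands state to an external store (Redis here, purely as an example),
    # so any replica of the container can serve any request.
    import redis  # requires the redis package and a reachable Redis instance
    r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)
    r.set(f"cart:{user_id}", json.dumps(cart))
```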

This may still leave us questioning monolithic applications (think of a monolithic SAP application), as many enterprises still use these. How might we convert these to a microservice environment that is more compatible with a containerization strategy? It may be possible to slowly break the monolithic application down into its constituent services for a microservice strategy. It is also possible to containerize the entire application as a single application container. This would provide some of the benefits of containerization, such as fault tolerance and portability, without breaking down the monolithic application all at once.
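
One common way to break a monolith down gradually is to put a thin routing layer in front of it and peel off one service at a time. The sketch below is a hypothetical illustration of that idea; the /orders path, the backend URLs, and the use of Flask and the requests library are all assumptions made for the example.

```python
# Hypothetical "peel off one service at a time" facade: paths that have been
# extracted into their own container are routed to the new service, while
# everything else still goes to the monolith.
import os
import requests
from flask import Flask, Response, request

app = Flask(__name__)

MONOLITH_URL = os.environ.get("MONOLITH_URL", "http://monolith.internal:8080")            # placeholder
ORDERS_SERVICE_URL = os.environ.get("ORDERS_SERVICE_URL", "http://orders.internal:8080")  # placeholder

@app.route("/<path:path>", methods=["GET", "POST"])
def route(path):
    # The /orders endpoints have been extracted; all other paths stay on the monolith.
    backend = ORDERS_SERVICE_URL if path.startswith("orders") else MONOLITH_URL
    resp = requests.request(
        method=request.method,
        url=f"{backend}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
    )
    return Response(resp.content, status=resp.status_code)
```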

Next, we want to look at what options are available to containerize your apps. In GCP, there are three main options: Google Kubernetes Engine (GKE), Cloud Run, and Compute Engine (GCE). Although the concepts underlying containers have been around for many years, Docker, Kubernetes, and a collection of related products and best practices have emerged in the last few years. This has enabled many different types of applications to be containerized. The solutions for running containers in Google Cloud Platform vary in how much of the underlying infrastructure needs to be exposed.

Google Kubernetes Engine (GKE)

As the inventor of Kubernetes, Google offers a fully managed Kubernetes service, taking care of scheduling and scaling your containers while monitoring their health and state. Getting your code to production on GKE can be as simple as creating a container deployment, with the cluster being provisioned on the fly. Once running, these GKE clusters are secure by default, highly available, and run on GCP’s high-speed network. They can also be targeted at zonal and regional locations and use specific machine types, with the option of adding GPUs or Tensor Processing Units (TPUs). GKE clusters also provide auto-scaling, auto-repair of failing nodes, and automatic upgrades to the latest Kubernetes version. GKE is also a key player within Anthos, Google Cloud’s enterprise hybrid and multi-cloud platform. Using Anthos, you can even migrate existing VMs directly into containers and move workloads freely between on-prem and GCP.
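
To give a feel for what creating a container deployment looks like once a cluster exists, here is a hedged sketch using the official Kubernetes Python client. It assumes a kubeconfig is already available locally (for example, after running gcloud container clusters get-credentials), and the image name gcr.io/my-project/hello-app:v1 is a placeholder.

```python
# Sketch: create a Deployment on an existing GKE cluster with the official
# Kubernetes Python client. The project, image, and app name are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig written by gcloud

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-app",
                        image="gcr.io/my-project/hello-app:v1",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```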

Cloud Run

It is also possible to shift your focus to building stateless apps, not writing YAML files, and still deliver code packaged in a container. Cloud Run combines the benefits of containers and serverless. With Cloud Run, there is no cluster or infrastructure to provision or manage, and your stateless containers are automatically scaled. Creating a Cloud Run service with your container only requires a few simple fields, such as a name and location, and choosing your authentication method. Cloud Run supports multiple requests per container and works with any language, library, binary, or base Docker image. The result is serverless with pay-for-usage, the ability to scale to zero (a service can be reduced to zero replicas when idle and brought back up when there is a request to serve), and out-of-the-box monitoring, logging, and error reporting. Because Cloud Run uses Knative (offering a serverless abstraction on top of Kubernetes), you can have a dedicated private hosting environment and deploy the same container workload on Cloud Run for Anthos in GCP or on-prem.
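
As a rough sketch of what a Cloud Run-friendly workload looks like, here is a minimal stateless Flask service. Cloud Run passes the port to listen on through the PORT environment variable; the route and the response text are illustrative.

```python
# Minimal stateless service suitable for packaging into a container and
# running on Cloud Run. The route and response text are illustrative.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a stateless container\n"

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```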

Compute Engine (GCE)

It is also possible to use the Compute Engine virtual machine environment to run your containers. This means using your existing workflow and tools without needing to master cloud-native technologies. When you create the GCE virtual machine, there is a container section that allows you to specify the image and other options the container will use. When you get to the boot disk section of setting up the VM, the suggested operating system is Container-Optimized OS, which is optimized for running Docker containers and comes with the Docker runtime preinstalled.

Google Container Registry (GCR)

Where do these container images come from, where do you store them, how are they versioned, and how is access to them restricted? Google Container Registry (GCR) is a container registry running on GCP. It is possible to push, pull, and manage images in GCR from any system, VM instance, or hardware. You then use it to control who can access, view, and download those images. It is also possible to deploy to GKE, Cloud Run, or GCE right from the registry. GCR works with popular continuous delivery systems, such as Cloud Build, Spinnaker, or Jenkins, to automatically build containers on code or tag changes to a repository.
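
As a hedged sketch of the build-and-push side of that workflow, here is an example using the Docker SDK for Python. It assumes a local Docker daemon, a Dockerfile in the current directory, and that the Docker client has already been authenticated to GCR (for example, with gcloud auth configure-docker); the project and image names are placeholders.

```python
# Sketch: build a local image and push it to Google Container Registry.
# The project and image names are placeholders.
import docker

client = docker.from_env()
image_tag = "gcr.io/my-project/hello-app:v1"  # placeholder project/image

# Build from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=image_tag)
for entry in build_logs:
    if "stream" in entry:
        print(entry["stream"], end="")

# Push to GCR; from there the image can be deployed to GKE, Cloud Run, or GCE.
for line in client.images.push("gcr.io/my-project/hello-app", tag="v1", stream=True, decode=True):
    print(line)
```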

System Containers

There are situations where you may want to go all-in on application containers, but due to technical requirements you may need to explore system containers. System containers behave much like virtual machines: any container that runs an OS is a system container. Unlike virtual machines, however, they do not use a hypervisor; they share the kernel of the host operating system and provide user-space isolation. They also allow you to install different libraries, languages, and databases, and services running in each container use only the resources assigned to that container.

Migrate VMs to Compute Engine

After we have an application migration strategy defined, it is time to start thinking about a machine migration path. Migrate for Compute Engine allows one or many workloads to be moved to Compute Engine (GCE) in a unified way. Migrate also provides cloud testing and validation, including a plug-in that can find workloads and move them over. There is also the possibility of a stateful rollback, so if at any point you feel like you need to get out of the migration, you can roll back to the on-prem environment. This can give you time to pause and see what is going on with the migration. If you have a use case where you need to maintain a VMware-based control plane, there is also support for moving VMware and vSphere workloads to GCP.

Unify On-Prem Applications

Some applications will need to stay on-premises, yet you may still want to take advantage of cloud-native capabilities. Anthos can be used to manage your on-premises and your hybrid-cloud or multi-cloud environments in one place. Anthos uses a tool called GKE On-Prem, which allows you to implement a containerization strategy in your on-premises environments. For example, you might run Cisco HyperFlex and use Anthos to develop a strategy for on-premises and cloud together. This can be used to simplify monitoring, logging, and configuration management while still getting access to the benefits of being in the cloud. It’s as if Cisco HyperFlex were cloud-native, but the underlying infrastructure is still on-premises.

Where to Start with Google Cloud Migration?

  1. Start by looking at the migration guides found at https://cloud.google.com. These are great getting-started guides that can help you think through what is required to migrate your business.
  2. Google also offers professional migration services, known internally as PSO (Professional Services Organization). For example, PSO can run workshops to help shape your migration strategy.

