Application migration and modernisation
Application migration is about moving IT workloads from on-premises infrastructure to the cloud; modernisation is about using cloud-native capabilities to optimise the application.
Overview
The 7 Rs of cloud migration
One of the early stages of cloud migration is an assessment of an organisation’s IT application portfolio. The outcome is a classification of each application according to seven common migration strategies (7 Rs) for moving applications to the AWS Cloud. These strategies are: refactor, re-platform, re-purchase, re-host, relocate, retain, and retire. When choosing a migration strategy, it is important to weigh the level of effort required to implement each one against the potential benefits. Many organisations take a progressive approach, choosing to take some of the benefits of relocation or re-hosting in the short term and planning refactoring as a mid- to long-term initiative.
Migration from a 7R perspective
Re-host, re-platform, re-purchase, and relocate are considered to be “migrations”, as they are ways to move the application without substantial re-engineering.
- re-host (lift and shift) – Move an application to the cloud without making any changes to take advantage of cloud capabilities. Typically, this involves using IaaS to create “like for like” servers to match the legacy architecture.
- re-platform (lift and reshape) – Move an application to the cloud and introduce some level of optimisation to take advantage of cloud capabilities. Example: Migrate your on-premises Oracle database to Amazon Relational Database Service (Amazon RDS) for Oracle in the AWS Cloud.
- re-purchase (drop and shop) – Switch to a different product, typically by moving from a traditional license to a SaaS model.
- relocate (hypervisor-level lift and shift) – Move infrastructure to the cloud without purchasing new hardware, rewriting applications, or modifying your existing operations. This migration scenario is specific to VMware Cloud, which supports virtual machine (VM) compatibility and workload portability between your on-premises environment and the cloud. You can continue to use VMware Cloud Foundation technologies from your on-premises data centres when you migrate your infrastructure to VMware Cloud.

The remaining two strategies involve no migration at all: retain means keeping an application where it is for now, and retire means decommissioning it altogether.
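As an illustration, the outcome of a portfolio assessment can be recorded as a simple mapping from application to strategy, then grouped to plan migration waves. This is a minimal sketch; the application names and their classifications are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical outcome of a portfolio assessment: each application is
# classified against one of the 7 Rs.
portfolio = {
    "payroll": "re-purchase",       # move to a SaaS product
    "booking-system": "re-host",    # lift and shift to IaaS
    "reporting-db": "re-platform",  # e.g. move to a managed database
    "patient-portal": "refactor",   # re-architect as cloud native
    "legacy-fax": "retire",         # decommission
    "mainframe-batch": "retain",    # keep as-is, revisit later
}

# Group applications by strategy, so short-term moves (re-host,
# relocate) can be planned separately from longer-term refactoring.
waves = defaultdict(list)
for app, strategy in portfolio.items():
    waves[strategy].append(app)

for strategy in ("re-host", "re-platform", "re-purchase", "relocate",
                 "refactor", "retain", "retire"):
    print(f"{strategy}: {waves.get(strategy, [])}")
```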
Modernisation from a 7R perspective
Refactor, also called re-engineer or re-architect, entails moving the application and modifying its architecture to take advantage of cloud-native features to improve agility, performance, and scalability. Refactoring usually involves several activities, ranging from containerisation, to using APIs and microservices, to going fully cloud native and serverless.
The definition of modernisation differs from organisation to organisation. The typical application modernisation project seeks to reduce the technical debt of the legacy system: legacy infrastructure, legacy code, legacy integrations, legacy ways of working and legacy value streams.
Modernisation versus new build
Modernisation candidates most often have a complex legacy estate and architecture to consider, compared with many new builds. Many years of development effort and complex integrations need to be understood and addressed as part of the modernisation effort.
By definition, a modernisation project doesn’t start from a blank piece of paper; rather, it carries a significant reverse-engineering effort to understand what is already there and how it can be improved.
Cloud platforms host a number of transformative technologies which can be applied to modernise applications. What follows is a description of some of the most common tools and approaches, together with links to further reading on each topic.
Cloud native architecture
Cloud native technologies empower organisations to build and run scalable applications in modern, dynamic IT environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
Typically, cloud native applications are architected as a set of microservices that run in containers, orchestrated in Kubernetes and managed and deployed using DevOps automation workflows.
Microservices and APIs
A microservices architecture, also simply known as “microservices”, is an approach to building an application as a series of independently deployable services that are decentralised and autonomously developed. Each service has its own business logic and data store, and serves a specific goal. Microservices are loosely coupled, independently deployable, and easily maintainable. While a monolithic application is built as a single, indivisible unit, microservices break that unit down into a collection of independent units that contribute to the larger whole.
Microservices decouple major business, domain-specific concerns into separate, independent code bases. Microservices don’t reduce complexity, but they make any complexity visible and more manageable by separating tasks into smaller processes. In the context of application modernisation, this approach allows product teams to replace “homegrown” functions with off-the-shelf cloud-native functions, building on the innovative capabilities of the cloud to deliver applications which are scalable, efficient, and cost-effective.
Microservices are presented to consumers via an application programming interface (API). APIs act as the access point for applications to reach data, business logic, or functionality in your backend services. APIs are typically made available for use via an API gateway.
An API gateway is a centralised, managed entry point for services. The gateway handles the tasks involved in accepting and processing concurrent API calls, including traffic management, authorisation and access control, throttling, monitoring, and API version management.
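To make the pattern concrete, here is a minimal sketch of two independent services, each owning its own data, sitting behind a gateway that handles routing and simple throttling. All service names, routes, and data are hypothetical, and a real deployment would use a managed product such as Amazon API Gateway rather than hand-rolled code.

```python
import time

# Two hypothetical microservices, each with its own private data store
# and a narrow, independent interface.
class PatientService:
    def __init__(self):
        self._store = {"p1": {"name": "Alice"}}

    def get(self, patient_id):
        return self._store.get(patient_id)

class AppointmentService:
    def __init__(self):
        self._store = {"a1": {"patient": "p1", "time": "09:00"}}

    def get(self, appointment_id):
        return self._store.get(appointment_id)

class ApiGateway:
    """Single entry point: routes requests to the right backend service
    and applies a crude fixed-window throttle (max `limit` calls per
    second per client)."""

    def __init__(self, limit=5):
        self.routes = {
            "/patients": PatientService(),
            "/appointments": AppointmentService(),
        }
        self.limit = limit
        self._calls = {}  # client_id -> (window_start, call_count)

    def _throttled(self, client_id):
        now = int(time.time())
        window, count = self._calls.get(client_id, (now, 0))
        if window != now:          # new one-second window: reset count
            window, count = now, 0
        self._calls[client_id] = (window, count + 1)
        return count + 1 > self.limit

    def handle(self, client_id, path, resource_id):
        if self._throttled(client_id):
            return {"status": 429, "body": "Too Many Requests"}
        service = self.routes.get(path)
        if service is None:
            return {"status": 404, "body": "Not Found"}
        result = service.get(resource_id)
        if result is None:
            return {"status": 404, "body": "Not Found"}
        return {"status": 200, "body": result}

gateway = ApiGateway()
print(gateway.handle("client-a", "/patients", "p1"))
# {'status': 200, 'body': {'name': 'Alice'}}
```

Because each service hides its data behind its own interface, either one could be redeployed, rewritten in another language, or replaced by an off-the-shelf cloud service without touching the other.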
Containerisation
A container is simply the packaging of an application and all its dependencies, which allows it to be deployed easily and consistently. Because containers don’t have the overhead of their own operating system, they are smaller and more lightweight than traditional virtual machines. They can spin up and down more quickly, making them a perfect match for the smaller services found within microservices architectures.
Containers lend themselves very well to microservices. When microservices are run in separate containers, they can be deployed independently and even in different languages. Because containers are portable and can operate in isolation from one another, it is very easy to create a microservices architecture using containers as well as move them from one environment to another or even to another public cloud if you need to.
Kubernetes is a popular open-source platform that orchestrates container runtimes across a cluster of networked resources. Kubernetes bundles a set of containers into a group (a pod) that it manages on the same machine, to reduce network overhead and increase resource usage efficiency. An example pod might contain an app server, a Redis cache, and a SQL database.
Kubernetes is particularly useful for DevOps teams since it offers service discovery, load balancing within the cluster, automated rollouts and rollbacks, self-healing of containers that fail, and configuration management. Plus, Kubernetes is a critical tool for building robust DevOps CI/CD pipelines.
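A Kubernetes workload is described declaratively, usually in YAML. As an illustrative sketch, the same structure is shown below as a Python dict, as it might be submitted with the official Kubernetes Python client; the service name, image, and labels are hypothetical placeholders.

```python
# A declarative Kubernetes Deployment, expressed as a Python dict.
# The "orders" service and its image name are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,  # Kubernetes self-heals back to 3 pods on failure
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {
                "containers": [
                    {
                        "name": "orders",
                        "image": "example.registry/orders:1.0",
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# With a cluster available, it could be applied with the official client:
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.AppsV1Api().create_namespaced_deployment("default", deployment)
print(deployment["kind"], deployment["spec"]["replicas"])
```

The declarative style is what enables automated rollouts, rollbacks, and self-healing: you state the desired end state, and Kubernetes continuously reconciles the cluster towards it.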
NHS England has developed the Texas CPaaS using Kubernetes. Texas CPaaS provides an accelerator for service teams who wish to move their application to the cloud, by providing infrastructure and a full suite of functions to run, develop, and support services in a secure, compliant, resilient, and cost-effective way. Learn more about Texas and how to access services.
Serverless
Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of its customers; it is effectively a form of utility computing. It is also referred to as Function as a Service (FaaS).
Serverless functions are event driven, invoked on request via a published API. When the code is called, the cloud service provider allocates resources and charges only for the compute time used by that execution, rather than a flat monthly fee for maintaining a physical or virtual server. For example, AWS Lambda bills duration per millisecond of execution time. When a function is not in active use, no resources are allocated to it.
Serverless can be used in conjunction with code deployed in traditional styles – monolithic or microservices. This allows developers to offload specific compute functions from their legacy applications – to enhance functionality, improve efficiency or ensure scalability.
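The event-driven model can be sketched with a minimal AWS Lambda-style handler in Python. The event shape below is hypothetical; a real function would be wired to a trigger such as an API gateway route, and the provider would invoke the handler on each request.

```python
import json

def handler(event, context):
    """Invoked on demand; the provider allocates resources only for the
    duration of this call and bills for the execution time used."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a sample event (the context argument is unused).
print(handler({"queryStringParameters": {"name": "NHS"}}, None))
```

Because the function holds no state between invocations, the provider can run as many copies in parallel as incoming events demand, which is what gives serverless its near-instant scalability.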
A case in point is the NHS Digital COVID-19 application, which serverless technology allowed to be developed very quickly and to scale massively.
Contact us
Contact us by emailing [email protected].
If you need further support as an NHS organisation planning or carrying out a migration to cloud services, contact us using our feedback form for an introductory 30-minute conversation with a member of the team.
Last edited: 16 January 2025 11:17 am