Optimise how you run legacy workloads with minimal code changes, and new code, using newer Cloud services
Preface to Modernisation in the Cloud
Modernisation is a frequently used term in any technical sphere and it is increasingly applied to a Cloud situation: we want to modernise this into that. Modernisation relates to both where you were and where you want to go; and often it is recognised that even then, that is just a stepping stone to something else.
Modernisation is about exploiting the benefits of Cloud technologies, generally acknowledged to be:
elasticity and the ability to scale,
reliable availability,
operational ease leading to organisational agility,
security, and
cost-effectiveness.
In the first of these three blogs we explored why you should consider building a modern environment first, one which is architected to AWS best practice, before you think about modernising the way you run your workloads.
In this, the second blog, we explore how to start leveraging the opportunity of Cloud, by optimising how to run your legacy workloads with minimal code changes, and new code using newer features available in AWS Cloud.
In the third blog, we will explore the final nirvana: stripping down monolithic applications into pure Cloud-native services, which integrate to provide your users with the functionality they need, and deliver significantly more benefits than legacy technologies.
The Opportunity of Cloud
Cloud technologies empower organisations with:
Elasticity: the ability to respond to spikes in customer demand
Availability: the ability to serve customers’ requests wherever and whenever
Agility: the ability to quickly fix a problem or deploy new functionality that customers want
If you were born in the Cloud, this is your starting point. But if you migrated 'legacy' applications into the Cloud, you need to optimise your applications, and indeed the way you work, for the Cloud in order to gain these benefits.
In this blog post we will cover how to optimise your application workloads, your database workloads, and indeed the way you work, to realise the benefits above.
Application Workloads
If your organisation started life in the Cloud, your application development has the choice of containers, or completely Cloud-native technologies such as a microservices architecture or full serverless. (Refer to our third blog.)
If your journey started with a migration of legacy architecture to the Cloud, then in order to minimise code re-writes you are probably still on virtual machines (VMs). VMs offer some elasticity, but nowhere near the extent that containers do.
A Virtual Machine necessarily contains everything a standalone server has except the hardware: the operating system, network elements, interfaces, bins/libraries and the application itself. A container, by contrast, holds the application and just enough of the bins/libraries to run it. The container platform looks after everything else, so developers can focus on the specifics of the application. This is particularly valuable as the application moves from dev to test to live environments: no changes are required because of the environment, so the code development pipeline can move much faster.
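To make the "application plus just enough bins/libraries" idea concrete, here is a sketch of a container image definition for a hypothetical Node.js application; the base image, port and file names are illustrative assumptions, not a prescribed setup:

```dockerfile
# Start from a slim base image: just the runtime, no full OS install
FROM node:18-alpine

WORKDIR /app

# Install only the libraries the application declares it needs
COPY package*.json ./
RUN npm install --omit=dev

# Add the application code itself
COPY . .

# The same image runs unchanged in dev, test and live environments
EXPOSE 8080
CMD ["node", "server.js"]
```

Because everything the application needs is inside the image, nothing has to change as it moves between environments.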
In production, containers require a management service for scheduling, deployment and scaling, plus a hosting service - where the containers will run. Amazon Elastic Container Service (Amazon ECS) supports Docker containers and allows you to easily manage containers on AWS. It is a managed platform that is highly scalable, high performing and secure.
There are two launch types for containers: Amazon Elastic Compute Cloud (Amazon EC2) and AWS Fargate. With Fargate you can run your containers without having to manage servers or clusters. All you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application.
The Amazon EC2 launch type allows you to provision your own instances to run containers, giving you server-level, granular control over the infrastructure that runs your container applications.
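As a sketch of the Fargate model described above, the following shows the kind of parameters you would hand to Amazon ECS when registering a task: CPU, memory, networking mode and the container image, with no servers to manage. The names, image URI and sizes are illustrative assumptions; in practice a dictionary like this would be passed to boto3's `ecs.register_task_definition` call, shown here only as a comment.

```python
import json

# Hypothetical task definition for the Fargate launch type: you specify
# CPU, memory, networking and the container image - nothing about servers.
task_definition = {
    "family": "legacy-web-app",              # illustrative name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",                 # required for Fargate
    "cpu": "256",                            # 0.25 vCPU
    "memory": "512",                         # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/legacy-web-app:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# In a real deployment this would be registered with:
#   boto3.client("ecs").register_task_definition(**task_definition)
print(json.dumps(task_definition, indent=2))
```

From there, launching more capacity is simply a matter of running more tasks from the same definition, which is where the elasticity comes from.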
Benefits:
Containers offer Elasticity in spades - simply launch as many containers as you need when you need them; and because application developers can focus on features and functions they are more agile. Also, even if you are containerising legacy applications post-migration, you can still minimise code changes.
Database Workloads
If your organisation started life in the Cloud, you have a choice of scalable, high performance Cloud-native data repository options designed specifically for the objects being stored within them. (Refer to our third blog in the Modernisation series).
If your journey started with a migration of legacy databases to the Cloud, your path of least resistance was probably to run your existing database software on virtual machines (VMs), hosted on Amazon Elastic Compute Cloud (Amazon EC2). This does not exactly exploit the elasticity or high availability of the Cloud.
You do have the option to modernise to a Cloud-native data repository, but you also have the option to optimise your database for the Cloud without re-writing for a new technology.
In this case, the optimisation step is to take advantage of Amazon Relational Database Service (Amazon RDS). Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It also automates time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.
It is available on a range of database instance types, optimised for memory, performance or I/O, and provides you with six familiar database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server.
Benefits:
Amazon RDS can be configured to be highly available and durable - something you’d certainly want for critical production databases. You can provision a Multi-AZ DB Instance to synchronously replicate data to a standby instance in a different Availability Zone (AZ). Amazon RDS has many other features that enhance reliability including automated backups, database snapshots, and automatic host replacement.
Amazon RDS is highly scalable - you can scale your database's compute and storage resources with only a few mouse clicks or an API call, often with no downtime. Many Amazon RDS engine types allow you to launch one or more Read Replicas to offload read traffic from your primary database instance, greatly improving user experience.
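The Multi-AZ and Read Replica features above can be sketched as boto3-style parameters. The identifiers, engine and sizes here are illustrative assumptions, and the actual boto3 calls appear only as comments rather than being executed:

```python
# Parameters for a highly available RDS instance: Multi-AZ synchronously
# replicates data to a standby in a different Availability Zone.
primary_db = {
    "DBInstanceIdentifier": "orders-db",     # illustrative name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,                 # GiB
    "MultiAZ": True,                         # standby instance in a second AZ
    "BackupRetentionPeriod": 7,              # days of automated backups
    "MasterUsername": "admin",
}

# A Read Replica offloads read traffic from the primary instance.
read_replica = {
    "DBInstanceIdentifier": "orders-db-replica-1",
    "SourceDBInstanceIdentifier": "orders-db",
}

# In a real deployment (credentials and networking omitted):
#   rds = boto3.client("rds")
#   rds.create_db_instance(**primary_db)
#   rds.create_db_instance_read_replica(**read_replica)
```

Note how availability and scalability become configuration choices rather than engineering projects: flipping `MultiAZ` or adding a replica replaces what would be days of replication setup on a self-managed VM.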
DevOps
Using containers, the previously separate silos of Development and Operations become blurred, and so organisations developing applications in the Cloud tend to move towards a DevOps model. Whereas Development is focussed on applications, and Operations is focussed on infrastructure or environment, in the DevOps model these two teams are merged into a single team and individuals develop a range of skills not limited to their single function.
DevOps is the combination of cultural philosophies, practices, and tools (such as CI/CD and automation) that increases an organisation’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace and being more agile than the competition.
Continuous Integration and Continuous Deployment (CI/CD) is where a commit or change to code passes through various automated stage gates, all the way from building and testing to deploying applications, from development to production environments. The automated environment removes the tasks of deploying code manually. When your infrastructure leverages hundreds or thousands of containers, automating with CI/CD allows you to scale and react faster, while minimising the risk of human error.
Four AWS services help you make the most of automation:
AWS CodeCommit – A fully managed source control service that hosts secure Git-based repositories, which hold both your application and deployment code. CodeCommit makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.
AWS CodeBuild – A fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy, on a dynamically created build server.
AWS CodeDeploy – A fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2 instances running the CodeDeploy agent and container services on AWS Fargate.
AWS CodePipeline – A fully managed continuous deployment service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline can be used to create an end-to-end pipeline that fetches the application code from CodeCommit, builds and tests using CodeBuild, and finally deploys using CodeDeploy.
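The end-to-end pipeline described above (CodeCommit source, CodeBuild build and test, CodeDeploy release) can be sketched as a stage structure. The repository, project and application names are illustrative assumptions; in practice a richer version of this shape would be supplied to boto3's `codepipeline.create_pipeline` call:

```python
# Hypothetical three-stage pipeline: source -> build/test -> deploy.
pipeline_stages = [
    {
        "name": "Source",
        "provider": "CodeCommit",            # fetch the application code
        "configuration": {"RepositoryName": "legacy-web-app", "BranchName": "main"},
    },
    {
        "name": "Build",
        "provider": "CodeBuild",             # compile and run tests
        "configuration": {"ProjectName": "legacy-web-app-build"},
    },
    {
        "name": "Deploy",
        "provider": "CodeDeploy",            # roll out to the environment
        "configuration": {"ApplicationName": "legacy-web-app",
                          "DeploymentGroupName": "production"},
    },
]

# Every commit to 'main' flows through these automated stage gates,
# removing the manual deployment steps (and the human error they invite).
for stage in pipeline_stages:
    print(f"{stage['name']}: {stage['provider']}")
```

Each stage acts as an automated gate: a change that fails its tests in the Build stage never reaches Deploy.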
Benefits:
With automated, repeatable compilation and deployment of code packages across any number of dev, test and live environments, developers can be far more efficient, focusing exclusively on fixing issues and rolling out new features and functions to their customers. That makes their company much more agile than competitors who have not automated.
Bottom line
There is a lot you can do to squeeze further benefits from your investment in Cloud by reviewing your workloads, data and by adopting DevOps practices - a Cloud-native culture of application development. This is particularly relevant if you have brought legacy applications on the journey and still don’t want to make any major changes to your legacy code.
Of course, when you are ready, you can modernise to fully Cloud-native technologies, as explained in our next blog in the Modernisation series.
If you would like to discuss any of these concepts in relation to your organisation, or need help with implementing any of them, PolarSeven is just a phone call away. We would be happy to give you a free consultation to discuss your unique situation.