QUANTUM FIELDS

Cloud Migration: Strategy and Best Practices Part 1

16/5/2023

Migrating to the cloud can offer numerous benefits, including improved scalability, flexibility, and cost savings. However, the migration process can be complex and challenging, especially for organizations that are new to cloud computing. To ensure a successful migration, it's crucial to follow best practices and avoid common pitfalls.

Now, migrating at scale isn't just about the number of servers you're moving over. It also involves a whole host of complexities spanning people, processes, and technology. This is part one of a three-part series that dives deeper into cloud migration strategy and best practices across those three dimensions. In this article, we'll focus on the 'people' perspective of large cloud migration projects. By following these best practices, you can streamline the migration process, reduce risk, and maximize the benefits of cloud computing.

Strategy, Scope and Timeline

The success of any migration program relies on three key elements: scope, strategy, and timeline. These elements need to be aligned and understood from the very beginning of your migration program to set the stage for a successful journey. Any changes to one element will affect the others, so realignment should be factored in for every change, no matter how basic or sensible the change might seem.

Strategy - Why do You Want to Migrate?

There are various reasons why you might be planning to migrate to AWS. Regardless of your reasons, it's essential to understand what your drivers are, communicate them, and prioritize them. Each additional driver adds time, costs, scope, and risks to your migration program. Once you define your migration strategy, alignment of requirements across various stakeholders and teams is crucial for success.

Different teams like Infrastructure, Security, Application, and Operations need to work towards a single goal and align their priorities with a single timeline of migrations. We recommend exploring how the desired business outcomes can be aligned across the various teams to minimize friction and ensure a smooth migration.

Scope - What are You Migrating?

It's not uncommon for the total scope of a migration program to be undefined, even when you're already halfway through the migration. Unknowns like shadow IT or production incidents can pop up unexpectedly, causing delays and shifts in your plans. To avoid this, it's recommended to invest time in defining the scope, working backwards from your target business outcome. Using discovery tooling to uncover assets is a best practice that can help you define the scope. Be flexible and have contingency plans in place to keep the program moving forward, as the scope will inevitably change with large migrations.

Timeline - When do You Need to Complete the Migration?

Your migration program's timeline should be based on your business case and what's possible to achieve in the allocated time. If your driver for migrating is based on a fixed date of completion, you must choose the strategy that meets that timeline requirement. For these time-sensitive types of migrations, it's recommended to follow the "Migrate first, then modernize" approach. This helps set expectations and encourages teams to align their individual project plans and budgets with the overall migration goal. It's important to address any disagreements as early as possible in the project, fail fast, and engage the right stakeholders to ensure that alignment is in place.

On the other hand, if your main goal of migration is to gain the benefits of application modernization, this must be called out early in the program. Many programs start with an initial goal based on a fixed deadline, and they don't plan for the requirements from stakeholders who want to resolve outstanding issues and problems. It's important to note that modernization activities during a migration can affect the functionality of business applications. Even a seemingly small upgrade like an operating system version change can have a significant impact on the program timelines. Therefore, it's crucial not to consider these upgrades trivial and to plan accordingly.

Best Practices for Large Migrations


Migrating to the cloud can be a daunting task, especially for large organizations. The success of a large migration project depends on several factors that need to be addressed from the very beginning. In this section, we will discuss best practices for large migrations, drawn from the experience of organizations that have completed them. These practices are divided into three categories:
  • People
  • Technology
  • Processes

People Perspective


This section focuses on the following key areas of the people perspective:
  • Executive support: Identifying a single-threaded leader who’s empowered to make decisions
  • Team collaboration and ownership: Collaborating among various teams
  • Training: Proactively training teams on the various tooling

Executive support


Identify a Single-Threaded Leader

When it comes to large migrations, it's crucial to have the right people in place who can make informed decisions and ensure that the project stays on track. This involves identifying a single-threaded leader who is accountable for the project's success and empowered to make decisions. The leader should also help avoid silos and streamline work-streams by maintaining consistent priorities.

For instance, a global customer was able to scale from one server per week at the outset of the program to over 80 servers per week at the start of the second month. This was only possible due to the CIO's full support as a single-threaded leader. The CIO attended weekly migration cutover calls with the migration team to ensure real-time escalation and resolution of issues, which accelerated the migration velocity.

Align the Senior Leadership Team

Achieving alignment between teams regarding the success criteria of the migration is crucial. Although a small, dedicated team can handle migration planning and implementation, defining the migration strategy and carrying out peripheral activities can pose challenges that may require involvement from different areas of the IT organization.

These areas include business, applications, networking, security, infrastructure, and third-party vendors. In such cases, it is essential to have direct involvement from application owners and leadership, establish alignment, and define a clear escalation path to the single-threaded leader.

Team Collaboration and Ownership


Create a Cross-Functional Cloud-Enablement Team

To successfully migrate to the cloud, it's crucial to have a team that is focused on enabling the organization to work efficiently in the cloud. We recommend creating a Cloud Enablement Engine (CEE), which is a cross-functional team responsible for ensuring the organization's readiness for migrating to AWS. The CEE should include representation from various departments, including infrastructure, applications, operations, and security, and be accountable for developing policies, defining and implementing tools and processes, and establishing the organization's cloud operations model.

As the cutover date approaches, it is a good idea to set up a war room where stakeholders from different areas, such as infrastructure, security, applications, and business, can work together to resolve issues. This enables the team to meet deadlines and successfully complete the migration.

Define Requirements for All Stakeholders

It's important to plan in advance for the involvement of teams and individuals who are not part of the core migration team. This involves identifying these groups and defining their role during the migration planning stages. Specifically, it's important to involve the application teams as they possess crucial knowledge of the applications, and their participation is needed to diagnose issues and sign off on the cutover.

This is where a RACI can be very useful. 
RACI is a popular project management and organizational tool used to clarify the roles and responsibilities of individuals or teams involved in a project or process. It helps ensure that everyone understands their assigned tasks and that accountability is clearly defined. The term "RACI" stands for Responsible, Accountable, Consulted, and Informed, which are the four key roles involved in the process.
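A RACI matrix can be represented very simply in code. The sketch below, with team and task names that are illustrative assumptions rather than a prescribed structure, shows the shape of such a matrix for two migration tasks:

```python
# A minimal RACI matrix sketch for a cloud migration cutover.
# The team and task names are illustrative assumptions.
raci = {
    "Validate migration plan": {
        "Responsible": ["Application team"],
        "Accountable": ["Migration lead"],
        "Consulted": ["Security"],
        "Informed": ["Business owners"],
    },
    "Execute cutover": {
        "Responsible": ["Migration team"],
        "Accountable": ["Migration lead"],
        "Consulted": ["Application team", "Networking"],
        "Informed": ["Business owners"],
    },
}

def accountable_for(task: str) -> list:
    """Return who is accountable for a task."""
    return raci[task]["Accountable"]

# Sanity check: each task should have exactly one accountable party.
for task, roles in raci.items():
    assert len(roles["Accountable"]) == 1, task

print(accountable_for("Execute cutover"))  # ['Migration lead']
```

Even a lightweight structure like this makes gaps obvious, for example a task with no accountable party, or one with several.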

While the core team will lead the migration, the application teams will likely play a role in validating the migration plan and testing during cutover. Many organizations view cloud migration as an infrastructure project, but it's important to recognize that it's also an application migration. Failing to involve application teams can lead to issues during the migration process.

When selecting a migration strategy, it's recommended to consider the application team's required involvement. For instance, a rehost strategy may require less application-team involvement compared to a replatform or refactor strategy, which involve more changes to the application landscape. If application owner availability is limited, it may be preferable to use a rehost or replatform strategy rather than refactor, relocate, or repurchase strategies.
 
Validate That There Are No Licensing Issues When Migrating Workloads

To avoid potential licensing issues when migrating workloads to the cloud, it is important to validate that the licenses will still be valid in the new environment. It is possible that licensing agreements may be focused on on-premises infrastructure, such as CPU or MAC address, or may not allow hosting in a public cloud environment. Renegotiating licensing agreements can be time-consuming and may delay the migration project.

To prevent licensing issues, we suggest working with sourcing or vendor management teams as soon as the migration scope is defined. This can also impact the target architecture and migration strategy, so it is important to take licensing into account during the planning phase.
 

Training


Train Teams on New Tooling and Processes

After defining the migration strategy, it's important to assess what training is required for both the migration itself and the target operating model. Unfamiliar tooling, such as AWS Database Migration Service, can cause delays during the migration, so it's recommended to provide hands-on training to teams in advance. Automation is also key to accelerating large migrations.

Summary


Large-scale migration to the cloud requires a well-defined strategy, scope and timeline. This includes understanding the business drivers for the migration, identifying the workloads to be migrated, and developing a roadmap for the migration process.

In addition, successful cloud migration projects require a holistic approach that considers people, process, and technology. While it's important to have the right technology and processes in place, it's equally crucial to focus on the people involved in the migration. This includes identifying and engaging with stakeholders, establishing clear communication channels, and providing adequate training and support for employees.

In this article we have focused on the people perspective of cloud migration, which is a critical aspect of any successful migration project. We have discussed the importance of establishing a clear scope and strategy for the migration project, as well as setting realistic timelines to ensure a smooth transition.

In future articles, we will delve deeper into these areas and provide insights and best practices for navigating the technical and procedural aspects of cloud migration.

Achieving Cloud Excellence with the AWS Well-Architected Framework

12/5/2023

The AWS Well-Architected Framework is a set of best practices and guidelines for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It offers a structured approach to evaluate and improve existing architectures and plan new ones.

By following the guidance provided by the AWS Well-Architected Framework, businesses can optimize their cloud infrastructure, improve their applications, and reduce operational costs. In this article, we will explore each of these pillars in detail and learn how to apply the principles to your cloud architecture.

Consistent use of the framework ensures that your operations and architectures are aligned with industry best practices, enabling you to identify areas for improvement. We believe that adopting a Well-Architected approach that incorporates operational considerations significantly improves the likelihood of business success.

Here are the six pillars on which the AWS Well-Architected Framework is based. An easy way to remember them is the acronym PSCORS:

  • P - Performance Efficiency
  • S - Security
  • C - Cost Optimization
  • O - Operational Excellence
  • R - Reliability
  • S - Sustainability

 
Now that we have introduced the AWS Well-Architected Framework, let's dive deeper into the six pillars that form the basis of this framework. Each pillar covers a different aspect of building and running workloads in the cloud and provides a set of best practices and guidelines to help you improve the overall quality of your workloads. Let's explore each pillar in more detail to gain a better understanding of how they can help you achieve operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability.

Performance Efficiency Pillar


The performance efficiency pillar provides a structured and streamlined approach to allocating IT and computing resources efficiently. It involves selecting the resource types and sizes that meet workload requirements, monitoring performance, and maintaining efficiency as business needs evolve.

Design Principles

To achieve and maintain efficient workloads in the cloud, consider the following design principles:
  • Democratize advanced technologies: Allow your team to focus on product development by delegating complex tasks to your cloud vendor. Rather than asking your IT team to learn about hosting and running a new technology, consider consuming the technology as a service. For instance, NoSQL databases, media transcoding, and machine learning are specialized technologies that become services in the cloud, allowing your team to consume them.
  • Go global in minutes: Deploying your workload in multiple AWS Regions around the world provides lower latency and a better experience for your customers at minimal cost.
  • Use serverless architectures: Serverless architectures remove the need to run and maintain physical servers for traditional compute activities. For instance, serverless storage services can act as static websites (eliminating the need for web servers) and event services can host code. This eliminates the operational burden of managing physical servers and lowers transactional costs because managed services operate at cloud scale.
  • Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations.
  • Consider mechanical sympathy: Mechanical sympathy is when you use a tool or system with an understanding of how it operates best. When you understand how a system is designed to be used, you can align with the design to gain optimal performance. Choose the technology approach that aligns best with your goals. For example, consider data access patterns when selecting database or storage approaches. 

    "You don't have to be an engineer to be a racing driver, but you do have to have mechanical sympathy."
    (Jackie Stewart, racing driver)


The performance efficiency pillar focuses on optimizing IT and computing resources for workload requirements. By leveraging advanced technologies as services, adopting serverless architectures, going global in minutes, experimenting more often, and selecting the technology approach that aligns best with your goals, you can improve performance, lower costs, and increase efficiency in the cloud. Following these principles can help you achieve and maintain efficient workloads that scale with your business needs.

Security Pillar


The security pillar focuses on safeguarding systems and data. It includes topics like data confidentiality, integrity, availability, permission management, and establishing controls to detect security events. The security pillar offers guidance for architecting secure workloads on AWS by utilizing cloud technologies to improve the security posture. 

Design Principles

To strengthen the security of workloads, there are several design principles that AWS recommends:
  • Implement a strong identity foundation: Implement the principle of least privilege (POLP) and separation of duties to authorize each interaction with AWS resources. Centralize identity management, and avoid using long-term static credentials.
  • Maintain traceability: Monitor, alert, and audit actions and changes to the environment in real-time to maintain traceability. Integrate log and metric collection with systems to automatically investigate and take action.
  • Apply security at all layers: Apply defense in depth approach with multiple security controls at all layers, including the edge of the network, VPC, load balancing, every instance and compute service, operating system, application, and code.
  • Automate security best practices: Automate software-based security mechanisms to improve the ability to securely scale more rapidly and cost-effectively. Create secure architectures by implementing controls that are defined and managed as code in version-controlled templates.
  • Protect data in transit and at rest: Classify data into sensitivity levels and use mechanisms such as encryption, tokenization, and access control where appropriate to protect data in transit and at rest.
  • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data to avoid human error when handling sensitive data.
  • Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align with organizational requirements. Run incident response simulations and use tools with automation to increase the speed of detection, investigation, and recovery.
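To make the least-privilege principle from the first bullet concrete, here is a minimal policy sketch expressed as a Python dict in the shape of an IAM JSON policy. The bucket name is a hypothetical placeholder, and the point is simply that access is scoped to two read actions on one resource rather than broad `s3:*` permissions:

```python
# A least-privilege IAM-style policy sketch: read-only access to a
# single, hypothetical bucket instead of broad s3:* permissions.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",    # hypothetical bucket
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

# Collect every granted action: nothing beyond the two read actions,
# and no statement is scoped to "*" resources.
actions = {a for s in least_privilege_policy["Statement"] for a in s["Action"]}
print(sorted(actions))  # ['s3:GetObject', 's3:ListBucket']
```

Defining policies as code like this also supports the "automate security best practices" principle: the policy can live in version control and be reviewed like any other change.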

By following the design principles discussed above, you can take advantage of cloud technologies to strengthen your workload security and reduce the risk of security incidents. These principles provide in-depth, best-practice guidance for architecting secure workloads on AWS. It is important to continuously review and improve your security posture to protect your data and systems from potential threats.

Cost Optimization Pillar


The cost optimization pillar focuses on controlling fund allocation, selecting the right type and quantity of resources, and scaling efficiently to meet business needs without incurring unnecessary costs. To achieve financial success in the cloud, it is crucial to understand spending over time and invest in cloud financial management.

Design Principles

To achieve cost optimization, consider the following design principles:
  • Implement cloud financial management: Build capability through knowledge building, programs, resources, and processes to become a cost-efficient organization.
  • Adopt a consumption model: Pay only for the computing resources you consume and increase or decrease usage based on business requirements.
  • Measure overall efficiency: Measure business output and costs associated with delivery to understand the gains you make from increasing output, functionality, and reducing cost.
  • Stop spending on undifferentiated heavy lifting: AWS removes the operational burden of managing infrastructure, allowing you to focus on your customers and business projects.
  • Analyze and attribute expenditure: Use cloud tools to accurately identify the cost and usage of workloads and attribute IT costs to revenue streams and individual workload owners. This helps measure ROI and optimize resources to reduce costs.

The cost optimization pillar is focused on minimizing unnecessary spending while ensuring that computing resources are allocated optimally. By investing in cloud financial management and adopting a consumption model, organizations can significantly reduce costs while maintaining efficiency. Measuring overall efficiency, stopping spending on undifferentiated heavy lifting, and analyzing and attributing expenditure can also contribute to achieving cost optimization.

Operational Excellence Pillar


The operational excellence pillar within the AWS Well-Architected Framework is focused on running and monitoring systems, and continuously improving processes and procedures. This includes automating changes, responding to events, and defining standards to manage daily operations.

AWS defines operational excellence as a commitment to building software correctly while consistently delivering a great customer experience. It includes best practices for organizing teams, designing workloads, operating them at scale, and evolving them over time. By implementing operational excellence, teams can focus more of their time on building new features that benefit customers, and less time on maintenance and firefighting.

The ultimate goal of operational excellence is to get new features and bug fixes into customers' hands quickly and reliably. Organizations that invest in operational excellence consistently delight customers while building new features, making changes, and dealing with failures. Along the way, operational excellence drives towards continuous integration and continuous delivery (CI/CD) by helping developers achieve high-quality results consistently.


Design Principles

The following are the design principles for operational excellence in the cloud:
  • Perform operations as code: Apply the same engineering discipline used for application code to the entire environment in the cloud. Define the entire workload (applications, infrastructure, and so on) as code and update it with code. Script operational procedures and automate them by launching them in response to events. Performing operations as code limits human error and creates consistent responses to events.
  • Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly, which increases the flow of beneficial changes into the workload. Make changes in small increments that can be reversed if they fail, aiding in the identification and resolution of issues introduced to the environment without affecting customers when possible.
  • Refine operational procedures frequently: As operational procedures are used, teams should look for opportunities to improve them. As the workload evolves, procedures should be evolved appropriately. Regular game days should be set up to review and validate that all procedures are effective, and teams are familiar with them.
  • Anticipate failure: Perform "pre-mortem" exercises to identify potential sources of failure so they can be removed or mitigated. Test failure scenarios and validate your understanding of their impact. Test response procedures to ensure they are effective and that teams are familiar with them. Set up regular game days to test workload and team responses to simulated events.
  • Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.
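The "operations as code" principle can be illustrated with a small sketch: an operational runbook expressed as a function that reacts to a monitoring event, instead of a manual checklist. The event shape and the action names here are illustrative assumptions, not any particular tool's API:

```python
# "Operations as code" sketch: a scripted, event-driven response that
# replaces a manual runbook. Event fields and action names are
# illustrative assumptions.
def handle_health_event(event: dict) -> str:
    """Map a monitoring event to a consistent, scripted response."""
    status = event.get("status")
    if status == "unhealthy":
        # In a real system this branch would trigger automation,
        # e.g. replacing the failed instance; here we return the action.
        return "replace-instance"
    if status == "degraded":
        return "scale-out"
    return "no-op"

print(handle_health_event({"status": "unhealthy"}))  # replace-instance
print(handle_health_event({"status": "ok"}))         # no-op
```

Because the response lives in code, it can be version-controlled, reviewed, tested on game days, and invoked automatically in response to events, which is exactly what the design principle asks for.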

Operational excellence focuses on achieving a great customer experience by building software correctly, delivering new features and bug fixes quickly and reliably, and investing in continuous improvement. The design principles for operational excellence in the cloud are focused on performing operations as code, making frequent small reversible changes, refining operational procedures, anticipating failure, and learning from all operational failures. 

Reliability Pillar


The reliability pillar of AWS focuses on ensuring that workloads perform their intended functions and can recover quickly from failures. This section covers topics such as distributed system design, recovery planning, and adapting to changing requirements to help you achieve reliability.

Traditional on-premises environments can pose challenges to achieving reliability due to single points of failure, lack of automation, and lack of elasticity. By adopting the best practices outlined below, you can build architectures with strong foundations, resilient design, consistent change management, and proven failure recovery processes.

Design Principles

Here are some design principles that can help increase the reliability of your workloads:
  • Automatically recover from failure: Monitor key performance indicators (KPIs) to run automation when a threshold is breached. Use KPIs that measure business value and not just the technical aspects of the service's operation. This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure.
  • Test recovery procedures: In the cloud, you can test how your workload fails and validate your recovery procedures. You can use automation to simulate different failures or recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, reducing risk.
  • Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to ensure they don't share a common point of failure.
  • Stop guessing capacity: In the cloud, you can monitor demand and workload utilization and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning.
  • Manage change through automation: Changes to your infrastructure should be made using automation. Manage changes to the automation, which can be tracked and reviewed.
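The "stop guessing capacity" and "scale horizontally" principles come down to a simple calculation: derive the number of small instances from measured demand instead of a fixed guess. The request rate and per-instance capacity figures below are illustrative assumptions:

```python
import math

# "Stop guessing capacity" sketch: size the fleet from measured demand.
# The numbers are illustrative assumptions, not recommendations.
def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float,
                      minimum: int = 2) -> int:
    """Scale horizontally: enough small instances to satisfy demand,
    never dropping below a minimum kept for availability."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(needed, minimum)

print(desired_instances(950, 200))  # 5
print(desired_instances(100, 200))  # 2 (availability floor)
```

In practice a managed auto-scaling service performs this continuously, but the sketch shows why monitoring demand removes the need to guess: the fleet size follows the workload up and down.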
 
The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when expected to.

Sustainability Pillar


The sustainability pillar aims to decrease the environmental impact of cloud workloads through a shared responsibility model, impact evaluation, and maximizing utilization to minimize required resources and reduce downstream impacts.

Design Principles

The following design principles can be applied to enhance sustainability and minimize impact when creating cloud workloads:

  • Understand the impact: Measure the impact of cloud workloads and forecast future impact by including all sources of impact. Compare productive output to total impact and use this data to establish key performance indicators (KPIs), improve productivity, and evaluate the impact of proposed changes over time.
  • Establish sustainability goals: Set long-term sustainability goals for each cloud workload and model the return on investment of sustainability improvements. Plan for growth and design workloads for reduced impact intensity per user or transaction.
  • Maximize utilization: Optimize workloads to ensure high utilization and maximize energy efficiency by eliminating idle resources, processing, and storage.
  • Anticipate and adopt new hardware and software: Monitor and evaluate new, more efficient hardware and software offerings and design for flexibility to allow rapid adoption of new technologies.
  • Use managed services: Adopt shared services to reduce the infrastructure needed to support cloud workloads, such as AWS Fargate for serverless containers and Amazon S3 Lifecycle configurations for infrequently accessed data.
  • Reduce downstream impact: Decrease the energy or resources required to use cloud services and eliminate the need for customers to upgrade their devices by testing with device farms.
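The "use managed services" bullet mentions Amazon S3 Lifecycle configurations for infrequently accessed data. A lifecycle rule, sketched here as a Python dict in the shape of an S3 lifecycle rule, moves objects to progressively colder storage over time instead of keeping everything on the hottest tier. The day thresholds are illustrative assumptions:

```python
# Lifecycle-rule sketch in the spirit of Amazon S3 Lifecycle
# configurations. The day thresholds are illustrative assumptions.
lifecycle_rule = {
    "ID": "archive-infrequent-data",
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
        {"Days": 180, "StorageClass": "GLACIER"},     # long-term archive
    ],
    "Expiration": {"Days": 730},                      # delete after two years
}

# Sanity check: transitions should get colder as the days increase.
days = [t["Days"] for t in lifecycle_rule["Transitions"]]
assert days == sorted(days)
print(days)  # [30, 180]
```

Rules like this reduce the storage (and therefore energy) devoted to data that is rarely read, which is precisely the utilization improvement the sustainability pillar is after.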

The sustainability pillar of cloud computing focuses on reducing the environmental impact of running cloud workloads. By applying the design principles outlined, cloud architects can maximize sustainability and minimize impact. It is important to understand the impact of cloud workloads, establish sustainability goals, maximize utilization, anticipate and adopt new, more efficient hardware and software offerings, use managed services, and reduce the downstream impact of cloud workloads. Adopting these practices can help businesses and organizations support wider sustainability goals, identify areas of potential improvement, and reduce their overall environmental footprint.

Summary


In conclusion, the AWS Well-Architected Framework is a valuable resource for organizations looking to build and optimize their cloud infrastructure. By following the best practices outlined in the framework, businesses can improve their system's reliability, security, performance efficiency, cost optimization, and operational excellence. Regularly reviewing and updating your architecture based on the AWS Well-Architected Framework can help ensure that your system is scalable, efficient, and cost-effective. With the flexibility and scalability of the cloud, organizations can achieve their goals faster and more efficiently than ever before, and the AWS Well-Architected Framework provides a solid foundation to achieve that success.

Navigating Cloud Migration: Choosing the Right Strategy

3/5/2023

Organizations are increasingly moving to the cloud due to a variety of factors, including the need for greater agility, scalability, cost-efficiency, and improved security. Cloud providers offer businesses access to a wide range of services and resources that can be quickly provisioned and scaled to meet changing business needs.

Cloud migration is the process of moving an organization's data, applications, and other digital assets from on-premises infrastructure to a cloud computing environment. Migrating to the cloud offers many benefits, including greater flexibility, scalability, security, and cost savings. However, there are many different cloud migration strategies to choose from, each with its own unique set of benefits and challenges. 

When we talk about migrating a workload to the cloud, we're referring to moving an application, together with its data and supporting infrastructure, from its current environment to a cloud environment. In this article, we'll focus on the migration strategies for the AWS Cloud.

There are seven migration strategies that we call the 7 Rs, which are:

  • Retire
  • Retain
  • Rehost
  • Relocate
  • Repurchase
  • Replatform
  • Refactor or re-architect
 
It's really important to select the right migration strategies for a large migration. You might have already selected the strategies during the mobilize phase or during the initial portfolio assessment. In this section, we'll go over each migration strategy and its common use cases.

Retire


Retire is the strategy we use for applications that we want to decommission or archive. This means that we can shut down the servers within that application stack. Here are some common use cases for the retire strategy:
  • There's no business value in retaining the application or moving it to the cloud.
  • We want to eliminate the cost of maintaining and hosting the application.
  • We want to reduce the security risks of operating an application that uses an operating system (OS) version or components that are no longer supported.
  • We might want to retire applications based on their performance, such as those that have an average CPU and memory usage below 5 percent, which we call zombie applications. We might also retire some applications that have an average CPU and memory usage between 5 and 20 percent over a period of 90 days, known as idle applications. To identify zombie and idle applications, we can use the utilization and performance data from our discovery tool.
  • Finally, if there has been no inbound connection to the application for the last 90 days, we might consider retiring it.
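The zombie and idle thresholds above translate directly into a simple screening rule. Here's a hedged sketch, assuming 90-day average utilization figures exported from your discovery tool; the function and parameter names are made up for this example:

```python
# Illustrative retire-candidate screen using the thresholds above
# (zombie: < 5% average CPU and memory; idle: 5-20%; no inbound
# connections for 90 days). Field names are assumptions -- map them
# to whatever your discovery tool actually exports.

def classify_utilization(avg_cpu_pct: float, avg_mem_pct: float) -> str:
    """Classify an application from 90-day average utilization figures."""
    if avg_cpu_pct < 5 and avg_mem_pct < 5:
        return "zombie"
    if avg_cpu_pct <= 20 and avg_mem_pct <= 20:
        return "idle"
    return "active"

def is_retire_candidate(avg_cpu_pct, avg_mem_pct, days_since_last_inbound):
    """Flag apps matching any of the retire use cases described above."""
    return (
        classify_utilization(avg_cpu_pct, avg_mem_pct) in ("zombie", "idle")
        or days_since_last_inbound >= 90
    )

print(classify_utilization(2.0, 3.5))    # zombie
print(classify_utilization(12.0, 8.0))   # idle
print(is_retire_candidate(40.0, 55.0, days_since_last_inbound=120))  # True
```

Treat the output as a shortlist for human review, not an automatic decommission list: a low-utilization server may still run a month-end batch job or a disaster-recovery standby.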

Retain


If you've got apps that you're not quite ready to migrate or that you want to keep in your source environment, the Retain strategy is your go-to. You might decide to migrate these apps at a later time, but for now, you want to keep them right where they are.

Here are some common scenarios where the Retain strategy is a good choice:
​
  • Security and compliance: If you need to comply with data residency requirements, you might want to keep certain apps in your source environment.
  • High risk: If an app is particularly complex or has a lot of dependencies, you might want to keep it in your source environment until you can make a detailed migration plan.
  • Dependencies: If an app depends on other apps that need to be migrated first, you might want to retain it until those other apps are in the cloud.
  • Recently upgraded: If you just invested in upgrading your current system, you might want to wait until the next technical refresh to migrate the app.
  • No business value to migrate: For apps with only a few internal users, it might not make sense to migrate them to the cloud.
  • Plans to migrate to software as a service (SaaS): If you're planning to move to a vendor-based SaaS solution, you might want to keep an app in your source environment until the SaaS version is available.
  • Unresolved physical dependencies: If an app is dependent on specialized hardware that doesn't have a cloud equivalent, such as machines in a manufacturing plant, you might want to retain it.
  • Mainframe or mid-range apps and non-x86 Unix apps: These apps require careful assessment and planning before migrating to the cloud. Examples include IBM AS/400 and Oracle Solaris.
  • Performance: If you want to keep zombie or idle apps in your source environment based on their performance, the Retain strategy is a good choice.

Rehost

​
Rehosting, commonly known as “lift and shift”, means moving your application stack from your source environment to the cloud without making any changes to the application itself. You can quickly migrate applications from on-premises or other cloud platforms without worrying about compatibility issues or performance disruptions.

With rehosting, you can migrate a large number of machines, whether physical, virtual, or on other cloud platforms, without long cutover windows. This helps minimize disruption to your business and your customers; the length of any downtime depends on your cutover strategy.

The rehosting strategy lets you scale your applications without making any cloud optimizations, which means you don't have to spend time or money making changes to your applications before migration. Once your applications are running in the cloud, you can optimize or re-architect them more easily and integrate them with other cloud services.
​

In the AWS Cloud, you can make the migration even smoother by automating the rehosting process using services such as:

  • AWS Application Migration Service
  • AWS Cloud Migration Factory Solution​

Relocate


If you are looking to transfer a large number of servers or instances from your on-premises platform to a cloud version of the platform, then the relocate strategy could be the right choice for you. With this strategy, you can move one or more applications to a different virtual private cloud (VPC), AWS Region or AWS account. For instance, you can transfer servers in bulk from a VMware software-defined data centre (SDDC) to VMware Cloud on AWS, or move an Amazon Relational Database Service (Amazon RDS) DB instance to another VPC or AWS account.

The relocate strategy is great because you don't have to buy new hardware, rewrite applications, or modify your existing operations. During relocation, your application keeps serving users, so you'll experience minimal disruption and downtime. In fact, it's the quickest way to migrate and operate a workload in the cloud because it doesn't affect the overall architecture of your application.

​Repurchase


Repurchasing your application is a migration strategy that involves replacing your existing on-premises application with a different version or product. This new application should offer more business value than the existing one, such as accessibility from anywhere, no infrastructure maintenance, and pay-as-you-go pricing models. This strategy can help reduce costs associated with maintenance, infrastructure, and licensing.

Here are some common use cases for the repurchase migration strategy:
  • Moving from traditional licenses to Software-as-a-Service (SaaS) to remove the burden of managing and maintaining infrastructure and reduce licensing issues.
  • Upgrading to the latest version or third-party equivalent of your existing on-premises application to leverage new features, integrate with cloud services, and scale the application more easily.
  • Replacing a custom application by repurchasing a vendor-based SaaS or cloud-based application to avoid recoding and re-architecting the custom application.

Before purchasing the new application, you should assess it based on your business requirements, particularly security and compliance. After purchasing the new application, here are the next steps:
​
  • Training your team and users on the new system
  • Migrating your data to the newly purchased application
  • Integrating the application into your authentication services, such as Microsoft Active Directory, to centralize authentication
  • Configuring networking to help secure communication between the purchased application, your users, and your infrastructure

Typically, the application vendor assists with these activities for a smooth transition. 

Replatform


Replatforming, also known as lift, tinker, and shift or lift and reshape, is a migration strategy where you move your application to the cloud and introduce some level of optimization to operate it more efficiently, reduce costs, or take advantage of cloud capabilities. For instance, you can move a Microsoft SQL Server database to Amazon RDS for SQL Server.

With the replatform strategy, you can make minimal or extensive changes to your application, depending on your business goals and your target platform. Here are some common use cases for replatforming:
​
  • If you want to save time and reduce costs, you can move to a fully managed or serverless service in the AWS Cloud.
  • To improve your security and compliance posture, you can upgrade your operating system to the latest version using the End-of-Support Migration Program (EMP) for Windows Server. This program lets you migrate your legacy Windows Server applications to the latest supported versions of Windows Server on AWS, without any code changes.
  • You can also reduce costs by using AWS Graviton processors, custom Arm-based processors designed by AWS.
  • If you want to cut costs by moving from a Microsoft Windows operating system to a Linux operating system, you can port your .NET Framework applications to .NET Core, which can run on a Linux operating system. You can use the Porting Assistant for .NET analysis tool to help you with this.
  • You can also improve performance by migrating virtual machines to containers, without making any code changes. By using the AWS App2Container migration tool, you can modernize your .NET and Java applications into containerized applications.
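To make the Graviton point above concrete, moving to Graviton is often a matter of swapping instance families. The sketch below maps a few common x86 families to their Graviton counterparts; the table is deliberately partial and the helper is a hypothetical example, so verify current instance availability and pricing per Region before relying on any mapping like this:

```python
from typing import Optional

# Partial, illustrative mapping from x86 instance families to Graviton
# equivalents. Check current availability and pricing before use.
X86_TO_GRAVITON = {
    "m5": "m6g",   # general purpose
    "c5": "c6g",   # compute optimized
    "r5": "r6g",   # memory optimized
    "t3": "t4g",   # burstable
}

def graviton_equivalent(instance_type: str) -> Optional[str]:
    """Return a Graviton counterpart for e.g. 'm5.xlarge', or None."""
    family, _, size = instance_type.partition(".")
    target = X86_TO_GRAVITON.get(family)
    return f"{target}.{size}" if target and size else None

print(graviton_equivalent("m5.xlarge"))   # m6g.xlarge
print(graviton_equivalent("c5.2xlarge"))  # c6g.2xlarge
print(graviton_equivalent("p3.8xlarge"))  # None (no entry in this table)
```

Remember that the instance swap is only half the job: your application binaries and dependencies must also run on the Arm architecture, which is what tools like the Porting Assistant for .NET help you assess.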

​The replatform strategy allows you to keep your legacy application running without compromising security and compliance. It reduces costs and improves performance by migrating to a managed or serverless service, moving virtual machines to containers, and avoiding licensing expenses.

Refactor or Re-architect


Refactoring or re-architecting is a cloud migration strategy that involves moving an application to the cloud and making changes to its architecture to take full advantage of cloud-native features. This is done to improve agility, performance, and scalability, and is often driven by business demands to scale, release products and features faster, and reduce costs.

Here are some common use cases for the refactor migration strategy:
​
  • Your legacy mainframe application can no longer meet the demands of the business due to its limitations or is too expensive to maintain.
  • You have a monolithic application that is slowing down product delivery and cannot keep up with customer needs and demands.
  • You have a legacy application that nobody knows how to maintain, or the source code is not available.
  • The application is difficult to test, or test coverage is low, which affects the quality and delivery of new features and fixes. Redesigning the application for the cloud can help increase test coverage and integrate automated testing tools.
  • For security and compliance reasons, you might need to move a database to the cloud but keep certain tables (such as customer information or patient diagnosis tables) on premises. In this scenario, you will need to refactor your database to separate the tables that will be migrated from those that will be kept on premises.

By refactoring your application, you can take advantage of cloud-native features to improve performance, scalability, and agility. This strategy is particularly useful when your legacy application can no longer meet your business needs or is too costly to maintain. 

Summary


Cloud migration is a complex process that requires careful planning and consideration of various factors. As discussed in this article, there are seven strategies that organizations can use to migrate their applications to the cloud: retire, retain, rehost, relocate, repurchase, replatform, and refactor or re-architect. Each strategy has its own benefits and drawbacks, and the choice of strategy will depend on the specific needs of the organization.

While the benefits of cloud migration are many, including improved scalability, agility, and cost savings, it's important to approach the process with caution and to take a strategic approach. A successful cloud migration requires a clear understanding of the business goals and requirements, as well as careful consideration of security, compliance, and data protection.
​

Organizations that are considering a cloud migration should seek guidance from experienced cloud migration specialists and take advantage of the many tools and resources that are available to help simplify the process. With careful planning and the right strategy, cloud migration can be a powerful tool for driving innovation, improving efficiency, and delivering real value to the organization and its customers.
    Author

    ​Tim Hardwick is a Strategy & Transformation Consultant specialising in Technology Strategy & Enterprise Architecture
