
Cloud Migration: Strategy and Best Practices Part 2

16/5/2023

In the previous article on Cloud Migration Strategy and Best Practices, we discussed the importance of having a well-defined strategy, a clear scope, and a realistic timeline for successful cloud migration projects. We also highlighted the critical role that people play in that success.

In this article, we will shift our focus to the technology perspective of cloud migration. We will explore how technology can be used to achieve the scale and velocity required, while aligning with the strategy, scope and timelines of the migration project. The key principle is to automate wherever possible, utilizing tools such as discovery tools, migration implementation tools, configuration management databases, inventory spreadsheets, and project management tools.

​Once the necessary tools are selected, it's essential to ensure that the migration team has the skills to use them effectively. With the right tools and skills in place, technology can play a critical role in accelerating large migrations.

Technology Perspective


In order to accelerate large migrations, technology can provide a solid foundation. One example of this is the Cloud Migration Factory solution, which focuses on end-to-end automation for migrations. This section explores some best practices for using technology to achieve the scale and velocity required, while also aligning with the strategy, scope, and timelines of the migration project.
​

The key principle here is to automate wherever possible. When dealing with thousands of servers, performing manual tasks can be a costly and time-consuming effort. To aid in the migration process, several tools are typically used, including discovery tools, migration implementation tools, configuration management databases (CMDBs), inventory spreadsheets, and project management tools. These are utilized at various stages of the migration, from assessment to mobilization through to implementation. The selection of tools is determined by the business objectives and timelines.

Once the migration phases are planned and the necessary tools are selected, it's essential to ensure that the migration team has the skills to use them effectively. If there are any gaps in skills or experience, targeted training should be planned to ramp up the team's abilities. Additionally, it's beneficial to create events where teams can gain experience with the migration tooling in a safe environment.

​For example, are there sandpit or lab servers that teams can migrate to gain experience with the tooling? Alternatively, can initial development workloads be used for learning purposes? With the right tools and skills in place, technology can play a critical role in accelerating large migrations.

Automation, Tracking, and Tooling Integration


Automate Migration Discovery to Reduce the Time Required
​

When starting a large migration project, it's important to figure out what needs to be migrated and how to migrate it. This process is called discovery and it involves capturing key information about the workloads that will be migrated. To speed up the migration, it's essential to automate the discovery process and import the captured data into the migration factory. This significantly reduces the time and effort required to complete the discovery phase.

For example, you could automate your data intake process by hosting your migration metadata on Microsoft SharePoint and using an AWS Lambda function to load the data into the migration factory automatically. This enables you to reduce manual work, minimize human error, and speed up the migration process.
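
To make this concrete, here is a minimal sketch of what such an intake function might look like, assuming the SharePoint list is exported as a CSV object to Amazon S3 and the migration factory's intake store is a DynamoDB table. The bucket, table, and column names are placeholders, not part of any real solution.

```python
# Hypothetical intake Lambda: loads migration metadata (exported from SharePoint
# to S3 as a CSV file) into an assumed DynamoDB table used by the migration factory.
import csv
import io

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
intake_table = dynamodb.Table("migration-factory-intake")  # placeholder table name


def lambda_handler(event, context):
    # The S3 location of the exported SharePoint list is passed in the event.
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    rows = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))

    loaded = 0
    for row in rows:
        # Skip rows that are missing the fields the factory needs.
        if not row.get("server_name") or not row.get("wave_id"):
            continue
        intake_table.put_item(Item={
            "server_name": row["server_name"],
            "wave_id": row["wave_id"],
            "application": row.get("application", "unknown"),
            "operating_system": row.get("operating_system", "unknown"),
        })
        loaded += 1

    return {"servers_loaded": loaded}
```

Even a simple pipeline like this removes the copy-and-paste step that introduces most data-quality issues during discovery.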

Automate Repetitive Tasks

During the migration implementation phase, there are many repetitive tasks that must be done frequently. For instance, if you're using AWS Application Migration Service (MGN), you'll need to install the agent on every server that's included in the migration. To handle these tasks efficiently and quickly, it's best to set up a migration factory tailored to your specific business and technical needs.
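
As one illustration of automating a repetitive task like agent installation, the sketch below uses AWS Systems Manager Run Command to push an installer to a batch of source servers. It assumes the servers are already managed by Systems Manager (for example, through hybrid activations), and the installer command is a placeholder; refer to the AWS Application Migration Service documentation for the actual installation command.

```python
# Hypothetical sketch: run an agent installer on a batch of source servers via
# SSM Run Command. The instance IDs and installer command are placeholders.
import boto3

ssm = boto3.client("ssm")

# Placeholder installer command; replace with the command from the MGN documentation.
INSTALL_COMMAND = "sudo ./aws-replication-installer-init --no-prompt"


def install_agent(instance_ids):
    response = ssm.send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunShellScript",
        Comment="Install migration agent for the current wave",
        Parameters={"commands": [INSTALL_COMMAND]},
    )
    # Return the command ID so the caller can poll for per-instance results.
    return response["Command"]["CommandId"]


if __name__ == "__main__":
    print(install_agent(["mi-0123456789abcdef0", "mi-0fedcba9876543210"]))
```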

A migration factory uses a standardized dataset to speed up the migration process. After identifying all the tasks involved, you can automate as many of the manual ones as possible with prescriptive runbooks. One example of a migration automation solution is the Cloud Migration Factory, which provides the foundations for automating aspects specific to your organization. For instance, you may want to update a flag in your CMDB to indicate that the on-premises servers can now be decommissioned.

You could create an automation script that performs this task at the end of the migration wave, and Cloud Migration Factory would provide the centralized metadata store with all the wave, application, and server metadata. This way, the automation script can connect to Cloud Migration Factory, retrieve a list of servers in that wave, and take appropriate actions. Additionally, Cloud Migration Factory supports AWS Application Migration Service, which can further streamline your migration process.
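
To make this concrete, here is a hedged sketch of such an end-of-wave script. The factory and CMDB endpoints, paths, and field names are invented for illustration; the real Cloud Migration Factory and your CMDB expose their own APIs.

```python
# Hypothetical end-of-wave automation: fetch the servers in a completed wave from
# the factory's metadata API and flag each source server in the CMDB for
# decommissioning. All endpoints and field names below are placeholders.
import requests

FACTORY_API = "https://migration-factory.example.com/api"  # placeholder
CMDB_API = "https://cmdb.example.com/api/servers"          # placeholder


def flag_wave_for_decommission(wave_id, token):
    headers = {"Authorization": f"Bearer {token}"}

    # Retrieve the list of servers that belonged to the completed wave.
    resp = requests.get(f"{FACTORY_API}/waves/{wave_id}/servers",
                        headers=headers, timeout=30)
    resp.raise_for_status()

    flagged = 0
    for server in resp.json():
        # Mark the on-premises record as ready for decommissioning.
        r = requests.patch(f"{CMDB_API}/{server['server_name']}",
                           json={"status": "ready_for_decommission"},
                           headers=headers, timeout=30)
        r.raise_for_status()
        flagged += 1
    return flagged
```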

 
Automate Tracking and Reporting to Speed Decision Making

To speed up decision-making during migration projects, it's important to have a system in place that tracks and reports live data to all stakeholders involved in the project. This includes teams such as application, testing, decommissioning, architecture, infrastructure, and leadership. Each team needs access to live data to perform their roles and make decisions. To achieve this, we recommend building an automated migration reporting dashboard that tracks and reports on key performance indicators (KPIs) for the program.

For example, network teams need to know the upcoming migration waves to understand the impact on the shared connection between on-premises resources and AWS, while leadership teams need to know how much of the migration is complete. By having a dependable, automated live feed of data, miscommunications can be prevented, and decisions can be made based on reliable information. A large healthcare customer was able to simplify tracking and communications while increasing the migration velocity by using Amazon QuickSight to build automated dashboards that visualized the data.
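
As a simple illustration, the sketch below computes per-wave completion KPIs from an assumed DynamoDB metadata table and publishes them as a CSV file that a dashboard dataset (for example, in QuickSight) can read. The table name, status values, and bucket are placeholders.

```python
# Hypothetical reporting job: compute per-wave completion KPIs from an assumed
# metadata table and publish them as a CSV for a dashboard to consume.
import csv
import io
from collections import defaultdict

import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

METADATA_TABLE = "migration-factory-intake"   # placeholder
REPORT_BUCKET = "migration-reporting-bucket"  # placeholder


def publish_wave_kpis():
    table = dynamodb.Table(METADATA_TABLE)
    totals, migrated = defaultdict(int), defaultdict(int)

    # A scan is fine for a sketch; paginate for larger estates.
    for item in table.scan()["Items"]:
        wave = item["wave_id"]
        totals[wave] += 1
        if item.get("status") == "migrated":
            migrated[wave] += 1

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["wave_id", "total_servers", "migrated_servers", "percent_complete"])
    for wave in sorted(totals):
        pct = round(100 * migrated[wave] / totals[wave], 1)
        writer.writerow([wave, totals[wave], migrated[wave], pct])

    s3.put_object(Bucket=REPORT_BUCKET, Key="kpis/wave_progress.csv",
                  Body=out.getvalue().encode("utf-8"))
```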

 
Explore Tooling that Can Facilitate Your Migration
​

When it comes to managing a large migration, selecting the right tools is crucial. However, choosing the right tools can be a challenge, especially if your organization lacks experience in managing large migrations. To ensure a successful migration, we recommend investing time in exploring the available tooling options to find the best fit for your specific needs. While some tools may come with a licensing cost, they can offer significant cost benefits in the long run. Additionally, you may find that your organization already has tooling in place that can support your migration. For example, your application performance monitoring tooling can provide valuable discovery information about your estate.

Prerequisites and Post-Migration Validation


Build the Landing Zone During the Pre-Migration Phase

To ensure a successful migration to AWS, it is recommended to build the target environment, or landing zone, ahead of time during the pre-migration phase. This means creating a well-designed and secure environment that includes monitoring, governance, and operational controls, among other things. By having the landing zone in place before the migration, you can minimize the risks and uncertainties that come with running your workloads in a new environment.

Instead of building the VPCs and subnets during the migration wave, focus on building and validating the landing zone. This approach will help you ensure that the environment is well-architected and meets your business and technical requirements. Once the landing zone is in place, you can then focus on migrating your workloads without worrying about managing the account or VPC-level aspects. By building the landing zone during the pre-migration phase, you can streamline the migration process and minimize disruptions to your business.
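
One way to make the landing-zone network repeatable and testable ahead of the migration waves is to define it as code. The sketch below (AWS CDK v2 in Python) is a minimal, assumption-laden example of a network stack; a real landing zone would also cover accounts, guardrails, logging, and governance, and the names used here are placeholders.

```python
# Minimal sketch: define a landing-zone VPC as code so it can be built and
# validated before the first migration wave. Stack and construct names are placeholders.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class LandingZoneNetworkStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The Vpc construct's defaults create public and private subnets in each AZ.
        ec2.Vpc(self, "MigrationVpc", max_azs=2)


app = App()
LandingZoneNetworkStack(app, "landing-zone-network")
app.synth()
```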
 
Outline Prerequisite Activities

To ensure a successful migration, it's crucial to outline the prerequisite activities that need to be completed before the migration takes place. Along with building the landing zone, it's essential to identify other technical prerequisites, especially those with a lengthy lead time, such as making necessary firewall changes. Communicating these requirements early on can help prepare and allocate the necessary resources, ensuring that the migration stays on track and meets the intended timeline.
 
Implement Post-Migration Checks for Continuing Improvement

To ensure continued improvement, it's equally important to implement post-migration checks. These checks can include operations integration, cost optimization, and governance and compliance checks, among others. The post-migration phase is an excellent opportunity to implement cost-control operations, such as using Amazon CloudWatch to assess instance utilization and determine whether a smaller-sized instance would be suitable.
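
As an example of a simple cost checkpoint, the sketch below pulls two weeks of average CPU utilization for a migrated instance from Amazon CloudWatch and flags it as a downsizing candidate if it stays under a chosen threshold. The threshold and lookback window are illustrative choices, not AWS guidance.

```python
# Post-migration cost check: flag instances whose average CPU utilization stays
# below a chosen threshold, suggesting a smaller instance size may be suitable.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

THRESHOLD_PERCENT = 10   # illustrative threshold
LOOKBACK_DAYS = 14       # illustrative lookback window


def is_downsize_candidate(instance_id):
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=LOOKBACK_DAYS),
        EndTime=end,
        Period=3600,              # hourly datapoints
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    if not datapoints:
        return False  # no data yet, so don't flag the instance
    average = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    return average < THRESHOLD_PERCENT
```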

A real-life example of the importance of the post-migration phase is a large technology customer who didn't include it initially. After migrating more than 100 servers, they discovered that the AWS Systems Manager Agent (SSM Agent) wasn't configured correctly, causing the migration to stall. Additionally, they found that the instances were much larger than initially estimated, which would have resulted in higher costs if left unchecked. As a result, the customer implemented a cost checkpoint at the end of each migration wave to avoid similar issues in the future.

Summary


Successful cloud migration projects require a holistic approach that considers people, process, and technology. In this article we have focused on the technology perspective of cloud migration, which is a critical aspect of any successful migration project. 

The automation of migration discovery, repetitive tasks, tracking, and reporting can significantly reduce the time and effort required to complete a migration project. By automating these aspects, migration projects can accelerate the migration process while aligning with the project's scope, strategy, and timelines. To ensure a successful migration, it is crucial to explore tooling that can facilitate the migration process.
 
​​In the next article, we will delve deeper into the process perspective and provide insights and best practices for navigating the procedural aspects of cloud migration.
0 Comments

Cloud Migration: Strategy and Best Practices Part 1

16/5/2023

​Migrating to the cloud can offer numerous benefits, including improved scalability, flexibility, and cost savings. However, the migration process can be complex and challenging, especially for organizations that are new to cloud computing. To ensure a successful migration, it's crucial to follow best practices and avoid common pitfalls.

Now, migrating at scale isn't just about the number of servers you're moving over. It also involves a whole host of complexities across people, process, and technology. This is part one of a three-part series of articles that dives deeper into cloud migration strategy and best practices from those three perspectives. In this article, we'll focus on the ‘people’ perspective of large cloud migration projects. By following these best practices, you can streamline the migration process, reduce risk, and maximize the benefits of cloud computing.

Strategy, Scope and Timeline

​
The success of any migration program relies on three key elements: scope, strategy, and timeline. These elements need to be aligned and understood from the very beginning of your migration program to set the stage for a successful journey. Any changes to one element will affect the others, so realignment should be factored in for every change, no matter how basic or sensible the change might seem.

Strategy - Why do You Want to Migrate?

There are various reasons why you might be planning to migrate to AWS. Regardless of your reasons, it's essential to understand what your drivers are, communicate them, and prioritize them. Each additional driver adds time, costs, scope, and risks to your migration program. Once you define your migration strategy, alignment of requirements across various stakeholders and teams is crucial for success.

Different teams like Infrastructure, Security, Application, and Operations need to work towards a single goal and align their priorities with a single timeline of migrations. We recommend exploring how the desired business outcomes can be aligned across the various teams to minimize friction and ensure a smooth migration.

Scope - What are You Migrating?

It's not uncommon for the total scope of a migration program to be undefined, even when you're already halfway through the migration. Unknowns like shadow IT or production incidents can pop up unexpectedly, causing delays and shifts in your plans. To avoid this, it's recommended to invest time in defining the scope, working backwards from your target business outcome. Using discovery tooling to uncover assets is a best practice that can help you define the scope. Be flexible and have contingency plans in place to keep the program moving forward, as the scope will inevitably change with large migrations.

Timeline - When do You Need to Complete the Migration?

Your migration program's timeline should be based on your business case and what's possible to achieve in the allocated time. If your driver for migrating is based on a fixed date of completion, you must choose the strategy that meets that timeline requirement. For these time-sensitive types of migrations, it's recommended to follow the "Migrate first, then modernize" approach. This helps set expectations and encourages teams to align their individual project plans and budgets with the overall migration goal. It's important to address any disagreements as early as possible in the project, fail fast, and engage the right stakeholders to ensure that alignment is in place.

On the other hand, if your main goal of migration is to gain the benefits of application modernization, this must be called out early in the program. Many programs start with an initial goal based on a fixed deadline, and they don't plan for the requirements from stakeholders who want to resolve outstanding issues and problems. It's important to note that modernization activities during a migration can affect the functionality of business applications. Even a seemingly small upgrade like an operating system version change can have a significant impact on the program timelines. Therefore, it's crucial not to consider these upgrades trivial and to plan accordingly.

Best Practices for Large Migrations


Migrating to the cloud can be a daunting task, especially for large organizations. The success of a large migration project depends on several factors that need to be addressed from the very beginning of the project. In this section, we will discuss some best practices for large migrations that are based on data from other customers. These practices are divided into three categories:
​
  • People
  • Technology
  • Processes
​

People Perspective


This section focuses on the following key areas of the people perspective:
​
  • Executive support: Identifying a single-threaded leader who’s empowered to make decisions
  • Team collaboration and ownership: Collaborating among various teams
  • Training: Proactively training teams on the various tooling

Executive support


Identify a Single-Threaded Leader

When it comes to large migrations, it's crucial to have the right people in place who can make informed decisions and ensure that the project stays on track. This involves identifying a single-threaded leader who is accountable for the project's success and empowered to make decisions. The leader should also help avoid silos and streamline work-streams by maintaining consistent priorities.

For instance, a global customer was able to scale from one server per week at the outset of the program to over 80 servers per week at the start of the second month. This was only possible due to the CIO's full support as a single-threaded leader. The CIO attended weekly migration cutover calls with the migration team to ensure real-time escalation and resolution of issues, which accelerated the migration velocity.

Align the Senior Leadership Team
​

Achieving alignment between teams regarding the success criteria of the migration is crucial. Although a small, dedicated team can handle migration planning and implementation, defining the migration strategy and carrying out peripheral activities can pose challenges that may require involvement from different areas of the IT organization.

These areas include business, applications, networking, security, infrastructure, and third-party vendors. In such cases, it is essential to have direct involvement from application owners and leadership, establish alignment, and create a clear escalation path to the single-threaded leader.

Team Collaboration and Ownership


Create a Cross-Functional Cloud-Enablement Team

To successfully migrate to the cloud, it's crucial to have a team that is focused on enabling the organization to work efficiently in the cloud. We recommend creating a Cloud Enablement Engine (CEE), which is a cross-functional team responsible for ensuring the organization's readiness for migrating to AWS. The CEE should include representation from various departments, including infrastructure, applications, operations, and security, and be accountable for developing policies, defining and implementing tools and processes, and establishing the organization's cloud operations model.

As the cutover date approaches, it is a good idea to set up a war room where stakeholders from different areas, such as infrastructure, security, applications, and business, can work together to resolve issues. This will enable the team to meet deadlines and successfully complete the migration.

Define Requirements for All Stakeholders

It's important to plan in advance for the involvement of teams and individuals who are not part of the core migration team. This involves identifying these groups and defining their role during the migration planning stages. Specifically, it's important to involve the application teams as they possess crucial knowledge of the applications, and their participation is needed to diagnose issues and sign off on the cutover.

This is where a RACI can be very useful. 
RACI is a popular project management and organizational tool used to clarify the roles and responsibilities of individuals or teams involved in a project or process. It helps ensure that everyone understands their assigned tasks and that accountability is clearly defined. The term "RACI" stands for Responsible, Accountable, Consulted, and Informed, which are the four key roles involved in the process.

While the core team will lead the migration, the application teams will likely play a role in validating the migration plan and testing during cutover. Many organizations view cloud migration as an infrastructure project, but it's important to recognize that it's also an application migration. Failing to involve application teams can lead to issues during the migration process.

When selecting a migration strategy, it's recommended to consider the application team's required involvement. For instance, a rehost strategy may require less application-team involvement compared to a replatform or refactor strategy, which involve more changes to the application landscape. If application owner availability is limited, it may be preferable to use a rehost or replatform strategy rather than refactor, relocate, or repurchase strategies.
 
Validate That There Are No Licensing Issues When Migrating Workloads
​

To avoid potential licensing issues when migrating workloads to the cloud, it is important to validate that the licenses will still be valid in the new environment. It is possible that licensing agreements may be focused on on-premises infrastructure, such as CPU or MAC address, or may not allow hosting in a public cloud environment. Renegotiating licensing agreements can be time-consuming and may delay the migration project.

​To prevent licensing issues, we suggest working with sourcing or vendor management teams as soon as the migration scope is defined. This can also impact the target architecture and migration strategy, so it is important to take licensing into account during the planning phase.
 

Training


Train Teams on New Tooling and Processes

After defining the migration strategy, it's important to assess what training is required for both the migration and the target operating model. Unfamiliarity with new tooling, such as AWS Database Migration Service, can cause delays during the migration, so it's recommended to provide hands-on training to teams. Automation is also key to accelerating large migrations.

Summary


Large-scale migration to the cloud requires a well-defined strategy, scope and timeline. This includes understanding the business drivers for the migration, identifying the workloads to be migrated, and developing a roadmap for the migration process.

In addition, successful cloud migration projects require a holistic approach that considers people, process, and technology. While it's important to have the right technology and processes in place, it's equally crucial to focus on the people involved in the migration. This includes identifying and engaging with stakeholders, establishing clear communication channels, and providing adequate training and support for employees.

In this article we have focused on the people perspective of cloud migration, which is a critical aspect of any successful migration project. We have discussed the importance of establishing a clear scope and strategy for the migration project, as well as setting realistic timelines to ensure a smooth transition.
​

In future articles, we will delve deeper into these areas and provide insights and best practices for navigating the technical and procedural aspects of cloud migration.​​​

​Achieving Cloud Excellence with the AWS Well-Architected Framework

12/5/2023

The AWS Well-Architected Framework is a set of best practices and guidelines for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It offers a structured approach to evaluating and improving existing architectures and planning new ones.

By following the guidance provided by the AWS Well-Architected Framework, businesses can optimize their cloud infrastructure, improve their applications, and reduce operational costs. In this article, we will explore each of the framework's six pillars in detail and learn how to apply their principles to your cloud architecture.

Consistent use of the framework ensures that your operations and architectures are aligned with industry best practices, enabling you to identify areas for improvement. We believe that adopting a Well-Architected approach that incorporates operational considerations significantly improves the likelihood of business success.

Here are the six pillars on which the AWS Well-Architected Framework is based. An easy way to remember them is the acronym PSCORS:

  • P - Performance Efficiency
  • S - Security
  • C - Cost Optimization
  • O - Operational Excellence
  • R - Reliability
  • S - Sustainability

 
Now that we have introduced the AWS Well-Architected Framework, let's dive deeper into the six pillars that form the basis of this framework. Each pillar covers a different aspect of building and running workloads in the cloud and provides a set of best practices and guidelines to help you improve the overall quality of your workloads. Let's explore each pillar in more detail to gain a better understanding of how they can help you achieve operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability.

​Performance Efficiency Pillar


The performance efficiency pillar aims to optimize the allocation of IT and computing resources by providing a structured and streamlined approach. It involves selecting the appropriate resource types and sizes to meet workload requirements, monitoring performance, and maintaining efficiency as business needs evolve.

Design Principles

To achieve and maintain efficient workloads in the cloud, consider the following design principles:
​
  • Democratize advanced technologies: Allow your team to focus on product development by delegating complex tasks to your cloud vendor. Rather than asking your IT team to learn about hosting and running a new technology, consider consuming the technology as a service. For instance, NoSQL databases, media transcoding, and machine learning are specialized technologies that become services in the cloud, allowing your team to consume them.
  • Go global in minutes: Deploying your workload in multiple AWS Regions around the world provides lower latency and a better experience for your customers at minimal cost.
  • Use serverless architectures: Serverless architectures remove the need to run and maintain physical servers for traditional compute activities. For instance, serverless storage services can act as static websites (eliminating the need for web servers) and event services can host code. This eliminates the operational burden of managing physical servers and lowers transactional costs because managed services operate at cloud scale.
  • Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations.
  • Consider mechanical sympathy: Mechanical sympathy is when you use a tool or system with an understanding of how it operates best. When you understand how a system is designed to be used, you can align with the design to gain optimal performance. Choose the technology approach that aligns best with your goals. For example, consider data access patterns when selecting database or storage approaches. 

    "You don't have to be an engineer to be be a racing driver, but you do have to have Mechanical Sympathy. "
                                                                                       Jackie Stewart, racing driver


The performance efficiency pillar focuses on optimizing IT and computing resources for workload requirements. By leveraging advanced technologies as services, adopting serverless architectures, going global in minutes, experimenting more often, and selecting the technology approach that aligns best with your goals, you can improve performance, lower costs, and increase efficiency in the cloud. Following these principles can help you achieve and maintain efficient workloads that scale with your business needs.

​​Security Pillar


The security pillar focuses on safeguarding systems and data. It includes topics like data confidentiality, integrity, availability, permission management, and establishing controls to detect security events. The security pillar offers guidance for architecting secure workloads on AWS by utilizing cloud technologies to improve the security posture. 

Design Principles

To strengthen the security of workloads, there are several design principles that AWS recommends:
​
  • Implement a strong identity foundation: Implement the principle of least privilege (POLP) and separation of duties to authorize each interaction with AWS resources. Centralize identity management, and avoid using long-term static credentials.
  • Maintain traceability: Monitor, alert, and audit actions and changes to the environment in real-time to maintain traceability. Integrate log and metric collection with systems to automatically investigate and take action.
  • Apply security at all layers: Apply a defense-in-depth approach with multiple security controls at all layers, including the edge of the network, VPC, load balancing, every instance and compute service, operating system, application, and code.
  • Automate security best practices: Automate software-based security mechanisms to improve the ability to securely scale more rapidly and cost-effectively. Create secure architectures by implementing controls that are defined and managed as code in version-controlled templates.
  • Protect data in transit and at rest: Classify data into sensitivity levels and use mechanisms such as encryption, tokenization, and access control where appropriate to protect data in transit and at rest.
  • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data to avoid human error when handling sensitive data.
  • Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align with organizational requirements. Run incident response simulations and use tools with automation to increase the speed of detection, investigation, and recovery.

By following the design principles discussed above, you can take advantage of cloud technologies to strengthen your workload security and reduce the risk of security incidents. These principles provide in-depth, best-practice guidance for architecting secure workloads on AWS. It is important to continuously review and improve your security posture to protect your data and systems from potential threats.

Cost Optimization Pillar


The cost optimization pillar focuses on controlling fund allocation, selecting the right type and quantity of resources, and scaling efficiently to meet business needs without incurring unnecessary costs. To achieve financial success in the cloud, it is crucial to understand spending over time and invest in cloud financial management.

Design Principles

To achieve cost optimization, consider the following design principles:
​
  • Implement cloud financial management: Build capability through knowledge building, programs, resources, and processes to become a cost-efficient organization.
  • Adopt a consumption model: Pay only for the computing resources you consume and increase or decrease usage based on business requirements.
  • Measure overall efficiency: Measure the business output of the workload and the costs associated with delivering it. Use this data to understand the gains you make from increasing output and functionality and from reducing cost.
  • Stop spending on undifferentiated heavy lifting: AWS removes the operational burden of managing infrastructure, allowing you to focus on your customers and business projects.
  • Analyze and attribute expenditure: Use cloud tools to accurately identify the cost and usage of workloads and attribute IT costs to revenue streams and individual workload owners. This helps measure ROI and optimize resources to reduce costs.

The cost optimization pillar is focused on minimizing unnecessary spending while ensuring that computing resources are allocated optimally. By investing in cloud financial management and adopting a consumption model, organizations can significantly reduce costs while maintaining efficiency. Measuring overall efficiency, stopping spending on undifferentiated heavy lifting, and analyzing and attributing expenditure can also contribute to achieving cost optimization.

​Operational Excellence Pillar


The operational excellence pillar within the AWS Well-Architected Framework is focused on running and monitoring systems, and continuously improving processes and procedures. This includes automating changes, responding to events, and defining standards to manage daily operations.

AWS defines operational excellence as a commitment to building software correctly while consistently delivering a great customer experience. It includes best practices for organizing teams, designing workloads, operating them at scale, and evolving them over time. By implementing operational excellence, teams can spend more of their time building new features that benefit customers, and less time on maintenance and firefighting.

​The ultimate goal of operational excellence is to get new features and bug fixes into customers' hands quickly and reliably. Organizations that invest in operational excellence consistently delight customers while building new features, making changes, and dealing with failures. Along the way, operational excellence drives towards continuous integration and continuous delivery (CI/CD) by helping developers achieve high-quality results consistently.


Design Principles

The following are the design principles for operational excellence in the cloud:
​
  • Perform operations as code: Apply the same engineering discipline you use for application code to your entire cloud environment. Define the whole workload (applications, infrastructure, and so on) as code and update it with code. Script operational procedures and automate them by launching them in response to events. Performing operations as code limits human error and creates consistent responses to events.
  • Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly, which increases the flow of beneficial changes into the workload. Make changes in small increments that can be reversed if they fail, aiding in the identification and resolution of issues introduced to the environment without affecting customers when possible.
  • Refine operational procedures frequently: As operational procedures are used, teams should look for opportunities to improve them. As the workload evolves, procedures should be evolved appropriately. Regular game days should be set up to review and validate that all procedures are effective, and teams are familiar with them.
  • Anticipate failure: Perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test failure scenarios and validate your understanding of their impact. Test response procedures to ensure they are effective and that teams are familiar with them. Set up regular game days to test workload and team responses to simulated events.
  • Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.

Operational excellence focuses on achieving a great customer experience by building software correctly, delivering new features and bug fixes quickly and reliably, and investing in continuous improvement. The design principles for operational excellence in the cloud are focused on performing operations as code, making frequent small reversible changes, refining operational procedures, anticipating failure, and learning from all operational failures. 

Reliability Pillar


The reliability pillar of AWS focuses on ensuring that workloads perform their intended functions and can recover quickly from failures. This section covers topics such as distributed system design, recovery planning, and adapting to changing requirements to help you achieve reliability.

Traditional on-premises environments can pose challenges to achieving reliability due to single points of failure, lack of automation, and lack of elasticity. By adopting the best practices outlined in this section, you can build architectures with strong foundations, resilient design, consistent change management, and proven failure recovery processes.

Design Principles

Here are some design principles that can help increase the reliability of your workloads:
​
  • Automatically recover from failure: Monitor key performance indicators (KPIs) to run automation when a threshold is breached. Use KPIs that measure business value, not just the technical aspects of the service's operation. This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure; a minimal alarm sketch follows this list.
  • Test recovery procedures: In the cloud, you can test how your workload fails and validate your recovery procedures. You can use automation to simulate different failures or recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, reducing risk.
  • Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to ensure they don't share a common point of failure.
  • Stop guessing capacity: In the cloud, you can monitor demand and workload utilization and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning.
  • Manage change through automation: Changes to your infrastructure should be made using automation. Manage changes to the automation, which can be tracked and reviewed.
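
As a minimal illustration of the first principle above, the sketch below creates a CloudWatch alarm on an assumed business KPI (orders processed per minute, published as a custom metric) and points it at an SNS topic that could trigger a recovery runbook. The metric name, namespace, and topic ARN are placeholders.

```python
# Minimal sketch: alarm on an assumed business KPI and notify an SNS topic that
# triggers automated recovery. Metric name, namespace, and ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-per-minute-low",
    Namespace="MyCompany/Business",             # placeholder custom namespace
    MetricName="OrdersProcessedPerMinute",      # placeholder business KPI
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=10,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",               # missing data also signals trouble
    AlarmActions=[
        "arn:aws:sns:eu-west-1:123456789012:recovery-runbook"  # placeholder topic
    ],
)
```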
 
The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when expected to.

​Sustainability Pillar


The sustainability pillar aims to decrease the environmental impact of cloud workloads through a shared responsibility model, impact evaluation, and maximizing utilization to minimize required resources and reduce downstream impacts.

Design Principles

The following design principles can be applied to enhance sustainability and minimize impact when creating cloud workloads:

  • Understand the impact: Measure the impact of cloud workloads and forecast future impact by including all sources of impact. Compare productive output to total impact and use this data to establish key performance indicators (KPIs), improve productivity, and evaluate the impact of proposed changes over time.
  • Establish sustainability goals: Set long-term sustainability goals for each cloud workload and model the return on investment of sustainability improvements. Plan for growth and design workloads for reduced impact intensity per user or transaction.
  • Maximize utilization: Optimize workloads to ensure high utilization and maximize energy efficiency by eliminating idle resources, processing, and storage.
  • Anticipate and adopt new hardware and software: Monitor and evaluate new, more efficient hardware and software offerings and design for flexibility to allow rapid adoption of new technologies.
  • Use managed services: Adopt shared services to reduce the infrastructure needed to support cloud workloads, such as AWS Fargate for serverless containers and Amazon S3 Lifecycle configurations for infrequently accessed data.
  • Reduce downstream impact: Decrease the energy or resources required to use cloud services and eliminate the need for customers to upgrade their devices by testing with device farms.

The sustainability pillar of cloud computing focuses on reducing the environmental impact of running cloud workloads. By applying the design principles outlined, cloud architects can maximize sustainability and minimize impact. It is important to understand the impact of cloud workloads, establish sustainability goals, maximize utilization, anticipate and adopt new, more efficient hardware and software offerings, use managed services, and reduce the downstream impact of cloud workloads. Adopting these practices can help businesses and organizations support wider sustainability goals, identify areas of potential improvement, and reduce their overall environmental footprint.

Summary


In conclusion, the AWS Well-Architected Framework is a valuable resource for organizations looking to build and optimize their cloud infrastructure. By following the best practices outlined in the framework, businesses can improve their system's reliability, security, performance efficiency, cost optimization, and operational excellence. Regularly reviewing and updating your architecture based on the AWS Well-Architected Framework can help ensure that your system is scalable, efficient, and cost-effective. With the flexibility and scalability of the cloud, organizations can achieve their goals faster and more efficiently than ever before, and the AWS Well-Architected Framework provides a solid foundation to achieve that success.

Navigating Cloud Migration: Choosing the Right Strategy

3/5/2023

​Organizations are increasingly moving to the cloud due to a variety of factors, including the need for greater agility, scalability, cost-efficiency, and improved security. Cloud providers offer businesses access to a wide range of services and resources that can be quickly provisioned and scaled to meet changing business needs. 

Cloud migration is the process of moving an organization's data, applications, and other digital assets from on-premises infrastructure to a cloud computing environment. Migrating to the cloud offers many benefits, including greater flexibility, scalability, security, and cost savings. However, there are many different cloud migration strategies to choose from, each with its own unique set of benefits and challenges. 

When we talk about migrating a workload to the Cloud, we're referring to the process of moving an application and the infrastructure it runs on from its source environment to a cloud platform. In this article, we'll focus on the migration strategies for the AWS Cloud.

There are seven migration strategies that we call the 7 Rs, which are:

  • Retire
  • Retain
  • Rehost
  • Relocate
  • Repurchase
  • Replatform
  • Refactor or re-architect
 
It's really important to select the right migration strategies for a large migration. You might have already selected the strategies during the mobilize phase or during the initial portfolio assessment. In this section, we'll go over each migration strategy and its common use cases.

Retire


​Retire is the strategy we use for applications that we want to decommission or archive. This means that we can shut down the servers within that application stack. Here are some common use cases for the retire strategy:
​
  • There's no business value in retaining the application or moving it to the cloud.
  • We want to eliminate the cost of maintaining and hosting the application.
  • We want to reduce the security risks of operating an application that uses an operating system (OS) version or components that are no longer supported.
  • We might want to retire applications based on their performance. Applications with average CPU and memory usage below 5 percent are known as zombie applications, and those with average CPU and memory usage between 5 and 20 percent over a period of 90 days are known as idle applications. To identify zombie and idle applications, we can use the utilization and performance data from our discovery tool; a simple classification sketch follows this list.
  • Finally, if there has been no inbound connection to the application for the last 90 days, we might consider retiring it.
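
A simple way to apply the performance thresholds mentioned above to discovery-tool output is sketched below. The field names and the reading of the thresholds are assumptions about what your discovery data provides.

```python
# Classify servers from discovery data as zombie, idle, or active based on the
# thresholds described above. Field names are assumed discovery-tool output.
def classify_server(avg_cpu_percent, avg_memory_percent):
    usage = max(avg_cpu_percent, avg_memory_percent)
    if usage < 5:
        return "zombie"   # strong retire candidate
    if usage < 20:
        return "idle"     # review for retirement
    return "active"


servers = [
    {"name": "app-db-01", "avg_cpu": 1.2, "avg_mem": 3.4},
    {"name": "web-02", "avg_cpu": 11.0, "avg_mem": 18.5},
    {"name": "erp-01", "avg_cpu": 42.7, "avg_mem": 61.0},
]

for s in servers:
    print(s["name"], classify_server(s["avg_cpu"], s["avg_mem"]))
```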

Retain


If you've got apps that you're not quite ready to migrate or that you want to keep in your source environment, the Retain strategy is your go-to. You might decide to migrate these apps at a later time, but for now, you want to keep them right where they are.

Here are some common scenarios where the Retain strategy is a good choice:
​
  • Security and compliance: If you need to comply with data residency requirements, you might want to keep certain apps in your source environment.
  • High risk: If an app is particularly complex or has a lot of dependencies, you might want to keep it in your source environment until you can make a detailed migration plan.
  • Dependencies: If an app depends on other apps that need to be migrated first, you might want to retain it until those other apps are in the cloud.
  • Recently upgraded: If you just invested in upgrading your current system, you might want to wait until the next technical refresh to migrate the app.
  • No business value to migrate: For apps with only a few internal users, it might not make sense to migrate them to the cloud.
  • Plans to migrate to software as a service (SaaS): If you're planning to move to a vendor-based SaaS solution, you might want to keep an app in your source environment until the SaaS version is available.
  • Unresolved physical dependencies: If an app is dependent on specialized hardware that doesn't have a cloud equivalent, such as machines in a manufacturing plant, you might want to retain it.
  • Mainframe or mid-range apps and non-x86 Unix apps: These apps require careful assessment and planning before migrating to the cloud. Examples include IBM AS/400 and Oracle Solaris.
  • Performance: If you want to keep zombie or idle apps in your source environment based on their performance, the Retain strategy is a good choice.

Rehost

​
Rehosting, also called "lift and shift", means moving your application stack from your source environment to the Cloud without making any changes to the application itself. This means you can quickly migrate your applications from on-premises or other cloud platforms without worrying about compatibility or performance disruptions.

With rehosting, you can migrate a large number of machines, whether physical, virtual, or hosted on another cloud platform, to the Cloud without long cutover windows. This helps minimize disruption to your business and your customers; the length of any downtime depends on your cutover strategy.

The rehosting strategy lets you scale your applications without making any cloud optimizations, which means you don't have to spend time or money making changes to your applications before migration. Once your applications are running in the cloud, you can optimize or re-architect them more easily and integrate them with other cloud services.
​

With regards to AWS Cloud, you can make the migration process even smoother by automating the rehosting process using services such as:

  • AWS Application Migration Service
  • AWS Cloud Migration Factory Solution​

Relocate


If you are looking to transfer a large number of servers or instances from your on-premises platform to a cloud version of the platform, the relocate strategy could be the right choice for you. With this strategy, you can move one or more applications to a different virtual private cloud (VPC), AWS Region, or AWS account. For instance, you can transfer servers in bulk from a VMware software-defined data centre (SDDC) to VMware Cloud on AWS, or move an Amazon Relational Database Service (Amazon RDS) DB instance to another VPC or AWS account.

The relocate strategy is great because you don't have to buy new hardware, rewrite applications, or modify your existing operation. During relocation, your application will keep serving users, which means you'll experience minimal disruption and downtime. In fact, it's the quickest way to migrate and operate your workload in the cloud because it won't affect the overall architecture of your application.

​Repurchase


Repurchasing your application is a migration strategy that involves replacing your existing on-premises application with a different version or product. This new application should offer more business value than the existing one, such as accessibility from anywhere, no infrastructure maintenance, and pay-as-you-go pricing models. This strategy can help reduce costs associated with maintenance, infrastructure, and licensing.

Here are some common use cases for the repurchase migration strategy:
  • Moving from traditional licenses to Software-as-a-Service (SaaS) to remove the burden of managing and maintaining infrastructure and reduce licensing issues.
  • Upgrading to the latest version or third-party equivalent of your existing on-premises application to leverage new features, integrate with cloud services, and scale the application more easily.
  • Replacing a custom application by repurchasing a vendor-based SaaS or cloud-based application to avoid recoding and re-architecting the custom application.

Before purchasing the new application, you should assess it based on your business requirements, particularly security and compliance. After purchasing the new application, here are the next steps:
​
  • Training your team and users on the new system
  • Migrating your data to the newly purchased application
  • Integrating the application into your authentication services, such as Microsoft Active Directory, to centralize authentication
  • Configuring networking to help secure communication between the purchased application, your users, and your infrastructure

Typically, the application vendor assists with these activities for a smooth transition. 

Replatform


Replatforming, also known as lift, tinker, and shift or lift and reshape, is a migration strategy where you move your application to the cloud and introduce some level of optimization to operate it more efficiently, reduce costs, or take advantage of cloud capabilities. For instance, you can move a Microsoft SQL Server database to Amazon RDS for SQL Server.

With the replatform strategy, you can make minimal or extensive changes to your application, depending on your business goals and your target platform. Here are some common use cases for replatforming:
​
  • If you want to save time and reduce costs, you can move to a fully managed or serverless service in the AWS Cloud.
  • To improve your security and compliance posture, you can upgrade your operating system to the latest version using the End-of-Support Migration Program (EMP) for Windows Server. This program lets you migrate your legacy Windows Server applications to the latest supported versions of Windows Server on AWS, without any code changes.
  • You can also reduce costs by using AWS Graviton Processors, custom-built processors developed by AWS.
  • If you want to cut costs by moving from a Microsoft Windows operating system to a Linux operating system, you can port your .NET Framework applications to .NET Core, which can run on a Linux operating system. You can use the Porting Assistant for .NET analysis tool to help you with this.
  • You can also improve performance by migrating virtual machines to containers, without making any code changes. By using the AWS App2Container migration tool, you can modernize your .NET and Java applications into containerized applications.

​The replatform strategy allows you to keep your legacy application running without compromising security and compliance. It reduces costs and improves performance by migrating to a managed or serverless service, moving virtual machines to containers, and avoiding licensing expenses.

Refactor or Re-architect


Refactoring or re-architecting is a cloud migration strategy that involves moving an application to the cloud and making changes to its architecture to take full advantage of cloud-native features. This is done to improve agility, performance, and scalability, and is often driven by business demands to scale, release products and features faster, and reduce costs.

Here are some common use cases for the refactor migration strategy:
​
  • Your legacy mainframe application can no longer meet the demands of the business due to its limitations or is too expensive to maintain.
  • You have a monolithic application that is slowing down product delivery and cannot keep up with customer needs and demands.
  • You have a legacy application that nobody knows how to maintain, or the source code is not available.
  • The application is difficult to test, or test coverage is low, which affects the quality and delivery of new features and fixes. Redesigning the application for the cloud can help increase test coverage and integrate automated testing tools.
  • For security and compliance reasons, you might need to move a database to the cloud, but need to extract some tables (such as customer information, patient, or patient diagnosis tables) and keep those tables on premises. In this scenario, you will need to refactor your database to separate the tables that will be migrated from those that will be kept on premises.

By refactoring your application, you can take advantage of cloud-native features to improve performance, scalability, and agility. This strategy is particularly useful when your legacy application can no longer meet your business needs or is too costly to maintain. 

Summary


Cloud migration is a complex process that requires careful planning and consideration of various factors. As discussed in this article, there are seven strategies that organizations can use to migrate their applications to the cloud: retire, retain, rehost, relocate, repurchase, replatform, and refactor or re-architect. Each strategy has its own benefits and drawbacks, and the choice of strategy will depend on the specific needs of the organization.

While the benefits of cloud migration are many, including improved scalability, agility, and cost savings, it's important to approach the process with caution and to take a strategic approach. A successful cloud migration requires a clear understanding of the business goals and requirements, as well as careful consideration of security, compliance, and data protection.
​

Organizations that are considering a cloud migration should seek guidance from experienced cloud migration specialists and take advantage of the many tools and resources that are available to help simplify the process. With careful planning and the right strategy, cloud migration can be a powerful tool for driving innovation, improving efficiency, and delivering real value to the organization and its customers.

​An Introduction to Serverless Architecture

26/4/2023

​​Serverless architecture has emerged as a popular approach for building modern, event-driven applications in the cloud. By abstracting away the underlying infrastructure, serverless architecture allows developers to focus on writing code and defining the business logic of their applications, rather than managing complex infrastructure themselves. 

Serverless architecture is a relatively new concept, with the first serverless platform, AWS Lambda, being introduced by Amazon Web Services in 2014. However, the ideas behind serverless architecture have been around for some time, and the term "serverless" was coined in 2012.

The primary problem that serverless architecture was designed to address is the challenge of managing and scaling infrastructure for modern, cloud-native applications. Traditional hosting models often require users to provision and manage servers, storage, and networking infrastructure, which can be complex and time-consuming. This can lead to a high degree of operational overhead and can be a significant barrier to rapid application development and deployment.
​
Serverless architecture aims to simplify the management of infrastructure by abstracting away the underlying hardware and networking layers, allowing developers to focus on writing code and defining the business logic of their applications. In this model, the cloud provider handles the scaling and provisioning of computing resources, which can be allocated dynamically based on the needs of the application.

What Exactly Is Serverless Architecture?

A serverless architecture is a cloud computing model in which the cloud provider manages and allocates computing resources automatically, as needed by the application, without the user having to manage the infrastructure. In a serverless architecture, the user writes and deploys functions, often called "serverless functions," that are executed by the cloud provider in response to events, such as user requests or scheduled tasks. These functions are designed to perform a specific task, such as processing data, accessing a database, or responding to an HTTP request.
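
For example, a minimal serverless function that responds to an HTTP request might look like the following AWS Lambda handler written in Python. The event shape shown is the API Gateway proxy integration format, and the greeting logic is purely illustrative.

```python
# Minimal serverless function: an AWS Lambda handler that responds to an HTTP
# request delivered through API Gateway (proxy integration event format).
import json


def lambda_handler(event, context):
    # Query-string parameters arrive in the event; default if none were sent.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return an HTTP-style response that API Gateway maps back to the caller.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The cloud provider runs this function only when a request arrives and scales it automatically, which is what removes the server-management burden described above.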

Some of the components of a typical serverless architecture include:
​
  • Serverless Functions: These are small, single-purpose units of code that are designed to be triggered by events and perform a specific task. 
  • Event Sources: Events that trigger the execution of serverless functions. Event sources can include user requests, database changes, file uploads, etc.
  • Compute Services: The cloud provider offers a compute service, such as AWS Lambda, Azure Functions, or Google Cloud Functions, that is responsible for running serverless functions in response to events. 
  • Data Storage: Serverless architectures often require data storage, such as a database or file storage service. 
  • API Gateway: Serverless applications often expose APIs that can be used to access their functionality. API gateways provide a centralized point of access to these APIs and can handle authentication, authorization, and other security-related tasks.
  • Monitoring and Logging: Serverless architectures require monitoring and logging to track the performance and usage of functions, detect errors and anomalies, and troubleshoot issues. 

Overall, a serverless architecture is a highly scalable and cost-effective way to build modern, event-driven applications that can be deployed quickly and easily. Some popular serverless platforms include AWS Lambda, Azure Functions, and Google Cloud Functions, but there are many other providers and frameworks available.

Benefits of a Serverless Architecture
​
  • Scalability: Serverless architectures are highly scalable, as they can automatically scale up or down based on the number of requests or events processed by the application. This means that applications can handle sudden spikes in traffic or usage without the need for manual intervention or provisioning of additional resources.
  • Cost-Effectiveness: Serverless architectures can be cost-effective, as users only pay for the actual usage of the application, rather than having to pay for fixed amounts of infrastructure capacity. This can lead to significant cost savings, especially for applications that experience unpredictable or highly variable levels of usage.
  • Reduced Operational Overhead: Serverless architectures can reduce the operational overhead of managing infrastructure, as the cloud provider takes care of the underlying infrastructure and scaling. This allows developers to focus on writing code and defining the business logic of their applications.
  • Faster Time to Market: Serverless architectures can enable faster time to market, as developers can deploy changes and new features quickly and easily, without having to manage complex infrastructure or worry about scaling issues.
​​Challenges of a Serverless Architecture

  • Cold Start Latency: Serverless functions can experience cold start latency, which is the time it takes to initialize a function's execution environment when it is first invoked. This can result in slower response times for the first request or event, which can be a problem for applications that require low latency (a common mitigation is sketched after this list).
  • Limited Control: Serverless architectures offer limited control over the underlying infrastructure, which can be a problem for applications that require fine-grained control over the hardware or network configuration.
  • Vendor Lock-In: Serverless architectures can result in vendor lock-in, as users are dependent on the cloud provider's platform and services. This can make it difficult to switch providers or move the application to a different platform.
  • Debugging and Testing: Serverless architectures can be more challenging to debug and test, as functions are typically deployed in isolation and may interact with other functions or services asynchronously.
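As mentioned in the cold start point above, one widely used way to soften cold start latency is to perform expensive initialization outside the function handler, so that clients and connections created at cold start are reused by every subsequent invocation in the same execution environment. The sketch below assumes Python on AWS Lambda with the boto3 library and a hypothetical DynamoDB table name supplied through an environment variable.

    import os
    import boto3

    # Created once per execution environment (at cold start) and then reused
    # by every invocation that the environment handles.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))

    def lambda_handler(event, context):
        # Per-invocation work only; the client setup above is not repeated.
        item_id = event.get("id", "unknown")
        response = table.get_item(Key={"id": item_id})
        return response.get("Item", {})

Other common mitigations include keeping deployment packages small and, where the platform supports it, paying for pre-warmed capacity such as AWS Lambda's provisioned concurrency.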

Serverless architecture provides a cost-effective and scalable approach for building event-driven applications in the cloud. However, it also presents challenges such as cold start latency and limited control over infrastructure. To address these challenges, developers must follow best practices when designing and deploying serverless applications. By doing so, they can take advantage of the benefits of serverless architecture while minimizing its drawbacks, resulting in highly performant and scalable applications.

Hyperscalers and Telecoms

14/4/2023

Hyperscale cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, have been expanding their services beyond the traditional realm of cloud computing and into the telecoms industry.

These providers have recognized the growing demand for high-speed, low-latency networks that are necessary for emerging technologies like 5G, Internet of Things (IoT), and artificial intelligence (AI).

To enter the telecoms business, hyperscale cloud providers are leveraging their expertise in cloud computing, data analytics, and artificial intelligence to offer a range of services to telecom companies. These services include network virtualization, edge computing, and analytics that enable telecom companies to offer new services, reduce operating costs, and improve the overall customer experience.

One of the key advantages of hyperscale cloud providers entering the telecoms business is their ability to scale their services quickly and efficiently. With their vast resources and global infrastructure, these providers can offer telecom companies the ability to rapidly expand their networks, improve performance, and reduce costs.

In addition, hyperscale cloud providers are also investing heavily in developing new technologies that can be used in the telecoms industry. For example, AWS has launched its Wavelength service, which enables developers to build applications that run on 5G networks with ultra-low latency. Similarly, Microsoft Azure has partnered with telecom companies to develop solutions that leverage its AI capabilities to enhance network performance and security.

Overall, the entry of hyperscale cloud providers into the telecoms industry is likely to drive significant innovation and change. By leveraging their expertise and resources, these providers can help to accelerate the development of new technologies and services that will benefit both telecom companies and their customers.

There are several key challenges that hyperscale cloud providers face when moving into the telecoms market:
​
  • Regulatory compliance: Telecommunications is a heavily regulated industry, and hyperscale cloud providers must comply with a range of regulations related to privacy, security, and data protection. These regulations vary by country and region, which can make it challenging for hyperscale cloud providers to navigate the regulatory landscape.
  • Network infrastructure: While hyperscale cloud providers have extensive cloud infrastructure, they may not have the same level of physical network infrastructure as telecom companies. To enter the telecoms market, hyperscale cloud providers need to build or partner with existing telecom companies to expand their network infrastructure.
  • Competition from existing players: The telecoms market is highly competitive, and hyperscale cloud providers must compete with established players that have deep expertise in the industry. These companies also have strong customer relationships and established networks, which can make it challenging for hyperscale cloud providers to gain market share.
  • Technical challenges: The telecoms market requires specialized technical expertise, such as low latency networking and real-time data processing. Hyperscale cloud providers may need to invest in new technologies and expertise to meet these requirements.
  • Business models: The business models of hyperscale cloud providers may differ from those of telecom companies, which can create challenges when trying to align pricing models, revenue sharing, and other commercial terms.

Overall, hyperscale cloud providers face significant challenges when moving into the telecoms market. However, with their expertise in cloud computing, data analytics, and AI, and their vast resources, they are well positioned to bring innovation and disruption to the industry.

Web-scale or Hyper-scale Provider?

13/4/2023

​​Web-scale and hyper-scale are two terms used to describe the size and capabilities of cloud computing providers. While both types of providers offer large-scale cloud computing resources, there are some key differences between them.
​
Web-scale providers typically offer a more modestly sized cloud infrastructure compared to hyper-scale providers. They are generally more focused on serving the needs of mid-sized businesses and startups, with their resources being sufficient for hosting small to medium-sized workloads.

Hyper-scale providers, on the other hand, offer a massive and highly scalable infrastructure that can support huge amounts of data and massive workloads. They are capable of handling the most demanding and complex cloud computing requirements for large enterprises, governments, and other organizations that require a high degree of scalability and reliability.
​
Hyper-scale providers typically have a more extensive network of data centers located across different regions, making it easier for customers to access their services from anywhere in the world. They also offer a broader range of services, including machine learning and AI tools, advanced security features, and a wider variety of storage and database options.

Overall, the main difference between web-scale and hyper-scale providers is the scale and complexity of their infrastructure. While web-scale providers may be more suitable for small to medium-sized workloads, hyper-scale providers offer the most extensive and powerful cloud computing capabilities available, suitable for the most demanding workloads and applications.


Here are some examples of both types of cloud providers:

Web-scale cloud providers:
  • DigitalOcean
  • Cloudflare
  • Linode
  • Rackspace
  • Vultr
  • Heroku
  • Joyent
  • OVHCloud
  • Backblaze

Hyperscale cloud providers:
​
  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud Platform (GCP)
  • IBM Cloud
  • Oracle Cloud
  • Alibaba Cloud

Overall, the choice between web-scale and hyperscale providers depends on the specific needs of the business or organization: web-scale providers are often a good fit for smaller workloads, simpler architectures, and tighter budgets, while hyperscale providers suit organizations that need global reach, a broad service catalogue, and the capacity to run the most demanding applications.

An Introduction to Quantum Computing

3/3/2023

​Quantum computing is a type of computing that uses quantum bits, or qubits, instead of classical bits used in classical computing. Qubits are quantum mechanical systems, such as atoms or electrons, that can be in multiple states at once, allowing for multiple calculations to be performed simultaneously.

​In classical computing, information is stored as bits, each of which has a value of either 0 or 1. A qubit, by contrast, can exist in a superposition of 0 and 1, and multiple qubits can be entangled, which lets quantum algorithms work with many possible states at once.

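As a rough illustration of superposition, the sketch below uses ordinary Python with NumPy to represent a single qubit as a two-element state vector, apply a Hadamard gate to place it in an equal superposition of 0 and 1, and compute the resulting measurement probabilities. This is a classical simulation intended purely for intuition, not an example of running on real quantum hardware.

    import numpy as np

    # Computational basis states |0> and |1> as state vectors.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
    H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                     [1, -1]], dtype=complex)

    state = H @ ket0  # now (|0> + |1>) / sqrt(2)

    # Born rule: measurement probabilities are the squared amplitudes.
    probabilities = np.abs(state) ** 2
    print(probabilities)  # approximately [0.5, 0.5]

Measuring the qubit collapses it to 0 or 1, each with probability 0.5 in this example; the power of quantum algorithms comes from manipulating these amplitudes, and entanglement across many qubits, so that the correct answers become the most likely measurement outcomes.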
Quantum computing has the potential to solve complex problems that are currently beyond the capabilities of classical computers. Here are some of the most promising use cases for quantum computing:
​
  • Optimization: Quantum computers could be used to optimize complex systems, such as financial portfolios, supply chains, and transportation networks, because quantum algorithms can explore very large solution spaces more efficiently than classical methods for certain classes of problems.
  • Machine learning: Quantum computing may speed up the training of certain machine learning models, since some quantum algorithms can evaluate many candidate solutions at once, although demonstrating a practical advantage here is still an active area of research.
  • Cryptography: A sufficiently powerful quantum computer could break much of the public-key encryption that secures digital communications today, for example by using Shor's algorithm to factor large numbers. In response, quantum-resistant (post-quantum) encryption schemes and quantum key distribution are being developed.
  • Material design: Quantum computing can be used to design new materials with specific properties. This is because quantum computers can quickly simulate the behavior of atoms and molecules, allowing researchers to design materials with specific chemical and physical properties.
  • Chemical simulation: Quantum computers can be used to simulate the behavior of complex chemical systems, such as proteins and drugs. This is because quantum mechanics plays a key role in these systems, and classical computers struggle to simulate them accurately.
  • Financial modeling: Quantum computing can be used to quickly calculate the value of financial derivatives and other complex financial instruments. This is because quantum algorithms can quickly simulate many possible market scenarios at once.

These are just a few of the many potential use cases for quantum computing. As the technology develops, it is likely that many new use cases will emerge.

Quantum computing hardware is already accessible today through cloud services and is being used by a large community of developers and researchers. Ever more powerful superconducting quantum processors are being released at regular intervals, alongside advances in software and quantum-classical orchestration. Together, this work is steadily building the speed and capacity needed to tackle problems that are out of reach for classical machines.

    Author

    ​Tim Hardwick is a Strategy & Transformation Consultant specialising in Technology Strategy & Enterprise Architecture
