Cloud Migrations Demand Risk and Compliance Maturity

The COVID-19 pandemic brought undeniable disruption to organizations and their employees, in both business and personal life. Across the globe, businesses and governments alike were forced to manage these disruptions. Nearly all organizations accelerated digitization initiatives in a short period of time, from implementing work-from-home policies and launching new applications to support a distributed workforce, to adopting artificial intelligence (AI) to adapt supply chain processes and more.

According to Gartner, by 2022, 30% of all security teams will have increased the number of employees working remotely on a permanent basis and by 2023, 40% of all enterprise workloads will be deployed in cloud infrastructure and platform services. The distributed cloud enables organizations to provide products and services when they’re needed in this era of work-from-anywhere, whether to their employees or customers.

Even in sectors such as energy and utilities, which are historically reliant on standard on-premises installations and have often avoided cloud adoption, business leaders are realizing the value of migration, especially in light of the global pandemic. These businesses can deliver digital products with speed, experiment with tools such as artificial intelligence (AI) and robotic process automation (RPA) to increase productivity, and lower the total cost of ownership (TCO) across assets, among other benefits. Still, cloud migration risks abound, and new challenges may arise. One risk that cannot be ignored is cybersecurity: meeting regulatory IT compliance requirements and managing the risks involved with cloud computing are top challenges facing those migrating their workloads to the cloud.

Cloud Migrations and Security: Clear Risk and Compliance Gaps

Many cloud providers, such as Microsoft Azure, AWS and Google Cloud, have a global network of service models that include compliance teams and consulting organizations to help with risk and compliance for their cloud instances, whether public, private or hybrid. Many have even built tools customers can use to implement basic risk and compliance management in-house, leveraging relevant data and applications. However, monitoring and meeting security and compliance controls that span people, processes and technology, both within cloud environments and in the broader context of the enterprise, is complex. This is one source of cloud migration risk, and it is compounded by a lack of measurement, visibility and accuracy, three of the greatest risks when migrating to the cloud. The point solutions that currently support most cloud instances don't elevate the posture of cloud environments to that of the enterprise risk posture, and the majority of assessments remain point-in-time and qualitative. Metrics are fractured and far from holistic, and very few solutions, if any, can provide insight beyond compliance into real-time risk management.

IT and security regulations and standards are filled with requirements written before the cloud became a commodity. In the energy sector, for example, cloud security wasn't taken into account because regulators and industry leaders couldn't fathom those platforms becoming as pervasive as they have; on-premises installations were the industry standard. They remain a mainstay in energy, power and utilities, and for organizations that have become more comfortable with cloud migration, there is a clear and pressing need to apply their human capital, processes and technologies to robust risk management practices.

Beyond regulatory compliance lags, many distributed organizations opt to have multiple providers in place, requiring a multi-cloud approach to compliance requirements and risk assessment. As more organizations consider cloud migration risks and begin shaping their cloud migration strategies, there are some innovations that address risk management and compliance in the cloud, but not many. Measuring, managing and reporting on compliance frameworks, making the shared responsibility model actionable, and getting a view into risk are all serious challenges. Cloud providers will continue to mature and bring new innovations to their services, but, to date, there hasn’t been a lot of anticipatory work done in this area. The focus has largely been on creating reactive solutions. In heavily regulated countries, the challenges only become greater.

Leverage AI Automation for Compliance and Risk Management

There is a shift occurring in cybersecurity and IT risk management, calling for the dramatic disruption of the legacy IT governance, risk and compliance (GRC) space and demanding a reevaluation of how we manage compliance and risk in the digital age. For years, data has been aggregated manually and analyses performed on out-of-date information. With the increasing availability of automation, the five functions of the NIST Cybersecurity Framework – identify, protect, detect, respond and recover – are becoming more continuous in nature and shifting into real-time management, from assessment to reporting and more.

Leveraging this technology in the cloud is no exception, but those who look to reinvent their approach must look for solutions that go beyond the siloed capabilities of cloud security posture management solutions and similar markets.

Ultimately, the true test of this next-generation approach comes when organizations are able to roll all of this data up to risk. With risk metrics that are supported by drill-downs, trend reports and risk profiles, executives can get the visibility they need into their posture with the most up-to-date data, informing their key business decisions. Using this next-generation approach to risk will inform global expansion, allow executives to evaluate risk across lines of business, and increase cyber maturity in any cloud-based organization.



Why Every DevOps Team Needs a Spot Instance Strategy

Most DevOps teams use the public cloud extensively and focus a lot of energy on reducing cloud costs. According to one estimate, U.S. businesses spent $14.1 billion in 2019 just on wasted, unused cloud resources. Spot instances are one of the most important ways to reduce the cost of public cloud services.

Spot instances are not a new development. Amazon was the first to announce this pricing option in 2009, turning cloud computing into a market influenced by the dynamics of supply and demand. Microsoft Azure took more than a decade to follow suit, announcing its spot virtual machines offering in May 2020.

The spot pricing model is simple on the surface but can be complex to implement. Spot instances are how cloud providers sell their unused capacity. When compute instances are not ordered by anyone via the regular, on-demand pricing model, the cloud provider puts them up for auction at very favorable prices, often just 10% to 20% of the on-demand cost.

However, it’s not so simple to get this 80% to 90% discount. When another cloud customer requests the capacity, the cloud provider sends a notification (AWS, for example, gives a two-minute warning), and you need to move your workloads immediately, before the instance is terminated. This makes it difficult to use spot instances for stateful, database-driven applications or those that require high availability.

This is not the only challenge. The market price fluctuates and is much more volatile than on-demand pricing. This means the level of discount that can be achieved over time is relatively unpredictable.

However, there are at least three ways savvy DevOps teams can make great use of spot instances:

  1. Automation—DevOps teams are proficient with a range of automation tools, including configuration management, infrastructure as code (IaC) and cloud provider auto-scaling tools. These can be used to manage workloads in clusters and automatically fail over when a spot instance terminates.
  2. Dev/test environments—DevOps teams are responsible for setting up development and testing environments, which are extremely suitable for spot instances because they can usually tolerate brief interruptions, and, in many cases, are not stateful.
  3. CI/CD jobs—running jobs on Jenkins, GitLab and similar tools can be easy to scale on spot instances. These jobs are stateless, and if an instance drops, it’s easy to rerun the job on another.
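The interruption handling mentioned above hinges on catching the provider's termination notice in time. On AWS, the documented pattern is to poll the instance metadata endpoint, which returns 404 until a notice is issued and then a small JSON payload. A minimal sketch (the polling interval and failover hook are illustrative, not AWS requirements):

```python
import json
from urllib import request, error

# AWS's documented instance-metadata path for spot interruption notices.
# It returns 404 until a two-minute termination warning is issued.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def parse_interruption_notice(body):
    """Parse the instance-action JSON into (action, time), or None if empty."""
    if not body:
        return None
    notice = json.loads(body)
    return notice.get("action"), notice.get("time")

def check_for_interruption(url=METADATA_URL):
    """Poll the metadata endpoint; None means no interruption is pending."""
    try:
        with request.urlopen(url, timeout=2) as resp:
            return parse_interruption_notice(resp.read().decode())
    except error.URLError:
        return None  # 404 or unreachable: no notice yet

# Example payload in the shape AWS documents for an interruption notice:
sample = '{"action": "terminate", "time": "2024-05-01T12:00:00Z"}'
print(parse_interruption_notice(sample))
```

In practice a small daemon would call `check_for_interruption` every few seconds and, on a non-None result, drain the node or re-queue work before the two-minute window closes.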

AWS Spot Instances

AWS Spot Instances let you buy unused Amazon EC2 computing power at a significantly discounted price. You can specify a price, and when a spot instance is offered at that price, it is launched with the Amazon Machine Image (AMI) of your choice.

How Spot Instances Work on AWS

Spot instances are priced at a variable spot price, which is adjusted according to supply and demand conditions. To see typical savings and interruption rates, use the AWS Spot Instance Advisor.

You create a spot instance request, specifying the instance types you are interested in and the availability zones (AZs) in which they should run. If capacity is available and the current spot price is below your maximum bid, instances are launched.

Spot instances continue running until:

  • Capacity is no longer available (the instances are reclaimed for on-demand customers).
  • The spot price rises above your maximum bid.
  • You request to terminate the instance, or it is automatically terminated by auto-scaling.
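The launch and termination rules above can be sketched as two small decision functions (the function names are illustrative, not an AWS API; the commented `boto3` call shows roughly what the real request looks like, with a hypothetical AMI ID):

```python
def spot_request_fulfilled(capacity_available, spot_price, max_bid):
    """A spot request launches only when capacity exists and the
    current spot price is at or below the requester's maximum bid."""
    return capacity_available and spot_price <= max_bid

def should_terminate(capacity_available, spot_price, max_bid,
                     user_requested_stop=False):
    """Mirrors the three termination conditions listed above."""
    return (not capacity_available          # reclaimed for on-demand use
            or spot_price > max_bid         # price rose above the bid
            or user_requested_stop)         # manual or auto-scaling stop

# With boto3 (assumed installed and credentials configured), the real
# request would look roughly like:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.request_spot_instances(
#     SpotPrice="0.05",
#     InstanceCount=2,
#     LaunchSpecification={
#         "ImageId": "ami-12345678",        # hypothetical AMI
#         "InstanceType": "m5.large",
#         "Placement": {"AvailabilityZone": "us-east-1a"},
#     },
# )

print(spot_request_fulfilled(True, 0.03, 0.05))
print(should_terminate(True, 0.07, 0.05))
```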

You can also order spot instances with a predefined duration—you then pay a static hourly rate for that entire duration (even if market price changes in the interim).

Automation Options

You can automatically scale instances on Amazon EC2, including both on-demand instances and spot instances in a single auto-scaling group. If spot instances are not available when scaling up, the group can use regular on-demand instances.

Amazon also supports mixing in reserved instances and savings plans (additional ways to save on on-demand instances by committing to a certain period of time or total capacity). So you can combine multiple saving methods in the same auto-scaling group.

You can improve availability by deploying applications across multiple instance types running in multiple AZs. By allowing multiple instance types, you tap into multiple pools and increase the chances of obtaining a spot instance when you need it.
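As a sketch, the mixed on-demand/spot setup described above maps onto an Auto Scaling group's `MixedInstancesPolicy`. The builder function and resource names below are illustrative, but the dictionary shape follows the AWS Auto Scaling API:

```python
def mixed_instances_policy(template_name, instance_types,
                           on_demand_base=1, on_demand_pct=25):
    """Build an AWS MixedInstancesPolicy: a baseline of on-demand
    capacity, with the remainder drawn from several spot pools."""
    return {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": template_name,
                "Version": "$Latest",
            },
            # Allowing multiple instance types taps multiple spot pools,
            # raising the odds of obtaining capacity when scaling up.
            "Overrides": [{"InstanceType": t} for t in instance_types],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": on_demand_base,
            "OnDemandPercentageAboveBaseCapacity": on_demand_pct,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    }

policy = mixed_instances_policy("web-tier", ["m5.large", "m5a.large", "m4.large"])

# boto3 (assumed available) would consume this via something like:
# boto3.client("autoscaling").create_auto_scaling_group(
#     AutoScalingGroupName="web-asg", MinSize=2, MaxSize=10,
#     MixedInstancesPolicy=policy,
#     VPCZoneIdentifier="subnet-aaa,subnet-bbb")  # hypothetical subnets

print(len(policy["LaunchTemplate"]["Overrides"]))
```

If spot capacity dries up, the group still scales using the on-demand baseline and percentage, which is the fallback behavior the paragraph above describes.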

Azure Spot Instances (Spot VMs)

Azure offers Spot VMs that give you access to unused compute capacity. You can request a single spot VM, or launch multiple spot VMs using an Azure VM Scale Set (VMSS). Spot VMs replaced the previous Low Priority VMs feature, which let you purchase VMs that were in low demand on Azure for a reduced price.

The spot price of VMs on Azure depends on the total capacity available for that specific instance size and SKU (instance type) in the Azure region. Azure commits to changing pricing slowly—avoiding sudden spikes—to maintain pricing stability and make it easier to manage budgets.

Like on Amazon, discounts fluctuate significantly, and spot VMs can be up to 90% cheaper than the base price of the same VM.

How Spot VMs Work on Azure

The Azure Portal provides access to Azure spot VMs. When you create a spot VM, you can see the current price for the selected region, image and VM size. For consistency, prices are always in U.S. dollars, even if you use a different base currency for billing.

There are two options for eviction of spot VMs—you can choose the condition on which spot VMs will be evicted:

  • Maximum price eviction—You set a maximum bidding price, and when the spot price rises above that maximum, the VM is evicted.
  • Capacity eviction—You always pay the current price of the VM (without setting a maximum price), and when Azure does not have sufficient capacity of the requested VM type, your VM is evicted.
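The two eviction options above can be sketched as a small decision function (names are illustrative; the logic mirrors the bullet points, not an Azure API):

```python
def spot_vm_evicted(eviction_type, current_price=0.0,
                    max_price=None, capacity_available=True):
    """Mirror Azure's two spot VM eviction options described above."""
    if eviction_type == "max-price":
        # Evicted as soon as the spot price rises above your cap.
        return max_price is not None and current_price > max_price
    if eviction_type == "capacity":
        # No price cap: you pay the going rate and are evicted only
        # when Azure needs the capacity back.
        return not capacity_available
    raise ValueError("unknown eviction type: %s" % eviction_type)

print(spot_vm_evicted("max-price", current_price=0.12, max_price=0.10))
print(spot_vm_evicted("capacity", capacity_available=True))
```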

When a VM is evicted, Azure applies an eviction policy called Stop / Deallocate. This means the instance is paused, but attached disks remain, and you are still charged for them. When the price goes down or capacity becomes available, the instance is restarted and continues working on the same disk data.
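These settings correspond to a few fields in the VM properties that Azure's compute APIs accept for spot VMs. A minimal fragment, assuming the Azure REST API field names (any concrete values here are illustrative):

```python
# Fragment of the VM properties for a spot VM with the Stop/Deallocate
# behavior described above. Field names follow the Azure REST API for
# spot VMs; resource-specific values would come from your deployment.
spot_vm_properties = {
    "priority": "Spot",
    # "Deallocate" pauses the VM and keeps its disks (still billed);
    # "Delete" would instead remove the VM and its disks on eviction.
    "evictionPolicy": "Deallocate",
    # A maxPrice of -1 means "never evict for price": you pay up to the
    # on-demand rate and face only capacity-based eviction.
    "billingProfile": {"maxPrice": -1},
}

print(spot_vm_properties["evictionPolicy"])
```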

Automation Options

Azure provides virtual machine scale sets (VMSS), which can automatically increase or decrease the number of VMs running your application. You can create a scale set that includes spot VMs, and as your application scales, more spot VMs will be added as they become available. Spot scale sets operate in a single fault domain and do not guarantee high availability. Unlike AWS, Azure currently does not allow you to mix on-demand VMs and spot VMs in the same scale set.

Both Amazon and Azure provide robust capabilities for cost savings using spot instances. Azure’s offering is newer and provides less-advanced bidding and auto-scaling capabilities, but these are expected to be added as the service matures.

Whether DevOps teams choose to run in AWS, Azure, or both, they cannot afford to ignore spot instances, especially for low-criticality workloads like dev/test and CI/CD job execution.


Run Azure Machine Learning anywhere – on hybrid and in multi-cloud with Azure Arc

Over the last couple of years, Azure customers have leaned towards Kubernetes for their on-premises needs. Kubernetes allows them to leverage cloud native technologies to innovate faster and take advantage of portability across the cloud and at the edge. We listened and launched Azure Arc enabled Kubernetes to integrate customers' Kubernetes assets in Azure and centrally govern and manage Kubernetes clusters, including Azure Kubernetes Service (AKS). We have now taken this one step further, leveraging Kubernetes to enable training machine learning (ML) models with Azure Machine Learning.

Run machine learning seamlessly across on-premises, multi-cloud and at the edge  

Customers can now run their ML training on any Kubernetes target cluster in the Azure cloud, GCP, AWS, at edge devices and on-premises through Azure Arc enabled Kubernetes. This allows customers to use excess capacity, either in the cloud or on-premises, increasing operational efficiency. With a few clicks, they can enable the Azure Machine Learning agent to run on any OSS Kubernetes cluster that Azure Arc supports. This, along with other key design patterns, ensures a seamless setup of the agent on any OSS Kubernetes cluster, such as AKS, Red Hat OpenShift or managed Kubernetes services from other cloud providers. There are multiple benefits to this design, including using core Kubernetes concepts to set up and configure a cluster and running cloud native tools such as GitOps. Once the agent is successfully deployed, IT operators can grant data scientists access to either the entire cluster or a slice of it, using native concepts such as namespaces, node selectors and taints/tolerations. The configuration and lifecycle management of the cluster (setting up autoscaling, upgrading to newer Kubernetes versions) is transparent, flexible and the responsibility of the customer's IT operations team.


Built using familiar Kubernetes and cloud native concepts  

The core of the offering is an agent that extends the Kubernetes API. Once it is set up with a single command, the IT operator can view these Kubernetes objects (operators for TensorFlow, PyTorch, MPI, etc.) using familiar tools such as kubectl.


Data Scientists can continue to use familiar tools to run training jobs 

One of the core principles we adhered to was separating the IT operator and data scientist personas, with distinct roles and responsibilities. Data scientists do not need to know anything about Kubernetes or learn it; to them, it is just another compute target to which they can submit training jobs. They use familiar tools such as Azure Machine Learning studio, the Azure Machine Learning Python SDK (software development kit) or OSS tools (Jupyter notebooks, TensorFlow, PyTorch, etc.), spending their time solving machine learning problems rather than worrying about the infrastructure they are running on.


Ensure consistency across workloads with unified operations, management, and security

Kubernetes comes with its own set of challenges around security, management and governance. The Azure Machine Learning and Azure Arc enabled Kubernetes teams have worked together to ensure that an IT operator can centrally monitor and apply policies to workloads on Arc infrastructure, and that the interaction with the Azure Machine Learning service is secure and compliant. This, along with a consistent experience across cloud and on-premises clusters, means you no longer need to lift and shift machine learning workloads; you can operate them seamlessly across both. You can choose to run only in the cloud to take advantage of its scale, or to run on excess on-premises capacity, while leveraging the single pane of glass Azure Arc provides to manage all your on-premises infrastructure.


Innovate across hybrid and multicloud with new Azure Arc capabilities

Across industries, companies are investing in hybrid and multicloud technologies to ensure they have the flexibility to innovate anywhere and meet evolving business needs. Customers tell us a key challenge with hybrid and multicloud adoption is managing and securing their IT environments while building and running cloud-native applications.

To enable the flexibility and agility customers are seeking to innovate anywhere while providing governance and security, we created Azure Arc—a set of technologies that extends Azure management and services to any infrastructure. Today, we are announcing new Azure Arc innovation that unlocks more scenarios.

Run machine learning anywhere with Azure Arc

Azure Arc enables customers to run Azure services in any Kubernetes environment, whether it’s on-premises, multicloud, or at the edge. The first set of services enabled to run in any Kubernetes environment was Azure data services. We continue to enhance Azure Arc enabled data services based on feedback from customers such as KPMG, Ford, Ferguson, and SKF.

Today, we’re excited to expand Azure Arc enabled services to include Azure Machine Learning. Azure Machine Learning is an enterprise-grade service that enables data scientists and developers to build, deploy, and manage machine learning models. By using Azure Arc to extend machine learning (ML) capabilities to hybrid and multicloud environments, customers can train ML models directly where the data lives using their existing infrastructure investments. This reduces data movement while meeting security and compliance requirements.

Customers can sign up for Azure Arc enabled Machine Learning today and deploy to any Kubernetes cluster. In one click, data scientists can now use familiar tools to build machine learning models consistently and reliably and deploy anywhere.

Build cloud-native applications anywhere, at scale with Azure Arc

More than ever, organizations are building modern applications using Kubernetes containers across cloud, on-premises, and the edge. Last fall, we released Azure Arc enabled Kubernetes in preview to help manage and govern Kubernetes clusters anywhere. Right from the Azure portal, customers can deploy a common set of Kubernetes configurations to their clusters wherever they are, consistently and at scale. Azure Arc also enables developers to centrally code and deploy cloud-native applications securely to any Kubernetes cluster using GitOps. Today, we are announcing Azure Arc enabled Kubernetes is now generally available.

“We are excited to see Microsoft bringing Azure Arc to manage cloud-native applications on any infrastructure. With Azure Arc, we can easily deploy our applications across the cloud and on-premises to meet regulatory and compliance requirements while ensuring consistent management and governance, delivering a huge benefit to our business.” —Martin Sciarrillo, Multicloud Expansion Lead, EY Technology

Use any CNCF-conformant Kubernetes

We are committed to providing customers with choices and supporting their existing Kubernetes investments. Azure Arc is built to work with any Cloud Native Computing Foundation (CNCF)-conformant Kubernetes distribution. To give customers more confidence, we’ve collaborated with popular Kubernetes distributions including VMware Tanzu and Nutanix Karbon, which join Red Hat OpenShift, Canonical’s Charmed Kubernetes, and Rancher Kubernetes Engine (RKE) to test and validate their implementations with Azure Arc. We look forward to validating more partners in the future.

“VMware believes Kubernetes will become the dial-tone for modern applications. This can only be achieved through a thriving ecosystem that promotes interoperability. By certifying Tanzu Kubernetes Grid with Azure Arc, we’re teaming with Microsoft to help enterprises achieve the full potential of Kubernetes through a consistent experience.” —Craig McLuckie, Vice President, R&D, Modern Applications Business Unit, VMware

“Microsoft and Nutanix are collaborating to let customers manage and govern their on-premises Kubernetes clusters, deployed with Nutanix Karbon, alongside their Azure resources through the common control plane provided by Azure Arc. This integration provides customers with a consistent and reliable hybrid and multicloud solution, extending the Azure experience and Azure PaaS services to Nutanix HCI.”—Thomas Cornely, SVP, Product Portfolio Management, Nutanix

Modernize your datacenter with Azure Stack HCI and Azure Arc

Hyperconverged infrastructure has been an ideal way for organizations to modernize datacenters and deploy key workloads for remote offices and branch offices (ROBO). Azure Stack HCI provides a performant and cost-effective hyperconverged infrastructure solution that can be managed right from Azure. Customers can run Azure services and cloud-native applications on Azure Kubernetes Service (AKS) on Azure Stack HCI. Azure Stack HCI works with multiple systems co-engineered for simplicity and reliability from partners such as Dell, Lenovo, HPE, Fujitsu, and DataON.

“SKF is proud to be at the forefront of the hybrid cloud revolution. Azure Hybrid Cloud Solutions enable us to maximize our efficiency, grow our digital platform for world-class manufacturing, and empower the SKF factories of the future to innovate towards data-driven manufacturing.” —Sven Vollbehr, Head of Digital Manufacturing, SKF Group