What is automated machine learning (AutoML)?


Automated machine learning (AutoML) represents a fundamental shift in how organizations of all sizes approach machine learning and data science. Applying conventional machine learning methods to real-world business problems is time-consuming, resource-intensive, and challenging. It requires specialists from many fields, including data scientists, some of the most sought-after professionals in the job market today.

Automated machine learning changes that, making it easier to build and use machine learning models in the real world by running systematic processes on raw data and selecting models that extract the most relevant information from the data, what is often referred to as “the signal in the noise.” Automated machine learning incorporates machine learning best practices from top-ranked data scientists to make data science more accessible across the organization.

Here’s the conventional machine learning process at a high level:

When building a model with the conventional process, as you can see from Figure 1, the only automated task is model training. Automated machine learning software automatically executes all of the steps outlined in red: manual, tedious modeling tasks that used to require expert data scientists. That conventional process often takes weeks or months. With automated machine learning, however, it takes business experts and data scientists only days to build and compare dozens of models, find insights and predictions, and solve more business problems faster.

Automating these steps allows for increased agility and democratizes data science, extending it to people without extensive programming knowledge.

Manually building a machine learning model is a multistep process that requires domain knowledge, mathematical expertise, and computer science skills, which is a lot to ask of one company, let alone a single data scientist (provided you can hire and retain one). What’s more, there are countless opportunities for human error and bias, which degrade model accuracy and devalue the insights the model could offer. Automated machine learning lets organizations use the baked-in knowledge of data scientists without spending the time and money to develop those capabilities themselves, simultaneously improving return on investment on data science initiatives and reducing the time it takes to capture value.
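To make the automated model-comparison step concrete, here is a minimal sketch using scikit-learn; the candidate models and the toy dataset are illustrative choices, not any particular AutoML product's method.

```python
"""Minimal sketch of automated model comparison, the step AutoML tools
perform at much larger scale. Candidates and dataset are illustrative."""
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

# Score every candidate with 5-fold cross-validation and rank them,
# the way an AutoML leaderboard does.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

A real AutoML system would also automate feature engineering and hyperparameter search, but the ranking loop above is the core idea.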

Automated machine learning makes it possible for companies in every industry, healthcare included, to put machine learning to work. By automating the majority of the modeling tasks required to develop and deploy machine learning models, automated machine learning enables business users to implement machine learning solutions easily, freeing a company’s data scientists to focus on more complex problems.

Source: https://www.datasciencecentral.com/profiles/blogs/what-is-automated-machine-learning-automl-1

DevOps Salary, Roles and Responsibilities

You may have noticed that DevOps Engineers are becoming increasingly common these days, and as recruiters compete for DevOps talent, DevOps salaries are rising rapidly. In many organisations there is a disconnect between the software development, IT, operations, and product development teams, and DevOps Engineers work towards bridging this gap.

A DevOps Engineer has a combination of hard skills and soft skills to overcome the challenges that arise between the operations team and the software development team in an organisation. The DevOps market is growing and is expected to reach USD 10.31 billion by 2023, up from USD 2.90 billion in 2017, a Compound Annual Growth Rate (CAGR) of 24.7% over the forecast period. The DevOps salaries being offered, and the demand for individuals with DevOps skills, keep rising as businesses continue to report successes with the practice, which has led to:

  • Elimination of silos 
  • Better customer satisfaction
  • Higher frequency of code deployment
  • Lesser deployment failures

DevOps Engineer Roles and Responsibilities 

As the demand for DevOps engineer job roles increases, it is necessary to understand the roles and responsibilities of a DevOps engineer. You should also understand that the DevOps salary may change depending on various factors such as the niche job requirements, the hiring company, the job location, the number of years of work experience, and more. In general, the role calls for soft skills as well as technical skills. The “Dev” in DevOps covers the coding aspects. However, to become an efficient DevOps engineer, it is necessary to possess Ops skills as well. 

  • Must be comfortable with a high testing and deployment frequency 
  • Working knowledge of the wide range of tools and technologies that are used in software development
  • Know about IT systems and the different production environments
  • Data management 
  • Embrace team communication and collaboration
  • Project management skills
  • Ability to achieve business outcomes 
  • Ability to use automation tools 
  • Should have experience with operations in a production environment 

If you possess these skills, you can reduce the complexity you face while creating products, minimise delays in deployment, and achieve greater integration success across multiple operating systems and platforms.

Cultivating soft skills helps you build good working relationships with internal stakeholders, customers, and development teams. Technical skills are essential to put together the final product and handle the role of a DevOps Engineer.

1. Understand Linux 

Several DevOps projects are Linux-based, and configuration management tools such as Puppet and Ansible typically run their nodes on Linux. Thus, knowing scripting languages such as Ruby, Perl, and Python, along with the Linux environment, helps build a career.
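As a small taste of the Linux-side scripting this involves, here is a minimal Python sketch; the mount point and alert threshold are illustrative assumptions, not from the article.

```python
#!/usr/bin/env python3
"""Sketch of the kind of Linux automation a DevOps engineer scripts:
alert when a filesystem is nearly full. Path and threshold are assumed."""
import shutil

MOUNT_POINT = "/var"   # assumed filesystem to watch
THRESHOLD = 0.90       # alert at 90% usage


def disk_usage_ratio(path: str) -> float:
    """Return used/total for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


if __name__ == "__main__":
    ratio = disk_usage_ratio(MOUNT_POINT)
    status = "WARNING" if ratio >= THRESHOLD else "OK"
    print(f"{status}: {MOUNT_POINT} is {ratio:.0%} full")
```

In practice a script like this would run on a cron schedule or feed a monitoring tool such as Nagios rather than print to stdout.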

2. Knowledge of Tools and Technologies Used in every DevOps Process 

Some of the commonly used tools in the DevOps process are listed below:

  • For configuration management: Chef, Puppet, and Ansible
  • For continuous integration: Jenkins, Travis CI, and Bamboo
  • For continuous testing: TestComplete, Tricentis Tosca, and Docker
  • For continuous monitoring: Nagios, Splunk, and Sensu

3. Understand the CI/CD Process 

Knowing when and where in the CI/CD pipeline to use the DevOps tools and technologies is just as essential as knowing the tools themselves.

4. IaC Skills

As a DevOps Engineer, you must understand the IaC (Infrastructure as Code) model and its applications, as this will help you solve deployment problems.
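As one hedged illustration of IaC in practice, the sketch below uses Pulumi's Python SDK to declare a versioned S3 bucket; the resource name is illustrative, and the example assumes the pulumi and pulumi-aws packages are installed and AWS credentials are configured.

```python
"""Minimal Infrastructure-as-Code sketch with Pulumi's Python SDK.
Resource names are illustrative; run inside a Pulumi project."""
import pulumi
import pulumi_aws as aws

# Declare the desired state: an S3 bucket with versioning enabled.
bucket = aws.s3.Bucket(
    "example-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the generated bucket name so other stacks or scripts can use it.
pulumi.export("bucket_name", bucket.id)
```

Running `pulumi up` reconciles this declared state with what actually exists in the cloud, which is the essence of the IaC model.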

DevOps Engineer Salaries Based on Location and Organisation

The DevOps Salary that an individual earns is quite lucrative and varies based on several factors such as geographical location, work experience, hiring company, and more. Upskilling in this domain by taking up a cloud computing course will help you in the long run. 

DevOps Salary Based on Location

The average salary of a DevOps Engineer in the US is $99,604 per year. Depending on the location, your salary may vary.

Location | Salary
California | $149,783
Rhode Island | $146,250
New York | $145,000
Massachusetts | $140,000
Alaska | $137,500
Hawaii | $135,000

DevOps Salary Based on Organisation

Just as the salary varies with location, it also varies with the organisation. Here are a few DevOps salaries by organisation.

Organisation | Salary
Formac | $105,583/yr
SunPower | $124,198/yr
IBM | $114,210/yr
NIT Technologies | $101,527/yr

How to become a DevOps Engineer? 

As is the case with any other job role, there is no set path to becoming a DevOps Engineer. There are various steps that you can follow and reach the same end goal. A software engineer interested in product deployment and network operations can become a DevOps engineer. If you are a systems admin, you can also learn scripting languages and move into a testing and deployment role. You must be willing to learn and work towards your goal. Upskilling in the domain can surely help. 

Source: https://www.mygreatlearning.com/blog/devops-salary-roles-and-responsibilities/

What Makes Power BI A Powerful Tool For Businesses Today?

“Information is the oil of the 21st century, and analytics is the combustion engine,” said Peter Sondergaard, Senior Vice President, Gartner.

The roughly 80% of data that is unstructured, residing in social media feeds, digital photos, emails, and audio files, can be hard to accumulate and analyze. As a result, businesses fail to use much of the relevant data they have access to.

Emerging technologies have transformed the way companies deal with their data, bringing new, innovative ways to identify and understand business trends. The growing availability of information presents not only opportunities but also challenges.

Modern business intelligence tools have emerged to overcome these data challenges.

According to a study reported by Forbes, 62% of businesses said self-service BI was essential in 2020.

Microsoft offers a powerful suite of tools and services known as Microsoft Power BI (Business Intelligence).

It enables businesses to have a deeper understanding of business data with strong data analytics and visualizations.

With Power BI, data no longer sits idle in large databases. Its integrated solutions encompass diverse data sources and visualizations, enabling smarter, data-driven decisions.

It simplifies the process of extracting and presenting data dynamically and interactively.

In the Business Intelligence category, Microsoft Power BI has a market share of about 8.1%. 

Why Should Businesses Use Power BI For Business Analytics?

The world of Business Intelligence is evolving at a fast pace, driven by powerful, effective, and innovative tools for data visualization and data handling.

Power BI is a Leader in Gartner’s 2018 Magic Quadrant for Analytics and Business Intelligence Platforms.

It offers businesses numerous opportunities to drive digital transformation for increased growth and profitability, with the ability to quickly turn data from various sources into actionable insights.

Whether you need interactive dashboards, real-time reports, rich data presentation, or self-service capabilities, Power BI is a one-stop solution for businesses dealing with complex, huge data sets and challenging data-based processes.

To overcome the challenges of the large volumes of data generated and accumulated over time, Power BI can be a savior for your business, helping you make smarter data-driven decisions faster.

Moving further, let’s dig deeper into why businesses should use Power BI to enhance their business analytics.

Create Interactive Reports with Custom and Open-Source Visuals

Power BI comes with plenty of pre-packaged standard data visuals to power your interactive reports. It enables users to embed BI and analytics with ease and produce reliable reports and analysis via dashboards, reports, and datasets.

The enhanced presentation and functionality leverage the power of data and business intelligence to deliver fast and interactive reports.

Power BI helps you build an attractive and engaging range of rich, complex visuals.

Easily Connect Your Data

Power BI makes it easy to bring all your data together in one place, for flexibility, accessibility, and visibility when reporting.

The tool currently supports 70+ connectors, letting businesses load data from a wide range of cloud-based sources such as Azure, Dropbox, Google Analytics, OneDrive, and Salesforce.

The built-in connectors facilitate users to load pre-built Power BI dashboards and perform data analysis in no time.

You can also customize your data, or even start from scratch, by importing your own datasets and developing rich dashboards and reports.

Drag-and-drop functionality also lets your employees generate customized reports faster, helping users select the information that stands out and better understand what’s going on.

Thus, Power BI integrates seamlessly with your existing business environment and improves your reporting capabilities.

Uncompromised Security

Power BI lets you manage security and user access within the same interface, eliminating the need for other tools.

Built-in Azure Active Directory (AAD) handles user authentication and Single Sign-On (SSO), so users access your data with their existing login details.

Ability to Customize Power BI App Navigation

Microsoft has worked hard to ensure that its apps, such as Power BI, are feature-rich, similar in usability, and fully compatible with one another.

Power BI integration with apps gives report developers the power to customize navigation. It helps viewers find content quickly and understand the relationships between different reports and dashboards.

PowerApps is a powerful business tool used to create apps that run on all browsers and operating systems. Like Power BI, it has a simple, user-friendly interface that doesn’t require coding experience.

It lets your end users draw on real data insights to build customized applications through a similar interface.

It’s even easier to share key insights with employees using your in-house custom apps with native integration. 

More Advanced Analytics and High Performance

Power BI is a powerful toolset that allows you to leverage organizational expertise and ramp up on Power BI faster.

The in-memory analysis technology and the DAX (Data Analysis Expressions) scripting language maintain a balance between intelligible simplicity and performance.

With DAX, users can dig into their data and find patterns more easily in Power BI.

It also makes it easier for Excel users to transform and integrate business data in Power BI.

Facilitates Seamless Cortana Integration

Power BI runs on multiple platforms and devices, including Android, iOS, and Windows, giving users broad accessibility.

It works seamlessly with Microsoft’s digital assistant, Cortana, which allows users to query data verbally in natural language to access charts and graphs. This is a great help to mobile users.

With intuitive graphical functions, users require no professional training when using the app. 

Rich and Customized Dashboards

One of the best features of Power BI is its dashboards, which can be customized to meet organizational needs. You can easily embed the dashboards and BI reports in your applications to provide a seamless user experience.

Users can now easily integrate on-premise and cloud data in a single view. It allows you to keep monitoring critical enterprise-wide data from all business applications irrespective of platform.

Dashboards update in real time as data is pushed into them, letting users quickly solve issues and explore new opportunities. Real-time data and visuals can be generated in both reports and dashboards.

Thus, business leaders can monitor their business more effectively with the Power BI suite and get quick answers through rich data visualization and excellent dashboards.

“Many business intelligence (BI) and analytics leaders are unsure how to get started with advanced analytics, and many organizations feel they must make a significant investment in new tools and skills,” said Lisa Kart, research director at Gartner. “But a successful advanced analytics strategy is about more than simply acquiring the right tools. It’s also important to change mindsets and culture, and to be creative in search of success.” 

Final Verdict

It’s not hard to see that Power BI is a powerful business intelligence tool that is here to stay in the world of data visualization. That is why it enjoys huge popularity with businesses looking for real data insights, interactive dashboards, and rich reporting.

Source: https://www.datasciencecentral.com/profiles/blogs/what-makes-power-bi-a-powerful-tool-for-businesses-today

Cloud Migrations Demand Risk and Compliance Maturity

The COVID-19 pandemic brought undeniable disruptions for organizations and their employees whether business, personal or otherwise. Across the globe, businesses and governments alike were forced to try and manage these disruptions. For nearly all organizations, digitization initiatives were accelerated in a short period of time, from implementing work-from-home policies and launching new applications to support a distributed workforce to adopting artificial intelligence (AI) to adapt supply chain processes and more.

According to Gartner, by 2022, 30% of all security teams will have increased the number of employees working remotely on a permanent basis and by 2023, 40% of all enterprise workloads will be deployed in cloud infrastructure and platform services. The distributed cloud enables organizations to provide products and services when they’re needed in this era of work-from-anywhere, whether to their employees or customers.

Even in sectors such as energy and utilities, which are historically heavily reliant on standard on-premises installations and had often avoided cloud adoption, business leaders are realizing the value of migration, especially in light of the global pandemic. These businesses are able to deliver digital products with speed, experiment with tools such as artificial intelligence (AI) and robotic process automation (RPA) to increase productivity and to lower the total cost of ownership (TCO) across assets, among other benefits. Still, cloud migration risks abound, and new challenges may arise. A risk that cannot be ignored is the cybersecurity risk. Meeting regulatory IT compliance and managing risks involved with cloud computing are top challenges facing those migrating their workloads to the cloud.

Cloud Migrations and Security: Clear Risk and Compliance Gaps

Many cloud providers such as Microsoft Azure, AWS, Google Cloud and others have a global network of service models that include compliance teams and consulting organizations that help with risk and compliance for their cloud instances, whether public, private or hybrid cloud. Many of them have even built tools for customers to use to implement basic risk and compliance management in-house, leveraging relevant data and applications. However, monitoring and meeting security and compliance controls that span people, processes and technology for cloud environments, and in the broader context of the enterprise, is complex. This is one cause of cloud migration risk, and it is a challenge for many other reasons: lack of measurement, visibility and accuracy are three of the greatest risks when migrating to the cloud. The point solutions that currently support most cloud instances don’t elevate the posture of cloud environments to that of the enterprise risk posture, and the majority of assessments still remain point-in-time and qualitative. Metrics are fractured and far from holistic, and very few, if any, solutions can provide insight beyond compliance and into real-time risk management.

IT and security regulations and standards are filled with requirements that were created before the cloud became a commodity. In the energy sector, for example, cloud security isn’t taken into account because regulators and industry leaders couldn’t fathom those platforms becoming as pervasive as they have, because on-premises installations were standard to the industry. On-premises installations are still a mainstay in energy, power and utilities, and for those who have become more comfortable with cloud migration processes, there is a clear and pressing need to leverage their human capital, processes and technologies to implement robust risk management practices.

Beyond regulatory compliance lags, many distributed organizations opt to have multiple providers in place, requiring a multi-cloud approach to compliance requirements and risk assessment. As more organizations consider cloud migration risks and begin shaping their cloud migration strategies, there are some innovations that address risk management and compliance in the cloud, but not many. Measuring, managing and reporting on compliance frameworks, making the shared responsibility model actionable, and getting a view into risk are all serious challenges. Cloud providers will continue to mature and bring new innovations to their services, but, to date, there hasn’t been a lot of anticipatory work done in this area. The focus has largely been on creating reactive solutions. In heavily regulated countries, the challenges only become greater.

Leverage AI Automation for Compliance and Risk Management

There is a shift occurring in cybersecurity and IT risk management, calling for the dramatic disruption of the legacy IT governance, risk and compliance (GRC) space and demanding a reevaluation of how we manage compliance and risk in the digital age. For years, data has been aggregated manually and analyses performed on out-of-date information. With the increasing availability of automation, the five functions of the NIST Cybersecurity Framework – identify, protect, detect, respond and recover – are becoming more continuous in nature and shifting into real-time management, from assessment to reporting and more.

Leveraging this technology in the cloud is no exception, but those who look to reinvent their approach must look for solutions that go beyond the siloed capabilities of cloud security posture management solutions and similar markets.

Ultimately, the true test of this next-generation approach comes when organizations are able to roll all of this data up to risk. With risk metrics that are supported by drill-downs, trend reports and risk profiles, executives can get the visibility they need into their posture with the most up-to-date data, informing their key business decisions. Using this next-generation approach to risk will inform global expansion, allow executives to evaluate risk across lines of business, and increase cyber maturity in any cloud-based organization.

Source: https://devops.com/cloud-migrations-demand-risk-and-compliance-maturity/


Why Every DevOps Team Needs a Spot Instance Strategy

Most DevOps teams use the public cloud extensively and focus a lot of energy on reducing cloud costs. According to one estimate, U.S. businesses spent $14.1 billion in 2019 just on wasted, unused cloud resources. Spot instances are one of the most important ways to reduce the cost of public cloud services.

Spot instances are not a new development. Amazon was the first to announce this pricing option in 2009, turning cloud computing into a market influenced by the dynamics of supply and demand. Microsoft Azure took more than a decade to follow suit, announcing its spot virtual machines offering in May 2020.

The spot pricing model is simple on the surface but can be complex to implement. Spot instances are how cloud providers sell their unused capacity. When compute instances are not ordered by anyone via the regular, on-demand pricing model, the cloud provider puts them up for auction at very favorable prices, usually around 10% to 20% of the on-demand cost.

However, it’s not so simple to get this 80% to 90% discount. When another cloud customer requests the spot instance, the cloud provider sends a notification, and you need to immediately move your workloads before the instance is terminated. This makes it difficult to use spot instances for stateful, database-driven applications, or those that require high availability.
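To illustrate, here is a hedged sketch of how a workload on AWS might watch for that interruption notice by polling the EC2 instance metadata service (assuming IMDSv1 is enabled); the drain step at the end is a placeholder.

```python
"""Sketch: poll EC2 instance metadata for a spot interruption notice so a
workload can drain before termination. The endpoint is documented by AWS;
the drain logic is a placeholder."""
import time
import urllib.error
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"


def interruption_pending() -> bool:
    """Return True if AWS has scheduled this spot instance for interruption."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return resp.status == 200  # body holds the action and its time
    except urllib.error.URLError:
        return False  # 404 means no notice yet; also covers network errors


if __name__ == "__main__":
    while not interruption_pending():
        time.sleep(5)  # the notice gives roughly two minutes of warning
    print("Interruption notice received - draining workload...")
    # placeholder: checkpoint state, deregister from the load balancer, etc.
```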

Interruption is not the only challenge. The market price fluctuates and is much more volatile than on-demand pricing. This means the level of discount that can be achieved over time is relatively unpredictable.

However, there are at least three ways savvy DevOps teams can make great use of spot instances:

  1. Automation—DevOps teams are proficient with a range of automation tools, including configuration management, infrastructure as code (IaC) and cloud provider auto-scaling tools. These can be used to manage workloads in clusters and automatically fail over when a spot instance terminates.
  2. Dev/test environments—DevOps teams are responsible for setting up development and testing environments, which are extremely suitable for spot instances because they can usually tolerate brief interruptions, and, in many cases, are not stateful.
  3. CI/CD jobs—running jobs on Jenkins, GitLab and similar tools can be easy to scale on spot instances. These jobs are stateless, and if an instance drops, it’s easy to rerun the job on another.

AWS Spot Instances

AWS Spot Instances let you buy unused Amazon EC2 computing power at a significantly discounted price. You can specify a price, and when a spot instance is offered at that price, it is launched with the Amazon Machine Image (AMI) of your choice.

How Spot Instances Work on AWS

Spot instances are priced at a variable spot price, which is adjusted according to supply and demand conditions. To see typical savings and interruption rates, use the AWS Spot Instance Advisor.

You create a spot instance request, specifying what instance types you are interested in, and the availability zones (AZ) in which they should run. If capacity is available, and their current price is less than your maximum bid, instances are launched.

Spot instances continue running until:

  • Capacity is no longer available (because the instances were requested by on-demand customers).
  • The price has risen over your maximum bid.
  • You request to terminate the instance, or it is automatically terminated by auto-scaling.

You can also order spot instances with a predefined duration—you then pay a static hourly rate for that entire duration (even if market price changes in the interim).
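As a sketch of what such a request can look like in code, the following uses boto3's run_instances call with InstanceMarketOptions; the AMI ID, instance type, region, and maximum price are illustrative assumptions.

```python
"""Sketch: request a spot instance with boto3 by attaching
InstanceMarketOptions to an ordinary run_instances call.
AMI ID, instance type, and max price are illustrative."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.04",              # max bid, USD per hour
            "SpotInstanceType": "one-time",  # do not restart after interruption
        },
    },
)

print(response["Instances"][0]["InstanceId"])
```

Omitting MaxPrice makes AWS cap the bid at the on-demand price, which is often the simpler choice.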

Automation Options

You can automatically scale instances on Amazon EC2, including both on-demand instances and spot instances in a single auto-scaling group. If spot instances are not available when scaling up, the group can use regular on-demand instances.

Amazon also supports mixing in reserved instances and savings plans (additional ways to save on on-demand instances by committing to a certain period of time or total capacity). So you can combine multiple saving methods in the same auto-scaling group.

You can improve availability by deploying applications across multiple instance types running in multiple AZs. By allowing multiple instance types, you tap into multiple pools and increase the chances of obtaining a spot instance when you need it.

Azure Spot Instances (Spot VMs)

Azure offers Spot VMs that give you access to unused compute capacity. You can request a single spot VM, or launch multiple spot VMs using an Azure VM Scale Set (VMSS). Spot VMs replaced the previous Low Priority VMs feature, which let you purchase VMs that were in low demand on Azure for a reduced price.

The spot price of VMs on Azure depends on the total capacity available for that specific instance size and SKU (instance type) in the Azure region. Azure commits to changing pricing slowly—avoiding sudden spikes—to maintain pricing stability and make it easier to manage budgets.

Like on Amazon, discounts fluctuate significantly, and spot VMs can be up to 90% cheaper than the base price of the same VM.

How Spot VMs Work on Azure

The Azure Portal provides access to Azure spot VMs. When you create a spot VM, you can see the current price for the selected region, image and VM size. For consistency, prices are always in U.S. dollars, even if you use a different base currency for billing.

There are two options for eviction of spot VMs—you can choose the condition on which spot VMs will be evicted:

  • Maximum price eviction—You set a maximum bidding price, and when the spot price rises above it, the VM is evicted.
  • Capacity eviction—You always pay the current price of the VM (without setting a maximum price), and when Azure does not have sufficient capacity of the requested VM type, your VM is evicted.

When a VM is evicted, Azure applies an eviction policy called Stop / Deallocate. This means the instance is paused, but attached disks remain, and you are still charged for them. When the price goes down or capacity becomes available, the instance is restarted and continues working on the same disk data.

Automation Options

Azure provides virtual machine scale sets (VMSS), which can automatically increase or decrease the number of VMs running your application. You can create a scale set that includes spot VMs, and as your application scales, more spot VMs will be added as they become available. Spot scale sets operate in a single fault domain and do not guarantee high availability. Unlike AWS, Azure currently does not allow you to mix on-demand VMs and spot VMs.

Both Amazon and Azure provide robust capabilities for cost savings using spot instances. Azure’s offering is newer and provides less-advanced bidding and auto-scaling capabilities, but these are expected to be added as the service matures.

Whether DevOps teams choose to run in AWS, Azure, or both, they literally cannot afford to ignore spot instances, especially for low-criticality workloads like dev/test and CI/CD job execution.

Source: https://devops.com/why-every-devops-team-needs-a-spot-instance-strategy/

Run Azure Machine Learning anywhere – on hybrid and in multi-cloud with Azure Arc

Over the last couple of years, Azure customers have leaned towards Kubernetes for their on-premises needs. Kubernetes allows them to leverage cloud-native technologies to innovate faster and take advantage of portability across the cloud and at the edge. We listened and launched Azure Arc enabled Kubernetes to integrate customers’ Kubernetes assets in Azure and centrally govern and manage Kubernetes clusters, including Azure Kubernetes Service (AKS). We have now taken it one step further to leverage Kubernetes and enable training ML (machine learning) models using Azure Machine Learning.

Run machine learning seamlessly across on-premises, multi-cloud and at the edge  

Customers can now run their ML training on any Kubernetes target cluster in the Azure cloud, GCP, AWS, at the edge, or on premises through Azure Arc enabled Kubernetes. This allows customers to use excess capacity either in the cloud or on premises, increasing operational efficiency. With a few clicks, they can enable the Azure Machine Learning agent to run on any OSS Kubernetes cluster that Azure Arc supports. This, along with other key design patterns, ensures a seamless setup of the agent on any OSS Kubernetes cluster such as AKS, Red Hat OpenShift, or managed Kubernetes services from other cloud providers. This design has multiple benefits, including the use of core Kubernetes concepts to set up and configure a cluster and the ability to run cloud-native tools such as GitOps. Once the agent is successfully deployed, IT operators can grant data scientists access to either the entire cluster or a slice of it, using native concepts such as namespaces, node selectors, and taints/tolerations. The configuration and lifecycle management of the cluster (setting up autoscaling, upgrading to newer Kubernetes versions) is transparent, flexible, and the responsibility of the customer’s IT operations team.
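As a rough illustration of how an operator might grant a team such a namespace-scoped slice of a cluster, here is a hedged sketch using the official Kubernetes Python client; the namespace name and quota values are assumptions for illustration.

```python
"""Sketch: carve out a namespace "slice" of a cluster for a data science
team with the official Kubernetes Python client. Names and quota values
are illustrative."""
from kubernetes import client, config

config.load_kube_config()  # uses the operator's local kubeconfig
v1 = client.CoreV1Api()

# Create a dedicated namespace for the team.
v1.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="ds-team"))
)

# Cap how much of the cluster the namespace may consume.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="ds-team-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "16", "requests.memory": "64Gi"}
    ),
)
v1.create_namespaced_resource_quota(namespace="ds-team", body=quota)
```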


Built using familiar Kubernetes and cloud native concepts  

The core of the offering is an agent that extends the Kubernetes API. Once it is set up with a single command, the IT operator can view these Kubernetes objects (operators for TensorFlow, PyTorch, MPI, etc.) using familiar tools such as kubectl.


Data Scientists can continue to use familiar tools to run training jobs 

One of the core principles we adhered to was splitting the IT operator persona and the data scientist persona, with separate roles and responsibilities. Data scientists do not need to know anything about Kubernetes or learn it; to them, it is just another compute target they can submit their training jobs to. They use familiar tools such as the Azure Machine Learning studio, the Azure Machine Learning Python SDK (Software Development Kit), or OSS tools (Jupyter notebooks, TensorFlow, PyTorch, etc.), spending their time solving machine learning problems rather than worrying about the infrastructure they run on.
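For example, submitting a training job to an Arc-attached cluster might look like the following sketch with the Azure Machine Learning Python SDK (azureml-core); the workspace config file, the compute target name "arc-k8s-cluster", and the script name are hypothetical.

```python
"""Sketch: submit a training job to an Arc-attached Kubernetes compute
target via the Azure ML Python SDK (azureml-core). Workspace config,
compute target name, and script name are assumptions."""
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()  # reads a local config.json for the workspace

# Hypothetical name given to the Arc-enabled cluster when it was attached.
compute_target = ws.compute_targets["arc-k8s-cluster"]

src = ScriptRunConfig(
    source_directory="./src",
    script="train.py",  # the data scientist's ordinary training script
    compute_target=compute_target,
)

run = Experiment(ws, "arc-training-demo").submit(src)
run.wait_for_completion(show_output=True)
```

Note that nothing here is Kubernetes-specific from the data scientist's point of view; the cluster is just another compute target.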


Ensure consistency across workloads with unified operations, management, and security

Kubernetes comes with its own set of challenges around security, management, and governance. The Azure Machine Learning team and the Azure Arc enabled Kubernetes team have worked together to ensure not only that an IT operator can centrally monitor and apply policies to workloads on Arc infrastructure, but also that interaction with the Azure Machine Learning service is secure and compliant. This, along with the consistent experience across cloud and on-premises clusters, means you no longer need to lift and shift machine learning workloads; you can operate them seamlessly across both. You can choose to run in the cloud to take advantage of its scale, or run on excess on-premises capacity, while leveraging the single pane of glass Azure Arc provides to manage all your on-premises infrastructure.

Source: https://techcommunity.microsoft.com/t5/azure-arc/run-azure-machine-learning-anywhere-on-hybrid-and-in-multi-cloud/ba-p/2170263

Innovate across hybrid and multicloud with new Azure Arc capabilities

Across industries, companies are investing in hybrid and multicloud technologies to ensure they have the flexibility to innovate anywhere and meet evolving business needs. Customers tell us a key challenge with hybrid and multicloud adoption is managing and securing their IT environments while building and running cloud-native applications.

To enable the flexibility and agility customers are seeking to innovate anywhere while providing governance and security, we created Azure Arc—a set of technologies that extends Azure management and services to any infrastructure. Today, we are announcing new Azure Arc innovation that unlocks more scenarios.

Run machine learning anywhere with Azure Arc

Azure Arc enables customers to run Azure services in any Kubernetes environment, whether it’s on-premises, multicloud, or at the edge. The first set of services enabled to run in any Kubernetes environment was Azure data services. We continue to enhance Azure Arc enabled data services based on feedback from customers such as KPMG, Ford, Ferguson, and SKF.

Today, we’re excited to expand Azure Arc enabled services to include Azure Machine Learning. Azure Machine Learning is an enterprise-grade service that enables data scientists and developers to build, deploy, and manage machine learning models. By using Azure Arc to extend machine learning (ML) capabilities to hybrid and multicloud environments, customers can train ML models directly where the data lives using their existing infrastructure investments. This reduces data movement while meeting security and compliance requirements.

Customers can sign up for Azure Arc enabled Machine Learning today and deploy to any Kubernetes cluster. In one click, data scientists can now use familiar tools to build machine learning models consistently and reliably and deploy anywhere.

Build cloud-native applications anywhere, at scale with Azure Arc

More than ever, organizations are building modern applications using Kubernetes containers across cloud, on-premises, and the edge. Last fall, we released Azure Arc enabled Kubernetes in preview to help manage and govern Kubernetes clusters anywhere. Right from the Azure portal, customers can deploy a common set of Kubernetes configurations to their clusters wherever they are, consistently and at scale. Azure Arc also enables developers to centrally code and deploy cloud-native applications securely to any Kubernetes cluster using GitOps. Today, we are announcing Azure Arc enabled Kubernetes is now generally available.

“We are excited to see Microsoft bringing Azure Arc to manage cloud-native applications on any infrastructure. With Azure Arc, we can easily deploy our applications across the cloud and on-premises to meet regulatory and compliance requirements while ensuring consistent management and governance, delivering a huge benefit to our business.” —Martin Sciarrillo, Multicloud Expansion Lead, EY Technology

Use any Kubernetes conformant with CNCF

We are committed to providing customers with choices and supporting their existing Kubernetes investments. Azure Arc is built to work with any Cloud Native Computing Foundation (CNCF) conformant Kubernetes distribution. To give customers more confidence, we’ve collaborated with popular Kubernetes distributions including VMware Tanzu and Nutanix Karbon, which join Red Hat OpenShift, Canonical’s Charmed Kubernetes, and Rancher Kubernetes Engine (RKE) to test and validate their implementations with Azure Arc. We look forward to validating more partners in the future.

“VMware believes Kubernetes will become the dial-tone for modern applications. This can only be achieved through a thriving ecosystem that promotes interoperability. By certifying Tanzu Kubernetes Grid with Azure Arc, we’re teaming with Microsoft to help enterprises achieve the full potential of Kubernetes through a consistent experience.” —Craig McLuckie, Vice President, R&D, Modern Applications Business Unit, VMware

“Microsoft and Nutanix are collaborating to let customers manage and govern their on-premises Kubernetes clusters, deployed with Nutanix Karbon, alongside their Azure resources through the common control plane provided by Azure Arc. This integration provides customers with a consistent and reliable hybrid and multicloud solution, extending the Azure experience and Azure PaaS services to Nutanix HCI.”—Thomas Cornely, SVP, Product Portfolio Management, Nutanix

Modernize your datacenter with Azure Stack HCI and Azure Arc

Hyperconverged infrastructure has been an ideal way for organizations to modernize datacenters and deploy key workloads for remote offices and branch offices (ROBO). Azure Stack HCI provides a performant and cost-effective hyperconverged infrastructure solution that can be managed right from Azure. Customers can run Azure services and cloud-native applications on Azure Kubernetes Service (AKS) on Azure Stack HCI. Azure Stack HCI works with multiple systems co-engineered for simplicity and reliability from partners such as Dell, Lenovo, HPE, Fujitsu, and DataON.

“SKF is proud to be at the forefront of the hybrid cloud revolution. Azure Hybrid Cloud Solutions enable us to maximize our efficiency, grow our digital platform for world-class manufacturing, and empower the SKF factories of the future to innovate towards data-driven manufacturing.” —Sven Vollbehr, Head of Digital Manufacturing, SKF Group

Source: https://azure.microsoft.com/en-in/blog/innovate-across-hybrid-and-multicloud-with-new-azure-arc-capabilities/

2021 Cybersecurity Trends: Bigger Budgets, Endpoint Emphasis and Cloud

Insider threats will be redefined in 2021, the work-from-home trend will continue to define the threat landscape, and mobile endpoints will become the attack vector of choice, according to 2021 forecasts.

After shrinking in 2020, cybersecurity budgets in 2021 will climb higher than pre-pandemic levels. Authentication, cloud data protection and application monitoring will top the list of CISO budget and cybersecurity priorities. According to experts, these are just a few of the themes that will dominate the year ahead.

Here is a roundup of expert opinions illuminating the year ahead.

Home is Where the Attacks Will Happen in 2021

There is no question IT staffs are still reeling from the massive work-from-home shift that forced them to rethink cybersecurity and placed new dependencies on technologies such as cloud services and digital collaborative tools such as Zoom, Skype and Slack. Those 2020 trends will have a lasting impact.

Nearly 70 percent of organizations surveyed by Skybox said over a third of their workforce would remain remote for at least the next 18 months. That will trigger an uptick in endpoint protection in the year ahead, according to Adaptiva CEO Deepak Kumar. He told Toolbox Security that endpoint protection will impact 55 percent of IT teams, as companies look to protect assets purchased for and deployed to remote workforces.

Bitdefender researchers agree and say securing remote workers will become a major focus for organizations. In fact, it will be an imperative, since remote workers will continue to present a unique set of opportunities for the bad guys.

“As more and more people adhere to the work-from-home schedule imposed by the coronavirus pandemic, employees will take cybersecurity shortcuts for convenience,” according to researchers at Bitdefender. “Insufficiently secured personal devices and home routers, transfer of sensitive information over unsecured or unsanctioned channels (such as instant messaging apps, personal e-mail addresses and cloud-based document processors) will play a key role in data breaches and leaks.”

Insider Threats

Upheaval in staffing needs and continued dependence on a remote workforce will create a fertile attack vector for criminals looking to exploit insider threats. Forrester researchers believe the remote-workforce trend will drive an uptick in insider threats. They explain that 25 percent of data breaches are already tied to insider threats, and in 2021 that percentage is expected to jump to 33 percent.

Forcepoint warns of the growth of an “insider-as-a-service” model in 2021. They describe this as the organized recruitment of infiltrators, who offer highly targeted means for bad actors to become trusted employees in order to gather sensitive IP.

“These ‘bad actors,’ literally, will become deep undercover agents who fly through the interview process and pass all the hurdles your HR and security teams have in place to stop them,” said Myrna Soto, chief strategy and trust officer for Forcepoint.

Inbox Bullseye

Endpoint security issues are among the most challenging today, and will remain so tomorrow. Inboxes are the chink in the armor of the security front lines, often the perfect vector for ransomware attacks, business email compromise scams and malware infection, according to a CrowdStrike analysis of the challenges.

Moving forward, researchers warn that enterprises should expect a “major increase” in spear phishing attacks in 2021 – due to automation.

“Cyber criminals have already started to create tools that can automate the manual aspects of spear phishing,” said WatchGuard researchers in a recent blog. “This will dramatically increase the volume of spear phishing emails attackers can send at once, which will improve their success rate. On the bright side, these automated, volumetric spear phishing campaigns will likely be less sophisticated and easier to spot than the traditional, manually generated variety.”

Cybersecurity Cloud Burst

Cloud adoption, spurred by pandemic work realities, will only accelerate in the year ahead with software-as-a-service, cloud-hosted processes and storage driving the charge. A study by Rebyc found that 35 percent of companies surveyed said they plan to accelerate workload migration to the cloud in 2021.

Budget allocations to cloud security will grow from single digits to double digits as companies look to protect their 2020 cloud buildouts in the year ahead.

Gartner’s analysis of 2021 cloud priorities names “distributed cloud” as a future focus for businesses, one that will have significant security implications. Distributed cloud is the migration of business processes to the public and private cloud, or hybrid cloud.

“[Companies benefit] by shifting the responsibility and work of running hardware and software infrastructure to cloud providers, leveraging the economics of cloud elasticity, benefiting from the pace of innovation in sync with public cloud providers, and more,” says David Smith, Distinguished VP Analyst, Gartner.

According to Muralidharan Palanisamy, chief solutions officer at AppViewX, that shift will drive Cloud Security Posture Management (CSPM) in 2021. CSPM includes finding misconfigured network connectivity, assessing data risk, detecting liberal account permissions, cloud monitoring for policy violations, automatic misconfiguration detection and remediation and regulatory compliance with GDPR, HIPAA, and CCPA.

Automation, Artificial Intelligence and Machine Learning

Defensive applications of artificial intelligence will have their moment in 2021, driving a trend of hyper automation, said Palanisamy.

“Hyper automation is a process in which businesses automate as many business and IT processes as possible using tools like AI, machine learning, robotic process automation, and other types of decision process and task automation tools,” he said.

A study by Splunk reported that 47 percent of IT executives interviewed said cyberattacks were up since the pandemic began. More recently, 36 percent said they had experienced an increased volume of security vulnerabilities due to remote work.

“The sheer amount of security alerts, of potential threats, is too much for humans to handle alone. Already, automation and machine learning help human security analysts separate the most urgent alerts from a sea of data, and take instant remedial action against certain threat profiles,” Splunk wrote.

The report acknowledged that meaningful, practical application of AI is still a way out. But Ram Sriharsha, Splunk’s head of machine learning, said he “expects AI/ML security tools to grow in their sophistication and capability, both in terms of flagging anomalies and in automating effective countermeasures.”

Mobile Menace 

Mobile threats accelerated against the backdrop of the COVID-19 pandemic, a trend expected to continue. Threats ranged from specialized spyware designed to snoop on encrypted messaging applications to criminals exploiting a slew of critical Android security vulnerabilities.

For those reasons, defenders need to heed last year’s lessons and create mobile-focused security programs, experts say. Mobile will contribute to the ongoing “de-perimeterization” and cloudification of the corporate network.

“The next big thing in security is the inversion of the corporate network,” Oliver Tavakoli, CTO at Vectra said. “It used to be that everything truly important was kept on-premise and a small number of holes were poked into the protective fabric to allow outbound communications. 2021 is the year where de-perimeterization of the network (which has been long predicted) finally happens and does so with a vengeance. The leading indicator for this is companies who are ditching AD (on-premise legacy architecture) and moving all their identities to Azure AD (modern cloud-enabled technology).”

As ever, user awareness will need to be a priority, according to Bill Harrod, Federal CTO at Ivanti.

“In the new work-from-home era, we’re constantly working on the go using a range of mobile devices, such as tablets and phones, relying on public Wi-Fi networks, remote collaboration tools and cloud suites for work,” he said. “As we settle into a new year of this reality, mobile workers will be the biggest security risk as they view IT security as a hindrance to productivity and believe that IT security compromises personal privacy.”

Meanwhile, 5G security took a backseat in 2020 even as those networks continued to roll out; 2021 will see it return to the conversation, because 5G adoption won’t be seamless.

“When it comes to adopting all of the benefits of 5G, it won’t be an easy transition — both for enterprises and for consumers,” said Russ Mohr, 5G security expert at Ivanti. “Between the security vulnerabilities bound to be exploited, the time it takes to patch those vulnerabilities, and the constant protocols being rolled out, using secure 5G networks won’t be a seamless experience in 2021.”

Source: https://threatpost.com/2021-cybersecurity-trends/162629/

4 open source lessons for 2021

2020 fundamentally changed how many companies and teams work—seemingly overnight, remote-first cultures became the new norm and people had to change how they communicate and collaborate. However, for those of us who have been deeply engaged in open source, remote work has been our norm for many years because open source communities are large, globally distributed, and require effective collaboration from developers around the world. We’ve had ample time to create and refine many digital-first practices.

It’s no surprise that open source adoption and usage grew significantly this year. New data from GitHub’s 2020 Octoverse report shows there were over 60 million new repositories created this past year, and more than 56 million developers on GitHub. When people had to stay home, developers came together to find community and connections through open source. And though open source developers had a lot of established remote practices, this year challenged companies of all sizes to integrate their open source software experiences and development models in new ways, bringing new learnings as a result.

We wanted to share four places where Microsoft is learning from and growing our engagement in open source over the last year that we hope can be useful for any developer or team looking to build and collaborate in 2021.

1. Seeking different perspectives makes better software

Success in open source is just as much about your own contributions to the community as it is about what you learn from the community. Behind every pull request, issue, and code snippet, is a person. It’s important to connect with them—to listen, learn, and empathize with them. They offer a different perspective and feedback that your team may not be thinking of.

I hear conversations in meetings (one of the new virtual hallways) about making sure we get feedback from industry users who are well outside the Microsoft faithful. With this new feedback, I hear a collective sound of Microsoft’s perspective expanding and our gratitude for the new and different views we are receiving.

One example of community feedback changing our perspective was when the Dapr project received a lot of user feedback requesting a streamlined API to retrieve application secrets. The Microsoft team working on Dapr had not planned that work in the current cycle, but the community made it very clear that this new API would solve a lot of problems that developers were facing.

The Dapr maintainers worked closely with community members who submitted multiple PRs to add this functionality, covering everything from code to documentation to samples. After it was added, we found that customers also picked up this functionality and used it in their Dapr implementations.
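For context, here is a minimal sketch of how an application might call the Dapr secrets API that the community requested, via the sidecar's HTTP endpoint; the secret store name and key are hypothetical, and port 3500 is Dapr's default HTTP port.

```python
"""Sketch: retrieve a secret through the Dapr secrets building-block API.
Store name and secret key are illustrative; 3500 is Dapr's default port."""
import json
import urllib.request

DAPR_PORT = 3500
SECRET_STORE = "vault"      # assumed secret store component name
SECRET_KEY = "db-password"  # assumed secret name

url = (
    f"http://localhost:{DAPR_PORT}/v1.0/secrets/"
    f"{SECRET_STORE}/{SECRET_KEY}"
)

# The Dapr sidecar resolves the secret from the configured store.
with urllib.request.urlopen(url) as resp:
    secret = json.loads(resp.read())

print(secret)  # e.g. {"db-password": "..."}
```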

This reminded us that listening to community feedback is extremely valuable, and that given opportunity, encouragement and support, community members will contribute effort to make requirements a reality.

2. Finding the balance between policy and autonomy

To help drive Microsoft’s open source efforts, we have an Open Source Programs Office (OSPO), whose goal is to help our employees consume and participate in open source safely, effectively, and easily.

Over the last year, we have heard from more and more enterprise customers—from retailers to banks to auto makers—who are looking to establish similar offices and practices internally. We share and discuss best practices on how to find the balance between setting policy while also empowering employees to do the right thing. While OSPOs will look different depending on your company’s needs, a few common practices we often discuss include creating a cross-functional group, setting clear policies (and making them easy to find and understand!), investing in tooling, and providing rewards and motivation. We’ve shared our guidance and policies and we look forward to continuing to build out our own internal practices, and to share our learnings along the way to help others do the same.

3. Securing every link in your supply chain is critical

Using open source in your development process has many advantages, including faster time to market, reduced cost of ownership, and improved software quality. However, open source, like any software, has its risks—open source can contain security defects that lead to vulnerabilities—and new research shows security vulnerabilities often go undetected for more than four years before being disclosed. Because open source software is inherently community-driven, there is no central or single authority responsible for quality and maintenance. Source code can be copied and cloned, leading to outsized complexity with versioning and dependencies. Worse yet, attackers can become maintainers and introduce malware.

As more systems and critical infrastructure increasingly rely on open source software, it’s more important than ever that we build better security through a community-driven process. Securing open source is an essential part of securing the supply chain for every company. In 2020, we came together alongside GitHub, Google, IBM and others to create the Open Source Security Foundation (OpenSSF). The group is helping developers with resources to identify security threats to open source projects, providing education and learning resources, and finding ways to speed up vulnerability disclosures. In the coming year, the OpenSSF looks to provide hands-on help to improve the security of the world’s most critical open source projects.

4. Over communicate

Big companies and big open source projects know that important information has to be communicated broadly and frequently across different channels. Even with this knowledge, Microsoft had to change rapidly this year just as so many other companies did. We no longer had moments of serendipitous interaction where you learn something helpful from bumping into someone in the coffee line, walking with a colleague to a meeting, or waiting with someone for the elevator.

This year, we learned the importance of over communication, which has been a hallmark of open source communities. Over communication is key because uncertainty can be more stressful than either good or bad news.

Take, for example, the Kubernetes project—it has never had an office, and today it has 407 chat channels, which run the gamut from regional user groups to developer discussions about particular technology subsystems. These chat rooms—whether they are IRC channels, Twitter hashtags, Teams, or Slack—*are* the offices of open source projects.

While chat rooms are the new water cooler, they are temporal and transient. They are not the new announcement email or documentation repository. In the same way that no one is expected to know what happened in every meeting or conversation in the office kitchen, few people read the history of chat rooms when they return to their desk. Understanding how communication has changed and what expectations are set for every medium allows internal communication to remain a critical support of a good collaborative culture.

Looking ahead to 2021, together

These four investment areas are just as important to good corporate culture and health as they are to open source collaboration. We strongly believe that most of the hard (and, by that, we mean interesting) problems of today will take a team or the whole industry to solve. This means we all need to be trustworthy and (corporately) self-aware participants in open source.

A few years ago if you wanted to get several large tech companies together to align on a software initiative, establish open standards, or agree on a policy, it would often require several months of negotiation, meetings, debate, back and forth with lawyers… and did we mention the lawyers? Open source has completely changed this: it has become an industry-accepted model for cross-company collaboration. When we see a new trend or issue emerging that we know would be better to work on together to solve, we come together in a matter of weeks, with established models we can use to guide our efforts.

As a result, companies are working together more frequently, and the amount of cross-industry work we’re able to accomplish is accelerating. In 2020 alone, Microsoft participated in dozens of industry groups, associations, and initiatives—from long-standing established organizations, like the Linux Foundation and Apache Foundation, to new emerging communities like Rust and WebAssembly. This work across companies and industries will continue in the year ahead and we look forward to learning, growing, and earning our place in open source.

Source: https://cloudblogs.microsoft.com/opensource/2021/01/14/four-open-source-lessons/

Enterprise Software Development Will Break the Speed Limit in 2021

For some, 2020 was the year for software development speed … but not for everyone. Startups and SMEs climbed on board the CI/CD, shift-left and agile development train en masse – and reaped the benefits of faster iterations and tighter release schedules.

But the “big kids” got left behind. Core enterprise applications – the legacy systems with 1+ million lines of code that do the heavy lifting in many corporations – have not yet made the switch to the world of rapid release cycles. And this is natural: enterprises are rightfully more cautious by nature, and the technology driving this evolution was not mature by enterprise standards.

But in 2021, that’s going to change.

Here are four speedy predictions for pushing enterprises over the speed limit in 2021, and some of the companies making it happen.

Four Enterprise Software Development Speed Predictions

#1: AI-based test generation will make enterprise development faster and better

Testing automation is crucial for any software development organization trying to transition to CI/CD. But enterprise stakes are arguably higher, and enterprise code is by nature inflated, often carrying significant technical debt. Preparing suitable test coverage manually is resource-heavy, to say the least. A new generation of AI-powered testing automation tools solves this problem for you: solutions from companies like Ponicode and Diffblue allow enterprises with large-code-base apps to embrace CI/CD by bridging the technical debt gap in testing while still meeting release cycles.

#2: Test avoidance tech will make a dent in unnecessary testing

Running tests on an entire build for one small code change is overkill – no one’s arguing that. But until recently, there wasn’t much of an alternative to ensure the ironclad quality that enterprises require. In 2021, solutions from companies like Sealights and Launchable (which counts among its investors Jenkins creator Kohsuke Kawaguchi) will help enterprise software development teams deliver quality at speed. The key is the effective use of machine learning to reduce test cycle times and run only relevant tests, not the entire test suite, to ensure faster iteration cycles.

#3: High Performance Computing will move toward distributed computing

Compute power will remain a bottleneck to development in 2021 – but the processing arms race will likely change. As enterprises get used to virtualizing their infrastructure, they will think twice about investing heavily in dedicated multicore build/render machines powered by next-gen processors. Instead, they will look to virtualized processing power like distributed computing, cloud bursting, and spot instances – which provide the massive scalability enterprises require at build time, while still maintaining a more reasonable price-performance ratio.

#4: Managed CI/CD in the cloud will automate enterprise release pipelines

Managed CI/CD services, like Amazon’s CodePipeline and Azure Pipelines, help enterprises save time setting up and maintaining CI/CD infrastructure, scaling up during peak times and maintaining security. These services, while not for everyone, will continue to gain popularity in 2021. A key reason for increased adoption is that they allow companies to scale much faster: without buying hardware or software, development teams can push a button and suddenly gain 20 more build servers. Moreover, managed CI/CD enables faster onboarding of new teams, simpler creation of repositories and more streamlined work across multiple geolocations. Finally, pay-as-you-go models, no upfront fees and no commitments are increasingly attractive to enterprises in tough times.

Put the Pedal to the Metal

Newly released AI-powered products and mature hosted services from big-name cloud providers are changing the face of enterprise-level development. In 2021, we’re going to see teams supporting backbone enterprise applications increasingly moving to CI/CD, shift left and agile development methodologies. Because everyone – even enterprises – deserves to break the speed limit.

Source: https://devops.com/enterprise-software-development-will-break-the-speed-limit-in-2021/