Making sense of Microsoft’s new certifications scheme


Over the past few years Microsoft has dramatically changed its approach to certification, moving away from qualifications connected to specific products to instead align them with common job roles. The idea is to provide experience-based learning delivered and assessed in small chunks, rather than forcing IT pros to cram for a long, theoretical exam every few years.

“We rebooted certifications around the modern jobs and roles that people have, as developers, as IT pros,” says Jeff Sandquist, corporate vice president of developer relations at Microsoft. “We worked with a set of industry partners and with various companies and enterprises [to ask]: What are the job tasks you need; what is that skill completion? What are the modern roles; what are the tasks that you need to complete as an individual? And then how do you go and validate that?”

The changes to certifications are part of a larger overhaul of how Microsoft delivers documentation, designs training, and assesses knowledge, with overlapping modules that add up to either preparation for exams you take to gain an initial certification or “knowledge checks” that count toward free annual renewals.

This “system of learning,” as Sandquist describes it, is available not just from Microsoft but also from training partners, with multiple ways for people to learn and training content aligned to what is covered in the certifications.

“If you want to go learn from in-person training, awesome. If you want to go learn from reading a book, that’s great. If you want to go to one of our third-party trainers or one of our online resource partners at Coursera or Pluralsight, awesome. You want to go to Microsoft Learn, that’s great,” he says of the various styles of training on offer for IT pros.

Sandquist hopes that even a company’s internal training will align with Microsoft’s vision, which is why the Learning module in Microsoft Viva shows content from Pluralsight, edX, Skillsoft, and Coursera, as well as Microsoft Learn. “We want it all in sync.”

Microsoft certification roles

Microsoft has centered its new certification scheme around Azure, grouping its certs into nine roles: administrator, developer, solutions architect, DevOps engineer, security engineer, data engineer, data scientist, AI engineer, and functional consultant. Most are self-explanatory, with functional consultant covering Dynamics 365 and the Power Platform.

Each certification role offers options at the Fundamentals, Associate, and Expert levels. Some also offer Specialties such as Azure IoT Developer, Azure for SAP Workloads, and the new Azure Virtual Desktop Specialty. Several top-level roles group together multiple paths, as noted below:

Administrator: Microsoft 365 Messaging; Microsoft 365 Modern Desktop; Microsoft 365 Security; Microsoft 365 Teamwork; Teams; Identity and Access; Information Protection; Enterprise; Azure Stack Hub
AI engineer: Azure
Data engineer: Azure
Data scientist: Azure
Developer: Azure; Dynamics 365; Microsoft 365; Power Platform
DevOps engineer: Azure
Functional consultant: Dynamics 365 Business Central; Dynamics 365 Customer Service; Dynamics 365 Field Service; Dynamics 365 Financial; Dynamics 365 Manufacturing; Dynamics 365 Marketing; Dynamics 365 Sales; Dynamics 365 Supply Chain; Power Platform
Security engineer: Azure; Identity and Access; Information Protection; Security Operations Analyst
Solution architect: Azure; Dynamics 365; Dynamics 365 plus Power Platform; Power Platform

Several new certifications have been added recently, as Microsoft works through the job task analysis for those roles. There’s a new Microsoft Teams Support Engineer Associate certification for support engineers that’s still in beta, and the exam for the Azure Network Engineer Associate will be available soon.

You can get a feel for the breadth of roles Microsoft is trying to cover with certifications by looking at the 20-plus roles you can use to filter the list of qualifications. These include app maker, business analyst, business owner, business user, data analyst, database administrator, network engineer, risk practitioner, student, and technology manager, in addition to the nine roles specified as paths above.

What replaces the MCSA, MCSE, or MTA?

The Azure Administrator Associate, Database Administrator Associate, and Data Analyst Associate certifications are the ones Microsoft highlights as the closest replacements for MCSE and MCSA certifications (and Azure Developer Associate for developer MCSA certifications), although they obviously cover cloud services rather than server products.

Exams for product-based MCSA certifications such as Windows Server and Exchange Server haven’t been available since January 2021, and the certifications have been retired. The exams for Microsoft Technology Associate (MTA) certifications that cover Windows and Windows Server (as well as network, security, database administration, and various programming topics) will be available until June 30, 2022. If you’ve already bought a voucher, you can take the exam before then and certifications will remain on your transcript, but you can no longer buy vouchers to take MTA exams.

Outside of the administrator and developer certifications, there are still some certifications that cover specific products: seven Microsoft Office Specialist (MOS) certifications cover Access, Excel, Word, and Office generally.

There are also Microsoft Learn courses that cover specific products such as Windows Server 2019 and Azure Stack HCI or technologies such as T-SQL in detail, and there will be more in-depth content from training partners such as Coursera and Pluralsight, Sandquist says. “There are going to be areas where they want to go deeper. People are going to want a 300 level [course].” So while the content on Microsoft Learn and from partners that’s based on the job task analysis will be in sync, “They will differentiate on what people need to go deep on — I need to go deeper in networking, I need to go deeper in hybrid or on-premise — and we will deliver those.”

But there are no exams or certifications associated with the product-specific courses, something IT professionals who want to demonstrate their expertise in these areas continue to raise as an issue.

Microsoft Fundamentals certifications

Exams at the Fundamentals level typically cost $99 and are intended to provide a firm grounding in the basics before moving on to Associate certifications, or to let those with little industry experience demonstrate skills and expertise to an employer. Fundamentals certs are also good for business leaders who want to show they know a particular platform well enough to make decisions about what services to adopt.

There are eight certifications available at the Fundamentals level.

Fundamentals certs don’t offer one-to-one mapping with the nine top-level roles. So devops, AI, and data engineers or data scientists who already know their field but are gaining Azure skills would all start with Azure Fundamentals. Azure Data Fundamentals, however, would be relevant for Azure Database Administrator Associate or Azure Data Engineer Associate certifications.

Microsoft Associate certifications

Not all Associate level certifications are equal: Some, like Azure Administrator Associate and Azure Developer Associate, are intended as a broad introduction for people who will then pick a more specific certification such as Azure Stack Hub Operator, Azure Security Engineer, or Azure AI Engineer Associate.

Specialty and Associate certifications require one or more exams, typically priced around $165 each. Microsoft training experts we spoke to previously had concerns about how well Associate certifications with only a single exam could prepare people for more complex roles, so it’s good to see these becoming more in-depth.

There are 39 Associate-level certifications currently on offer, ranging from Azure Administrator to Data Analyst to Word and Excel credentials, and beyond. Six Associate-level certs have already been moved to legacy status.

Microsoft Expert certifications

Expert-level certifications are more specialized, and so far there are only five, all building on one or more Associate certifications that you have to gain first: Azure Solutions Architect Expert, DevOps Engineer Expert, Microsoft 365 Enterprise Administrator Expert, Power Platform Solution Architect Expert, and Dynamics 365: Finance and Operations Apps Solution Architect Expert. These require two or more exams, priced around $165 each.

All these modern, role-based certifications cover either cloud services, or hybrid options where cloud services are used in conjunction with on-premises products — Microsoft 365 for Office and Windows, Azure, and Azure Stack Hub. There is one exam that specifically covers Windows 10 (MD-100), for administrators who deploy, configure, secure, or monitor devices and manage policies, updates, or apps, but it’s for the Modern Desktop Administrator Associate certification rather than a standalone option.

Microsoft certification renewals

Getting an initial certification means taking an exam, online or (pandemic permitting) in person. But because online training now includes sandboxed environments in which candidates can practice the skills they are learning, renewal doesn’t require repeating examinations to stay up to date as cloud services change.

The new, cloud-based Microsoft certifications are valid for one year from the date the certification was earned, rather than the previous two, to ensure that certifications cover new features and services as they’re introduced. But you can renew certs annually at no cost, up to six months before the certification expires, by taking an online assessment on Microsoft Learn. Once you pass the assessment, the certification is valid for another full year from when it was due to expire. This enables those who have two certifications expiring in the same month to stagger the assessments over the six-month window to pace themselves.
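The renewal timeline above works out as simple date arithmetic. Here is a minimal Python sketch using invented example dates (Microsoft’s actual renewal rules may have edge cases this ignores):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day to the month's end."""
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, days_in_month))

earned = date(2022, 3, 15)               # hypothetical date the cert was earned
expires = add_months(earned, 12)         # valid for one year: 2023-03-15
window_opens = add_months(expires, -6)   # free renewal window opens: 2022-09-15
renewed_until = add_months(expires, 12)  # a full year past the old expiry: 2024-03-15
```

The key detail is the last line: passing the Microsoft Learn assessment anywhere in the six-month window extends the certification from the old expiry date, not from the day the assessment was taken.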

Because Microsoft Learn is built on what Sandquist calls “micro-based learning” and interactive tasks, you can stay up to date incrementally — which is the way that cloud services change. “We have five- and ten-minute modules that are part of a broader learning path, and as you work through the learning path we do knowledge checks that aren’t just answers to questions,” he says. That might be deploying a VM on Azure or through the Microsoft Learn sandbox, with more experience points awarded for putting the VM in a different data center or for following security guidance.

Many of the tasks apply to multiple learning paths because they’re concepts that apply in multiple Azure services. “You learn how to do a particular task with identity or explain a concept, then pass a knowledge check. As people pick up a variety of skills, if you’ve done the work on identity, it’s checked off the next time you go through another learning path.” The platform keeps track of which modules and learning paths trainees have completed and which they still need to cover before renewing a certification and prompts them to take the extra modules.
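The bookkeeping described above, where a module completed on one learning path is credited on every other path that includes it, can be sketched as set arithmetic. The module and certification names here are invented for illustration:

```python
# Modules required to renew each certification (hypothetical names).
required = {
    "Azure Administrator Associate": {"identity-basics", "manage-vms", "networking"},
    "Azure Security Engineer Associate": {"identity-basics", "threat-protection"},
}

# Modules the trainee has finished, on whichever learning path.
completed = {"identity-basics", "manage-vms"}

def remaining_modules(cert: str) -> set:
    """Modules still needed before this certification can be renewed.
    A module finished on any learning path counts everywhere it appears."""
    return required[cert] - completed

print(remaining_modules("Azure Administrator Associate"))     # {'networking'}
print(remaining_modules("Azure Security Engineer Associate")) # {'threat-protection'}
```

Note how the shared "identity-basics" module is checked off for both certifications after being completed once.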

While Microsoft Learn isn’t the only way to achieve Microsoft certifications, it will be key to renewing them and it exemplifies what Microsoft is trying to achieve with this new approach. “It’s free, it’s interactive, and it’s always up to date,” Sandquist says. It’s also most useful for organizations that are adopting Microsoft cloud and hybrid services and staying up to date with them.


India government IT spending to grow 8.6% in 2022: Gartner


India government IT spending is projected to total $8.3 billion in 2022, an increase of 8.6% from 2021, according to the latest forecast from Gartner, Inc.

“Digitalization initiatives of Indian government organizations took a giant leap in 2020 because of the global pandemic. The pandemic forced the government to shift priorities as supply chains and revenue streams dwindled,” said Apeksha Kaushik, senior principal research analyst at Gartner. “As vaccination rates increase throughout the country and public health improves, the governments will focus on furthering the digitalization efforts on concerns such as ‘citizen experience’ and digital inclusion.”

Individual digital solutions do not correlate to overall digital maturity. As a result, the overall digital maturity of Indian government organizations is low compared with that of their Western counterparts. Moving from legacy systems to digital will be a major driver of IT spending growth in 2022. Initiatives such as digital licensing, online judicial proceedings, and digital taxation, which began as a knee-jerk reaction to the pandemic in 2020, still have a long way to go to reach their full potential, because digital inclusion has not yet been achieved across the country. The forthcoming 5G spectrum auction in India aims to address some of these digital-inclusion challenges in 2022.

Indian government organizations, both local and national, will increase spending on all segments of IT in 2022 except telecom services. The software segment is forecast to achieve the highest growth, 24.7%, in 2022 as the adoption of citizen service delivery applications using artificial intelligence and machine learning improves across citizen initiatives (see Table 1). As India prepares for its 5G rollout, the telecom market requires deep pockets to make an impact on innovation, quality of service to citizens, and pricing. Hence, investment in telecom services will be lower than in the other segments in 2022.

“In India, with increasing investments on cloud and cyber-security, the prime focus of IT spending by government organizations is on building collaborative partnerships, along with technology solutions. Government CIOs are looking beyond implementation for signs of the impact from the technology, outcome-based futuristic direction they should take and for IT technology/service providers that go beyond provision to partner and collaborate with them to achieve their mission critical priorities,” said Kaushik.

As cloud deployments and implementations further the digital agenda, privacy and security remain government CIOs’ top concerns in the country. The key technologies that government CIOs in India will prioritize in their 2022 spending are digital workplace and business continuity solutions, business intelligence and data analytics, responsible AI, and blockchain, along with improved data privacy and data-sharing tools.


Microsoft Azure Certifications- Choose your right learning path!


It is fair to say that 2020 was a year of transformation for the IT sector. Immense changes in IT work patterns and profiles, along with fast-paced development in the segment, have left many learning and training managers perplexed.

At present, L&D heads, managers, and people in similar roles find themselves in a conundrum. The questions spinning in their minds: How do we deliver? How do we bridge the skills gap? How do we gear up for present and future technology from an IT perspective?

On the other hand, we have IT professionals who are worried about advancing in their jobs and are waiting for deserving opportunities. Most are keen to acquire new certifications, upgrade their skills, or reskill, either through their company or at personal expense.

Sounds difficult or challenging? Yes, but Synergetics Learning provides a viable and reliable path forward. We make it possible to attain the desired learning targets, bridge the skills gap, and have certified resources ready to deliver. We know your deliverables and want to help guide you toward them.

The Skills Gap

As per industry reports:

  • Nearly 78% of worldwide IT managers report skills gaps.
  • 77% rate the risk that skills gaps pose to their team objectives as medium to high.
  • 68% of IT decision-makers anticipate new skills gaps in the next two years.

The rapid onset of new technologies gaining a viable presence in the market is leading to the emergence of new roles and job profiles, especially in niche areas. This fuels growing demand for certified technical professionals in those areas, and such professionals take time to develop.

For instance, the job profiles currently in high demand are associated with cloud and cybersecurity. We can confidently state that cross-certification is what most L&D managers are looking for: professionals certified in both cybersecurity and cloud, or as both Cloud Developer and DevOps Engineer.

For learning managers and HR heads, obtaining certified and knowledgeable resources is the need of the hour; when they can deliver resources as required, their duties and targets are perfectly in sync.

We at Synergetics propose to bridge the skills gap through the most preferred training channels, leading to certification and the delivery of certified, billable resources.

Preferred Training, Certification and Methodology

Most organizations believe in offering formal training to their existing workforce, which may also entail reskilling. Persona- and role-specific onboarding solutions are given serious thought as well.

More than 60% of IT professionals prefer formal training over informal training, emphasizing instructor-led teaching methods. The remaining professionals prefer informal learning systems such as micro-learning and self-paced, on-demand study. We, however, believe in a blended learning approach that combines formal and informal training.

Nevertheless, training with certification is the key goal for most L&D heads and managers. They reiterate that certified team players definitely add value to their segment, value that goes beyond the cost of certification.

So, let us begin with technology certification.

Technology certification implies having knowledge and expertise in a particular technology (especially an emerging one), which means ease in acquiring new jobs, better pay hikes, or work on a new project or assignment.

At the individual level, this means enhanced visibility, acquired expertise, and, of course, standing apart from the rest of the crowd.

We have an interesting set of statistics highlighting why certification matters a great deal. More than 50% of tech professionals reported increased self-confidence and capability to deliver, and said their expertise was valued more at their workplace. More than 35% of these professionals saw increased earnings, and hiring managers said certified employees added immense value to their organizations.

  • Microsoft Azure Developer (AZ-204: Microsoft Certified: Azure Developer Associate). Validates the candidate’s expertise in developing solutions for Microsoft Azure, covering storage, security, and monitoring and optimizing Azure solutions while connecting Azure and third-party services.
  • Microsoft Azure Administrator (AZ-104: Microsoft Certified: Azure Administrator Associate). Validates the candidate’s experience in managing Azure identities and governance, storage, Azure compute resources, and virtual networking, and in backing up Azure resources.
  • Microsoft Azure Architect (AZ-303: Microsoft Azure Architect Technologies; AZ-304: Microsoft Azure Architect Design). AZ-303 covers implementing and monitoring an Azure infrastructure, including management and security solutions, data platforms, and apps. AZ-304 certifies an advisory role: designing reliable, scalable, and secure cloud solutions that meet business requirements for enhanced performance across the Microsoft Azure platform.
  • Data Engineer certification path (DP-203: Data Engineering on Microsoft Azure). Designed for students who want to attain the Microsoft Certified: Azure Data Engineer Associate certification; covers the content for Exam DP-203, including designing and implementing data storage (40-45%) and designing and developing data processing (25-30%).
  • Microsoft Data Science (DP-100: Designing and Implementing a Data Science Solution on Azure). Covers setting up an Azure Machine Learning workspace and exploring data science workloads across the Azure platform, including data experiments and training, managing, deploying, and consuming models.
  • Administering Relational Databases on Microsoft Azure (DP-300). Provides the knowledge and skills to administer a SQL Server database infrastructure for cloud, on-premises, and hybrid relational databases, and to work with Microsoft’s PaaS relational database offerings. Also useful for individuals who develop applications that deliver content from SQL-based relational databases.
  • AI Engineer certification path (AI-102: Designing and Implementing an Azure AI Solution). Covers analyzing solution requirements, designing AI-based solutions, deploying and managing them, and developing a custom API to meet business requirements using the available data options in Azure AI.
  • DevOps Engineer Expert (AZ-400: Designing and Implementing Microsoft DevOps Solutions). For those with experience in Azure administration and development; covers developing an instrumentation strategy, a Site Reliability Engineering (SRE) strategy, and a security and compliance plan, along with processes for communication, collaboration, and integration.
  • Security Engineer certification path (AZ-500: Microsoft Azure Security Technologies). Covers managing identity and access and security operations while securing data and applications; prepares the candidate for security controls and threat protection.

*Note: This list is for your reference. We cover all Microsoft Azure certification topics. Also, we have Certification + Addon (customizable) solutions for your cloud teams.

How to register for and take a Microsoft Certification Exam

  • Select the Microsoft online exam you wish to take from the exam list, then select the Schedule exam button.
  • Verify your profile and continue to the Pearson VUE site to schedule the exam.
  • If online exams are available in your country, select the “At my home or office” option.
  • Accept the online exam policies.
  • Select the language in which to communicate with the proctor.
  • Select the exam date and time.
  • Verify and click “Proceed to Checkout” to make the payment and confirm the appointment.

Using ML/AI to Support Infrastructure Monitoring


Successful infrastructure monitoring enables IT teams to ensure constant uptime and performance of their company’s systems. Technologies like machine learning (ML) and artificial intelligence (AI) benefit infrastructure monitoring by more quickly collecting and analyzing data from all of the hardware and software components that comprise the IT stack. Infrastructure changes are occurring faster than ever before, but complex systems, the unique nature of applications and lack of IT skillsets can cause challenges when integrating with these newer technologies. However, it’s more important than ever that sysadmins and DevOps teams understand how ML and AI can mitigate these roadblocks, support them in staying on top of infrastructure performance and rapidly address issues that arise.

Intelligent Monitoring Support for Complex Systems

The most tangible result of intelligent infrastructure monitoring tools and processes is near-immediate alerting of performance and uptime issues, which can then be addressed in an efficient and effective manner so no business interruptions occur. However, complex systems can stunt these benefits if ML and AI are not being used and manual monitoring protocols are still in place.

Tools that use ML or AI lessen the work of IT staff immensely, freeing up critical business resources and aiding in overall productivity. Both technologies can automatically identify and update all IT stacks that comprise an enterprise’s infrastructure to keep systems up-to-date and aligned with established key performance indicators (KPIs). In addition, intelligent offerings can detect and factor those metrics against set standards so that early alerts to an “unhealthy” section of infrastructure can be identified, even as the IT stack is constantly changing. This drastically speeds up troubleshooting efforts.
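A minimal sketch of the kind of baseline-versus-threshold check such tools automate is shown below; the metric values, window size, and z-score threshold are invented for illustration:

```python
from statistics import mean, stdev

def health_alerts(samples, window=10, z_threshold=3.0):
    """Flag metric samples that deviate sharply from a rolling baseline.

    `samples` is a list of numeric readings (e.g., request latency in ms).
    Each point is compared against the mean/stdev of the preceding window;
    a z-score above the threshold raises an early "unhealthy" alert, even
    as the baseline itself keeps shifting with the IT stack.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            alerts.append((i, samples[i]))
    return alerts

# Steady latency around 100 ms, then a sudden spike at index 12.
latency_ms = [100, 102, 99, 101, 100, 98, 103, 100, 99, 101, 100, 102, 450, 101]
print(health_alerts(latency_ms))  # [(12, 450)]
```

Production tools replace the fixed window and threshold with learned, per-metric baselines, but the shape of the check, compare each reading against what "healthy" has recently looked like, is the same.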

Differentiation in Applications

The different applications supported by the various IT stacks will most often have unique service-level agreements (SLAs) for their performance and uptime, as well as remedies or penalties should those service levels not be achieved. Plus, system loads that stress the underlying infrastructure are frequently changed. For these reasons, it is important to identify what constitutes a “healthy” IT stack so that these minute parts of the infrastructure are not overlooked due to the variation involved.

ML and AI can be programmed to track system baselines that support a “healthy” IT stack. These technologies are particularly great at finding novel and unusual patterns in data. As the monitoring and observability landscape becomes more complex over time, driven by real changes in how developers build applications and systems, the ability to spot and detect such patterns in data can be crucial in helping make sense of it, further cutting down efforts on manual searching, detective work and “death by dashboards,” which we’ve all experienced at one time or another.

Supporting IT Team Skills with Intelligence Technology

The role of sysadmins—and to a greater extent, developers—has shifted over the past few years to become nearly as complex as the infrastructure they oversee. Nowadays, it seems as though developers are required to have expertise in all aspects of infrastructure, from monitoring to Kubernetes to machine learning. This can take quite the toll on developers who possess such skills, but in a more realistic sense, developers that can do all these things are very hard to come by. The lack of these skillsets is pervasive in the industry, which is why ML and AI can be seen as supporting technologies—they can fill in these gaps, to an extent.

With built-in intelligence and automation, ML/AI can enable even the most inexperienced sysadmin or DevOps professional to monitor complicated infrastructure like a pro, taking on most of the time-intensive work around collecting and analyzing the data and identifying where to troubleshoot. The main goal is to put humans in the driver’s seat, utilizing ML and AI for granular discovery of system issues, providing the metrics or charts that might be most relevant to IT staff as they troubleshoot their system and reducing the cognitive load of developers.

With the vast benefits that intelligent technologies possess, integrating them into your IT stack can help mitigate the challenges posed by complex systems, application differentiation, and the skills deficit in the IT team. The important ingredient in making ML and AI effective in infrastructure monitoring is using tools that incorporate the right formulas, algorithms, and automation to best achieve your desired outcome.


“Synergetics offers innovative persona-based learning solutions for the best engagement and experience for learners”

How and Why Persona-based learning solutions are important

A quality onboarding solution gives much weight to employee engagement in determining its success. As a veteran provider of learning services, Synergetics constantly innovates its onboarding solutions to deliver zealous, productive teams with a high engagement factor.

We go to great lengths to make onboarded freshers feel valued and resourceful. A quality investment by the company in them serves the best interests of both the organization and the individuals.

Synergetics develops onboarding solutions with the outcomes our customers most desire in mind: immediate delivery from onboarded freshers, a better, well-cultivated work culture, and, of course, lower attrition.

Over more than two decades of delivery, we have garnered the right skillsets to bridge the gap between expectations and the delivery of these values. Hence our onboarding solutions are a class apart, and we are willing to share some of our trade secrets with you.

To begin with, our approach is quite transparent: we work with our customers to design and develop our onboarding solutions based on the following mind exercises.

Synergetics Onboarding Solutions

Prospect Mapping

In the initial stages, a vision is an imagined scene; realizing it presents a different picture and a challenging path forward. It is therefore essential that the organization’s vision be relayed well to its freshers, who should relate to it, so they can contribute aptly to realizing that vision through the investment of key resources and their deliverables.

Our onboarding solution focuses on the prospect-mapping aspect.

The purpose of the prospect map is to assimilate the workforce, processes, data, and technology with the organization’s culture and mission.

Creation of Personas

Persona is a self-explanatory term: it means someone resembling an actual individual, and in reality no two individuals are exactly the same.

With regards to onboarding solutions, Synergetics takes additional effort to identify the participants of the onboarding solutions. In most cases, the onboarding solutions are designed for a set of fresh recruits but the batch of fresh recruits keeps changing!

This means the attributes of a previously onboarded batch of freshers may differ from those of the current one, so the onboarding solution has to be tailor-made for the current batch for precise delivery. This brings us to the necessity of developing personas.

Over years of onboarding experience, we have found that although most organizations prefer to reuse an existing onboarding solution, doing so rarely works. The premise “one size fits none” is absolutely correct.

We go to great lengths to develop personas that help us formulate and customize onboarding solutions for apt delivery. Our persona-based delivery overcomes the key challenges of a varied persona base while the final outcome matches the desired delivery outcomes.

Though most organizations consider it an additional task or effort, it helps the onboarding service provider deliver a meticulously developed, on-target program.

Path Forward

Synergetics Learning takes into consideration the personas of the participants of onboarding solutions while developing the path forward and realizing the onboarding program.

Apart from the quantifiable targets, the qualitative targets and parameters will be in perfect sync with the imparted knowledge, skills, and talents, and with the organization’s prospects as envisioned.

These three mind exercises form the basis of our onboarding program. Going further, we delve into the following set of questions:
  • Do the existing process flows require remapping?
  • Is the suggested technology capable of achieving the chosen prospect?
  • Is it possible to have a roadmap and implementation schedule in place?
  • How does the cost compare with the expected success of the onboarding solution?

About Synergetics

Synergetics is a learning solutions firm focused on helping small and medium enterprises through the Big Data revolution. We offer a gamut of services, including learning delivery across various data-oriented profiles such as DevOps, Data Analyst, Data Engineer, and Data Scientist, along with learning solutions for other emerging technologies.

Each of our delivery services is centered on quality learning with billable resources, while our reskilling and upskilling services focus on seamless, cost-effective migration to new technologies. Synergetics offers bespoke and viable onboarding solutions for infinite learning on emerging technology. For more details, call or WhatsApp us on +91 8291362058, or visit our contact page to discuss your learning requirements.

Critical Vulnerability Affects Millions of IoT Devices

Picture source: OSRAM

Mandiant, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), and Internet of Things provider ThroughTek have disclosed a critical vulnerability affecting millions of IoT devices that could let attackers spy on video and audio feeds from Web cameras, baby monitors, and other devices.

CVE-2021-28372 was discovered by Mandiant’s Jake Valletta, Erik Barzdukas, and Dillon Franke, and it exists in several versions of ThroughTek’s Kalay protocol. It has been assigned a CVSS score of 9.6.

The Kalay protocol is implemented as a software development kit (SDK) that is built into client software, such as a mobile or desktop application, and networked IoT devices such as smart cameras. ThroughTek claims to have more than 83 million active devices and at least 1.1 billion monthly connections on its platform, and its clients include IoT camera manufacturers, smart baby monitors, and digital video recorder (DVR) products.

Because the Kalay protocol is integrated by OEMs and resellers before devices reach consumers, the researchers who discovered the vulnerability were unable to determine a complete list of devices and organizations it affects.

This isn’t the first ThroughTek flaw disclosed this year. In May 2021, researchers with Nozomi Networks disclosed a security camera vulnerability affecting a software component from ThroughTek. Unlike that flaw, CVE-2021-28372 allows attackers to communicate with devices remotely, control devices, and potentially conduct remote code execution.

Mandiant researchers used two approaches to analyze the protocol. They first downloaded and disassembled applications from Google Play and the Apple App Store that contained ThroughTek libraries. They also bought different Kalay-enabled devices, on which they conducted local and hardware-based attacks to obtain shell access, recover firmware images, and perform more dynamic testing.

Over a series of months, the team created a functional implementation of the Kalay protocol, which enabled them to perform device discovery, device registration, remote client connections, and authentication, and to process audio and video data on the network. Their familiarity with the protocol allowed them to focus on identifying logic and flow vulnerabilities in it.

CVE-2021-28372 affects how Kalay-enabled devices access and join the Kalay network, the Mandiant team explains in a blog post on their findings. They found device registration only requires a device’s 20-byte unique assigned identifier (UID) to access a network. The UID is usually provided to a Kalay-enabled device from a Web API hosted by the product’s seller.

If attackers gain access to the UID of a target device, they can register that device with the same UID on the network and cause the Kalay servers to overwrite the existing device. With this done, attempts at a client connection to access the victim UID will redirect to the attackers. The attackers can continue the connection and access the username and password needed to log in to the device.

“With the compromised credentials, an attacker can use the Kalay network to remotely connect to the original device, access AV data, and execute [remote procedure call] calls,” the researchers write. “Vulnerabilities in the device-implemented RPC interface can lead to fully remote and complete device compromise.”

A successful attack would require “comprehensive knowledge of the Kalay protocol” as well as the ability to create and send messages, researchers note. The attackers would need to obtain Kalay UIDs via social engineering or vulnerabilities in the APIs and services that return Kalay UIDs. This would allow them to attack devices linked to the UIDs they have.
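The registration weakness described above can be illustrated with a toy model. The sketch below is hypothetical: it mimics only the logic flaw, not ThroughTek’s actual code, and all class names, UIDs, and addresses are invented for illustration.

```python
# Toy model of the registration flaw described above. This is a hypothetical
# sketch of the logic flaw, not ThroughTek's implementation; all names,
# UIDs, and addresses are invented for illustration.

class ToyRegistry:
    """Device registry in which the newest registration for a UID wins."""

    def __init__(self):
        self.routes = {}  # UID -> address of the registered endpoint

    def register(self, uid, address):
        # The core flaw: registration requires only knowledge of the UID,
        # with no proof that the caller is the legitimate device.
        self.routes[uid] = address

    def connect(self, uid):
        # Clients are routed to whoever registered the UID most recently.
        return self.routes.get(uid)

registry = ToyRegistry()
registry.register("ABCDEF0123456789ABCD", "device.local")     # legitimate device
registry.register("ABCDEF0123456789ABCD", "attacker.remote")  # attacker re-registers

# A victim client connecting to its own device is now routed to the attacker,
# who can capture the username and password sent on login.
print(registry.connect("ABCDEF0123456789ABCD"))  # attacker.remote
```

The fix ThroughTek recommends (Authkey and DTLS) amounts to making registration require more than the UID alone.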

Mitigations for Vulnerable Devices
Mandiant disclosed the vulnerability along with ThroughTek and CISA. Organizations using the Kalay protocol are advised to adopt the following guidance from ThroughTek and Mandiant:

If the implemented SDK is below version 3.1.10, upgrade the library to a newer version and enable the Authkey and Datagram Transport Layer Security (DTLS) features the Kalay platform provides. If the implemented SDK is version 3.1.10 or above, enable Authkey and DTLS. Companies are also advised to review the security they have in place on APIs or other services that return Kalay UIDs.

Mandiant urges IoT device owners to keep their software and applications up to date and use complex, unique passwords for accounts associated with their devices. Further, they should avoid connecting to vulnerable devices from untrusted networks, such as public Wi-Fi.

For manufacturers, the company recommends applying controls around the Web APIs used to obtain Kalay UIDs, usernames, and passwords, as this would limit attackers’ ability to access the data they need to remotely access target devices.

“CVE-2021-28372 poses a huge risk to an end user’s security and privacy and should be mitigated appropriately,” the researchers write. “Unprotected devices, such as IoT cameras, can be compromised remotely with access to a UID and further attacks are possible depending on the functionality exposed by a device.”

CISA has also issued an advisory warning of the ThroughTek flaw.


Big Data: A Big Introduction

Image source: Learn Hub

The digital universe is continuously expanding—just like the physical universe, except that the digital world alone has generated more data than the number of stars in the entire observable physical universe.

44 zettabytes! That’s 44 followed by 21 zeros (44×10²¹ bytes), roughly 40 times more bytes than there are stars in the observable universe.

Image source: Eyesdown Digital

By 2025, there will be 175 zettabytes of data in the global datasphere. The growth in data volume is exponential.

All of this data is aptly called Big Data. In this article, we will:

  • Introduce Big Data
  • Explain core concepts
  • Compare small and thick data
  • Highlight the latest Big Data trends for business
  • Point you to plenty of resources

What is Big Data?

Big Data is the term for information assets (data) characterized by high volume, velocity, and variety, which are systematically extracted, analyzed, and processed for decision making or control actions.

The characteristics of Big Data make it virtually impossible to analyze using traditional data analysis methods.

The importance of big data lies in the patterns and insights, hidden in large information assets, that can drive business decisions. When extracted using advanced analytics technologies, these insights help organizations understand how their users, markets, society, and the world behave.

3 Vs of Big Data

For an information asset to be considered Big Data, it must meet the 3-V criteria:

  • Volume. The size of data. High volume data is likely to contain useful insights. A minimum threshold for data to be considered big usually starts at terabytes and petabytes. The large volume of Big Data requires hyperscale computing environments with large storage and fast IOPS (Input/Output Operations per Second) for fast analytics processing.
  • Velocity. The speed at which data is produced and processed. Big Data is typically produced in streams and is available in real-time. The continuous nature of data generation makes it relevant for real-time decision-making.
  • Variety. The type and nature of information assets. Raw big data is often unstructured or multi-structured, generated with a variety of attributes, standards, and file formats. For example, datasets collected from sensors, log files, and social media networks are unstructured. So, they must be processed into structured databases for data analytics and decision-making.
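To make the “variety” point concrete, here is a small sketch of turning unstructured log lines into structured records ready for a database. The log format here is made up for illustration:

```python
import re

# Hypothetical log format for illustration: "<timestamp> <LEVEL> <message>".
LOG_PATTERN = re.compile(r"(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<msg>.*)")

def structure(lines):
    """Parse unstructured log lines into structured records (dicts)."""
    records = []
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            records.append(match.groupdict())
    return records

raw = [
    "2021-08-01T10:00:00Z ERROR disk full",
    "2021-08-01T10:00:05Z INFO backup complete",
]
print(structure(raw))
```

At Big Data scale, the same transformation runs continuously in a stream-processing pipeline rather than over a small list.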

More recently, two additional Vs help characterize Big Data:

  • Veracity. The reliability or truthfulness of data. The extent to which the output of big data analysis is pertinent to the associated business goals is determined by the quality of data, the processing technology, and the mechanism used to analyze the information assets.
  • Value. The usefulness of Big Data assets. The worthiness of the output of big data analysis can be subjective and is evaluated based on unique business objectives.
Image Source: bmc Blogs

Big data vs small data vs thick data

In contrast to these characteristics, there are two other forms of data: small data and thick data.

Small Data

Small Data refers to manageable data assets, usually in numerical or structured form, that can be analyzed using simple technologies such as Microsoft Excel or an open source alternative.

Thick Data

Thick Data refers to text or qualitative data that can be analyzed using manageable manual processes. Examples include:

  • Interview questions
  • Surveys
  • Video transcripts

When you use qualitative data in conjunction with quantitative big data, you can better understand the sentiment and behavioral aspects that can be easily communicated by individuals. Thick Data is particularly useful in the domains of medicine and scientific research where responses from individual humans hold sufficient value and insights—versus large big data streams.

Big Data trends in 2021-2022

Big Data technologies are continuously improving. Indeed, data itself is fast becoming the most important asset for a business organization.

The prevalence of the Internet of Things (IoT), cloud computing, and Artificial Intelligence (AI) is making it easier for organizations to transform raw data into actionable knowledge.

Here are three of the most popular big data technology trends to look out for in 2021:

  • Augmented Analytics. The Big Data industry will be worth nearly $274 billion by the end of 2021. Technologies such as Augmented Analytics, which help organizations with the data management process, are projected to grow rapidly and reach $18.4 billion by the year 2023.
  • Continuous Intelligence. Integrating real-time analytics to business operations is helping organizations leapfrog the competition with proactive and actionable insights delivered in real-time.
  • Blockchain. Stringent legislations such as the GDPR and HIPAA are encouraging organizations to make data secure, accessible, and reliable. Blockchain and similar technologies are making their way into the financial industry as a data governance and security instrument that is highly resilient and robust against privacy risks. This EU resource discusses how blockchain complements some key GDPR objectives.


Key essentials of continuous monitoring in DevOps

Continuous Monitoring (CM), or Continuous Control Monitoring (CCM), is a process put forth by DevOps personnel to help them notice, observe, and detect security threats, compliance issues, and much more during every phase of the DevOps pipeline. It is an automated process that works seamlessly alongside other DevOps operations.

On similar lines, this process can be implemented across other segments of the IT infrastructure for in-depth monitoring across the organization. It comes in useful for observing and analyzing key metrics and for resolving certain issues in real time.

Managing the various segments of an enterprise IT infrastructure is a huge responsibility. Hence, most DevOps teams have in place a Continuous Monitoring process that accesses real-time data across both hybrid and public cloud environments to minimize security breaches.

The CM process helps the DevOps team locate bugs and put in place viable solutions that fortify IT security to the highest possible degree. Typical measures include threat assessment, incident response, database forensics, and root cause analysis.

The CM process can be extended to offer data on the health and workings of the deployed software, offsite networks, and IT setup.

When is CM (Continuous Monitoring) or CCM (Continuous Controls Monitoring) introduced?

The DevOps team introduces Continuous Monitoring at the end stage of its pipeline, i.e., after the software is released to production. The CM process notifies the dev and QA teams of key issues that arise in the production environment, helping the relevant and responsible people fix errors as quickly as possible.

Objectives of the CM Process in DevOps

  • Useful in tracking user behaviour on a site or app that has just been updated. It helps in ascertaining whether the update has a positive, negative, or neutral effect on the user experience.
  • It comes in handy for locating performance issues in software operations. It helps in detecting the cause of an error and identifying a suitable solution before the issue hampers uptime and revenue.
  • The CM process is designed to improve the visibility and transparency of network and IT operations with regard to likely security breaches, and to ensure their resolution through a well-tuned alert protocol.

Depending on the organization’s business, best practices need to be implemented in the key areas of server health, application performance, development milestones, and user behaviour and activity.

Let us now move on to the different role-specific continuous monitoring processes especially in infrastructure, networks and applications – the core activities of the IT department in any organization. 

Monitoring of Application

This CM process keeps track of the released software’s behaviour against the following parameters: system response, uptime, transaction time and volume, API response, and both back-end and front-end stability and performance.

The Application Monitoring CM should be equipped with tools that monitor:

  • User Response Time
  • Browser Speed
  • Pages With Low Load Speed
  • Third-Party Resource Speed
  • End-User Transactions
  • Availability
  • Throughput
  • SLA Status
  • Error Rate
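Two of the metrics above, error rate and a 95th-percentile response time, can be computed from a batch of request records as sketched below. The record shape is an assumption for illustration:

```python
# Hedged sketch: computing error rate and 95th-percentile response time
# from a batch of request records. The record shape is illustrative.

def error_rate(requests):
    """Fraction of requests that returned a server error (HTTP 5xx)."""
    errors = sum(1 for r in requests if r["status"] >= 500)
    return errors / len(requests)

def p95_response_time(requests):
    """Nearest-rank 95th percentile of response times in milliseconds."""
    times = sorted(r["ms"] for r in requests)
    index = max(0, int(round(0.95 * len(times))) - 1)
    return times[index]

requests = [
    {"status": 200, "ms": 120}, {"status": 200, "ms": 90},
    {"status": 500, "ms": 400}, {"status": 200, "ms": 150},
]
print(error_rate(requests))         # 0.25
print(p95_response_time(requests))  # 400
```

In a real CM setup these calculations run continuously over a sliding window and feed the alerting thresholds rather than a one-off batch.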

Monitoring of Infrastructure

This process includes collecting and examining data on the performance of data centers, hardware, software, storage, servers, and other vital components of the IT ecosystem. The focus of the infrastructure monitoring process is to measure how well the IT infrastructure supports the delivery of products and services, and to improve its performance.

The Infrastructure Monitoring CM should include tools to check

  • Disk Usage and CPU
  • Database Health
  • Server Availability
  • Server & System Uptime
  • Response Time to Errors
  • Storage
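A minimal sketch of two checks from this list, using only the Python standard library. The thresholds, hosts, and ports are illustrative:

```python
import shutil
import socket

# Illustrative infrastructure checks: disk usage and server availability.
# Thresholds, hosts, and ports are example values, not recommendations.

def disk_usage_percent(path="/"):
    """Return the percentage of the filesystem at `path` that is used."""
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def server_available(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if disk_usage_percent("/") > 90:  # example alert threshold
    print("ALERT: disk nearly full")
print("web server up:", server_available("example.com", 80))
```

A real CM tool would run these checks on a schedule and route failures into the alerting pipeline instead of printing them.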

Monitoring of Networks

The network is a complex array of routers, switches, servers, VMs, and firewalls, each of which has to work in perfect conjunction and coordination. The continuous monitoring process focuses on detecting both current and potential issues in the network and alerting network professionals. The primary aim of this CM is to prevent network crashes and downtime.

The Network Monitoring process needs to be empowered with tools to monitor:

  • Server bandwidth
  • Multiple port level metrics
  • CPU use of hosts
  • Network packets flow
  • Latency
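Latency, for instance, can be approximated by timing a TCP connection, as in this illustrative sketch (host and port are example values):

```python
import socket
import time

# Illustrative latency probe: measures TCP connect time to a host/port as a
# rough proxy for network latency. Host and port are example values.

def tcp_latency_ms(host, port, timeout=2.0):
    """Return the TCP connect time in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None  # unreachable within the timeout

latency = tcp_latency_ms("example.com", 443)
print(f"latency: {latency:.1f} ms" if latency is not None else "unreachable")
```

Dedicated monitoring tools use ICMP and per-port SNMP counters for the other metrics in the list; this connect-time probe is just the simplest possible stand-in.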

Over and above these CMs, every DevOps team should have in place a full-stack monitoring tool capable of monitoring the entire IT stack in terms of security, user permissions, process-level usage, significant performance trends, and network switches. This full-stack CM should not only raise alerts on issues but also offer to resolve them with suitable resources.

Correlation between Risk Management and Continuous Monitoring

No two organizations or enterprises are the same, even if they are identical on certain parameters. Similarly, different risks exist for every organization or entity with an IT infrastructure.

The DevOps team has to select the most suitable monitoring tools and place them within the CM process for the best outcomes. This is possible only if the team conducts a thorough check of the risk factors, governance, and existing compliance systems before choosing the monitoring tools.

We present a brief overview of the questions that should be considered along with the tools for the monitoring process.

  • What are the risks faced by the organization?
  • Which parameters can be used to calculate the risks?
  • What is the extent of these risks? Is the organization adequately resilient to face and emerge out of these risks?
  • In the event of a software failure, hardware failure, or security breach, what could the consequences be?
  • Does the organization have the desired confidentiality controls for the data it collects and generates?
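The answers to these questions can feed a simple risk-scoring exercise. The sketch below is hypothetical: each risk is rated for likelihood and impact on a 1–5 scale, and the product ranks where monitoring effort should go first:

```python
# Hypothetical risk-scoring sketch for the questions above. Each risk gets a
# likelihood and impact rating (1-5); the product prioritizes monitoring.
risks = [
    {"name": "security breach",  "likelihood": 3, "impact": 5},
    {"name": "hardware failure", "likelihood": 2, "impact": 4},
    {"name": "software failure", "likelihood": 4, "impact": 3},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Highest-scoring risks should get monitoring tools and alerts first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(risk["name"], risk["score"])
```

The ratings themselves come out of the governance and compliance review the section describes; the arithmetic only makes the prioritization explicit.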

Lastly, we conclude with yet another set of valuable takeaways of Continuous Monitoring.

Possibility of Speedy Responses

With the most suitable CM in place, the alert system can notify the concerned department of a threat immediately, preventing the mishap while the systems are set right and normal functioning is restored within a minimal time gap.

Negligible System Downtime

A comprehensive network CM is equipped with the right set of tools and alerts to maintain the system’s uptime performance, especially in the event of a service outage or application performance issues.

Network Transparency and Visibility

A well-defined CM ensures ample transparency through data collection and analysis, revealing possible outages and other network-related trends.

Continuous Monitoring is a must-have for almost every organization that wants smooth and seamless operations. At the same time, the DevOps team should ensure the CM process works in a nonintrusive manner.

New software products should be implemented only after thorough real-time testing, and they should in no way create an extra burden on the QA team.

Moreover, DevOps teams should focus on delivering software products that are scalable, secure, and geared towards improving the efficiency of the organization.

Continuous Monitoring is essential to every DevOps pipeline for achieving a better-quality product with scalable, efficient performance. It gives a fair overview of the servers, cloud environments, and networks that are crucial for business performance, security, and operations.

Which solutions do we have for you?

Synergetics provides DevOps-based offerings with which you can gain deeper knowledge of this technology. They can help any business, as well as individual DevOps professionals, grow with this highly in-demand emerging technology. You can also choose to develop your skills with Microsoft DevOps certifications. In short, you can consider us your 360-degree solution provider and fulfil any of your technological needs with our expert solutions.

With Azure Percept, Microsoft adds new ways for customers to bring AI to the edge

Elevators that respond to voice commands, cameras that notify store managers when to restock shelves and video streams that keep tabs on everything from cash register lines to parking space availability.

These are a few of the millions of scenarios becoming possible thanks to a combination of artificial intelligence and computing on the edge. Standalone edge devices can take advantage of AI tools for things like translating text or recognizing images without having to constantly access cloud computing capabilities.

At its Ignite digital conference, Microsoft unveiled the public preview of Azure Percept, a platform of hardware and services that aims to simplify the ways in which customers can use Azure AI technologies on the edge – including taking advantage of Azure cloud offerings such as device management, AI model development and analytics.

Roanne Sones, corporate vice president of Microsoft’s edge and platform group, said the goal of the new offering is to give customers a single, end-to-end system, from the hardware to the AI capabilities, that “just works” without requiring a lot of technical know-how.

The Azure Percept platform includes a development kit with an intelligent camera, Azure Percept Vision. There’s also a “getting started” experience called Azure Percept Studio that guides customers with or without a lot of coding expertise or experience through the entire AI lifecycle, including developing, training and deploying proof-of-concept ideas.

For example, a company may want to set up a system to automatically identify irregular produce on a production line so workers can pull those items off before shipping.

Azure Percept Vision and Azure Percept Audio, which ships separately from the development kit, connect to Azure services in the cloud and come with embedded hardware-accelerated AI modules that enable speech and vision AI at the edge, or during times when the device isn’t connected to the internet. That’s useful for scenarios in which the device needs to make lightning-fast calculations without taking the time to connect to the cloud, or in places where there isn’t always reliable internet connectivity, such as on a factory floor or in a location with spotty service.

Image showing Azure Percept devices, including the Trust Platform Module, Azure Percept Vision and Azure Percept Audio.
The Azure Percept platform makes it easy for anyone to deploy artificial intelligence on the edge. Devices include a Trusted Platform Module (center), Azure Percept Audio (left) and Azure Percept Vision (right). Photo credit: Microsoft

In addition to announcing hardware, Microsoft is working with third-party silicon and equipment manufacturers to build an ecosystem of intelligent edge devices certified to run on the Azure Percept platform, Sones said.

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” she said. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Making AI at the edge more accessible

The goal of the Azure Percept platform is to simplify the process of developing, training and deploying edge AI solutions, making it easier for more customers to take advantage of these kinds of offerings, according to Moe Tanabian, a Microsoft vice president and general manager of the Azure edge and devices group.

For example, most successful edge AI implementations today require engineers to design and build devices, plus data scientists to build and train AI models to run on those devices. Engineering and data science expertise are typically unique sets of skills held by different groups of highly trained people.

“With Azure Percept, we broke that barrier,” Tanabian said. “For many use cases, we significantly lowered the technical bar needed to develop edge AI-based solutions, and citizen developers can build these without needing deep embedded engineering or data science skills.”

The hardware in the Azure Percept development kit also uses the industry standard 80/20 T-slot framing architecture, which the company says will make it easier for customers to pilot proof-of-concept ideas everywhere from retail stores to factory floors using existing industrial infrastructure, before scaling up to wider production with certified devices.

As customers work on their proof-of-concept ideas with the Azure Percept development kit, they will have access to Azure AI Cognitive Services and Azure Machine Learning models as well as AI models available from the open-source community that have been designed to run on the edge.

In addition, Azure Percept devices automatically connect to Azure IoT Hub, which helps enable reliable communication with security protections between Internet of Things, or IoT, devices and the cloud. Customers can also integrate Azure Percept-based solutions with Azure Machine Learning processes that combine data science and IT operations to help companies develop machine learning models faster.

In the months to come, Microsoft aims to expand the number of third-party certified Azure Percept devices, so anybody who builds and trains a proof-of-concept edge AI solution with the Azure Percept development kit will be able to deploy it with a certified device from the marketplace, according to Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” she said.

Security and responsibility

Because Azure Percept runs on Azure, it includes the security protections already baked into the Azure platform, the company says.

Microsoft also says that all the components of the Azure Percept platform, from the development kit and services to Azure AI models, have gone through Microsoft’s internal assessment process to operate in accordance with Microsoft’s responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

The Azure Percept team is currently working with select early customers to understand their concerns around the responsible development and deployment of AI on edge devices, and the team will provide them with documentation and access to toolkits such as Fairlearn and InterpretML for their own responsible AI implementations.

Ultimately, Sones said, Microsoft hopes to enable the development of an ecosystem of intelligent edge devices that can take advantage of Azure services, in the same way that the Windows operating system has helped enable the personal computer marketplace.

“We are a platform company at our core. If we’re going to truly get to a scale where the billions of devices that exist on the edge get connected to Azure, there is not going to be one hyperscale cloud that solves all that through their first-party devices portfolio,” she said. “That is why we’ve done it in an ecosystem-centric way.”


Overview of porting from .NET Framework to .NET

This article provides an overview of what you should consider when porting your code from .NET Framework to .NET (formerly named .NET Core). Porting to .NET from .NET Framework for many projects is relatively straightforward. The complexity of your projects dictates how much work you’ll do after the initial migration of the project files.

Projects where the app-model is available in .NET (such as libraries, console apps, and desktop apps) usually require little change. Projects that require a new app model, such as moving to ASP.NET Core from ASP.NET, require more work. Many patterns from the old app model have equivalents that can be used during the conversion.

Unavailable technologies

There are a few technologies in .NET Framework that don’t exist in .NET:

  • Application domains: Creating additional application domains isn’t supported. For code isolation, use separate processes or containers as an alternative.
  • Remoting: Remoting is used for communicating across application domains, which are no longer supported. For communication across processes, consider inter-process communication (IPC) mechanisms as an alternative to remoting, such as the System.IO.Pipes class or the MemoryMappedFile class.
  • Code access security (CAS): CAS was a sandboxing technique supported by .NET Framework but deprecated in .NET Framework 4.0. It was replaced by Security Transparency and it’s not supported in .NET. Instead, use security boundaries provided by the operating system, such as virtualization, containers, or user accounts.
  • Security transparency: Similar to CAS, this sandboxing technique is no longer recommended for .NET Framework applications and it’s not supported in .NET. Instead, use security boundaries provided by the operating system, such as virtualization, containers, or user accounts.
  • System.EnterpriseServices: System.EnterpriseServices (COM+) isn’t supported in .NET.
  • Windows Workflow Foundation (WF) and Windows Communication Foundation (WCF): WF and WCF aren’t supported in .NET 5+ (including .NET Core). For alternatives, see CoreWF and CoreWCF.
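As a cross-language illustration of the pipe-based IPC pattern suggested above for replacing remoting, here is a minimal sketch in Python (it is not the .NET System.IO.Pipes API; a thread stands in for the second process to keep the example self-contained):

```python
import os
import threading

# Minimal pipe-based IPC sketch, analogous in spirit to the System.IO.Pipes
# alternative mentioned above (this is Python, NOT the .NET API). A thread
# stands in for the second process so the example is self-contained.
read_fd, write_fd = os.pipe()

def worker():
    # The "other process": writes a reply into the pipe and closes its end.
    os.write(write_fd, b"PONG")
    os.close(write_fd)

t = threading.Thread(target=worker)
t.start()
reply = os.read(read_fd, 1024)  # blocks until the worker has written
t.join()
os.close(read_fd)
print(reply.decode())  # PONG
```

The point of the pattern is the same in either runtime: the two sides share only a byte stream across a kernel-managed channel, rather than sharing an application domain.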

For more information about these unsupported technologies, see .NET Framework technologies unavailable on .NET Core and .NET 5+.

Windows desktop technologies

Many applications created for .NET Framework use a desktop technology such as Windows Forms or Windows Presentation Foundation (WPF). Both Windows Forms and WPF have been ported to .NET, but these remain Windows-only technologies.

Consider the following dependencies before you migrate a Windows Forms or WPF application:

  1. Project files for .NET use a different format than .NET Framework.
  2. Your project may use an API that isn’t available in .NET.
  3. Third-party controls and libraries may not have been ported to .NET and remain available only for .NET Framework.
  4. Your project uses a technology that is no longer available in .NET.

.NET uses the open-source versions of Windows Forms and WPF and includes enhancements over .NET Framework.

For tutorials on migrating your desktop application to .NET 5, see one of the following articles:

Windows-specific APIs

Applications can still P/Invoke native libraries on platforms supported by .NET. This technology isn’t limited to Windows. However, if the library you’re referencing is Windows-specific, such as user32.dll or kernel32.dll, the code only works on Windows. For each platform you want your app to run on, you’ll have to either find platform-specific versions or make your code generic enough to run on all platforms.

When porting an application from .NET Framework to .NET, your application probably uses libraries distributed with .NET Framework. Many APIs that were available in .NET Framework weren’t ported to .NET because they relied on Windows-specific technology, such as the Windows Registry or the GDI+ drawing model.

The Windows Compatibility Pack provides a large portion of the .NET Framework API surface to .NET and is provided via the Microsoft.Windows.Compatibility NuGet package.

For more information, see Use the Windows Compatibility Pack to port code to .NET.
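Referencing the pack in a project file might look like the following sketch. The package name is real; the version number is illustrative, so pick the one appropriate for your target framework:

```xml
<!-- Hypothetical csproj fragment; the version number is illustrative. -->
<ItemGroup>
  <PackageReference Include="Microsoft.Windows.Compatibility" Version="5.0.0" />
</ItemGroup>
```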

.NET Framework compatibility mode

The .NET Framework compatibility mode was introduced in .NET Standard 2.0. This compatibility mode allows .NET Standard and .NET 5+ (and .NET Core 3.1) projects to reference .NET Framework libraries on Windows only. Referencing .NET Framework libraries doesn’t work for all projects, such as when the library uses Windows Presentation Foundation (WPF) APIs, but it does unblock many porting scenarios. For more information, see Analyze your dependencies to port code from .NET Framework to .NET.

Cross-platform

.NET (formerly known as .NET Core) is designed to be cross-platform. If your code doesn’t depend on Windows-specific technologies, it may run on other platforms such as macOS, Linux, and Android. This includes project types like:

  • Libraries
  • Console-based tools
  • Automation
  • ASP.NET sites

.NET Framework is a Windows-only component. When your code uses Windows-specific technologies or APIs, such as Windows Forms and Windows Presentation Foundation (WPF), the code can still run on .NET but it won’t run on other operating systems.

It’s possible that your library or console-based application can be used cross-platform without changing much. When porting to .NET, you may want to take this into consideration and test your application on other platforms.
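Conversely, a project that opts into Windows-only technologies declares that in its target framework moniker. A sketch of a WPF app ported to .NET 5 might look like this:

```xml
<!-- A WPF app ported to .NET 5 targets the Windows-specific TFM. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net5.0-windows</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>
```

The `net5.0-windows` moniker makes the platform dependency explicit, so the SDK can fail fast if the project is built for an unsupported platform.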

The future of .NET Standard

.NET Standard is a formal specification of .NET APIs that are available on multiple .NET implementations. The motivation behind .NET Standard was to establish greater uniformity in the .NET ecosystem. Starting with .NET 5, a different approach to establishing uniformity has been adopted, and this new approach eliminates the need for .NET Standard in many scenarios. For more information, see .NET 5 and .NET Standard.

.NET Standard 2.0 was the last version to support .NET Framework.
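A library that must serve both .NET Framework and newer consumers can therefore target .NET Standard 2.0, optionally multitargeting as described later in the checklist. A minimal sketch:

```xml
<!-- Sketch of a class library that serves both old and new consumers. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 is consumable from .NET Framework 4.7.2+ and from .NET 5;
         the extra net472 target allows Framework-specific code paths if needed. -->
    <TargetFrameworks>netstandard2.0;net472</TargetFrameworks>
  </PropertyGroup>
</Project>
```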

Tools to assist porting

Instead of manually porting an application from .NET Framework to .NET, you can use tools that automate some aspects of the migration. Porting a complex project is itself a complex process, and these tools may help in that journey.

Even if you use a tool to help port your application, you should review the Considerations when porting section in this article.

.NET Upgrade Assistant

The .NET Upgrade Assistant is a command-line tool that can be run on different kinds of .NET Framework apps. It’s designed to assist with upgrading .NET Framework apps to .NET 5. In most cases, the app will require additional effort after the tool runs to complete the migration. The tool installs analyzers that can assist with completing the migration. This tool works on the following types of .NET Framework applications:

  • Windows Forms
  • WPF
  • Console
  • Class libraries

This tool uses the other tools listed in this article and guides the migration process. For more information about the tool, see Overview of the .NET Upgrade Assistant.
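The tool is distributed as a .NET global tool. The commands below sketch the typical install-and-run pattern at the time of writing; the project path is a placeholder:

```shell
# Install the Upgrade Assistant as a global .NET tool.
dotnet tool install -g upgrade-assistant

# Run the guided upgrade against a project (path is illustrative).
upgrade-assistant upgrade .\MyWinFormsApp.csproj
```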


try-convert

The try-convert tool is a .NET global tool that can convert a project or an entire solution to the .NET SDK, including moving desktop apps to .NET 5. However, this tool isn’t recommended if your project has a complicated build process, such as custom tasks, targets, or imports.

For more information, see the try-convert GitHub repository.
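Like the Upgrade Assistant, try-convert installs as a global tool. The commands below are illustrative; the solution path is a placeholder:

```shell
# Install try-convert as a global .NET tool.
dotnet tool install -g try-convert

# Convert every project in a solution to the .NET SDK project format.
try-convert -w .\MySolution.sln
```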

.NET Portability Analyzer

The .NET Portability Analyzer is a tool that analyzes assemblies and provides a detailed report of the .NET APIs that are missing for your applications or libraries to be portable on your specified target .NET platforms.

To use the .NET Portability Analyzer in Visual Studio, install the extension from the marketplace.

For more information, see The .NET Portability Analyzer.

Platform compatibility analyzer

The Platform compatibility analyzer detects whether you’re using an API that will throw a PlatformNotSupportedException at run time. Although this isn’t common when moving from .NET Framework 4.7.2 or later, it’s good to check. For more information about APIs that throw exceptions on .NET, see APIs that always throw exceptions on .NET Core.

For more information, see Platform compatibility analyzer.

Considerations when porting

When porting your application to .NET, consider the following suggestions in order.

✔️ CONSIDER using the .NET Upgrade Assistant to migrate your projects. Even though this tool is in preview, it automates most of the manual steps detailed in this article and gives you a great starting point for continuing your migration path.

✔️ CONSIDER examining your dependencies first. Your dependencies must target .NET 5, .NET Standard, or .NET Core.

✔️ DO migrate from a NuGet packages.config file to PackageReference settings in the project file. Use Visual Studio to convert the packages.config file.

✔️ CONSIDER upgrading to the latest project file format even if you can’t yet port your app. .NET Framework projects use an outdated project format. Even though the latest project format, known as SDK-style projects, was created for .NET Core and beyond, it also works with .NET Framework. Having your project file in the latest format gives you a good basis for porting your app in the future.

✔️ DO retarget your .NET Framework project to at least .NET Framework 4.7.2. This ensures the availability of the latest API alternatives for cases where .NET Standard doesn’t support existing APIs.

✔️ CONSIDER targeting .NET 5 instead of .NET Core 3.1. While .NET Core 3.1 is under long-term support (LTS), .NET 5 is the latest and .NET 6 will be LTS when released.

✔️ DO target .NET 5 for Windows Forms and WPF projects. .NET 5 contains many improvements for desktop apps.

✔️ CONSIDER targeting .NET Standard 2.0 if you’re migrating a library that may also be used with .NET Framework projects. You can also multitarget your library, targeting both .NET Framework and .NET Standard.

✔️ DO add a reference to the Microsoft.Windows.Compatibility NuGet package if, after migrating, you get errors about missing APIs. A large portion of the .NET Framework API surface is available to .NET via this package.
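The packages.config-to-PackageReference migration from the checklist above can be sketched as a before-and-after. The package and version below are only examples:

```xml
<!-- Before: packages.config (legacy NuGet format, separate file) -->
<packages>
  <package id="Newtonsoft.Json" version="12.0.3" targetFramework="net472" />
</packages>

<!-- After: the same dependency expressed as a PackageReference
     inside the project file itself -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
</ItemGroup>
```

After conversion, the packages.config file is deleted and NuGet resolves dependencies transitively from the project file.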