Big Data: A Big Introduction


The digital universe is continuously expanding, just like the physical universe, except that the digital world alone has already generated more bytes of data than there are stars in the entire observable physical universe.

44 zettabytes! That’s 44 followed by 21 zeros (44×10²¹ bytes), roughly 40 times more bytes than there are stars in the observable universe.


By 2025, there will be 175 zettabytes of data in the global datasphere. The growth in data volume is exponential.

All of this data is aptly called Big Data. In this article, we will:

  • Introduce Big Data
  • Explain core concepts
  • Compare small and thick data
  • Highlight the latest Big Data trends for business
  • Point you to plenty of resources

What is Big Data?

Big data is the term for information assets (data) characterized by high volume, velocity, and variety, which are systematically extracted, analyzed, and processed for decision-making or control actions.

The characteristics of Big Data make it virtually impossible to analyze using traditional data analysis methods.

The importance of big data lies in the patterns and insights, hidden in large information assets, that can drive business decisions. When extracted using advanced analytics technologies, these insights help organizations understand how their users, markets, society, and the world behave.

3 Vs of Big Data

For an information asset to be considered Big Data, it must meet the 3-V criteria:

  • Volume. The size of data. High volume data is likely to contain useful insights. A minimum threshold for data to be considered big usually starts at terabytes and petabytes. The large volume of Big Data requires hyperscale computing environments with large storage and fast IOPS (Input/Output Operations per Second) for fast analytics processing.
  • Velocity. The speed at which data is produced and processed. Big Data is typically produced in streams and is available in real-time. The continuous nature of data generation makes it relevant for real-time decision-making.
  • Variety. The type and nature of information assets. Raw big data is often unstructured or multi-structured, generated with a variety of attributes, standards, and file formats. For example, datasets collected from sensors, log files, and social media networks are unstructured, so they must be processed into structured form before data analytics and decision-making (a minimal parsing sketch appears after the full list of Vs).

More recently, two additional Vs help characterize Big Data:

  • Veracity. The reliability or truthfulness of data. The extent to which the output of big data analysis is pertinent to the associated business goals is determined by the quality of data, the processing technology, and the mechanism used to analyze the information assets.
  • Value. The usefulness of Big Data assets. The worthiness of the output of big data analysis can be subjective and is evaluated based on unique business objectives.
The 5 Vs that characterize Big Data (image source: bmc Blogs)
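
To make the Variety point concrete, here is a minimal Python sketch of turning unstructured web-server log lines into structured records that could then be loaded into a database for analytics. The log layout, sample lines, and field names are illustrative assumptions, not tied to any particular product.

```python
import re
from datetime import datetime

# Hypothetical raw log lines, as they might arrive from a web server or sensor feed.
RAW_LOGS = [
    '203.0.113.7 - - [01/Mar/2021:10:15:32 +0000] "GET /products HTTP/1.1" 200 5321',
    '198.51.100.23 - - [01/Mar/2021:10:15:33 +0000] "POST /cart HTTP/1.1" 500 87',
]

# Regular expression for a common (Apache-style) access-log layout.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) (?P<size>\d+)'
)

def to_record(line):
    """Convert one unstructured log line into a structured record, or None."""
    match = LOG_PATTERN.match(line)
    if not match:
        return None  # unparseable lines would be routed to a quarantine store
    fields = match.groupdict()
    return {
        "ip": fields["ip"],
        "timestamp": datetime.strptime(fields["ts"], "%d/%b/%Y:%H:%M:%S %z"),
        "method": fields["method"],
        "path": fields["path"],
        "status": int(fields["status"]),
        "bytes": int(fields["size"]),
    }

if __name__ == "__main__":
    records = [r for r in (to_record(line) for line in RAW_LOGS) if r is not None]
    for record in records:
        print(record)  # structured rows, ready to load into a database or warehouse
```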

Big data vs small data vs thick data

In contrast to these characteristics, there are two other forms of data: small data and thick data.

Small Data

Small Data refers to manageable data assets, usually in numerical or structured form, that can be analyzed using simple technologies such as Microsoft Excel or an open source alternative.

Thick Data

Thick Data refers to text or qualitative data that can be analyzed using manageable manual processes. Examples include:

  • Interview questions
  • Surveys
  • Video transcripts

When you use qualitative data in conjunction with quantitative big data, you can better understand the sentiment and behavioral aspects that individuals can easily communicate. Thick Data is particularly useful in the domains of medicine and scientific research, where responses from individual humans hold sufficient value and insight on their own, versus large big data streams.

Big Data trends in 2021-2022

Big Data technologies are continuously improving. Indeed, data itself is fast becoming the most important asset for a business organization.

The prevalence of the Internet of Things (IoT), cloud computing, and Artificial Intelligence (AI) is making it easier for organizations to transform raw data into actionable knowledge.

Here are three of the most popular big data technology trends to look out for in 2021:

  • Augmented Analytics. The Big Data industry will be worth nearly $274 billion by the end of 2021. Technologies such as Augmented Analytics, which help organizations with the data management process, are projected to grow rapidly and reach $18.4 billion by the year 2023.
  • Continuous Intelligence. Integrating real-time analytics to business operations is helping organizations leapfrog the competition with proactive and actionable insights delivered in real-time.
  • Blockchain. Stringent legislation such as the GDPR and HIPAA is encouraging organizations to make data secure, accessible, and reliable. Blockchain and similar technologies are making their way into the financial industry as data governance and security instruments that are highly resilient and robust against privacy risks. This EU resource discusses how blockchain complements some key GDPR objectives.

Source: https://www.bmc.com/blogs/big-data/

Key essentials of continuous monitoring in DevOps

Continuous Monitoring (CM), or Continuous Control Monitoring (CCM), is a process put in place by DevOps personnel to help them observe and detect security threats, compliance issues, and much more during every phase of the DevOps pipeline. It is an automated process that works seamlessly alongside other DevOps operations.

Along similar lines, the process can be implemented across other segments of the IT infrastructure for in-depth monitoring across the organization. It is useful for observing and analyzing key metrics and for resolving certain issues in real time.

Managing the various segments of an enterprise IT infrastructure is a huge responsibility. Hence, most DevOps teams have a Continuous Monitoring process in place that accesses real-time data across both hybrid and public environments to minimize security breaches.

The CM process helps the DevOps team locate bugs and put in place viable solutions that fortify IT security to the highest possible degree. These measures include threat assessment, incident response, database forensics, root cause analysis, and computer forensics.

The CM process can be extended to offer data on the health and workings of the deployed software, offsite networks, and IT setup.

When is CM (Continuous Monitoring) or CCM (Continuous Controls Monitoring) introduced?

The DevOps team introduces Continuous Monitoring at the end stage of its pipeline, i.e. after the software is released to production. The CM process notifies the dev and QA teams of key issues that arise in the production environment, which helps the relevant and responsible people fix errors as quickly as possible.
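
As an illustration of that hand-off, here is a minimal, hypothetical sketch of the kind of automation a CM process relies on at this stage: a script that scans a production log for errors and notifies the dev and QA teams through a chat webhook. The log path, error marker, and webhook URL are assumptions made for the example, not part of any specific monitoring tool.

```python
import json
import urllib.request

# Illustrative assumptions: a production log file and a team chat webhook.
LOG_PATH = "/var/log/myapp/production.log"                    # hypothetical path
WEBHOOK_URL = "https://chat.example.com/hooks/devops-alerts"  # hypothetical endpoint

def find_errors(log_path):
    """Return log lines that look like production errors."""
    with open(log_path, encoding="utf-8") as log_file:
        return [line.rstrip() for line in log_file if "ERROR" in line]

def notify_teams(errors):
    """Post a short summary of production errors to the dev/QA webhook."""
    payload = {
        "text": f"{len(errors)} production error(s) detected",
        "samples": errors[:5],  # include a few examples, not the whole log
    }
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    errors = find_errors(LOG_PATH)
    if errors:
        notify_teams(errors)
```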

Objectives of the CM Process in DevOps

  • Tracking user behaviour on a site or app that has just been updated. This helps ascertain whether the update has a positive, negative, or neutral effect on the user experience.
  • Locating performance issues in software operations. CM helps detect the cause of an error and identify a suitable fix before the problem hampers uptime and revenue.
  • Improving the visibility and transparency of network and IT operations with regard to likely security breaches, and ensuring their resolution through a well-tuned alerting protocol.

Depending on the organization's business, best practices need to be implemented in the key areas of server health, application performance, development milestones, and user behaviour and activity.

Let us now move on to the different role-specific continuous monitoring processes, especially for applications, infrastructure, and networks, which are core activities of the IT department in any organization.

Monitoring of Applications

This CM process keeps track of how the released software behaves, based on parameters such as system response, uptime, transaction time and volume, API response, and both back-end and front-end stability and performance.

The Application Monitoring CM should be equipped with tools that monitor the following (a brief sketch of computing such metrics follows the list):

  • User Response Time
  • Browser Speed
  • Pages With Low Load Speed
  • Third-Party Resource Speed
  • End-User Transactions
  • Availability
  • Throughput
  • SLA Status
  • Error Rate
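
As a minimal sketch of how two of these metrics, error rate and response time, might be computed from collected request samples, consider the following Python fragment. The RequestSample structure and the sample values are hypothetical; a real application-monitoring agent would collect such data automatically.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RequestSample:
    path: str
    status: int
    duration_ms: float

# Hypothetical samples, as an application-monitoring agent might collect them.
SAMPLES = [
    RequestSample("/checkout", 200, 120.4),
    RequestSample("/checkout", 500, 310.9),
    RequestSample("/search", 200, 85.1),
    RequestSample("/search", 200, 97.6),
]

def error_rate(samples):
    """Fraction of requests that returned a server error (5xx)."""
    return sum(1 for s in samples if s.status >= 500) / len(samples)

def avg_response_time(samples):
    """Mean response time in milliseconds across all requests."""
    return mean(s.duration_ms for s in samples)

if __name__ == "__main__":
    print(f"error rate: {error_rate(SAMPLES):.1%}")
    print(f"avg response time: {avg_response_time(SAMPLES):.1f} ms")
    # In a real CM pipeline these figures would be compared against SLA
    # thresholds and pushed to a dashboard or alerting system.
```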

Monitoring of Infrastructure

This process includes collecting and examining data on the performance of data centers, hardware, software, storage, servers, and other vital components of the IT ecosystem. The focus of the infrastructure monitoring process is to measure how well the IT infrastructure supports the delivery of products and services, and to improve its performance.

The Infrastructure Monitoring CM should include tools to check the following (a brief host-level sketch follows the list):

  • Disk Usage and CPU
  • Database Health
  • Server Availability
  • Server & System Uptime
  • Response Time to Errors
  • Storage
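
A minimal host-level sketch of such checks, assuming the open-source psutil library is installed, might look like the following. The thresholds are illustrative; real limits depend on the environment.

```python
import time

import psutil  # third-party library: pip install psutil

def infrastructure_snapshot():
    """Collect a few basic host-level health metrics."""
    disk = psutil.disk_usage("/")
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),   # sampled over one second
        "disk_percent": disk.percent,                    # root filesystem usage
        "memory_percent": psutil.virtual_memory().percent,
        "uptime_hours": (time.time() - psutil.boot_time()) / 3600,
    }

def breached_thresholds(snapshot, limits):
    """Return the metrics that exceed their configured limits."""
    return {name: value for name, value in snapshot.items()
            if name in limits and value > limits[name]}

if __name__ == "__main__":
    # Illustrative thresholds; a real CM process would tune these per server role.
    limits = {"cpu_percent": 90.0, "disk_percent": 85.0, "memory_percent": 90.0}
    snapshot = infrastructure_snapshot()
    print(snapshot)
    print("breaches:", breached_thresholds(snapshot, limits))
```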

Monitoring of Networks

The network is a complex array of routers, switches, servers, VMs, and firewalls, each of which has to work in perfect conjunction and coordination. The continuous monitoring process focuses on detecting both current and likely issues in the network and alerting the network professionals. The primary aim of this CM is to prevent network crashes and downtime.

The Network Monitoring process needs to be empowered with tools to monitor the following (a brief latency probe is sketched after the list):

  • Server bandwidth
  • Multiple port level metrics
  • CPU use of hosts
  • Network packet flow
  • Latency
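
As a minimal latency probe, the following sketch times a TCP connection to each monitored endpoint and flags unreachable hosts. The target hosts are hypothetical placeholders; a real tool would read them from the network inventory and feed the results into the alerting system.

```python
import socket
import time

# Hypothetical endpoints to watch; real targets would come from the network inventory.
TARGETS = [("example.com", 443), ("intranet.example.local", 22)]

def tcp_latency_ms(host, port, timeout=2.0):
    """Measure the time taken to open a TCP connection, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None  # unreachable: a candidate for an immediate alert

if __name__ == "__main__":
    for host, port in TARGETS:
        latency = tcp_latency_ms(host, port)
        status = f"{latency:.1f} ms" if latency is not None else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```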

Over and above these CMs, every DevOps team should have in place a full-stack monitoring tool capable of monitoring the entire IT stack in terms of security, user permissions, process-level usage, significant performance trends, and network switches. This full-stack CM should not only raise alerts but also offer to resolve the issues with suitable resources.

Correlation between Risk Management and Continuous Monitoring

No two organizations or enterprises are the same, even if they appear identical on certain parameters. Similarly, every organization or entity with an IT infrastructure faces a different set of risks.

The DevOps team has to select the most suitable monitoring tools and place them within the CM process for the best outcomes. This is possible only if the team conducts a thorough check of the risk factors, governance, and existing compliance systems before choosing the monitoring tools.

Here is a brief overview of the questions that should be considered, alongside the choice of tools, when shaping the monitoring process.

  • What are the risks faced by the organization?
  • Which parameters can be used to calculate the risks?
  • What is the extent of these risks? Is the organization adequately resilient to face and emerge out of these risks?
  • In the event of a software failure, hardware failure, or security breach, what could the consequences be?
  • Does the organization have the required confidentiality controls around its data collection and generation?

Lastly, let us conclude with another set of valuable takeaways from Continuous Monitoring.

Possibility of Speedy Responses

With the most suitable CM in place, the alert system can notify the concerned department of a threat immediately, helping prevent the mishap, set the systems right, and restore normal functioning within a minimal time gap.

Negligible System Downtime

A comprehensive network CM is equipped with the right set of tools and alerts to maintain the uptime of the system, especially in the event of a service outage or application performance issues.

Network Transparency and Visibility

A well-defined CM ensures ample transparency through data collection and analysis, highlighting possible outages and other network-related trends.

Continuous Monitoring is a must-have for almost every organization that wants smooth and seamless operations. At the same time, the DevOps team should ensure that the CM process works in a non-intrusive manner.

New software products should be implemented only after thorough real-time testing, and they should in no way create an extra burden on the QA team.

Moreover, the DevOps team should focus on delivering software products that are scalable, secure, and geared toward improving the efficiency of the organization.

Continuous Monitoring is essential to every DevOps pipeline for achieving a better-quality product with scalable and efficient performance. It gives a fair overview of the servers, cloud environments, and networks that are crucial for business performance, security, and operations.

Which solution do we have for you?

Synergetics provides DevOps-based offerings through which you can gain deeper knowledge of this technology. These can help any business, as well as individual DevOps professionals, grow with this highly in-demand emerging technology. You can also choose to develop your skills with Microsoft DevOps certifications. In short, you can consider us your 360-degree solution provider and fulfil any of your technological needs with our expert solutions.

With Azure Percept, Microsoft adds new ways for customers to bring AI to the edge

Elevators that respond to voice commands, cameras that notify store managers when to restock shelves and video streams that keep tabs on everything from cash register lines to parking space availability.

These are a few of the millions of scenarios becoming possible thanks to a combination of artificial intelligence and computing on the edge. Standalone edge devices can take advantage of AI tools for things like translating text or recognizing images without having to constantly access cloud computing capabilities.

At its Ignite digital conference, Microsoft unveiled the public preview of Azure Percept, a platform of hardware and services that aims to simplify the ways in which customers can use Azure AI technologies on the edge – including taking advantage of Azure cloud offerings such as device management, AI model development and analytics.

Roanne Sones, corporate vice president of Microsoft’s edge and platform group, said the goal of the new offering is to give customers a single, end-to-end system, from the hardware to the AI capabilities, that “just works” without requiring a lot of technical know-how.

The Azure Percept platform includes a development kit with an intelligent camera, Azure Percept Vision. There’s also a “getting started” experience called Azure Percept Studio that guides customers with or without a lot of coding expertise or experience through the entire AI lifecycle, including developing, training and deploying proof-of-concept ideas.

For example, a company may want to set up a system to automatically identify irregular produce on a production line so workers can pull those items off before shipping.

Azure Percept Vision and Azure Percept Audio, which ships separately from the development kit, connect to Azure services in the cloud and come with embedded hardware-accelerated AI modules that enable speech and vision AI at the edge, or during times when the device isn’t connected to the internet. That’s useful for scenarios in which the device needs to make lightning-fast calculations without taking the time to connect to the cloud, or in places where there isn’t always reliable internet connectivity, such as on a factory floor or in a location with spotty service.
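
Azure Percept's own modules handle this end to end, but purely as an illustration of the general pattern of running a vision model locally on an edge device, here is a minimal sketch using the open-source onnxruntime library. The model file, input shape, and camera frame are hypothetical assumptions, not Azure Percept artifacts.

```python
import numpy as np
import onnxruntime as ort  # third-party: pip install onnxruntime

# Hypothetical model file exported for edge deployment; not an Azure Percept artifact.
MODEL_PATH = "image_classifier.onnx"

def classify(session, frame):
    """Run a locally stored vision model on one camera frame, with no cloud round trip."""
    input_name = session.get_inputs()[0].name
    # Assumes the model expects a single NCHW float32 image; adjust to the real model.
    batch = frame.astype(np.float32)[np.newaxis, ...]
    logits = session.run(None, {input_name: batch})[0]
    return int(np.argmax(logits))

if __name__ == "__main__":
    session = ort.InferenceSession(MODEL_PATH)   # model is loaded once, on the device
    # Stand-in for a frame grabbed from the device camera (3 channels, 224 x 224 pixels).
    fake_frame = np.random.rand(3, 224, 224)
    print("predicted class index:", classify(session, fake_frame))
```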

The Azure Percept platform makes it easy for anyone to deploy artificial intelligence on the edge. Devices include a Trusted Platform Module (center), Azure Percept Audio (left) and Azure Percept Vision (right). Photo credit: Microsoft

In addition to announcing hardware, Microsoft is working with third-party silicon and equipment manufacturers to build an ecosystem of intelligent edge devices that are certified to run on the Azure Percept platform, Sones said.

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” she said. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Making AI at the edge more accessible

The goal of the Azure Percept platform is to simplify the process of developing, training and deploying edge AI solutions, making it easier for more customers to take advantage of these kinds of offerings, according to Moe Tanabian, a Microsoft vice president and general manager of the Azure edge and devices group.

For example, most successful edge AI implementations today require engineers to design and build devices, plus data scientists to build and train AI models to run on those devices. Engineering and data science expertise are typically unique sets of skills held by different groups of highly trained people.

“With Azure Percept, we broke that barrier,” Tanabian said. “For many use cases, we significantly lowered the technical bar needed to develop edge AI-based solutions, and citizen developers can build these without needing deep embedded engineering or data science skills.”

The hardware in the Azure Percept development kit also uses the industry standard 80/20 T-slot framing architecture, which the company says will make it easier for customers to pilot proof-of-concept ideas everywhere from retail stores to factory floors using existing industrial infrastructure, before scaling up to wider production with certified devices.

As customers work on their proof-of-concept ideas with the Azure Percept development kit, they will have access to Azure AI Cognitive Services and Azure Machine Learning models as well as AI models available from the open-source community that have been designed to run on the edge.

In addition, Azure Percept devices automatically connect to Azure IoT Hub, which helps enable reliable communication with security protections between Internet of Things, or IoT, devices and the cloud. Customers can also integrate Azure Percept-based solutions with Azure Machine Learning processes that combine data science and IT operations to help companies develop machine learning models faster.
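
As a rough illustration of what connecting to Azure IoT Hub looks like from a device's point of view, here is a minimal sketch using the azure-iot-device Python SDK. The connection string and telemetry payload are placeholders, and this shows the generic device-to-hub pattern rather than the Percept-specific provisioning flow.

```python
import json

from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

# Placeholder connection string; a real device gets its own from IoT Hub.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

def send_telemetry(payload):
    """Send one JSON telemetry message from the device to Azure IoT Hub."""
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        message = Message(json.dumps(payload))
        message.content_type = "application/json"
        message.content_encoding = "utf-8"
        client.send_message(message)
    finally:
        client.disconnect()

if __name__ == "__main__":
    # Example payload, e.g. the result of an on-device vision inference.
    send_telemetry({"detected": "empty_shelf", "confidence": 0.92})
```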

In the months to come, Microsoft aims to expand the number of third-party certified Azure Percept devices, so anybody who builds and trains a proof-of-concept edge AI solution with the Azure Percept development kit will be able to deploy it with a certified device from the marketplace, according to Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” she said.

Security and responsibility

Because Azure Percept runs on Azure, it includes the security protections already baked into the Azure platform, the company says.

Microsoft also says that all the components of the Azure Percept platform, from the development kit and services to Azure AI models, have gone through Microsoft’s internal assessment process to operate in accordance with Microsoft’s responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

The Azure Percept team is currently working with select early customers to understand their concerns around the responsible development and deployment of AI on edge devices, and the team will provide them with documentation and access to toolkits such as Fairlearn and InterpretML for their own responsible AI implementations.

Ultimately, Sones said, Microsoft hopes to enable the development of an ecosystem of intelligent edge devices that can take advantage of Azure services, in the same way that the Windows operating system has helped enable the personal computer marketplace.

“We are a platform company at our core. If we’re going to truly get to a scale where the billions of devices that exist on the edge get connected to Azure, there is not going to be one hyperscale cloud that solves all that through their first-party devices portfolio,” she said. “That is why we’ve done it in an ecosystem-centric way.”

Source: https://blogs.microsoft.com/ai/with-azure-percept-microsoft-adds-new-ways-for-customers-to-bring-ai-to-the-edge/