Why AI is the transformational technology of the digital age


It wasn’t long ago that those of us working on “digital” solutions were almost entirely focused on a future built around four key technologies – social, mobile, analytics and cloud. Organizations were trying to incorporate social into their customer service operations, deliver responsive experiences across every device and overcome the security concerns that paralyzed their move to the cloud. While all four remain incredibly important, few business conversations these days focus exclusively on one of these domains. Now, artificial intelligence (AI) is the focus from boardrooms to basements. AI stands out as the transformational technology of the digital age.

There are many reasons why this shift has happened so quickly. Storage costs continue to fall, the proliferation of data and data sources continues to skyrocket and compute power continues to increase. Just as important, public cloud providers continue to improve, and add to, the impressive machine learning and deep learning capabilities they make available to the masses.

When you combine these technological improvements with the growing corporate investment in this space, it becomes clear why AI is expected to be the defining technology of our future. The growing number of AI use cases, from enhancing the client experience in call centers (improved language processing and speech recognition) to predictive maintenance (fixing equipment before failures), is driving another powerful wave of technology-led business improvement. In a recent study, McKinsey estimates that AI has the potential to create between $3.5 trillion and $5.8 trillion in value annually across business functions and industries.

So, with all this promise, why aren’t more firms adopting AI at scale and growing the number of AI solutions across their business processes?

  • There is a lack of skills in the data science discipline.
  • There are regulatory issues that have to be addressed.
  • There remains a trust issue (transparency in how AI decisions are reached).

However, from my experience, the primary reason for the lack of AI scale comes back to the quality of artificial intelligence “nutrients” that the algorithms require for ingestion. That is, many organizations just do not have their data in a state of readiness to take advantage of this AI-powered world.

The first step in creating value from any applied intelligence solution is accessing all the information relevant to a given problem. The concept underpinning all of machine learning is giving an algorithm a massive number of “experiences” (training data) and a generalized strategy for learning, then letting the algorithm identify patterns, associations and insights in that data. But if the data is siloed in an organization and inaccessible, or if it is difficult to obtain data sets sufficiently large and comprehensive for training, then the value of AI cannot be realized.
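To make that idea concrete, here is a minimal sketch of handing an algorithm a set of “experiences” and a generalized learning strategy and letting it find the pattern. It uses scikit-learn and synthetic data, neither of which the original article mentions, so treat both as illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# "Experiences": labeled examples standing in for historical business data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A generalized learning strategy: the same estimator works for any problem
# framed as features plus labels, with no hand-written rules.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

The point is that nothing in the code encodes the pattern itself; if the training data is missing or siloed, there is simply nothing for the algorithm to learn from.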

To overcome these challenges, many organizations need to get back to the basics before attempting the AI “leap.” There are three areas that must be addressed:

  • Data Strategy.

To build out the required data collection and data architecture, an organization must understand what the data (and associated analytics) will be used for. In many cases, executives worry about their ability to choose the most effective systems for their needs and get lost in a state of paralysis. Data is no longer just about measuring and managing; it is core to a firm’s innovation. Defining the data strategy is a core organizational function.

  • Data Generation & Aggregation.

I have met with numerous firms that are sourcing and collecting large amounts of data but still do not have a plan or a platform to consolidate that information in a useful way. Organizations struggle to create the right structure for any meaningful synthesis to take place. This is why cloud platforms, such as Microsoft Azure, are fundamental. The ability to generate and aggregate data only becomes more important with AI, since the quantity of available data is core to machine learning.

  • Driving Insight.

Driving insight is all about revealing the invisible: gleaning new, actionable information from data. While insight is obviously the output, understanding the business problem upfront is just as important. By understanding what insight is required, an organization can balance the requirements of traditional analytics against developing AI-powered solutions.

Artificial intelligence is here and advancing quickly. The technology can drive significant value and the opportunity is tremendous. For organizations wishing to deploy AI to realize that value, however, there are some basics that must be in place. Developing the data strategy, collecting and aggregating the information in a thoughtful manner and focusing on the insight required to address specific business problems are table stakes. From there, the value of AI can be mined. All companies have the opportunity in front of them. As Mark Twain wrote, “there’s gold in them thar hills!”

Source: https://www.avanade.com/en/blogs/avanade-insights/artificial-intelligence/ai-is-the-transformational-tech-of-digital-age


Practicing ‘No Code’ Data Science

Blog insights by Ashvini Shahane, President – Learning Services, Synergetics Information Technology Services India Pvt. Ltd.

“This is a great article, and it is fascinating to see how the world of data science and machine learning is becoming more democratized, with no-code data science tools enabling the growth of more ‘citizen data scientists’. As the article puts it: ‘In advanced analytics and AI it’s about the shortage, cost, and acquisition of sufficient skilled data scientists.’ What is needed is a way to build machine learning solutions faster, with more efficiency and consistency.

When I started out on my AI and data science journey, coming from a Microsoft technologies background, tools like Azure ML Studio helped me build, train and operationalize ML models quickly with minimal data science background. Microsoft started with Azure ML Studio for the budding, inexperienced data scientist and then went on to provide tools for the more experienced practitioner with Microsoft ML Services.

Recently, Microsoft has grown its ML offerings with the addition of the “Automated ML” capability in Azure Machine Learning Services. Automated ML empowers customers, with or without data science expertise, to identify an end-to-end machine learning pipeline for any problem, achieving higher accuracy while spending far less time. It is like a recommender system for machine learning pipelines.


https://azure.microsoft.com/en-us/blog/announcing-automated-ml-capability-in-azure-machine-learning/

Really looking forward to the innovations in the “No-code Data Science” space making the creation and usage of Data Science and ML solutions easier, faster and more accurate.”

 


Summary:  We are entering a new phase in the practice of data science, the ‘Code-Free’ era.  Like all major changes this one has not sprung fully grown but the movement is now large enough that its momentum is clear.  Here’s what you need to know.

We are entering a new phase in the practice of data science, the ‘Code-Free’ era.  Like all major changes this one has not sprung fully grown but the movement is now large enough that its momentum is clear.

Barely a week goes by that we don’t learn about some new automated / no-code capability being introduced.  Sometimes these are new startups with integrated offerings.  More frequently they’re features or modules being added by existing analytic platform vendors.

I’ve been following these automated machine learning (AML) platforms since they emerged.  I wrote first about them in the spring of 2016 under the somewhat scary title “Data Scientists Automated and Unemployed by 2025!”.

Of course this was never my prediction, but in the last 2 ½ years the spread of automated features in our profession has been striking.

No Code Data Science


No-code data science, or automated machine learning, or, as Gartner has tried to brand it, ‘augmented’ data science, offers a continuum of ease of use:

Guided Platforms: Platforms with highly guided modeling procedures that still require the user to move through the steps (e.g. BigML, SAS, Alteryx). Classic drag-and-drop platforms are the basis for this generation.

Automated Machine Learning (AML): Fully automated machine learning platforms (e.g. DataRobot).

Conversational Analytics: In this latest version, the user merely poses the question to be solved in plain English and the platform presents the best answer, selecting the data, features, modeling technique, and presumably even the best data visualization.

This list also pretty well describes the developmental timeline.  Guided Platforms are now old hat.  AML platforms are becoming numerous and mature.  Conversational analytics is just beginning.

Not Just for Advanced Analytics

This smart augmentation of our tools extends beyond predictive / prescriptive modeling into the realm of data blending and prep, and even into data viz.  What this means is that code-free smart features are being made available to classical BI business analysts, and of course to power user LOB managers (aka Citizen Data Scientists).

The market drivers for this evolution are well known.  In advanced analytics and AI it’s about the shortage, cost, and acquisition of sufficient skilled data scientists.  In this realm it’s about time to insight, efficiency, and consistency.  Essentially doing more with less and faster.

However, in the data prep, blending, and feature identification world, which also matters to data scientists, the real draw is the much larger data analyst / BI practitioner market. In that world the ETL of classic static data is still a huge burden and time delay, one that is moving rapidly from an IT specialist function to self-service.

Everything Old is New Again

When I started in data science in about 2001, SAS and SPSS were the dominant players, and they were already moving away from their proprietary code toward drag-and-drop, the earliest form of this automation.

The transition in academia 7 or 8 years later to teaching in R seems to have been driven financially by the fact that although SAS and SPSS gave essentially free access to students, they still charged instructors, albeit at a large academic discount.  R however was free.

We then regressed to an age, continuing to this day, in which being a data scientist means working in code. That’s how the current generation of data scientists has been taught, and, expectedly, that’s how they practice.

There has also been an incorrect bias that working in a drag-and-drop system does not allow the fine-grained hyperparameter tuning that code allows. If you’ve ever worked in SAS Enterprise Miner or its competitors, you know this is incorrect; in fact, fine tuning is made all the easier.

In my mind this was always an unnecessary digression back to the bad old days of coding-only, one that tended to take the new practitioner’s eye off the fundamentals and make data science look like just another programming language to master. So I, for one, both expected and welcome this return to procedures that are both speedy and consistent across practitioners.

What About Model Quality?

We tend to think of a ‘win’ in advanced analytics as improving the accuracy of a model.  There’s a perception that relying on automated No-Code solutions gives up some of this accuracy.  This isn’t true.

The AutoML platforms like DataRobot, Tazi.ai, and OneClick.ai (among many others) not only run hundreds of model types in parallel including variations on hyperparameters, but they also perform transforms, feature selection, and even some feature engineering.  It’s unlikely that you’re going to beat one of these platforms on pure accuracy.
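As a rough illustration of what these platforms automate, here is a minimal sketch of searching across several model families and hyperparameter variations in one pass. It uses scikit-learn; the platforms named above are proprietary, so this is an analogy for the idea, not their actual internals:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate model families and hyperparameter variations, searched automatically.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1, 10]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
]

best_score, best_model = 0.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5).fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model)
print(f"Held-out accuracy: {best_model.score(X_test, y_test):.3f}")

A commercial AutoML platform runs this kind of search at far larger scale, adding transforms and feature selection to the loop, which is exactly why it is hard to beat on raw accuracy.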

A caveat here is that domain expertise applied to feature engineering is still a human advantage.

Perhaps more importantly, when we’re talking about variations in accuracy at the second or third decimal place, are the many weeks you spent on development a good cost tradeoff compared to the few days or even hours these AutoML platforms offer?

The Broader Impact of No Code

It seems to me that the biggest beneficiaries of no-code are actually classic data analysts and LOB managers who continue to be most focused on BI static data.  The standalone data blending and prep platforms are a huge benefit to this group (and to IT whose workload is significantly lightened).

These no-code data prep platforms like ClearStory Data, Paxata, and Trifacta are moving rapidly to incorporate ML features into their processes that help users select which data sources are appropriate to blend, what the data items actually mean (using more ad hoc sources in the absence of good data dictionaries), and even extending into feature engineering and feature selection.

Modern data prep platforms are using embedded ML for example for smart automated cleaning or treatment of outliers.
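For instance, a data prep tool might flag and cap outliers automatically. A minimal sketch of that idea, using pandas and an IQR rule (my choice of method for illustration, not any particular vendor’s), looks like this:

import pandas as pd

def treat_outliers_iqr(series: pd.Series, k: float = 1.5) -> pd.Series:
    """Clip values outside [Q1 - k*IQR, Q3 + k*IQR], a common automated rule."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return series.clip(lower=q1 - k * iqr, upper=q3 + k * iqr)

df = pd.DataFrame({"revenue": [120, 130, 125, 118, 122, 9999]})  # one bad reading
df["revenue_clean"] = treat_outliers_iqr(df["revenue"])
print(df)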

Others like Octopai, just reviewed by Gartner as one of “5 Cool Companies” focus on enabling users to quickly find trusted data through automation by using machine learning and pattern analysis to determine the relationships among different data elements, the context in which the data was created, and the data’s prior uses and transformations.

These platforms also enable secure self-service by enforcing permissions and protecting PII and other similarly sensitive data.

Even data viz leader Tableau is rolling out conversational analytic features using NLP and other ML tools to allow users to pose queries in plain English and return optimum visualizations.

What Does This Actually Mean for Data Scientists?

Gartner believes that within two years, by 2020, citizen data scientists will surpass data scientists in the quantity and value of the advanced analytics they produce.  They propose that data scientists will instead focus on specialized problems and embedding enterprise-grade models into applications.

I disagree.  This would seem to relegate data scientists to the role of QA and implementation.  That’s not what we signed on for.

My take is that this will rapidly expand the use of advanced analytics deeper and deeper into organizations thanks to smaller groups of data scientists being able to handle more and more projects.

We are only a year or two removed from a time when the data scientist’s most important skills included blending and cleaning the data and selecting the right predictive algorithm for the task. These are precisely the areas that augmented, automated no-code tools are taking over.

Companies that must create, monitor, and manage hundreds or thousands of models have been the earliest adopters, specifically insurance and financial services.

What does that leave?  It leaves the senior role of Analytics Translator.  That’s the role McKinsey recently identified as the most important in any data science initiative.  In short, the job of the Analytics Translator is to:

  • Lead the identification of opportunities where advanced analytics can make a difference.
  • Facilitate the process of prioritizing these opportunities.
  • Frequently serve as project manager on the projects.
  • Actively champion adoption of the solutions across the business and promote cost effective scaling.

In other words, translate business problems into data science projects and lead in quantifying the various types of risk and rewards that allow these projects to be prioritized.

What About AI?

Yes, even our most recent advancements into image, text, and speech with CNNs and RNNs are rapidly being rolled out as automated no-code solutions.  And they couldn’t come fast enough, because the shortage of data scientists with deep learning skills is even greater than the shortage of general practitioners.

Both Microsoft and Google rolled out automated deep learning platforms within the last year.  These started with transfer learning but are headed toward full AutoDL.  See Microsoft Custom Vision Services (https://www.customvision.ai/) and Google’s similar entry Cloud AutoML.
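Those services are closed, but the transfer learning they started from is easy to sketch. Here is a minimal example in Keras (my choice of library; the input size and class count are placeholders) of reusing a network pretrained on ImageNet for a new custom task:

import tensorflow as tf

NUM_CLASSES = 5  # placeholder: however many labels your custom task has

# Reuse convolutional features learned on ImageNet; train only a new head.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # only the small new classification head is trainable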

There are also a number of startup integrated AutoDL platforms.  We reviewed OneClick.AI earlier this year; it includes both a full AutoML and an AutoDL platform.  Gartner recently nominated DimensionalMechanics as one of its “5 Cool Companies” with an AutoDL platform.

For a while I tried to personally keep up with the list of vendors of both No-Code AutoML and AutoDL and offer updates on their capabilities.  This rapidly became too much.

I was hoping Gartner or some other worthy group would step up with a comprehensive review, and in 2017 Gartner did publish a fairly lengthy report, “Augmented Analytics Is the Future of Data and Analytics”.  The report was a good broad brush but failed to capture many of the vendors I was personally aware of.

To the best of my knowledge there’s still no comprehensive listing of all the platforms that offer either complete automation or significantly automated features.  They do however run from IBM and SAS all the way down to small startups, all worthy of your consideration.

Many of these are mentioned or reviewed in the articles linked below.  If you’re using advanced analytics in any form, or simply want to make your traditional business analysis function better, look at the solutions mentioned in these.

Source: https://www.datasciencecentral.com/profiles/blogs/practicing-no-code-data-science

 

What is the difference between AI, machine learning and deep learning?


In the first part of this blog series, we gave you simple, elaborated definitions of artificial intelligence (AI), machine learning and deep learning. In this second part of the series, we explain the difference between AI, machine learning, and deep learning.

You can think of artificial intelligence (AI), machine learning and deep learning as a set of matryoshka dolls, also known as Russian nesting dolls. Deep learning is a subset of machine learning, which is a subset of AI.

Artificial intelligence is any computer program that does something smart. It can be a stack of complex statistical models or a set of if-then statements. AI can refer to anything from a computer program playing chess to a voice-recognition system like Alexa. The technology can be broadly categorized into three groups: narrow AI, artificial general intelligence (AGI), and superintelligent AI.

IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, and Google DeepMind’s AlphaGo, which beat Lee Sedol at Go in 2016, are examples of narrow AI: AI that is skilled at one specific task. This is different from AGI, the intelligence of a machine that could successfully perform any intellectual task that a human being can. Superintelligent AI takes things a step further. As Nick Bostrom describes it, this is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” In other words, it is when the machines have outfoxed us.


Machine learning is a subset of AI. The theory is simple: machines take data and ‘learn’ for themselves. It is currently the most promising tool in the AI pool for businesses. Machine learning systems can quickly apply knowledge and training from large datasets to excel at facial recognition, speech recognition, object recognition, translation, and many other tasks. Machine learning allows a system to learn to recognize patterns on its own and make predictions, in contrast to hand-coding a software program with specific instructions to complete a task.
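To make the contrast with hand-coded instructions concrete, here is a minimal sketch (scikit-learn, with toy data of my own invention) of a system inferring a rule from examples rather than being programmed with one:

from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours_of_use, error_count] -> 1 if the device later failed.
X = [[10, 0], [200, 1], [850, 7], [900, 9], [50, 0], [700, 6], [300, 2], [950, 8]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# No hand-written thresholds: the model infers the pattern from the data.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[800, 5], [20, 1]]))  # predicts [1, 0]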

While Deep Blue and AlphaGo are both types of AI, Deep Blue was rule-based and dependent on programming, so it was not a form of machine learning. AlphaGo, on the other hand, beat the world champion in Go by training itself on a large data set of expert moves.

That is, all machine learning counts as AI, but not all AI counts as machine learning.

Deep learning is a subset of machine learning. Deep artificial neural networks are a set of algorithms reaching new levels of accuracy on many important problems, such as image recognition, sound recognition and recommender systems.

It uses machine learning techniques to solve real-world problems by tapping into neural networks that simulate human decision-making. Deep learning can be costly and requires huge datasets to train on. This is because there is a huge number of parameters the learning algorithm needs to fit, which can initially yield many false positives. For example, a deep learning algorithm could be trained to ‘learn’ what a dog looks like. It would take an enormous dataset of images for it to understand the minor details that distinguish a dog from a wolf or a fox.
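The scale of those parameters is easy to see by building even a small image classifier. A minimal sketch in Keras (the architecture is my own choice, purely for illustration):

import tensorflow as tf

# A small convolutional classifier for 64x64 RGB images, two classes (dog / not-dog).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.summary()  # even this toy network has hundreds of thousands of parameters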

Deep learning is part of DeepMind’s famous AlphaGo algorithm, which beat the world champion Lee Sedol in 4 out of 5 games of Go in early 2016. Google said, “the way the deep learning system worked was by combining Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.”


Source: https://www.geospatialworld.net/blogs/difference-between-ai%EF%BB%BF-machine-learning-and-deep-learning/

 

Bot Framework – New perspective of Marketing Automation

Developing intelligent chat bots with Microsoft AI platform and Bot Framework


Nowadays, we all use different kinds of applications on different platforms and devices. Some of us use mobiles; some use desktops and laptops to manage day-to-day activities and business. In our daily lives we use many kinds of applications: social media, messengers, shopping and ticket booking, customer service and other business applications. What if you need help while using these applications? What if you get confused choosing menu options, or simply getting started? You definitely need some kind of assistance. You could contact the customer support team, but that may mean sending mail or calling a customer service number and waiting for a response. What if you need immediate assistance? That is where an intelligent online assistant comes in: one that can help you choose options, provide suggestions, and converse with you in your language.

A chat bot is an intelligent online assistant that can converse with you in your language. It can be programmed with a powerful AI backend that understands your language and sentiment, provides suggestions, collects data from the user and responds, immediately or later, with the results you want. Chat bots can be programmed in different languages and hosted on various cloud platforms, and a chat bot can be easily integrated with almost any kind of application, whether a messenger such as Skype, Facebook Messenger, Google Talk, WeChat or Kik, or a web application. There are various bot frameworks available to developers, such as Microsoft Bot Framework, Facebook’s Wit.ai and Google’s api.ai, and you can host your bot applications on platforms such as Azure Bot Services, Chatfuel, HubSpot’s Motion.ai etc.

Microsoft Bot Framework is one of the best and richest frameworks for developing intelligent bot applications on the Microsoft Azure cloud platform. The Bot Framework consists of three main components: the Bot Builder SDK, Channels, and the Bot Framework Directory. Bot Builder provides an SDK, libraries, samples, and tools to help you build and debug bots. Microsoft Bot Builder provides SDKs for Node.js and C#, and project templates are also available for Java and Python.

Developing Bot applications using .NET

You can start creating your first bot application using Visual Studio. For that you need to install the project templates for bot applications. Two templates are available for .NET, targeting the v3 and v4 versions of the SDK respectively. Both are available as VSIX packages in the Visual Studio marketplace. You can download them from the following links.

Bot Builder V3 template: https://marketplace.visualstudio.com/items?itemName=BotBuilder.BotBuilderV3

Bot Builder V4 template: https://aka.ms/Ylcwxk

You need Visual Studio 2015 or later to install and develop using these templates. The Bot Builder SDK requires .NET Framework 4.6 or later.


Developing Bot Applications using Node.js

You can also develop your bot applications using Node.js. To install the bot templates for Node.js you need the latest version of Node.js (8.5 or later) and Yeoman. You can download and install the latest version of Node.js from the Node.js web site, then install the latest version of Yeoman by running the following command.

npm install -g yo

Install the Node.js project templates using the following npm command.

npm install generator-botbuilder

Developing Bot using Java and Python

You can also install the bot templates for Java and Python. Use the following npm commands to install the Yeoman generators for the Java and Python project templates.

npm install generator-botbuilder-java

npm install generator-botbuilder-python

Run the Yeoman command to generate the project template you want.
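As a rough idea of what these generators scaffold, here is a minimal echo bot using the v4 Bot Builder SDK for Python. This is a sketch of the core handler only; the hosting, adapter and configuration code that a generated project includes are omitted:

from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

class EchoBot(ActivityHandler):
    """Replies to every incoming message with the same text."""

    async def on_message_activity(self, turn_context: TurnContext):
        text = turn_context.activity.text
        await turn_context.send_activity(MessageFactory.text(f"You said: {text}"))

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet users when they join the conversation.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello! Type anything and I will echo it.")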


Making your bot intelligent

How can you create an intelligent bot that understands your language and responds to your queries? The Microsoft Azure AI platform provides a set of APIs, called Cognitive Services, that can be integrated with any of your applications. These include APIs for language processing, text-to-speech translation, suggestions, search, face recognition and more. You can integrate these APIs with your bot applications to make them more intelligent.

The interaction between bot and user is free-form, so it is important for a bot application to understand the user’s language and context. Microsoft Azure Cognitive Services provides the LUIS (Language Understanding Intelligent Service) API, which helps the bot understand the user’s language and context. For that, you create a LUIS app model and train the model to understand utterances (what the user says) and entities. Once the model starts processing input, LUIS begins active learning, allowing you to constantly update and improve it.
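A minimal sketch of querying a published LUIS app over its REST endpoint follows. The region, app ID and key are placeholders, and the exact URL shape varies by LUIS API version, so treat it as an assumption to check against your own app’s settings:

import requests

# Placeholders: substitute your own region, app ID and subscription key.
REGION = "westus"
APP_ID = "<your-luis-app-id>"
KEY = "<your-subscription-key>"

def get_intent(utterance: str) -> dict:
    """Send an utterance to LUIS and return the top intent and entities."""
    url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    resp = requests.get(url, params={"subscription-key": KEY, "q": utterance})
    resp.raise_for_status()
    body = resp.json()
    return {"intent": body["topScoringIntent"]["intent"], "entities": body["entities"]}

print(get_intent("book a flight to Paris next Monday"))

Your bot's message handler can then branch on the returned intent instead of trying to parse free-form text itself.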

Author: Sonu Sathyadas, Tech Lead, Synergetics

 

12 major Artificial Intelligence trends to watch for in 2018

Artificial Intelligence (AI) has the peculiar ability to simultaneously amaze, enthrall, leave us gasping and intimidate. The possibilities of AI are innumerable, and they easily surpass our most artistically fecund imaginations. What we read in science fiction novels or saw in movies like ‘The Matrix’ could someday materialize into reality. Bill Gates, the founder of Microsoft, recently said that ‘AI can be our friend’ and is good for society. From decision-making to computing to robotics to vehicles and even cosmetics, AI has left its mark everywhere, and it will usher in the grandest social engineering experiment in the history of the world.

CBInsights has prepared a list of the major AI trends to follow in 2018. Let’s have a look at the trends in AI that will have a huge impact in the years to come.

Robotic workforce

It is no longer a closely guarded secret that, in the future, much of the labor-intensive work on factory assembly lines will be done by AI-programmed robots rather than workers. This will bring down the cost of hiring workers and also reduce outsourcing and offshoring.

Recently, a Chinese T-shirt manufacturer Tianyuan Garments Company signed a Memorandum of Understanding (MoU) with the Arkansas government to employ 400 workers at $14/hr at its new garment factory in Arkansas. Operations were scheduled to begin by the end of 2017. Tianyuan’s factory in Little Rock, Arkansas, will use sewing robots developed by Georgia-based startup SoftWear Automation to manufacture apparel.

In Japan, by 2025, more than 80% of elderly care is expected to be done by robots, not caregivers.

Ubiquitous Artificial Intelligence

Artificial intelligence impacts multiple fields, even those we least expect it to. Machine learning, a crucial component of AI, refers to training algorithms on large data sets so that they learn to identify desired patterns and get better at their tasks.

The functioning of AI is getting more versatile with each passing day.

Uncle Sam vs The Dragon in the realm of AI

China is all set to prove its prowess in AI and outshine the US and other western countries. The Chinese government is investing heavily in this futuristic technology.

The Chinese government is promoting a national AI plan. It includes everything from smart agriculture and intelligent logistics to military applications.

In 2017, China’s artificial intelligence startups took 48% of all dollars going to AI startups globally, more than those in the USA. In deep learning, China also publishes six times as many patents as the US.

Battlefields in the age of AI

The wars of the future will rely on smart technology like never before. Drones are just the beginning. With the increasing convergence of conventional defense, surveillance, and reconnaissance with cybersecurity, the need for algorithm-based AI only expands.

Cybersecurity is a real opportunity area for AI, since attacks are constantly evolving and the main challenge is new forms of malware. Prima facie, AI has an extra edge here, given its ability to operate at scale and sift through millions of incidents to identify aberrations, risks, and signals of future threats.

The market is mushrooming with new cybersecurity companies trying to leverage machine learning to some extent.

Voice Assistants

Voice-enabled computing was everywhere at the Consumer Electronics Show in 2018. Barely any IoT device came without integration with the Amazon Echo or Google Home.

Samsung is also working on its own voice assistant, Bixby. It wants all of its products to be internet-connected and have intelligence from Bixby by 2020.

AI to throw the gauntlet before professionals

Skilled professionals, including lawyers, consultants and financial advisors, will face the heat of artificial intelligence as much as unskilled and semi-skilled workers.

For instance, artificial intelligence has huge potential to reduce the time and improve efficiency in legal work. As AI platforms become more efficient, affordable and commercialized, this will influence the remuneration structure of external law firms that charge by the hour.

Decentralization and Democratization

Artificial Intelligence isn’t only limited to powerful supercomputers and big devices; it is also becoming a part and parcel of smartphones and wearable devices and equipment. Edge computing is emerging as the next big area in AI.

Apple released its A11 chip with a neural engine for the iPhone 8 and X. Apple claims it can perform machine learning tasks at up to 600 billion operations per second.

Another case for edge AI would be training your personal AI assistant locally on your device to recognize your unique accent or identify faces.

Capsule Networks

Neural networks come in myriad architectures. One of the most popular in deep learning these days is the convolutional neural network (CNN). Now a new architecture, the capsule network, has been developed, and it is expected to outpace CNNs on multiple fronts.

CNNs have certain limitations that lead to lack of performance or gaps in security.

Capsule Networks would allow AIs to identify general patterns with less data and be less susceptible to false results.

Capsule Networks would take relative positions and orientation of an object into consideration without needing to be trained exhaustively on variations.

Dream salaries in AI talent hunt

As per a recent report, there are approximately 300,000 qualified AI researchers worldwide, including students in relevant research areas. Meanwhile, companies require a million or more AI specialists for their engineering needs.

In the US, a Glassdoor search for “artificial intelligence” shows over 32,000 jobs currently listed, with several salary ranges well into six figures. Companies are more than willing to pay handsome salaries to capable AI experts.

Bigwigs of enterprise AI

Tech giants like Google, Amazon, Salesforce, and Microsoft continue to improve their enterprise AI capabilities.

AI medical diagnostics

Regulators in the US are moving toward approving AI for use in clinical settings. The advantage of AI in diagnostics is earlier detection and better accuracy.

Machine learning algorithms can compare a medical image with those of millions of other patients, picking up on nuances that a human eye may otherwise miss.

Consumer-focused AI monitoring tools like SkinVision, which uses computer vision to monitor suspicious skin lesions, are already in use. But a new wave of healthcare AI applications will lay the ground for machine learning capabilities in hospitals and clinics.

Build your own AI

Thanks to open source software libraries, hundreds of APIs and SDKs, and easy assembly kits from Amazon and Google, the barrier to entry in AI has never been lower. Google has launched an “AI for all ages” project called AIY (Artificial Intelligence Yourself).

Source: https://www.geospatialworld.net/blogs/13-artificial-intelligence-trends-2018/

Introduction to Object Detection


Humans can easily detect and identify objects in an image. The human visual system is fast and accurate and can perform complex tasks, like identifying multiple objects and detecting obstacles, with little conscious thought. With the availability of large amounts of data, faster GPUs, and better algorithms, we can now train computers to detect and classify multiple objects within an image with high accuracy. In this blog, we will explore terms such as object detection, object localization, and the loss function for detection and localization, and finally explore an object detection algorithm known as “You Only Look Once” (YOLO).

Object Localization

An image classification or image recognition model simply predicts the probability of an object being present in an image. In contrast, object localization refers to identifying the location of an object in the image. An object localization algorithm outputs the coordinates of an object’s location with respect to the image. In computer vision, the most popular way to localize an object in an image is to represent its location with a bounding box. Fig. 1 shows an example of a bounding box.

A bounding box can be described by the following parameters:

  • bx, by: coordinates of the center of the bounding box
  • bw: width of the bounding box relative to the image width
  • bh: height of the bounding box relative to the image height
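A short sketch of this parameterization in plain Python (the image size and box values are made up for illustration), converting normalized box coordinates back to pixel corners:

from dataclasses import dataclass

@dataclass
class BoundingBox:
    bx: float  # center x, as a fraction of image width
    by: float  # center y, as a fraction of image height
    bw: float  # box width, as a fraction of image width
    bh: float  # box height, as a fraction of image height

    def to_pixels(self, img_w: int, img_h: int):
        """Return (x_min, y_min, x_max, y_max) in pixel coordinates."""
        cx, cy = self.bx * img_w, self.by * img_h
        w, h = self.bw * img_w, self.bh * img_h
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

box = BoundingBox(bx=0.5, by=0.6, bw=0.2, bh=0.3)
print(box.to_pixels(256, 256))  # -> (102.4, 115.2, 153.6, 192.0)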

Object Detection

One approach to building an object detection system is to first build a classifier that can classify closely cropped images of an object. Fig. 2 shows an example of such a model, trained on a dataset of closely cropped images of cars, which predicts the probability that an image is a car.

Now, we can use this model to detect cars using a sliding window mechanism. We slide a window (similar to the one used in convolutional networks) across the image and crop out a part of the image at each step. The size of each crop is the same as the size of the sliding window. Each cropped image is then passed to a ConvNet model, which in turn predicts the probability that the cropped image contains a car.


After running the sliding window over the whole image, we resize the window and run it over the image again, repeating the process multiple times. Since we crop out a large number of images and pass each one through the ConvNet, this approach is both computationally expensive and time-consuming, making the whole process really slow. A convolutional implementation of the sliding window helps resolve this problem.
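To pin down the mechanics (and the cost), here is a naive sliding-window sketch in NumPy; classify_crop stands in for the trained ConvNet, which is assumed rather than provided:

import numpy as np

def classify_crop(crop: np.ndarray) -> float:
    """Placeholder for the trained ConvNet: returns P(crop contains a car)."""
    return float(crop.mean() > 0.5)  # dummy rule, for illustration only

def sliding_window_detect(image: np.ndarray, win: int, stride: int, thresh=0.5):
    """Run the classifier on every window; return boxes scoring above thresh."""
    detections = []
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            score = classify_crop(image[y:y + win, x:x + win])
            if score >= thresh:
                detections.append((x, y, x + win, y + win, score))
    return detections

image = np.random.rand(256, 256)
# Every (window size, stride) pass is another full sweep -- hence the expense.
for win in (32, 64, 128):
    print(win, len(sliding_window_detect(image, win, stride=16)))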

The YOLO (You Only Look Once) Algorithm

A better algorithm that tackles the issue of predicting accurate bounding boxes while using the convolutional sliding window technique is the YOLO algorithm. YOLO stands for you only look once and was developed in 2015 by Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. It’s popular because it achieves high accuracy while running in real time. This algorithm is called so because it requires only one forward propagation pass through the network to make the predictions.


The algorithm divides the image into a grid and runs the image classification and localization algorithm (discussed under object localization) on each grid cell. For example, say we have an input image of size 256 × 256 and we place a 3 × 3 grid on it.

Next, we apply the image classification and localization algorithm to each grid cell, doing everything in a single pass with the convolutional sliding window. Since the shape of the target variable for each grid cell is 1 × 9 and there are 9 (3 × 3) grid cells, the final output of the model will be:

Final output = 3 × 3 × 9 (number of grid cells × output vector per cell)

The advantage of the YOLO algorithm is that it is very fast and predicts much more accurate bounding boxes. In practice, to get more accurate predictions, we use a much finer grid, say 19 × 19, in which case the target output is of shape 19 × 19 × 9.
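A small sketch of how that 3 × 3 × 9 target is assembled, in NumPy. The 9-value layout of [pc, bx, by, bh, bw] plus four class scores is an assumption consistent with the 1 × 9 shape above, not a detail spelled out in the text:

import numpy as np

GRID, DEPTH = 3, 9  # 3x3 cells; per cell: [pc, bx, by, bh, bw, c1..c4]

def make_target(objects, grid=GRID):
    """objects: list of (bx, by, bh, bw, class_idx) with coords in [0, 1)."""
    y = np.zeros((grid, grid, DEPTH))
    for bx, by, bh, bw, cls in objects:
        col, row = int(bx * grid), int(by * grid)  # cell holding the box center
        y[row, col, 0] = 1.0                       # pc: an object is present
        y[row, col, 1:5] = (bx, by, bh, bw)        # box parameters
        y[row, col, 5 + cls] = 1.0                 # one-hot class score
    return y

target = make_target([(0.55, 0.30, 0.25, 0.40, 2)])  # one object, class 2
print(target.shape)  # (3, 3, 9)
print(target[0, 1])  # the cell (row 0, col 1) that owns the object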

Conclusion

With this, we come to the end of this introduction to object detection. We now have a better understanding of how we can localize objects while classifying them in an image. We also learned to combine classification and localization with the convolutional implementation of the sliding window to build an object detection system. In the next blog, we will go deeper into the YOLO algorithm and its loss function, and implement some ideas that make YOLO better. We will also learn to implement the YOLO algorithm in real time.

Source: https://www.hackerearth.com/blog/machine-learning/introduction-to-object-detection/