Practicing ‘No Code’ Data Science

Blog Insights by Ashvini Shahane, President – Learning Services, Synergetics Information Technology Services India Pvt. Ltd.

“This is a great article, and it is really fascinating to see how the world of Data Science and Machine Learning is becoming more democratized, with “No-code Data Science Tools” enabling the growth of more “Citizen Data Scientists”. Quoting from the article – “In advanced analytics and AI it’s about the shortage, cost, and acquisition of sufficient skilled data scientists.” – the need is to build Machine Learning solutions faster, with more efficiency and consistency.

When I started out on the journey of AI and Data Science, coming from the Microsoft technologies space, tools like Microsoft Azure ML Studio helped me build, train, and operationalize ML models quickly with minimal Data Science background. Microsoft started with tools like Azure ML Studio for the budding, inexperienced Data Scientist and then went on to provide tools for the more experienced in the field with Microsoft ML Services.

Recently, Microsoft has grown its offerings on ML with its most recent addition of “Automated ML” capability in Azure Machine Learning Services. Automated ML empowers customers, with or without data science expertise, to identify an end-to-end machine learning pipeline for any problem, achieving higher accuracy while spending far less of their time. It is like a recommender system for machine learning pipelines.


https://azure.microsoft.com/en-us/blog/announcing-automated-ml-capability-in-azure-machine-learning/

Really looking forward to the innovations in the “No-code Data Science” space making the creation and usage of Data Science and ML solutions easier, faster and more accurate.”

 


Summary:  We are entering a new phase in the practice of data science, the ‘Code-Free’ era.  Like all major changes, this one has not sprung up fully grown, but the movement is now large enough that its momentum is clear.  Here’s what you need to know.

We are entering a new phase in the practice of data science, the ‘Code-Free’ era.  Like all major changes, this one has not sprung up fully grown, but the movement is now large enough that its momentum is clear.

Barely a week goes by that we don’t learn about some new automated / no-code capability being introduced.  Sometimes these are new startups with integrated offerings.  More frequently they’re features or modules being added by existing analytic platform vendors.

I’ve been following these automated machine learning (AML) platforms since they emerged.  I wrote first about them in the spring of 2016 under the somewhat scary title “Data Scientists Automated and Unemployed by 2025!”.

Of course this was never my prediction, but in the last 2 ½ years the spread of automated features in our profession has been striking.

No Code Data Science


No-Code data science, or automated machine learning, or, as Gartner has tried to brand it, ‘augmented’ data science, offers a continuum of ease of use:

Guided Platforms: Platforms with highly guided modeling procedures, but still requiring the user to move through the steps (e.g. BigML, SAS, Alteryx). Classic drag-and-drop platforms are the basis for this generation.

Automated Machine Learning (AML): Fully automated machine learning platforms (e.g. DataRobot).

Conversational Analytics: In this last version, the user merely poses the question to be solved in plain English, and the platform presents the best answer, selecting the data, features, modeling technique, and presumably even the best data visualization.

This list also pretty well describes the developmental timeline.  Guided Platforms are now old hat.  AML platforms are becoming numerous and mature.  Conversational analytics is just beginning.

Not Just for Advanced Analytics

This smart augmentation of our tools extends beyond predictive / prescriptive modeling into the realm of data blending and prep, and even into data viz.  What this means is that code-free smart features are being made available to classical BI business analysts, and of course to power user LOB managers (aka Citizen Data Scientists).

The market drivers for this evolution are well known.  In advanced analytics and AI it’s about the shortage, cost, and acquisition of sufficient skilled data scientists.  In this realm it’s about time to insight, efficiency, and consistency.  Essentially doing more with less and faster.

However, in the data prep, blending, and feature identification world, which is also important to data scientists, the real draw is the much larger data analyst / BI practitioner world.  In this world, the ETL of classic static data is still a huge burden and time delay, and it is moving rapidly from an IT specialist function to self-service.

Everything Old is New Again

When I started in data science in about 2001, SAS and SPSS were the dominant players, and they were already moving away from their proprietary code toward drag-and-drop, the earliest form of this automation.

The transition in academia 7 or 8 years later to teaching in R seems to have been driven financially by the fact that although SAS and SPSS gave essentially free access to students, they still charged instructors, albeit at a large academic discount.  R however was free.

We then regressed back to an age, continuing to this day, in which being a data scientist means working in code.  That’s the way this current generation of data scientists has been taught, and, as expected, that’s how they practice.

There has also been an incorrect bias that working in a drag-and-drop system does not allow the fine-grained hyperparameter tuning that code allows.  If you’ve ever worked in SAS Enterprise Miner or its competitors, you know this is incorrect; in fact, fine tuning is made all the easier.

In my mind this was always an unnecessary digression back to the bad old days of coding-only, which tended to take the new practitioner’s eye off the fundamentals and make data science look like just another programming language to master.  So I, for one, both welcomed and expected this return to procedures that are speedy and consistent among practitioners.

What About Model Quality?

We tend to think of a ‘win’ in advanced analytics as improving the accuracy of a model.  There’s a perception that relying on automated No-Code solutions gives up some of this accuracy.  This isn’t true.

The AutoML platforms like DataRobot, Tazi.ai, and OneClick.ai (among many others) not only run hundreds of model types in parallel including variations on hyperparameters, but they also perform transforms, feature selection, and even some feature engineering.  It’s unlikely that you’re going to beat one of these platforms on pure accuracy.

A caveat here is that domain expertise applied to feature engineering is still a human advantage.

Perhaps more importantly, when we’re talking about variations in accuracy at the second or third decimal place, are the many weeks you spent on development a good cost tradeoff compared to the few days or even hours these AutoML platforms offer?
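As a minimal sketch of the search loop these platforms automate — with toy one-feature data and simple threshold rules standing in for real algorithms and hyperparameters, so every name here is hypothetical — the idea reduces to: try many candidates, score each on held-out data, keep the best.

```python
# Toy sketch of what AutoML platforms automate: enumerate candidate
# models (simple threshold rules standing in for real algorithms and
# hyperparameters) and keep the one with the best holdout accuracy.

def make_threshold_model(threshold):
    """A stand-in 'model': predict 1 when the feature exceeds the threshold."""
    return lambda x: 1 if x > threshold else 0

def accuracy(model, data):
    """Fraction of (feature, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def auto_select(validation_data, candidate_thresholds):
    """Score every candidate on held-out data; return (best_score, best_threshold)."""
    scored = [(accuracy(make_threshold_model(t), validation_data), t)
              for t in candidate_thresholds]
    return max(scored)
```

A real platform searches over algorithm families, transforms, and engineered features in the same spirit, just at vastly larger scale.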

The Broader Impact of No Code

It seems to me that the biggest beneficiaries of no-code are actually classic data analysts and LOB managers who continue to be most focused on BI static data.  The standalone data blending and prep platforms are a huge benefit to this group (and to IT whose workload is significantly lightened).

These no-code data prep platforms like ClearStory Data, Paxata, and Trifacta are moving rapidly to incorporate ML features into their processes that help users select which data sources are appropriate to blend, what the data items actually mean (using more ad hoc sources in the absence of good data dictionaries), and even extending into feature engineering and feature selection.

Modern data prep platforms are using embedded ML for example for smart automated cleaning or treatment of outliers.

Others like Octopai, just reviewed by Gartner as one of “5 Cool Companies” focus on enabling users to quickly find trusted data through automation by using machine learning and pattern analysis to determine the relationships among different data elements, the context in which the data was created, and the data’s prior uses and transformations.

These platforms also enable secure self-service by enforcing permissions and protecting PII and other similarly sensitive data.

Even data viz leader Tableau is rolling out conversational analytic features using NLP and other ML tools to allow users to pose queries in plain English and return optimum visualizations.

What Does This Actually Mean for Data Scientists?

Gartner believes that within two years, by 2020, citizen data scientists will surpass data scientists in the quantity and value of the advanced analytics they produce.  They propose that data scientists will instead focus on specialized problems and embedding enterprise-grade models into applications.

I disagree.  This would seem to relegate data scientists to the role of QA and implementation.  That’s not what we signed on for.

My take is that this will rapidly expand the use of advanced analytics deeper and deeper into organizations thanks to smaller groups of data scientists being able to handle more and more projects.

We’ve moved on, in only a year or two, from a time when the data scientist’s most important skills included blending and cleaning the data and selecting the right predictive algorithm for the task.  These are precisely the areas that augmented/automated no-code tools are taking over.

Companies that must create, monitor, and manage hundreds or thousands of models have been the earliest adopters, specifically insurance and financial services.

What does that leave?  It leaves the senior role of Analytics Translator.  That’s the role McKinsey recently identified as the most important in any data science initiative.  In short, the job of the Analytics Translator is to:

  • Lead the identification of opportunities where advanced analytics can make a difference.
  • Facilitate the process of prioritizing these opportunities.
  • Frequently serve as project manager on the projects.
  • Actively champion adoption of the solutions across the business and promote cost effective scaling.

In other words, translate business problems into data science projects and lead in quantifying the various types of risk and rewards that allow these projects to be prioritized.

What About AI?

Yes, even our most recent advancements into image, text, and speech with CNNs and RNNs are rapidly being rolled out as automated no-code solutions.  And they couldn’t come fast enough, because the shortage of data scientists with deep learning skills is even greater than that of our more general practitioners.

Both Microsoft and Google rolled out automated deep learning platforms within the last year.  These started with transfer learning but are headed toward full AutoDL.  See Microsoft Custom Vision Services (https://www.customvision.ai/) and Google’s similar entry Cloud AutoML.

There are also a number of startup integrated AutoDL platforms.  We reviewed OneClick.AI earlier this year.  They include both a full AutoML and AutoDL platform.  Gartner recently nominated DimensionalMechanics as one of its “5 Cool Companies” with an AutoDL platform.

For a while I tried to personally keep up with the list of vendors of both No-Code AutoML and AutoDL and offer updates on their capabilities.  This rapidly became too much.

I was hoping Gartner or some other worthy group would step up with a comprehensive review and in 2017 Gartner did a fairly lengthy report “Augmented Analytics In the Future of Data and Analytics”.  The report was a good broad brush but failed to capture many of the vendors I was personally aware of.

To the best of my knowledge there’s still no comprehensive listing of all the platforms that offer either complete automation or significantly automated features.  They do however run from IBM and SAS all the way down to small startups, all worthy of your consideration.

Many of these are mentioned or reviewed in the articles linked below.  If you’re using advanced analytics in any form, or simply want to make your traditional business analysis function better, look at the solutions mentioned in these.

Source: https://www.datasciencecentral.com/profiles/blogs/practicing-no-code-data-science

 

What is the difference between AI, machine learning and deep learning?


In the first part of this blog series, we gave you simple and elaborate definitions of what artificial intelligence (AI), machine learning, and deep learning are. In this second part of the series, we walk our readers through the difference between AI, machine learning, and deep learning.

You can think of artificial intelligence (AI), machine learning and deep learning as a set of a matryoshka doll, also known as a Russian nesting doll. Deep learning is a subset of machine learning, which is a subset of AI.

Artificial intelligence is any computer program that does something smart. It can be a complex stack of statistical models or simple if-then statements. AI can refer to anything from a computer program playing chess to a voice-recognition system like Alexa. However, the technology can be broadly categorized into three groups — Narrow AI, artificial general intelligence (AGI), and superintelligent AI.

IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, or Google DeepMind’s AlphaGo, which beat Lee Sedol at Go in 2016, are examples of narrow AI — AI that is skilled at one specific task. This is different from AGI — AGI is the intelligence of a machine that could successfully perform any intellectual task that a human being can. Superintelligent AI takes things a step further. As Nick Bostrom describes it, this is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” In other words, it is when the machines have outfoxed us.


Machine learning is a subset of AI. The idea is simple: machines take data and ‘learn’ for themselves. It is currently the most promising tool in the AI pool for businesses. Machine learning systems can quickly apply knowledge and training from large datasets to excel at facial recognition, speech recognition, object recognition, translation, and many other tasks. Machine learning allows a system to learn to recognize patterns on its own and make predictions, in contrast to hand-coding a software program with specific instructions to complete a task.
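To make the contrast with hand-coded rules concrete, here is a minimal sketch — toy one-dimensional data and a nearest-centroid rule, all names hypothetical — in which the decision boundary is derived entirely from labeled examples rather than written by a programmer:

```python
# 'Learning from data' in miniature: a nearest-centroid classifier.
# Nobody writes the decision rule; it is computed from the examples.

def train_centroids(examples):
    """examples: list of (feature_value, label) pairs with labels 0/1.
    Returns the mean feature value (centroid) of each class."""
    by_label = {0: [], 1: []}
    for x, y in examples:
        by_label[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))
```

Change the training examples and the learned boundary moves with them — that is the essential difference from an if-then program.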

While Deep Blue and DeepMind’s AlphaGo are both types of AI, Deep Blue was rule-based and dependent on programming, so it was not a form of machine learning. AlphaGo, on the other hand, beat the world champion in Go by training itself on a large dataset of expert moves.

That is, all machine learning counts as AI, but not all AI counts as machine learning.

Deep learning is a subset of machine learning. Deep artificial neural networks are a set of algorithms reaching new levels of accuracy for many important problems, such as image recognition, sound recognition, recommender systems, etc.

It uses machine learning techniques to solve real-world problems by tapping into neural networks that simulate human decision-making. Deep learning can be costly and requires huge datasets to train itself. This is because there are a huge number of parameters that need to be learned by the algorithm, which can initially yield a lot of false positives. For example, a deep learning algorithm could be trained to ‘learn’ what a dog looks like. It would take an enormous dataset of images for it to understand the minor details that distinguish a dog from a wolf or a fox.

Deep learning is part of DeepMind’s famous AlphaGo algorithm, which beat the former world champion Lee Sedol in 4 out of 5 games of Go in early 2016. Google said, “the way the deep learning system worked was by combining Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.”


Source: https://www.geospatialworld.net/blogs/difference-between-ai%EF%BB%BF-machine-learning-and-deep-learning/

 

Bot Framework – New perspective of Marketing Automation

Developing intelligent chat bots with Microsoft AI platform and Bot Framework


Nowadays, we all use different kinds of applications on different platforms and devices. Some of us use mobiles, others use desktops and laptops, to manage our day-to-day activities and business. In our daily lives we use different kinds of applications, such as social media applications, messengers, shopping and ticket booking applications, customer service applications, and other business applications. What if you need help while using these applications? What if you get confused while choosing menu options or getting started? You definitely need some kind of assistance to go ahead. You can contact the customer support team with your queries, but you may need to send an email or call the customer service number and wait for a response. What if you need immediate assistance? That is where an intelligent online assistant comes in — one that can help you choose options, provide suggestions, and converse with you in your language.

A chat bot is an intelligent online assistant that can converse with you in your language. It can be programmed with a powerful AI backend that understands your language and feelings, provides suggestions, collects data from the user, and responds quickly — or later — with the results you want. Chat bots can be programmed in different languages and hosted on various cloud platforms. A chat bot can be easily integrated with any kind of application of your choice. It could be a messenger application such as Skype, Facebook Messenger, Google Talk, WeChat, or Kik, or a web application. There are various bot frameworks available to developers, such as Microsoft Bot Framework, Facebook Wit.ai, and Google’s api.ai. You can host your bot applications on various platforms such as Azure Bot Services, Chatfuel, HubSpot Motion.ai, etc.

Microsoft Bot Framework is one of the best and richest frameworks for developing intelligent bot applications on the Microsoft Azure cloud platform. The Bot Framework consists of three main components: the Bot Builder SDK, Channels, and the Bot Framework Directory. Bot Builder provides an SDK, libraries, samples, and tools to help you build and debug bots. Microsoft Bot Builder provides SDKs for Node.js and C#, with generators also available for Java and Python — i.e. you can develop your bot applications using Node.js, C#.NET, Java, and Python.

Developing Bot applications using .NET

You can start creating your first bot applications using Visual Studio. For that you need to install the project templates for bot applications. Two templates are available for .NET, targeting the v3 and v4 versions of the SDK respectively. Both are available as VSIX packages in the Visual Studio marketplace. You can download them from the following links.

Bot Builder V3 template: https://marketplace.visualstudio.com/items?itemName=BotBuilder.BotBuilderV3

Bot Builder V4 template: https://aka.ms/Ylcwxk

You need Visual Studio 2015 or later versions to install and develop using these templates. Bot Builder SDK requires .NET framework version 4.6 or later.


Developing Bot Applications using Node.JS

You can also develop your bot applications using Node.js. To install the bot templates for Node.js, you need the latest version of Node.js (8.5 or later) and Yeoman. You can download and install the latest version of Node.js from the Node.js web site. Install the latest version of Yeoman by running the following command.

npm install -g yo

Install the Node.JS project templates using the following npm command.

npm install generator-botbuilder

Developing Bot using Java and Python

You can also install the bot templates for Java and Python. Use the following npm commands to install the Yeoman generators for the Java and Python project templates.

npm install generator-botbuilder-java

npm install generator-botbuilder-python

Run the Yeoman command to generate the project template you want.


Making your bot intelligent

How can you create an intelligent bot that understands your language and responds to your queries? The Microsoft Azure AI platform provides a set of APIs that can be integrated with any of your applications. These APIs are called Cognitive Services. They include APIs for language processing, text-to-speech translation, suggestions, search, face detection, etc. You can integrate these APIs with your bot applications to make them more intelligent.

The interaction between a bot and a user is free-form, so it is important for a bot application to understand the user’s language and context. Microsoft Azure Cognitive Services provides the LUIS (Language Understanding Intelligent Service) API, which helps the bot understand the user’s language and context. For that you need to create a LUIS app model and train your model to understand the utterances (what the user says) and the entities. Once the model starts processing input, LUIS begins active learning, allowing you to constantly update and improve the model.
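As a rough illustration of the intent/utterance shape of the problem — this is NOT the LUIS API, just a keyword-overlap stand-in with hypothetical intents, whereas LUIS uses trained language-understanding models — an intent recognizer can be sketched as:

```python
# Illustrative stand-in for intent recognition (not the LUIS API):
# score each hypothetical intent by keyword overlap with the utterance.

INTENTS = {  # hypothetical intents and their trigger words
    "BookFlight": {"book", "flight", "fly", "ticket"},
    "CheckWeather": {"weather", "rain", "forecast", "temperature"},
}

def recognize_intent(utterance):
    """Return the intent sharing the most words with the utterance,
    or "None" when nothing matches."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "None"
```

A real LUIS model generalizes beyond exact keywords and also extracts entities (e.g. the destination city in a flight-booking utterance).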

Author: Sonu Sathyadas, Tech Lead, Synergetics

 

12 major Artificial Intelligence trends to watch for in 2018

Artificial Intelligence (AI) has the peculiar ability to simultaneously amaze, enthrall, leave us gasping, and intimidate us. The possibilities of AI are innumerable, and they easily surpass our most artistically fecund imaginations. What we read in science fiction novels or saw in movies like ‘The Matrix’ could someday materialize into reality. Bill Gates, the founder of Microsoft, recently said that ‘AI can be our friend’ and is good for society. From decision-making to computing to robotics to vehicles and even cosmetics, AI has left its mark everywhere, and it will usher in the grandest social engineering experiment in the history of the world.

CBInsights has prepared a list of the major AI trends to follow in 2018. Let’s have a look at the 12 trends in AI that will have a huge impact in the years to come.

Robotic workforce

It is no longer a closely guarded secret that in the future much of the labor-intensive work on factory assembly lines will be done by AI-programmed robots rather than workers. This will bring down the cost of hiring workers and also reduce outsourcing and offshoring.

Recently, a Chinese T-shirt manufacturer Tianyuan Garments Company signed a Memorandum of Understanding (MoU) with the Arkansas government to employ 400 workers at $14/hr at its new garment factory in Arkansas. Operations were scheduled to begin by the end of 2017. Tianyuan’s factory in Little Rock, Arkansas, will use sewing robots developed by Georgia-based startup SoftWear Automation to manufacture apparel.

In Japan, by 2025, more than 80% of elderly care is expected to be done by robots, not caregivers.

Ubiquitous Artificial Intelligence

Artificial Intelligence impacts multiple fields, even those we least expect it to. Machine learning, a crucial component of AI, refers to the training of algorithms on large data sets so that they learn to identify desired patterns and get better at their tasks.

The functioning of AI is getting more versatile with each passing day.

Uncle Sam vs The Dragon in the realm of AI

China is all set to prove its prowess in AI and outshine the US and other western countries. The Chinese government is investing heavily in this futuristic technology.

The Chinese government is promoting an intelligence plan. It includes everything from smart agriculture and intelligent logistics to military applications.

In 2017, China’s artificial intelligence startups took 48% of all dollars going to AI startups globally, more than those of the USA. In deep learning, too, China publishes six times more patents than the US.

Battlefields in the age of AI

The wars of the future will rely on smart technology like never before. Drones are just the beginning. With the increasing convergence of conventional defense, surveillance, and reconnaissance with cybersecurity, the need for algorithm-based AI only expands.

Cyber security is a real opportunity area for AI, since attacks are constantly evolving and the main challenge is new forms of malware. Prima facie, AI has an extra edge here given its ability to operate at scale and sift through millions of incidents to identify aberrations, risks, and signals of future threats.

The market is mushrooming with new cybersecurity companies trying to leverage machine learning to some extent.

Voice Assistants

Voice-enabled computing was everywhere at the Consumer Electronics Show in 2018. Barely any IoT device was without integration with Amazon Echo or Google Home.

Samsung is also working on its own voice assistant, Bixby. It wants all of its products to be internet-connected and have intelligence from Bixby by 2020.

AI to throw the gauntlet before professionals

Skilled professionals — including lawyers, consultants, financial advisors, etc. — will face the heat of artificial intelligence as much as unskilled and semi-skilled workers.

For instance, artificial intelligence has huge potential to reduce the time and improve efficiency in legal work. As AI platforms become more efficient, affordable and commercialized, this will influence the remuneration structure of external law firms that charge by the hour.

Decentralization and Democratization

Artificial Intelligence isn’t only limited to powerful supercomputers and big devices; it is also becoming a part and parcel of smartphones and wearable devices and equipment. Edge computing is emerging as the next big area in AI.

Apple released its A11 chip with a neural engine for iPhone 8 and X. Apple claims it can perform machine learning tasks at up to 600B operations per second.

Another case for edge AI would be training your personal AI assistant locally on your device to recognize your unique accent or identify faces.

Capsule Networks

Neural networks come in myriad architectures. One of the most popular in deep learning these days is the convolutional neural network. Now a new architecture, the capsule network, has been developed, and it may outpace convolutional neural networks (CNNs) on multiple fronts.

CNNs have certain limitations that lead to lack of performance or gaps in security.

Capsule Networks would allow AIs to identify general patterns with less data and be less susceptible to false results.

Capsule Networks would take relative positions and orientation of an object into consideration without needing to be trained exhaustively on variations.

Dream salaries in AI talent hunt

As per a recent report, the approximate number of qualified researchers currently in the field of AI is 300,000, including students in relevant research areas. Meanwhile, companies require a million or more AI specialists for their engineering needs.

In the US, a Glassdoor search for “artificial intelligence” shows over 32,000 jobs currently listed, with several salary ranges well into the 6 digits. Companies are more than willing to pay handsome emoluments to intelligent AI experts.

Bigwigs of enterprise AI

Tech giants like Google, Amazon, Salesforce, and Microsoft are improving their enterprise AI capabilities.

AI medical diagnostics

Regulators in the US are moving toward approving AI for use in clinical settings. The advantage of AI in diagnostics is early detection and better accuracy.

Machine learning algorithms can compare a medical image with those of millions of other patients, picking up on nuances that a human eye may otherwise miss.

Consumer-focused AI monitoring tools like SkinVision — which uses computer vision to monitor suspicious skin lesions — are already in use. But a new wave of healthcare AI applications will set the ground for machine learning capabilities in hospitals and clinics.

Build your own AI

Thanks to open source software libraries, hundreds of APIs and SDKs, and easy assembly kits from Amazon and Google, the barrier to entry in AI has never been lower. Google launched an “AI for all ages” project called AIY (Artificial Intelligence Yourself).

Source: https://www.geospatialworld.net/blogs/13-artificial-intelligence-trends-2018/

Introduction to Object Detection Artificial Intelligence | Cognitive | Machine Learning | Python

Introduction to object detection

Humans can easily detect and identify objects present in an image. The human visual system is fast and accurate and can perform complex tasks like identifying multiple objects and detecting obstacles with little conscious thought. With the availability of large amounts of data, faster GPUs, and better algorithms, we can now easily train computers to detect and classify multiple objects within an image with high accuracy. In this blog, we will explore terms such as object detection, object localization, and the loss function for object detection and localization, and finally explore an object detection algorithm known as “You Only Look Once” (YOLO).

Object Localization

An image classification or image recognition model simply predicts the probability of an object being in an image. In contrast, object localization refers to identifying the location of an object in the image. An object localization algorithm outputs the coordinates of the location of an object with respect to the image. In computer vision, the most popular way to localize an object in an image is to represent its location with a bounding box. Fig. 1 shows an example of a bounding box.

A bounding box can be specified using the following parameters:

  • bx, by : coordinates of the center of the bounding box
  • bw : width of the bounding box w.r.t. the image width
  • bh : height of the bounding box w.r.t. the image height
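Assuming all four parameters are expressed relative to the image dimensions (as in YOLO-style parameterizations), a small sketch converts them to pixel corner coordinates:

```python
# Convert a normalized (bx, by, bw, bh) bounding box — center and size
# relative to the image dimensions — into pixel corner coordinates.

def to_corners(bx, by, bw, bh, img_w, img_h):
    """Return (x_min, y_min, x_max, y_max) in pixels."""
    cx, cy = bx * img_w, by * img_h      # center in pixels
    w, h = bw * img_w, bh * img_h        # size in pixels
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

For a 256 × 256 image, a box centered in the middle with half the image’s width and a quarter of its height maps to the pixel rectangle (64, 96)–(192, 160).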

Object Detection

An approach to building an object detector is to first build a classifier that can classify closely cropped images of an object. Fig 2. shows an example of such a model, where a model is trained on a dataset of closely cropped images of a car and predicts the probability of an image being a car.

Now, we can use this model to detect cars with a sliding window mechanism. In a sliding window mechanism, we use a sliding window (similar to the one used in convolutional networks) and crop a part of the image in each slide. The size of the crop is the same as the size of the sliding window. Each cropped image is then passed to a ConvNet model, which in turn predicts the probability of the cropped image being a car.


After running the sliding window through the whole image, we resize the sliding window and run it over the image again, repeating this process multiple times. Since we crop a large number of windows and pass each one through the ConvNet, this approach is both computationally expensive and time-consuming, making the whole process really slow. A convolutional implementation of the sliding window helps resolve this problem.
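The mechanism described above can be sketched in a few lines — here `classify_crop` is a hypothetical stand-in for the trained ConvNet, and only the crop geometry is real:

```python
# Sliding-window detection sketch: slide a fixed window across the
# image, and keep the crops a classifier scores above a threshold.

def sliding_windows(img_w, img_h, win, stride):
    """Yield (x, y, win, win) crop boxes covering the image."""
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            yield (x, y, win, win)

def detect(img_w, img_h, win, stride, classify_crop):
    """classify_crop(box) -> probability; keep boxes scoring above 0.5."""
    return [box for box in sliding_windows(img_w, img_h, win, stride)
            if classify_crop(box) > 0.5]
```

The cost problem is visible directly in the loop: every window means another forward pass through the classifier, which is exactly what the convolutional implementation avoids by sharing computation across windows.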

The YOLO (You Only Look Once) Algorithm

A better algorithm that tackles the issue of predicting accurate bounding boxes while using the convolutional sliding window technique is the YOLO algorithm. YOLO stands for “You Only Look Once” and was developed in 2015 by Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. It’s popular because it achieves high accuracy while running in real time. The algorithm is so called because it requires only one forward propagation pass through the network to make its predictions.

The algorithm divides the image into grids and runs the image classification and localization algorithm (discussed under object localization) on each of the grid cells. For example, we have an input image of size 256 × 256. We place a 3 × 3 grid on the image (see Fig.).

Next, we apply the image classification and localization algorithm to each grid cell, doing everything in a single pass with the convolutional implementation of the sliding window. Since the shape of the target variable for each grid cell is 1 × 9 and there are 9 (3 × 3) grid cells, the final output of the model will be:

Final output = 3 × 3 × 9 (number of grid cells × output shape per grid cell)

The advantages of the YOLO algorithm are that it is very fast and that it predicts much more accurate bounding boxes. In practice, to get even more accurate predictions, we use a much finer grid, say 19 × 19, in which case the target output has the shape 19 × 19 × 9.
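To illustrate the target shape, the sketch below builds a 3 × 3 × 9 target tensor for a single labeled object. The per-cell layout assumed here is [pc, bx, by, bw, bh, c1, c2, c3, c4] (an objectness score, four box coordinates, and four class probabilities); the exact split of the 9 values depends on the number of classes, so this is just one composition that yields a length-9 vector:

```python
import numpy as np

GRID = 3  # 3 x 3 grid, as in the example above

def build_target(box, class_id, num_classes=4):
    """box = (bx, by, bw, bh), all normalized to [0, 1] w.r.t. the image.
    Returns a GRID x GRID x (5 + num_classes) target tensor."""
    target = np.zeros((GRID, GRID, 5 + num_classes))
    bx, by, bw, bh = box
    col = min(int(bx * GRID), GRID - 1)  # cell containing the box center
    row = min(int(by * GRID), GRID - 1)
    target[row, col, 0] = 1.0                 # pc: an object is present here
    target[row, col, 1:5] = [bx, by, bw, bh]  # bounding-box coordinates
    target[row, col, 5 + class_id] = 1.0      # one-hot class label
    return target

t = build_target(box=(0.5, 0.2, 0.3, 0.4), class_id=1)
print(t.shape)  # (3, 3, 9)
print(t[0, 1])  # the responsible cell's 9-dimensional target vector
```

With a 19 × 19 grid, only `GRID` changes and the target becomes 19 × 19 × 9, matching the finer-grid case mentioned above.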

Conclusion

With this, we come to the end of this introduction to object detection. We now have a better understanding of how we can localize objects while classifying them in an image. We also learned to combine the concepts of classification and localization with the convolutional implementation of the sliding window to build an object detection system. In the next blog, we will go deeper into the YOLO algorithm and the loss function it uses, cover some ideas that make the algorithm better, and learn to run YOLO in real time.

Source: https://www.hackerearth.com/blog/machine-learning/introduction-to-object-detection/

 

 

The rise of artificial intelligence: what does it mean for development?


Typically, there are two arguments against ICTs (information and communications technologies) for development. First, to properly reap the benefits of ICTs, countries need to be equipped with basic communication and other digital service delivery infrastructure, which remains a challenge for many of our low-income clients. Second, we need to be mindful of the growing divide between digital-ready groups and the rest of the population, and how it may exacerbate broader socio-economic inequality.

These concerns certainly apply to artificial intelligence (AI), which has recently re-emerged as an exciting frontier of technological innovation. In a nutshell, artificial intelligence is intelligence exhibited by machines. Unlike the several “AI winters” of the past decades, AI technologies really seem to be taking off this time. This may be promising news, but it challenges us to more clearly validate the vision of ICT for development, while incorporating the potential impact of AI.

It is probably too early to say whether AI will be a blessing or a curse for international development… or perhaps this type of binary framing is not the best approach. Rather than providing a definite answer, I’d like to share some thoughts on what AI means for ICT and development.

AI and the Vision of ICT for Development

Fundamentally, the vision of ICT for development is rooted in the idea that universal access to information is critical to development. That is why ICT projects at development finance institutions share the ultimate goal of driving down the cost of information. However, we have observed several notable features of the present information age: 1) there is a gigantic amount of data to analyze, which is growing at an unprecedented rate, and 2) in the highly complex challenges of our world, it is almost impossible to discover structures in raw data that can be described as simple equations, for example when finding cures for cancer or predicting natural disasters.

This calls for a powerful new tool to convert unstructured information into actionable knowledge, a task expected to be greatly aided by artificial intelligence. For instance, machine learning, one of the fastest-evolving subfields of AI research, provides predictions with greatly enhanced accuracy at much lower cost. As an example, we can train a machine with a large set of pictures so that it can later tell which photos have dogs in them, without a human’s prior algorithmic input.

To summarize, AI promises to achieve the vision of ICT for development much more effectively. What, then, are some practical areas of its usage?

AI for development: areas of application

Since AI research is progressing rapidly, it is challenging to get a clear sense of all the different ways AI could be applied to development work in the future; nonetheless, the following are a couple of areas where current AI technologies are expected to provide significant added value.

First, AI allows us to develop innovative new solutions to many complex problems faced by developing countries. As an example, a malaria test traditionally requires a well-trained medical professional who analyzes blood samples under a microscope. In Uganda, an experiment showed that real-time and high-accuracy malaria diagnoses are possible with machines running on low-powered devices such as Android phones.

Secondly, AI could make significant contributions to designing effective development policies by enabling accurate predictions at lower costs. One promising example is the case of the US-based startup called Descartes. The company uses satellite imagery and machine learning to make corn yield forecasts in the US. They use spectral information to measure chlorophyll levels of corn, which is then used to estimate corn production. Their projections have proven to be consistently more accurate than the survey-based estimates used by the US Department of Agriculture. This kind of revolution in prediction has great potential to help developing economies design more effective policies, including for mitigating the impact of natural disasters.


Looking forward – Toward the democratization of AI?

Many assume that it is too early to talk about AI in the developing world, but the mainstreaming of AI may happen sooner than most people would assume. Years ago, some tech visionaries already envisioned that AI would soon become a commodity like electricity. And this year, Google revealed TensorFlow Lite, the first software of its kind that runs machine learning models on individual smartphones. Further, Google is working on the AutoML project, an initiative to leverage machine learning to automate the process of designing machine learning models themselves.

As always, new technology can be liberating and disruptive, and the outcome will largely depend on our own ability to use it wisely. Despite the uncertainty, AI provides another exciting opportunity for the ICT ( Information and communications technology ) sector to leverage technological innovation for the benefit of the world’s marginalized populations.

Source: https://blogs.worldbank.org/ic4d/rise-artificial-intelligence-what-does-it-mean-development

 

Artificial Intelligence Trends 2018


Artificial intelligence (AI) continued to be a major driver of digital transformation in 2017, with the rapidly advancing technology affecting business strategy and operations, customer interactions and the workforce itself. While these are all general and broad impacts of AI, they will continue to be important for businesses trying to keep up with rapid technological advancements in 2018.

Embedded deep learning will become the focus of software product teams in the coming year, as buyers will begin to inquire about the machine learning as a service capabilities of the tools they are purchasing. Many vendors are already including machine learning in products to enhance and automate certain functionalities, and build marketing campaigns around those AI enhancements. As embedded AI becomes more standard in solutions, there will be less of an emphasis around the glitz and glamour of machine learning and more of a focus on how the embedded AI is contributing to a business’ overall digital transformation.

There will also be a push to open data sources in 2018 for the benefit of machine learning developers. AI is only as good as the data that it has to learn from, so when building embedded AI applications or training machine and deep learning models, one needs as much data as possible. Enterprise companies, like Amazon and Google, among others, do not have a problem accessing mammoth data sets, because their everyday businesses are so large that they create a seemingly endless supply of data. However, small businesses or independent developers do not have that luxury; therefore, they will take advantage of open-source data sets, often made available by those same enterprise companies.

Similarly, businesses will begin to share their data with the software they work with instead of trying to hoard their own data in secrecy. As embedded AI becomes the norm, companies will have the option to share data with the vendor to increase the machine learning capabilities, and have the technology learn not just from the business’ data, but also from the data of the vendors’ entire customer base. Businesses using AI-enabled software will begin to realize that the benefits of data sharing outweigh the risks, which primarily center around data security.

Businesses and software vendors will also more frequently open up data to partnership opportunities in the form of data swaps. This will be particularly helpful for AI and general automation. Software vendors will begin to trade valuable data to best improve the embedded AI within their products. This will most likely happen across software categories, because the race to have the smartest and most intelligent application will be fiercely competitive. Any edge a vendor can get will be crucial. This will also benefit businesses outside the software space that begin to implement AI into general business processes.

Adoption of AI in businesses will be driven by digital platform providers, the same way those enterprise service providers drove adoption of the cloud. Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure have created a number of machine and deep learning APIs and microservices that will make it easy for businesses to deploy AI for business operations and automation purposes. These solutions will have the same advantages as the vendors’ other service offerings; they will be cost-effective, easy to set up and quick to deploy, making them attractive options for companies that do not have highly skilled, in-house developers. This machine learning as a service (MLaaS) type of deployment will become much more mainstream in 2018.

Finally, robotic process automation (RPA) will make its emergence in the workplace. This technology is still in its infancy, but it will begin to have an impact on business process management. RPA creates intelligent software robots that access the software a business already uses and automate mundane tasks, like data entry. The benefit of RPA systems is that they are very easy to build, set up and train. These solutions can eliminate human error and help IT teams focus on bigger and more important implementations, instead of wasting time and energy improving and correcting the minor, but necessary, aspects of the business. Look out for more updates on RPA throughout 2018.

Look for each of these trends to emerge as a focal point for AI in the coming year and have an impact on business modernization and digital transformation. Small businesses and enterprise companies alike will be adopting and embracing these intelligent trends, because the benefits will be so important that they will be unavoidable.

Open Data and Big Data Sharing

Enterprise companies, such as Amazon, Microsoft, Google and IBM, have been able to make the biggest strides in the AI space because they have access to enormous amounts of data. As businesses continue to accumulate and create massive amounts of data, there will be a need for data sharing unlike anything we have seen before. In the past, companies have kept data very close to the vest, with the exception of enterprise companies, but as the need to develop machine learning tools becomes more critical, companies will actively seek partnerships to share their data.

A number of enterprise companies have open sourced specific data sets to help developers train machine learning applications. For example, Google opened up AudioSet which, “consists of an expanding ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos.” An AI developer could potentially use this data set to help train a machine learning application for natural language processing purposes, and to better understand human, animal, musical and everyday sounds.

Uber has opened data from more than 2 billion of its ride sharing trips to improve urban planning. Uber partners with cities to better understand people and transportation in a program it calls Uber Movement. According to Uber, “We’ve gotten consistent feedback from cities we partner with that access to our aggregated data will inform decisions about how to adapt existing infrastructure and invest in future solutions to make our cities more efficient.” All data is anonymous, but one can imagine the potential opportunities a city or business can pull out of that rich data set. As a resident of Chicago, I hope that Uber and the city planners can work together to build an AI tool to optimize street lights, because hitting every red light on the way to the office may be my demise.

Of course, not all businesses are willing to just give away this data. As Uber states, it “partners” with cities, which one could speculate means that the company gets something beneficial in return, while other businesses like Yelp have opened up data for academic purposes. The Yelp data set can be used to, “teach students about databases, to learn NLP, or for sample production data while you learn how to make mobile apps.” If you are a hospitality management student trying to learn about restaurant trends or an aspiring developer, it could be extremely helpful to pull out insights from Yelp’s data set, which spans 12 metropolitan areas and consists of 4.7 million reviews, 156,000 businesses and 200,000 pictures, among other data points.

In 2018 it will not just be these huge companies that are opening up their data, but software vendors and customers alike. More and more companies will begin to opt in to software AI tools, such as Salesforce’s Einstein, to better automate tasks for employees. Businesses will happily share their own private CRM data with Salesforce if it means they now have access to the data from the thousands of other customers utilizing the solution’s AI capabilities. This would create better lead scoring, provide automated prospecting tools based on what others have found successful and, ultimately, save sales employees time.

The examples are not limited to the CRM space; they are seemingly endless. The boom of data from the internet of things (IoT) will open up even more data-sharing opportunities for manufacturing and field service companies. Companies will conceivably be able to benchmark their machines’ performance and uptime against competitors within their industries by comparing IoT data. For these data-sharing opportunities to expand, there will be a greater emphasis on data security as well. All of these points will be critical to a company’s digital transformation.

In the coming year, other software companies will follow suit and allow their users to opt in to data sharing that will enhance business processes and offer growth opportunities that never before existed. It will also help pave the way for all software utilizing machine learning as the cornerstone of its solution.

Embedded AI

As more software companies discover ways to take advantage of their existing data sets, they will be able to strengthen their tools with embedded AI and make it the core of their products. Embedded AI is a blanket term for the use of machine and deep learning inside a software platform that improves aspects of an employee’s day-to-day. These machine learning advancements within the software may go unnoticed by most users, but they will be helpful for business strategy and operations, and relieve employees from mundane tasks with automation.

Over the past few years, many vendors have made large announcements notifying their customers, and their prospects, that they have added deep and machine learning to their current product offerings. Salesforce made Einstein the true focus of its major user conference, Dreamforce, back in 2016. That same year, when Microsoft transformed its products into the Dynamics 365 cloud suites, the company was sure to highlight the fact that AI was going to be a major part of the tool.

I’m not currently a user of either solution, but I do use Expensify for expense management, and when it added machine learning to its tool, all users were sent an email explaining how the updates would benefit them. While I know the AI capabilities are in the tool, I’ve never actually seen it. That’s the point. For nearly all tools, users will not be aware that they are utilizing AI. But for those decision-makers in charge of purchasing tools, it will become a mandatory question when researching a software product and speaking with vendors. How does this product take advantage of AI to benefit my employees?

Vendors are aware of this, and most have been preparing for some time now, which is why Salesforce and Microsoft drove it home so heavily in their press releases. They have been looking towards the future and understand that as businesses continue their digital transformations, AI will need to be embedded into all of their tools. Software vendors that have not started ingraining AI into their products will certainly fall behind, and quickly. There is simply too much potential to improve their own products, and too many opportunities to help customers, not to make the conscious effort to make machine and deep learning the focal point of their products’ functionality.

Machine Learning as a Service (MLaaS)

In recent years, businesses have begun taking advantage of digital platforms and microservices to build their tech stacks (the holistic view of technology and software a company uses), utilizing the “as-a-service” model for everything from software to infrastructure. Businesses will lean into this strategy for AI, using the microservices from major enterprise vendors for “machine learning as a service.” Amazon Web Services, Google Cloud Platform and Microsoft Azure are a few of the digital platforms already providing this service to businesses.

Developers who understand how to build machine and deep learning applications are few and far between and, on top of that, expensive. Instead of trying to train algorithms with in-house resources, businesses can now purchase pre-built algorithms from these enterprise vendors and run their own company data through them, teaching the applications to do what is needed to better the business. Because of the data these companies have access to, their machine learning tools are likely more advanced than anything that could be built in-house anyway, so why not save time, effort and budget by taking advantage of the available services? That’s the question more and more companies will find themselves asking in 2018.

In the coming year, the algorithms and services provided by these large companies will only continue to expand and advance to the point where your business will be able to quickly and efficiently implement a natural language processing solution into your application or website. The major players will perfect these systems, so their AI offerings will rapidly consume data to be as effective as possible. As companies work through their digital transformations, these microservices will become the easiest and fastest way to progress rapidly with AI.

Conclusion

Artificial intelligence is possibly the most well-known aspect of digital transformation due to constant media coverage and the fear factor that it will make humans obsolete, but in reality, it is becoming a necessity for businesses. Whether companies are investing in software that uses embedded AI, or deploying their own MLaaS offering internally to automate processes, they should be taking advantage of AI to modernize. In 2018, CIOs and IT departments that have not yet adopted AI in some fashion will begin to feel the pressure, both externally and internally, to use the technology that is out there to improve traditional business processes.

Developers will continue to build important and useful machine learning tools with the help of open data. As companies begin to let go of the tight grip they have historically maintained around their proprietary data, more and more opportunities will present themselves — from data swap partnerships to AI enhancements — simply by sharing data. Each of these opportunities will help to modernize a business.

Source: https://blog.g2crowd.com/blog/trends/artificial-intelligence/2018-ai/