Machine learning can boost the value of wind energy

aib19-10

Carbon-free technologies like renewable energy help combat climate change, but many of them have not reached their full potential. Consider wind power: over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged. However, the variable nature of wind itself makes it an unpredictable energy source—less useful than one that can reliably deliver power at a set time.

In search of a solution to this problem, last year, DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city.

Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance. This is important, because energy sources that can be scheduled (i.e. can deliver a set amount of electricity at a set time) are often more valuable to the grid.
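The blog post does not disclose the model's architecture or features, but a minimal sketch of the general idea, predicting a farm's output from forecast features with a small neural network, might look like the following (the feature set, model size, and synthetic data are purely illustrative assumptions, not DeepMind's actual system):

    # Minimal sketch (not the actual DeepMind system): predict wind farm power
    # output 36 hours ahead from weather-forecast features plus recent history.
    # Feature names, model size, and the synthetic data are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.uniform(0, 25, n),    # forecast wind speed (m/s) for t+36h
        rng.uniform(0, 360, n),   # forecast wind direction (degrees)
        rng.uniform(0, 700, n),   # average farm output over the last 6 hours (MW)
    ])
    # Toy target: output roughly follows a cubic power curve, capped at 700 MW.
    y = np.clip(0.05 * X[:, 0] ** 3 + 0.1 * X[:, 2] + rng.normal(0, 20, n), 0, 700)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))

    # Hourly delivery commitments for the next day could then be derived from
    # model.predict(...) applied to the forecast features for each hour.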

Although we continue to refine our algorithm, our use of machine learning across our wind farms has produced positive results. To date, machine learning has boosted the value of our wind energy by roughly 20 percent, compared to the baseline scenario of no time-based commitments to the grid.

We can’t eliminate the variability of the wind, but our early results suggest that we can use machine learning to make wind power sufficiently more predictable and valuable. This approach also helps bring greater data rigor to wind farm operations, as machine learning can help wind farm operators make smarter, faster and more data-driven assessments of how their power output can meet electricity demand.

aib19-11

Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide. Researchers and practitioners across the energy industry are developing novel ideas for how society can make the most of variable power sources like solar and wind. We’re eager to join them in exploring general availability of these cloud-based machine learning strategies.

Google recently achieved 100 percent renewable energy purchasing and is now striving to source carbon-free energy on a 24×7 basis. The partnership with DeepMind to make wind power more predictable and valuable is a concrete step toward that aspiration. While much remains to be done, this step is a meaningful one—for Google, and more importantly, for the environment.

Source: https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/

AI and Robotics in Retail: Drivers, Impact, and Challenges

aib19-9

As the modern world seeks innovation and convenience, retail providers face a new challenge: keep up with the trend or fall behind.

Because of this, many retailers are exploring the latest technologies that address the new needs of their businesses, and that may mean looking toward enterprise software development. Let’s look at how retailers are innovating and dive deeper into their artificial intelligence and robotics solutions.

Why Do Retailers Need to Modernize?

According to Statista, online e-commerce sales are projected to reach a record $4.8 trillion (USD) by 2021, up from an estimated $2.8 trillion in 2018. This is an industry in rapid growth, with no signs of slowing down.

This growth makes one factor exceptionally clear — if you want to stay competitive in the retail business, no matter whether you have a small corner shop or a multinational enterprise, you need to consider optimizing your operations with new technology. Across web, mobile, and in-store, such technology is poised to include AI and robotic process automation (RPA), and here’s why:

The Value Driven by AI and Robotics in Retail
  1. Better insights into inventory and supply planning
  2. Fewer (or no) employees required for managing physical locations and tracking deliveries
  3. Predictive analytics of customer-tailored demand
  4. Personalization of customer support
  5. Cashier-less checkout operations
  6. Better product categorization of both local and global stock units

How AI and Robotics Solutions Boost Retail Businesses

Now that we know the benefits, let’s look at how these solutions work. To begin, let’s consider retail business processes as divided into two parts:

  • Back-office operations — consisting of paperwork, staff and product management
  • Shop-front operations — serving customers and addressing their issues

Across all of these functions, AI and robotics help retailers achieve better results.

Improving Planning and Strategy

AI technologies allow retailers to gather, rework and standardize data, automatically enter it into spreadsheets, and transform it into understandable visuals such as charts. In turn, this helps build efficient business plans, reduces the time spent compiling reports, forecasts sales figures, generates customer profiles, and reveals customers’ shopping preferences.

Equipped with these reports on customer and market behavior, marketing and sales professionals can efficiently plan campaigns and target them toward real consumers. For managers, this helps ensure that in-demand products remain stocked, since they know which ones customers want.

Optimizing Logistics and Inventory

AI programs store, process and analyze significant amounts of information, predict outcomes, and can even apply those predictions to discover new revenue channels. This can be helpful in back-office operations such as accounting and business planning, but is not limited to these areas.

For example, when paired with IoT, AI applications have already begun to improve the transportation of goods by managing data on their provenance and shipping conditions. These can be tracked through the entire journey, ensuring better food safety and enabling logistics enterprises to make more informed decisions.

In addition, cloud technologies assist retailers in restocking shelves and tracking customers’ in-store movement, gathering information on demand and forecasting the popularity of certain products.

Personalization and Customer Experience Management

According to McKinsey & Company, the retail sector is one of the foremost industries to have benefited from AI and robotics implementation. One reason is that these technologies can transform retail businesses by making them more customer-oriented.

AI-equipped systems can collect exceptionally accurate data about buyers’ preferences and habits. Relying on this data, retailers can grow their sales by recommending suitable items to customers. This is something that a few big names have already tried out with visible results:

  • NY-based company Caper has recently developed a handy computerized shopping cart. The cart helps customers learn more about products by simply scanning them; the details then show up on the screen. In addition, buyers can check out their goods online to avoid standing in line.
  • Ocado, an online grocery company, uses Google’s speech recognition technology to deal with customer complaints. Google Cloud AI speeds up the process of complaint analysis, helping Ocado promptly fix and improve its services.

In addition, robotics proves beneficial for in-store service. For example, robots can provide retailers with information on shelf inventory, price tag changes and consumer preferences, helping personalize the products in stock. Robotized call centers can help cut expenses while ensuring customer support is available 24 hours a day.

Finally, the buyers themselves can benefit from machine learning systems by using automated checkouts, avoiding long queues or getting quick support through digital kiosks.

Challenges of AI Adoption and Their Solutions

Despite these numerous benefits, any business seeking to integrate new technologies, AI in particular, will face certain challenges:

1. New working practices

As IT integrations advance, we are likely to see more changes in how we work. The current trend sees manual labor activities increasingly performed by robots, while “mental” work is performed by humans. But even this could change as AI programs gain skills and become able to work effectively with data.

Recent research by McKinsey & Company, which examined about 2,000 labor activities across some 800 occupations, has shown that a large share of those activities can be automated to some extent. For society in general, this will mean a new drive in skill building and a changing job market in the future.

However, for retailers, this means having to reconsider both their staffing needs and their technology firepower in order to keep up with the competition.

2. Costs of new software

For retail businesses that are just starting to introduce technology, the initial costs may seem off-putting. Usually, this means developing customized software and products to improve the business, and this may be more costly than off-the-shelf products. In addition, companies may need to consider hiring specialists to maintain and service such systems.

While initial roll-outs of such developments come at a price, companies should look at their long-term benefits and the overall effect on the business.

3. Security

Finally, retail providers will face new challenges in dealing with security. For many of these systems to work effectively, a large amount of information has to be collected and stored. This means that companies will be ever more responsible for data security, covering both individual privacy and the privacy of the business as a whole.

Safe data storage and consent management is one aspect; another is protection from hackers. This is essential to keep data from being exploited and systems from becoming corrupted.

Conclusion

For retailers to adapt and thrive in the new era, they will need to undertake changes to how they do business, and this may mean involving AI and robotics technologies.

These changes have both advantages and disadvantages for the retail sector and its employees. Personalization and robots taking over routine operations may be seen as positives, while the changing roles within an organization may be a negative. It will take flexibility and thought-out strategies for retailers to go with this AI flow without major disruptions to their modus operandi.

Source: https://chatbotsmagazine.com/ai-and-robotics-in-retail-drivers-impact-and-challenges-68a51dbf74cb

 

AI Chatbots and Recognition Technology: How Do The Machines Learn?

aib19-8

AI chatbots and recognition technology are seeping into every segment of society and making waves everywhere. This intelligent technology is finding its way into every corner of our lives, from our homes and businesses to our relationships.

It is no longer just about home experiences and trending gadgets. The current market has already seen chatbot therapists, chatbot educators, chatbot lawyers, and chatbot customer service representatives. Let us see to what extent AI chatbots and recognition technology are impacting our lives.

AI Chatbot Success

As AI technologies proliferate, they are becoming integral for businesses globally, giving them a competitive edge. A strategically designed and implemented chatbot can work wonders for businesses worldwide.

AI chatbots and recognition technology are a brilliant way to offload manual work and work that does not require judgment. This technology saves time, effort, and money. With AI in place, businesses can concentrate on and invest in skilled work.

It also substantially reduces staff workload. According to Grand View Research, the chatbot market is expected to hit a whopping $1.25 billion by 2025, growing at a CAGR of 24.3%.

As digital transactions are becoming the standard norm of purchasing goods and services, leading eCommerce firms are using AI to enhance their customer loyalty and brand competitiveness. Some of the leading e-Commerce brands using AI technology include eBay, Alibaba, Amazon, ASOS, and JD.com.

According to an Oracle survey, 80% of businesses want chatbots by 2020. Companies such as Nitro Café, Sephora, 1–800 Flowers, Coca-Cola, Snap Travel, and Marriott have started seeing returns. Here are a few AI chatbot success stories.

Nitro Café: Nitro Café’s Messenger chatbot, designed for direct payments, easy ordering, and instant two-way communication, has increased Nitro Café’s sales by 20%.

Sephora: Sephora’s Facebook Messenger chatbot has increased its makeover appointments by 11%.

ASOS: ASOS’s Messenger chatbot helped the brand reach 3.5x more people, increased returns by 250%, and increased its number of orders by 300%.

1–800 Flowers: 1–800 Flowers reported that 70% of its messenger orders were derived from new customers.

Uses of AI Recognition Technology
  1. Voice Recognition Technology

Voice recognition technology has revolutionized our lives in multiple ways. It is already used in live television subtitling, in offline note-taking and speech-to-text systems, and in dictation tools for the legal and medical professions.

Virtual assistants such as Amazon’s Alexa, the Google Assistant on Google Home, and Apple’s Siri on the HomePod use voice recognition technology. These virtual assistants can control your smart home.

They can control thermostats, TVs, garage doors, lights, fans, locks, sprinklers, and switches. They can also play music, make calls, send texts, show footage from your security cameras, play audiobooks, place food orders, create alarms and reminders for you, and give you the news.

You can also browse the internet for information on practically anything at your discretion. And you can do all of this with just your voice.

With “OK Google” and “Hey Siri” making it to our smartphones, voice recognition technology has largely impacted the way we function.

Voice recognition technology is also being used to help solve crimes, secure bank accounts, and purchase products and services.

  2. Facial Recognition Technology

For a long time, AI facial recognition technology has been associated with the security sector. Today, however, it is actively expanding into other industries such as marketing, retail, and health.

Common uses of AI facial recognition technology include unlocking phones, preventing retail crime, smarter advertising, helping the blind, finding missing persons and pets, protecting law enforcement, facilitating forensic investigations, identifying people on social media platforms, diagnosing diseases, tracking attendance at school, college and the workplace, facilitating secure transactions, validating identities at ATMs, and controlling access to sensitive areas.

AI-based recognition technology has also revolutionized the photography industry. One example is Accent AI 2.0, an AI recognition technology implemented in Luminar 3.

It features object and facial recognition that helps photographers instantly improve different parts of a photo, for instance by making the sky more expressive with a brighter color or by replacing a portrait’s background.

Chris Burkard, a well-known photographer and artist, has spoken at length about the fascinating and diverse uses of AI facial recognition technology in the field of photography. He thinks the technology has amplified accuracy and acts as a significant support system for an artist’s creativity.

Pioneering applications such as AiCure and ePAT are dramatically improving the health care setting. While AiCure uses facial recognition technology to improve medication adherence practices on a mobile device, ePAT can detect facial nuances associated with pain and help in prudent pain management.

AI chatbots and recognition technology have become decidedly mainstream. This radical technology is here to stay and evolve.

Source: https://chatbotsmagazine.com/ai-chatbots-and-recognition-technology-how-do-the-machines-learn-b458545e505b

From Crawling to Sprinting: Advances in Natural Language Processing

aib19-4

Natural language processing (NLP) is one of the fastest evolving branches in machine learning and among the most fundamental. It has applications in diplomacy, aviation, big data sentiment analysis, language translation, customer service, healthcare, policing and criminal justice, and countless other industries.

NLP is the reason we’ve been able to move from CTRL-F searches for single words or phrases to conversational interactions about the contents and meanings of long documents. We can now ask computers questions and have them answer.

Algorithmia hosts more than 8,000 individual models, many of which are NLP models that complete tasks such as sentence parsing, text extraction and classification, and translation and language identification.

Allen Institute for AI NLP Models on Algorithmia

The Allen Institute for Artificial Intelligence (Ai2) is a non-profit created by Microsoft co-founder Paul Allen. Since its founding in 2013, Ai2 has worked to advance the state of AI research, especially in natural language applications. We are pleased to announce that we have worked with the producers of AllenNLP—one of the leading NLP libraries—to make their state-of-the-art models available with a simple API call in the Algorithmia AI Layer.

Among the algorithms new to the platform are:

Machine Comprehension: Input a body of text and a question based on it and get back the answer (strictly a substring of the original body of text).

Textual Entailment: Determine whether one statement follows logically from another

Semantic role labeling: Determine “who” did “what” to “whom” in a body of text

These and other algorithms are based on a collection of pre-trained models that are published on the AllenNLP website.

Algorithmia provides an easy-to-use interface for getting answers out of these models. The underlying AllenNLP models provide a more verbose output, which is aimed at researchers who need to understand the models and debug their performance—this additional information is returned if you simply set debug=True.
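As a rough illustration of that call pattern, the sketch below queries a machine comprehension model through the Algorithmia Python client; the algorithm path and input field names are assumptions based on the description in this article, not a documented endpoint:

    # Illustrative sketch of calling an AllenNLP model through the Algorithmia
    # Python client. The algorithm path and input fields are assumptions based
    # on this article's description, not a documented endpoint.
    import Algorithmia

    client = Algorithmia.client("YOUR_API_KEY")
    algo = client.algo("allenai/machine_comprehension/1.0.0")  # hypothetical path

    query = {
        "passage": "Wind farms in the central United States supply power to the grid.",
        "question": "Where are the wind farms located?",
        "debug": True,  # per the article, returns the verbose AllenNLP output
    }
    response = algo.pipe(query)
    print(response.result)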

The Ins and Outs of the AllenNLP Models

Machine Comprehension: Create natural-language interfaces to extract information from text documents.

This algorithm provides the state-of-the-art ability to answer a question based on a piece of text. It takes in a passage of text and a question based on that passage, and returns a substring of the passage that is guessed to be the correct answer.

This model could serve as the backend of a chatbot or provide customer support based on a user manual. It could also be used to extract structured data from textual documents; for example, a collection of doctors’ reports could be turned into a table that records (for every report) the patient’s concern, what the patient should do, and when they should schedule a follow-up appointment.

aib19-5

Entailment: This algorithm provides state-of-the-art natural language reasoning. It takes in a premise, expressed in natural language, and a hypothesis that may or may not follow from it. It determines whether the hypothesis follows from the premise, contradicts the premise, or is unrelated. The following is an example:

Input

The input JSON blob should have the following fields:

premise: a descriptive piece of text

hypothesis: a statement that may or may not follow from the premise of the text

Any additional fields will pass through into the AllenNLP model.

Output

The following output fields will always be present:

contradiction: Probability that the hypothesis contradicts the premise

entailment: Probability that the hypothesis follows from the premise

neutral: Probability that the hypothesis is independent from the premise
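For illustration, an input and a plausible output for the entailment algorithm, using the fields listed above, might look like this (the probability values are invented):

    # Illustrative input and output for the textual entailment algorithm, using
    # the fields documented above. The probability values are made up.
    entailment_input = {
        "premise": "Two turbines at the wind farm were shut down for maintenance.",
        "hypothesis": "Some turbines at the wind farm were not producing power.",
    }

    # A plausible response: the hypothesis follows from the premise.
    entailment_output = {
        "entailment": 0.90,     # probability the hypothesis follows from the premise
        "contradiction": 0.03,  # probability the hypothesis contradicts the premise
        "neutral": 0.07,        # probability the hypothesis is independent
    }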

aib19-6

Semantic role labeling: This algorithm provides state-of-the-art natural language reasoning—decomposing a sentence into a structured representation of the relationships it describes.

The concept behind this algorithm is to treat a verb and the entities involved in it as its arguments (like a logical predicate). The arguments describe who or what performs the action of the verb, to whom or what it is done, and so on.
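For a single sentence, the decomposition might look roughly like the following sketch (the labels follow common PropBank-style conventions; the exact output format varies by model):

    # Illustrative predicate-argument decomposition produced by semantic role
    # labeling for one sentence; the exact output format varies by model.
    sentence = "The courier delivered the package to the office yesterday."
    srl_sketch = {
        "verb": "delivered",
        "ARG0": "The courier",      # who performed the action
        "ARG1": "the package",      # what the action was done to
        "ARG2": "to the office",    # recipient / destination
        "ARGM-TMP": "yesterday",    # when it happened
    }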

aib19-7

NLP Moving Forward

NLP applications are everywhere in everyday life, and they will only continue to expand and improve, because the possibilities of a computer understanding written and spoken human language and acting on it are endless.

 

Source: https://blog.algorithmia.com/from-crawling-to-sprinting-advances-in-natural-language-processing/

Best Practices in Machine Learning Infrastructure

aib19-1

Developing processes for integrating machine learning within an organization’s existing computational infrastructure remains a challenge for which robust industry standards do not yet exist. But companies are increasingly realizing that the development of an infrastructure that supports the seamless training, testing, and deployment of models at enterprise scale is as important to long-term viability as the models themselves.

Small companies, however, struggle to compete against large organizations that have the resources to pour into the large, modular teams and processes of internal tool development that are often necessary to produce robust machine learning pipelines.

Luckily, there are some universal best practices for achieving successful machine learning model rollout for a company of any size and means.

The Typical Software Development Workflow

Although DevOps is a relatively new subfield of software development, accepted procedures have already begun to arise. A typical software development workflow usually looks something like this:

aib19-2

This is relatively straightforward and works quite well as a standard benchmark for the software development process. However, the multidisciplinary nature of machine learning introduces a unique set of challenges that traditional software development procedures weren’t designed to address.

Machine Learning Infrastructure Development

If you were to visualize the process of creating a machine learning model from conception to production, it might have multiple tracks and look something like these:

aib19-3

Data Ingestion

It all starts with data.

Even more important to a machine learning workflow’s success than the model itself is the quality of the data it ingests. For this reason, organizations that understand the importance of high-quality data put an incredible amount of effort into architecting their data platforms. First and foremost, they invest in scalable storage solutions, be they on the cloud or in local databases. Popular options include Azure Blob, Amazon S3, DynamoDB, Cassandra, and Hadoop.

Finding data that conforms well to a given machine learning problem can often be difficult. Sometimes datasets exist but are not commercially licensed. In this case, companies will need to establish their own data curation pipelines, whether by soliciting data through customer outreach or through a third-party service.

Once data has been cleaned, visualized, and selected for training, it needs to be transformed into a numerical representation so that it can be used as input for a model. This process is called vectorization. The process of determining which aspects of the dataset are important for training is called featurization. While featurization is more of an art than a science, many machine learning tasks have associated featurization methods that are commonly used in practice.

Since common featurizations exist and generating these features for a given dataset takes time, it behooves organizations to implement their own feature stores as part of their machine learning pipelines. Simply put, a feature store is just a common library of featurizations that can be applied to data of a given type.

Having this library accessible across teams allows practitioners to set up their models in standardized ways, thus aiding reproducibility and sharing between groups.
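As a minimal sketch of the idea (the feature names and functions below are illustrative assumptions, not a specific product), an in-house feature store can start as little more than a shared registry of named featurization functions:

    # Minimal sketch of a shared feature store: a registry of named
    # featurization functions that any team can apply to raw records.
    import math
    from typing import Callable, Dict, List

    FEATURE_STORE: Dict[str, Callable[[dict], float]] = {}

    def register_feature(name: str):
        """Decorator that adds a featurization function to the shared registry."""
        def wrap(fn: Callable[[dict], float]):
            FEATURE_STORE[name] = fn
            return fn
        return wrap

    @register_feature("title_length")
    def title_length(record: dict) -> float:
        return float(len(record.get("title", "")))

    @register_feature("price_log")
    def price_log(record: dict) -> float:
        return math.log1p(record.get("price", 0.0))

    def featurize(record: dict, feature_names: List[str]) -> List[float]:
        """Turn a raw record into a numerical vector using registered features."""
        return [FEATURE_STORE[name](record) for name in feature_names]

    # Example: the same featurization applied consistently across teams.
    vector = featurize({"title": "Wireless mouse", "price": 19.99},
                       ["title_length", "price_log"])
    print(vector)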

Model Selection

Current guides to machine learning tend to focus on standard algorithms and model types and how they can best be applied to solve a given business problem.

Selecting the type of model to use when confronted with a business problem can often be a laborious task. Practitioners tend to make a choice informed by the existing literature and their first-hand experience about which models they’d like to try first.

There are some general rules of thumb that help guide this process. For example, Convolutional Neural Networks tend to perform quite well on image recognition and text classification, LSTMs and GRUs are among the go-to choices for sequence prediction and language modeling, and encoder-decoder architectures excel on translation tasks.

After a model has been selected, the practitioner must then decide in which tool to implement the chosen model. The interoperability of different frameworks has improved greatly in recent years due to the introduction of universal model file formats such as the Open Neural Network eXchange (ONNX), which allow models trained in one library to be exported for use in another.

What’s more, the advent of machine learning compilers such as Intel’s nGraph, Facebook’s Glow, or the University of Washington’s TVM promise the holy grail of being able to specify your model in a universal language of sorts and have it be compiled to seamlessly target a vast array of different platforms and hardware architectures.
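For example, porting a model between frameworks via ONNX can look roughly like the following sketch, here exporting a small PyTorch model so that another runtime such as ONNX Runtime can load it (the model itself is a placeholder):

    # Sketch of porting a model between frameworks via ONNX: define (or load) a
    # model in PyTorch, export it to the ONNX format, then run it elsewhere.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    model.eval()

    dummy_input = torch.randn(1, 16)  # example input that defines the graph shape
    torch.onnx.export(
        model, dummy_input, "classifier.onnx",
        input_names=["features"], output_names=["logits"],
    )

    # The exported file can now be loaded by another library, e.g. ONNX Runtime:
    #   import onnxruntime as ort
    #   session = ort.InferenceSession("classifier.onnx")
    #   logits = session.run(None, {"features": dummy_input.numpy()})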

Model Training

Model training constitutes one of the most time-consuming and labor-intensive stages in any machine learning workflow. What’s more, the hardware and infrastructure used to train models depend greatly on the number of parameters in the model, the size of the dataset, the optimization method used, and other considerations.

In order to automate the quest for optimal hyperparameter settings, machine learning engineers often perform what’s called a grid search or hyperparameter search. This involves a sweep across parameter space that seeks to maximize some score function, often cross-validation accuracy.
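A minimal grid search sketch using scikit-learn, with an illustrative model and parameter grid, might look like this:

    # Minimal hyperparameter grid search sketch: sweep a small parameter grid
    # and keep the settings with the best cross-validation accuracy.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    param_grid = {
        "C": [0.1, 1, 10],        # regularization strength
        "gamma": [1e-3, 1e-4],    # RBF kernel width
    }
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
    search.fit(X, y)

    print("Best parameters:", search.best_params_)
    print("Best cross-validation accuracy:", search.best_score_)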

Even more advanced methods exist that focus on using Bayesian optimization or reinforcement learning to tune hyperparameters. What’s more, the field has recently seen a surge in tools focusing on automated machine learning methods, which act as black boxes used to select a semi-optimal model and hyperparameter configuration.

After a model is trained, it should be evaluated based on performance metrics including cross-validation accuracy, precision, recall, F1 score, and AUC. This information is used to inform either further training of the same model or the next iteration of the model selection process. Like all other metrics, these should be logged in a database for future use.

Visualization

Model visualization can be integrated at any point in the machine learning pipeline, but proves especially valuable at the training and testing stages. As discussed, appropriate metrics should be visualized after each stage in the training process to ensure that the training procedure is tending towards convergence.

Many machine learning libraries are packaged with tools that allow users to debug and investigate each step in the training process. For example, TensorFlow comes bundled with TensorBoard, a utility that allows users to apply metrics to their model, view these quantities as a function of time as the model trains, and even view each node in a neural network’s computational graph.
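For instance, wiring TensorBoard into a Keras training run takes only a callback; the data and model below are illustrative placeholders:

    # Sketch of logging training metrics to TensorBoard from a Keras model.
    # After training, run `tensorboard --logdir logs` to inspect the curves.
    import numpy as np
    import tensorflow as tf

    X = np.random.rand(1000, 20).astype("float32")
    y = (X.sum(axis=1) > 10).astype("float32")  # toy binary labels

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")
    model.fit(X, y, epochs=5, validation_split=0.2, callbacks=[tb_callback])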

Model Testing

Once a model has been trained, but before deployment, it should be thoroughly tested. This is often done as part of a CI/CD pipeline. Each model should be subjected to both qualitative and quantitative unit tests. Many training datasets have corresponding test sets which consist of hand-labeled examples against which the model’s performance can be measured. If a test set does not yet exist for a given dataset, it can often be beneficial for a team to curate one.

The model should also be applied to out-of-domain examples coming from a distribution outside of that on which the model was trained. Often, a qualitative check as to the model’s performance, obtained by cross-referencing a model’s predictions with what one would intuitively expect, can serve as a guide as to whether the model is working as hoped.

For example, if you trained a model for text classification, you might give it the sentence “the cat walked jauntily down the street, flaunting its shiny coat” and ensure that it categorizes this as “animals” or “sass.”
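A qualitative unit test in that spirit, suitable for a CI/CD pipeline, might look like the sketch below; the load_model helper and the label names are illustrative stand-ins for the real classifier under test:

    # Sketch of a qualitative unit test for a text classifier, in the spirit of
    # the example above. load_model and the labels are illustrative stand-ins.
    def load_model():
        """Stand-in for loading the trained text classifier under test."""
        class DummyClassifier:
            def predict(self, text: str) -> str:
                return "animals" if "cat" in text else "other"
        return DummyClassifier()

    def test_out_of_domain_sentence():
        classifier = load_model()
        sentence = ("the cat walked jauntily down the street, "
                    "flaunting its shiny coat")
        # The exact label matters less than the sanity of the prediction.
        assert classifier.predict(sentence) in {"animals", "sass"}

    if __name__ == "__main__":
        test_out_of_domain_sentence()
        print("qualitative check passed")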

Deployment

After a model has been trained and tested, it needs to be deployed in production. Current practices often push for deploying models as microservices: compartmentalized packages of code that can be queried and interacted with via API calls.
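As a sketch of the microservice pattern (Flask and the dummy model below are illustrative choices, not a prescribed stack), a trained model can be wrapped in a small HTTP service that other systems query via API calls:

    # Sketch of serving a trained model as a microservice behind an HTTP API.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def load_model():
        """Stand-in for deserializing the trained model artifact."""
        class DummyModel:
            def predict(self, features):
                return [sum(features)]
        return DummyModel()

    model = load_model()

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        prediction = model.predict(payload["features"])
        return jsonify({"prediction": prediction})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)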

Successful deployment often requires building utilities and software that data scientists can use to package their code and rapidly iterate on models in an organized and robust way, so that backend and data engineers can efficiently translate the results into properly architected models that are deployed at scale.

For traditional businesses, without sufficient in-house technological expertise, this can prove a herculean task. Even for large organizations with resources available, creating a scalable deployment solution is a dangerous, expensive commitment. Building an in-house solution like Uber’s Michelangelo just doesn’t make sense for any but a handful of companies with unique, cutting-edge ML needs that are fundamental to their business.

Fortunately, commercial tools exist to offload this burden, providing the benefits of an in-house platform without signing the organization up for a life sentence of proprietary software development and maintenance.

Algorithmia’s AI Layer allows users to deploy models from any framework, language, or platform and connect to most data sources. We scale model inference on multi-cloud infrastructures with high efficiency and enable users to continuously manage the machine learning life cycle with tools to iterate, audit, secure, and govern.

No matter where you are in the machine learning life cycle, understanding each stage at the start and what tools and practices will likely yield successful results will prime your ML program for sophistication. Challenges exist at each stage, and your team should also be primed to face them.

Source: https://blog.algorithmia.com/best-practices-in-machine-learning-infrastructure/

Financial Services and the cloud: Accelerating the compliance journey

In the financial services industry, there is a growing interest in the cloud and its advanced capabilities to improve existing operations and innovate and transform business. Yet, when it comes to cloud adoption and implementation, there is still a lot of confusion. The most common misperception: Regulation is a barrier to the cloud.

Debunking the cloud myth

Our team at Microsoft has spent the last seven years working closely with financial services regulators and found the opposite to be true. Regulations are technology-neutral with respect to cloud computing, and our experience is that regulators are more open to cloud technology than when we started this journey years ago. That said, there is a lot of banking regulation that comes into play when using cloud services, and both banks and regulators want to get it right. Regulators are also modernizing their laws to address cloud computing.

The role of technology vendors

As banks look to third-party vendors for cloud services, our role goes beyond providing a scalable platform they can use to run and operate their business. Technology providers are also responsible for helping them understand the cloud journey; providing both financial services organizations and regulators with transparency in how they manage and operate their cloud services; and ensuring their customers have the control and security of their data to meet their compliance obligations.

Accelerating the compliance journey

As a global cloud service provider, Microsoft has made significant investments in helping the financial services industry meet and manage its regulatory responsibilities and accelerate the compliance journey.

Here are a few ways we are doing this; read the full blog here:

Discover your path to a Cloud Computing career by preparing with the following courses:

Where Cloud Computing Jobs Will Be In 2019

Demand for cloud computing expertise continues to increase exponentially and will accelerate in 2019. To better understand the current and future direction of cloud computing hiring trends, I utilized Gartner TalentNeuron. Gartner TalentNeuron is an online talent market intelligence portal with real-time labor market insights, including custom role analytics and executive-ready dashboards and presentations. Gartner TalentNeuron also supports a range of strategic initiatives covering talent, location, and competitive intelligence.

Gartner TalentNeuron maintains a database of more than one billion unique job listings and is collecting hiring trend data from more than 150 countries across six continents, resulting in 143GB of raw data being acquired daily. In response to many Forbes readers’ requests for recommendations on where to find a job in cloud computing, I contacted Gartner to gain access to TalentNeuron.

Find out how many Cloud Computing jobs are open in the current market; read here:

Find your career in Cloud Computing by joining the right course: