How to Use Feature Extraction on Tabular Data for Machine Learning

Machine learning predictive modeling performance is only as good as your data, and your data is only as good as the way you prepare it for modeling.

The most common approach to data preparation is to study a dataset and review the expectations of a machine learning algorithm, then carefully choose the most appropriate data preparation techniques to transform the raw data to best meet the expectations of the algorithm. This is slow, expensive, and requires a vast amount of expertise.

An alternative approach to data preparation is to apply a suite of common and commonly useful data preparation techniques to the raw data in parallel and combine the results of all of the transforms together into a single large dataset from which a model can be fit and evaluated.

This is an alternative philosophy for data preparation that treats data transforms as a means of extracting salient features from raw data, exposing the structure of the problem to the learning algorithms. It requires learning algorithms that can scale to a large number of input features, weighting and using those input features that are most relevant to the target being predicted.

This approach requires less expertise, is computationally effective compared to a full grid search of data preparation methods, and can aid in the discovery of unintuitive data preparation solutions that achieve good or best performance for a given predictive modeling problem.

In this tutorial, you will discover how to use feature extraction for data preparation with tabular data.

After completing this tutorial, you will know:

Feature extraction provides an alternate approach to data preparation for tabular data, where all data transforms are applied in parallel to raw input data and combined together to create one large dataset.
How to use the feature extraction method for data preparation to improve model performance over a baseline for a standard classification dataset.
How to add feature selection to the feature extraction modeling pipeline to give a further lift in modeling performance on a standard dataset.

Let’s get started.

How to Use Feature Extraction on Tabular Data for Data Preparation
Photo by Nicolas Valdes, some rights reserved.

Tutorial Overview
This tutorial is divided into three parts; they are:

1. Feature Extraction Technique for Data Preparation
2. Dataset and Performance Baseline
   - Wine Classification Dataset
   - Baseline Model Performance
3. Feature Extraction Approach to Data Preparation
Feature Extraction Technique for Data Preparation
Data preparation can be challenging.

The approach that is most often prescribed and followed is to analyze the dataset, review the requirements of the algorithms, and transform the raw data to best meet the expectations of the algorithms.

This can be effective, but is also slow and can require deep expertise both with data analysis and machine learning algorithms.

An alternative approach is to treat the preparation of input variables as a hyperparameter of the modeling pipeline and to tune it along with the choice of algorithm and algorithm configuration.

This too can be an effective approach, exposing unintuitive solutions and requiring very little expertise, although it can be computationally expensive.

An approach that seeks a middle ground between these two approaches to data preparation is to treat the transformation of input data as a feature engineering or feature extraction procedure. This involves applying a suite of common or commonly useful data preparation techniques to the raw data, then aggregating all of the extracted features together into one large dataset, and then fitting and evaluating a model on this dataset.

The philosophy of the approach treats each data preparation technique as a transform that extracts salient features from raw data to be presented to the learning algorithm. Ideally, such transforms untangle complex relationships and compound input variables, in turn allowing the use of simpler modeling algorithms, such as linear machine learning techniques.

For lack of a better name, we will refer to this as the “Feature Engineering Method” or the “Feature Extraction Method” for configuring data preparation for a predictive modeling project.

It allows data analysis and algorithm expertise to inform the selection of data preparation methods, and allows unintuitive solutions to be found, but at a much lower computational cost than a full grid search.

The explosion in the number of input features can also be explicitly addressed through the use of feature selection techniques that attempt to rank-order the importance or value of the vast number of extracted features and select only a small subset of the most relevant to predicting the target variable.
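
To make the idea concrete, here is a minimal sketch of such a pipeline using scikit-learn's FeatureUnion; the specific transforms, the value of k, and the model are illustrative choices rather than recommendations:

    # Apply several transforms in parallel, concatenate their outputs into one
    # large dataset, select the most relevant extracted features, then model.
    from sklearn.datasets import load_wine
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import MinMaxScaler, QuantileTransformer, StandardScaler

    X, y = load_wine(return_X_y=True)

    # The suite of common and commonly useful transforms, applied in parallel.
    union = FeatureUnion([
        ("minmax", MinMaxScaler()),
        ("standard", StandardScaler()),
        ("quantile", QuantileTransformer(n_quantiles=100, output_distribution="normal")),
    ])

    pipeline = Pipeline([
        ("extract", union),
        ("select", SelectKBest(score_func=f_classif, k=15)),  # keep the most relevant
        ("model", LogisticRegression(max_iter=1000)),
    ])

    scores = cross_val_score(pipeline, X, y, scoring="accuracy", cv=10)
    print("Mean accuracy: %.3f" % scores.mean())

Because the transforms and the feature selection sit inside one pipeline, the selection is refit on each cross-validation split, avoiding data leakage.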

We can explore this approach to data preparation with a worked example.

Before we dive into a worked example, let’s first select a standard dataset and develop a baseline in performance.

Source- https://machinelearningmastery.com/feature-extraction-on-tabular-data/

Machine Learning and Artificial Intelligence in Radar Technology

Technology is always changing, and much of it will make our lives easier by enhancing how we learn or go about our daily jobs in ways that were never thought possible before. Artificial intelligence and machine learning stand at the forefront of technology's future, including their use in radar technology. The purpose of this article is to define what AI and machine learning are, how they relate to each other, and what their role may be in radar technology.

Simply put, artificial intelligence is technology that brings human-like intelligence to machines. This is accomplished by the machine following a set of problem-solving algorithms to complete tasks.

AI has its roots in several research disciplines, including computer science, futures studies and philosophy. AI research is separated into streams that relate to the AI application’s objective of “thinking vs. acting” or “human-like decision vs. ideal, rational decision.” This gives four research currents:

1) Cognitive Modeling – thinking like a human
2) Turing Test – acting like a human when interacting with humans
3) Laws of Thought – a weak AI pretends to think, while a strong AI is a mind that has mental states
4) Rational Agent – the intelligence is produced through the acts of agents that are characterized by five traits:
Operating autonomously
Perception of their environment
Persisting over an extended time period
Adapting to change
Creating and pursuing goals
Artificial intelligence agents can be categorized into four different types:

1) Simple reflex agent that reacts to sensor data
2) Model-based reflex agent that considers the agent’s internal state
3) Goal-based agent that determines the best decision to achieve its goals based on binary logic
4) Utility-based agent whose function is to maximize its utility

Any of these four agents can become a learning agent through the extension of its programming.

The term machine learning describes techniques for solving a variety of real-world problems with computer systems that learn to solve problems rather than being explicitly programmed to do so.

Some machine learning systems are able to work without constant supervision. Others use supervised learning techniques, applying an algorithm to a set of known data points to construct a model that can give insight into a set of unknown data.

A third type, reinforcement learning, continually learns from observations obtained by interacting with its environment through iteration.

Creating a machine learning model typically employs three main phases:

Model initiation, where the user defines the problem, prepares and processes the chosen data set, and chooses the applicable machine learning algorithm
Performance estimation, where various parameter combinations describing the algorithm are validated and the best-performing one is chosen
Deployment of the model to begin solving the task on unseen data
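
A minimal sketch of these three phases in Python with scikit-learn (the dataset and algorithm here are illustrative stand-ins):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    # 1. Model initiation: define the problem and prepare the data set.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 2. Performance estimation: validate parameter combinations, keep the best.
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
    search.fit(X_train, y_train)

    # 3. Deployment: apply the chosen model to unseen data.
    print(search.best_params_, search.score(X_test, y_test))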
Machine learning adapts and mimics the cognitive abilities of human beings, but in an isolated manner.

Despite their differences, there is some confusion regarding what each technology does. This confusion is often exacerbated by the fact that both terms are often mistakenly used interchangeably. In reality, AI depends on machine learning to accomplish its goals.

Source- https://camrojud.com/machine-learning-and-artificial-intelligence-in-radar-technology

Research reflects how AI sees through the looking glass

Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards — findings with implications for training machine learning models and detecting faked images.

“The universe is not symmetrical. If you flip an image, there are differences,” said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, “Visual Chirality,” presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. “I’m intrigued by the discoveries you can make with new ways of gleaning information.”

Zhiqiu Lin is the paper’s first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.

Differentiating between original images and reflections is a surprisingly easy task for AI, Snavely said — a basic deep learning algorithm can quickly learn how to classify if an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train the algorithm. Many of the clues it picks up on are difficult for humans to notice.

For this study, the team developed technology to create a heat map that indicates the parts of the image that are of interest to the algorithm, to gain insight into how it makes these decisions.

They discovered, not surprisingly, that the most commonly used clue was text, which looks different backward in every written language. To learn more, they removed images with text from their data set, and found that the next set of characteristics the model focused on included wrist watches, shirt collars (buttons tend to be on the left side), faces and phones — which most people tend to carry in their right hands — as well as other factors revealing right-handedness.

The researchers were intrigued by the algorithm’s tendency to focus on faces, which don’t seem obviously asymmetrical. “In some ways, it left more questions than answers,” Snavely said.

They then conducted another study focusing on faces and found that the heat map lit up on areas including hair part, eye gaze — most people, for reasons the researchers don’t know, gaze to the left in portrait photos — and beards.

Snavely said he and his team members have no idea what information the algorithm is finding in beards, but they hypothesized that the way people comb or shave their faces could reveal handedness.

“It’s a form of visual discovery,” Snavely said. “If you can run machine learning at scale on millions and millions of images, maybe you can start to discover new facts about the world.”

Each of these clues individually may be unreliable, but the algorithm can build greater confidence by combining multiple clues, the findings showed. The researchers also found that the algorithm uses low-level signals, stemming from the way cameras process images, to make its decisions.

Though more study is needed, the findings could impact the way machine learning models are trained. These models need vast numbers of images in order to learn how to classify and identify pictures, so computer scientists often use reflections of existing images to effectively double their datasets.
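
For instance, a typical flip-based augmentation looks like the following sketch, assuming PyTorch and torchvision (the dataset is an illustrative choice):

    from torchvision import datasets, transforms

    # Each training image is mirrored horizontally half the time, effectively
    # doubling the variety of the training data with reflected copies.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),
    ])

    train_set = datasets.CIFAR10(root="data", train=True, download=True,
                                 transform=augment)

Whether this kind of flipping is appropriate for a given task is exactly the question the findings raise.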

Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that might lead to inaccurate results, Snavely said.

“This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK?” he said. “I’m hoping this will get people to think more about these questions and start to develop tools to understand how it’s biasing the algorithm.”

Understanding how reflection changes an image could also help use AI to identify images that have been faked or doctored — an issue of growing concern on the internet.

“This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell if something is real or not,” Snavely said.

Source: https://www.sciencedaily.com/releases/2020/07/200702152445.htm

AI Being Applied to Improve Health, Better Predict Life of Batteries

AI techniques are being applied by researchers aiming to extend the life and monitor the health of batteries, with the aim of powering the next generation of electric vehicles and consumer electronics.

Researchers at Cambridge and Newcastle Universities have designed a machine learning method that can predict battery health with ten times the accuracy of the current industry standard, according to an account in ScienceDaily. The promise is to develop safer and more reliable batteries.

In a new way to monitor batteries, the researchers sent electrical pulses into them and monitored the response. The measurements were then processed by a machine learning algorithm to enable a prediction of the battery’s health and useful life. The method is non-invasive and can be added on to any battery system.
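
The researchers' actual pipeline is more sophisticated, but the underlying supervised setup, learning a mapping from measured pulse responses to a health estimate, can be sketched as follows (all data here is a random stand-in, not real measurements):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_cells, n_features = 200, 32                   # hypothetical feature count
    pulse_responses = rng.normal(size=(n_cells, n_features))  # stand-in responses
    health = rng.uniform(0.7, 1.0, size=n_cells)    # stand-in state-of-health labels

    # Learn the mapping from a cell's pulse response to its health.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(pulse_responses, health)

    new_cell = rng.normal(size=(1, n_features))
    print("Estimated state of health: %.2f" % model.predict(new_cell)[0])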

The inability to predict the remaining useful charge in lithium-ion batteries is a limitation to the adoption of electric vehicles and an annoyance to mobile phone users. Current methods for predicting battery health are based on tracking the current and voltage during battery charging and discharging. The new methods capture more about what is happening inside the battery and can better detect subtle changes.

“Safety and reliability are the most important design criteria as we develop batteries that can pack a lot of energy in a small space,” stated Dr. Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “By improving the software that monitors charging and discharging, and using data-driven software to control the charging process, I believe we can power a big improvement in battery performance.”


The researchers performed over 20,000 experimental measurements to train the model to spot signs of battery aging. The model learns to distinguish important signals from irrelevant noise, and it learns which electrical signals are most correlated with aging, which then allows the researchers to design specific experiments to probe more deeply into why batteries degrade.

“Machine learning complements and augments physical understanding,” stated co-author Dr. Yunwei Zhang, also from the Cavendish Laboratory. “The interpretable signals identified by our machine learning model are a starting point for future theoretical and experimental studies.”

Department of Energy Researchers Using AI Computer Vision Techniques

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory are using AI computer vision techniques to study battery life. The scientists are combining machine learning algorithms with X-ray tomography data to produce a detailed picture of degradation in one battery component, the cathode, according to an account in SciTechDaily. The referenced study was published in Nature Communications.


In cathodes made of nickel-manganese-cobalt (NMC), particles are held together by a conductive carbon matrix. Researchers have speculated that a cause of battery performance decline could be particles breaking away from that matrix. The team had access to advanced capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), a unit of the Department of Energy operated by Stanford University, and the European Synchrotron Radiation Facility (ESRF), a European collaboration for the advancement of X-rays, based in Grenoble, France. The goal was to build a picture of how NMC particles break apart and away from the matrix, and how that relates to battery performance loss.

The team turned to computer vision with AI capability to help conduct the research. They needed a machine learning model trained to recognize different types of particles in the data, so they could develop a three-dimensional picture of how NMC particles, large or small, break away from the cathode.

The authors encouraged more research into battery health. “Our findings highlight the importance of precisely quantifying the evolving nature of the battery electrode’s microstructure with statistical confidence, which is a key to maximize the utility of active particles towards higher battery capacity,” the authors stated.

Source: https://www.aitrends.com/ai-research/ai-being-applied-to-improve-health-better-predict-life-of-batteries/

AI is Beating the Hype With Stunning Growth

Follow the money. It is true in politics, business, and investing.

Gartner, a global IT research and advisory company, surveyed 3,000 CIOs operating in 89 countries in January. The Stamford, Conn., firm found that AI implementations grew 37% during 2018, and 270% over the last four years.

This is a trend investors should embrace. That’s because it is going to last for a while. And it’s going to make a lot of people very rich.

Investors have soured on AI recently. Self-driving cars, smart cities, and robotics keep getting smacked down as idealistic hype. That’s mostly because their implementations are decades away … or because these ideas are expensive solutions looking for problems.

So say the critics, anyway.

They point to once-high-flying stocks like Nvidia, which just saw its share price get cut in half because of slowing demand for cutting-edge gear and software.

However, that assessment is lazy. It also misses the point.

AI is a digital transformation story. Corporate managers realize AI software can help automate large parts of the enterprise, increasing productivity and saving a boatload of money.

It is true that machines will not be able to wholly replace complex human decision-making anytime soon. But the software is more than sufficient to process mundane, repetitive tasks. And machine learning, a type of data science, can help humans see important patterns they might otherwise miss.

So companies are going all-in.

They are deploying software bots online, along with customer-relationship software to help service reps assist customers.

Executives are using integrated suites and data analytics to manage projects, workflows, payrolls and human resources.
Source link: https://www.aitrends.com/ai-and-business-strategy/ai-is-beating-the-hype-with-stunning-growth/

Devices in IoT

Components of IoT

IoT is becoming the trend of the moment. Sooner or later it will not just take over the industrial sector but also impact the daily household. When designing an IoT ecosystem, there are certain factors to bear in mind: security, the devices to be used (sensors, microcontrollers, gateways) and cloud computing. The devices or “things” play a vital role: everything from a refrigerator door to a coffee machine will be connected to the Internet. With the rapid advancement in technology and the future demand for 5G, it is believed that by the year 2021 approximately 20 billion devices will be connected to the internet.

What is IoT, basically? It is a network in which not only vehicles and home appliances such as fans or air conditioners, but also embedded systems combining electronics, software and sensors, exchange or transfer data over wireless technologies such as Wi-Fi, ZigBee and Bluetooth. In short, IoT will reduce human effort and increase our dependency on machines.

In IoT, the major components are sensors, gateways or microcontrollers, connectivity, analytics or data processing, and cloud computing.

 


  • Sensors: They are the “things” in an IoT system. They are responsible for collecting and transmitting real-time data to the microcontroller. Sensors are used to detect physical changes, for example temperature, humidity or pressure. Here are some features that make a good sensor:
  • It should be sensitive to the phenomenon it measures
  • It should not detect physical changes other than its designated one. For example, the DHT11 is designed to sense the temperature and humidity of its surroundings; if it starts measuring luminosity, that is a problem which needs attention.
  • It should not modify the readings during the measurement process.

 

There are several properties or characteristics one should keep in mind while selecting a sensor:

 

Characteristics of a sensor
Range
Drift
Selectivity
Sensitivity
Precision
Hysteresis
Response and Recovery Time
Calibration

 

  • Microcontrollers, or gateways as they are commonly known: just as the brain controls actions and movements in the human body, the microcontroller acts as the brain of an IoT system.

 

Why should we use a microcontroller in an IoT system?

  • Simplicity: Programming a microcontroller, or setting one up, is not a difficult job. Also, interfacing sensors with a microcontroller is easy.
  • Security: Code is written on “bare metal” in a microcontroller, which results in a very small attack surface and maintains a secure environment.
  • Cost: Microcontrollers are very cost-effective. For minimum cost they offer you simplicity and security.

Arduino Uno and NodeMCU are a few examples of microcontroller boards; single-board computers such as the Raspberry Pi are also commonly used as IoT controllers.
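
As a minimal sketch of how such a board ties a sensor to the network, assuming a NodeMCU/ESP8266 running MicroPython, a DHT11 on pin 4 and a hypothetical MQTT broker address:

    import dht
    import machine
    import time
    from umqtt.simple import MQTTClient

    sensor = dht.DHT11(machine.Pin(4))
    client = MQTTClient("iot-demo", "broker.example.com")  # hypothetical broker
    client.connect()

    while True:
        sensor.measure()                      # trigger a fresh reading
        payload = "{},{}".format(sensor.temperature(), sensor.humidity())
        client.publish(b"home/climate", payload.encode())
        time.sleep(60)                        # report once a minute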

The factors one should keep in mind while selecting a microcontroller:

  1. Compatibility: Will the microcontroller support your sensors and actuators? Depending on how many sensors are being used, decide the number of ports required.
  2. Architecture: Will the microcontroller be able to handle the complexity of the programming? Consider the functional requirements of your application and the computing power needed for the application to run.
  3. Memory: Choosing a microcontroller with enough memory is of utmost importance, in order to save time and money.
  4. Availability: Thorough research into the availability and quantity of the microcontroller is a must. Selecting the correct microcontroller during the initial stages of your project is important and can help scale your application.
  5. Power: Energy efficiency plays a key role in designing an IoT system. How much power does the device require, and will it be wired or battery-powered?

 

Hence, this component of the IoT needs to be the most secure part, as it analyses and processes data from thousands of sensors and acts as a gateway to the cloud. The microcontroller (MCU) should also have the capability to host processes, store data (i.e. act like a memory) and provide a secure operating system.

 

  • Data analytics, or data processing, plays another significant role in an IoT system. Drawing conclusions from big or small data is basically data analytics, and it will play an integral part in the development of IoT. The following points list the effects it will have on businesses.

 

Volume: The sheer amount of data to be analysed in an IoT application will be huge. Real-time data from many sensors and actuators will require data analytics.

 

Structure: The data coming from sensors may be structured, semi-structured or unstructured. This will require data to be analysed at a larger scale.

 

Revenues: Data analytics will provide insight into what customers demand and expect. This will help increase or generate revenues.

 

Competition: As we know, IoT is going to be the future; it provides freedom and better performance. Hence, by offering data analytics, one can keep his or her business ahead of the competition.

 

 

  • Cloud computing: If the microcontroller is the brain of an IoT system, then the Internet is its heart. There are many ways to connect to the Internet, for example Wi-Fi and Bluetooth. Cloud computing is a sector vital to the evolution of IoT: if IoT provides the data, then the cloud provides the path for the data to travel.

 

Cloud computing’s sole motto is to enable IoT users to access data from remote parts of the world through its storage options, and to let developers work from different parts of the world. Cloud computing is also economically viable, as it has minimal charges that depend on the cloud model. For example, Microsoft Azure Cloud Services’ free trial provides up to 8,000 messages per day. This will encourage IoT companies and start-ups, and in turn reduce costs.

 

Some of the widely used cloud computing platforms are Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform and IBM Cloud.

References:

https://pinaclsolutions.com/blog/2017/cloud-computing-and-iot

https://www.fingent.com/blog/role-of-data-analytics-in-internet-of-things-iot

https://blog.temboo.com/how-to-choose-a-microcontroller-for-iot/

Book: Internet of Things by Dr. Jeeva Jose


Improving Verifiability in AI Development


We’ve contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence, Mila, the Schwartz Reisman Institute for Technology and Society, the Center for Advanced Study in the Behavioral Sciences, and the Center for Security and Emerging Technology. This report describes 10 mechanisms to improve the verifiability of claims made about AI systems. Developers can use these tools to provide evidence that AI systems are safe, secure, fair, or privacy-preserving. Users, policymakers, and civil society can use these tools to evaluate AI development processes.


While a growing number of organizations have articulated ethics principles to guide their AI development process, it can be difficult for those outside of an organization to verify whether the organization’s AI systems reflect those principles in practice. This ambiguity makes it harder for stakeholders such as users, policymakers, and civil society to scrutinize AI developers’ claims about properties of AI systems and could fuel competitive corner-cutting, increasing social risks and harms. The report describes existing and potential mechanisms that can help stakeholders grapple with questions like:

  • Can I (as a user) verify the claims made about the level of privacy protection guaranteed by a new AI system I’d like to use for machine translation of sensitive documents?
  • Can I (as a regulator) trace the steps that led to an accident caused by an autonomous vehicle? Against what standards should an autonomous vehicle company’s safety claims be compared?
  • Can I (as an academic) conduct impartial research on the risks associated with large-scale AI systems when I lack the computing resources of industry?
  • Can I (as an AI developer) verify that my competitors in a given area of AI development will follow best practices rather than cut corners to gain an advantage?

The 10 mechanisms highlighted in the report are listed below, along with recommendations aimed at advancing each one. (See the report for discussion of how these mechanisms support verifiable claims as well as relevant caveats about our findings.)

Institutional Mechanisms and Recommendations

  1. Third party auditing. A coalition of stakeholders should create a task force to research options for conducting and funding third party auditing of AI systems.
  2. Red teaming exercises. Organizations developing AI should run red teaming exercises to explore risks associated with systems they develop, and should share best practices and tools.
  3. Bias and safety bounties. AI developers should pilot bias and safety bounties for AI systems to strengthen incentives and processes for broad-based scrutiny of AI systems.
  4. Sharing of AI incidents. AI developers should share more information about AI incidents, including through collaborative channels.

Software Mechanisms and Recommendations

  1. Audit trails. Standard setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.
  2. Interpretability. Organizations developing AI and funding bodies should support research into the interpretability of AI systems, with a focus on supporting risk assessment and auditing.
  3. Privacy-preserving machine learning. AI developers should develop, share, and use suites of tools for privacy-preserving machine learning that include measures of performance against common standards.

Hardware Mechanisms and Recommendations

  1. Secure hardware for machine learning. Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts.
  2. High-precision compute measurement. One or more AI labs should estimate the computing power involved in a single project in great detail and report on lessons learned regarding the potential for wider adoption of such methods.
  3. Compute support for academia. Government funding bodies should substantially increase funding for computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry.

Source: https://openai.com/blog/improving-verifiability/


Angular vs React.JS

Angular

Angular is a TypeScript-based web development framework for Single Page Applications (SPAs). Angular is an open-source web framework maintained by Google. Initially, Google came out with a library for developing Single Page Applications called AngularJS. Later, the same team worked on a different project that was released as a development framework for SPA applications and named Angular. AngularJS uses JavaScript to develop web UI applications, but Angular uses TypeScript, which helps developers create type-safe, ES6-based JavaScript applications. Angular is a web UI development framework, not a library; a library offers just a collection of functions that can be called from any web application.

Angular offers the following features:
  1. Angular CLI
    Angular provides a command-line tool to create, test, run and build projects. This CLI tool provides a rich set of commands that can also be used to generate your Angular components, services, directives, pipes, modules, classes and more. Use the CLI to run the project in watch mode during development. You can also run the test files using a CLI command. A single CLI command can also produce code that can be deployed to your web server.
  2. Open-source and cross-platform development
    Angular is an open-source web framework for developing Single Page Applications. It provides cross-platform support, so you can develop your web application from any OS using your favorite IDE, such as VS Code, Atom, JetBrains WebStorm, NetBeans or IntelliJ IDEA.
  3. MVC or MVVM Architecture
    MVC refers to Model-View-Controller and MVVM refers to Model-View-ViewModel. You can develop your application in the MVC or MVVM architecture using Angular. Angular allows you to create reusable components. A component provides an HTML view and a code file: the code file contains the event-handling code and other functions, while the HTML file contains the markup along with Angular directives and pipes. You can also create injectable services for reusable code logic.
  4. Performance and fast view rendering
    The Angular framework ships with a new compilation and rendering engine. This next-generation rendering engine is named Ivy. With the version 9 release of Angular, the new compiler and runtime instructions are used by default instead of the older compiler and runtime, known as View Engine.
  5. TypeScript for development
    Angular uses TypeScript as the default language for development, which helps developers use ES6 features in their applications. TypeScript is a superset of JavaScript that provides compile-time error checking. The type-safe TypeScript language increases developer productivity by helping to produce error-free code.
  6. Built-in Dependency Injection (DI) support
    To increase the efficiency and modularity of your application, you can create reusable service classes in Angular. These service classes can be injected into any component, directive, pipe or other service using Dependency Injection. Angular uses its own DI framework to handle this. With DI, the application manages the number of instances, the scope and the lifetime of your service objects.
  7. Event handling and Two-way data binding support
    Angular offers built-in two-way data binding, which helps us bind objects to form controls. Angular also provides event-handling functionality that helps invoke functions on various events of the UI elements.
  8. Built-in form validation and error handling
    Angular provides two ways of creating and managing forms: Template-Driven and Reactive. Template-driven forms use the FormsModule and directives such as ngModel and ngForm. Reactive forms use the ReactiveFormsModule and directives and services such as FormGroup, FormControl, Validators and FormBuilder.
  9. Enhanced and simple routing
    Angular uses a built-in routing module that uses HTML5 routing paths. You can pass route parameters and query parameters to the routes. Angular uses the RouterModule to enable routing in your application. Angular routing also offers the following features:

    • Lazy loading
    • Route guards
    • Data resolvers
    • Http Interceptors
  10. Component Development Kit (CDK) and support for Angular Material
    The Component Dev Kit (CDK) is a set of tools that implement common interaction patterns whilst being unopinionated about their presentation. The Angular CDK provides a feature called Virtual Scrolling that loads only the set of data that fits the screen; when you scroll down, it loads further data dynamically into the page component. The latest version of Angular provides support for Material themes through Angular Material, which is built on top of the Angular CDK.
  11. Differential loading
    Angular 8 comes with a new feature called Differential loading. Using this Angular CLI can now generate two separate bundles of project output, one for the legacy JavaScript (ES5) and another one for modern JavaScript (ES6 and later).


 


React

ReactJS is a JavaScript library for building fast-rendering user interfaces for your web applications. The ReactJS library is developed and maintained by Facebook. React uses JSX for developing UI components. It is a declarative, open-source, cross-platform library that uses a concept called the Virtual DOM for developing fast-rendering UI elements.

Features of ReactJS
  1. Open-source and cross-platform support
    ReactJS is developed by Facebook and is available as an open-source library for UI developers. Since it is a small JavaScript library, you can develop your application on any platform with any IDE.
  2. CLI tool to start with quick-start templates
    You can start creating your first React application using the create-react-app CLI tool. This CLI tool can generate a basic template of a React application in either JavaScript or TypeScript. You can use this CLI tool to create, run and build the project with ease. Install the tool globally using the npm install -g create-react-app command.
  3. Virtual DOM support
    React uses a concept called the Virtual DOM for fast rendering of UI elements. The Virtual DOM is an in-memory representation of the browser DOM that is updated frequently based on data changes. It is quicker to update the Virtual DOM than the browser DOM, since it is an in-memory object.
  4. One-way data binding
    ReactJS was introduced as a UI development library for rendering data quickly on web pages. For that, it uses one-way data binding to update the data in UI elements. React does not support two-way data binding by default, but you can use events and properties to achieve it.
  5. Easy integration with other web frameworks
    Since it is a UI development library, you can easily integrate React with any of your web frameworks, such as PHP, JSP and Servlets, ASP.NET or Angular. You can use the CDN links or downloadable JS files in your applications.
  6. Ideal for mobile app development
    You can create native apps for your Android and iOS devices using React Native. React Native is a custom renderer that runs on the React platform. It uses native components instead of web components.
  7. Rich set of libraries
    Since React was introduced as a library for fast-rendering UI components, it does not support some web features, such as routing, form validation, centralized state management and Dependency Injection, out of the box. But it allows you to use a rich set of JavaScript libraries, such as React Router for routing, Redux for state management, React Bootstrap for responsive web design and React Form for form validation.
  8. Better community support
    ReactJS is now driven by a community of individual developers. You can contribute to React through the community.


Which one to choose – React or Angular?

One of the major questions developers and project managers ask me is: ‘Which one to choose, React or Angular?’ Everyone has their own reasons for choosing Angular or React for their projects. If you look closely at the capabilities of Angular and React, you will find the answer. You may read about Angular and React on many blogs and forums, and you may come away with the answer ‘React’, because it uses the Virtual DOM for fast rendering of UI elements.

If you read the above description of Angular and React, you will notice one important point: Angular is a complete framework for SPA development, while React is just a library. Angular is a complete web framework for developing an end-to-end web application. It provides all the features needed for a complete web application, such as routing, two-way data binding, form validation, Dependency Injection, a CLI tool, and asynchronous functions using Observables and promises. React, by contrast, is a library, like jQuery, which can be easily integrated with any other web framework; it is used to increase the speed of view rendering.

If you are looking to build a complete web application, such as an HR management application, an e-commerce application or a financial application, you should choose Angular. Such applications are very large, and they use multiple pages, data-entry forms and reusable code logic. Angular offers a built-in routing module that provides features such as lazy loading of modules, HTTP interceptors for request and response processing, data resolvers for loading data when a route is activated, guards for conditionally activating and deactivating routes, and more. It also provides built-in form-handling modules for control binding and validation, event handling and so on. The built-in DI engine helps control the scope and lifetime of the services.

But if you are developing a web application that mostly presents data to users rather than collecting it through entry forms, such as dashboards, social media applications or online newspaper websites, then you can choose ReactJS, because these kinds of applications mostly use data-presentation UI components rather than data-entry forms. React’s Virtual DOM with one-way data binding helps render data quickly on web pages. You can also develop an end-to-end web application using React, but you may need a large set of external libraries for state management (Redux or Flux), routing (React Router), responsive design (React Bootstrap/Material-UI) and form validation (React Form).

I hope this helps you understand the differences between these two promising JS technologies for web development.

Author: Sonu Sathyadas

 

Machine learning can boost the value of wind energy


Carbon-free technologies like renewable energy help combat climate change, but many of them have not reached their full potential. Consider wind power: over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged. However, the variable nature of wind itself makes it an unpredictable energy source—less useful than one that can reliably deliver power at a set time.

In search of a solution to this problem, last year, DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city.

Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance. This is important, because energy sources that can be scheduled (i.e. can deliver a set amount of electricity at a set time) are often more valuable to the grid.
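
As a toy stand-in for this kind of supervised setup (not DeepMind's actual neural network; all data and names here are illustrative):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    forecast_features = rng.normal(size=(1000, 8))  # stand-in weather forecasts
    power_mw = rng.uniform(0, 700, size=1000)       # stand-in turbine output

    # Learn to predict output from forecasts, then score tomorrow's hours.
    model = GradientBoostingRegressor().fit(forecast_features, power_mw)
    tomorrow = rng.normal(size=(24, 8))             # one forecast row per hour
    hourly_commitments = model.predict(tomorrow)    # basis for day-ahead commitments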

Although we continue to refine our algorithm, our use of machine learning across our wind farms has produced positive results. To date, machine learning has boosted the value of our wind energy by roughly 20 percent, compared to the baseline scenario of no time-based commitments to the grid.

We can’t eliminate the variability of the wind, but our early results suggest that we can use machine learning to make wind power sufficiently more predictable and valuable. This approach also helps bring greater data rigor to wind farm operations, as machine learning can help wind farm operators make smarter, faster and more data-driven assessments of how their power output can meet electricity demand.


Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide. Researchers and practitioners across the energy industry are developing novel ideas for how society can make the most of variable power sources like solar and wind. We’re eager to join them in exploring general availability of these cloud-based machine learning strategies.

Google recently achieved 100 percent renewable energy purchasing and is now striving to source carbon-free energy on a 24×7 basis. The partnership with DeepMind to make wind power more predictable and valuable is a concrete step toward that aspiration. While much remains to be done, this step is a meaningful one—for Google, and more importantly, for the environment.

Source: https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/

AI and Robotics in Retail: Drivers, Impact, and Challenges


As the modern world seeks innovation and convenience, retail providers are faced with the new challenge — to keep up with the trend or fall behind.

Due to this, many retailers are delving into the latest technologies that seek to address the new needs of their businesses, and that may mean looking toward enterprise software development. Let’s look at how retailers are innovating and dive deeper into their artificial intelligence and robotics solutions.

Why Do Retailers Need to Modernize?

According to Statista, by 2021 online e-commerce sales are set to total a record $4.8 trillion (USD). Meanwhile, in 2018 this amount was estimated at a lower $2.8 trillion. What this shows is an industry in rapid growth, and there are no signs of it slowing down.

This growth makes one factor exceptionally clear — if you want to stay competitive in the retail business, no matter whether you have a small corner shop or a multinational enterprise, you need to consider optimizing your operations with new technology. Across web, mobile, and in-store, such technology is poised to include AI and robotic process automation (RPA), and here’s why:

The Value Driven by AI and Robotics in Retail
  1. Better insights into inventory and supply planning
  2. No or fewer employees required in physical location management and delivery tracking
  3. Predictive analytics of customer-tailored demands
  4. Personalization of customer support
  5. Cashier-less checkout operations
  6. Better product categorization of both local and global stock units

How AI and Robotics Solutions Boost Retail Businesses

Now that we know the benefits, let’s look at how these solutions work. To begin, let’s consider retail business processes as divided into two parts:

  • Back-office operations — consisting of paperwork, staff and product management
  • Shop-front operations — serving customers and addressing their issues

Across all of these functions, AI and robotics help retailers achieve better results.

Improving Planning and Strategy

AI technologies allow retailers to gather, rework and standardize data, automatically enter it into spreadsheets, and transform it into understandable visuals such as charts. In turn, this helps build efficient business plans, reduce the time spent compiling reports, forecast sales figures, generate customer profiles, and understand customers’ shopping preferences.

Equipped with these reports on customer and market behavior, marketing and sales professionals can efficiently plan campaigns and target them toward real consumers. For managers, this aids in ensuring certain products remain stocked as they know which are in demand.

Optimizing Logistics and Inventory

AI programs store, process and analyze significant amounts of information, predict outcomes, and can even apply those predictions to discover new revenue channels. This can be helpful in back-office operations such as accounting and business planning, but is not limited to these areas.

For example, when paired with IoT, AI applications have already begun to improve the transportation of goods by managing their provenance and shipping conditions data. This can be tracked through the entire journey, ensuring better food security and enabling logistics enterprises to make more informed decisions.

In addition, cloud technologies assist retailers in restocking the shelves and tracking customers’ movement in-store, gathering information on the demand and forecasting the popularity of certain products.

Personalization and Customer Experience Management

According to McKinsey & Company, the retail sector is one of the foremost industries that has benefited from AI and robotics implementation. One of the reasons is that this can transform retail businesses by making them more customer-oriented.

AI-equipped systems can collect exceptionally accurate data about buyers’ preferences and habits. Relying on this data, retailers can grow their sales by recommending suitable items to customers. This is something that a few big names have already tried out with visible results:

  • NY-based company Caper has recently developed a handy computerized shopping cart. This cart helps customers to learn more about products by simply scanning them; the details then show up on the screen. In addition to this, buyers can “checkout” their goods online to avoid standing in a line.
  • Ocado, a grocery company, uses the Google technology based on speech recognition to deal with customer complaints. Google Cloud AI speeds up the process of complaint analysis, helping Ocado to promptly fix and improve their services.

In addition, robotics proves beneficial for in-store service, too. For example, robots can provide retailers with information on shelf inventory, price-tag changes and consumer preferences, personalizing the products in stock. Robotized call centers can help cut expenses while ensuring customer support is available 24 hours a day.

Finally, the buyers themselves can benefit from machine learning systems by using automated checkouts, avoiding long queues or getting quick support through digital kiosks.

Challenges of AI Adoption and Their Solutions

Despite these numerous benefits, it is undeniable that any business seeking to integrate new technologies, AI in particular, will face certain challenges:

1. New working practices

As IT integrations advance, we are likely to see more changes in how we work. The current trend sees manual labor activities increasingly performed by robots, while “mental” work is performed by humans. But even this could be set to change as AI programs are gaining skills and are able to effectively work with data.

Recent research by McKinsey & Company examined some 2,000 work activities across 800 occupations and found that many can be automated to some extent. For society in general, this will mean a new drive in skill building and a changing job market in the future.

However, for retailers, this means having to reconsider both their staffing needs and their technology firepower to keep up with the competition.

2. Costs of new software

For retail businesses that are just starting to introduce technology, the initial costs may seem off-putting. Usually, this means developing customized software and products to improve the business, and this may be more costly than off-the-shelf products. In addition, companies may need to consider hiring specialists to maintain and service such systems.

While initial roll-outs of such developments come at a price, companies should look at their long-term benefits and the overall effect on the business.

3. Security

Finally, retail providers will find new challenges in dealing with security. For many of these systems to work effectively, a large amount of information has to be collected and stored. This means that companies will be ever more responsible for data security, in the areas of individual privacy and the privacy of their whole businesses.

Safe data storage and consent management is one aspect; another is protection from hackers. This is essential to keep data from being exploited and systems from becoming corrupted.

Conclusion

For retailers to adapt and thrive in the new era, they will need to undertake changes to how they do business, and this may mean involving AI and robotics technologies.

These changes have both advantages and disadvantages for the retail sector and its employees. Personalization and robots taking over routine operations may be seen as positives, while the changing roles within an organization may be a negative. It will take flexibility and thought-out strategies for retailers to go with this AI flow without major disruptions to their modus operandi.

Source: https://chatbotsmagazine.com/ai-and-robotics-in-retail-drivers-impact-and-challenges-68a51dbf74cb