How to Use Feature Extraction on Tabular Data for Machine Learning

Machine learning predictive modeling performance is only as good as your data, and your data is only as good as the way you prepare it for modeling.

The most common approach to data preparation is to study a dataset and review the expectations of a machine learning algorithm, then carefully choose the most appropriate data preparation techniques to transform the raw data to best meet the expectations of the algorithm. This is slow, expensive, and requires a vast amount of expertise.

An alternative approach to data preparation is to apply a suite of common and commonly useful data preparation techniques to the raw data in parallel and combine the results of all of the transforms together into a single large dataset from which a model can be fit and evaluated.

This is an alternative philosophy for data preparation that treats data transforms as a way to extract salient features from raw data and expose the structure of the problem to the learning algorithms. It requires learning algorithms that are capable of weighting input features and using those input features that are most relevant to the target being predicted.

This approach requires less expertise, is computationally efficient compared to a full grid search of data preparation methods, and can aid in the discovery of unintuitive data preparation solutions that achieve good or best performance for a given predictive modeling problem.

In this tutorial, you will discover how to use feature extraction for data preparation with tabular data.

After completing this tutorial, you will know:

Feature extraction provides an alternate approach to data preparation for tabular data, where all data transforms are applied in parallel to raw input data and combined together to create one large dataset.
How to use the feature extraction method for data preparation to improve model performance over a baseline for a standard classification dataset.
How to add feature selection to the feature extraction modeling pipeline to give a further lift in modeling performance on a standard dataset.

Let’s get started.

How to Use Feature Extraction on Tabular Data for Data Preparation
Photo by Nicolas Valdes, some rights reserved.

Tutorial Overview
This tutorial is divided into three parts; they are:

1) Feature Extraction Technique for Data Preparation
2) Dataset and Performance Baseline
   Wine Classification Dataset
   Baseline Model Performance
3) Feature Extraction Approach to Data Preparation

Feature Extraction Technique for Data Preparation
Data preparation can be challenging.

The approach that is most often prescribed and followed is to analyze the dataset, review the requirements of the algorithms, and transform the raw data to best meet the expectations of the algorithms.

This can be effective, but is also slow and can require deep expertise both with data analysis and machine learning algorithms.

An alternative approach is to treat the preparation of input variables as a hyperparameter of the modeling pipeline and to tune it along with the choice of algorithm and algorithm configuration.

This too can be an effective approach, exposing unintuitive solutions and requiring very little expertise, although it can be computationally expensive.
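
As a minimal sketch of that idea (assuming scikit-learn and a synthetic dataset for illustration, not the dataset used later in this tutorial), the choice of data preparation can be searched over just like a model hyperparameter:

```python
# Sketch: treat the input-data preparation step as a tunable hyperparameter.
# Assumes scikit-learn; the dataset and parameter values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# the 'prep' step is searched over alongside the model's own parameters
pipeline = Pipeline([('prep', StandardScaler()), ('model', LogisticRegression())])
grid = {
    'prep': [StandardScaler(), MinMaxScaler(), RobustScaler()],
    'model__C': [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipeline, grid, scoring='accuracy', cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Each additional candidate transform multiplies the size of the search, which is where the computational expense of this approach comes from.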

An approach that seeks a middle ground between these two approaches to data preparation is to treat the transformation of input data as a feature engineering or feature extraction procedure. This involves applying a suite of common or commonly useful data preparation techniques to the raw data, aggregating all of the resulting features together to create one large dataset, and then fitting and evaluating a model on this data.
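
A minimal sketch of that procedure, assuming scikit-learn's FeatureUnion (the transforms listed are illustrative choices, not the tutorial's exact code), applies several common transforms in parallel and stacks their outputs column-wise into one large feature set before modeling:

```python
# Sketch: apply a suite of common data preparation transforms in parallel and
# aggregate all of the resulting features into one large dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import (KBinsDiscretizer, MinMaxScaler,
                                   QuantileTransformer, RobustScaler,
                                   StandardScaler)

# each transform "extracts" a different view of the raw input features
transforms = [
    ('mms', MinMaxScaler()),
    ('ss', StandardScaler()),
    ('rs', RobustScaler()),
    ('qt', QuantileTransformer(n_quantiles=100, output_distribution='normal')),
    ('kbd', KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='uniform')),
    ('pca', PCA(n_components=7)),
    ('svd', TruncatedSVD(n_components=7)),
]

# FeatureUnion concatenates all transformed features column-wise
pipeline = Pipeline([('union', FeatureUnion(transforms)),
                     ('model', LogisticRegression(solver='liblinear'))])

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
print(cross_val_score(pipeline, X, y, scoring='accuracy', cv=5).mean())
```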

The philosophy of the approach treats each data preparation technique as a transform that extracts salient features from raw data to be presented to the learning algorithm. Ideally, such transforms untangle complex relationships and compound input variables, in turn allowing the use of simpler modeling algorithms, such as linear machine learning techniques.

For lack of a better name, we will refer to this as the “Feature Engineering Method” or the “Feature Extraction Method” for configuring data preparation for a predictive modeling project.

It allows data analysis and algorithm expertise to inform the selection of data preparation methods and allows unintuitive solutions to be found, but at a much lower computational cost than a full grid search.

The explosion in the number of input features can also be explicitly addressed through the use of feature selection techniques that attempt to rank-order the importance or value of the vast number of extracted features and select only a small subset of those most relevant to predicting the target variable.
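
One way to sketch that extension, again assuming scikit-learn, is to place a recursive feature elimination (RFE) step between the union of transforms and the model so that only a fixed number of the extracted features reach the learning algorithm (the abbreviated transform list and the number of features to keep are illustrative choices):

```python
# Sketch: rank the large set of extracted features and keep only a small,
# most-relevant subset before modeling (transform list abbreviated here).
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

union = FeatureUnion([('mms', MinMaxScaler()), ('ss', StandardScaler()),
                      ('rs', RobustScaler()), ('pca', PCA(n_components=7))])

pipeline = Pipeline([
    ('union', union),                               # extract many candidate features
    ('rfe', RFE(estimator=LogisticRegression(solver='liblinear'),
                n_features_to_select=15)),          # keep the 15 most useful
    ('model', LogisticRegression(solver='liblinear')),
])
# the pipeline is then fit and evaluated exactly like any other model
```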

We can explore this approach to data preparation with a worked example.

Before we dive into a worked example, let’s first select a standard dataset and develop a baseline in performance.
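
As a hedged sketch of that baseline step (using the copy of the wine classification dataset bundled with scikit-learn rather than a separate download), an untransformed logistic regression model can be evaluated with repeated stratified k-fold cross-validation:

```python
# Sketch: baseline performance on the wine classification dataset using the
# raw, untransformed inputs (assumes the dataset bundled with scikit-learn).
from numpy import mean, std
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_wine(return_X_y=True)
model = LogisticRegression(solver='liblinear')
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Baseline accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Any feature extraction pipeline is then judged against this baseline score.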

Source- https://machinelearningmastery.com/feature-extraction-on-tabular-data/

Machine Learning and Artificial Intelligence in Radar Technology

Technology is always changing. A large percentage of it will make our lives easier by enhancing how we learn or go about our daily jobs in ways never thought of before. Artificial intelligence and machine learning stand at the forefront of technology’s future, including their use in radar technology. The purpose of this article is to define what AI and machine learning are, how they relate to each other, and what their role may be in radar technology.

Simply put, artificial intelligence is technology that brings human-like intelligence to machines. This is accomplished by the machine following a set of problem-solving algorithms to complete tasks.

AI is rooted in several research disciplines, including computer science, futures studies, and philosophy. AI research is separated into streams that relate to the AI application’s objective of “thinking vs. acting” or “human-like decision vs. ideal, rational decision.” These streams draw on four research currents:

1) Cognitive Modeling – thinking like a human
2) Turing Test – acting like a human when interacting with humans
3) Laws of Thought – a weak AI pretends to think, while a strong AI is a mind that has mental states
4) Rational Agent – the intelligence is produced through the actions of agents that are characterized by five traits:
Operating autonomously
Perceiving their environment
Persisting over an extended time period
Adapting to change
Creating and pursuing goals
Artificial intelligence agents can be categorized into four different types:

1) Simple reflex agent that reacts to sensor data
2) Model-based reflex agent that considers the agent’s internal state
3) Goal-based agent that determines the best decision to achieve its goals based on binary logic
4) Utility-based agent whose function is to maximize its utility

Any of the four agents can become a learning agent through the extension of its programming.

The term machine learning describes techniques that can solve a variety of real-world problems using computer systems that learn from data rather than being explicitly programmed to solve them.

Some machine learning systems are able to work without constant supervision. Others use supervised learning techniques, applying an algorithm to a set of known data points to construct a model that can provide insight into an unknown set of data.

A third type, reinforcement learning, continually learns from observations obtained by iteratively interacting with its environment.

Creating a machine learning model typically employs three main phases (a brief sketch in code follows this list):

1) Model initiation, where the user defines the problem, prepares and processes the chosen data set, and chooses the applicable machine learning algorithm
2) Performance estimation, where various parameter combinations that describe the algorithm are validated and the best-performing one is chosen
3) Deployment of the model to begin solving the task on unseen data
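
A compact sketch of those three phases, assuming a generic scikit-learn workflow rather than any radar-specific system:

```python
# Sketch of the three phases with a generic scikit-learn workflow; the data,
# algorithm, and parameter grid are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# 1) Model initiation: define the problem, prepare the data, pick an algorithm
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_unseen, y_train, y_unseen = train_test_split(X, y, test_size=0.2,
                                                        random_state=1)
algorithm = RandomForestClassifier(random_state=1)

# 2) Performance estimation: validate parameter combinations, keep the best
search = GridSearchCV(algorithm,
                      {'n_estimators': [50, 100], 'max_depth': [None, 5]},
                      scoring='accuracy', cv=5)
search.fit(X_train, y_train)

# 3) Deployment: apply the chosen model to unseen data
predictions = search.best_estimator_.predict(X_unseen)
```
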
Machine learning adapts and mimics the cognitive abilities of human beings, but in an isolated manner.

Despite their differences, there is some confusion regarding what each technology does, a confusion often exacerbated by the fact that the two terms are mistakenly used interchangeably. In reality, AI depends on machine learning to accomplish its goals.

Source- https://camrojud.com/machine-learning-and-artificial-intelligence-in-radar-technology

Research reflects how AI sees through the looking glass

Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards — findings with implications for training machine learning models and detecting faked images.

“The universe is not symmetrical. If you flip an image, there are differences,” said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, “Visual Chirality,” presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. “I’m intrigued by the discoveries you can make with new ways of gleaning information.”

Zhiqui Lin is the paper’s first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.

Differentiating between original images and reflections is a surprisingly easy task for AI, Snavely said — a basic deep learning algorithm can quickly learn how to classify if an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train the algorithm. Many of the clues it picks up on are difficult for humans to notice.

For this study, the team developed technology to create a heat map that indicates the parts of the image that are of interest to the algorithm, to gain insight into how it makes these decisions.

They discovered, not surprisingly, that the most commonly used clue was text, which looks different backward in every written language. To learn more, they removed images with text from their data set, and found that the next set of characteristics the model focused on included wrist watches, shirt collars (buttons tend to be on the left side), faces and phones — which most people tend to carry in their right hands — as well as other factors revealing right-handedness.

The researchers were intrigued by the algorithm’s tendency to focus on faces, which don’t seem obviously asymmetrical. “In some ways, it left more questions than answers,” Snavely said.

They then conducted another study focusing on faces and found that the heat map lit up on areas including hair part, eye gaze — most people, for reasons the researchers don’t know, gaze to the left in portrait photos — and beards.

Snavely said he and his team members have no idea what information the algorithm is finding in beards, but they hypothesized that the way people comb or shave their faces could reveal handedness.

“It’s a form of visual discovery,” Snavely said. “If you can run machine learning at scale on millions and millions of images, maybe you can start to discover new facts about the world.”

Each of these clues individually may be unreliable, but the algorithm can build greater confidence by combining multiple clues, the findings showed. The researchers also found that the algorithm uses low-level signals, stemming from the way cameras process images, to make its decisions.

Though more study is needed, the findings could impact the way machine learning models are trained. These models need vast numbers of images in order to learn how to classify and identify pictures, so computer scientists often use reflections of existing images to effectively double their datasets.
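
As a small illustrative sketch (plain NumPy, not the researchers’ code), the horizontal-flip augmentation the article refers to simply mirrors each image left-to-right and appends the result to the dataset:

```python
# Sketch: doubling an image dataset with horizontal flips (illustrative only).
import numpy as np

images = np.random.rand(8, 64, 64, 3)           # batch of 8 RGB images (N, H, W, C)
flipped = images[:, :, ::-1, :]                 # mirror each image left-to-right
augmented = np.concatenate([images, flipped])   # dataset is effectively doubled
```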

Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that might lead to inaccurate results, Snavely said.

“This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK?” he said. “I’m hoping this will get people to think more about these questions and start to develop tools to understand how it’s biasing the algorithm.”

Understanding how reflection changes an image could also help use AI to identify images that have been faked or doctored — an issue of growing concern on the internet.

“This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell if something is real or not,” Snavely said.

Source- https://www.sciencedaily.com/releases/2020/07/200702152445.htm

AI Being Applied to Improve Health, Better Predict Life of Batteries

Researchers are applying AI techniques to extend the life and monitor the health of batteries, with the aim of powering the next generation of electric vehicles and consumer electronics.

Researchers at Cambridge and Newcastle Universities have designed a machine learning method that can predict battery health with ten times the accuracy of the current industry standard, according to an account in ScienceDaily. The promise is to develop safer and more reliable batteries.

In a new way to monitor batteries, the researchers sent electrical pulses into them and monitored the response. The measurements were then processed by a machine learning algorithm to enable a prediction of the battery’s health and useful life. The method is non-invasive and can be added on to any battery system.

The inability to predict the remaining useful charge in lithium-ion batteries is a limitation to the adoption of electric vehicles, and an annoyance to mobile phone users. Current methods for predicting battery health are based on tracking the current and voltage during battery charging and discharging. The new method captures more about what is happening inside the battery and can better detect subtle changes.

“Safety and reliability are the most important design criteria as we develop batteries that can pack a lot of energy in a small space,” stated Dr. Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “By improving the software that monitors charging and discharging, and using data-driven software to control the charging process, I believe we can power a big improvement in battery performance.”

The researchers performed over 20,000 experimental measurements to train the model to spot signs of battery aging. The model learns how to distinguish important signals from irrelevant noise, and which electrical signals are most correlated with aging, which then allows the researchers to design specific experiments to probe more deeply into why batteries degrade.

“Machine learning complements and augments physical understanding,” stated co-author Dr. Yunwei Zhang, also from the Cavendish Laboratory. “The interpretable signals identified by our machine learning model are a starting point for future theoretical and experimental studies.”

Department of Energy Researchers Using AI Computer Vision Techniques

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory are using AI computer vision techniques to study battery life. The scientists are combining machine learning algorithms with X-ray tomography data to produce a detailed picture of degradation in one battery component, the cathode, according to an account in SciTechDaily. The referenced study was published in Nature Communications.

In cathodes made of nickel-manganese-cobalt (NMC), particles are held together by a conductive carbon matrix. Researchers have speculated that a cause of battery performance decline could be particles breaking away from that matrix. The team had access to advanced capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), a unit of the Department of Energy operated by Stanford University, and the European Synchrotron Radiation Facility (ESRF), a European collaboration for the advancement of X-rays, based in Grenoble, France. The goal was to build a picture of how NMC particles break apart and away from the matrix, and how that relates to battery performance loss.

The team turned to computer vision with AI capability to help conduct the research. They needed to train a machine learning model to recognize different types of particles, so they could develop a three-dimensional picture of how NMC particles, large or small, break away from the cathode.

The authors encouraged more research into battery health. “Our findings highlight the importance of precisely quantifying the evolving nature of the battery electrode’s microstructure with statistical confidence, which is a key to maximize the utility of active particles towards higher battery capacity,” the authors stated.

Source- https://www.aitrends.com/ai-research/ai-being-applied-to-improve-health-better-predict-life-of-batteries/