
AI fundamentals Cheat Sheet (DRAFT)

AI cheat sheet for the fourth semester

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Basics

What is AI?
Artificial intelligence (AI) is a field of computer science that focuses on creating machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
Timeline
1935
Alan Turing, a British logician and computer pioneer, did the earliest substantial work in the field of artificial intelligence.
1940
Edward Condon displayed Nimatron, a digital computer that played Nim perfectly. Konrad Zuse built the first working program-controlled computers.
1943
Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," laying foundations for artificial neural networks.
1950
Alan Turing proposed the Turing test as a measure of machine intelligence. Claude Shannon published a detailed analysis of chess playing as search. Isaac Asimov published his Three Laws of Robotics.
1955
John McCarthy, known as the father of AI, coined the term "artificial intelligence" in his proposal for the Dartmouth conference. He later (in 1958) developed the programming language LISP.
1956
The Dartmouth College summer AI conference was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester of IBM, and Claude Shannon. The conference is considered the formal founding of the field of AI.
1957-1974
AI flourished, and computers became faster, cheaper, and more accessible. Machine learning algorithms improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon's General Problem Solver and John McCarthy's Advice Taker showed the promise of AI.
1980s
AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized "deep learning" techniques, which allowed computers to learn from experience. Edward Feigenbaum introduced expert systems, which used a knowledge base of rules to make decisions.
1990s
AI research shifted toward practical applications, such as speech recognition, computer vision, and robotics. The development of the World Wide Web and the explosion of digital data created new opportunities for AI.
2000s
AI experienced a resurgence, thanks to advances in deep learning, big data, and cloud computing. Companies such as Google, Facebook, and Microsoft invested heavily in AI research and development, leading to breakthroughs in natural language processing, image recognition, and game playing.

Classification of AI

Type 1: Based on Capabilities
Narrow AI
This type of AI is designed to perform a specific task with intelligence. It is the most common and currently available AI in the world of artificial intelligence. Examples of narrow AI include playing chess, purchasing suggestions on e-commerce sites, self-driving cars, speech recognition, and image recognition.
General AI
This type of AI is designed to perform any intellectual task as efficiently as a human. It is capable of understanding and learning any intellectual task that a human can perform.
Super AI
This type of AI is hypothetical and does not exist yet. It would be capable of performing intellectual tasks beyond human capabilities.
Capabilities of AI
Make Predictions
Detect Anomalies
Analyze Images
Comprehend Speech
Interact in Natural Ways
Type 2: Based on Functionality
Reactive Machines
These are the most basic types of AI; they do not store memories or past experiences and can only react to the current situation based on pre-programmed rules.
Limited Memory
These types of AI can use past experiences to inform future decisions. They can learn from historical data and use that knowledge to make decisions.
Theory of Mind
This type of AI can understand the emotions, beliefs, and intentions of others. It can predict the behavior of others based on their mental state.
Self Aware
This is the most advanced type of AI that can have consciousness and understand its own existence. It can have desires, needs, and emotions.

Machine Learning

Machine learning is an application of artificial intelligence in which algorithms automatically analyze data and make decisions by themselves, without human intervention. It describes how computers perform tasks on their own by learning from previous experience; in other words, in machine learning, artificial intelligence is generated on the basis of experience.
Supervised learning: AI systems that learn from labelled training data. Example: an email spam filter (see the sketch after this list).
Unsupervised learning: AI systems that learn from unlabelled data. Example: clustering customer data.
Reinforcement learning: AI systems that learn from feedback from the environment. Example: AlphaGo.
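A minimal sketch of the supervised-learning idea, assuming scikit-learn is available; the toy emails, labels, and test message below are invented for illustration.

# Toy spam filter: learn from labelled training data, then predict on new data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting at 10 am tomorrow",
          "cheap loans click here", "project report attached"]
labels = ["spam", "ham", "spam", "ham"]            # the labelled training data

vectorizer = CountVectorizer()                     # turn text into word-count features
X = vectorizer.fit_transform(emails)

model = MultinomialNB().fit(X, labels)             # learn the label from the features
print(model.predict(vectorizer.transform(["free prize click here"])))  # ['spam']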

Supervised Learning

Classification: identifying the category of new observations on the basis of training data. In classification, a program learns from the given dataset or observations and then classifies new observations into a number of classes or groups.
Regression: finding the correlations between dependent and independent variables. It helps in predicting continuous variables, such as market trends or house prices (see the regression sketch after this list).
Time series forecasting: analyzing time series data using statistics and modelling to make predictions and inform strategic decision-making. It is not always an exact prediction, and the likelihood of forecasts can vary widely, especially when dealing with the commonly fluctuating variables in time series data and factors outside our control.
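A minimal regression sketch, assuming scikit-learn is available; the house sizes and prices are made up purely to show fitting a continuous target.

# Predict a continuous value (house price) from one feature (floor area).
from sklearn.linear_model import LinearRegression

areas_sqm = [[50], [80], [120], [160]]             # feature: floor area
prices = [150_000, 240_000, 360_000, 480_000]      # continuous target: price

model = LinearRegression().fit(areas_sqm, prices)
print(model.predict([[100]]))                      # estimated price for 100 sqm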

Machine Learning Process

Data Ingestion

 

Interdependency and Key Features of AI

Artificial Intelligence
Any technique that enables computers to mimic human intelligence, using logic, if-then rules, decision trees, and machine learning (including deep learning).
Machine Learning
A subset of AI that includes abstruse statistical techniques that enable machines to improve at tasks with experience. The category includes deep learning.
Deep Learning
The subset of machine learning composed of algorithms that permit software to train itself to perform tasks, like speech and image recognition, by exposing multilayered neural networks to vast amounts of data.
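To make the "multilayered neural network" idea concrete, here is a minimal sketch of a two-layer forward pass in NumPy (assumed available). The weights are random, so it only shows the structure of a network, not a trained model.

# Untrained two-layer network: input -> hidden layer (ReLU) -> output scores.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                                  # 4 input features
W1, b1 = rng.random((8, 4)), rng.random(8)         # hidden layer weights and biases
W2, b2 = rng.random((3, 8)), rng.random(3)         # output layer weights and biases

hidden = np.maximum(0, W1 @ x + b1)                # ReLU activation
scores = W2 @ hidden + b2                          # raw scores for 3 classes
print(scores)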
Key Features of AI
1. Machine Learning
2. Deep Learning
3. Natural Language Processing
4. Computer Vision
5. Neural Networks
6. Cognitive Computing

Labelled and Unlabelled Data

Labelled Data: data that has some predefined tags, such as a name, type, or number. Used in supervised learning techniques. Difficult to get. Example: an image tagged as containing an apple or a banana (see the sketch below).
Unlabelled Data: contains no tags or specified names. Used in unsupervised learning. Easy to acquire. Example: anomaly detection, association rule learning.
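A small pure-Python sketch of the difference; the fruit records are made up. Labelled examples pair the inputs with a tag, while unlabelled examples carry only the inputs.

# Labelled data: each example carries a tag a supervised learner can predict.
labelled = [
    {"weight_g": 150, "colour": "red",    "label": "apple"},
    {"weight_g": 120, "colour": "yellow", "label": "banana"},
]

# Unlabelled data: the same kind of inputs with no tag; an unsupervised learner
# must find structure (clusters, anomalies, associations) on its own.
unlabelled = [
    {"weight_g": 145, "colour": "red"},
    {"weight_g": 118, "colour": "yellow"},
]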

Data Preparation

ML solutions

 

Labels and Features in Machine Learning

How Data Labelling Works

Benefits and Challenges of Data Labelling

Benefits: precise predictions; better data usability.
Challenges: costly and time-consuming; possibility of human error.

Approaches to Data Labeling

Internal / In-house data labeling
Synthetic Labeling
Programmatic Labeling (see the sketch after this list)
Outsourcing
Crowdsourcing
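As an illustration of programmatic labeling, here is a minimal sketch of a hypothetical keyword rule that assigns provisional labels to unlabelled text; real programmatic-labelling pipelines combine many such rules and review their output.

# A hand-written labelling rule; keywords and messages are invented for illustration.
SPAM_KEYWORDS = {"free", "prize", "winner", "click"}

def rule_label(text):
    # Label a message "spam" if it contains any spam keyword, otherwise "ham".
    words = set(text.lower().split())
    return "spam" if words & SPAM_KEYWORDS else "ham"

unlabelled = ["Click here to claim your free prize", "Agenda for Monday's meeting"]
print([(msg, rule_label(msg)) for msg in unlabelled])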

Labels and Features in Machine Learning

Labels: also known as tags; they give an identification to a piece of data and provide some information about that element.
Features: individual independent variables that work as input for the ML system (see the sketch below).
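A small sketch (with invented flower measurements) of how one record splits into features, which the ML system receives as input, and a label, which it learns to predict.

# Features are the independent input variables; the label is the tag to predict.
records = [
    {"petal_len_cm": 1.4, "petal_width_cm": 0.2, "species": "setosa"},
    {"petal_len_cm": 4.7, "petal_width_cm": 1.4, "species": "versicolor"},
]

X = [[r["petal_len_cm"], r["petal_width_cm"]] for r in records]   # features
y = [r["species"] for r in records]                               # labels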

Unsupervised Learning

Clustering
An unsupervised learning method in which we draw inferences from datasets consisting of input data without labelled responses. Generally, it is used to find meaningful structure, explanatory underlying processes, generative features, and groupings inherent in a set of examples (see the sketch below).
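A minimal clustering sketch, assuming scikit-learn is available; k-means groups the made-up 2-D points below without any labelled responses.

# Unsupervised learning: k-means finds groupings in unlabelled points.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],      # one natural group
          [8.0, 8.2], [7.9, 8.1], [8.1, 7.8]]      # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)                              # cluster index for each point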

Types of Machine Learning

Data Ingestion