CSC2515 Fall 2007, Introduction to Machine Learning. Lecture 1: What is Machine Learning?

Main headings in this PowerPoint

● CSC2515 Fall 2007, Introduction to Machine Learning. Lecture 1: What is Machine Learning?
● What is Machine Learning?
● A classic example of a task that requires machine learning: It is very hard to say what makes a 2
● Some more examples of tasks that are best solved by using a learning algorithm
● Some web-based examples of machine learning
● Displaying the structure of a set of documents using Latent Semantic Analysis (a form of PCA)
● Displaying the structure of a set of documents using a deep neural network
● Machine Learning & Symbolic AI
● Machine Learning & Statistics
● A spectrum of machine learning tasks
● Types of learning task
● Hypothesis Space
● Searching a hypothesis space
● Some Loss Functions
● Generalization
● Trading off the goodness of fit against the complexity of the model
● A sampling assumption
● The probabilistic guarantee
● A simple example: Fitting a polynomial
● Some fits to the data: which is best?
● A simple way to reduce model complexity
● Regularization: vs.
● Polynomial Coefficients
● Using a validation set
● The Bayesian framework
● A coin tossing example
● Some problems with picking the parameters that are most likely to generate the data
● Using a distribution over parameter values
● Let's do it again: Suppose we get a tail
● Let's do it another 98 times
● Bayes' Theorem
● A cheap trick to avoid computing the posterior probabilities of all weight vectors
● Why we maximize sums of log probs
● An even cheaper trick
● Supervised Maximum Likelihood Learning


File language: English. File size: 1.17 MB.
File type: PowerPoint slides. Number of slides: 36.
Content level: not specified. File extension: ppt.
Subject group: not specified. Extraction date: 2019/06/14 11:14:26.


Important terms used in this item

learn, machine, datum, learning, example, structure, program, hypothesis, input, represent, task

Note: this item was collected automatically from the public web on 2019/06/14 11:14:26 by the PowerPoint search engine, and it will be removed in accordance with the site's rules if its creator objects. The item was extracted from the website below, and responsibility for publishing it lies with the original source.

http://www.cs.toronto.edu/~hinton/csc2515/notes/lec1.ppt



Text content of this ppt slide deck

CSC2515 Fall 2007, Introduction to Machine Learning. Lecture 1: What is Machine Learning? All lecture slides will be available as .ppt, .ps, and .htm at www.cs.toronto.edu/~hinton. Many of the figures are provided by Chris Bishop, from his textbook "Pattern Recognition and Machine Learning".

What is Machine Learning? It is very hard to write programs that solve problems like recognizing a face. We don't know what program to write because we don't know how our brain does it, and even if we had a good idea about how to do it, the program might be horrendously complicated. Instead of writing a program by hand, we collect lots of examples that specify the correct output for a given input. A machine learning algorithm then takes these examples and produces a program that does the job. The program produced by the learning algorithm may look very different from a typical hand-written program; it may contain millions of numbers. If we do it right, the program works for new cases as well as the ones we trained it on.

A classic example of a task that requires machine learning: it is very hard to say what makes a 2.

Some more examples of tasks that are best solved by using a learning algorithm: recognizing patterns (facial identities or facial expressions, handwritten or spoken words, medical images); generating patterns (generating images or motion sequences; demo); recognizing anomalies (unusual sequences of credit card transactions, unusual patterns of sensor readings in a nuclear power plant, or an unusual sound in your car engine); prediction (future stock prices or currency exchange rates).

Some web-based examples of machine learning: the web contains a lot of data, and tasks with very big datasets often use machine learning, especially if the data is noisy or non-stationary. Spam filtering and fraud detection (the enemy adapts, so we must adapt too); recommendation systems (lots of noisy data; the million-dollar prize); information retrieval (find documents or images with similar content); data visualization (display a huge database in a revealing way; demo).

Displaying the structure of a set of documents using Latent Semantic Analysis (a form of PCA): each document is converted to a vector of word counts, and this vector is then mapped to two coordinates and displayed as a colored dot. The colors represent the hand-labeled classes. When the documents are laid out in 2-D, the classes are not used, so we can judge how good the algorithm is by seeing whether the classes are separated.

Displaying the structure of a set of documents using a deep neural network.
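The LSA layout described above is straightforward to sketch in code. The snippet below is a minimal illustration, assuming scikit-learn is available; the example documents and class labels are made-up placeholders rather than data from the lecture. Word counts are reduced to two coordinates with a truncated SVD (the usual LSA step), and the hand-labeled classes are only consulted afterwards to judge the layout.

# A minimal sketch of the LSA pipeline described above, assuming scikit-learn
# is installed; the documents and class labels are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "interest rates rise as the central bank tightens policy",
    "bond markets react to the latest inflation figures",
    "the striker scored twice in the final minutes of the match",
    "the goalkeeper saved a penalty to win the cup",
]
labels = ["finance", "finance", "sport", "sport"]  # hand-labeled classes (the colors)

# Each document is converted to a vector of word counts.
counts = CountVectorizer().fit_transform(docs)

# Truncated SVD of the count matrix is the LSA step; keeping two components
# maps every document to two coordinates. The labels play no part here.
coords = TruncatedSVD(n_components=2).fit_transform(counts)

# The classes are only used afterwards, to judge whether the layout separates them.
for (x, y), label in zip(coords, labels):
    print(f"{label:8s} ({x:.2f}, {y:.2f})")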
Machine Learning & Symbolic AI: knowledge representation works with facts and assertions and develops rules of logical inference. The rules can handle quantifiers, but learning and uncertainty are usually ignored. Expert systems used logical rules or conditional probabilities provided by experts for specific domains. Graphical models treat uncertainty properly and allow learning, but they often ignore quantifiers and use a fixed set of variables: a set of logical assertions becomes values of a subset of the variables plus local models of the probabilistic interactions between variables; logical inference becomes probability distributions over subsets of the unobserved variables, or over individual ones; learning becomes refining the local models of the interactions.

Machine Learning & Statistics: a lot of machine learning is just a rediscovery of things that statisticians already knew. This is often disguised by differences in terminology (ridge regression vs. weight decay, fitting vs. learning, held-out data vs. test data), but the emphasis is very different. A good piece of statistics: a clever proof that a relatively simple estimation procedure is asymptotically unbiased. A good piece of machine learning: a demonstration that a complicated algorithm produces impressive results on a specific task. Data mining: using very simple machine learning techniques on very large databases, because computers are too slow to do anything more interesting with ten billion examples.

A spectrum of machine learning tasks. At the statistics end: low-dimensional data (e.g. fewer than 100 dimensions) with lots of noise; there is not much structure in the data, and what structure there is can be represented by a fairly simple model, so the main problem is distinguishing true structure from noise. At the artificial intelligence end: high-dimensional data (e.g. more than 100 dimensions), where the noise is not sufficient to obscure the structure if we process the data right; there is a huge amount of structure, but it is too complicated to be represented by a simple model, so the main problem is figuring out a way to represent the complicated structure that allows it to be learned.

Types of learning task. Supervised learning: learn to predict an output when given an input vector (who provides the correct answer?). Reinforcement learning: learn actions to maximize payoff; there is not much information in a payoff signal, and the payoff is often delayed (reinforcement learning is an important area that will not be covered in this course). Unsupervised learning: create an internal representation of the input, e.g. form clusters or extract features; how do we know if a representation is good? This is the new frontier of machine learning, because most big datasets do not come with labels.

Hypothesis space: one way to think about a supervised learning machine is as a device that explores a "hypothesis space". Each setting of the parameters in the machine is a different hypothesis about the function that maps input vectors to output vectors. If the data is noise-free, each training example rules out a region of hypothesis space. If the data is noisy, each training example scales the posterior probability of each point in the hypothesis space in proportion to how likely the training example is given that hypothesis. The art of supervised machine learning is in deciding how to represent the inputs and outputs, and in selecting a hypothesis space that is powerful enough to represent the relationship between inputs and outputs but simple enough to be searched.

Searching a …
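The posterior-scaling idea in the hypothesis-space paragraph above can be made concrete with a tiny numerical sketch, using the coin-tossing setting listed in the outline. Here the hypothesis space is a grid of possible values for the coin's probability of heads; the grid resolution and the simulated sequence of tosses are illustrative choices, not details from the lecture.

# A small sketch of how noisy training examples rescale a distribution over a
# hypothesis space: each hypothesis is one possible probability of heads, and
# every observed toss multiplies it by the likelihood of that toss.
import numpy as np

grid = np.linspace(0.0, 1.0, 101)                 # 101 hypotheses about p(heads)
posterior = np.full_like(grid, 1.0 / grid.size)   # start from a uniform prior

def update(posterior, outcome):
    # Scale each hypothesis by how likely it makes the observation, then renormalize.
    likelihood = grid if outcome == "head" else 1.0 - grid
    posterior = posterior * likelihood
    return posterior / posterior.sum()

posterior = update(posterior, "head")   # one head ...
posterior = update(posterior, "tail")   # ... then a tail ...
for _ in range(49):                     # ... then 98 more tosses (49 of each here)
    posterior = update(posterior, "head")
    posterior = update(posterior, "tail")

print("most probable coin bias:", grid[np.argmax(posterior)])

With equal numbers of heads and tails the posterior peaks near 0.5 and sharpens as more tosses arrive, which is the sense in which noisy examples rescale hypotheses rather than rule them out.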

