
Course catalog: Accounting Data Analytics Fundamentals II training

Course outline:

Accounting Data Analytics Fundamentals II training


Course Orientation

You will become familiar with the course, your classmates, and our learning environment. The orientation will also help you obtain the technical skills required for the course.

Module 1: Introduction to Machine Learning

This module provides the basis for the rest of the course by introducing the basic concepts behind machine learning and, specifically, how to perform machine learning using Python and the scikit-learn machine learning module. First, you will learn how machine learning and artificial intelligence are disrupting businesses. Next, you will learn about the basic types of machine learning and how to leverage these algorithms in a Python script. Third, you will learn how linear regression can be considered a machine learning problem whose parameters must be determined computationally by minimizing a cost function. Finally, you will learn about neighbor-based algorithms, including the k-nearest neighbor algorithm, which can be used for both classification and regression tasks.
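As a concrete sketch of the neighbor-based approach, a k-nearest neighbor classifier can be written in a few lines of plain Python (the course itself uses scikit-learn's `KNeighborsClassifier`; the tiny two-class data set below is invented for illustration):

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    # Compute the Euclidean distance from x_new to every training point.
    dists = [(math.dist(x, x_new), label) for x, label in zip(X_train, y_train)]
    # Take the labels of the k closest points and vote.
    nearest = sorted(dists)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated classes.
X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
y = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(X, y, (1.1, 0.9)))  # close to class "a"
print(knn_predict(X, y, (5.1, 5.0)))  # close to class "b"
```

Classification here is a majority vote among the k closest training points; for a regression task, the vote would simply be replaced by an average of the neighbors' numerical targets.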

 

Module 2: Fundamental Algorithms

This module introduces several of the most important machine learning algorithms: logistic regression, decision trees, and support vector machines. Of these three algorithms, the first, logistic regression, is a classification algorithm (despite its name). The other two, however, can be used for either classification or regression tasks. Thus, this module will dive deeper into the concept of machine classification, where algorithms learn from existing, labeled data to classify new, unseen data into specific categories, and the concept of machine regression, where algorithms learn a model from data to make predictions for new, unseen data. While these algorithms all differ in their mathematical underpinnings, they are often used for classifying numerical, text, and image data or performing regression in a variety of domains.
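To illustrate why logistic regression is a classification algorithm despite its name, here is a minimal one-feature sketch trained by gradient descent in plain Python (the data, learning rate, and epoch count are invented for illustration; in practice you would use scikit-learn's `LogisticRegression`):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a 1-D logistic regression model w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of the log-loss for a single example.
            w -= lr * (p - t) * x
            b -= lr * (p - t)
    return w, b

# Toy 1-D data: class 1 for large x, class 0 for small x.
xs = [0.0, 0.5, 1.0, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)

def predict(x):
    # The fitted sigmoid outputs a probability; thresholding it at 0.5
    # turns the regression-like model into a classifier.
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

print(predict(0.2), predict(3.8))
```

The model fits a continuous probability curve, but thresholding that probability is what makes the final output a class label.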

This module will also review different techniques for quantifying the performance of classification and regression algorithms and how to deal with imbalanced training data.
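The following sketch shows why accuracy alone can mislead on imbalanced data (the toy labels are invented): a model that predicts the majority class for every example still scores 90% accuracy while finding none of the positives.

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, and recall from label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Imbalanced example: 9 negatives, 1 positive; the model predicts all negative.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
acc, prec, rec = classification_metrics(y_true, y_pred)
print(acc, prec, rec)  # accuracy looks high (0.9) but recall is 0
```

This is why precision and recall (and related scores) matter alongside accuracy whenever one class is rare.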

 

Module 3: Practical Concepts in Machine Learning

 

This module introduces several important and practical concepts in machine learning. First, you will learn about the challenges inherent in applying data analytics (and machine learning in particular) to real-world data sets. The module also introduces several methodologies that you may encounter in the future that dictate how to approach, tackle, and deploy data analytic solutions. Next, you will learn about a powerful technique for combining the predictions from many weak learners to make a better prediction via a process known as ensemble learning. Specifically, this module will introduce two of the most popular ensemble learning techniques, bagging and boosting, and demonstrate how to employ them in a Python data analytics script. Finally, the concept of a machine learning pipeline is introduced, which encapsulates the process of creating, deploying, and reusing machine learning models.
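The bagging idea can be sketched in plain Python: train each weak learner on a bootstrap sample (drawn with replacement) and combine them by majority vote. The deliberately weak "stump" learner and the toy data below are invented for illustration; in the course, scikit-learn's `BaggingClassifier` plays this role.

```python
import random
from collections import Counter

def train_stump(X, y):
    """A deliberately weak learner: threshold a single feature at the
    midpoint between the two class means."""
    mean0 = sum(x for x, t in zip(X, y) if t == 0) / y.count(0)
    mean1 = sum(x for x, t in zip(X, y) if t == 1) / y.count(1)
    threshold = (mean0 + mean1) / 2
    return lambda x: 1 if x > threshold else 0

def bagging_fit(X, y, n_estimators=11, seed=0):
    """Train each stump on a bootstrap sample of the training data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        if len(set(yb)) < 2:  # skip degenerate one-class samples
            continue
        models.append(train_stump(Xb, yb))
    return models

def bagging_predict(models, x):
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

X = [1.0, 1.5, 2.0, 6.0, 6.5, 7.0]
y = [0, 0, 0, 1, 1, 1]
models = bagging_fit(X, y)
print(bagging_predict(models, 1.2), bagging_predict(models, 6.8))
```

Boosting differs in that learners are trained sequentially, with each new learner focusing on the examples its predecessors got wrong.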

Module 4: Overfitting & Regularization

 

This module introduces the concept of overfitting, the problems it can cause in machine learning analyses, and techniques to overcome it. First, the basic concept of overfitting is presented along with ways to identify its occurrence. Next, the technique of cross-validation is introduced, which can reduce the likelihood that overfitting occurs. Next, the use of cross-validation to identify the optimal parameters for a machine learning algorithm trained on a given data set is presented. Finally, the concept of regularization, where an additional penalty term is applied when determining the best machine learning model parameters, is introduced and demonstrated for different regression and classification algorithms.
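The mechanics of cross-validation can be sketched as follows: split the data indices into k folds and let each fold serve once as the validation set while the remaining folds form the training set (scikit-learn's `KFold` does this in practice; this plain-Python version is for illustration):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds; each fold serves once
    as the validation set while the rest form the training set."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = k_fold_indices(10, 5)
for train, val in splits:
    print(len(train), len(val))  # 8 training and 2 validation indices per fold
```

Averaging a model's score over all k validation folds gives a far more honest performance estimate than a single train/test split, which is what makes cross-validation useful for detecting overfitting and tuning parameters.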

Module 5: Fundamental Probabilistic Algorithms

This module starts by discussing practical machine learning workflows that are deployed in production environments, which emphasizes the big-picture view of machine learning. Next, this module introduces two additional fundamental algorithms: naive Bayes and Gaussian processes. These algorithms both have foundations in probability theory but operate under very different assumptions. Naive Bayes is generally used for classification tasks, while Gaussian processes are generally used for regression tasks. This module also discusses practical issues in constructing machine learning workflows.
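A minimal sketch of the naive Bayes idea for text classification, assuming a Bernoulli (word presence/absence) model with Laplace smoothing; the four toy documents and their labels are invented for illustration:

```python
import math
from collections import defaultdict

def train_nb(docs, labels):
    """Bernoulli naive Bayes with Laplace smoothing.
    docs: list of sets of words; labels: one class per document."""
    classes = set(labels)
    vocab = set().union(*docs)
    priors, cond = {}, defaultdict(dict)
    for c in classes:
        idx = [i for i, l in enumerate(labels) if l == c]
        priors[c] = len(idx) / len(labels)
        for w in vocab:
            count = sum(1 for i in idx if w in docs[i])
            # Laplace smoothing avoids zero probabilities for unseen words.
            cond[c][w] = (count + 1) / (len(idx) + 2)
    return priors, cond, vocab

def predict_nb(model, doc):
    """Pick the class with the highest (log) posterior, naively assuming
    every word's presence is independent given the class."""
    priors, cond, vocab = model
    best, best_lp = None, -math.inf
    for c in priors:
        lp = math.log(priors[c])
        for w in vocab:
            p = cond[c][w]
            lp += math.log(p if w in doc else 1 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [{"cheap", "pills"}, {"cheap", "offer"},
        {"meeting", "agenda"}, {"project", "meeting"}]
labels = ["spam", "spam", "ham", "ham"]
model = train_nb(docs, labels)
print(predict_nb(model, {"cheap", "meeting", "pills"}))
```

The "naive" independence assumption is rarely true of real text, yet the resulting classifier is fast and often surprisingly effective.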

 

Module 6: Feature Engineering

 

This module introduces an important concept in machine learning: the selection of the actual features that will be used by a machine learning algorithm. Along with data cleaning, this step in the data analytics process is extremely important, yet it is often overlooked as a method for improving the overall performance of an analysis. This module begins with a discussion of ethics in machine learning, in large part because the selection of features can have (sometimes) non-obvious impacts on the final performance of an algorithm. This can be important when machine learning is applied to data in a regulated industry or when the improper application of an algorithm might lead to discrimination. The rest of this module introduces different techniques for either selecting the best features in a data set or constructing new features from the existing set of features.
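One simple feature-selection heuristic is to rank features by the absolute value of their correlation with the target. This is only a sketch with invented data (scikit-learn offers richer tools such as `SelectKBest`), but it shows the basic idea of scoring each feature independently:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_features(X, y):
    """Rank feature columns by |correlation| with the target, best first."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        scores.append((abs(pearson(col, y)), j))
    return [j for _, j in sorted(scores, reverse=True)]

# Column 0 tracks the target closely; column 1 is mostly noise.
X = [[1, 7], [2, 3], [3, 9], [4, 1], [5, 5]]
y = [1.1, 2.0, 2.9, 4.2, 5.0]
print(rank_features(X, y))  # feature 0 ranked first
```

Scoring features one at a time is cheap but can miss features that are only useful in combination, which is one reason feature engineering remains a judgment-driven step.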

 

Module 7: Introduction to Clustering

This module introduces clustering, where data points are assigned to larger groups of points based on some specific property, such as spatial distance or the local density of points. While humans often find clusters visually with ease in given data sets, the problem is more challenging computationally. This module starts by exploring the basic ideas behind this unsupervised learning technique, as well as different areas in which clustering can be used by businesses. Next, one of the most popular clustering techniques, K-means, is introduced. Then, the density-based DBSCAN technique is introduced. This module concludes by introducing the mixture models technique for probabilistically assigning points to clusters.
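The K-means idea can be sketched with Lloyd's algorithm, which alternates an assignment step and a centroid-update step until the clusters settle (the toy points are invented; scikit-learn's `KMeans` is the practical choice):

```python
import math
import random

def k_means(points, k, iters=20, seed=1):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids, clusters

pts = [(1, 1), (1.5, 1.2), (0.8, 0.9), (8, 8), (8.5, 7.9), (7.9, 8.2)]
centroids, clusters = k_means(pts, 2)
print(sorted(len(c) for c in clusters))  # two clusters of three points
```

Unlike K-means, DBSCAN does not need the number of clusters up front: it grows clusters from dense neighborhoods and leaves isolated points unassigned, which is why it pairs naturally with the anomaly detection topics in the next module.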


Module 8: Introduction to Anomaly Detection

This module introduces the concept of an anomaly, or outlier, and different techniques for identifying these unusual data points. First, the general concept of an anomaly is discussed and demonstrated in the business community via the detection of fraud, which in general should be an anomaly when compared to normal customers or operations. Next, statistical techniques for identifying outliers are introduced, which often involve simple descriptive statistics that can highlight data that are sufficiently far from the norm for a given data set. Finally, machine learning techniques are reviewed that can either classify outliers or identify points in low-density areas (or outside normal clusters) as potential outliers.
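The simple statistical approach mentioned above can be sketched with z-scores: flag any point that lies more than a chosen number of standard deviations from the mean (the transaction amounts below are invented for illustration):

```python
import math

def z_score_outliers(data, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    return [x for x in data if abs(x - mean) / std > threshold]

# Daily transaction amounts: one value is wildly out of line.
amounts = [20, 22, 19, 21, 23, 20, 18, 22, 500]
print(z_score_outliers(amounts, threshold=2.0))  # flags 500
```

Note that an extreme outlier inflates both the mean and the standard deviation, which can mask smaller anomalies; robust alternatives based on the median, or the clustering- and density-based methods above, avoid this weakness.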

 
