This glossary of 725 machine-learning terms, compiled by several leading experts in the field, is very comprehensive and well worth bookmarking!
英文術(shù)語(yǔ) | 中文翻譯 |
---|---|
0-1 Loss Function | 0-1損失函數(shù) |
Accept-Reject Sampling Method | 接受-拒絕抽樣法/接受-拒絕采樣法 |
Accumulated Error Backpropagation | 累積誤差反向傳播 |
Accuracy | 精度 |
Acquisition Function | 采集函數(shù) |
Action | 動(dòng)作 |
Activation Function | 激活函數(shù) |
Active Learning | 主動(dòng)學(xué)習(xí) |
Adaptive Bitrate Algorithm | 自適應(yīng)比特率算法 |
Adaptive Boosting | AdaBoost |
Adaptive Gradient Algorithm | AdaGrad |
Adaptive Moment Estimation Algorithm | Adam算法 |
Adaptive Resonance Theory | 自適應(yīng)諧振理論 |
Additive Model | 加性模型 |
Affinity Matrix | 親和矩陣 |
Agent | 智能體 |
Algorithm | 算法 |
Alpha-Beta Pruning | α-β剪枝 |
Anomaly Detection | 異常檢測(cè) |
Approximate Inference | 近似推斷 |
Area Under ROC Curve | AUC |
Artificial Intelligence | 人工智能 |
Artificial Neural Network | 人工神經(jīng)網(wǎng)絡(luò) |
Artificial Neuron | 人工神經(jīng)元 |
Attention | 注意力 |
Attention Mechanism | 注意力機(jī)制 |
Attribute | 屬性 |
Attribute Space | 屬性空間 |
Autoencoder | 自編碼器 |
Automatic Differentiation | 自動(dòng)微分 |
Autoregressive Model | 自回歸模型 |
Back Propagation | 反向傳播 |
Back Propagation Algorithm | 反向傳播算法 |
Back Propagation Through Time | 隨時(shí)間反向傳播 |
Backward Induction | 反向歸納 |
Backward Search | 反向搜索 |
Bag of words | 詞袋 |
Bandit | 賭博機(jī)/老虎機(jī) |
Base Learner | 基學(xué)習(xí)器 |
Base Learning Algorithm | 基學(xué)習(xí)算法 |
Baseline | 基準(zhǔn) |
Batch | 批量 |
Batch Normalization | 批量規(guī)范化 |
Bayes Decision Rule | 貝葉斯決策準(zhǔn)則 |
Bayes Model Averaging | 貝葉斯模型平均 |
Bayes Optimal Classifier | 貝葉斯最優(yōu)分類器 |
Bayes' Theorem | 貝葉斯定理 |
Bayesian Decision Theory | 貝葉斯決策理論 |
Bayesian Inference | 貝葉斯推斷 |
Bayesian Learning | 貝葉斯學(xué)習(xí) |
Bayesian Network | 貝葉斯網(wǎng)/貝葉斯網(wǎng)絡(luò) |
Bayesian Optimization | 貝葉斯優(yōu)化 |
Beam Search | 束搜索 |
Benchmark | 基準(zhǔn) |
Belief Network | 信念網(wǎng)/信念網(wǎng)絡(luò) |
Belief Propagation | 信念傳播 |
Bellman Equation | 貝爾曼方程 |
Bernoulli Distribution | 伯努利分布 |
Beta Distribution | 貝塔分布 |
Between-Class Scatter Matrix | 類間散度矩陣 |
BFGS | BFGS |
Bias | 偏差/偏置 |
Bias In Affine Function | 偏置 |
Bias In Statistics | 偏差 |
Bias Shift | 偏置偏移 |
Bias-Variance Decomposition | 偏差-方差分解 |
Bias-Variance Dilemma | 偏差-方差困境 |
Bidirectional Recurrent Neural Network | 雙向循環(huán)神經(jīng)網(wǎng)絡(luò) |
Bigram | 二元語(yǔ)法 |
Bilingual Evaluation Understudy | BLEU |
Binary Classification | 二分類 |
Binomial Distribution | 二項(xiàng)分布 |
Binomial Test | 二項(xiàng)檢驗(yàn) |
Boltzmann Distribution | 玻爾茲曼分布 |
Boltzmann machine | 玻爾茲曼機(jī) |
Boosting | Boosting |
Bootstrap Aggregating | Bagging |
Bootstrap Sampling | 自助采樣法 |
Bootstrapping | 自助法/自舉法 |
Break-Even Point | 平衡點(diǎn) |
Bucketing | 分桶 |
Calculus of Variations | 變分法 |
Cascade-Correlation | 級(jí)聯(lián)相關(guān) |
Catastrophic Forgetting | 災(zāi)難性遺忘 |
Categorical Distribution | 類別分布 |
Cell | 單元 |
Chain Rule | 鏈?zhǔn)椒▌t |
Chebyshev Distance | 切比雪夫距離 |
Class | 類別 |
Class-Imbalance | 類別不平衡 |
Classification | 分類 |
Classification And Regression Tree | 分類與回歸樹(shù) |
Classifier | 分類器 |
Clique | 團(tuán) |
Cluster | 簇 |
Cluster Assumption | 聚類假設(shè) |
Clustering | 聚類 |
Clustering Ensemble | 聚類集成 |
Co-Training | 協(xié)同訓(xùn)練 |
Coding Matrix | 編碼矩陣 |
Collaborative Filtering | 協(xié)同過(guò)濾 |
Competitive Learning | 競(jìng)爭(zhēng)型學(xué)習(xí) |
Comprehensibility | 可理解性 |
Computation Graph | 計(jì)算圖 |
Computational Learning Theory | 計(jì)算學(xué)習(xí)理論 |
Conditional Entropy | 條件熵 |
Conditional Probability | 條件概率 |
Conditional Probability Distribution | 條件概率分布 |
Conditional Random Field | 條件隨機(jī)場(chǎng) |
Conditional Risk | 條件風(fēng)險(xiǎn) |
Confidence | 置信度 |
Confusion Matrix | 混淆矩陣 |
Conjugate Distribution | 共軛分布 |
Connection Weight | 連接權(quán) |
Connectionism | 連接主義 |
Consistency | 一致性 |
Constrained Optimization | 約束優(yōu)化 |
Context Variable | 上下文變量 |
Context Vector | 上下文向量 |
Context Window | 上下文窗口 |
Context Word | 上下文詞 |
Contextual Bandit | 上下文賭博機(jī)/上下文老虎機(jī) |
Contingency Table | 列聯(lián)表 |
Continuous Attribute | 連續(xù)屬性 |
Contrastive Divergence | 對(duì)比散度 |
Convergence | 收斂 |
Convex Optimization | 凸優(yōu)化 |
Convex Quadratic Programming | 凸二次規(guī)劃 |
Convolution | 卷積 |
Convolutional Kernel | 卷積核 |
Convolutional Neural Network | 卷積神經(jīng)網(wǎng)絡(luò) |
Coordinate Descent | 坐標(biāo)下降 |
Corpus | 語(yǔ)料庫(kù) |
Correlation Coefficient | 相關(guān)系數(shù) |
Cosine Similarity | 余弦相似度 |
Cost | 代價(jià) |
Cost Curve | 代價(jià)曲線 |
Cost Function | 代價(jià)函數(shù) |
Cost Matrix | 代價(jià)矩陣 |
Cost-Sensitive | 代價(jià)敏感 |
Covariance | 協(xié)方差 |
Covariance Matrix | 協(xié)方差矩陣 |
Critical Point | 臨界點(diǎn) |
Cross Entropy | 交叉熵 |
Cross Validation | 交叉驗(yàn)證 |
Curse of Dimensionality | 維數(shù)災(zāi)難 |
Cutting Plane Algorithm | 割平面法 |
Data Mining | 數(shù)據(jù)挖掘 |
Data Set | 數(shù)據(jù)集 |
Davidon-Fletcher-Powell | DFP |
Decision Boundary | 決策邊界 |
Decision Function | 決策函數(shù) |
Decision Stump | 決策樹(shù)樁 |
Decision Tree | 決策樹(shù) |
Decoder | 解碼器 |
Decoding | 解碼 |
Deconvolution | 反卷積 |
Deconvolutional Network | 反卷積網(wǎng)絡(luò) |
Deduction | 演繹 |
Deep Belief Network | 深度信念網(wǎng)絡(luò) |
Deep Boltzmann Machine | 深度玻爾茲曼機(jī) |
Deep Convolutional Generative Adversarial Network | 深度卷積生成對(duì)抗網(wǎng)絡(luò) |
Deep Learning | 深度學(xué)習(xí) |
Deep Neural Network | 深度神經(jīng)網(wǎng)絡(luò) |
Deep Q-Network | 深度Q網(wǎng)絡(luò) |
Delta-Bar-Delta | Delta-Bar-Delta |
Denoising | 去噪 |
Denoising Autoencoder | 去噪自編碼器 |
Denoising Score Matching | 去噪分?jǐn)?shù)匹配 |
Density Estimation | 密度估計(jì) |
Density-Based Clustering | 密度聚類 |
Derivative | 導(dǎo)數(shù) |
Determinant | 行列式 |
Diagonal Matrix | 對(duì)角矩陣 |
Dictionary Learning | 字典學(xué)習(xí) |
Dimension Reduction | 降維 |
Directed Edge | 有向邊 |
Directed Graphical Model | 有向圖模型 |
Directed Separation | 有向分離 |
Dirichlet Distribution | 狄利克雷分布 |
Discriminative Model | 判別式模型 |
Discriminator | 判別器 |
Discriminator Network | 判別網(wǎng)絡(luò) |
Distance Measure | 距離度量 |
Distance Metric Learning | 距離度量學(xué)習(xí) |
Distributed Representation | 分布式表示 |
Diverge | 發(fā)散 |
Divergence | 散度 |
Diversity | 多樣性 |
Diversity Measure | 多樣性度量/差異性度量 |
Domain Adaptation | 領(lǐng)域自適應(yīng) |
Dominant Eigenvalue | 主特征值 |
Dominant Strategy | 占優(yōu)策略 |
Down Sampling | 下采樣 |
Dropout | 暫退法 |
Dropout Boosting | 暫退Boosting |
Dropout Method | 暫退法 |
Dual Problem | 對(duì)偶問(wèn)題 |
Dummy Node | 啞結(jié)點(diǎn) |
Dynamic Bayesian Network | 動(dòng)態(tài)貝葉斯網(wǎng)絡(luò) |
Dynamic Programming | 動(dòng)態(tài)規(guī)劃 |
Early Stopping | 早停 |
Eigendecomposition | 特征分解 |
Eigenvalue | 特征值 |
Element-Wise Product | 逐元素積 |
Embedding | 嵌入 |
Empirical Conditional Entropy | 經(jīng)驗(yàn)條件熵 |
Empirical Distribution | 經(jīng)驗(yàn)分布 |
Empirical Entropy | 經(jīng)驗(yàn)熵 |
Empirical Error | 經(jīng)驗(yàn)誤差 |
Empirical Risk | 經(jīng)驗(yàn)風(fēng)險(xiǎn) |
Empirical Risk Minimization | 經(jīng)驗(yàn)風(fēng)險(xiǎn)最小化 |
Encoder | 編碼器 |
Encoding | 編碼 |
End-To-End | 端到端 |
Energy Function | 能量函數(shù) |
Energy-Based Model | 基于能量的模型 |
Ensemble Learning | 集成學(xué)習(xí) |
Ensemble Pruning | 集成修剪 |
Entropy | 熵 |
Episode | 回合 |
Epoch | 輪 |
Error | 誤差 |
Error Backpropagation Algorithm | 誤差反向傳播算法 |
Error Backpropagation | 誤差反向傳播 |
Error Correcting Output Codes | 糾錯(cuò)輸出編碼 |
Error Rate | 錯(cuò)誤率 |
Error-Ambiguity Decomposition | 誤差-分歧分解 |
Estimator | 估計(jì)/估計(jì)量 |
Euclidean Distance | 歐氏距離 |
Evidence | 證據(jù) |
Evidence Lower Bound | 證據(jù)下界 |
Exact Inference | 精確推斷 |
Example | 樣例 |
Expectation | 期望 |
Expectation Maximization | 期望最大化 |
Expected Loss | 期望損失 |
Expert System | 專家系統(tǒng) |
Exploding Gradient | 梯度爆炸 |
Exponential Loss Function | 指數(shù)損失函數(shù) |
Factor | 因子 |
Factorization | 因子分解 |
Feature | 特征 |
Feature Engineering | 特征工程 |
Feature Map | 特征圖 |
Feature Selection | 特征選擇 |
Feature Vector | 特征向量 |
Feature Learning | 特征學(xué)習(xí) |
Feedforward | 前饋 |
Feedforward Neural Network | 前饋神經(jīng)網(wǎng)絡(luò) |
Few-Shot Learning | 少樣本學(xué)習(xí)/少試學(xué)習(xí) |
Filter | 濾波器 |
Fine-Tuning | 微調(diào) |
Fluctuation | 振蕩 |
Forget Gate | 遺忘門 |
Forward Propagation | 前向傳播/正向傳播 |
Forward Stagewise Algorithm | 前向分步算法 |
Fractionally Strided Convolution | 微步卷積 |
Frobenius Norm | Frobenius 范數(shù) |
Full Padding | 全填充 |
Functional | 泛函 |
Functional Neuron | 功能神經(jīng)元 |
Gated Recurrent Unit | 門控循環(huán)單元 |
Gated RNN | 門控RNN |
Gaussian Distribution | 高斯分布 |
Gaussian Kernel | 高斯核 |
Gaussian Kernel Function | 高斯核函數(shù) |
Gaussian Mixture Model | 高斯混合模型 |
Gaussian Process | 高斯過(guò)程 |
Generalization Ability | 泛化能力 |
Generalization Error | 泛化誤差 |
Generalization Error Bound | 泛化誤差上界 |
Generalize | 泛化 |
Generalized Lagrange Function | 廣義拉格朗日函數(shù) |
Generalized Linear Model | 廣義線性模型 |
Generalized Rayleigh Quotient | 廣義瑞利商 |
Generative Adversarial Network | 生成對(duì)抗網(wǎng)絡(luò) |
Generative Model | 生成式模型 |
Generator | 生成器 |
Generator Network | 生成器網(wǎng)絡(luò) |
Genetic Algorithm | 遺傳算法 |
Gibbs Distribution | 吉布斯分布 |
Gibbs Sampling | 吉布斯采樣/吉布斯抽樣 |
Gini Index | 基尼指數(shù) |
Global Markov Property | 全局馬爾可夫性 |
Global Minimum | 全局最小 |
Gradient | 梯度 |
Gradient Clipping | 梯度截?cái)?/td> |
Gradient Descent | 梯度下降 |
Gradient Descent Method | 梯度下降法 |
Gradient Exploding Problem | 梯度爆炸問(wèn)題 |
Gram Matrix | Gram 矩陣 |
Graph Convolutional Network | 圖卷積神經(jīng)網(wǎng)絡(luò)/圖卷積網(wǎng)絡(luò) |
Graph Neural Network | 圖神經(jīng)網(wǎng)絡(luò) |
Graphical Model | 圖模型 |
Grid Search | 網(wǎng)格搜索 |
Ground Truth | 真實(shí)值 |
Hadamard Product | Hadamard積 |
Hamming Distance | 漢明距離 |
Hard Margin | 硬間隔 |
Hebbian Rule | 赫布法則 |
Hidden Layer | 隱藏層 |
Hidden Markov Model | 隱馬爾可夫模型 |
Hidden Variable | 隱變量 |
Hierarchical Clustering | 層次聚類 |
Hilbert Space | 希爾伯特空間 |
Hinge Loss Function | 合頁(yè)損失函數(shù)/Hinge損失函數(shù) |
Hold-Out | 留出法 |
Hyperparameter | 超參數(shù) |
Hyperparameter Optimization | 超參數(shù)優(yōu)化 |
Hypothesis | 假設(shè) |
Hypothesis Space | 假設(shè)空間 |
Hypothesis Test | 假設(shè)檢驗(yàn) |
Identity Matrix | 單位矩陣 |
Imitation Learning | 模仿學(xué)習(xí) |
Importance Sampling | 重要性采樣 |
Improved Iterative Scaling | 改進(jìn)的迭代尺度法 |
Incremental Learning | 增量學(xué)習(xí) |
Independent and Identically Distributed | 獨(dú)立同分布 |
Indicator Function | 指示函數(shù) |
Individual Learner | 個(gè)體學(xué)習(xí)器 |
Induction | 歸納 |
Inductive Bias | 歸納偏好 |
Inductive Learning | 歸納學(xué)習(xí) |
Inductive Logic Programming | 歸納邏輯程序設(shè)計(jì) |
Inference | 推斷 |
Information Entropy | 信息熵 |
Information Gain | 信息增益 |
Inner Product | 內(nèi)積 |
Instance | 示例 |
Internal Covariate Shift | 內(nèi)部協(xié)變量偏移 |
Inverse Matrix | 逆矩陣 |
Inverse Resolution | 逆歸結(jié) |
Isometric Mapping | 等度量映射 |
Jacobian Matrix | 雅可比矩陣 |
Jensen Inequality | Jensen不等式 |
Joint Probability Distribution | 聯(lián)合概率分布 |
K-Armed Bandit Problem | k-搖臂老虎機(jī) |
K-Fold Cross Validation | k 折交叉驗(yàn)證 |
Karush-Kuhn-Tucker Condition | KKT條件 |
Karush–Kuhn–Tucker | Karush–Kuhn–Tucker |
Kernel Function | 核函數(shù) |
Kernel Method | 核方法 |
Kernel Trick | 核技巧 |
Kernelized Linear Discriminant Analysis | 核線性判別分析 |
KL Divergence | KL散度 |
L-BFGS | L-BFGS |
Label | 標(biāo)簽 |
Label Space | 標(biāo)記空間 |
Lagrange Duality | 拉格朗日對(duì)偶性 |
Lagrange Multiplier | 拉格朗日乘子 |
Language Model | 語(yǔ)言模型 |
Laplace Smoothing | 拉普拉斯平滑 |
Laplacian Correction | 拉普拉斯修正 |
Latent Dirichlet Allocation | 潛在狄利克雷分配 |
Latent Semantic Analysis | 潛在語(yǔ)義分析 |
Latent Variable | 潛變量/隱變量 |
Law of Large Numbers | 大數(shù)定律 |
Layer Normalization | 層規(guī)范化 |
Lazy Learning | 懶惰學(xué)習(xí) |
Leaky Relu | 泄漏修正線性單元/泄漏整流線性單元 |
Learner | 學(xué)習(xí)器 |
Learning | 學(xué)習(xí) |
Learning By Analogy | 類比學(xué)習(xí) |
Learning Rate | 學(xué)習(xí)率 |
Learning Vector Quantization | 學(xué)習(xí)向量量化 |
Least Square Method | 最小二乘法 |
Least Squares Regression Tree | 最小二乘回歸樹(shù) |
Left Singular Vector | 左奇異向量 |
Likelihood | 似然 |
Linear Chain Conditional Random Field | 線性鏈條件隨機(jī)場(chǎng) |
Linear Classification Model | 線性分類模型 |
Linear Classifier | 線性分類器 |
Linear Dependence | 線性相關(guān) |
Linear Discriminant Analysis | 線性判別分析 |
Linear Model | 線性模型 |
Linear Regression | 線性回歸 |
Link Function | 聯(lián)系函數(shù) |
Local Markov Property | 局部馬爾可夫性 |
Local Minima | 局部極小 |
Local Minimum | 局部極小 |
Local Representation | 局部式表示/局部式表征 |
Log Likelihood | 對(duì)數(shù)似然函數(shù) |
Log Linear Model | 對(duì)數(shù)線性模型 |
Log-Likelihood | 對(duì)數(shù)似然 |
Log-Linear Regression | 對(duì)數(shù)線性回歸 |
Logistic Function | 對(duì)數(shù)幾率函數(shù) |
Logistic Regression | 對(duì)數(shù)幾率回歸 |
Logit | 對(duì)數(shù)幾率 |
Long Short Term Memory | 長(zhǎng)短期記憶 |
Long Short-Term Memory Network | 長(zhǎng)短期記憶網(wǎng)絡(luò) |
Loopy Belief Propagation | 環(huán)狀信念傳播 |
Loss Function | 損失函數(shù) |
Low Rank Matrix Approximation | 低秩矩陣近似 |
Machine Learning | 機(jī)器學(xué)習(xí) |
Macro-R | 宏查全率 |
Manhattan Distance | 曼哈頓距離 |
Manifold | 流形 |
Manifold Assumption | 流形假設(shè) |
Manifold Learning | 流形學(xué)習(xí) |
Margin | 間隔 |
Marginal Distribution | 邊緣分布 |
Marginal Independence | 邊緣獨(dú)立性 |
Marginalization | 邊緣化 |
Markov Chain | 馬爾可夫鏈 |
Markov Chain Monte Carlo | 馬爾可夫鏈蒙特卡羅 |
Markov Decision Process | 馬爾可夫決策過(guò)程 |
Markov Network | 馬爾可夫網(wǎng)絡(luò) |
Markov Process | 馬爾可夫過(guò)程 |
Markov Random Field | 馬爾可夫隨機(jī)場(chǎng) |
Mask | 掩碼 |
Matrix | 矩陣 |
Matrix Inversion | 矩陣求逆 |
Max Pooling | 最大匯聚 |
Maximal Clique | 最大團(tuán) |
Maximum Entropy Model | 最大熵模型 |
Maximum Likelihood Estimation | 極大似然估計(jì) |
Maximum Margin | 最大間隔 |
Mean Field | 平均場(chǎng) |
Mean Pooling | 平均匯聚 |
Mean Squared Error | 均方誤差 |
Mean-Field | 平均場(chǎng) |
Memory Network | 記憶網(wǎng)絡(luò) |
Message Passing | 消息傳遞 |
Metric Learning | 度量學(xué)習(xí) |
Micro-R | 微查全率 |
Minibatch | 小批量 |
Minimal Description Length | 最小描述長(zhǎng)度 |
Minimax Game | 極小極大博弈 |
Minkowski Distance | 閔可夫斯基距離 |
Mixture of Experts | 混合專家模型 |
Mixture-of-Gaussian | 高斯混合 |
Model | 模型 |
Model Selection | 模型選擇 |
Momentum Method | 動(dòng)量法 |
Monte Carlo Method | 蒙特卡羅方法 |
Moral Graph | 端正圖/道德圖 |
Moralization | 道德化 |
Multi-Class Classification | 多分類 |
Multi-Head Attention | 多頭注意力 |
Multi-Head Self-Attention | 多頭自注意力 |
Multi-Kernel Learning | 多核學(xué)習(xí) |
Multi-Label Learning | 多標(biāo)記學(xué)習(xí) |
Multi-Layer Feedforward Neural Networks | 多層前饋神經(jīng)網(wǎng)絡(luò) |
Multi-Layer Perceptron | 多層感知機(jī) |
Multinomial Distribution | 多項(xiàng)分布 |
Multidimensional Scaling | 多維縮放 |
Multiple Linear Regression | 多元線性回歸 |
Multitask Learning | 多任務(wù)學(xué)習(xí) |
Multivariate Normal Distribution | 多元正態(tài)分布 |
Mutual Information | 互信息 |
N-Gram Model | N元模型 |
Naive Bayes Classifier | 樸素貝葉斯分類器 |
Naive Bayes | 樸素貝葉斯 |
Nearest Neighbor Classifier | 最近鄰分類器 |
Negative Log Likelihood | 負(fù)對(duì)數(shù)似然函數(shù) |
Neighbourhood Component Analysis | 近鄰成分分析 |
Net Input | 凈輸入 |
Neural Network | 神經(jīng)網(wǎng)絡(luò) |
Neural Turing Machine | 神經(jīng)圖靈機(jī) |
Neuron | 神經(jīng)元 |
Newton Method | 牛頓法 |
No Free Lunch Theorem | 沒(méi)有免費(fèi)午餐定理 |
Noise-Contrastive Estimation | 噪聲對(duì)比估計(jì) |
Nominal Attribute | 列名屬性 |
Non-Convex Optimization | 非凸優(yōu)化 |
Non-Metric Distance | 非度量距離 |
Non-Negative Matrix Factorization | 非負(fù)矩陣分解 |
Non-Ordinal Attribute | 無(wú)序?qū)傩?/td> |
Norm | 范數(shù) |
Normal Distribution | 正態(tài)分布 |
Normalization | 規(guī)范化 |
Nuclear Norm | 核范數(shù) |
Number of Epochs | 輪數(shù) |
Numerical Attribute | 數(shù)值屬性 |
Object Detection | 目標(biāo)檢測(cè) |
Oblique Decision Tree | 斜決策樹(shù) |
Occam's Razor | 奧卡姆剃刀 |
Odds | 幾率 |
Off-Policy | 異策略 |
On-Policy | 同策略 |
One-Dependent Estimator | 獨(dú)依賴估計(jì) |
One-Hot | 獨(dú)熱 |
Online Learning | 在線學(xué)習(xí) |
Optimizer | 優(yōu)化器 |
Ordinal Attribute | 有序?qū)傩?/td> |
Orthogonal | 正交 |
Orthogonal Matrix | 正交矩陣 |
Out-Of-Bag Estimate | 包外估計(jì) |
Outlier | 異常點(diǎn) |
Over-Parameterized | 過(guò)度參數(shù)化 |
Overfitting | 過(guò)擬合 |
Oversampling | 過(guò)采樣 |
Pac-Learnable | PAC可學(xué)習(xí) |
Padding | 填充 |
Pairwise Markov Property | 成對(duì)馬爾可夫性 |
Parallel Distributed Processing | 并行分布式處理 |
Parameter | 參數(shù) |
Parameter Estimation | 參數(shù)估計(jì) |
Parameter Space | 參數(shù)空間 |
Parameter Tuning | 調(diào)參 |
Parametric ReLU | 參數(shù)化修正線性單元/參數(shù)化整流線性單元 |
Part-Of-Speech Tagging | 詞性標(biāo)注 |
Partial Derivative | 偏導(dǎo)數(shù) |
Partially Observable Markov Decision Process | 部分可觀測(cè)馬爾可夫決策過(guò)程 |
Partition Function | 配分函數(shù) |
Perceptron | 感知機(jī) |
Performance Measure | 性能度量 |
Perplexity | 困惑度 |
Pointer Network | 指針網(wǎng)絡(luò) |
Policy | 策略 |
Policy Gradient | 策略梯度 |
Policy Iteration | 策略迭代 |
Polynomial Kernel Function | 多項(xiàng)式核函數(shù) |
Pooling | 匯聚 |
Pooling Layer | 匯聚層 |
Positive Definite Matrix | 正定矩陣 |
Post-Pruning | 后剪枝 |
Potential Function | 勢(shì)函數(shù) |
Power Method | 冪法 |
Pre-Training | 預(yù)訓(xùn)練 |
Precision | 查準(zhǔn)率/準(zhǔn)確率 |
Prepruning | 預(yù)剪枝 |
Primal Problem | 主問(wèn)題 |
Primary Visual Cortex | 初級(jí)視覺(jué)皮層 |
Principal Component Analysis | 主成分分析 |
Prior | 先驗(yàn) |
Probabilistic Context-Free Grammar | 概率上下文無(wú)關(guān)文法 |
Probabilistic Graphical Model | 概率圖模型 |
Probabilistic Model | 概率模型 |
Probability Density Function | 概率密度函數(shù) |
Probability Distribution | 概率分布 |
Probably Approximately Correct | 概率近似正確 |
Proposal Distribution | 提議分布 |
Prototype-Based Clustering | 原型聚類 |
Proximal Gradient Descent | 近端梯度下降 |
Pruning | 剪枝 |
Quadratic Loss Function | 平方損失函數(shù) |
Quadratic Programming | 二次規(guī)劃 |
Quasi Newton Method | 擬牛頓法 |
Radial Basis Function | 徑向基函數(shù) |
Random Forest | 隨機(jī)森林 |
Random Sampling | 隨機(jī)采樣 |
Random Search | 隨機(jī)搜索 |
Random Variable | 隨機(jī)變量 |
Random Walk | 隨機(jī)游走 |
Recall | 查全率/召回率 |
Receptive Field | 感受野 |
Reconstruction Error | 重構(gòu)誤差 |
Rectified Linear Unit | 修正線性單元/整流線性單元 |
Recurrent Neural Network | 循環(huán)神經(jīng)網(wǎng)絡(luò) |
Recursive Neural Network | 遞歸神經(jīng)網(wǎng)絡(luò) |
Regression | 回歸 |
Regularization | 正則化 |
Regularizer | 正則化項(xiàng) |
Reinforcement Learning | 強(qiáng)化學(xué)習(xí) |
Relative Entropy | 相對(duì)熵 |
Reparameterization | 再參數(shù)化/重參數(shù)化 |
Representation | 表示 |
Representation Learning | 表示學(xué)習(xí) |
Representer Theorem | 表示定理 |
Reproducing Kernel Hilbert Space | 再生核希爾伯特空間 |
Rescaling | 再縮放 |
Reset Gate | 重置門 |
Residual Connection | 殘差連接 |
Residual Network | 殘差網(wǎng)絡(luò) |
Restricted Boltzmann Machine | 受限玻爾茲曼機(jī) |
Reward | 獎(jiǎng)勵(lì) |
Ridge Regression | 嶺回歸 |
Right Singular Vector | 右奇異向量 |
Risk | 風(fēng)險(xiǎn) |
Robustness | 穩(wěn)健性 |
Root Node | 根結(jié)點(diǎn) |
Rule Learning | 規(guī)則學(xué)習(xí) |
Saddle Point | 鞍點(diǎn) |
Sample | 樣本 |
Sample Complexity | 樣本復(fù)雜度 |
Sample Space | 樣本空間 |
Scalar | 標(biāo)量 |
Selective Ensemble | 選擇性集成 |
Self Information | 自信息 |
Self-Attention | 自注意力 |
Self-Organizing Map | 自組織映射網(wǎng) |
Self-Training | 自訓(xùn)練 |
Semi-Definite Programming | 半正定規(guī)劃 |
Semi-Naive Bayes Classifiers | 半樸素貝葉斯分類器 |
Semi-Restricted Boltzmann Machine | 半受限玻爾茲曼機(jī) |
Semi-Supervised Clustering | 半監(jiān)督聚類 |
Semi-Supervised Learning | 半監(jiān)督學(xué)習(xí) |
Semi-Supervised Support Vector Machine | 半監(jiān)督支持向量機(jī) |
Sentiment Analysis | 情感分析 |
Separating Hyperplane | 分離超平面 |
Sequential Covering | 序貫覆蓋 |
Sigmoid Belief Network | Sigmoid信念網(wǎng)絡(luò) |
Sigmoid Function | Sigmoid函數(shù) |
Signed Distance | 帶符號(hào)距離 |
Similarity Measure | 相似度度量 |
Simulated Annealing | 模擬退火 |
Simultaneous Localization And Mapping | 即時(shí)定位與地圖構(gòu)建 |
Singular Value | 奇異值 |
Singular Value Decomposition | 奇異值分解 |
Skip-Gram Model | 跳元模型 |
Smoothing | 平滑 |
Soft Margin | 軟間隔 |
Soft Margin Maximization | 軟間隔最大化 |
Softmax | Softmax/軟最大化 |
Softmax Function | Softmax函數(shù)/軟最大化函數(shù) |
Softmax Regression | Softmax回歸/軟最大化回歸 |
Softplus Function | Softplus函數(shù) |
Span | 張成子空間 |
Sparse Coding | 稀疏編碼 |
Sparse Representation | 稀疏表示 |
Sparsity | 稀疏性 |
Specialization | 特化 |
Splitting Variable | 切分變量 |
Squashing Function | 擠壓函數(shù) |
Standard Normal Distribution | 標(biāo)準(zhǔn)正態(tài)分布 |
State | 狀態(tài) |
State Value Function | 狀態(tài)值函數(shù) |
State-Action Value Function | 狀態(tài)-動(dòng)作值函數(shù) |
Stationary Distribution | 平穩(wěn)分布 |
Stationary Point | 駐點(diǎn) |
Statistical Learning | 統(tǒng)計(jì)學(xué)習(xí) |
Steepest Descent | 最速下降法 |
Stochastic Gradient Descent | 隨機(jī)梯度下降 |
Stochastic Matrix | 隨機(jī)矩陣 |
Stochastic Process | 隨機(jī)過(guò)程 |
Stratified Sampling | 分層采樣 |
Stride | 步幅 |
Structural Risk | 結(jié)構(gòu)風(fēng)險(xiǎn) |
Structural Risk Minimization | 結(jié)構(gòu)風(fēng)險(xiǎn)最小化 |
Subsample | 子采樣 |
Subsampling | 下采樣 |
Subset Search | 子集搜索 |
Subspace | 子空間 |
Supervised Learning | 監(jiān)督學(xué)習(xí) |
Support Vector | 支持向量 |
Support Vector Expansion | 支持向量展式 |
Support Vector Machine | 支持向量機(jī) |
Surrogate Loss | 替代損失 |
Surrogate Function | 替代函數(shù) |
Surrogate Loss Function | 代理?yè)p失函數(shù) |
Symbolism | 符號(hào)主義 |
Tangent Propagation | 正切傳播 |
Teacher Forcing | 強(qiáng)制教學(xué) |
Temporal-Difference Learning | 時(shí)序差分學(xué)習(xí) |
Tensor | 張量 |
Test Error | 測(cè)試誤差 |
Test Sample | 測(cè)試樣本 |
Test Set | 測(cè)試集 |
Threshold | 閾值 |
Threshold Logic Unit | 閾值邏輯單元 |
Threshold-Moving | 閾值移動(dòng) |
Tied Weight | 捆綁權(quán)重 |
Tikhonov Regularization | Tikhonov正則化 |
Time Delay Neural Network | 時(shí)延神經(jīng)網(wǎng)絡(luò) |
Time Homogenous Markov Chain | 時(shí)間齊次馬爾可夫鏈 |
Time Step | 時(shí)間步 |
Token | 詞元 |
Tokenization | 詞元化 |
Tokenizer | 詞元分析器 |
Topic Model | 話題模型 |
Topic Modeling | 話題分析 |
Trace | 跡 |
Training | 訓(xùn)練 |
Training Error | 訓(xùn)練誤差 |
Training Sample | 訓(xùn)練樣本 |
Training Set | 訓(xùn)練集 |
Transductive Learning | 直推學(xué)習(xí) |
Transductive Transfer Learning | 直推遷移學(xué)習(xí) |
Transfer Learning | 遷移學(xué)習(xí) |
Transformer | Transformer |
Transformer Model | Transformer模型 |
Transpose | 轉(zhuǎn)置 |
Transposed Convolution | 轉(zhuǎn)置卷積 |
Trial And Error | 試錯(cuò) |
Trigram | 三元語(yǔ)法 |
Turing Machine | 圖靈機(jī) |
Underfitting | 欠擬合 |
Undersampling | 欠采樣 |
Undirected Graphical Model | 無(wú)向圖模型 |
Uniform Distribution | 均勻分布 |
Unigram | 一元語(yǔ)法 |
Unit | 單元 |
Universal Approximation Theorem | 通用近似定理 |
Universal Approximator | 通用近似器 |
Universal Function Approximator | 通用函數(shù)近似器 |
Unknown Token | 未知詞元 |
Unsupervised Layer-Wise Training | 無(wú)監(jiān)督逐層訓(xùn)練 |
Unsupervised Learning | 無(wú)監(jiān)督學(xué)習(xí) |
Update Gate | 更新門 |
Upsampling | 上采樣 |
V-Structure | V型結(jié)構(gòu) |
Validation Set | 驗(yàn)證集 |
Validity Index | 有效性指標(biāo) |
Value Function Approximation | 值函數(shù)近似 |
Value Iteration | 值迭代 |
Vanishing Gradient Problem | 梯度消失問(wèn)題 |
Vapnik-Chervonenkis Dimension | VC維 |
Variable Elimination | 變量消去 |
Variance | 方差 |
Variational Autoencoder | 變分自編碼器 |
Variational Inference | 變分推斷 |
Vector | 向量 |
Vector Space Model | 向量空間模型 |
Version Space | 版本空間 |
Viterbi Algorithm | 維特比算法 |
Vocabulary | 詞表 |
Warp | 線程束 |
Weak Learner | 弱學(xué)習(xí)器 |
Weakly Supervised Learning | 弱監(jiān)督學(xué)習(xí) |
Weight | 權(quán)重 |
Weight Decay | 權(quán)重衰減 |
Weight Sharing | 權(quán)共享 |
Weighted Voting | 加權(quán)投票 |
Whitening | 白化 |
Winner-Take-All | 勝者通吃 |
Within-Class Scatter Matrix | 類內(nèi)散度矩陣 |
Word Embedding | 詞嵌入 |
Word Sense Disambiguation | 詞義消歧 |
Word Vector | 詞向量 |
Zero Padding | 零填充 |
Zero-Shot Learning | 零樣本學(xué)習(xí)/零試學(xué)習(xí) |
Zipf's Law | 齊普夫定律 |
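A few of the entries above (0-1 Loss Function, Cross Entropy, Cosine Similarity) can be made concrete with a small sketch. The code below is a minimal, self-contained illustration written for this glossary — the function names and toy inputs are our own, not from any particular library or from the article itself.

```python
import math

def zero_one_loss(y_true, y_pred):
    # 0-1 Loss Function (0-1損失函數(shù)): counts a loss of 1 per
    # misclassified example, 0 otherwise; returned as an average.
    return sum(int(t != p) for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(p, q, eps=1e-12):
    # Cross Entropy (交叉熵): H(p, q) = -sum_i p_i * log(q_i),
    # with a small eps to avoid log(0).
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

def cosine_similarity(a, b):
    # Cosine Similarity (余弦相似度): dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy usage with made-up labels and vectors:
print(zero_one_loss([1, 0, 1, 1], [1, 1, 1, 0]))   # half the labels disagree
print(cross_entropy([1.0, 0.0], [0.9, 0.1]))       # close to -log(0.9)
print(cosine_similarity([1, 2], [2, 4]))           # parallel vectors
```

These three definitions are standard textbook forms; libraries such as scikit-learn expose equivalent metrics, but the point here is only to connect the glossary terms to runnable arithmetic.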