Training set

  • Web: training set; train set; training data; training dataset
  1. Then it introduces the training set and training procedure of the SVM classifier.

  2. Suppose A is a training set and B is a subset of A, generated by selecting representative examples from A.

  3. By contrast, training sets like the ones Google and Facebook own are private.

  4. Finally, the SVM algorithm is used to train on the training set, build a classifier, and test it on the test set.
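The train-then-test workflow described in the example above can be sketched with scikit-learn (assumed available); the synthetic dataset and parameter choices are illustrative only:

```python
# Sketch of the train-then-test SVM workflow, assuming scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data stands in for a real training corpus.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")               # build the classifier on the training set
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # evaluate on the held-out test set
```

The key point is that `fit` sees only the training portion, so `score` on the test portion estimates generalization rather than memorization.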

  5. A new method is proposed for partitioning the training set in Cascade SVM.

  6. A Learning Algorithm for Discrete Hopfield Networks with an Arbitrarily Given Training Set

  7. We implemented a multi-layer neural network program in C, using known DNA promoter sequences and random DNA sequences as the training set, and trained the network with the back-propagation algorithm.

  8. The quantized set of index factors is used as the SVM training set, and a CSD evaluation model is built with a one-against-one classification strategy.

  9. The training set included 38 samples: 27 ALL and 11 AML.

  10. The original Bagging algorithm selects training sets randomly with replacement to train the individual networks.
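The resampling step behind Bagging can be sketched in plain Python; the data and the choice of three learners are illustrative assumptions:

```python
import random

# Minimal sketch of the Bagging resampling step: each individual learner is
# trained on a bootstrap sample drawn from the training set with replacement,
# so the samples overlap but are not identical.
def bootstrap_sample(training_set, rng):
    return [rng.choice(training_set) for _ in range(len(training_set))]

rng = random.Random(0)
training_set = list(range(10))
# one resampled training set per individual network (3 learners here)
samples = [bootstrap_sample(training_set, rng) for _ in range(3)]
```

Each bootstrap sample has the same size as the original training set but typically omits some examples and repeats others, which is what decorrelates the individual networks.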

  11. In the training-set images, the coordinates of the relevant feature points are labelled by hand and the Jets are extracted at those positions; for sample face images, the corresponding feature points are located automatically.

  12. The incremental learning method is combined with the tree-building algorithm to yield a new algorithm that can handle a continuously growing training set in an efficient and timely way; tree pruning is merged into the tree-building phase, as in the PUBLIC algorithm.

  13. However, GMM depends strongly on the data: with a limited training set, reliable estimation of too many model parameters cannot be guaranteed, which limits the performance of the GMM model.

  14. The results demonstrate that the accuracy and generalization performance of SVM are better than those of BPN as the training set shrinks.

  15. To overcome the slow training and classification speed of support vector machines (SVM) on large-scale training sets, a simplified SVM model based on guard vectors is proposed.

  16. The QSAR model built on the training set is superior or roughly comparable to the results in the literature.

  17. Control charts are used as information charts for extracting trend patterns; the training set is constructed from fuzzified control-chart pattern data, and the trained BP neural network is then used to classify and recognize trend patterns automatically.

  18. The natural support of K-nearest-neighbor classification for incremental learning exactly meets the requirement of spam filtering that the training set be updated continuously.
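Why k-NN is naturally incremental can be sketched in plain Python: "training" is just storing labelled samples, so updating the training set is an append with no retraining. The class name, 2-D features, and spam/ham labels are illustrative assumptions:

```python
from collections import Counter

# Minimal sketch of incremental k-NN: the model IS the training set,
# so adding a newly labelled mail is a constant-time append.
class IncrementalKNN:
    def __init__(self, k=3):
        self.k = k
        self.samples = []  # list of (feature_vector, label) pairs

    def add(self, x, label):
        self.samples.append((x, label))  # incremental update: no retraining

    def predict(self, x):
        # vote among the k nearest stored samples (squared Euclidean distance)
        by_dist = sorted(
            self.samples,
            key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        votes = Counter(label for _, label in by_dist[: self.k])
        return votes.most_common(1)[0][0]

knn = IncrementalKNN(k=3)
for vec, lab in [((0, 0), "ham"), ((0, 1), "ham"), ((1, 0), "ham"),
                 ((5, 5), "spam"), ((5, 6), "spam"), ((6, 5), "spam")]:
    knn.add(vec, lab)
```

A real filter would extract text features instead of fixed vectors, but the update path is the same: each user correction just calls `add`.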

  19. By contrast, the SVM-based method proposed in this paper achieves automatic segmentation simply by changing the corresponding training-set data, making it a more effective and feasible approach.

  20. First, to enlarge the classifier's training set, an EM_SVM classification algorithm is proposed, based on an analysis of the traditional SVM algorithm and the EM_NB algorithm and their models.

  21. Color quantization significantly reduces the number of color vectors in the training set, enabling real-time SVM training and classification.
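The reduction step can be sketched in plain Python: snapping each RGB channel onto a coarse grid collapses near-duplicate color vectors, so far fewer distinct vectors reach the SVM. The step size of 64 and the sample pixels are illustrative assumptions:

```python
# Minimal sketch of color quantization as training-set reduction:
# nearby colors map to the same grid cell and are deduplicated.
def quantize_colors(pixels, step=64):
    return sorted({tuple((c // step) * step for c in px) for px in pixels})

# two near-identical dark pixels collapse into one quantized vector
pixels = [(10, 20, 30), (12, 22, 28), (200, 100, 50)]
reduced = quantize_colors(pixels)
```

On real images the effect is dramatic: millions of raw pixels typically reduce to a few hundred distinct quantized vectors, which is what makes SVM training feasible in real time.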

  22. For SVM, this paper presents an algorithm for kernel-function selection and parameter tuning that obtains the optimal parameter settings for a given training set.

  23. Results: the goodness of fit of the RBF network on the training set was 97.3%, and its classification accuracy on the test set was 95.4%.

  24. However, many problems in SVM research remain to be solved, such as model selection and learning efficiency on large-scale training sets.

  25. Synthetic GEIs are first generated to enrich the number of training-set samples.

  26. A 3D-QSAR model with strong predictive power within the training set was established, and the resulting model was analyzed.

  27. Cloud theory is used to build a model of each attribute of the training set, expressing the degree to which each attribute value belongs to its class center Ex.

  28. All feature vectors were divided into a training set and a test set by cross-validation; BP neural networks, wavelet neural networks, and support vector machines with different kernel functions were used for training and testing, and the test results were then compared and analyzed.
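The split step behind such experiments can be sketched in plain Python; full k-fold cross-validation repeats it with rotating folds, and a single shuffled split is shown here for brevity (the 80/20 ratio is an illustrative assumption):

```python
import random

# Minimal sketch of a train/test split: shuffle the indices of the feature
# vectors, then hold out the last fraction as the test set.
def split_train_test(vectors, test_ratio=0.2, seed=0):
    idx = list(range(len(vectors)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_ratio))
    return [vectors[i] for i in idx[:cut]], [vectors[i] for i in idx[cut:]]

train, test = split_train_test(list(range(100)))
```

Shuffling before cutting matters: if the data are ordered by class, an unshuffled split would put whole classes entirely into one side.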

  29. The algorithm first prunes the training set of the target class, then builds a Steiner-minimal-tree covering model on the retained typical samples.

  30. Therefore, finding an effective, low-complexity method for partitioning the training set that yields relatively balanced subsets is very important for the M3 network.