Evaluation.evaluateModel

How to use the evaluateModel method in weka.classifiers.evaluation.Evaluation

Best Java code snippets using weka.classifiers.evaluation.Evaluation.evaluateModel (Showing top 20 results out of 315)

Common ways to obtain Evaluation:

Instances data;
Evaluation e = new Evaluation(data);
origin: stackoverflow.com

Classifier cModel = new NaiveBayes();
cModel.buildClassifier(isTrainingSet);

weka.core.SerializationHelper.write("/some/where/nBayes.model", cModel);

Classifier cls = (Classifier) weka.core.SerializationHelper.read("/some/where/nBayes.model");

// Test the model
Evaluation eTest = new Evaluation(isTrainingSet);
eTest.evaluateModel(cls, isTrainingSet);
origin: nz.ac.waikato.cms.weka/distributedWekaBase

m_eval.evaluateModel(m_classifier, test);
if (m_predFrac <= 0) {
 ((AggregateableEvaluationWithPriors) m_eval).deleteStoredPredictions();
origin: nz.ac.waikato.cms.weka/weka-stable

/**
 * Evaluates the classifier on a given set of instances. Note that the data
 * must have exactly the same format (e.g. order of attributes) as the data
 * used to train the classifier! Otherwise the results will generally be
 * meaningless.
 * 
 * @param classifier machine learning classifier
 * @param data set of test instances for evaluation
 * @param forPredictionsPrinting varargs parameter that, if supplied, is
 *          expected to hold a
 *          weka.classifiers.evaluation.output.prediction.AbstractOutput
 *          object
 * @return the predictions
 * @throws Exception if model could not be evaluated successfully
 */
public double[] evaluateModel(Classifier classifier, Instances data,
 Object... forPredictionsPrinting) throws Exception {
 return m_delegate.evaluateModel(classifier, data, forPredictionsPrinting);
}
origin: nz.ac.waikato.cms.weka/weka-stable

throws Exception {
return weka.classifiers.evaluation.Evaluation.evaluateModel(
 classifierString, options);
origin: nz.ac.waikato.cms.weka/weka-stable

return weka.classifiers.evaluation.Evaluation.evaluateModel(classifier,
 options);
origin: nz.ac.waikato.cms.weka/weka-stable

/**
 * A test method for this class. Just extracts the first command line argument
 * as a classifier class name and calls evaluateModel.
 *
 * @param args an array of command line arguments, the first of which must be
 *             the class name of a classifier.
 */
public static void main(String[] args) {
 try {
  if (args.length == 0) {
   throw new Exception("The first argument must be the class name of a classifier");
  }
  String classifier = args[0];
  args[0] = "";
  System.out.println(evaluateModel(classifier, args));
 } catch (Exception ex) {
  ex.printStackTrace();
  System.err.println(ex.getMessage());
 }
}
origin: stackoverflow.com

 Evaluation eval = new Evaluation(data);
eval.evaluateModel(j48DecisionTree, data);
System.out.println(eval.toSummaryString("\nResults\n======\n", true));
origin: stackoverflow.com

Evaluation eval = new Evaluation(train);
 eval.evaluateModel(mlp, train);
 System.out.println(eval.errorRate()); // error rate (root mean squared error if the class is numeric)
 System.out.println(eval.toSummaryString()); // summary of the evaluation on the training data
origin: stackoverflow.com

 //Learning
DataSource source = new DataSource(Path);
Instances data = source.getDataSet();
data.setClassIndex(data.numAttributes() - 1); // the class attribute must be set before training
J48 tree = new J48(); // buildClassifier() returns void, so construct the tree first
tree.buildClassifier(data);

//Evaluation
Evaluation eval = new Evaluation(data);
eval.evaluateModel(tree, data);
System.out.println((eval.correct()/data.numInstances())*100);
origin: stackoverflow.com

 Instances trainData = trainSource.getDataSet(); // get training dataset

SMO sm = new SMO(); // build classifier

sm.buildClassifier(trainData); // train classifier on the training data

Instances testData = testSource.getDataSet(); // now get the test set (from a separate DataSource)

Evaluation eval = new Evaluation(trainData); // for recording results

eval.evaluateModel(sm, testData);

System.out.println(eval.toMatrixString()); // gives the confusion matrix for predictions
origin: stackoverflow.com

 filteredData = new Instances(new BufferedReader(new FileReader("/Users/Passionate/Desktop/train_std.arff")));

Instances filteredTests = new Instances(new BufferedReader(new FileReader("/Users/Passionate/Desktop/test_std.arff")));

filteredData.setClassIndex(filteredData.attribute("@@class@@").index());
filteredTests.setClassIndex(filteredTests.attribute("@@class@@").index()); // the test set needs its class index set as well

Classifier classifier = new SMO();
classifier.buildClassifier(filteredData);

FilteredClassifier filteredClassifier = new FilteredClassifier();
filteredClassifier.setClassifier(classifier);

Evaluation eval = new Evaluation(filteredData);
eval.evaluateModel(filteredClassifier, filteredTests); // error line in the original question

System.out.println(eval.toSummaryString("\nResults\n======\n", false));
origin: stackoverflow.com

InputMappedClassifier mappedCls = new InputMappedClassifier();
 cls.buildClassifier(data);
 mappedCls.setModelHeader(data);
 mappedCls.setClassifier(cls);
 mappedCls.setSuppressMappingReport(true);
 Evaluation eval = new Evaluation(testdata);
 eval.evaluateModel(mappedCls, testdata);
origin: stackoverflow.com

eval.evaluateModel(cls, test);
System.out.println(cls);
System.out.println(eval.toSummaryString("\nResults\n======\n", false));
origin: stackoverflow.com

 public static void classify() {      
  try {            
    Instances train = new Instances (...);            
    train.setClassIndex(train.numAttributes() - 1);         
    Instances test = new Instances (...);            
    test.setClassIndex(test.numAttributes() - 1);                      
    ClassificationType classificationType = ClassificationTypeDAO.get(6);  // 6 is SVM.        
    LibSVM classifier = new LibSVM();
    String options = (classificationType.getParameters());
    String[] optionsArray = options.split(" ");                          
    classifier.setOptions(optionsArray);        
    classifier.buildClassifier(train);        
    Evaluation eval = new Evaluation(train);
    eval.evaluateModel(classifier, test);
    System.out.println(eval.toSummaryString("\nResults\n======\n", false));       
  } 
  catch (Exception ex) {            
    Misc_Utils.printStackTrace(ex);
  }                       
}
origin: stackoverflow.com

eval1.evaluateModel(cls, test);
origin: nz.ac.waikato.cms.weka/weka-stable

 evaluateModel(copiedClassifier, test, forPrinting);
} else {
 evaluateModel(copiedClassifier, test);
weka.classifiers.evaluation.Evaluation.evaluateModel

Javadoc

Evaluates a classifier with the options given in an array of strings.

Valid options are:

-t filename
Name of the file with the training data. (required)

-T filename
Name of the file with the test data. If missing, a cross-validation is performed.

-c index
Index of the class attribute (1, 2, ...; default: last).

-x number
The number of folds for the cross-validation (default: 10).

-no-cv
No cross-validation is performed. If no test file is provided, no evaluation is done.

-split-percentage percentage
Sets the percentage for the train/test set split, e.g., 66.

-preserve-order
Preserves the order in the percentage split instead of randomizing the data first with the seed value ('-s').

-s seed
Random number seed for the cross-validation and percentage split (default: 1).

-m filename
The name of a file containing a cost matrix.

-l filename
Loads classifier from the given file. In case the filename ends with ".xml", a PMML file is loaded or, if that fails, options are loaded from XML.

-d filename
Saves classifier built from the training data into the given file. In case the filename ends with ".xml" the options are saved as XML, not the model.

-v
Outputs no statistics for the training data.

-o
Outputs statistics only, not the classifier.

-output-models-for-training-splits
Output models for training splits if cross-validation or percentage-split evaluation is used.

-do-not-output-per-class-statistics
Do not output statistics per class.

-k
Outputs information-theoretic statistics.

-classifications "weka.classifiers.evaluation.output.prediction.AbstractOutput + options"
Uses the specified class for generating the classification output. E.g.: weka.classifiers.evaluation.output.prediction.PlainText or weka.classifiers.evaluation.output.prediction.CSV

-p range
Outputs predictions for test instances (or the train instances if no test instances provided and -no-cv is used), along with the attributes in the specified range (and nothing else). Use '-p 0' if no attributes are desired.

Deprecated: use "-classifications ..." instead.

-distribution
Outputs the distribution instead of only the prediction in conjunction with the '-p' option (only nominal classes).

Deprecated: use "-classifications ..." instead.

-no-predictions
Turns off the collection of predictions in order to conserve memory.

-r
Outputs cumulative margin distribution (and nothing else).

-g
Only for classifiers that implement "Graphable." Outputs the graph representation of the classifier (and nothing else).

-xml filename | xml-string
Retrieves the options from the XML-data instead of the command line.

-threshold-file file
The file to save the threshold data to. The format is determined by the extensions, e.g., '.arff' for ARFF format or '.csv' for CSV.

-threshold-label label
The class label to determine the threshold data for (default is the first label).
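
The options above correspond to Weka's standard command-line evaluation, which routes through this static evaluateModel method. A minimal sketch of two common invocations (the file names are hypothetical placeholders, and weka.jar is assumed to be on the classpath):

```shell
# Assumes weka.jar is present locally; train.arff / test.arff are placeholder files.

# 10-fold cross-validation of J48 on the training data (seed 1):
java -cp weka.jar weka.classifiers.trees.J48 -t train.arff -x 10 -s 1

# Train on train.arff, evaluate on test.arff, and save the built model:
java -cp weka.jar weka.classifiers.trees.J48 -t train.arff -T test.arff -d j48.model
```

Each flag maps directly to the option list above: -t training file, -T test file, -x number of folds, -s random seed, -d model output file.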

Popular methods of Evaluation

  • <init>
    Initializes all the counters for the evaluation and also takes a cost matrix as parameter.
  • evaluateModelOnceAndRecordPrediction
    Evaluates the supplied distribution on a single instance.
  • toClassDetailsString
    Generates a breakdown of the accuracy for each class, incorporating various information-retrieval statistics.
  • toSummaryString
    Calls toSummaryString() with a default title.
  • areaUnderROC
    Returns the area under ROC for those predictions that have been collected.
  • correct
    Gets the number of instances correctly classified (that is, for which a correct prediction was made).
  • crossValidateModel
    Performs a (stratified if class is nominal) cross-validation for a classifier on a set of instances.
  • numInstances
    Gets the number of test instances that had a known class value (actually the sum of the weights of those instances).
  • predictions
    Returns the predictions that have been collected.
  • toMatrixString
    Outputs the performance statistics as a classification confusion matrix. For each class value, shows the distribution of predicted class values.
  • areaUnderPRC
    Returns the area under the precision-recall curve (AUPRC) for those predictions that have been collected.
  • errorRate
    Returns the estimated error rate or the root mean squared error (if the class is numeric). If a cost matrix was given, this error rate gives the average cost.
  • evaluateModelOnce
  • fMeasure
  • falseNegativeRate
  • getHeader
  • incorrect
  • kappa
  • meanAbsoluteError
