By pull-down menu:

Before starting the run, ensure that feature transformations (normalization or standardization) are turned on for class prediction runs.  These options instruct Explorer to use the results (min/max, and mean and s.d.) from the training data in each training fold to normalize or standardize feature values in the test objects.  It is inappropriate to normalize or standardize all objects in the dataset together, since this results in information leakage.  By using results from the training data to normalize or standardize feature values in the test fold, no information from the test objects is used during training.
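As a side note, the short sketch below shows what fold-wise standardization looks like in code, using scikit-learn and its built-in copy of the wine data; the 10-fold split and the naive Bayes classifier are illustrative assumptions, not a description of Explorer's internal implementation.

# Fold-wise standardization: fit the scaler on the training fold only,
# then apply its mean and s.d. to the held-out test fold.
from sklearn.datasets import load_wine
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

fold_acc = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    scaler = StandardScaler().fit(X[train_idx])            # mean and s.d. from the training fold only
    clf = GaussianNB().fit(scaler.transform(X[train_idx]), y[train_idx])
    fold_acc.append(clf.score(scaler.transform(X[test_idx]), y[test_idx]))  # test fold scaled with training statistics
print(f"mean 10-fold accuracy: {sum(fold_acc) / len(fold_acc):.3f}")

The point is simply that the scaler never sees the test fold when it is fit, so test objects cannot influence the transformation used during training.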

To turn these transformations on, select Preferences, then General default settings, and then select the options outlined below:
By workflow icon:
To select features, right-click on the yellow XUSelect icon, and select Select features:
A popup window for feature selection will then appear (left, below):

For this example run, we are using the 3-class wine.xlsx dataset, which is distributed with Explorer CE.   To perform class prediction using all models or methods, in the Analysis pull-down menu, select Class Prediction and then Run all methods:
Select the features highlighted in the left panel below, select class as the class feature, and accept all of the other options:
After you click on Apply, the run will begin.  You will notice the following progress bar, which shows the completion of resamplings:
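For intuition about what a run over all methods with resampling involves, here is a rough scikit-learn analogue on its built-in 3-class wine data; the particular classifiers, the standardization step, and the 10-fold resampling are illustrative assumptions and need not match the method set or settings Explorer CE actually uses.

# Cross-validated comparison of several classifiers on the wine data.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "SVM (RBF kernel)": SVC(),
    "Random forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    # The scaler sits inside the pipeline, so each training fold's
    # mean and s.d. are reused on its own test fold (no leakage).
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=10)
    print(f"{name:20s} mean accuracy = {scores.mean():.3f} (s.d. {scores.std():.3f})")

The per-classifier mean and s.d. printed here are the same kind of summary that appears in the per-classifier accuracy output described further below.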

When using the workflow (icons) for a run, to see the parameter selection popup window shown to the left, right-click on the green-colored run icon, and select Edit parameters:
Once processing has completed, you will see the following output icons:
When using the workflow (icons) for a pipeline, you can run all the tasks in the current workflow by clicking on the green-colored button shown below:
Otherwise, to run a single task in the workflow, right-click on the specific green run icon and select Run:
Click on the Bootstrap Bias icon for the spreadsheet, and you will see the average 0.632 accuracy from 10 iterations at each object sample size.  During each random draw, the relevant number of objects is sampled and the 0.632 accuracy is determined; this is repeated 10 times (i.e., 10 bootstrap iterations):
If you click on the NBC image icon, you will see a plot of the mean and s.d. of the in-bag, out-of-bag, and 0.632 accuracy as a function of the number of randomly drawn objects:
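To make the 0.632 calculation concrete, the sketch below estimates in-bag, out-of-bag, and 0.632 accuracy from 10 bootstrap iterations at several object sample sizes, assuming that "NBC" denotes a naive Bayes classifier; the Gaussian naive Bayes model, the sample-size grid, and the resampling bookkeeping are illustrative assumptions rather than Explorer's implementation.

# 0.632 bootstrap accuracy at increasing object sample sizes.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)              # 3-class wine data
n_total = len(y)

for n in (30, 60, 120, n_total):               # object sample sizes (assumed grid)
    acc_632 = []
    for _ in range(10):                        # 10 bootstrap iterations per sample size
        idx = rng.choice(n_total, size=n, replace=True)    # in-bag draw (with replacement)
        oob = np.setdiff1d(np.arange(n_total), idx)        # objects never drawn this iteration
        clf = GaussianNB().fit(X[idx], y[idx])
        a_in = clf.score(X[idx], y[idx])                   # in-bag (apparent) accuracy
        a_oob = clf.score(X[oob], y[oob])                  # out-of-bag accuracy
        acc_632.append(0.368 * a_in + 0.632 * a_oob)       # 0.632 estimator
    print(f"n={n:3d}  0.632 accuracy: mean={np.mean(acc_632):.3f}, s.d.={np.std(acc_632):.3f}")

Averaging and taking the s.d. of the in-bag and out-of-bag accuracies in the same way gives the other two curves in the plot.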
Click on the Accuracy by Classifier icon, and you will see a line plot of the mean accuracy for each classifier: