Commit 94e65f2b authored by Antoine RICHARD

update plots + add refs

parent d9f17cb0
......@@ -48,7 +48,6 @@ target/
# temporary files
tmp/
results/
experiment_files/
# Rmarkdown
......
@article{bezdek1984,
title={FCM: The fuzzy c-means clustering algorithm},
author={Bezdek, James C and Ehrlich, Robert and Full, William},
journal={Computers \& Geosciences},
volume={10},
number={2-3},
pages={191--203},
year={1984},
publisher={Elsevier}
}
@article{boutell2004,
title={Learning multi-label scene classification},
......
......@@ -26,10 +26,232 @@ the Mulan Java Library, developed by @tsoumakas2011mulan,
which contains a plethora of classification algorithms
as well as tools to evaluate them.
# Classification systems
## BPMLL
Implementation of Back-Propagation Multi-Label Learning,
based on the work of @zhang2006.
BPMLL is based on a neural network.
According to our criterion of "transparency", BPMLL is the least "transparent"
classification system evaluated.
## MLkNN
Implementation of the Multi-Label k-Nearest Neighbours
algorithm, based on the work of @zhang2007.
MLkNN combines the k-Nearest Neighbours algorithm with
label co-occurrence probabilities.
According to our criterion of "transparency", MLkNN is the third least
"transparent" classification system evaluated.
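The MAP rule at the heart of MLkNN can be sketched in a few lines. Below is a minimal, illustrative Python re-implementation (the experiments themselves use Mulan's Java version; `ml_knn` and its toy setup are assumptions for illustration, not the library's API):

```python
def ml_knn(X, Y, x_new, k=2, s=1.0):
    """Sketch of MLkNN: for each label, decide membership by comparing
    P(has label) * P(c neighbours have it | has label) against the
    complementary hypothesis, with Laplace-style smoothing s."""
    n, n_labels = len(X), len(Y[0])
    d2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))

    def knn(pool, point):
        # indices of the k nearest training instances (squared Euclidean)
        return sorted(pool, key=lambda i: d2(X[i], point))[:k]

    out = []
    for l in range(n_labels):
        # smoothed prior P(H1) = P(an instance carries label l)
        p_h1 = (s + sum(y[l] for y in Y)) / (2 * s + n)
        # kh1[c] / kh0[c]: how many training instances with / without label l
        # have exactly c neighbours carrying label l
        kh1, kh0 = [0] * (k + 1), [0] * (k + 1)
        for i in range(n):
            c = sum(Y[j][l] for j in knn([j for j in range(n) if j != i], X[i]))
            (kh1 if Y[i][l] else kh0)[c] += 1
        c_new = sum(Y[j][l] for j in knn(range(n), x_new))
        p_c_h1 = (s + kh1[c_new]) / (s * (k + 1) + sum(kh1))
        p_c_h0 = (s + kh0[c_new]) / (s * (k + 1) + sum(kh0))
        out.append(1 if p_h1 * p_c_h1 > (1 - p_h1) * p_c_h0 else 0)
    return out
```

The counts `kh1`/`kh0` are exactly the "co-occurrence probabilities" mentioned above, estimated from the training set.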
## RAkEL
Implementation of the RAndom k-labELsets algorithm,
based on the work of @tsoumakas2011rakel.
RAkEL is a multi-label classifier built on top of classifiers
designed for single-label classification problems.
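RAkEL's two ingredients, drawing random k-labelsets and majority-voting the per-labelset predictions, can be sketched as follows (hypothetical Python helpers; the single-label base classifier is abstracted away as precomputed predictions):

```python
import random

def rakel_labelsets(n_labels, k=3, m=5, seed=0):
    """Draw m random k-labelsets: RAkEL trains one label-powerset
    classifier per labelset. Illustrative sketch only."""
    rng = random.Random(seed)
    return [tuple(sorted(rng.sample(range(n_labels), k))) for _ in range(m)]

def rakel_vote(labelset_predictions, n_labels, threshold=0.5):
    """Aggregate per-labelset predictions by majority vote.
    labelset_predictions maps a labelset tuple to a dict {label: 0/1}."""
    votes, counts = [0.0] * n_labels, [0] * n_labels
    for labelset, pred in labelset_predictions.items():
        for l in labelset:
            votes[l] += pred[l]
            counts[l] += 1
    # a label is on when more than `threshold` of its voters said so
    return [1 if counts[l] and votes[l] / counts[l] > threshold else 0
            for l in range(n_labels)]
```

Each base model only ever sees a small label powerset, which is why any single-label classifier (C4.5, Naive Bayes, RIPPER, SMO below) can be plugged in.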
### C4.5
Implementation of the C4.5 algorithm, based on the
work of @quinlan1993.
C4.5 is based on decision trees and Shannon entropy.
According to our criterion of "transparency", C4.5 is the third
most "transparent" classification system evaluated.
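The entropy-based split criterion can be computed directly; a small Python sketch (hypothetical helpers; C4.5 itself refines this information gain into a gain ratio):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy H = -sum p * log2(p) over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Gain of splitting on `attribute` (a column index): H(labels) minus
    the size-weighted entropy of the induced partitions."""
    n = len(labels)
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attribute], []).append(y)
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    return entropy(labels) - remainder
```

A split that perfectly separates the classes has remainder 0, hence maximal gain; the tree greedily picks the highest-gain attribute at each node.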
### Naive Bayes
Implementation of a Naive Bayes classification algorithm,
based on the work of @john1995.
Naive Bayes is based on probabilities and Bayes' theorem.
According to our criterion of "transparency", Naive Bayes
with RAkEL is the second most "transparent" classification system evaluated.
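Bayes' theorem plus the conditional-independence assumption reduces classification to multiplying per-attribute likelihoods. A minimal categorical sketch with Laplace smoothing (illustrative Python, not Mulan's implementation):

```python
from collections import Counter

def naive_bayes(train_rows, train_labels, row, alpha=1.0):
    """Pick the class c maximizing P(c) * prod_j P(x_j | c),
    with Laplace smoothing alpha for unseen attribute values."""
    classes = Counter(train_labels)
    n = len(train_labels)

    def score(c):
        rows_c = [r for r, y in zip(train_rows, train_labels) if y == c]
        prior = classes[c] / n
        likelihood = 1.0
        for j, v in enumerate(row):
            values = {r[j] for r in train_rows}          # domain of attribute j
            count = sum(1 for r in rows_c if r[j] == v)  # co-occurrences with c
            likelihood *= (count + alpha) / (len(rows_c) + alpha * len(values))
        return prior * likelihood

    return max(classes, key=score)
```

Its "transparency" comes from the fact that every factor in the product is a readable frequency estimated from the training data.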
### RIPPER
Implementation of the Repeated Incremental Pruning
to Produce Error Reduction algorithm,
based on the work of @cohen1995.
RIPPER is a rule-based classifier relying on Shannon entropy.
According to our criterion of "transparency", RIPPER is
the fourth most "transparent" classification system evaluated.
### SMO
Implementation of the sequential minimal optimization
algorithm for training a support vector classifier,
based on the work of @platt1998, @hastie1998
and @keerthi2001.
SMO is based on the aggregation of several mathematical functions.
According to our criterion of "transparency", SMO is
the second least "transparent" classification system evaluated.
## HistBayes
A version of the Naive Bayes algorithm, proposed for this study,
which discretizes numerical variables using histograms.
It applies the Naive Bayes algorithm to each label independently.
HistBayes is based on histograms, probabilities and Bayes' theorem.
According to our criterion of "transparency", HistBayes is the most "transparent"
classification system evaluated.
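The discretization step can be illustrated with equal-width binning (an assumption made for illustration, since the binning scheme is not specified here; `histogram_discretize` is a hypothetical helper, not the study's code):

```python
def histogram_discretize(values, n_bins=4):
    """Map a numeric variable to equal-width histogram bin indices,
    so that a categorical Naive Bayes can be applied afterwards."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0   # guard against a constant column
    # clamp the maximum into the last bin
    return [min(int((v - lo) / width), n_bins - 1) for v in values]
```

After this step each numeric attribute behaves like a nominal one, and the per-bin frequencies feed the Bayes' theorem computation directly.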
## FuzzyBayes
A version of the Naive Bayes algorithm, proposed for this study,
which discretizes numerical variables using the fuzzy c-means
clustering algorithm proposed by @bezdek1984.
FuzzyBayes is based on fuzzy sets, probabilities and Bayes' theorem.
According to our criterion of "transparency", FuzzyBayes is the fifth
most "transparent" classification system evaluated.
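Fuzzy c-means alternates membership updates and weighted centroid updates. A minimal one-dimensional Python sketch of the iteration from @bezdek1984 (illustrative only; initialization and stopping are simplified):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means: returns (centers, U) where U[i][j] is the
    membership of xs[i] in cluster j, with fuzzifier m > 1."""
    centers = [min(xs), max(xs)] if c == 2 else list(xs[:c])  # crude init
    for _ in range(iters):
        # membership: u_j(x) = 1 / sum_k (d_j / d_k)^(2/(m-1))
        U = []
        for x in xs:
            ds = [abs(x - ck) or 1e-12 for ck in centers]
            U.append([1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in ds)
                      for dj in ds])
        # centers: membership-weighted means
        centers = [sum(u[j] ** m * x for u, x in zip(U, xs)) /
                   sum(u[j] ** m for u in U) for j in range(c)]
    return centers, U
```

The soft memberships `U` are the fuzzy sets FuzzyBayes feeds into the Bayes' theorem computation, in place of HistBayes' hard histogram bins.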
# Datasets
The datasets used for this test plan are listed below.
According to @tsoumakas2007,
a dataset $D$ is defined by a set of instances,
a set of attributes $X$ (nominal or numeric)
and a set of labels $L$ (with 0 or 1 as value).
A dataset consists of $|D|$ multi-label instances
$(x_i, Y_i), i = 1..|D|$,
where $Y_i \subseteq L$ is the set of labels equal
to 1 for instance $i$.
Common descriptive measures of a dataset are:
- number of instances $|D|$
- number of attributes $|X|$
- number of labels $|L|$
- label cardinality $LC(D)$
$$LC(D) = \frac{1}{|D|}\sum_{i=1}^{|D|}|Y_i|$$
- label density $LD(D)$
$$LD(D) = \frac{1}{|D|}\sum_{i=1}^{|D|}\frac{|Y_i|}{|L|}$$
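Both measures follow directly from the 0/1 label matrix, and $LD(D) = LC(D)/|L|$ since $|L|$ is constant across instances. A small Python sketch (`label_stats` is a hypothetical helper, not part of the study's tooling):

```python
def label_stats(Y):
    """Label cardinality LC(D) and label density LD(D) of a 0/1 label
    matrix Y, one row per instance, following the formulas above."""
    D, L = len(Y), len(Y[0])
    cardinality = sum(sum(row) for row in Y) / D   # mean |Y_i|
    return cardinality, cardinality / L            # LD = LC / |L|
```

For example, the Consultations dataset below has $LC = 5.4$ labels per instance out of $|L| = 18$, hence $LD = 0.3$.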
## Birds
Proposed by @briggs2013.
Statistics:
- instances: 645
- attributes:
    - nominal: 2
    - numeric: 258
- labels: 19
- cardinality: 1.014
- density: 0.053
## CAL500
Proposed by @turnbull2008.
Statistics:
- instances: 502
- attributes:
    - nominal: 0
    - numeric: 68
- labels: 174
- cardinality: 26.004
- density: 0.149
## Consultations
Proposed for this experiment.
Statistics:
- instances: 50
- attributes:
    - nominal: 2
    - numeric: 2
- labels: 18
- cardinality: 5.4
- density: 0.3
## Emotions
Proposed by @trohidis2008.
Statistics:
- instances: 593
- attributes:
    - nominal: 0
    - numeric: 72
- labels: 6
- cardinality: 1.869
- density: 0.312
## Genbase
Proposed by @diplaris2005.
Statistics:
- instances: 662
- attributes:
    - nominal: 1186
    - numeric: 0
- labels: 27
- cardinality: 1.252
- density: 0.046
## Medical
Proposed by @pestian2007.
Statistics:
- instances: 978
- attributes:
    - nominal: 1449
    - numeric: 0
- labels: 45
- cardinality: 1.245
- density: 0.028
## Scene
Proposed by @boutell2004.
Statistics:
- instances: 2407
- attributes:
    - nominal: 0
    - numeric: 294
- labels: 6
- cardinality: 2.402
- density: 0.4
# Performances
```{r loadCsv}
results <- read.csv("results/crossvalidation.csv", sep=";")
lessTransparent2MostTransparent <- c("BPMLL","RAkEL+SMO","MLkNN","FuzzyBayes","RAkEL+Ripper","RAkEL+C4.5","RAkEL+NaiveBayes","HistBayes")
mostTransparent2lessTransparent <- rev(lessTransparent2MostTransparent)
```
## Hamming Loss
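Hamming loss is the fraction of instance-label pairs predicted incorrectly; the figures below come from Mulan's cross-validation, but the standard definition can be sketched in Python:

```python
def hamming_loss(Y_true, Y_pred):
    """Fraction of wrongly predicted instance-label pairs over
    0/1 label matrices with one row per instance. Lower is better."""
    D, L = len(Y_true), len(Y_true[0])
    wrong = sum(yt != yp
                for row_t, row_p in zip(Y_true, Y_pred)
                for yt, yp in zip(row_t, row_p))
    return wrong / (D * L)
```

Because it averages over all $|D| \cdot |L|$ pairs, a low Hamming loss is easy to reach on sparse datasets (low label density) by predicting mostly zeros.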
......@@ -50,7 +272,7 @@ ggplot(
aes(
x=Dataset,
y=Hamming_Loss,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -68,8 +290,10 @@ geom_errorbar(
) +
ylim(0.0,1.0) +
ylab("Hamming Loss") +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ggtitle("Hamming losses of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r hammingLossBox}
......@@ -81,27 +305,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylim(0.0,1.0) +
ylab("Hamming Loss") +
ggtitle("Distribution of Hamming losses of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r hammingLossLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Hamming_Loss
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
xlab("Learner") +
ylim(0.0,1.0) +
ylab("Hamming Loss") +
ggtitle("Distribution of Hamming losses by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
## Precision
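Micro-averaging pools true and false positives over all labels before dividing, while macro-averaging averages the per-label precisions; the two conventions can be sketched as follows (illustrative Python; here an undefined per-label precision, where nothing was predicted, is counted as 0, which may differ from Mulan's convention):

```python
def micro_macro_precision(Y_true, Y_pred):
    """Micro- and macro-averaged precision over 0/1 label matrices."""
    L = len(Y_true[0])
    tp, fp = [0] * L, [0] * L
    for row_t, row_p in zip(Y_true, Y_pred):
        for l in range(L):
            if row_p[l] == 1:
                if row_t[l] == 1:
                    tp[l] += 1
                else:
                    fp[l] += 1
    micro = sum(tp) / max(sum(tp) + sum(fp), 1)
    per_label = [tp[l] / max(tp[l] + fp[l], 1) for l in range(L)]
    return micro, sum(per_label) / L
```

Micro-averaging is dominated by frequent labels, whereas macro-averaging weights every label equally, which is why the two plots below can disagree.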
......@@ -114,7 +341,7 @@ ggplot(
aes(
x=Dataset,
y=Micro.averaged_Precision,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -132,8 +359,10 @@ geom_errorbar(
) +
ylim(0.0,1.0) +
ylab("Micro-averaged Precision") +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ggtitle("Micro-averaged precisions of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r microPrecisionBox}
......@@ -145,27 +374,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylim(0.0,1.0) +
ylab("Micro-averaged Precision") +
ggtitle("Distribution of micro-averaged precisions of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r microPrecisionLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Micro.averaged_Precision
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
xlab("Learner") +
ylim(0.0,1.0) +
ylab("Micro-averaged Precision") +
ggtitle("Distribution of micro-averaged precisions by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
### Macro-averaged
......@@ -176,7 +408,7 @@ ggplot(
aes(
x=Dataset,
y=Macro.averaged_Precision,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -194,8 +426,10 @@ geom_errorbar(
) +
ylim(0.0,1.0) +
ylab("Macro-averaged Precision") +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ggtitle("Macro-averaged precisions of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r macroPrecisionBox}
......@@ -207,27 +441,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylim(0.0,1.0) +
ylab("Macro-averaged Precision") +
ggtitle("Distribution of macro-averaged precisions of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r macroPrecisionLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Macro.averaged_Precision
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
xlab("Learner") +
ylim(0.0,1.0) +
ylab("Macro-averaged Precision") +
ggtitle("Distribution of macro-averaged precisions by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
## Recall
......@@ -240,7 +477,7 @@ ggplot(
aes(
x=Dataset,
y=Micro.averaged_Recall,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -258,8 +495,10 @@ geom_errorbar(
) +
ylim(0.0,1.0) +
ylab("Micro-averaged Recall") +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ggtitle("Micro-averaged recalls of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r microRecallBox}
......@@ -271,27 +510,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylim(0.0,1.0) +
ylab("Micro-averaged Recall") +
ggtitle("Distribution of micro-averaged recalls of multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r microRecallLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Micro.averaged_Recall
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
xlab("Learner") +
ylim(0.0,1.0) +
ylab("Micro-averaged Recall") +
ggtitle("Distribution of micro-averaged recalls by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
### Macro-averaged
......@@ -302,7 +544,7 @@ ggplot(
aes(
x=Dataset,
y=Macro.averaged_Recall,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -319,9 +561,11 @@ geom_errorbar(
position = position_dodge(.9)
) +
ylim(0.0,1.0) +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ylab("Macro-averaged Recall") +
ggtitle("Macro-averaged recalls of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r macroRecallBox}
......@@ -333,27 +577,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
ylim(0.0,1.0) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylab("Macro-averaged Recall") +
ggtitle("Distribution of macro-averaged recalls of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r macroRecallLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Macro.averaged_Recall
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
ylim(0.0,1.0) +
xlab("Learner") +
ylab("Macro-averaged Recall") +
ggtitle("Distribution of macro-averaged recalls by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
## F-Measure
......@@ -366,7 +613,7 @@ ggplot(
aes(
x=Dataset,
y=Micro.averaged_F.Measure,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -383,9 +630,11 @@ geom_errorbar(
position = position_dodge(.9)
) +
ylim(0.0,1.0) +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ylab("Micro-averaged F-Measure") +
ggtitle("Micro-averaged F-Measure of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r microFBox}
......@@ -397,27 +646,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Learner)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
ylim(0.0,1.0) +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ylab("Micro-averaged F-Measure") +
ggtitle("Distribution of micro-averaged F-measures of different multi-label classification systems by datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
ggtitle("Distribution of micro-averaged F-measures of different multi-label classification systems by dataset") +
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r microFLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Micro.averaged_F.Measure
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
ylim(0.0,1.0) +
xlab("Learner") +
ylab("Micro-averaged F-Measure") +
ggtitle("Distribution of micro-averaged F-measures by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
ggtitle("Distribution of micro-averaged F-measures by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
### Macro-averaged
......@@ -428,7 +680,7 @@ ggplot(
aes(
x=Dataset,
y=Macro.averaged_F.Measure,
fill=Learner
fill=factor(Learner,levels = mostTransparent2lessTransparent)
)
) +
geom_bar(
......@@ -446,8 +698,10 @@ geom_errorbar(
) +
ylim(0.0,1.0) +
ylab("Macro-averaged F-Measure") +
labs(fill="Learner")+
scale_fill_manual(values = cbPalette)+
ggtitle("Macro-averaged F-Measure of multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=10, face="bold", hjust=0.5))
```
```{r macroFBox}
......@@ -459,29 +713,30 @@ ggplot(
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=factor(Learner,levels = c("HistBayes","RAkEL+NaiveBayes","RAkEL+C4.5","RAkEL+Ripper","FuzzyBayes","MLkNN","RAkEL+SMO","BPMLL")))) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=factor(Learner,levels = mostTransparent2lessTransparent))) +
ylim(0.0,1.0) +
ylab("Macro-averaged F-Measure") +
labs(color="Learner")+
scale_colour_manual(values = cbPalette)+
ggtitle("Distribution of macro-averaged F-measures of different multi-label classification systems by dataset") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
```{r macroFLearnerBox}
ggplot(
results,
aes(
x=Learner,
x=factor(Learner,levels = lessTransparent2MostTransparent),
y=Macro.averaged_F.Measure
)
) +
geom_boxplot() +
geom_jitter(shape=16, position=position_jitter(0.1), aes(colour=Dataset)) +
geom_jitter(shape=15, size=2, position=position_jitter(0.2), aes(colour=Dataset)) +
ylim(0.0,1.0) +
ylab("Macro-averaged F-Measure") +
xlab("Learner") +
ggtitle("Distribution of macro-averaged F-measures by multi-label classification systems for different datasets") +
theme(plot.title = element_text(size=14, face="bold", hjust=0.5))
theme(plot.title = element_text(size=9, face="bold", hjust=0.5))
```
# References
......
Learner;Dataset;Hamming_Loss;Hamming_Loss_std;Micro-averaged_Precision;Micro-averaged_Precision_std;Micro-averaged_Recall;Micro-averaged_Recall_std;Micro-averaged_F-Measure;Micro-averaged_F-Measure_std;Macro-averaged_Precision;Macro-averaged_Precision_std;Macro-averaged_Recall;Macro-averaged_Recall_std;Macro-averaged_F-Measure;Macro-averaged_F-Measure_std
FuzzyBayes;medical;0.0261308647170208;0.0015527700038920993;0.5465056309871376;0.032904847837644864;0.33567845549353614;0.03648306471187917;0.41461663908475926;0.03142704013686496;0.627613388398476;0.043431117987806765;0.5500096344947297;0.050720660226542744;0.5668242909047728;0.04834991870153477
RAkEL+NaiveBayes;medical;0.024700189354092128;0.0015253064603210618;0.5728816841826683;0.045393520374548246;0.442090417169524;0.04881124164455429;0.49632612554351196;0.03279466115221816;0.549997184281072;0.05727732542375166;0.5398983167606498;0.04984192553219552;0.5377620295437542;0.05113466148900289
RAkEL+Ripper;medical;0.010113846225775535;0.001264310313015008;0.8034965285445672;0.02308743040444655;0.8414388417674333;0.03602200725743984;0.8214902808631155;0.02153710485475672;0.7597516634288464;0.051619198023519185;0.7887620124331691;0.05507584916761995;0.7675530669026653;0.05138988008564799
BPMLL;medical;0.37333052808752354;0.3040729718922969;0.022490442545897753;0.018452413239805412;0.5035277379592725;0.4444473197494315;0.0428459645436325;0.035140686074660005;0.3406638679446845;0.12162288189974413;0.573675756342423;0.13701182780789334;0.3525350325884102;0.11316918666509396
RAkEL+SMO;medical;0.010134651798863876;0.001284632962678238;0.8399940698693271;0.025565632812202012;0.7847776733147355;0.03325772482330665;0.8108927380556545;0.021451642324512743;0.7726180168657293;0.06227810158485568;0.763626846651186;0.06437938274354361;0.7607358452110728;0.061652635021905544
MLkNN;medical;0.01511302802908104;0.0017900963031357733;0.8190955686931588;0.04744866381388705;0.5818934499298638;0.03907705817117778;0.6800361892519409;0.0401382951984258;0.6889664508331176;0.047883675990246656;0.6461037608045707;0.04259898588736652;0.6573950541280649;0.043682054151882324
HistBayes;medical;0.0261308647170208;0.0015527700038920993;0.5465056309871376;0.032904847837644864;0.33567845549353614;0.03648306471187917;0.41461663908475926;0.03142704013686496;0.627613388398476;0.043431117987806765;0.5500096344947297;0.050720660226542744;0.5668242909047728;0.04834991870153477
RAkEL+C4.5;medical;0.010161067863571545;0.0012211500623122585;0.831135149879594;0.029142415482411702;0.7940324094199238;0.0293298627840365;0.8118565426832527;0.02454739459615031;0.7630323195712603;0.04538668040415538;0.7612717650662095;0.044810063835292006;0.7566518138495689;0.04345996020736577
FuzzyBayes;emotions;0.22599811676082862;0.015214046353867534;0.6138606111380435;0.028466729129367447;0.7449162802313973;0.031238473243524096;0.6723236334314222;0.019748587746171612;0.6019328122167563;0.026023952588083175;0.7338455558203105;0.031206787218615608;0.6557222701939528;0.01789801521254333
RAkEL+NaiveBayes;emotions;0.25402542372881354;0.022841207424170857;0.5763675115251562;0.039051889787060354;0.7105570303562673;0.03286342932812904;0.6354121310747721;0.027637102572886943;0.5758762723573432;0.03875460443820559;0.7074573197544707;0.03771719348592492;0.6222596052115715;0.025453192434960137
RAkEL+Ripper;emotions;0.21474576271186438;0.016660704669537355;0.6894392085448591;0.042776315991092405;0.5669782375669623;0.03118043173808695;0.6216092713129449;0.030260251439651914;0.6397156497867058;0.0701756694244393;0.5492147623070645;0.029439987039157598;0.5622551666023673;0.026298153366154094
BPMLL;emotions;0.20268832391713745;0.024867997216233194;0.6620824283739347;0.03671805442596619;0.7159200882327069;0.049694340796497774;0.6872899950982667;0.037216839906036754;0.6624147610091194;0.037320907331352474;0.6993305547339455;0.04518499626140986;0.6710623499555802;0.03668492027962018
RAkEL+SMO;emotions;0.1830131826741996;0.024369269509372366;0.7131597967143628;0.03823020683276973;0.6887393045787589;0.04970315967282851;0.7004045490649484;0.042184374157013635;0.7159541759212463;0.03921428116261865;0.6728672664895382;0.051657769430868576;0.6750160915488574;0.04807847865417579
MLkNN;emotions;0.19512241054613938;0.024290027666005395;0.724236992982302;0.05711934583428696;0.6087239194036751;0.050493678916325994;0.659763488248986;0.042323141683226866;0.7330072880085836;0.06915593621121124;0.59215058130702;0.042472901027878415;0.6242834438944406;0.041261324265395256
HistBayes;emotions;0.2442561205273069;0.016547728547760487;0.60331116160733;0.032404799030091226;0.6368546327642347;0.03303832400161101;0.618714596612416;0.022363867134046984;0.5938874306639762;0.03059440924840531;0.6327747711736115;0.03697550405892179;0.6054726251840649;0.020150605133177367
RAkEL+C4.5;emotions;0.21526836158192086;0.025141604859255474;0.6615683955989771;0.05248604130555776;0.630362680546713;0.05191064152593666;0.6450563824728918;0.04915016878148534;0.6543769105011095;0.06015975500376327;0.6191725130836605;0.050616814963891935;0.6282852612627216;0.055215506180225984