Research Article
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

Xianfeng Yuan,¹ Mumin Song,¹ Fengyu Zhou,¹ Zhumin Chen,² and Yan Li¹

¹School of Control Science and Engineering, Shandong University, Jinan 250061, China
²School of Computer Science and Technology, Shandong University, Jinan 250101, China
Copyright © 2015 Xianfeng Yuan et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Wheeled robots have been successfully applied in many fields, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on the Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system but also has better performance in stability and diagnosis accuracy compared with traditional methods.
Moreover, wheeled robots are well equipped with multiple sensors, which implies that large data volumes containing robot running status information are available. Those large data volumes imply difficulties in system modeling, while they provide the information required by data-driven fault diagnosis methods.

Principal component analysis (PCA) is a typical representative of the data-driven fault diagnosis methods. PCA is more suitable for fault detection than for diagnosis, because it does not use the input-output relationships [14]. Therefore, in order to improve the diagnosis ability, PCA is often combined with classifiers, such as the neural network (NN) and the support vector machine (SVM). This hybrid method has been applied in the fault diagnosis of rotating machinery [15], power transmission systems [16], and some other fields [17, 18]. PCA is useful for extracting and interpreting process information from massive data sets, and pattern recognition techniques can then be used to diagnose the specific running status of the robot.

Nevertheless, two main problems exist in the above hybrid diagnosis methods. On the one hand, most studies adopt an existing classical kernel (e.g., the Gaussian kernel or the polynomial kernel) as the kernel function of the SVM, while new kernel functions with better classification performance still need to be proposed, proved, and applied to the robot fault diagnosis field. On the other hand, the diagnosed objects are usually complex and carry varying degrees of uncertainty. A single PCA model cannot achieve full and complete awareness of the diagnosed object, so information fusion at the data level or the decision level is needed to reduce the existing uncertainties.

Mittag-Leffler functions [19, 20] play fundamental roles in fractional calculus; they exhibit behavior intermediate between the exponential function, the power function, and polynomial functions. Nowadays, fractional calculus has been successfully applied in many fields, such as the fractional Fourier transform in signal processing [21] and fractional order PI controllers [22]. Inspired by fractional calculus, a novel fractional Gaussian kernel named the ML-kernel is proposed in this paper as a generalization of the traditional Gaussian kernel. The proposed ML-kernel is proved to be positive definite, and its diagnosis performance is discussed. Besides, a hybrid fault diagnosis framework for the robot driving system is presented based on Dempster-Shafer (D-S) fusion and the ML-kernel support vector machine (SVM). Multiple PCA models are established for fault feature extraction, and the fault feature vectors are used as the inputs of the ML-kernel SVM classifiers. The classifiers output preliminary fault diagnosis results, which are fused by D-S fusion, and the fusion result is taken as the final diagnosis result. Two sets of comparative experiments are carried out to validate the proposed method.

The remainder of this paper is organized as follows. Section 2 briefly introduces the SVM method and proves the positive definiteness of the presented ML-kernel. In Section 3, the proposed fault diagnosis framework is described in detail. Section 4 illustrates the architecture of the experimental wheeled robot driving system and the application studies for various fault conditions. Section 5 is devoted to conclusions.

2. SVM Algorithm and the Presented ML-Kernel Function

2.1. Conventional SVM Algorithm. In the past few years, SVM has been one of the most highly studied topics in the machine learning field, and it has been successfully applied in practice, especially to classification problems (e.g., fault diagnosis) [23, 24]. Based on the statistical theory of the VC dimension and the structural risk minimization inductive principle, SVM reaches the best compromise between model complexity and learning ability and pursues the best generalization ability. The basic SVM [25] deals with linearly separable two-class cases, and it can cope with nonlinear problems by introducing kernel functions and a slack penalty. Given a training set $S_t = \{x_i, y_i\}_{i=1}^{N}$, where $x_i \in R^d$ is the $i$th training input vector, $N$ is the number of training data, $d$ is the dimension of the input data, and $y_i \in \{-1, 1\}$ is the classification tag for training, the optimal hyperplane separating the data can be obtained as a solution of the following optimization problem:

$$\min_{\omega, b, \xi_i} \left( \frac{1}{2}\omega^T\omega + c\sum_{i=1}^{N}\xi_i \right), \quad \text{s.t. } y_i\left(\omega^T x_i + b\right) \ge 1 - \xi_i, \ \xi_i \ge 0, \ i = 1, 2, \ldots, N, \tag{1}$$

where $c$ is the slack penalty, $\omega$ is the adjustable weight vector, $b$ is the offset of the hyperplane, and $\xi_i$ is the distance between the margin and an $x_i$ lying on the wrong side. The equivalent Lagrangian dual problem can be described as follows:

$$\max_{\alpha} \left( \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{N} y_i y_j \alpha_i \alpha_j x_i^T x_j \right), \quad \text{s.t. } \sum_{i=1}^{N}\alpha_i y_i = 0, \ 0 \le \alpha_i \le C, \ i = 1, 2, \ldots, N, \tag{2}$$

where $\alpha_i$ is the Lagrangian coefficient, from which we can obtain $\omega = \sum_{i=1}^{N}\alpha_i y_i x_i$ and $b = y_i - \omega^T x_i$ for a support vector $x_i$, solving (1).

The kernel function maps the input vector $x$ into a feature space and returns a dot product in that space. The discriminant function with kernel $K(x_i, x)$ is given by

$$f(x) = \operatorname{sgn}\left( \sum_{i=1}^{N} \alpha_i y_i K\left(x_i, x\right) + b \right), \tag{3}$$

where $\operatorname{sgn}(\cdot)$ is the signum function.

The fault diagnosis of a robot driving system is a multiclass classification problem, while the conventional SVM was designed for the binary classification problem, so it is not suitable for the fault diagnosis in its original form. A few types of multiclass SVM methods have been proposed [26]: one against one, one against others, directed acyclic graph, and so forth. This study employs the "one against one" multiclass SVM. In order to construct the BPAs, we need the probabilistic outputs of the SVM classifiers, and the "pairwise coupling" method [27] is used to solve this problem.

Computational Intelligence and Neuroscience

where $0 < \alpha \le 1$. When $\alpha = 1$, (8) becomes $E_1(-\|x_i - x_j\|^2/2\delta^2) = \exp(-\|x_i - x_j\|^2/2\delta^2)$, and the ML-kernel is identical to the Gaussian RBF kernel in this condition.

Given a kernel, it is in general straightforward to verify its continuity and symmetry, while positive definiteness is more important and essential for a kernel. Thus, the proof of the positive definiteness of the proposed ML-kernel is given below.

For convenience, letting $t = \|x_i - x_j\|^2/2\delta^2$, the ML-kernel (8) can be written as $K(x_i, x_j) = E_\alpha(-t^\alpha)$. In the Laplace domain, we can get [30]

Along $C$, we have $s = \delta e^{i\theta}$, $\theta \in (-\pi, \pi)$, and

$$I_2 = \frac{1}{2\pi i}\int_C \frac{e^{st}s^{\alpha-1}}{s^\alpha + 1}\,\mathrm{d}s = \frac{1}{2\pi i}\int_{-\pi}^{\pi} \frac{e^{\delta e^{i\theta} t}\left(\delta e^{i\theta}\right)^{\alpha-1}}{\left(\delta e^{i\theta}\right)^{\alpha} + 1}\, i\delta e^{i\theta}\,\mathrm{d}\theta \to 0 \quad \left(\delta \to 0^+\right). \tag{13}$$
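To make the presented kernel and the probabilistic multiclass setup concrete, the sketch below evaluates the ML-kernel $K(x_i, x_j) = E_\alpha(-(\|x_i - x_j\|^2/2\delta^2)^\alpha)$ with a truncated Mittag-Leffler series and trains scikit-learn's `SVC` on the precomputed Gram matrix. The helper names and the series evaluator are illustrative assumptions, not the paper's implementation; `SVC` stands in for the PSVM, since its one-against-one scheme with `probability=True` matches the pairwise-coupling outputs described above.

```python
import numpy as np
from math import gamma
from sklearn.svm import SVC

def mittag_leffler(z, alpha, n_terms=100):
    # Truncated power series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1);
    # adequate for the moderate arguments used here, not a general evaluator.
    return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

def ml_kernel(X, Y, alpha=0.8, delta=1.0):
    # K(x_i, x_j) = E_alpha(-(||x_i - x_j||^2 / (2 delta^2))^alpha);
    # alpha = 1 recovers the Gaussian RBF kernel exp(-||x_i - x_j||^2 / (2 delta^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    t = d2 / (2.0 * delta**2)
    return np.vectorize(lambda v: mittag_leffler(-(v**alpha), alpha))(t)

# Usage: "one against one" SVC on a precomputed ML-kernel Gram matrix, with
# probability=True enabling the probabilistic (pairwise-coupling) outputs.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 4))      # stand-in fault feature vectors
y_train = np.repeat([0, 1, 2], 20)      # stand-in fault tags
clf = SVC(kernel="precomputed", probability=True, C=10.0)
clf.fit(ml_kernel(X_train, X_train), y_train)
probs = clf.predict_proba(ml_kernel(X_train, X_train))  # rows sum to 1
```

Setting `alpha=1.0` reproduces the Gaussian RBF Gram matrix exactly, mirroring the reduction noted in the text.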
4 Computational Intelligence and Neuroscience
Hence, we can obtain

$$\begin{aligned} I_1 + I_3 &= \frac{1}{2\pi i}\int_{\delta}^{+\infty}\left[-\frac{r^{\alpha-1}e^{-\alpha\pi i}}{r^{\alpha}e^{-\alpha\pi i}+1}e^{-rt} + \frac{r^{\alpha-1}e^{\alpha\pi i}}{r^{\alpha}e^{\alpha\pi i}+1}e^{-rt}\right]\mathrm{d}r \\ &= \frac{1}{2\pi i}\int_{\delta}^{+\infty}\left[\frac{r^{\alpha-1}}{r^{\alpha}+e^{-\alpha\pi i}} - \frac{r^{\alpha-1}}{r^{\alpha}+e^{\alpha\pi i}}\right]e^{-rt}\,\mathrm{d}r \\ &= \frac{1}{2\pi i}\int_{\delta}^{+\infty}\frac{r^{\alpha-1}\left(e^{\alpha\pi i}-e^{-\alpha\pi i}\right)}{r^{2\alpha}+r^{\alpha}\left(e^{\alpha\pi i}+e^{-\alpha\pi i}\right)+1}e^{-rt}\,\mathrm{d}r \\ &= \frac{1}{\pi}\int_{\delta}^{+\infty}\frac{r^{\alpha-1}\sin(\alpha\pi)}{r^{2\alpha}+2r^{\alpha}\cos(\alpha\pi)+1}e^{-rt}\,\mathrm{d}r. \end{aligned} \tag{14}$$

Letting $\delta \to 0^+$, we have

$$E_\alpha\left(-t^\alpha\right) = I_1 + I_2 + I_3 = \frac{1}{\pi}\int_{0}^{+\infty}\frac{r^{\alpha-1}\sin(\alpha\pi)}{r^{2\alpha}+2r^{\alpha}\cos(\alpha\pi)+1}e^{-rt}\,\mathrm{d}r > 0. \tag{15}$$

Therefore $K(x_i, x_j) = E_\alpha(-(\|x_i - x_j\|^2/2\delta^2)^\alpha) = K(x_j, x_i)$, and, since (15) expresses $E_\alpha(-t^\alpha)$ as the Laplace transform of a nonnegative density, the function is completely monotone in $t$; hence, for all $a \in R^n$, we have $\sum_{i,j=1}^{n} a_i a_j K(x_i, x_j) \ge 0$. In other words, the proposed ML-kernel is symmetric and positive definite. Therefore, the proof is complete.

3. Fault Diagnosis Method Based on ML-Kernel SVM and D-S Fusion

As shown in Figure 1, there are two main processes in the proposed approach, namely, the training process and the fault diagnosis process. Before the application of the proposed approach, the initial samples should be obtained from laboratory experiments. In the training process, multiple PCA models are set up based on the data sampled in the normal and faulty states. Then, those models are used for fault feature extraction, and the ML-kernel SVM classifiers are trained. In the diagnosis process, newly sampled data are normalized first. Secondly, the PCA models established in the training process are applied for fault feature extraction. The fault feature vectors then serve as the inputs of the trained ML-kernel SVM classifiers, and the probabilistic outputs of the classifiers are taken as the preliminary fault diagnosis results. The BPAs are constructed based on the preliminary fault diagnosis results and the confidence values calculated from the confusion matrix. To reduce the uncertainties of the preliminary diagnosis results, the D-S fusion algorithm is introduced for decision fusion, and the final diagnosis results are given based on the fusion results. The proposed approach is elaborated in detail as follows.

3.1. Data Preprocessing and Establishment of Multiple PCA Models. Suppose that there are $h+1$ kinds of robot running states represented as $\{S_0, S_1, \ldots, S_h\}$. $S_0$ represents the normal condition, and $S_1, \ldots, S_h$ represent $h$ kinds of faulty states. Given that the robot driving system is equipped with $m$ sensors and we get $n$ groups of samples in each running state, the original sampled data sets can be written as $D_{\text{all}} = [D_0, \ldots, D_k, \ldots, D_h]^T$, $D_k \in R^{n \times m}$ ($k = 0, 1, \ldots, h$). $D_0$ represents the samples under the normal condition $S_0$, and $D_k$ ($k = 1, \ldots, h$) represents the samples in the $k$th kind of faulty state $S_k$. To establish the PCA models, several steps are introduced.

Step 1 (data normalization). To reduce the influence of the different dimensions of the sensors, the training data should be normalized before establishing the PCA models. For a data set of $n$ observations and $m$ process variables $D_k \in R^{n \times m}$ ($k = 0, 1, \ldots, h$), we can get the normalized data matrix $\bar{D}_k$ by

$$M_j = \frac{1}{n}\sum_{i=1}^{n} d_{ij}, \qquad \sigma_j = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(d_{ij} - M_j\right)^2}, \qquad x_{ij} = \frac{d_{ij} - M_j}{\sigma_j}, \tag{16}$$

where $M_j$ and $\sigma_j$ are the mean and standard deviation, respectively, of the $j$th variable, $d_{ij}$ is an element of the matrix $D_k$, and $x_{ij}$ is an element of the normalized data matrix $\bar{D}_k$.

Step 2 (singular value decomposition). Consider

$$C_k = P_k \Lambda_k P_k^T, \tag{17}$$

where $C_k$ represents the covariance matrix of $\bar{D}_k$, $\Lambda_k$ is a diagonal matrix containing the eigenvalues of $C_k$ in decreasing order, and $P_k$ is orthogonal and contains the eigenvectors of $C_k$.

Step 3 (determine the loading matrix according to the number of PCs). Given $\beta_i = \lambda_i / \sum_{j=1}^{m} \lambda_j$, the number of principal components (PCs) $l$ is determined to satisfy $\beta_1 + \beta_2 + \cdots + \beta_l \ge \mu$, where $\mu$ is a constant usually required to be bigger than 0.85 [31]. The loading matrix $\hat{P}_k = [p_1, p_2, \ldots, p_l]$ consists of the first $l$ eigenvectors of the covariance matrix, and $\bar{D}_k$ can be decomposed as

$$\bar{D}_k = T_s \hat{P}_k^T + E, \tag{18}$$

where $T_s = \bar{D}_k \hat{P}_k$ is called the score matrix and $E$ is the residual matrix.
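The three steps above can be sketched numerically as follows. The helper name `fit_pca_model` and the data are illustrative assumptions, and the sketch normalizes each class block $D_k$ with its own statistics before stacking, which is one plausible reading of the text.

```python
import numpy as np

def fit_pca_model(Dk, mu=0.85):
    # Step 1 (16): normalize each variable of D_k to zero mean, unit std.
    M = Dk.mean(axis=0)
    sigma = Dk.std(axis=0, ddof=1)          # n-1 denominator, as in (16)
    Dk_bar = (Dk - M) / sigma
    # Step 2 (17): eigendecomposition of the covariance, eigenvalues descending.
    lam, P = np.linalg.eigh(np.cov(Dk_bar, rowvar=False))
    order = np.argsort(lam)[::-1]
    lam, P = lam[order], P[:, order]
    # Step 3: smallest l with beta_1 + ... + beta_l >= mu.
    beta = lam / lam.sum()
    l = int(np.searchsorted(np.cumsum(beta), mu) + 1)
    return M, sigma, P[:, :l]               # loading matrix P_hat_k (m x l)

# Projecting the stacked normalized data D_all onto the k-th loading matrix
# yields the training set F_k of SVM_k, as in the feature-extraction step.
rng = np.random.default_rng(1)
h, n, m = 3, 50, 6                          # illustrative sizes
D = [rng.normal(loc=k, size=(n, m)) for k in range(h + 1)]
models = [fit_pca_model(Dk) for Dk in D]
D_all = np.vstack([(Dk - M) / s for Dk, (M, s, _) in zip(D, models)])
F = [D_all @ P_hat for (_, _, P_hat) in models]  # each F_k is ((h+1)n, l_k)
```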
Figure 1: Flow chart of the proposed hybrid framework (training: PCA feature extraction 0, 1, ..., h; SVM training with the ML-kernel yielding SVM0, SVM1, ..., SVMh; confusion matrices CM0, ..., CMh giving the global and local confidence values 𝛼k, 𝜔kj; diagnosis: feature values P0, ..., Ph for BPA construction, BPA 0, ..., BPA h, D-S fusion, and the final diagnosis result).
3.2. Feature Extraction and SVM Training. During the process of PCA, the orthogonal loading matrix $\hat{P}_k$ can be considered to contain the main features of the original training data set. So, we can perform data dimensionality reduction and feature extraction at the same time using

$$F_k = \left(D_{\text{all}} \cdot \hat{P}_k\right) \in R^{(h+1)n \times l}, \quad k = 0, 1, \ldots, h, \tag{19}$$

where $D_{\text{all}} \in R^{(h+1)n \times m}$ is the normalized training data set and $\hat{P}_k \in R^{m \times l}$ is the loading matrix of the $k$th PCA model. $l$ is the number of principal components, and $F_k$ is used as the final training set of SVM$_k$ ($k = 0, 1, \ldots, h$).

The novel ML-kernel presented in Section 2.3 is applied as the kernel function of SVM$_k$, and particle swarm optimization (PSO) [32] is adopted to tune the parameters $c$, $\delta$, and $\alpha$. Here, $c$ is the slack penalty, and $\delta$ and $\alpha$ are the two parameters of the ML-kernel.

where $c_{pp}$ denotes the diagonal elements of the confusion matrix of SVM$_k$, $N_p = n$ is the number of samples under the $p$th kind of fault condition, and $N = (h+1)n$ is the total number of training samples in the training set $F_k$. So, we can get $a_k = \sum_{p=1}^{h+1} c_{pp}/(h+1)$, which can be used as the global confidence of SVM$_k$.

The $q$th column vector of the confusion matrix $c_{\cdot q}$ ($q = 1, 2, \ldots, h+1$) indicates the local confidence for the $q$th kind of fault, and the local confidence can be calculated by
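Since the equation defining the local confidence is truncated here, the sketch below is only a hedged reading: the confusion-matrix entries are treated as per-class recognition rates, the global confidence $a_k$ is the mean diagonal element as stated, and the local confidence $\omega_{kq}$ is taken from the $q$th column as a column-normalized diagonal. The matrix values are invented for illustration.

```python
import numpy as np

# Illustrative confusion matrix of one classifier SVM_k over h + 1 = 4 running
# states: rows = true state p, columns = diagnosed state q, with every entry
# already divided by the per-state sample count N_p = n, so each row sums to 1.
C = np.array([[0.95, 0.02, 0.02, 0.01],
              [0.03, 0.90, 0.04, 0.03],
              [0.02, 0.05, 0.88, 0.05],
              [0.01, 0.02, 0.03, 0.94]])

# Global confidence a_k = sum_p c_pp / (h + 1): the mean diagonal element.
a_k = np.trace(C) / C.shape[0]          # 0.9175 for this matrix

# Local confidence for the q-th state, read from the q-th column: here taken
# as the column-normalized diagonal (share of "diagnosed as q" that really is
# state q) -- an assumed reading, since the defining equation is cut off.
omega_k = np.diag(C) / C.sum(axis=0)
```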
(Figure omitted: hardware architecture of the experimental wheeled robot — battery, driving wheels with encoders, H-bridge driving and control boards, and an ARM chip with sensor and data acquisition interfaces for data uploading; the sensors include a temperature sensor, an IR sensor, an IMU module, and a laser radar.)
Step 2 (construction of BPAs and D-S fusion). In our approach, the BPAs are defined as

$$m_k\left(\varnothing, S_0, S_1, \ldots, S_h, \Theta\right) = \left(0, a_k f_{k0}, a_k f_{k1}, \ldots, a_k f_{kh}, 1 - a_k\right). \tag{23}$$

It can be seen from (23) that the frame of discernment is $P(\Theta) = \{\varnothing, S_0, S_1, \ldots, S_h, \Theta\}$. Here, $\varnothing$ denotes the empty set, and $S_h$ represents the $h$th kind of running condition of the robot. It is obvious that $\sum_{A \in P(\Theta)} m_k(A) = 1$ and $m_k(\varnothing) = 0$. With the BPAs, we use a fast fusion algorithm based on matrix analysis [33] to accomplish the D-S fusion.

Table 1: Fault position and its common fault modes.

| Fault categories | Fault position | Fault mode | Tag |
| --- | --- | --- | --- |
| Normal condition | None | None | $S_0$ |
| Mechanical faults | Left wheel | Low pressure | $S_1$ |
| Mechanical faults | Right wheel | Low pressure | $S_2$ |
| Mechanical faults | Left coupling | Loosening | $S_3$ |
| Mechanical faults | Right coupling | Loosening | $S_4$ |
| Sensor faults | Left encoder | Pulse loss | $S_5$ |
| Sensor faults | Right encoder | Pulse loss | $S_6$ |
| Sensor faults | Gyroscope | Constant drift | $S_7$ |
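A minimal sketch of (23) and the subsequent combination. It applies Dempster's rule directly for this singleton-plus-$\Theta$ frame rather than the matrix-analysis fast algorithm of [33], and the BPA values are illustrative (shaped like the rows of the experimental tables).

```python
import numpy as np

def make_bpa(f_k, a_k):
    # (23): m_k = (0, a_k*f_k0, ..., a_k*f_kh, 1 - a_k) over the frame of
    # discernment {empty set, S_0, ..., S_h, Theta}; f_k are the probabilistic
    # outputs of SVM_k and a_k is its global confidence.
    f_k = np.asarray(f_k, dtype=float)
    return np.concatenate(([0.0], a_k * f_k, [1.0 - a_k]))

def ds_combine(m1, m2):
    # Dempster's rule, written directly for this simple frame (singletons plus
    # Theta): Theta agrees with everything, distinct singletons conflict.
    s1, t1 = m1[1:-1], m1[-1]
    s2, t2 = m2[1:-1], m2[-1]
    singles = s1 * s2 + s1 * t2 + t1 * s2   # mass landing on each S_i
    theta = t1 * t2                         # mass staying on Theta
    norm = singles.sum() + theta            # 1 - conflict
    out = np.zeros_like(m1)
    out[1:-1], out[-1] = singles / norm, theta / norm
    return out

# Usage: fuse two illustrative BPAs; the rule is commutative and associative,
# so all h + 1 BPAs can be folded in pairwise.
m1 = make_bpa([0.004, 0.003, 0.463, 0.446, 0.084], a_k=0.92)
m2 = make_bpa([0.004, 0.004, 0.492, 0.333, 0.167], a_k=0.90)
fused = ds_combine(m1, m2)                  # mass concentrates on S_2
```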
(Figure omitted: block diagram of the robot driving system — the ARM controller exchanges data with two motor drivers over RS232 and a data bus; each motor driver outputs PWM through a driver IC to a motor, reducer, coupling, and driving wheel, with encoder speed feedback for data acquisition; a gyroscope and a following wheel complete the mobile platform.)
system. It can be indicated from Figure 3 that the working status of the robot driving system is closely related with the motor

(Figure omitted: cumulative variance proportion (%) of the PCA models, with the threshold $\mu = 0.85$ marked; the retained cumulative proportion ranges from about 85.0% to 89.5%.)

Figure 5: Parameters tuning based on PSO algorithm (∙ denotes the original distribution of the particles and △ denotes the final distribution of the particles).

Figure 7: Confidence values of SVM$_k$ (the global confidence $a_k$ and the local confidences $\omega_{k0}, \omega_{k1}, \ldots, \omega_{k7}$, shown for SVM$_0$, SVM$_2$, SVM$_4$, and SVM$_6$; the values lie between about 0.84 and 1).
BPAs of each PCA model and the D-S fusion results:

| Fault mode | PCA model | m($S_0$) | m($S_1$) | m($S_2$) | m($S_3$) | m($S_4$) | m($S_5$) | m($S_6$) | m($S_7$) | m($\Theta$) | Outputs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $S_2$ | PCA0 | 0.004 | 0.003 | 0.463 | 0.003 | 0.446 | 0.002 | 0.024 | 0.004 | 0.051 | $S_2$ |
| $S_2$ | PCA1 | 0.004 | 0.004 | 0.492 | 0.006 | 0.333 | 0.003 | 0.095 | 0.005 | 0.058 | $S_2$ |
| $S_2$ | PCA2 | 0.004 | 0.003 | 0.620 | 0.004 | 0.279 | 0.002 | 0.046 | 0.003 | 0.039 | $S_2$ |
| $S_2$ | PCA3 | 0.001 | 0.001 | 0.708 | 0.001 | 0.151 | 0.001 | 0.107 | 0.002 | 0.028 | $S_2$ |
| $S_2$ | PCA4 | 0.004 | 0.002 | 0.350 | 0.002 | 0.546 | 0.002 | 0.010 | 0.003 | 0.081 | $S_4$ |
| $S_2$ | PCA5 | 0.004 | 0.002 | 0.498 | 0.004 | 0.410 | 0.002 | 0.025 | 0.004 | 0.051 | $S_2$ |
| $S_2$ | PCA6 | 0.002 | 0.002 | 0.507 | 0.002 | 0.390 | 0.002 | 0.047 | 0.003 | 0.045 | $S_2$ |
| $S_2$ | PCA7 | 0.003 | 0.002 | 0.609 | 0.003 | 0.301 | 0.002 | 0.006 | 0.003 | 0.071 | $S_2$ |
| $S_2$ | DS | 0.000 | 0.000 | 0.959 | 0.000 | 0.041 | 0.000 | 0.000 | 0.000 | 0.000 | $S_2$ |
| $S_4$ | PCA0 | 0.007 | 0.004 | 0.419 | 0.006 | 0.501 | 0.002 | 0.005 | 0.005 | 0.051 | $S_4$ |
| $S_4$ | PCA1 | 0.005 | 0.003 | 0.021 | 0.004 | 0.902 | 0.002 | 0.001 | 0.004 | 0.058 | $S_4$ |
| $S_4$ | PCA2 | 0.006 | 0.003 | 0.515 | 0.005 | 0.423 | 0.002 | 0.003 | 0.004 | 0.039 | $S_2$ |
| $S_4$ | PCA3 | 0.001 | 0.001 | 0.022 | 0.001 | 0.942 | 0.000 | 0.003 | 0.002 | 0.028 | $S_4$ |
| $S_4$ | PCA4 | 0.007 | 0.003 | 0.742 | 0.004 | 0.153 | 0.002 | 0.002 | 0.006 | 0.081 | $S_2$ |
| $S_4$ | PCA5 | 0.007 | 0.003 | 0.320 | 0.005 | 0.605 | 0.002 | 0.002 | 0.005 | 0.051 | $S_4$ |
| $S_4$ | PCA6 | 0.004 | 0.003 | 0.400 | 0.003 | 0.535 | 0.002 | 0.004 | 0.004 | 0.045 | $S_4$ |
| $S_4$ | PCA7 | 0.005 | 0.003 | 0.363 | 0.005 | 0.544 | 0.002 | 0.002 | 0.005 | 0.071 | $S_4$ |
| $S_4$ | DS | 0.000 | 0.000 | 0.005 | 0.000 | 0.995 | 0.000 | 0.000 | 0.000 | 0.000 | $S_4$ |
Table 3: Diagnosis accuracy of the proposed hybrid framework with different kernel functions.

| Kernel function | $S_0$ | $S_1$ | $S_2$ | $S_3$ | $S_4$ | $S_5$ | $S_6$ | $S_7$ | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Polynomial kernel | 79% | 97% | 96% | 85% | 94% | 98% | 99% | 81% | 91.51% |
| Sigmoid kernel | 92% | 94% | 94% | 69% | 71% | 96% | 99% | 91% | 88.25% |
| Gaussian RBF kernel | 81% | 97% | 95% | 86% | 92% | 99% | 100% | 84% | 91.76% |
| ML-kernel ($\alpha = 1$) | 81% | 97% | 95% | 86% | 92% | 99% | 100% | 84% | 91.76% |
| ML-kernel ($0 < \alpha \le 1$) | 92% | 100% | 96% | 94% | 98% | 99% | 100% | 95% | 96.75% |
4.5.2. Evaluation of the Proposed Hybrid Diagnosis Framework. The performance of the proposed framework can be evaluated by comparison with the traditional nonfusion diagnosis framework. 10 groups of new test data are sampled, and each group contains 800 samples taken under the running conditions ($S_0, \ldots, S_7$), respectively (100 samples for each running condition). The ML-kernel is

(Figure omitted: diagnosis outputs $S_0$–$S_7$ plotted against the 800 test samples, grouped by true running condition $(S_0), \ldots, (S_7)$.)
References

[1] R. C. Luo and C. C. Lai, "Enriched indoor map construction