Abstract

Expert reliability is the ability to make unmistakable evaluations on attributes for the performance of an alternative in multiattribute group decision making (MAGDM). It has a significant effect on the group consensus calculation and on group decision-making; unfortunately, reliability has not yet been considered in consensus-reaching models. This study focuses on providing a reliability-based consensus model for MAGDM with the analytically evidential reasoning (analytical ER for short) approach. The basic probability assignment (BPA) function, which can be discounted by expert reliability, is introduced to describe the performance judgments of each expert, and the group judgments are determined by combining the discounted BPAs with the analytical ER rule. Then the consensus degrees on three levels (attribute level, alternative level, and expert level) are defined by the Jousselme distance to identify the experts who should revise their judgments and to provide revision suggestions, based on which an interactive decision-making method is proposed to determine the effective BPA functions of all experts and make the final decision. Finally, a numerical case study is carried out to illustrate the effectiveness of the method.

1. Introduction

In multiattribute group decision-making (MAGDM), a group of experts evaluate alternatives on several attributes and interact with each other to derive a common solution [1]. However, experts usually have different knowledge and backgrounds on the decision-making problem since they come from diverse professional fields, which may lead to conflict or inconsistency among the experts in the group [2]. How to reach a consensus in the decision-making process is therefore a relevant and actively studied topic. The group interaction consensus model (consensus model for short) has been proved to be an effective method to increase consensus, because it supports inconsistent experts, whose inconsistency values are higher than a predefined threshold, with advice on how to modify their evaluation information [2]. Research supporting the group interaction consensus model has developed along three lines. The first line focuses on applying fuzziness tools to construct consensus models: fuzzy theory [3–10], hesitant fuzzy sets [11–15], and linguistic/preference information [12, 13, 15–22] were introduced into consensus models to extract experts’ subjective judgments. The second line focuses on constructing feedback mechanisms: the minimum cost feedback mechanism [19, 23, 24], the maximum utility feedback mechanism [25], and the cost chance constraint mechanism [26] were proposed to achieve the optimum balance between individual independence and group consensus. The third line focuses on extending consensus models to different situations: dynamic consensus models [27–30], consensus models considering social networks [2, 19, 22], soft consensus models [5], adaptive consensus models [20, 30], and interactive consensus models [2, 31] were proposed to meet different requirements and characteristics. It is clear that a number of new ideas for solving these problems have been put forward in consensus models.

The Dempster-Shafer theory of evidence (DST) was originally investigated in the 1960s by Dempster, formalized in the 1970s by Shafer, and has been widely researched ever since. The basic probability assignment (BPA) function, which is frequently regarded as a piece of evidence, is a key concept in the DST, and BPAs can be combined by Dempster’s rule. The BPA function enables the commitment of belief to a hypothesis without requiring that the remaining belief be assigned to the complement of the hypothesis; it may instead be assigned to the whole sample space. The BPA functions and Dempster’s rule enable the DST to handle the uncertainties in decision making well [32]. These attractive features have motivated the use of this method in MAGDM problems. For example, Yang introduced the DST into MAGDM and proposed the well-known evidential reasoning (ER) approach [33]. The ER approach is highly flexible in dealing with uncertainty in MAGDM, since it divides the global uncertainty into ignorance and residual support without changing the nature of the evidence [32]. Note that the ER approach is a recursive algorithm and is sometimes hard to model and compute, so the analytical ER methodology was developed [34]. Within the framework of the ER approach, how to reach a consensus and make decisions has attracted the attention of scholars; for example, a consensus framework and a consensus model with interval values for MAGDM analysis in the ER context have been proposed [35–37]. However, there still exist many problems that need to be solved.

In both the ER framework and other frameworks, there are two primary methods for reaching consensus: modifying the assessments of experts and adjusting the weights of experts. The impact of expert reliability on the consensus of MAGDM has not been noticed yet. Reliability is an important concept in various fields [38], such as engineering [39], industry [40], transportation [41], computer networks [42], wireless networks [43], and software [44]. In the information fusion field, reliability is defined as the ability of an evidence source to provide a correct assessment/solution for the given problem, and the reliability of an evidence source should be estimated by statistics or other techniques [45]. In MAGDM, expert reliability can be defined as the ability to make an unmistakable evaluation on a specific attribute for an alternative. Obviously, the higher the reliability of the expert, the more accurate the evaluation information given by the expert; conversely, the lower the reliability, the less accurate the evaluation information. If expert reliability, i.e., the accuracy of the evaluation information given by the experts, is not taken into consideration when calculating the group consensus degree and making the group decision, problems such as an inaccurate group consensus or poor-quality decision results will inevitably arise.

The motivation of this paper is to propose a reliability-based consensus model for MAGDM with the analytical ER approach, as follows. The ER discounting method is used to reflect the influence of weight and reliability on the expert evaluation information, based on which the analytically evidential reasoning (analytical ER for short) rule is employed to obtain the group opinion by integrating the individual evaluation information on a specific attribute for the alternatives. Then the corresponding reliabilities of the experts as well as the degrees of nonconsensus of the experts are calculated with the help of the Jousselme distance. Finally, the method for modifying an expert's evaluation information and the interaction consensus model for MAGDM are proposed.

The rest of this study is organized as follows. Section 2 briefly reviews the main concepts of the analytical ER approach, which serve as the preliminaries of this paper. In Section 3, a reliability-based consensus model is presented to solve the MAGDM problem with the help of the analytical ER approach. In Section 4, we use a numerical case study to illustrate the interaction process of the proposed method. The conclusions are discussed in Section 5.

2. Preliminaries

In order to facilitate the later formulation, some basic concepts of the analytical ER approach are given here.

Definition 1 (see [10, 33, 34, 46]). Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ be a set of mutually exclusive and collectively exhaustive propositions, with $\theta_i \cap \theta_j = \emptyset$ for any $i \neq j$ and $\theta_1 \cup \theta_2 \cup \cdots \cup \theta_N = \Theta$. $\Theta$ is then referred to as a frame of discernment. A Basic Probability Assignment (BPA) is a function $m: 2^{\Theta} \rightarrow [0,1]$, satisfying
$$m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1,$$
where $\emptyset$ is the empty set, $A$ is any subset of $\Theta$, and $2^{\Theta}$ is the power set of $\Theta$, which consists of all subsets of $\Theta$, i.e., $2^{\Theta} = \{\emptyset, \{\theta_1\}, \ldots, \{\theta_N\}, \{\theta_1, \theta_2\}, \ldots, \Theta\}$.

The assigned probability $m(A)$ measures the belief exactly assigned to $A$ and represents how strongly the evidence supports $A$. The sum of all the assigned probabilities is 1 and there is no belief in the empty set $\emptyset$. The probability assigned to $\Theta$, i.e., $m(\Theta)$, is called the degree of ignorance. For convenience, $m(\Theta)$ is also written as $m_\Theta$.

Definition 2 (see [46]). A belief measure, Bel, and a plausibility measure, Pl, are associated with each BPA, and they are both functions $2^{\Theta} \rightarrow [0,1]$, defined by
$$\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B),$$
where $A$ and $B$ are subsets of $\Theta$. $\mathrm{Bel}(A)$ represents the exact support for $A$, i.e., the belief of the hypothesis $A$ being true; $\mathrm{Pl}(A)$ represents the possible support for $A$, i.e., the total amount of belief that could potentially be placed in $A$. $[\mathrm{Bel}(A), \mathrm{Pl}(A)]$ constitutes the interval of support to $A$ and can be seen as the lower and upper bounds of the probability to which $A$ is supported. The two functions are connected by $\mathrm{Pl}(A) = 1 - \mathrm{Bel}(\bar{A})$, where $\bar{A}$ is the complement of $A$. Because the functions $m$, $\mathrm{Bel}$, and $\mathrm{Pl}$ are in one-to-one correspondence, it is equivalent to talk about any one of them, or about the corresponding evidence.
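
To make Definitions 1 and 2 concrete, the following Python sketch computes Bel and Pl for a BPA stored as a dictionary from focal elements to masses; the grade labels and the numerical masses are hypothetical and chosen only for illustration.

```python
# A BPA is represented as a dict mapping frozensets of propositions to mass.
# Hypothetical grades E (excellent), G (good), A (average) on a small frame.
m = {
    frozenset({"E"}): 0.5,
    frozenset({"E", "G"}): 0.3,
    frozenset({"E", "G", "A"}): 0.2,  # mass on the whole frame: global ignorance
}

def bel(m, A):
    """Belief: total mass of the subsets of A (exact support for A)."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass of the sets intersecting A (possible support)."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"E", "G"})
print(bel(m, A), pl(m, A))  # 0.8 1.0 -> the support interval [Bel(A), Pl(A)]
```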

Definition 3 (see [10, 33]). Let the grades for assessing alternatives be $H = \{H_1, H_2, \ldots, H_N\}$ and let $\beta_n$ be the belief degree to which an alternative is assessed to the grade $H_n$ on the basis of a piece of evidence; then the assessment is profiled by
$$S = \{(H_n, \beta_n),\ n = 1, 2, \ldots, N\},$$
where $\beta_n \geq 0$ for $n = 1, \ldots, N$, $\sum_{n=1}^{N} \beta_n \leq 1$, and $\beta_H = 1 - \sum_{n=1}^{N} \beta_n$ represents the degree of global ignorance. If $\beta_H = 0$, the assessment is complete; otherwise, it is incomplete. For convenience, the assessment can also be denoted by the belief vector $(\beta_1, \beta_2, \ldots, \beta_N, \beta_H)$.

Definition 4 (see [10, 34, 47]). The ER approach first transforms the original belief degrees into BPAs by combining the relative weight $w$ and the belief degrees using
$$m_n = w\beta_n, \quad n = 1, 2, \ldots, N,$$
$$m_H = 1 - w\sum_{n=1}^{N}\beta_n = \bar{m}_H + \tilde{m}_H, \qquad \bar{m}_H = 1 - w, \qquad \tilde{m}_H = w\Bigl(1 - \sum_{n=1}^{N}\beta_n\Bigr),$$
where $0 \leq w \leq 1$ depicts the role extent of the piece of evidence to be combined and $\tilde{m}_H$ represents the incompleteness of the assessment.

Definition 5 (see [10, 33, 47]). Suppose there are $I$ pieces of evidence to be combined, and let the BPA function of the $i$th piece, with masses $m_{n,i}$, $\bar{m}_{H,i}$, and $\tilde{m}_{H,i}$, be generated by the transformation of Definition 4; then the combined belief degrees for the $I$ pieces of evidence are calculated by the analytical ER approach as follows:
$$\beta_n = \frac{m_n}{1 - \bar{m}_H}, \qquad \beta_H = \frac{\tilde{m}_H}{1 - \bar{m}_H},$$
where
$$m_n = k\Bigl[\prod_{i=1}^{I}\bigl(m_{n,i} + \bar{m}_{H,i} + \tilde{m}_{H,i}\bigr) - \prod_{i=1}^{I}\bigl(\bar{m}_{H,i} + \tilde{m}_{H,i}\bigr)\Bigr],$$
$$\tilde{m}_H = k\Bigl[\prod_{i=1}^{I}\bigl(\bar{m}_{H,i} + \tilde{m}_{H,i}\bigr) - \prod_{i=1}^{I}\bar{m}_{H,i}\Bigr], \qquad \bar{m}_H = k\prod_{i=1}^{I}\bar{m}_{H,i},$$
$$k = \Bigl[\sum_{n=1}^{N}\prod_{i=1}^{I}\bigl(m_{n,i} + \bar{m}_{H,i} + \tilde{m}_{H,i}\bigr) - (N-1)\prod_{i=1}^{I}\bigl(\bar{m}_{H,i} + \tilde{m}_{H,i}\bigr)\Bigr]^{-1}.$$
Note that the normalization factor $k$ is the reciprocal of the sum of the combined BPA masses assigned to all nonempty sets.
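
To illustrate how Definition 5 can be computed, the following Python sketch implements the analytical ER combination for evidence that has already been transformed into the masses of Definition 4; the function name, the array layout, and the example numbers are our own choices rather than the paper's notation, so this is only a minimal sketch of the combination rule.

```python
import numpy as np

def analytical_er(masses, mH_bar, mH_tilde):
    """Analytical ER combination of I pieces of evidence (Definition 5).

    masses   : I x N array, masses assigned to the N singleton grades
    mH_bar   : length-I array, residue caused by the (discounted) weights
    mH_tilde : length-I array, residue caused by incomplete assessments
    Returns the combined belief degrees over the N grades and the residual
    degree of ignorance.
    """
    masses = np.asarray(masses, dtype=float)
    mH_bar = np.asarray(mH_bar, dtype=float)
    mH_tilde = np.asarray(mH_tilde, dtype=float)
    I, N = masses.shape

    prod_all = np.prod(masses + (mH_bar + mH_tilde)[:, None], axis=0)
    prod_H = np.prod(mH_bar + mH_tilde)
    prod_bar = np.prod(mH_bar)

    k = 1.0 / (prod_all.sum() - (N - 1) * prod_H)   # normalization factor
    m_grades = k * (prod_all - prod_H)              # combined grade masses
    mH_tilde_c = k * (prod_H - prod_bar)            # combined incompleteness
    mH_bar_c = k * prod_bar                         # combined weight residue

    beta = m_grades / (1.0 - mH_bar_c)              # combined belief degrees
    beta_H = mH_tilde_c / (1.0 - mH_bar_c)          # combined ignorance
    return beta, beta_H

# Two hypothetical pieces of evidence over three grades, already transformed
# as in Definition 4 (each row plus its residues sums to one).
beta, beta_H = analytical_er(masses=[[0.4, 0.3, 0.0], [0.2, 0.5, 0.1]],
                             mH_bar=[0.3, 0.2], mH_tilde=[0.0, 0.0])
print(beta, beta_H)
```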

3. The Proposed Method

A reliability-based consensus model to increase the consistency among experts is proposed in the context of the analytical ER approach. In this section, the evidence distance and the expert reliability are first introduced based on the Jousselme distance. The consensus degrees on three levels are then defined, based on which the interaction method for group consensus is presented. Finally, the method for information fusion and decision-making is summarized.

3.1. Evidence Distance and Expert Reliability

Suppose a set of experts is invited to evaluate a set of alternatives with respect to a set of attributes. The experts evaluate the alternatives on each attribute with BPAs, and the assessment of an alternative given by an expert with respect to an attribute is expressed as a piece of evidence.

Distance is a tool that measures the consistency or inconsistency between pieces of evidence. There are at least two types of distance measures: one type measures the degree of difference among evidence and the other measures the degree of similarity or compatibility among evidence; the greater the difference, the lower the similarity. We should choose a distance measure according to the purpose of the application and of the fusion. It is hoped that the opinions of the experts are highly consistent, and we can reasonably assume that the information given by most of the experts is reasonable and basically consistent; the high-conflict experts who need to be identified are a minority.

In order to depict conflicts or inconsistencies among evidence (expert information), many classical measures, each with its own advantages and disadvantages, have been proposed. For instance, the conflict distance reflects the conflicting or inconsistent situation in which two BPAs both assign nonzero mass to focal elements whose intersection is empty. Nevertheless, it cannot depict the following inconsistency: one BPA is firmly committed to a particular hypothesis whilst the other is largely uncommitted about its preference [48]. The Pignistic probability distance, based on expected utility theory and betting commitments, was proposed by Smets [49] and can identify this kind of inconsistency well. However, there are also cases in which the Pignistic probability distance between two pieces of evidence is zero, which suggests that there is no difference between the betting commitments of the two evidence sources, whilst the conflict after combination is still very high; such high conflict or inconsistency cannot come from the difference between betting commitments but from other reasons. Consequently, neither the conflict distance nor the Pignistic probability distance can describe the conflict between two BPAs accurately. Fortunately, Jousselme put forward the distance given in Definition 6, which is regarded as a standard metric between two pieces of evidence [50]. The Jousselme distance has the ability to measure conflicts or inconsistencies among evidence [51], and it is adopted here to measure the inconsistency of opinions between each individual expert and the group of experts.

Definition 6 (Jousselme distance). Let $m_1$ and $m_2$ be two BPAs on the same frame of discernment $\Theta$ containing $N$ mutually exclusive and exhaustive hypotheses. The distance between $m_1$ and $m_2$ is
$$d(m_1, m_2) = \sqrt{\tfrac{1}{2}\,(\mathbf{m}_1 - \mathbf{m}_2)^{T}\, \underline{\underline{D}}\, (\mathbf{m}_1 - \mathbf{m}_2)},$$
where $\underline{\underline{D}}$ is a $2^N \times 2^N$ matrix whose elements are
$$D(A, B) = \frac{|A \cap B|}{|A \cup B|}, \qquad A, B \in 2^{\Theta},$$
which defines a metric distance and represents the similarity among the subsets of $\Theta$. Given a BPA $m$ on the frame $\Theta$, $\mathbf{m}$ is a $2^N$-dimensional column vector (it can also be called a matrix). $\mathbf{m}_1 - \mathbf{m}_2$ stands for vector subtraction and $(\mathbf{m}_1 - \mathbf{m}_2)^{T}$ is the transpose of the vector (or matrix) [50].
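
The following Python sketch computes the Jousselme distance of Definition 6 for BPAs stored as dictionaries from focal elements (frozensets) to masses; the frame and the two example BPAs are hypothetical and serve only as an illustration.

```python
from itertools import combinations
import numpy as np

def powerset(frame):
    """All non-empty subsets of the frame, as frozensets."""
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def jousselme_distance(m1, m2, frame):
    """Jousselme distance between two BPAs given as {frozenset: mass} dicts."""
    subsets = powerset(frame)
    v1 = np.array([m1.get(s, 0.0) for s in subsets])
    v2 = np.array([m2.get(s, 0.0) for s in subsets])
    # Similarity matrix D(A, B) = |A ∩ B| / |A ∪ B|
    D = np.array([[len(A & B) / len(A | B) for B in subsets] for A in subsets])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

# Hypothetical frame of grades and two BPAs, for illustration only
frame = {"E", "G", "A"}
m1 = {frozenset({"E"}): 0.7, frozenset(frame): 0.3}
m2 = {frozenset({"G"}): 0.6, frozenset(frame): 0.4}
print(jousselme_distance(m1, m2, frame))
```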

In the MAGDM problem, not all experts are completely reliable, since their knowledge and backgrounds differ. Expert reliability can be defined as the ability to make an unmistakable evaluation on a specific attribute for an alternative in MAGDM. If expert reliability, i.e., the accuracy of the evaluation information given by the experts, is not taken into consideration when calculating the group consensus degree and making the group decision, problems such as an inaccurate group consensus or poor-quality decision results will inevitably arise. Expert reliability can be estimated by statistics or other techniques.

In our opinion, expert reliability can be indirectly calculated from the Jousselme distance between the expert and the group, because the group opinion can be regarded as relatively accurate information. The more consistent the opinions of the expert and the group (i.e., the smaller the distance between their opinions), the higher the reliability of the expert; expert reliability is inversely proportional to the distance between the expert opinion and the group opinion. So we give the following definition.

Setting a threshold, if the Jousselme distance between an expert and the group is equal to or less than the threshold, the expert is considered to be absolutely reliable, i.e., the reliability degree is 1.

Definition 7. Suppose the BPA function given by an expert is $m_i$ and that of the group of experts is $m_g$, the Jousselme distance between $m_i$ and $m_g$ is $d(m_i, m_g)$, and $\varepsilon$ is a threshold used to judge whether the distance satisfies the requirement; then the reliability degree $r_i$ of the expert is determined as a nonincreasing function of $d(m_i, m_g)$ that equals 1 whenever $d(m_i, m_g) \leq \varepsilon$. The expert reliability determined in this way has the following properties: (1) the reliability of an expert whose distance from the group opinion satisfies the threshold is equal to 1; (2) the smaller the distance between the expert opinion and the group opinion, the more reliable the expert; (3) expert reliability always lies between 0 and 1, i.e., $0 \leq r_i \leq 1$. It is worth noting that the threshold $\varepsilon$ is suggested to be equal to the threshold defined in the consensus model in this paper.
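
As a minimal sketch of Definition 7, the reliability below is set to 1 within the threshold and is assumed to decay as eps / d beyond it; this particular decay form is our illustrative assumption and not necessarily the formula used in the paper.

```python
def expert_reliability(distance, eps):
    """Reliability degree with the three properties stated in Definition 7.

    Illustrative assumption: r = 1 when the distance is within the threshold
    eps, and r = eps / distance otherwise, which decreases as the distance
    grows and stays within (0, 1].  The paper's exact formula may differ.
    """
    if distance <= eps:
        return 1.0
    return eps / distance

print(expert_reliability(0.10, eps=0.25))  # 1.0, within the threshold
print(expert_reliability(0.50, eps=0.25))  # 0.5, reliability is discounted
```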

3.2. Consensus Degree on Three Levels

Following the framework of the consensus-reaching model defined in [1], consensus indexes are used to describe the inconsistency or conflict among experts on three hierarchical and progressive levels: the attribute, the alternative, and the expert levels. The attribute level is the most basic; it reflects the original conflict between an expert and the group on a specific attribute for an alternative, and it is also on this level that experts make amendments to increase their consistency with the group. The alternative level is intermediate, that is, the conflict between an expert and the group for the evaluation result on a specific alternative. The expert level is the highest, that is, the conflict between a specific expert and the group for the evaluation results on all alternatives, which reflects the conflict between the expert and the group as a whole [2].

Note that the dual effects of attribute weight and expert reliability should be taken into account when calculating the consensus index, because the weight and the reliability of evidence are two different kinds of parameters. The weight is often defined as the attribute weight, and it is relative. The reliability can be defined as the ability to make an unmistakable evaluation on a specific attribute for an alternative, and it is absolute. The reliability of an expert differs across attributes, because each attribute reflects one aspect of the given problem and requires different expert knowledge and experience. Expert information can be combined with the help of the analytical ER approach. Based on ER discounting, the discounted BPA function of an alternative on an attribute is obtained, which incorporates both weight and reliability. Let the weight of each attribute lie between 0 and 1; note that the sum of the weights is frequently defined to be one, but such a requirement is not necessary. Let the reliability of an expert on an attribute also lie between 0 and 1; a reliability of 1 describes that the expert is most reliable on that attribute, and a reliability of 0 describes that he/she is most unreliable. The common discount method is the double discount formula of Definition 4, and the discount factor in (11) combines the attribute weight and the expert reliability.
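
Because the discount factor of (11) is not reproduced above, the sketch below simply assumes it to be the product of the attribute weight and the expert reliability and applies it through the transformation of Definition 4; both this assumption and the identifiers are illustrative only.

```python
def discounted_bpa(beliefs, weight, reliability):
    """Turn a belief distribution (beta_1 .. beta_N) into ER-style masses.

    Assumption for illustration only: the combined discount factor is taken
    as weight * reliability; the paper's Eq. (11) may define it differently.
    """
    alpha = weight * reliability                 # assumed discount factor
    m = [alpha * b for b in beliefs]             # masses on the singleton grades
    m_bar_H = 1.0 - alpha                        # residue caused by the discount
    m_tilde_H = alpha * (1.0 - sum(beliefs))     # residue from incompleteness
    return m, m_bar_H, m_tilde_H

# A hypothetical assessment (60% excellent, 30% good, 10% unassigned)
print(discounted_bpa([0.6, 0.3, 0.0], weight=0.4, reliability=0.9))
```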

Based on the discounted BPA functions, the analytical ER approach is used to calculate the group opinion. In this model the analytical ER approach is used to combine the derived BPAs twice (see Section 3.4); here, in order to obtain the group evaluation result on an attribute for an alternative, we only need to combine the discounted BPAs of all experts for that alternative on that attribute by using the analytical ER approach. The fusion on the attribute for the alternative with respect to all experts is given by the models in (12a), (12b), (12c), (12d), and (12e), which follow the combination formulas of Definition 5.
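
Reusing the discounted_bpa and analytical_er sketches given earlier, the group opinion on one attribute for one alternative can be approximated by discounting each expert's assessment and combining the results across the experts; the weights, reliabilities, and assessments below are hypothetical.

```python
# Group opinion on one attribute for one alternative, fused over three experts
# (reuses discounted_bpa and analytical_er defined in the sketches above).
attr_weight = 0.4                       # weight of the attribute considered
reliabilities = [1.0, 0.9, 0.6]         # assumed reliabilities of the experts
assessments = [[0.6, 0.3, 0.0],         # each expert's belief degrees
               [0.5, 0.4, 0.1],         # over the same three grades
               [0.1, 0.2, 0.7]]

masses, bars, tildes = [], [], []
for beliefs, r in zip(assessments, reliabilities):
    m, mb, mt = discounted_bpa(beliefs, weight=attr_weight, reliability=r)
    masses.append(m)
    bars.append(mb)
    tildes.append(mt)

group_beta, group_beta_H = analytical_er(masses, bars, tildes)
print(group_beta, group_beta_H)
```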

The Jousselme distance is used to calculate the inconsistency between the experts and the group as a consensus index. Consensus indexes on the three levels introduced above are employed to identify the inconsistencies among experts, and the corresponding calculation methods are given below by considering the characteristics of the analytical ER approach.

(1) The Consensus Index on Attribute. The consensus index on an attribute is defined to measure the inconsistency or conflict between the expert and the group for an alternative with respect to that attribute.

It is the Jousselme distance between the combined BPA, i.e., the fusion over all experts for the alternative with respect to the attribute, and the BPA with which the expert evaluates the alternative on that attribute.

(2) The Consensus Index on Alternative. The consensus index on an alternative is defined to measure the inconsistency or conflict between the expert and the group on that alternative. It is obtained by adding the consensus index values over all attributes on the alternative for the expert.

It represents the distance between the overall evaluation result for the alternative with respect to all experts and the evaluation result for the alternative with respect to the expert.

(3) The Consensus Index on Expert. The consensus index on an expert is defined to measure the overall inconsistency or conflict between the expert and the group. It is obtained by adding the consensus index values over all alternatives for the expert.

It represents the overall distance between the expert and the group.

Traditionally, the closer the value is to 1, the greater the degree of conflict, with 1 indicating complete conflict; the closer the value is to 0, the smaller the conflict, with 0 meaning no conflict between them. Algorithm 1 summarizes the computation of the distances on the three levels.

Input: Expert assessments (BPAs of each expert for each alternative on each attribute); attribute set; reliability set of the experts on the attributes; weight set of the attributes.
Output: Distances on the attribute level, the alternative level, and the expert level.
Begin
 Compute the discounting factor for each expert on each attribute by using (11).
 Establish the models as in Eq. (12a), (12b), (12c), (12d), and (12e) and compute the group BPAs.
 % Compute the distance between experts and group on three levels
 For i = 1 to I
  For k = 1 to K
   For l = 1 to L
    % Compute the Jousselme distance between the expert and the group on the attribute
   EndFor
   % Compute the distance between the expert and the group on the alternative (sum of the attribute-level distances)
  EndFor
  % Compute the distance between the expert and the group on the expert level (sum of the alternative-level distances)
 EndFor
End
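
A compact Python sketch of the three-level distance computation of Algorithm 1 is given below; it reuses the jousselme_distance function from the sketch after Definition 6, and the dictionary-based data layout and all identifiers are our own assumptions.

```python
def consensus_distances(expert_bpas, group_bpas, frame):
    """Distances between each expert and the group on three levels.

    expert_bpas : dict {(expert, alternative, attribute): BPA dict}
    group_bpas  : dict {(alternative, attribute): BPA dict}
    Returns (attribute-level, alternative-level, expert-level) distances.
    """
    experts = sorted({i for i, _, _ in expert_bpas})
    alternatives = sorted({l for _, l, _ in expert_bpas})
    attributes = sorted({k for _, _, k in expert_bpas})

    d_attr, d_alt, d_exp = {}, {}, {}
    for i in experts:
        for l in alternatives:
            for k in attributes:
                d_attr[i, l, k] = jousselme_distance(
                    expert_bpas[i, l, k], group_bpas[l, k], frame)
            # alternative level: sum of the attribute-level distances
            d_alt[i, l] = sum(d_attr[i, l, k] for k in attributes)
        # expert level: sum of the alternative-level distances
        d_exp[i] = sum(d_alt[i, l] for l in alternatives)
    return d_attr, d_alt, d_exp
```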
3.3. The Interaction Method for Group Consensus

All experts rarely agree with each other unanimously in reality, and insisting on complete unanimity is not desirable in the decision-making process. We should identify the inconsistent information whose consensus index values, in terms of the Jousselme distance, are higher than the threshold. Since an “absolutely meaningful threshold” of conflict tolerance that suits all pairs of BPAs hardly exists, the choice of threshold is largely subjective and application oriented [52]. Generally, a consensus threshold such as 90%, 80%, or two-thirds is often used as the minimum level that the decision-making process is required to achieve [53]. When a consensus level is lower than the threshold, the interaction method for group consensus is activated to identify the inconsistent information and give recommendations for the inconsistent experts to revise their assessments. In order to identify the information with high conflicts on the three levels, we follow the order of the highest, the intermediate, and the most basic level; the order in which the inconsistent information is determined is exactly the opposite of the order in which the distances are calculated. Assume that thresholds are set on the expert level, the alternative level, and the attribute level. The steps of the proposed method can be summarized as follows:

Step 1. Experts whose consensus index values on the expert level are higher than the threshold value are identified.

Step 2. For the experts identified in Step 1, their alternatives whose consensus index values are higher than the threshold are identified.

Step 3. Finally, the preference values to be changed are identified as those whose consensus index values on the attribute level are higher than the threshold.

When experts whose expert-level consensus index values are higher than the set threshold are identified, the interaction consensus procedure is carried out to assist these inconsistent experts in modifying their opinions based on the recommendations given, so as to enhance the consensus of the group. Since the attribute level is the most basic, it contains the original information of the expert and the source of the conflict; therefore, an expert only needs to make corrections at this level. In this process, only the experts whose consensus index values are higher than the threshold may receive a revision suggestion. Besides, inconsistent experts are not commanded to accept the suggestion regardless of their will; instead, they are allowed to modify their opinions by discounting their original opinions and the group opinion. The discount factors are determined by the experts themselves.

“You are suggested to revise your assessment for the alternative on the attribute to be closer to the group opinion,” where the revised assessment in (19) is obtained by discounting the original assessment and the group opinion with a parameter that controls the degree of advice, and the discounted combination is the corrected result.

If the parameter is 0.8, the expert's original information is discounted by 0.2 and the group opinion is discounted by 0.8; that is, the expert is more inclined to trust the group opinion. If the parameter is 0.2, the expert's original information is discounted by 0.8 and the group opinion is discounted by 0.2; that is, the expert is more inclined to believe her/his original information. If the parameter is 0.5, the expert's original information and the group opinion are both discounted by 0.5; that is, the expert remains neutral. When an expert determines the parameter to modify her/his opinion, the interaction process is activated. In this process, experts continuously interact with each other and modify the conflicting information based on the recommendations given until the consensus level is reached or the maximum number of interactions is reached. The maximum number of interactions is determined by the experts according to the data and the problem to be solved.
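
The revision suggestion can be read as a convex mixture of the expert's original BPA and the group BPA, weighted by the parameter described above; the sketch below encodes that reading, which is our interpretation of the revision formula rather than a verbatim reproduction of it.

```python
def revise_opinion(m_expert, m_group, gamma):
    """Suggested revision: a convex mix of the original and the group BPA.

    gamma is the degree-of-advice parameter chosen by the expert: the original
    opinion is weighted by (1 - gamma) and the group opinion by gamma
    (our illustrative reading of the paper's revision formula).
    """
    keys = set(m_expert) | set(m_group)
    return {A: (1 - gamma) * m_expert.get(A, 0.0) + gamma * m_group.get(A, 0.0)
            for A in keys}

# gamma = 0.8: lean toward the group; gamma = 0.2: keep mostly the original view
m_expert = {frozenset({"E"}): 0.7, frozenset({"E", "G", "A"}): 0.3}
m_group  = {frozenset({"G"}): 0.5, frozenset({"E", "G", "A"}): 0.5}
print(revise_opinion(m_expert, m_group, gamma=0.8))
```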

At the beginning, the reliability of the experts cannot be determined, so it is a reasonable assumption that all experts are absolutely reliable, that is, the reliability of every expert on every attribute is set to 1. After the first round of calculation, the actual reliability of the experts can be obtained in light of the Jousselme distance. At this point, the reliability of the identified experts with higher conflict is relatively low. However, in later interactions, the reliability of some experts may increase and the Jousselme distance may decrease. This is because the experts obtain more information and become more reliable, which does not conflict with the objectivity and absoluteness of reliability. Algorithm 2 summarizes the proposed interaction process.

Input: Expert assessments (BPAs of each expert for each alternative on each attribute); attribute set; weight set of the attributes; reliability set of the experts on the attributes; the consensus thresholds on the expert, alternative, and attribute levels; the correction-round counter; the maximum number of interaction rounds.
Output: Group opinions on the attributes that meet the model requirements.
Begin
Do:
 Call Algorithm 1 to compute the distances on the three levels.
 % Determine the evaluation information that needs to be revised.
 While the expert-level distance of an expert exceeds the expert-level threshold
  While the alternative-level distance of that expert on an alternative exceeds the alternative-level threshold
   While the attribute-level distance on an attribute of that alternative exceeds the attribute-level threshold
    Modify the conflicting information as in (19), using the discount factor chosen by the expert.
   End
  End
 End
 Establish the models as in Eq. (12a), (12b), (12c), (12d), and (12e) and compute the new group opinions.
 Compute the reliability of the experts as in Definition 7: if the distance is within the threshold, the reliability is 1; otherwise it decreases with the distance.
Until (the distances between the experts and the group at the expert level all meet the model requirements or the maximum number of interaction rounds is reached)
End
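
A high-level Python sketch of the interaction loop of Algorithm 2 is given below; it reuses consensus_distances, revise_opinion, and expert_reliability from the earlier sketches, and the fuse_group callable, the thresholds, and the data layout are illustrative assumptions.

```python
def interactive_consensus(expert_bpas, frame, weights, eps_exp, eps_alt, eps_attr,
                          gamma, max_rounds, fuse_group):
    """Interaction loop: fuse group opinions, find inconsistent experts,
    suggest revisions, and update reliabilities, until consensus or max_rounds.

    fuse_group is a callable that rebuilds the group BPA for every
    (alternative, attribute) pair from the current expert BPAs, e.g. by
    discounting followed by the analytical ER combination.
    """
    reliability = {key: 1.0 for key in expert_bpas}   # all experts reliable at start
    group_bpas = {}
    for _ in range(max_rounds):
        group_bpas = fuse_group(expert_bpas, weights, reliability)
        d_attr, d_alt, d_exp = consensus_distances(expert_bpas, group_bpas, frame)
        inconsistent = [i for i, d in d_exp.items() if d > eps_exp]
        if not inconsistent:
            break                                     # consensus reached
        for i in inconsistent:
            for (ii, l, k), d in d_attr.items():
                if ii == i and d_alt[i, l] > eps_alt and d > eps_attr:
                    expert_bpas[i, l, k] = revise_opinion(
                        expert_bpas[i, l, k], group_bpas[l, k], gamma)
        # update the reliabilities from the new distances for the next round
        reliability = {key: expert_reliability(d_attr[key], eps_attr)
                       for key in d_attr}
    return group_bpas
```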
3.4. The Method for Information Fusion and Decision Making

There are two ways to perform the integration with the analytical ER approach; both of them combine the derived BPAs twice.

The first way is more common. (1) The first fusion combines the BPAs on all attributes for each expert, which is also called individual information fusion; the result is the collective evaluation value given by that expert for an alternative. (2) The second fusion combines the fused BPAs of all experts for each alternative, which is also called group information fusion; the result is the overall evaluation value of the alternative over all experts.

The second way is less common. (1) The first fusion combines the BPAs of all experts for each attribute; the result is the collective evaluation value given by all experts for an alternative on that attribute. (2) The second fusion combines these fused BPAs over all attributes for each alternative; the result is the collective evaluation value for the alternative on all attributes. Since the analytical ER rule is associative and commutative, the fusion results are the same whether the first way or the second way is used.

According to Section 3.3, when the requirements of the consensus model are met, the evaluation information of all experts for each alternative on each attribute has already been fused. Therefore, the second way is adopted here, because its first fusion has already been conducted and only the second fusion still needs to be made. In other words, the computational complexity of the decision-making process is decreased when the second way is adopted.

Combining the evaluation information on all attributes for an alternative, the overall evaluation value for the alternative is obtained by the analytical ER combination given in (20a), (20b), (20c), (20d), and (20e).
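
Reusing discounted_bpa and analytical_er from the earlier sketches, the final fusion of the second way for one alternative can be sketched as follows; the attribute weights and the consensus group belief distributions are hypothetical, and setting the reliability to 1 at this stage is our simplifying assumption.

```python
# Final fusion for one alternative (second way): combine the consensus group
# belief distributions over the attributes, weighted by the attribute weights.
group_betas = [[0.6, 0.3, 0.1],   # group belief degrees on attribute 1
               [0.5, 0.4, 0.0],   # attribute 2 (incomplete: 0.1 unassigned)
               [0.7, 0.2, 0.1],   # attribute 3
               [0.4, 0.4, 0.2]]   # attribute 4
attr_weights = [0.3, 0.3, 0.2, 0.2]

masses, bars, tildes = [], [], []
for beliefs, w in zip(group_betas, attr_weights):
    m, mb, mt = discounted_bpa(beliefs, weight=w, reliability=1.0)
    masses.append(m)
    bars.append(mb)
    tildes.append(mt)

overall_beta, overall_beta_H = analytical_er(masses, bars, tildes)
# The alternative's overall grade can be read off from overall_beta.
print(overall_beta, overall_beta_H)
```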

The interaction consensus model consists of three stages: (1) consensus degrees on three levels, which describe the inconsistency or conflict among experts based on the Jousselme distance; (2) the interaction method for group consensus, in which experts continuously interact with each other and modify the conflicting information based on the recommendations given until the consensus level is reached; (3) the method for information fusion and decision-making in MAGDM. Algorithm 3 summarizes the proposed model.

Input: Expert assessments (BPAs of each expert for each alternative on each attribute); attribute set; reliability set of the experts on the attributes; weight set of the attributes.
Output: Evaluation results for each alternative.
Begin
 Call Algorithm 1 to compute the distances on the three levels.
 Call Algorithm 2 to compute the group opinions on the attributes that meet the model requirements.
 For k = 1 to K
  For l = 1 to L
   Establish the models as in Eq. (20a), (20b), (20c), (20d), and (20e) and compute the overall evaluation value.
  EndFor
 EndFor
End

4. Illustration Example

In this section, a numerical case study is carried out to illustrate the decision-making process of the proposed method. Suppose that a MAGDM problem consists of three alternatives, four attributes, and five experts. The frame of discernment is composed of five levels, namely, excellent, good, average, poor, and very poor. Assume the weights of the attributes are given. At the very beginning, the reliabilities of the experts on all attributes are assumed to be 1. The experts are invited to evaluate the alternatives on each attribute with BPAs, and the assessment of an alternative given by an expert on an attribute is regarded as a piece of evidence. All the assessments of the alternatives on the attributes are listed in Tables 1–3.

In light of the discounting method with both weight and reliability, substituting the weights and the reliabilities into (11) gives the values of the discounting factor. Substituting these into (5), the double discount formula, gives the BPA functions. Substituting these BPA functions into the model as in (12a), (12b), (12c), (12d), and (12e) gives the group evaluation result on each attribute for each alternative. All the results on the four attributes for the three alternatives are shown in Table 4.

The consensus index values on the attribute level, alternative level, and expert level are calculated according to (13)-(15) for each expert, and the reliability of each expert on each attribute for each alternative is then calculated according to Definition 7. For convenience, we present the values of distance and reliability as a pair. For example, the pair (0.5403, 0.7403) in the second row and second column of Table 5 means that the distance between the corresponding expert and the group on that attribute for that alternative is 0.5403 and the corresponding reliability of the expert is 0.7403.

According to the characteristics of the data, we select thresholds on the expert level, the alternative level, and the attribute level, respectively. As described above, the order of the highest (expert) level, the intermediate (alternative) level, and the most basic (attribute) level is used to determine the experts, alternatives, and attributes that need to be modified. It can be seen from Table 5 that one expert differs greatly from the group opinions on the attributes of two alternatives, and another expert differs greatly from the group opinions on the attributes of one alternative; both need to modify their opinions based on (19). Suppose that the two inconsistent experts simultaneously adopt the same discount factor values to modify their opinions on the corresponding alternatives and attributes. The relationship between the distance and the discount factor after the expert correction is shown in Figure 1.

It can be seen from Figure 1 that when the discount factor takes the values 0, 0.8, 0.9, and 1.0, the experts can reach the consensus level after correction. When the discount factor takes a value in (0.1, 0.7), the requirement of the consensus level is not reached and the experts stop interacting after four rounds of interaction. Note that the maximum number of interactions is determined by the experts; it is selected here according to the results of many experiments.

Suppose that the two inconsistent experts simultaneously adopt the same value of the discount factor to modify their opinions on the corresponding alternatives and attributes. The experts interacted four times and reached the required level of consensus. The results for the inconsistent experts who made four rounds of modification are shown in Table 6.

After each round of modification, we obtain the new discounting factors by substituting the weights and the new reliabilities (as shown in Table 5) into (11). Substituting these into (5), the double discount formula, gives the new BPA functions. Substituting these BPA functions into the model as in (12a), (12b), (12c), (12d), and (12e) gives the new group evaluation result on each attribute for each alternative. The new consensus index values on the attribute level, the alternative level, and the expert level are then calculated according to (13)-(15), and the reliability of each expert on each attribute for each alternative is recalculated according to Definition 7. The distances for the inconsistent experts over the four rounds of modification are shown in Table 7.

From Table 7, we can see that the consensus index values of all experts are less than the threshold of 5.5 at the expert level, reaching the expected level of consensus. It is not difficult to find that there are still consensus index values greater than the thresholds on the alternative and attribute levels. This is one of the advantages of our model: if the consensus level on the upper level meets the requirements, the expert does not need to modify the conflicting information on the lower levels. In other words, experts do not need to modify all conflicting information.

The evaluation information on all attributes for each alternative is combined according to (20a), (20b), (20c), (20d), and (20e). All the results are shown in Table 8.

From Table 8, we can see that the evaluation grades of the three alternatives are excellent, good, and average, respectively. The consistency among the experts is greatly improved.

5. Conclusions

Expert reliability has a significant effect on the group consensus calculation and on group decision-making; however, it has not yet been considered in consensus-reaching models for MAGDM. This study aims at providing a reliability-based consensus model for MAGDM with the analytically evidential reasoning (ER) approach. A numerical case study is carried out to illustrate the effectiveness of the method. The main contributions of this paper can be summarized in the following three aspects.

Firstly, expert reliability is innovatively defined through the Jousselme distance between the BPA functions of each expert and those of the expert group in the consensus-reaching model of MAGDM. The individual BPA functions given by each expert are discounted by both expert reliability and attribute weight, and the group BPA functions are determined by combining the discounted BPAs.

Secondly, consensus degrees on three levels (the attribute, alternative, and expert levels) are defined by the Jousselme distance between the individual and the group to identify the experts who should revise their judgments and to provide revision suggestions.

Thirdly, an analytical ER-based decision-making method with interaction is proposed to determine the effective BPA functions of all experts and make the final decision.

The method established in this paper is important for solving MAGDM problems for the following reason: if expert reliability is not considered, i.e., the accuracy of the evaluation information given by the experts is not taken into consideration, problems such as an inaccurate group consensus degree or poor-quality decision results will inevitably arise. For future research, we will investigate how to deal with more complex issues, such as cases in which the judgments given by the experts are expressed by fuzzy sets or include local ignorance.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (NSFC) under Grant nos. 71874167 and 71462022, the Fundamental Research Funds for the Central Universities under Grant no. 201762026, and the Special Funds of Taishan Scholars Project of Shandong Province under Grant no. tsqn20171205.