M. Niknafs; S. Dorri Nogoorani; R. Jalili
Abstract
Reputation management systems are in widespread use to regulate collaborations in cooperative systems. Collusion is one of the most destructive malicious behaviors, in which colluders seek to affect a reputation management system in an unfair manner. Many reputation systems are vulnerable to collusion, and some model-specific mitigation methods have been proposed to combat it. Detection of colluders has been shown to be an NP-complete problem. In this paper, we propose the Colluders Similarity Measure (CSM), which is used by a heuristic clustering algorithm (the Colluders Detection Algorithm, CDA) to detect colluders in O(n²m + n⁴) time, where m and n are the total number of nodes and colluders, respectively. Furthermore, we propose an architecture for implementing the algorithm in a distributed manner, which can be used together with compatible reputation management systems. Implementation results and comparison with other mitigation methods show that our scheme prevents colluders from unfairly increasing their own reputation and decreasing the reputation of other nodes.
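The abstract's core idea, grouping raters by how similarly they rate, can be illustrated with a minimal sketch. The similarity measure and threshold below are illustrative stand-ins, not the paper's actual CSM/CDA definitions:

```python
# Illustrative sketch of similarity-based colluder detection.
# The measure and threshold are assumptions, not the paper's CSM/CDA.
from itertools import combinations

def pairwise_similarity(ratings, a, b):
    """Similarity of two raters: how closely they rate common targets.
    `ratings` maps rater -> {target: score in [0, 1]}."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    diff = sum(abs(ratings[a][t] - ratings[b][t]) for t in common) / len(common)
    return 1.0 - diff

def flag_colluders(ratings, threshold=0.9):
    """Greedy clustering: group raters whose pairwise similarity exceeds
    the threshold; clusters of size > 1 are suspected colluding groups."""
    clusters = []
    for a, b in combinations(ratings, 2):
        if pairwise_similarity(ratings, a, b) >= threshold:
            for c in clusters:
                if a in c or b in c:
                    c.update({a, b})
                    break
            else:
                clusters.append({a, b})
    return [c for c in clusters if len(c) > 1]

ratings = {
    "u1": {"x": 1.0, "y": 1.0},   # u1 and u2 rate identically -> suspicious
    "u2": {"x": 1.0, "y": 1.0},
    "u3": {"x": 0.2, "y": 0.7},   # independent rater
}
print(sorted(flag_colluders(ratings)[0]))  # -> ['u1', 'u2']
```

A real detector would of course use the paper's CSM and handle partial overlap between colluding groups; this sketch only conveys the clustering intuition.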
M. Ezhei; B. Tork Ladani
Abstract
Nowadays, the growth of virtual environments such as virtual organizations, social networks, and ubiquitous computing has led to wide adoption of the concept of trust. One method of establishing trust in such environments is a long-term relationship with a trusted partner. The main problem with this kind of trust, which is based on personal experience, is its limited domain. Moreover, both parties to such a trust relationship face significant difficulties in collecting data and forming reasonable, reliable beliefs. Considering the concept of "group" in modeling trust is a way to overcome these problems, since group-based trust is better suited to the nature of trust in new virtual environments. In this paper, a new trust model called "GTrust" is proposed, in which trust is considered a collective and shared feature of all group members. Group membership is therefore used as the criterion for judging a person's expected behavior and trustworthiness. GTrust is based on metagraphs, which are graphical data structures for representing a collection of directed set-to-set mappings. We show that by using GTrust, large trust spaces between unknown individuals can be shaped effectively. The proposed model not only offers a better description of the human sense of trust when considering communities, but also provides a setting for evaluating the trust of individuals whom we do not know, and therefore provides an extended evaluation domain.
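The metagraph structure underlying GTrust, directed edges from one set of vertices to another, can be sketched minimally. The class, method names, and reachability rule below are illustrative assumptions, not the paper's formal definitions:

```python
# Illustrative sketch of a metagraph: directed set-to-set edges.
# Names and the reachability rule are assumptions for illustration only.
class Metagraph:
    def __init__(self):
        self.edges = []  # each edge maps a set of vertices to a set of vertices

    def add_edge(self, invertex, outvertex):
        self.edges.append((frozenset(invertex), frozenset(outvertex)))

    def reachable(self, source_set, target):
        """Naive fixed-point check: can `target` be reached from `source_set`
        by repeatedly following edges whose invertex is already covered?"""
        covered = set(source_set)
        changed = True
        while changed:
            changed = False
            for inv, outv in self.edges:
                if inv <= covered and not outv <= covered:
                    covered |= outv
                    changed = True
        return target in covered

mg = Metagraph()
mg.add_edge({"alice", "bob"}, {"group1"})   # alice and bob jointly form group1
mg.add_edge({"group1"}, {"carol"})          # group1 vouches for carol
print(mg.reachable({"alice", "bob"}, "carol"))  # -> True
```

The key property this illustrates is that an edge can require a whole set of sources (alice *and* bob together), which ordinary graphs cannot express and which makes metagraphs a natural fit for group-based trust.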
H. Mohammadhassanzadeh; H. R. Shahriari
Abstract
In social networks, users need a proper estimation of trust in others to be able to initiate reliable relationships. Several trust evaluation mechanisms have been offered that use direct ratings to calculate or propagate trust values. However, in some web-based social networks where users only have binary relationships, no direct rating is available, so a new method is required to infer trust values in these networks. To bridge this gap, this paper proposes a new method that takes advantage of user similarity to predict trust values without any need for direct ratings. In this approach, which is grounded in socio-psychological studies, user similarity is calculated from profile information and the texts shared by users via text-mining techniques. Applying Ziegler ratios to our approach revealed that users are more than 50% more similar to their trusted agents than to arbitrary peers, which supports the central idea of the study: that trust can be inferred from language similarity. In addition, comparing real assigned ratings, gathered directly from users, with the experimental results indicated that the predicted trust values are sufficiently accurate (with a precision of 61%). We have also studied the benefits of using context in inferring trust; the analysis revealed that the precision of the predictions can be improved to up to 72%. Beyond its application in web-based social networks, the proposed technique can also help any direct-rating mechanism evaluate the correctness of trust values assigned by users, and increase the robustness of trust and reputation mechanisms against possible security threats.
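The core step, scoring how similar two users' texts are, can be sketched with a simple bag-of-words cosine. This is an illustrative stand-in for the authors' text-mining pipeline, and the profile strings are invented examples:

```python
# Illustrative sketch: text similarity as a proxy for trust.
# A bag-of-words cosine stands in for the paper's text-mining techniques.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts over word-count vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

profile_a = "hiking photography open source python"
profile_b = "python open source and photography"
profile_c = "stock trading forex signals"

# The approach maps higher text similarity to a higher predicted trust value.
print(cosine_similarity(profile_a, profile_b) > cosine_similarity(profile_a, profile_c))  # -> True
```

A production system would replace raw word counts with proper preprocessing (stemming, stop-word removal, TF-IDF weighting) and calibrate the similarity-to-trust mapping against held-out ratings, as the paper's precision figures suggest.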
H. Shakeri; A. Ghaemi Bafghi
Abstract
It is a common and useful task in a web of trust to evaluate the trust value between two nodes using intermediate nodes. This technique is widely used when the source node has no experience of direct interaction with the target node, or when direct trust is not reliable enough by itself. If trust is used to support decision-making, it is important to have not only an accurate estimate of trust, but also a measure of confidence in the intermediate nodes as well as in the final estimated trust value. The present paper thus introduces a novel framework for the integrated representation of trust and confidence using intervals, which provides two operations: trust interval multiplication and summation. The former is used for computing propagated trust and confidence, whereas the latter provides a formula for aggregating different trust opinions. The properties of the two operations are investigated in detail. This study also proposes a time-variant method that considers freshness, expertise level, and two similarity measures in confidence estimation. Experiments carried out on two well-known trust datasets are reported and analyzed, showing that the proposed method increases the accuracy of trust inference in comparison with existing methods.
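The two interval operations, multiplication for propagation along a path and summation for aggregating parallel opinions, can be illustrated with simple interval arithmetic. The specific rules below (endpoint products and endpoint averages) are illustrative assumptions, not the paper's exact operators:

```python
# Illustrative sketch: trust-with-confidence as an interval [low, high];
# a narrower interval means higher confidence. The propagation and
# aggregation rules here are simple assumed stand-ins for the paper's
# multiplication and summation operations.
def propagate(a, b):
    """Chain trust along a path by multiplying endpoints (both in [0, 1]),
    so trust can only decay as the path grows longer."""
    return (a[0] * b[0], a[1] * b[1])

def aggregate(intervals):
    """Combine parallel opinions by averaging endpoints, moving toward
    a consensus interval."""
    n = len(intervals)
    return (sum(i[0] for i in intervals) / n, sum(i[1] for i in intervals) / n)

path1 = propagate((0.8, 0.9), (0.7, 0.8))   # source -> intermediary -> target
path2 = propagate((0.6, 0.9), (0.9, 1.0))   # a second, independent path
combined = aggregate([path1, path2])
print(tuple(round(x, 2) for x in combined))  # -> (0.55, 0.81)
```

The appeal of the interval representation is that a single object carries both the trust estimate and the confidence in it, so decision logic can act on the interval width directly rather than tracking two separate scores.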