Sorour Sheidani; Ziba Eslami
Abstract
Nowadays, on the one hand, multimedia authentication techniques are widely used to achieve trustworthiness; on the other hand, due to the rapid growth of image processing software technologies, having a secure method to protect the copyright of these data is fairly essential. Multipurpose watermarking emerged in order to accomplish multimedia authentication and copyright protection simultaneously. In this paper, we propose a multipurpose watermarking method which achieves perfect security, the ability to detect tampered areas of the watermarked image, and a lower bit error rate (BER), at the cost of halving the capacity. This watermarking scheme is blind in the sense that on the receiver side, neither the original host image nor the embedded watermark is needed for ownership watermark extraction or tamper detection. Experimental results show that our method is able to reconstruct extracted tampered watermarks even after various attacks such as JPEG compression, average filtering, gamma correction, median filtering, speckle noise, sharpening, and Wiener filtering. Comparisons are provided with other multipurpose watermarking methods which primarily aim at the simultaneous goals of copyright protection and authentication. We also show the superiority of the proposed method over three watermarking methods attaining these objectives on a one-goal-at-a-time basis.
M. Doroudian; H. R. Shahriari
Abstract
Nowadays, information plays an important role in organizations. Sensitive information is often stored in databases. Traditional mechanisms such as encryption, access control, and authentication cannot provide a high level of confidence. Therefore, the existence of intrusion detection systems in databases is necessary. In this paper, we propose an intrusion detection system for detecting attacks at both the database transaction level and the inter-transaction level (user task level). For this purpose, we propose a detection method at the transaction level, which is based on describing the expected transactions within the database applications. Then, at the inter-transaction level, we propose a detection method that is based on anomaly detection and uses data mining to find dependency and sequence rules. The main advantage of this system, in comparison with previous database intrusion detection systems, is that it can detect malicious behaviors at both the transaction and inter-transaction levels. It also gains the advantages of a hybrid method, combining specification-based detection and anomaly detection, to minimize both false positive and false negative alarms. In order to evaluate the accuracy of the proposed system, several experiments were conducted. The results demonstrate that the true positive rate (recall) is higher than 80% and the false positive rate is lower than 10% across different data sets, given appropriately chosen ranges for the support and confidence thresholds. The experimental evaluation shows the high accuracy and effectiveness of the proposed system.
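The inter-transaction detection step relies on dependency rules mined under support and confidence thresholds. A minimal sketch of such rule mining over per-transaction item sets follows; the function name, the data layout, and the restriction to pairwise rules are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import Counter
from itertools import combinations

def mine_dependency_rules(transactions, min_support, min_confidence):
    """Support/confidence rule mining over per-transaction item sets.
    Illustrative sketch only: the paper mines dependency and sequence
    rules at the inter-transaction level; this shows the basic idea."""
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        items = set(t)
        item_count.update(items)
        pair_count.update(combinations(sorted(items), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n >= min_support:  # support: fraction of transactions with both items
            for x, y in ((a, b), (b, a)):
                confidence = c / item_count[x]  # confidence of rule x -> y
                if confidence >= min_confidence:
                    rules.append((x, y, c / n, confidence))
    return rules
```

For example, mining `[{"read", "write"}, {"read", "write"}, {"read"}]` with support 0.5 and confidence 0.9 keeps the rule "write implies read" (confidence 1.0) but drops "read implies write" (confidence 2/3).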
Vajiheh Sabeti; Mahsa Amerehei
Abstract
A steganography system must embed the message in the cover signal in an unseen and unrecognizable manner. Embedding information in transform coefficients, especially Discrete Wavelet Transform (DWT) coefficients, is one of the most successful approaches in this field. The method proposed in this paper has two main steps. In the first step, the XOR logical function was used to embed two bits of data in an adjacent DWT coefficient pair. No change in the coefficients occurs if the XOR of the low-order bits of the two adjacent coefficients is identical to the two bits of secret data. Otherwise, one or both of the coefficients need a one-unit increase or decrease. In the second step, a genetic algorithm was used to select, between the two possible solutions, a new value for each adjacent coefficient pair that needs to be changed. Using the genetic algorithm, the selections were made such that the generated stego image experiences the least change relative to the cover image. The results of comparing this method with existing methods at low and high embedding rates showed that the proposed method succeeds in producing stego images that score well on quality criteria. In addition, the SPAM steganalysis method did not show high accuracy in detecting it. One of the benefits of the proposed method is that only a short key is needed to embed and extract the secret message, which increases the security and feasibility of the proposed method.
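The first step, pairwise XOR embedding with at most a one-unit change, can be sketched as follows. The bit mapping (first secret bit carried by the LSB of the first coefficient, second bit by the XOR of both LSBs) and integer-valued coefficients are assumptions for illustration, not the paper's exact rule, and the genetic-algorithm choice between +1 and -1 is omitted.

```python
def embed_pair(c1, c2, b1, b2):
    """Embed secret bits (b1, b2) in one adjacent coefficient pair.
    Illustrative mapping: b1 <- LSB of c1, b2 <- XOR of both LSBs."""
    if (c1 & 1) != b1:
        c1 += 1  # +1 and -1 both flip the LSB; a GA would pick the better one
    if ((c1 & 1) ^ (c2 & 1)) != b2:
        c2 += 1
    return c1, c2

def extract_pair(c1, c2):
    """Recover the two embedded bits from the pair."""
    return c1 & 1, (c1 & 1) ^ (c2 & 1)
```

When the pair's bits already match the secret bits, no coefficient is changed, which is the "no change" case the abstract describes.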
H. Mohammadhassanzadeh; H. R. Shahriari
Abstract
In social networks, users need a proper estimation of trust in others to be able to establish reliable relationships. Some trust evaluation mechanisms have been offered which use direct ratings to calculate or propagate trust values. However, in some web-based social networks where users only have binary relationships, no direct ratings are available. Therefore, a new method is required to infer trust values in these networks. To bridge this gap, this paper proposes a new method which takes advantage of user similarity to predict trust values without any need for direct ratings. In this approach, which is based on socio-psychological studies, user similarity is calculated from profile information and the texts shared by the users, via text-mining techniques. Applying Ziegler ratios to our approach revealed that users are more than 50% more similar to their trusted agents than to arbitrary peers, which supports the validity of the study's original idea of inferring trust from language similarity. In addition, comparing the real assigned ratings, gathered directly from users, with the experimental results indicated that the predicted trust values are sufficiently acceptable (with a precision of 61%). We have also studied the benefits of using context in inferring trust; in this regard, the analysis revealed that the precision of the predictions can be improved up to 72%. Besides its application in web-based social networks, the proposed technique can also be of much help in any direct rating mechanism, to evaluate the correctness of trust values assigned by users and to increase the robustness of trust and reputation mechanisms against possible security threats.
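The core intuition, that trust correlates with language similarity, can be pictured with a toy bag-of-words cosine similarity. The paper's actual text-mining pipeline is richer; this function is only an illustrative stand-in with naive tokenization and weighting.

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Toy bag-of-words cosine similarity between two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)  # shared-word overlap
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Scores near 1 indicate similar vocabularies (and, under the paper's hypothesis, higher likely trust); disjoint vocabularies score 0.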
Faeze Rasouli; Mohammad Taheri; Reza Rohani Sarvestani
Abstract
Fragile watermarking is the task of embedding a watermark in a medium (an image in this paper) such that even small changes, called tampers, can be detected or even recovered, to prevent unauthorized alteration. A well-known category of spatial fragile watermarking methods is based on embedding the watermark in the least significant bits of the image to preserve its quality. In addition, the Hamming code is a coding scheme in communication that transmits data bits augmented with some check bits in order to exactly detect and recover single-bit modifications. This property has previously been used to detect and perfectly recover images modified by small tampers less than a quarter of the image in diameter. To achieve this goal, the Hamming code is applied to a distributed pixel whose bits are gathered from sufficiently far-apart pixels in the image. This guarantees that such tampers can toggle at most one bit of each distributed Hamming code, which is recoverable. To the best of our knowledge, it was the only method guaranteeing perfect reconstruction of small tampers. In this paper, the method has been extended to support distortion in two bits of a Hamming code by using common structures of distributed codes. This leads to guaranteed recovery of tampers less than half of the image in width and height. According to the experimental results, the proposed method achieves better performance in recovering tampered areas in comparison to the state of the art.
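The single-bit recovery guarantee rests on the classic Hamming code. A minimal Hamming(7,4) sketch follows; the paper's distribution of the seven bits across far-apart pixels, and its two-bit extension, are omitted.

```python
def hamming74_encode(d):
    """Hamming(7,4): protect 4 data bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_correct(c):
    """Locate and fix any single flipped bit via the syndrome."""
    c = list(c)
    s = 0
    for i, bit in enumerate(c, start=1):
        if bit:
            s ^= i  # syndrome = XOR of positions holding a 1
    if s:  # nonzero syndrome points at the flipped position
        c[s - 1] ^= 1
    return c
```

Because a small tamper touches at most one bit of each distributed codeword, the syndrome pinpoints and repairs every damaged codeword, which is exactly the perfect-reconstruction argument.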
Rawan Flifel
Abstract
Wireless networks, the Internet of Things (IoT), the Internet of Everything (IoE), and smart homes have become extremely important terms in our present-day life. Most buildings, companies, institutions, and even homes depend on these technologies for interaction, communication, automation, and everything surrounding humans. To understand the advanced topics in wireless networks and IoT devices, it is helpful to use a practical learning tool called Packet Tracer, a network simulator made freely available by the Cisco Networking Academy. In this paper, we use Packet Tracer to design a smart home based on wireless and IoT devices and illustrate how to create different networking scenarios to make our homes more comfortable and convenient.
M. Yahyazadeh; M. Abadi
Abstract
Botnets are recognized as one of the most dangerous threats to the Internet infrastructure. They are used for malicious activities such as launching distributed denial-of-service attacks, sending spam, and leaking personal information. Existing botnet detection methods offer a number of good ideas, but they are far from complete, since most of them cannot detect botnets in an early stage of their lifecycle; moreover, they depend on a particular command and control (C&C) protocol. In this paper, we address these issues and propose an online unsupervised botnet detection method, called BotOnus, that does not require a priori knowledge of botnets. It extracts a set of flow feature vectors from the network traffic at the end of each time period and then groups them into flow clusters using a novel online fixed-width clustering algorithm. Flow clusters that have at least two members and whose intra-cluster similarity is above a similarity threshold are identified as suspicious botnet clusters, and all hosts in such clusters are identified as bot-infected. We demonstrate the effectiveness of BotOnus in detecting various botnets, including HTTP-, IRC-, and P2P-based botnets, using a testbed network. The experimental results show that it can successfully detect various botnets with an average detection rate of 94.33% and an average false alarm rate of 3.74%.
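The grouping step can be pictured with a generic online fixed-width clustering pass: each incoming vector joins the first cluster whose centroid lies within a fixed width, or starts a new cluster. BotOnus's feature extraction, similarity threshold, and two-member rule are not reproduced; the function name and parameters are assumptions.

```python
import math

def fixed_width_cluster(vectors, width):
    """Online fixed-width clustering sketch. Each cluster keeps a
    centroid and a member count; vectors are processed one at a time."""
    clusters = []  # list of (centroid, count)
    for v in vectors:
        for i, (c, n) in enumerate(clusters):
            if math.dist(c, v) <= width:
                # incremental centroid update: running mean of members
                c = [(ci * n + vi) / (n + 1) for ci, vi in zip(c, v)]
                clusters[i] = (c, n + 1)
                break
        else:
            clusters.append((list(v), 1))  # no cluster close enough: open a new one
    return clusters
```

Because each vector is handled once and then discarded, this style of clustering suits online traffic analysis where flows arrive continuously.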
A.A Sadeghi; F. Aminmansour; H.R. Shahriari
Abstract
Code reuse attacks such as return-oriented programming and jump-oriented programming are the most popular exploitation methods among attackers. A large number of practical and non-practical defenses have been proposed that differ in their overhead, source code requirements, detection rate, and implementation dependencies. However, a common aspect among these methods is the consideration of the typical behavior of code reuse attacks, which is the construction of a gadget chain. Therefore, the definition of a gadget and the minimum size of an attack chain are a matter of controversy. Conservative or relaxed thresholds may cause false positive and false negative alarms, respectively. The main contribution of this paper is to provide a tricky aspect of code reuse techniques, called tiny code reuse attacks (Tiny-CRA), that demonstrates the ineffectiveness of threshold-based detection methods. We show that, with bare minimum assumptions, Tiny-CRA can reduce the size of a gadget chain in such a way that no distinction can be made between the normal behavior of a program and a code-reuse execution. To do so, we exhibit our Tiny-CRA primitives and introduce a useful gadget set available in libc. We demonstrate the effectiveness of our approach by implementing nine different shell-codes and exploiting a real-world buffer overflow vulnerability in HT Editor 2.0.20.
R. Ramezanian
Abstract
Many security protocols aim at authenticating one agent, acting as initiator, to another agent, acting as responder, and vice versa. Sometimes authentication fails because several parallel sessions of a protocol are executed, and because an agent may play both the initiator and responder roles in parallel sessions. We take advantage of the notion of transition systems to specify authentication for the execution of multiple parallel sessions. To model authentication, two main notions, the agent's scope and the agent's recognizability, are introduced, which consider the differing abilities of agents due to their different roles in the protocol and their different access to keys and secrets. To formalize the above notions, a process algebra equipped with primitives for manipulating cryptographic messages is used. We formalize some security protocols and examine our definition of authentication on them. We discuss only the symmetric-key case.
E. Khoshhalpour; H. R. Shahriari
Abstract
Nowadays, botnets are considered essential tools for planning serious cyber attacks. Botnets are used to perform various malicious activities such as DDoS attacks and sending spam emails. Different approaches have been presented to detect botnets; however, most of them may be ineffective when there are only a few infected hosts in the monitored network, as they rely on the similarity of bot activities to detect the botnet. In this paper, we present a host-based method that can detect individual bot-infected hosts. This approach is based on the botnet life-cycle, which includes symptoms common to almost all types of botnets despite their differences. We analyze the network activities of each process running on the host and propose some heuristics to distinguish the behavioral patterns of a bot process from legitimate ones, based on statistical features of packet sequences, and evaluate an overall security risk for each process. To show the effectiveness of the approach, a tool named BotRevealer has been implemented and evaluated using real botnets and several popular applications. The results show that, in spite of the diversity of botnets, BotRevealer can effectively detect the bot process among other active processes.
Bashir Naderi; Hossein Kheiri; Vajiheh Vafaei
Abstract
In this study, we propose a secure communication scheme based on the synchronization of two identical fractional-order chaotic systems. The fractional-order derivative is in the Caputo sense, and for synchronization, we use a robust sliding-mode control scheme. The designed sliding surface is kept simple by using a special technique for fractional-order systems. Also, unlike most works, the fractional orders of the derivatives of the state variables can be chosen differently. The stability of the error system is proved using the Lyapunov stability theory of fractional-order systems. Numerical simulations illustrate the ability and effectiveness of the proposed method. Moreover, the synchronization results are applied to secure communication using the masking method. The security analysis demonstrates that the introduced algorithm has a large key space, high sensitivity to encryption keys, high security, and acceptable performance speed.
Siavash Ahmadi; Zahra Ahmadian; Javad Mohajeri; Mohammad Reza Aref
Abstract
In the biclique attack, a shorter biclique usually results in less data complexity, but at the expense of more computational complexity. The early-abort technique can be used in the partial-matching part of the biclique attack in order to slightly reduce the computations. In this paper, we make use of this technique, but instead of slightly improving the computational complexity, we keep this complexity the same and reduce the data complexity enormously by using a shorter biclique. With this approach, we analyze full-round LBlock, as well as LBlock with a modified key schedule (which was designed to resist the biclique attack), both with data complexity 2^12, while the data complexity of the best biclique attack on the former was 2^52 and, for the latter, no attack on the full-round cipher exists so far. We then propose a new key schedule that is more resistant against biclique cryptanalysis, though the low diffusion of the cipher makes it vulnerable to this attack regardless of the strength of the key schedule. Using this method, we also analyze TWINE-80 with 2^12 data complexity; the lowest data complexity of prior attacks on TWINE-80 was 2^60. In all the attacks presented in this paper, the computational complexities are slightly improved in comparison to the existing attacks.
S. Ghasemi Falavarjani; Behrouz Tork Ladani; S. Ghasemi
Abstract
An important issue in P2P networks is the existence of malicious nodes, which decreases the performance of such networks. Reputation systems, in which nodes are ranked based on their behavior, are one of the proposed solutions to detect and isolate malicious (low-ranked) nodes. Gossip Trust is an interesting previously proposed algorithm for reputation aggregation in P2P networks based on the concept of gossip. Despite its important contribution, this algorithm has deficiencies, especially with a high number of nodes, which lead to high execution time and low accuracy in the results. In this paper, a Grouped Gossip-based Reputation Aggregation (GGRA) algorithm is proposed. In GGRA, Gossip Trust is executed within each group, between group members, and between groups, instead of over the whole network. Due to the reduction in the number of nodes and the use of a strongly connected graph instead of a weakly connected one, the gossip algorithm in GGRA executes quickly. With grouping, not only is reputation aggregation expected to be more scalable, but, because of the decrease in the number of errors in the gossiped communication, the results also become more accurate. The evaluation of the proposed algorithm and its comparison with Gossip Trust confirms the expected results.
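The gossip principle behind such aggregation can be illustrated with a push-sum averaging sketch: each node repeatedly pushes half of its (sum, weight) pair to a random peer, and every node's ratio converges to the global average. Gossip Trust aggregates weighted reputation scores on this principle, and GGRA runs it on smaller per-group node sets; the names and parameters here are illustrative assumptions.

```python
import random

def gossip_average(values, rounds=2000, seed=1):
    """Push-sum gossip sketch: ratios s[i]/w[i] converge to the mean."""
    rng = random.Random(seed)
    s = list(values)
    w = [1.0] * len(values)
    for _ in range(rounds):
        i = rng.randrange(len(values))
        j = rng.randrange(len(values))
        # node i keeps half of its mass and pushes the other half to node j
        s[i] /= 2
        w[i] /= 2
        s[j] += s[i]
        w[j] += w[i]
    return [si / wi for si, wi in zip(s, w)]
```

Running the same exchange on fewer nodes (as grouping does) needs fewer rounds to converge, which is the intuition behind GGRA's speedup.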
M. Kamarei; A. Patooghy; M. Fazeli
Abstract
Wireless Sensor Networks (WSNs) offer inherent packet redundancy, since each point within the network area is covered by more than one sensor node. This phenomenon, known as sensor co-coverage, is used in this paper to detect unauthenticated events. Unauthenticated event broadcasting in a WSN imposes network congestion, worsens the packet loss rate, and increases the network energy consumption. In the proposed method, called "the more the safe, the less the unsafe" (MSLU), each reported event must be confirmed by multiple sensor nodes; otherwise, the event is dropped. Indeed, the proposed method tends to forward event occurrence reports that are detected by multiple sensor nodes. The proposed method is evaluated by means of simulation as well as analytical modeling. A wide range of simulations, carried out using NS-2, show that the proposed method detects more than 85% of unauthenticated events. This comes at the cost of only a 20% increase in network end-to-end delay, since the proposed method does not impose extra delay on incoming packets. In addition, the proposed method is evaluated by means of an analytical model based on queuing networks. The model accurately estimates the performance of a network utilizing the proposed unauthenticated event detection method.
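The confirmation rule at the heart of co-coverage filtering can be sketched in a few lines: forward an event only if it was reported by enough distinct sensor nodes. The function name, parameter, and data layout are illustrative assumptions, not MSLU's exact protocol.

```python
from collections import defaultdict

def filter_events(reports, min_confirmations=2):
    """MSLU-style filtering sketch: keep only events reported by at
    least `min_confirmations` distinct sensor nodes.

    `reports` is an iterable of (node_id, event_id) pairs."""
    seen = defaultdict(set)
    for node_id, event_id in reports:
        seen[event_id].add(node_id)  # distinct reporters per event
    return {e for e, nodes in seen.items() if len(nodes) >= min_confirmations}
```

Events seen by a single node (possibly a compromised or faulty one) are dropped, which is how co-coverage suppresses unauthenticated broadcasts.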
A. Tavakoly; R. Ebrahimi Atani
Abstract
The Tor network is probably one of the most popular online anonymity systems in the world. It is built from volunteer relays all around the world. It has a strong scientific basis and is structured to work well in low-latency mode, which makes it suitable for tasks such as web browsing. Despite these advantages, the low latency also makes Tor insecure against timing and traffic analysis attacks, which have been the most dominant attacks on the Tor network in recent years. In this paper, we first classify all kinds of attacks on the Tor network and then describe timing and traffic analysis attacks in more detail. We then present a new circuit scheduling for the Tor network that preserves two properties: fairness and randomness. Both properties aim to make pattern and timing analysis attacks more difficult and, in some cases, even impractical. Our scheduler distorts timing patterns and packet sizes in a random way (randomness) without imposing artificial delays or padding (fairness). Finally, using our new scheduler, one of the most powerful attacks in this area is debilitated, and it is shown that analyzing traffic patterns and packet sizes becomes more difficult.
Saeed Banaeian Far; Maryam Rajabzadeh Assar
Abstract
A certificateless (CL) signcryption scheme is a cryptographic primitive that provides user authentication and message confidentiality at the same time. CL signcryption schemes (as a type of certificateless encryption scheme) solve the problem of a malicious server: the server that issues users' partial private keys and certificates cannot obtain users' signing keys. Therefore, a CL signcryption scheme is an excellent choice for protecting users' signing keys while providing user authentication and message confidentiality. Moreover, signcryption schemes have lower computational costs than separate signature and encryption schemes. The present study presents a short and efficient CL signcryption scheme based on the hyperelliptic curve (HC). Applying the HC as the calculation base of the presented CL signcryption scheme reduces the key length from 160 bits to 80 bits. The presented scheme is shorter than other recently proposed ones in terms of communication overhead, with a length less than one-third of the shortest among them. Moreover, it is more efficient than other recently proposed CL signcryption schemes in user-side computational cost: the key generation and user key generation phases have been halved in total. Finally, the security of the presented CL signcryption scheme is analyzed in the random oracle (RO) model based on the hardness of the point factorization problem (PFP) on the HC.
Vahid Chegeni; Hamid Haj Seyyed Javadi; Mohammad Reza Moazami Goudarzi; Afshin Rezakhani
Abstract
Today, the Internet of Things (IoT) is one of the emerging technologies that enable the connection and transfer of information through communication networks. The main idea of the IoT is the widespread presence of objects such as mobile devices, sensors, and RFID tags. With the increase in traffic volume in urban areas, an intelligent urban traffic management system based on the IoT can be vital. Therefore, this paper focuses on security in IoT-based urban traffic using RFID, with RFID tags chosen for this purpose. We present a mutual authentication protocol that preserves privacy, based on hybrid cryptography, and propose an authentication process with RFID tags that can be read at high speed. The protocol attempts to reduce computational complexity. At the same time, the proposed method can withstand attacks such as spoofing of the tag and the reader, tag tracking, and replay attacks.
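As a generic point of comparison (not the paper's protocol), a textbook nonce-based mutual authentication between a reader and a tag sharing a key can be sketched as follows; all names, labels, and message formats here are illustrative assumptions, and the fresh nonces are what defeat replay attacks.

```python
import hmac
import hashlib
import secrets

def auth_mac(key, *parts):
    # MAC over a labelled transcript; the label separates the two directions
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

def verify(key, expected, *parts):
    return hmac.compare_digest(expected, auth_mac(key, *parts))

k = secrets.token_bytes(16)          # shared secret between reader and tag
n_reader = secrets.token_bytes(8)    # reader's challenge nonce
n_tag = secrets.token_bytes(8)       # tag's challenge nonce

# Tag -> reader: proves knowledge of k, bound to both fresh nonces
tag_response = auth_mac(k, b"tag", n_reader, n_tag)
# Reader -> tag: proves knowledge of k, bound to the tag's nonce
reader_response = auth_mac(k, b"reader", n_tag)
```

Replaying an old `tag_response` fails because the reader issues a new `n_reader` each session, and a party without `k` cannot produce a valid MAC at all.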
A. Diyanat; M. A. Akhaee; Sh. Ghaemmaghami
Abstract
This paper presents a robust multiplicative video watermarking scheme. In this regard, the video signal is segmented into cube-like 3-D blocks, and then the 3-D wavelet transform is applied to each block. The low-frequency components of the wavelet coefficients are then used for data embedding to make the process robust against both malicious and unintentional attacks. The hidden message is inserted by multiplying or dividing these coefficients by a constant parameter which controls the power of the watermark. The watermark extraction relies on a maximum-likelihood-based procedure, observing the distribution of the watermarked coefficients. The performance of the proposed scheme has been verified via simulations and found to be superior to some well-known existing video watermarking methods.
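The multiply-or-divide embedding and its maximum-likelihood detection can be caricatured in one dimension: assuming i.i.d. zero-mean Gaussian coefficients with known variance, the likelihood-ratio test reduces to comparing the energy of the received coefficients against a closed-form threshold. The paper's detector works with the actual (non-Gaussian) coefficient distribution; this sketch and its names are assumptions.

```python
import math
import random

def embed(coeffs, bit, g):
    """Multiplicative embedding: scale by g for bit 1, by 1/g for bit 0."""
    f = g if bit else 1.0 / g
    return [c * f for c in coeffs]

def detect(coeffs, g, sigma):
    """Blind ML detection for i.i.d. N(0, sigma^2) hosts and g > 1.
    The likelihood-ratio test reduces to an energy threshold:
    decide bit 1 iff sum(y^2) > 4*N*sigma^2*g^2*ln(g)/(g^4 - 1)."""
    n = len(coeffs)
    threshold = 4 * n * sigma**2 * g**2 * math.log(g) / (g**4 - 1)
    return 1 if sum(c * c for c in coeffs) > threshold else 0
```

Multiplication by g inflates the coefficient energy while division deflates it, so with enough coefficients per bit the two hypotheses separate cleanly even without the original host.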
Mahdi Sajadieh; Arash Mirzaei
Abstract
In terms of security, MDS matrices are one of the best choices for the diffusion layer of block ciphers. However, as these matrices grow in size, their software implementation becomes a challenge. In this paper, to benefit from the properties of MDS matrices and avoid this challenge, we use 4*4 MDS matrices to build 16*16 matrices with a low number of zero elements. We show that if these matrices are used as the diffusion layer of software-based SPN structures, the resulting block ciphers have properties similar to AES in software implementation complexity (i.e., the number of required CPU instructions) and in resistance against linear and differential attacks. Moreover, the best impossible differential and square distinguishers for the proposed 16*16 structures have lengths similar to those of SPN structures with 16*16 MDS matrices. Thus, the new structures outperform AES with respect to impossible differential and square attacks. Additionally, we show that if the proposed SPN structure uses the AES key schedule, its results against differential related-key attacks are better than those of AES. We also extend the idea and use 4*4 MDS matrices to design 24*24 and 32*32 matrices with acceptable properties for SPN structure design. Finally, we extend the idea to propose matrices for Feistel structures with SP-type F-functions. We show that the resulting structures are more secure than the improved type-II GFS.
N. Soltani; R. Bohlooli; R. Jalili
Abstract
One of the security issues in data outsourcing is the enforcement of the data owner's access control policies, which involves several challenges. The first challenge is preserving the confidentiality of data and policies. One existing solution is encrypting data before outsourcing, which brings new challenges of its own, namely: the number of keys required to access authorized resources, efficient policy updating, write access control enforcement, and the overhead of accessing and processing data on the user/owner side. Most existing solutions address only some of these challenges while imposing high overhead on both owner and users. Although policy management in the Role-Based Access Control (RBAC) model is easier and more efficient thanks to the role hierarchy and role inheritance, most existing solutions address only the enforcement of policies in the form of an access control matrix. In this paper, we propose an approach to enforce RBAC policies on encrypted data outsourced to a service provider. We utilize the Chinese Remainder Theorem for key management and role/permission assignment. Efficient user revocation, efficient updating of the role hierarchy, availability of authorized resources to users of new roles, and enforcement of write access control policies as well as static separation of duties are among the advantages of the proposed solution.
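The Chinese Remainder Theorem lets one published value carry several role-specific values at once: each role holds a modulus and recovers its own residue locally. A minimal sketch follows; the variable names and the key-per-role assignment are assumptions for illustration, not the paper's exact construction.

```python
from math import gcd

def crt_combine(residues, moduli):
    """Solve x = r_i (mod n_i) for pairwise coprime moduli (CRT).
    Role i, holding modulus n_i, recovers its value as x mod n_i."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli)
               for b in moduli[i + 1:]), "moduli must be pairwise coprime"
    total = 1
    for n in moduli:
        total *= n
    x = 0
    for r, n in zip(residues, moduli):
        m = total // n
        x += r * m * pow(m, -1, n)  # pow(m, -1, n): modular inverse of m mod n
    return x % total
```

For example, with role values [5, 11, 8] and role moduli [7, 13, 17], the owner publishes `x = crt_combine([5, 11, 8], [7, 13, 17])`, and each role recovers its value as `x % n_i` without learning the others' moduli.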
Maryam Zarezadeh; Maede Ashouri-Talouki; Mohammad Siavashi
Abstract
An electronic health record (EHR) system facilitates integrating patients' medical information and improves service productivity. However, granting users access to patient data in a privacy-preserving manner is still a challenging problem. Many studies have addressed security and privacy in EHR systems. Rezaeibagha ...
Read More
An electronic health record (EHR) system facilitates integrating patients' medical information and improves service productivity. However, granting users access to patient data in a privacy-preserving manner is still a challenging problem. Many studies have addressed security and privacy in EHR systems. Rezaeibagha and Mu [1] proposed a hybrid architecture for privacy-preserving access to patient records in a cloud system. In their scheme, encrypted EHRs are stored in multiple clouds to provide scalability and privacy. In addition, they adopted role-based access control (RBAC), so an EHR access policy must be determined for every user. They also encrypt the EHRs under the public keys of all users; hence, for a large number of EHRs, the scheme is not efficient. Furthermore, using RBAC for the access policy makes policy changes difficult. In their scheme, users cannot search over encrypted EHRs by disease, and some physicians must participate whenever a requesting physician retrieves data. In this paper, we address these problems by employing ciphertext-policy attribute-based encryption (CP-ABE), which is conceptually close to traditional access control methods such as RBAC. Our secure scheme can retrieve encrypted EHRs for a specific disease. Furthermore, the proposed scheme enforces user access control and preserves the anonymity of the user and the data owner during data retrieval. Moreover, our scheme is resistant to collusion among unauthorized retrievers attempting to access the data. The analysis shows that our scheme is secure and efficient for cloud-based EHRs.
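A full CP-ABE construction needs pairing-based cryptography and is beyond a short sketch, but the disease-based search capability mentioned above can be illustrated in simplified form with a keyed-token index: the server stores only HMAC tokens, so it can answer equality queries on the disease keyword without ever seeing the plaintext. This is a generic searchable-index stand-in, not the paper's scheme; all names are hypothetical.

```python
import hmac
import hashlib

def keyword_token(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server only ever sees this digest."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side index mapping opaque tokens to (encrypted) record ids."""
    def __init__(self):
        self._index = {}

    def add(self, token: bytes, record_id: str):
        self._index.setdefault(token, []).append(record_id)

    def search(self, token: bytes):
        return self._index.get(token, [])

# Owner side: index each encrypted EHR under a token derived from its disease.
k = b"shared-secret-between-owner-and-authorized-users"  # illustrative only
index = EncryptedIndex()
index.add(keyword_token(k, "diabetes"), "ehr-001")
index.add(keyword_token(k, "asthma"), "ehr-002")

# Authorized user side: derive the same token and query by disease.
hits = index.search(keyword_token(k, "Diabetes"))
```

In a real scheme the token key would itself be protected by the CP-ABE access policy, so only users whose attributes satisfy the policy can form valid queries.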
K. Baghery; B. Abdolmaleki; B. Akhbari; M. R. Aref
Abstract
Nowadays, Radio Frequency Identification (RFID) systems are used in many identification and authentication applications. In some sensitive applications, providing secure and confidential communication is very important for end-users. To this aim, different RFID authentication protocols have been ...
Read More
Nowadays, Radio Frequency Identification (RFID) systems are used in many identification and authentication applications. In some sensitive applications, providing secure and confidential communication is very important for end-users. To this aim, various RFID authentication protocols have been proposed that try to provide security and privacy for RFID users. In this paper, we analyze the privacy of two RFID authentication protocols proposed in 2012 and 2013. We present several attacks against the first protocol, including traceability, backward traceability, and forward traceability attacks. We also show that the second protocol not only suffers from a Denial-of-Service (DoS) attack but is also vulnerable to traceability and backward traceability attacks. We present our privacy analysis based on the well-known formal RFID privacy model proposed by Ouafi and Phan in 2008. Then, in order to overcome the weaknesses, we apply some modifications to these protocols and propose two modified versions.
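The two analyzed protocols are not reproduced here, but the traceability pattern the Ouafi-Phan model captures can be shown with a toy tag: if an adversary blocks the reader's final confirmation message, the tag never refreshes its key, so replaying the same challenge yields the same response and links two sessions to one tag. This is a minimal illustration under those assumptions, not either protocol itself.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for the protocol's PRF."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

class ToyTag:
    """Tag that refreshes its key only after the reader confirms a session."""
    def __init__(self, key: bytes):
        self.key = key

    def respond(self, challenge: bytes) -> bytes:
        return h(self.key, challenge)

    def confirm(self):
        # reader's final message; an active adversary can block it
        self.key = h(self.key)

tag = ToyTag(os.urandom(16))
c = os.urandom(16)           # adversary replays one fixed challenge

r1 = tag.respond(c)
r2 = tag.respond(c)          # confirm() was blocked: same key, same response
traceable = (r1 == r2)       # identical responses link the two sessions

tag.confirm()                # after an unblocked session the key changes
r3 = tag.respond(c)          # now the replayed challenge gets a new response
```

Fixes in the literature typically make the response depend on per-session tag randomness as well, so that blocking the update no longer produces linkable outputs.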
E. K. Mabuto; H. S. Venter
Abstract
The products of graphic design applications leave behind traces of digital information which can be used during a digital forensic investigation in cases where counterfeit documents have been created. This paper analyzes the digital forensics involved in the creation of counterfeit documents. This is ...
Read More
The products of graphic design applications leave behind traces of digital information which can be used during a digital forensic investigation in cases where counterfeit documents have been created. This paper analyzes the digital forensics involved in the creation of counterfeit documents. This is achieved by first recognizing the digital forensic artifacts left behind by the use of graphic design applications, and then analyzing the files associated with these applications. When analyzing digital forensic artifacts generated by an application, the specific focus is on determining whether the graphic design application was installed, whether the application was used, and whether an association can be made between the application’s actions and the digital crime. This is accomplished by locating such information in the registry, log files, and prefetch files. The file analysis involves examining files associated with these applications for file signatures and metadata. In the end, it becomes possible to determine whether or not a system has been used for creating counterfeit documents.
A. Sedaghatbaf; M. Abdollahi Azgomi
Abstract
Quantitative, model-based prediction of security at the architecture design stage facilitates early detection of design faults, thereby reducing modification costs in subsequent stages of the software life cycle. However, an important question arises with respect to the accuracy of input parameters. In practice, ...
Read More
Quantitative, model-based prediction of security at the architecture design stage facilitates early detection of design faults, thereby reducing modification costs in subsequent stages of the software life cycle. However, an important question arises with respect to the accuracy of input parameters. In practice, security parameters can rarely be estimated accurately due to the lack of sufficient knowledge, and this inaccuracy is ignored by most existing evaluation methods. The aim of this paper is to explicitly consider parameter uncertainty in the software security evaluation process. In particular, we use the Dempster-Shafer theory of evidence to formulate the uncertainties in input parameters and determine their effects on output measures. In the proposed method, security attacks are expressed using UML diagrams (i.e., misuse case and mal-activity diagrams), and security parameters are specified using the SecAM profile. UML/SecAM models are then transformed into attack trees, which allow quantifying the probability of security breaches. The applicability of the method is validated by a case study on an online marketing system.
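The Dempster-Shafer machinery mentioned above rests on combining mass functions from independent evidence sources; Dempster's rule of combination can be sketched as follows (the example masses are illustrative, not from the paper's case study).

```python
def combine(m1, m2):
    """Dempster's rule of combination.

    Masses are dicts keyed by frozenset focal elements; values sum to 1.
    Conflicting mass (empty intersections) is discarded and the rest
    renormalized.
    """
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Illustrative: two experts' beliefs about whether an attack step succeeds.
S, F = frozenset({"succeeds"}), frozenset({"fails"})
m_expert1 = {S: 0.6, S | F: 0.4}          # 0.4 left uncommitted (uncertainty)
m_expert2 = {F: 0.3, S | F: 0.7}

m_fused = combine(m_expert1, m_expert2)
```

Propagating such interval-valued beliefs up an attack tree (AND nodes multiplying, OR nodes complementing) is what lets the method bound breach probabilities under parameter uncertainty.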
M. Imani; Gh. A. Montazer
Abstract
The aim of phishing is to steal users' private information without their permission by designing a new website that mimics a trusted one. Information technology specialists do not agree on a unique definition of the discriminative features that characterize phishing websites. ...
Read More
The aim of phishing is to steal users' private information without their permission by designing a new website that mimics a trusted one. Information technology specialists do not agree on a unique definition of the discriminative features that characterize phishing websites. Therefore, the number of reliable training samples in phishing detection problems is limited. Moreover, among the available training samples, there are abnormal samples that cause classification errors. For instance, there may be phishing samples whose features resemble those of legitimate websites, and vice versa. A supervised feature extraction method, called weighted feature line embedding, is proposed in this paper to solve these problems. The proposed method virtually generates training samples by utilizing the feature line metric; hence, it can solve the small sample size problem. Moreover, by assigning appropriate weights to each pair of feature points, it mitigates the undesirable effect of abnormal samples. The features extracted by our method improve the performance of phishing website detection, especially with small training sets.
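The feature line metric underlying the method can be sketched generically: any two same-class training samples span a line, and the projection of a query point onto that line acts as a virtual training sample. This is the plain nearest-feature-line projection, not the paper's weighted variant.

```python
def feature_line_point(x, x1, x2):
    """Project x onto the feature line through training samples x1 and x2.

    The projection is the "virtual" training sample; its distance to x is
    the feature-line distance used in nearest-feature-line classification.
    Points are plain lists of floats.
    """
    d = [b - a for a, b in zip(x1, x2)]
    num = sum((xi - ai) * di for xi, ai, di in zip(x, x1, d))
    den = sum(di * di for di in d)
    t = num / den                      # position along the line (can be <0 or >1)
    return [ai + t * di for ai, di in zip(x1, d)]

def feature_line_distance(x, x1, x2):
    """Euclidean distance from x to the feature line through x1 and x2."""
    p = feature_line_point(x, x1, x2)
    return sum((xi - pi) ** 2 for xi, pi in zip(x, p)) ** 0.5
```

Because t may fall outside [0, 1], feature lines interpolate and extrapolate between samples, which is how a small training set is virtually enlarged; the weighting in the paper's method then down-weights lines formed by abnormal samples.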