Designing a Fuzzy-Logic Based Trust
and Reputation Model for Secure Resource Allocation in Cloud Computing
Kamalanathan Chandran, Valarmathy Shanmugasudaram and Kirubakaran
Subramani
Department of Electronics and Communication Engineering, Bannari Amman
Institute of Technology,
India
Abstract: The main aim of this research is to design and improve a fuzzy logic and neural network based trust and reputation model for secure resource allocation in cloud computing. Cloud computing is currently one of the main topics discussed among IT professionals. To address security, our proposed approach employs a trust manager and a reputation manager. First, the user accesses a resource block through the scheduling manager, and after the resource block has been accessed a form is sent to the user to fill in the characteristic values of the trust factor and the reputation factor. The trust and reputation values are then computed for the resource center and passed to the fuzzy logic system and the neural network to obtain a security score for that resource center. The advantage of our suggested method is that it offers security controls when accessing cloud resources, in view of the various security issues that occur in networks, databases, resource scheduling, transaction management and load balancing.
Keywords: Trust factor, reputation factor, fuzzy logic
system, security score, resource center.
Received May 25, 2013; accepted June 19, 2013
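As an illustration of how a trust factor and a reputation factor might be fused into a single security score, the following minimal sketch uses simple triangular fuzzy memberships and a small rule base; the membership breakpoints, rule outputs and function names are illustrative assumptions, not the authors' implementation.

    # Hedged sketch: fusing trust and reputation factors into a security score.
    # The membership functions and rules below are illustrative assumptions only.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify(value):
        """Degrees of 'low', 'medium', 'high' for a factor in [0, 1]."""
        return {"low": tri(value, -0.01, 0.0, 0.5),
                "medium": tri(value, 0.0, 0.5, 1.0),
                "high": tri(value, 0.5, 1.0, 1.01)}

    def security_score(trust, reputation):
        """Mamdani-style rule evaluation with a centroid-like defuzzification."""
        t, r = fuzzify(trust), fuzzify(reputation)
        rules = [
            (min(t["high"], r["high"]), 0.9),      # both high   -> secure
            (min(t["medium"], r["medium"]), 0.5),  # both medium -> moderate
            (max(t["low"], r["low"]), 0.1),        # either low  -> insecure
        ]
        num = sum(w * s for w, s in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 0.0

    if __name__ == "__main__":
        print(round(security_score(0.8, 0.7), 3))  # resource center with good factors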
Efficient
Transmission of PKI Certificates using ECC and its Variants
Shivkumar Selvakumaraswamy1, Umamaheswari Govindaswamy2
1Anna University, India
2PSG College of Technology, India
Abstract: The demand for wireless networks is increasing
rapidly, and it is becoming essential to make the existing Public-Key Infrastructure (PKI)
usable for wireless devices. A PKI is a set of procedures needed to create,
distribute and revoke digital certificates. PKI is an arrangement that binds
public keys with respective user identities by means of a Certificate Authority
(CA). The user identity must be unique within each CA domain. The third-party Validation
Authority (VA) can provide this information on behalf of CA. The binding is
established through the registration and issuance process which is carried out
by software at a CA or under human supervision. Elliptic Curve Cryptography
(ECC) has proved to be well suited to resource-constrained applications.
This paper compares the two PKI algorithms ECC and Rivest-Shamir-Adleman (RSA).
It is found that ECC-based signatures on a certificate are smaller and faster
to create; and the public key that the certificate holds is smaller as well.
Verification is also faster using ECC-based certificates, especially at higher
key strengths. The security of ECC systems is based on the elliptic curve
discrete logarithm problem, rather than the integer factorization problem. This
allows for faster computations and efficient transmission of certificates.
Keywords: ECC, PKI, wireless
application protocol, registration authority, digital signature.
Received September 5, 2013; accepted December 24, 2013
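A quick way to see the size difference the abstract describes is to generate an ECDSA key and an RSA key of comparable strength and compare signature lengths. The sketch below uses the third-party cryptography package; the curve and key-size choices are illustrative assumptions, not the authors' measurement setup.

    # Hedged sketch: comparing ECDSA and RSA signature sizes at comparable strength.
    # Requires the third-party 'cryptography' package; parameter choices are illustrative.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

    message = b"certificate-to-be-signed"

    # ECC: P-256 (~128-bit security)
    ec_key = ec.generate_private_key(ec.SECP256R1())
    ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # RSA: 3072-bit modulus (~128-bit security)
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    print("ECDSA P-256 signature bytes:", len(ec_sig))   # roughly 70-72 (DER encoded)
    print("RSA-3072 signature bytes:  ", len(rsa_sig))   # 384

    # Verification mirrors signing; ECC verification tends to compare well at high strengths.
    ec_key.public_key().verify(ec_sig, message, ec.ECDSA(hashes.SHA256()))
    rsa_key.public_key().verify(rsa_sig, message, padding.PKCS1v15(), hashes.SHA256())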
An
Intelligent CRF Based Feature Selection for Effective Intrusion Detection
1Department of Information Science and Technology, Anna University, India
2Department of Computer Science and Engineering, University College of Engineering Tindivanam, India
Abstract: As Internet applications are growing rapidly, the
number of intrusions into networked systems is also increasing. In such a scenario,
it is necessary to provide security to the networks by means of effective
intrusion detection and prevention methods. This can be achieved mainly by
developing efficient intrusion detecting systems that use efficient algorithms
which can identify the abnormal activities in the network traffic and protect
the network resources from illegal penetrations by intruders. Though many
intrusion detection systems have been proposed in the past, the existing
network intrusion detections have limitations in terms of detection time and
accuracy. To overcome these drawbacks, we propose a new intrusion detection
system in this paper by developing a new intelligent Conditional Random Field
(CRF) based feature selection algorithm to optimize the number of features. In
addition, an existing layered approach based algorithm is used to perform
classification with these reduced features. This intrusion detection system
provides high accuracy and achieves efficiency in attack detection compared to
the existing approaches. The major advantages of this proposed system are
reduction in detection time, increase in classification accuracy and reduction
in false alarm rates.
Keywords: Intrusion detection
system, feature selection, false alarms, layered approach, intelligent CRF,
ICRFFSA, LAICRF.
Received January 31, 2013; accepted November 10, 2013
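The paper's intelligent CRF-based selector is not reproduced here; as a stand-in illustration of reducing an intrusion data set to its most informative features before layered classification, the sketch below ranks features by mutual information using scikit-learn. The feature count, labels and threshold are assumptions.

    # Hedged stand-in: ranking intrusion features by mutual information and keeping the top-k.
    # This is NOT the ICRFFSA algorithm of the paper, only a generic feature-reduction sketch.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.random((500, 41))                     # 41 KDD-style traffic features (synthetic)
    y = (X[:, 3] + X[:, 7] > 1.0).astype(int)     # synthetic attack/normal labels

    scores = mutual_info_classif(X, y, random_state=0)
    top_k = 10                                    # assumed reduced feature budget
    selected = np.argsort(scores)[::-1][:top_k]

    X_reduced = X[:, selected]                    # passed on to the layered classifier
    print("selected feature indices:", sorted(selected.tolist()))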
Using Ontologies for Extracting Differences in the
Dynamic Domain: Application on Cancer Disease
Nora Taleb
Laboratory for Electronic
Document Management (LABGED), Badji Mokhtar University, Algeria
Abstract: Over time, the data representing a given domain can change, and so can the data model reflecting that domain. In this situation, strategies that can summarize the changes produced are mandatory. This study presents an implemented approach based on data mining techniques to extract such differences; the model is a domain ontology and the changes are represented by two versions of the ontology. The results are summarized in a change report. The tool was evaluated on a cancer-disease ontology and satisfactory results were obtained.
Keywords: Ontology change, ontology versioning, Web Ontology Language (OWL) scheme, information retrieval.
Received February 27, 2013; accepted September 19, 2013
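As a minimal illustration of extracting differences between two ontology versions (not the data-mining approach the paper implements), the sketch below loads two OWL files with rdflib and reports added and removed classes; the file names are placeholders.

    # Hedged sketch: class-level differences between two OWL ontology versions.
    # Uses the third-party 'rdflib' package; file names are placeholders, and the paper's
    # data-mining based change detection is not reproduced here.
    from rdflib import Graph, RDF
    from rdflib.namespace import OWL

    def classes(path):
        g = Graph()
        g.parse(path)                             # rdflib infers the serialization format
        return set(g.subjects(RDF.type, OWL.Class))

    old_classes = classes("cancer_ontology_v1.owl")
    new_classes = classes("cancer_ontology_v2.owl")

    report = {
        "added": sorted(str(c) for c in new_classes - old_classes),
        "removed": sorted(str(c) for c in old_classes - new_classes),
    }
    for kind, items in report.items():
        print(kind, len(items), items[:5])        # condensed change report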
wPFP-PCA:
Weighted Parallel Fixed Point PCA Face Recognition
Chakchai So-In
and Kanokmon Rujirakul
Department of
Computer Science, Khon Kaen University, Thailand
Abstract: Principal Component Analysis (PCA) is one of the feature extraction techniques commonly used in human facial recognition systems. PCA yields high accuracy rates while requiring only low-dimensional vectors; however, the computation in the covariance matrix and eigenvalue decomposition stages leads to a high degree of complexity that grows with the size of the dataset. Thus, this research proposes an enhancement to PCA that lowers the complexity by utilizing a Fixed Point (FP) algorithm during the eigenvalue decomposition stage. To mitigate the effect of image projection variability, an adaptive weight was also added to FP-PCA, yielding wFP-PCA. To further improve the system, the advances in multi-core architectures allow a degree of parallelism to be investigated, in order to exploit parallel matrix computation in both feature extraction and classification with weighted Euclidean distance optimization. These stages, together with a parallel pre-processor and their combinations, are called weighted Parallel Fixed Point PCA (wPFP-PCA). Compared to traditional PCA and its derivatives, including our first enhancement wFP-PCA, the performance of wPFP-PCA is very positive, especially in recognition precision, i.e., 100% accuracy over the other systems, as well as in computational speed-up.
Keywords: Face recognition, FP, parallel face recognition, parallel Euclidean, parallel PCA, PCA.
Received December 27, 2014; accepted May 21, 2014
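The core of the FP enhancement, as described, is replacing a full eigenvalue decomposition with a fixed-point (power-iteration style) extraction of the leading eigenvectors. A minimal NumPy sketch of that idea, with deflation, is given below; the convergence settings and data are assumptions.

    # Hedged sketch: fixed-point (power-iteration) extraction of leading PCA components,
    # with deflation, instead of a full eigenvalue decomposition. Tolerances are assumptions.
    import numpy as np

    def fixed_point_pca(X, n_components, max_iter=200, tol=1e-8):
        Xc = X - X.mean(axis=0)                        # center the data
        cov = np.cov(Xc, rowvar=False)
        components = []
        for _ in range(n_components):
            w = np.random.rand(cov.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(max_iter):
                w_new = cov @ w                        # fixed-point update
                w_new /= np.linalg.norm(w_new)
                if np.abs(np.abs(w_new @ w) - 1.0) < tol:
                    w = w_new
                    break
                w = w_new
            components.append(w)
            cov = cov - np.outer(w, w) * (w @ cov @ w)  # deflate the found component
        return np.array(components)

    if __name__ == "__main__":
        faces = np.random.rand(50, 32 * 32)            # stand-in for vectorized face images
        eigenfaces = fixed_point_pca(faces, n_components=5)
        projected = (faces - faces.mean(axis=0)) @ eigenfaces.T   # low-dimensional features
        print(projected.shape)                         # (50, 5)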
A
General Characterization of Representing and Determining Fuzzy Spatial
Relations
1 College of Information Science and Engineering, Northeastern University, China
2 School of Software Northeastern University, China
Abstract: A considerable amount of fuzzy spatial data emerging in various applications leads to the investigation of fuzzy spatial data and their fuzzy relations. Because of complex requirements, it is challenging to propose a general representation of fuzzy spatial relationships and a general algorithm for determining all fuzzy spatial relations. This paper presents a general characterization for representing fuzzy spatial relations, assuming that the spatial objects involved are all fuzzy. On this basis, correspondences between fuzzy spatial relations and crisp spatial relations are investigated. Finally, a general algorithm for determining all fuzzy spatial relations is proposed.
Keywords: Fuzzy spatial data, fuzzy point, fuzzy line, fuzzy region, fuzzy spatial relations.
Received May 15, 2013; accepted March 17, 2013
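As a small illustration of how the degree of one fuzzy spatial relation can be computed from membership functions (the paper's general characterization covers far more relations), the sketch below grades the overlap of two fuzzy regions on a raster; the grid size, membership shapes and operators are assumptions.

    # Hedged sketch: grading the 'overlap' relation between two fuzzy regions on a raster grid.
    # Membership shapes and the min/max operators are illustrative assumptions.
    import numpy as np

    def fuzzy_region(cx, cy, radius, size=100):
        """Membership decays linearly from 1 at the center to 0 at 'radius'."""
        y, x = np.mgrid[0:size, 0:size]
        dist = np.hypot(x - cx, y - cy)
        return np.clip(1.0 - dist / radius, 0.0, 1.0)

    def overlap_degree(a, b):
        """Degree to which two fuzzy regions overlap: sup of the pointwise intersection."""
        return float(np.max(np.minimum(a, b)))

    A = fuzzy_region(40, 40, 25)
    B = fuzzy_region(60, 60, 25)
    print("overlap(A, B) =", round(overlap_degree(A, B), 3))   # 0 < degree < 1: partial overlap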
Empirical Evaluation of Syntactic and Semantic
Defects Introduced by Refactoring Support
Wafa Basit1, Fakhar Lodhi2 and Usman Bhatti3
1Department of Computer
Science, National University of Computer and Emerging Sciences, Pakistan
2Department of Computer
Science, GIFT University, Pakistan
3Rmod Team, Inria Lille-Nord
Europe, France
Abstract: Software maintenance is a major source of
expense in software projects. A proper evolution process is a critical
ingredient in the cost-efficient development of high-quality software. A
special case of software evolution is refactoring, which must not change the
external behavior of the software system yet should improve the internal
structure of the code. Hence, there is always a need to verify after
refactoring, whether it preserved behavior or not. As formal approaches are
hard to employ, unit tests are considered the only safety net available after
refactoring. Refactoring may change the expected interface of the software; therefore,
unit tests are also affected. The existing tools for refactoring do not
adequately support unit test adaptation. Also, refactoring tools and guidelines
may introduce semantic and syntactic errors in the code. This paper
qualitatively and quantitatively analyses data from an empirical investigation
involving 40 graduate students, performed against a set of semantic and
syntactic defects. Findings from the expert survey on refactoring support have
also been shared. The analysis in this paper shows that there are notable
discrepancies between preferred and actual definitions of refactoring. However,
continued research efforts are essential to provide guidelines (GL) for
adapting the refactoring process to address these discrepancies, thus
improving the quality and efficiency of software development.
Keywords: Refactoring, unit
testing, pre-conditions, semantic defects, maintenance.
Received
June 2, 2013; accepted March 29, 2013
Adaptive
Automata-based Model for the Iterated n-Player Prisoner’s Dilemma
Sally Almanasra1, Khaled Suwais2 and Muhammad Rafie1
1School of Computer Sciences, Universiti Sains Malaysia, Malaysia
2Faculty of Computer Studies, Arab Open University, Saudi Arabia
Abstract: In this paper, we present a new technique for representing players’ strategies by adaptive automata, which can handle complex strategies in large populations effectively. The representation of the players’ strategies has a great impact on changing player behaviour in rational environments. The model is built on the basis of gradually changing the players’ behaviour toward cooperation. The gradualism is achieved by constructing three different adaptive automata at three different levels. The results showed that our model can represent players’ strategies efficiently, and proved that the model is able to enhance the level of cooperation among the participating players within a few tournaments.
Keywords: Adaptive automata, prisoner’s dilemma, cooperative
behavior, INPPD.
Received
October 3, 2013; accepted June 9, 2014
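To make the automaton-based strategy representation concrete (the paper's three-level adaptive automata are not reproduced), the sketch below encodes a simple reactive strategy as a finite automaton and plays it in an iterated two-player game; the payoff values are the standard ones and the chosen strategy is an assumption.

    # Hedged sketch: a player's strategy encoded as a finite automaton in the iterated
    # Prisoner's Dilemma. The paper's three-level adaptive automata are not reproduced here.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    class AutomatonStrategy:
        """States map the opponent's last move to (action, next state): here, Tit-for-Tat."""
        def __init__(self):
            self.transitions = {"coop": {"C": ("C", "coop"), "D": ("D", "defect")},
                                "defect": {"C": ("C", "coop"), "D": ("D", "defect")}}
            self.state = "coop"
            self.first_move = "C"

        def play(self, opponent_last):
            if opponent_last is None:
                return self.first_move
            action, self.state = self.transitions[self.state][opponent_last]
            return action

    def tournament(rounds=10):
        p1, p2 = AutomatonStrategy(), AutomatonStrategy()
        last1 = last2 = None
        score1 = score2 = 0
        for _ in range(rounds):
            m1, m2 = p1.play(last2), p2.play(last1)
            s1, s2 = PAYOFF[(m1, m2)]
            score1, score2, last1, last2 = score1 + s1, score2 + s2, m1, m2
        return score1, score2

    print(tournament())   # mutual cooperation yields (30, 30) over 10 rounds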
Encryption Quality Measurement
of a Proposed Cryptosystem Algorithm for the Colored Images Compared with Another
Algorithm
Osama Abu Zaid1, Nawal El-Fishawy2
and Elsayed Nigm1
1Department of Mathematics,
Zagazig University, Egypt
2
Department of Computer Science and Engineering, Menoufia University, Egypt
Abstract: In this paper, a proposed cryptosystem algorithm based on two different
chaotic systems is presented. The proposed cryptosystem algorithm is designated
as PCACH. A recently developed encryption algorithm which is designated here as
HuXia is reviewed. These two algorithms are applied to three images of
different color frequencies, i.e., different types of colored-images are
encrypted with each of the two encryption algorithms. Both of them are applied to
the different images with two different types of encryption modes, Electronic
Code Book (ECB) and Cipher Block Chaining (CBC). Visual inspection is not sufficient
to assess the quality of encryption, so other measuring factors are considered,
based on the maximum deviation and the correlation coefficient
between the original and the encrypted images. For judging the strength of the
security, we measure the plain-text sensitivity using NPCR and UACI analysis,
the information entropy and the key sensitivity. Also, the
encryption/decryption time and the throughput are measured for the two
algorithms. The results suggest that PCACH is a very good algorithm and superior
to HuXia.
Keywords: Encryption algorithms, image encryption, quality measurements, modes of encryption.
Received April 10, 2013; accepted June 23, 2013
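The plain-text sensitivity measures mentioned in the abstract have standard definitions; a short NumPy sketch of NPCR (Number of Pixels Change Rate) and UACI (Unified Average Changing Intensity) between two cipher images is given below as an illustration, not the authors' code. The cipher images here are synthetic stand-ins.

    # Hedged sketch: standard NPCR and UACI measures between two cipher images
    # (e.g., ciphers of a plain image and of the same image with one pixel changed).
    import numpy as np

    def npcr(c1, c2):
        """Percentage of pixel positions whose values differ between the two ciphers."""
        return 100.0 * np.mean(c1 != c2)

    def uaci(c1, c2, max_value=255.0):
        """Average absolute intensity difference, normalized by the maximum pixel value."""
        return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / max_value)

    rng = np.random.default_rng(1)
    cipher1 = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)   # stand-in ciphers
    cipher2 = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
    print("NPCR %.2f%%  UACI %.2f%%" % (npcr(cipher1, cipher2), uaci(cipher1, cipher2)))
    # For a good cipher, NPCR is expected near 99.6% and UACI near 33.5%.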
An Improved Clustering Algorithm
for Text Mining: Multi-cluster Spherical K-means
Volkan Tunali1, Turgay Bilgin1 and Ali Camurcu2
1 Department of Software Engineering, Maltepe University, Turkey
2 Department of Computer Engineering, Fatih Sultan Mehmet Waqf University, Turkey
Abstract: Thanks to advances in information and communication technologies, there is a prominent increase in the amount of information produced, specifically in the form of text documents. In order to effectively deal with this “information explosion” problem and utilize the huge amount of text databases, efficient and scalable tools and techniques are indispensable. This study addresses text clustering, one of the most important techniques of text mining, which aims at extracting useful information by processing data in textual form. An improved variant of the spherical K-means algorithm, named multi-cluster spherical K-means, is developed for clustering high-dimensional document collections with high performance and efficiency. Experiments were performed on several document data sets, and it is shown that the new algorithm provides a significant increase in clustering quality without causing a considerable difference in CPU time usage when compared to the spherical K-means algorithm.
Keywords: Data mining, text mining, document clustering, spherical k-means.
Received February 10, 2013; accepted March 17,
2014
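A compact sketch of the baseline spherical K-means step (unit-normalized document vectors, cosine-similarity assignment, re-normalized centroids) is given below; the multi-cluster refinement proposed in the paper is not reproduced, and the data here is synthetic.

    # Hedged sketch: baseline spherical K-means on unit-normalized document vectors.
    # The paper's multi-cluster refinement is not reproduced; data and k are synthetic.
    import numpy as np

    def spherical_kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)       # project docs onto unit sphere
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmax(X @ centroids.T, axis=1)         # assign by cosine similarity
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    c = members.sum(axis=0)
                    centroids[j] = c / np.linalg.norm(c)        # re-normalize the mean direction
        return labels, centroids

    docs = np.abs(np.random.default_rng(2).random((200, 500)))  # stand-in tf-idf matrix
    labels, centroids = spherical_kmeans(docs, k=5)
    print(np.bincount(labels))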
VParC: A
Compression Scheme for Numeric Data in Column-oriented Databases
Ke Yan1, Hong Zhu1
and Kevin Lü2
1School of Computer Science and Technology, Huazhong University of Science and Technology, China
2Brunel University, UK
Abstract: Compression is one of
the most important techniques in data management, which is usually used to
improve query efficiency in databases. However, there are some
restrictions on the existing compression algorithms that have been applied to
numeric data in column-oriented databases. First, a compression algorithm is
suitable only for columns with certain data distributions, not for all kinds of data
columns; second, a data column with an irregular distribution is hard to
compress; third, a data column compressed using heavyweight methods
cannot be operated on before decompression, which leads to inefficient querying. Based
on the fact that a column is more likely to exhibit sub-regularity than
global regularity, we developed a compression scheme called Vertically
Partitioning Compression (VParC). This method is suitable for columns with
different data distributions, even for irregular columns in some cases. More
importantly, data compressed by VParC can be operated on directly without
decompression in advance. Details of the compression and query evaluation approaches
are presented in this paper and the results of our experiments demonstrate the promising features of VParC.
Keywords: Column-stores, data management, compression, query processing,
analytical workload.
Received August 28, 2013; accepted April 21, 2014
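To illustrate the idea of exploiting sub-regularity (the exact VParC format is not reproduced), the sketch below vertically partitions a numeric column into fixed-size segments and picks, per segment, whichever of two lightweight encodings is smaller; the segment size, candidate encodings and size proxy are assumptions.

    # Hedged sketch of the vertical-partitioning idea: split a numeric column into segments
    # and choose a lightweight encoding per segment. Not the actual VParC format.
    def rle_encode(seg):
        out, i = [], 0
        while i < len(seg):
            j = i
            while j + 1 < len(seg) and seg[j + 1] == seg[i]:
                j += 1
            out.append((seg[i], j - i + 1))
            i = j + 1
        return ("rle", out)

    def delta_encode(seg):
        return ("delta", seg[0], [b - a for a, b in zip(seg, seg[1:])])

    def compress_column(column, segment_size=4):
        encoded = []
        for start in range(0, len(column), segment_size):
            seg = column[start:start + segment_size]
            candidates = [rle_encode(seg), delta_encode(seg)]
            encoded.append(min(candidates, key=lambda enc: len(repr(enc))))  # crude size proxy
        return encoded

    column = [7, 7, 7, 7, 10, 11, 12, 13, 5, 5, 9, 9]   # sub-regular but globally irregular
    for segment in compress_column(column):
        print(segment)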
Data Mining Perspective: Prognosis of Life Style on Hypertension
and Diabetes
Abdullah
Aljumah and Mohammad Siddiqui
College
of Computer Engineering and Sciences, Salman bin Abdulaziz University,
Kingdom of Saudi Arabia.
Abstract: In the present era, data mining techniques are widely used as decision support systems in the field of health care. The proposed research is an interdisciplinary work of informatics and health care that uses data mining techniques to predict the relationship among interventions for hypertension and diabetes. Studies show that persons who have diabetes have a higher chance of developing hypertension, and vice versa. In the present work, we examine the lifestyle interventions for hypertension and diabetes and their effects using data mining. Lifestyle intervention plays a vital role in controlling these diseases. The interventions include the risk factors diet, weight, smoking cessation and exercise. A regression technique is used in which dependent and independent variables are defined: the four interventions are treated as independent variables, and the two diseases, hypertension and diabetes, are the dependent variables. We have established the relationship between hypertension and diabetes using the data set of the World Health Organisation’s (WHO) Non-Communicable Disease (NCD) report for Saudi Arabia. The Oracle Data Miner (ODM) tool is used to analyse the data set. Predictive data analysis shows that the interventions weight control and exercise have a direct relationship with both diseases.
Keywords: Oracle data mining tool, prediction, regression, support vector
machine, hypertension, diabetes.
Received April 10,
2014; accepted June 23, 2014
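The regression setup described (four lifestyle interventions as independent variables, a disease indicator as the dependent variable) can be sketched outside Oracle Data Miner, for example with a support vector regressor in scikit-learn; the data below is synthetic and the model settings are assumptions, not the study's analysis.

    # Hedged sketch: regressing a disease indicator on four lifestyle interventions,
    # mirroring the described setup with an SVM regressor. Data is synthetic, not the WHO NCD set.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    # columns: diet score, weight (BMI), smoking cessation (0/1), exercise hours/week
    X = np.column_stack([rng.random(200), 18 + 20 * rng.random(200),
                         rng.integers(0, 2, 200), 7 * rng.random(200)])
    hypertension = 0.4 * X[:, 1] / 38 - 0.2 * X[:, 3] / 7 + 0.05 * rng.standard_normal(200)

    model = SVR(kernel="rbf").fit(X, hypertension)
    print("effect of weight vs. exercise (illustrative predictions):",
          model.predict([[0.5, 30.0, 1, 1.0], [0.5, 24.0, 1, 5.0]]))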
Design
and Implementation of a Synchronous and Asynchronous-Based Data Replication
Technique in Cloud Computing
S. Kirubakaran, S. Valarmathy and C. Kamalanathan
Department of
Electronics and Communication, Bannari Amman Institute of Technology, India
Abstract: Failures are the rule rather than the exception in a cloud computing environment. Frequently used data should be replicated to multiple locations so that users can access it from a nearby site, which improves system availability. A challenging task in cloud computing is to decide on a sensible number and the right locations of replicas. Here, we propose an adapted dynamic data replication approach to decide on a sensible number and the right locations of replicas, and we compare the adapted dynamic data replication approach with the normal dynamic data replication approach. The normal dynamic data replication approach has three distinct stages: identification of the data files to replicate, determination of the number of replicas to be created, and placement of the new replicas. We adapt the popularity degree in the initial stage of the normal dynamic data replication approach and also consider the failure probability in the replica factor calculation; the other two stages are the same as in the normal dynamic data replication approach. When the main data center is updated, we moreover integrate synchronous and asynchronous updating of the replica data files.
Keywords: Cloud computing,
data replication, synchronous and asynchronous updating.
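A small sketch of how a replica count might combine a file's popularity degree with node failure probability, in the spirit of the adapted first stage described above, follows; the weighting, thresholds and availability target are assumptions, not the authors' exact calculation.

    # Hedged sketch: choosing which files to replicate and how many replicas to create,
    # combining access popularity with node failure probability. Formula is an assumption.
    import math

    def popularity_degree(access_counts, window=10):
        """Recent accesses weighted more heavily than old ones."""
        recent = access_counts[-window:]
        return sum(c * (i + 1) for i, c in enumerate(recent)) / (window * (window + 1) / 2)

    def replica_count(popularity, failure_prob, target_availability=0.999):
        """Smallest n with 1 - failure_prob**n >= target, scaled up for popular files."""
        base = math.ceil(math.log(1 - target_availability) / math.log(failure_prob))
        return max(1, base + (1 if popularity > 5.0 else 0))

    accesses = [1, 2, 2, 3, 5, 8, 9, 12, 15, 20]    # synthetic access history of one file
    pop = popularity_degree(accesses)
    print("popularity:", round(pop, 2), "replicas:", replica_count(pop, failure_prob=0.05))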
Enhancing the Optimal Robust
Watermarking Algorithm to High Payload
1Satish Todmal and
2Suhas Patil
1Department of
Information Technology, JSPM’s Imperial College of
Engineering and Research, India
2Department of
Computer Engineering, Bharati Vidyapeeth Deemed University
College of
Engineering, India
Abstract: Digital
watermarking is a robust method that allows a person to embed hidden data in
digital audio, video or image signals and documents. In this paper, we
propose a watermarking technique where initially, the watermark is embedded
into the HL and LH frequency coefficients in multi-wavelet transform domain
after searching the optimal locations in order to improve both quality of
watermarked image and robustness of the watermark. Here, the payload along with
robustness is improved using Genetic Algorithm (GA) and multi-bit embedding
procedure. The experimentation is carried out using different images, and the
performance of the technique is analyzed using the Peak Signal-to-Noise Ratio
(PSNR) and the Normalized Correlation (NC). The proposed technique is evaluated using various
compression standards and filtering techniques which yielded good results by
having high PSNR and NC values showing the robustness and fidelity of the
technique. The technique has achieved a
peak PSNR of 38.14 and an NC of 0.998. The technique is also compared to a previous
technique, and the results show that our proposed technique performs better.
Furthermore, a payload analysis is carried out, which shows that our proposed
technique uses only half the payload of the previous technique.
Keywords: Watermarking, GA, optimal location,
robustness, payload.
Received January 8, 2014; accepted July 8, 2014
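The embedding locations mentioned (HL and LH sub-bands of a wavelet transform) can be illustrated with a single-level DWT. The sketch below uses PyWavelets and a crude additive embedding with a fixed strength, whereas the paper searches optimal locations with a GA and uses multi-bit embedding; the strength value and host image are assumptions.

    # Hedged sketch: additive embedding of watermark bits into HL and LH wavelet coefficients.
    # Uses PyWavelets; the GA-based optimal-location search and the multi-bit scheme of the
    # paper are not reproduced, and the embedding strength is an assumption.
    import numpy as np
    import pywt

    def embed(image, bits, strength=8.0):
        cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")   # detail sub-bands
        flat = np.concatenate([cH.ravel(), cV.ravel()])
        flat[: len(bits)] += strength * (2 * np.asarray(bits) - 1)  # +/- strength per bit
        cH2 = flat[: cH.size].reshape(cH.shape)
        cV2 = flat[cH.size: cH.size + cV.size].reshape(cV.shape)
        return pywt.idwt2((cA, (cH2, cV2, cD)), "haar")

    def psnr(original, marked, peak=255.0):
        mse = np.mean((original.astype(float) - marked) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    host = np.random.default_rng(4).integers(0, 256, (256, 256)).astype(float)
    watermark_bits = np.random.default_rng(5).integers(0, 2, 128)
    marked = embed(host, watermark_bits)
    print("PSNR of watermarked image: %.2f dB" % psnr(host, marked))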
Neural
Network with Bee Colony Optimization for MRI Brain Cancer Image Classification
Sathya
Subramaniam and Manavalan
Radhakrishnan
Department of Computer Science and Applications, Periyar
University, India
Abstract: Brain tumors are one of the foremost causes of the increase in mortality among children and adults. Computer vision is used by doctors to analyse and diagnose medical problems. Magnetic Resonance Imaging (MRI) is a medical imaging technique used to visualize the internal structures of the brain and to analyse normal and abnormal brain patterns during diagnosis. It is a non-invasive method for taking pictures of the brain and the surrounding structures. Image processing techniques are used to extract meaningful information from medical images for the purposes of diagnosis and prognosis. Raw MRI brain images are not suitable for processing and analysis, since noise and low contrast affect their quality. This paper emphasizes the classification of MRI brain images for cancer diagnosis. The process consists of four steps: pre-processing, identification of the region of interest, feature extraction and classification. For improving the quality of the image, a partial differential equations method is proposed and its result is compared with other methods, such as the block analysis method, the opening-by-reconstruction method and the histogram equalization method, using statistical parameters such as the carrier signal-to-noise ratio, peak signal-to-noise ratio, structural similarity index measure, figure of merit and mean square error. The enhanced image is converted into a bi-level image, which is used for sharpening the regions and filling the gaps in the binarized image with morphological operators. The Region of Interest (ROI) is identified by applying a region growing method and five features are extracted from it. The classification is performed based on the extracted image features to determine whether the brain image is normal or abnormal; a hybridization of a Neural Network (NN) with bee colony optimization is also introduced for the classification and for estimating the extent of cancer in a given MRI image. The performance of the proposed classifier is compared with a traditional NN classifier using statistical measures such as sensitivity, specificity and accuracy. The experiment is conducted over 100 MRI brain images.
Keywords: MRI images, NN, bee colony, PDE, biological analysis, feature extraction.
Received February 17, 2013; accepted
October 24, 2014
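The evaluation measures used to compare the hybrid classifier with the traditional NN have standard definitions; a short sketch computing sensitivity, specificity and accuracy from predicted and true labels is given below. The labels are synthetic, not the study's data.

    # Hedged sketch: sensitivity, specificity and accuracy from binary classification results
    # (1 = abnormal/cancerous, 0 = normal). Labels below are synthetic.
    def evaluate(true_labels, predicted_labels):
        tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 0)
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / (tp + tn + fp + fn)}

    truth      = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    prediction = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
    print(evaluate(truth, prediction))   # sensitivity 0.8, specificity 0.8, accuracy 0.8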
A DEA-Based Approach for Information Technology Risk Assessment through Risk IT Framework
Seyed Hatefi1 and Mehdi Fasanghari2
1Faculty of Engineering, Shahrekord University, Iran
2Cyber Space Research
Institute, North Karegar St., Iran
Abstract: The use of Information Technology (IT) in organizations is subject to various kinds of potential risks. Risk management is a key component of project management that enables an organization to accomplish its mission(s). However, IT projects have often been found to be complex and risky to implement in organizations. The organizational relevance and risk of IT projects make it important for organizations to focus on ways to implement IT projects successfully. This paper focuses on IT risk management, especially the risk assessment model, and proposes a process-oriented approach to risk management. To this end, the paper applies the Risk IT framework, which has three main domains, i.e., risk governance, risk analysis and risk response, and nine key processes. Then, a set of scenarios that can improve the maturity level of the Risk IT processes is considered, and the impact of each scenario on the Risk IT processes is determined from expert opinions. Finally, Data Envelopment Analysis (DEA) is customized to evaluate the improvement scenarios and select the best one. The proposed methodology is applied to the Iran Telecommunication Research Centre (ITRC) to improve the maturity level of its IT risk management processes.
Keywords: Risk
IT framework, risk management, process model, DEA.
Received
June 10, 2012; accepted September 11, 2013
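The DEA customization itself is not detailed in the abstract; as an illustration of the underlying building block, the sketch below evaluates the efficiency of each improvement scenario (treated as a decision-making unit with inputs such as cost and outputs such as maturity gain) with an input-oriented CCR model solved by SciPy's linprog. The inputs and outputs are synthetic, not the ITRC data.

    # Hedged sketch: input-oriented CCR efficiency (multiplier form) for each scenario (DMU),
    # solved with scipy's linprog. Inputs/outputs below are synthetic assumptions.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[4.0], [6.0], [3.0], [5.0]])                        # inputs, e.g., cost
    Y = np.array([[2.0, 3.0], [3.0, 5.0], [1.0, 2.0], [4.0, 4.0]])    # outputs, e.g., maturity gains

    def ccr_efficiency(X, Y, dmu):
        n, m = X.shape
        _, s = Y.shape
        # variables: u (s output weights), v (m input weights); maximize u . y_dmu
        c = np.concatenate([-Y[dmu], np.zeros(m)])                    # linprog minimizes
        A_ub = np.hstack([Y, -X])                                     # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[dmu]]).reshape(1, -1)   # v.x_dmu = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        return -res.fun

    for j in range(len(X)):
        print("scenario %d efficiency: %.3f" % (j, ccr_efficiency(X, Y, j)))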