December 2015. No. 6A

A Novel Approach to Develop Dynamic Portable Instructional Operating System for Minimal Utilization

Siva Sankar Kanahasabapathy1, Jebarajan Thanganadar2, and Padmasuresh Lekshmikanthan3

1Department of Information Technology, Noorul Islam University, India

2Department of Computer Science and Engineering, Rajalekshmi Engineering College, India

3Department of Electrical and Electronics Engineering, Noorul Islam University, India

Abstract: Most well-known instructional Operating Systems (OSs) are complex, particularly when their companion software is taken into account. Crafting these systems takes considerable time and effort, and their complexity may introduce maintenance and evolution problems. The purpose of this paper is to develop a mini OS that is open source and Linux based. This OS is independent of any hardware simulator and platform. It encompasses a simplified kernel that occupies little memory and consumes minimal resources. It also includes a dynamic boot loader that bypasses the BIOS boot priority and assigns itself the highest priority. The OS is designed for low primary-memory usage and minimal CPU utilization, and is developed mainly to satisfy the basic requirements of a typical desktop user.

Keywords: OSs, kernel, boot loader, Linux, portable, open source.

Received July 23, 2012; accepted May 19, 2015; published online September 15, 2015



Technique for Burning Area Identification Using IHS Transformation and Image Segmentation

 Thumma Kumar1 and Kamireddy Reddy2

1Computer Sciences Corporation, India

2Remote Sensing Applications Area, National Remote Sensing Centre, India

Abstract: In this paper, we design and develop a technique for burning area identification using Intensity Hue Saturation (IHS) transformation and image segmentation. The proposed technique identifies the burnt area in four steps: IHS transformation, object segmentation, identification of the smoke area using a Feed-Forward Neural Network (FFNN), and discovery of burning areas from the smoke segments. Satellite images collected from NASA are used for the experimental study. Each image is first given to the IHS transformation, which converts the RGB image into an intensity, hue, saturation representation better suited to segmentation. After the transformation, object segmentation is performed using the K-means clustering algorithm. Subsequently, the FFNN identifies the smoke area among the segments. Once the smoke segment is identified, the burning area is located through directional analysis. The proposed burnt area identification technique is analyzed in terms of sensitivity, specificity and accuracy. Experimental results show that the proposed technique improves overall accuracy by 2.6% over the existing approach.
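As a rough sketch of the first step, the RGB-to-IHS conversion can be written as below. This uses the textbook HSI formulation on a single pixel with channels in [0, 1]; the function name `rgb_to_ihs` and the exact formulation are illustrative assumptions, not the paper's code.

```python
import math

def rgb_to_ihs(r, g, b):
    """Convert one RGB pixel (channels in 0..1) to (intensity, hue, saturation).

    Uses the common HSI formulation; the paper's exact transform may differ.
    """
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                   # hue undefined for greys
    else:
        theta = math.acos(max(-1.0, min(1.0, num / den)))
        h = theta if b <= g else 2 * math.pi - theta
    return i, h, s
```

For a grey pixel the hue is undefined and the sketch returns 0 there; segmentation (e.g., K-means) would then run on the transformed channels.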

 Keywords: Burning, segmentation, K-means, FFNN.

 Received July 3, 2013; accepted March 20, 2014



A Measurement of Similarity to Identify Identical Code Clones

Mythili ShanmughaSundaram and Sarala Subramani

Department of Information Technology, Bharathiar University, India

Abstract: Code clones are parts of a program that are completely or partially similar to other portions. In earlier research, code clones were detected using a fingerprinting technique. The major challenge in our work was to group the code clones based on a similarity measure. The proposed system measures similarity based on similarity distance. The defined expression considers two parameters for calculating the similarity measure: the similarity distance and the population of the clone. The code clones are thereby clustered and ranked on the basis of their similarity measures. Indexing is used to interactively identify clones caused by inconsistent changes. As a result of this work, all the identical clusters in the most-similar and more-similar categories are identified.
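The paper's similarity-distance expression is not reproduced here, so the following is only a hypothetical stand-in: a Jaccard-style token similarity plus a greedy threshold clustering, to illustrate how clones could be grouped by a similarity measure.

```python
def similarity(tokens_a, tokens_b):
    """Jaccard similarity between two code fragments' token sets (a stand-in
    for the paper's similarity-distance measure)."""
    a, b = set(tokens_a), set(tokens_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def cluster_clones(fragments, threshold=0.8):
    """Greedy grouping: a fragment joins the first cluster whose
    representative it is similar enough to, else it starts a new cluster."""
    clusters = []
    for frag in fragments:
        for cl in clusters:
            if similarity(frag, cl[0]) >= threshold:
                cl.append(frag)
                break
        else:
            clusters.append([frag])
    return clusters
```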

Keywords: Clone detection, software clones, fingerprinting, clustering, reuse.



A Statistical Framework for Identification of Tunnelled Applications using Machine Learning


Ghulam Mujtaba1 and David Parish2

1Department of Electrical Engineering, COMSATS Institute of Information Technology, Pakistan

2School of Electronic and Electrical Engineering, Loughborough University, UK


Abstract: This work describes a statistical approach to detect applications running inside application layer tunnels. Application layer tunnels are a significant threat for network abuse and for violation of an organisation's acceptable Internet usage policy. In tunnelling, the prohibited application's packets are encapsulated as the payload of an allowed protocol's packets. Identifying tunnelling with conventional methods is very difficult in the case of encrypted HTTPS tunnels, for example. Hence, a machine learning based approach is presented in this work, in which statistical packet stream features are used to identify the application inside a tunnel. The Packet Size Distribution (PSD), in the form of discrete bins, is an important feature shown to be indicative of the respective application. This work combines other features with the PSD bins for better identification of the applications; tunnelled applications are identifiable using these traffic statistics. A comparison of the detection accuracy of five machine learning algorithms using this feature set is also given.
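The PSD-bin feature can be illustrated as follows: observed packet sizes are histogrammed into discrete bins and normalised into a distribution that a classifier can consume as a feature vector. The bin edges below are illustrative; the paper's actual binning may differ.

```python
def psd_bins(packet_sizes, bin_edges):
    """Histogram packet sizes (bytes) into discrete bins and normalise
    to a distribution. Returns len(bin_edges) + 1 fractions."""
    counts = [0] * (len(bin_edges) + 1)
    for size in packet_sizes:
        idx = sum(1 for e in bin_edges if size > e)  # bins are (-inf, e1], (e1, e2], ...
        counts[idx] += 1
    total = len(packet_sizes) or 1
    return [c / total for c in counts]
```

The resulting vector (optionally concatenated with other stream statistics) would be the input row for each of the compared classifiers.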

Keywords: Network security, tunnelled applications, firewalls, HTTP tunnels, HTTPS tunnels.

 Received May 22, 2013; Accepted May 17, 2015


Utilizing Corpus Statistics for Hindi Word Sense Disambiguation

Satyendr Singh and Tanveer Siddiqui

Department of Electronics and Communication, University of Allahabad, India


Abstract: Word Sense Disambiguation (WSD) is the task of computationally assigning the correct sense of a polysemous word in a given context. This paper compares three corpus-statistics-based WSD algorithms for Hindi. The first algorithm, called corpus-based Lesk, uses sense definitions and a sense-tagged training corpus to learn weights of Content Words (CWs).

These weights are used in the disambiguation process to assign a score to each sense. We experimented with four metrics for computing the weight of matching words: Term Frequency (TF), Inverse Document Frequency (IDF), Term Frequency-Inverse Document Frequency (TF-IDF), and CW occurrence in a fixed window size. The second algorithm uses the conditional probability of words and phrases co-occurring with each sense of an ambiguous word. The third algorithm is based on the classification information model. The first method yields an overall maximum precision of 85.87% using the TF-IDF weighting scheme. The WSD algorithm using word co-occurrence statistics achieves an average precision of 68.73%, and the one using the classification information model achieves 76.34%. All three algorithms perform significantly better than the direct-overlap method, which achieves an average precision of 47.87%.
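A minimal sketch of corpus-based Lesk with TF-IDF weighting, under the assumption that each sense is represented by a bag of tokens (gloss plus tagged examples) and the chosen sense is the one with the largest weighted overlap with the context; all names here are illustrative.

```python
import math
from collections import Counter

def tfidf_weights(sense_docs):
    """sense_docs: one token list per sense. Returns per-sense TF-IDF maps."""
    n = len(sense_docs)
    df = Counter()
    for doc in sense_docs:
        for w in set(doc):
            df[w] += 1
    weights = []
    for doc in sense_docs:
        tf = Counter(doc)
        weights.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return weights

def disambiguate(context, sense_docs):
    """Pick the sense whose TF-IDF-weighted overlap with the context wins."""
    weights = tfidf_weights(sense_docs)
    scores = [sum(w.get(tok, 0.0) for tok in set(context)) for w in weights]
    return max(range(len(sense_docs)), key=lambda i: scores[i])
```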

Keywords: Supervised Hindi WSD, corpus-based Lesk, TF-IDF, statistical WSD, word co-occurrence, information theory, classification information model.

 Received August 15, 2013; accepted May 6, 2014



Translation Rules for English to Hindi Machine Translation System: Homoeopathy Domain

Sanjay Dwivedi and Pramod Sukhadeve

Department of Computer Science, Babasaheb Bhimrao Ambedkar University (a Central University), India

Abstract: A rule-based machine translation system embraces a set of grammar rules that map syntactic representations of a source language onto the target language. Such a system requires good linguistic knowledge to write the rules, as well as resources such as a corpus and a bilingual dictionary. In this paper, we describe the grammar rules designed for our English to Hindi machine translation system for translating homoeopathic literature, medical reports, prescriptions, etc. The rules follow the transfer-based approach for reordering between the two languages. The paper first discusses our stemmer and its rules; we then discuss the Part-of-Speech (PoS) tagging rules for grammatically categorizing each word of a sentence, and our homoeopathy corpus in English and Hindi of 20,085 and 20,072 words respectively; finally, we discuss the agreement/translation rules for translating various homoeopathic sentences.
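Transfer-based reordering of the kind described can be illustrated with a toy rule that maps an English SVO clause to Hindi's SOV order; the tags `SUBJ`, `VERB`, `OBJ` and the function itself are hypothetical simplifications that ignore determiners, agreement, and morphology.

```python
def reorder_svo_to_sov(tagged_words):
    """Toy transfer rule: English Subject-Verb-Object order becomes
    Hindi Subject-Object-Verb order. tagged_words: list of (word, tag)."""
    subj = [w for w, t in tagged_words if t == "SUBJ"]
    obj = [w for w, t in tagged_words if t == "OBJ"]
    verb = [w for w, t in tagged_words if t == "VERB"]
    return subj + obj + verb
```

A real rule set would operate on full parse trees, carry modifiers along with their heads, and apply agreement rules after reordering.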


Keywords: Machine translation, stemmer, PoS tagging, grammar rules, homoeopathy, corpus.


Received June 14, 2013; accepted March 17, 2014


Comparative Analysis of Classifier Performance on MR Brain Images


Akila Thiyagarajan and UmaMaheswari Pandurangan

Research Scholar, Anna University, India

Info Institute of Engineering, India

Abstract: This paper aims to present a comparative analysis of classifier performance on MR brain images, particularly for brain tumor detection and classification. Brain tumor detection relies on Magnetic Resonance Imaging (MRI). Moment-invariant feature extraction is evaluated to categorize the MRI slices as normal, benign or malignant using a Neural Network (NN) classifier. In our comparative study, we examine the precision rate of this classification with the extracted features against the classification of brain images with features selected by an Association Rule (AR) based NN classifier.

The results are then analyzed with Receiver Operating Characteristic (ROC) curves and compared to identify the method producing the higher accuracy rate in tumor recognition. Our analysis shows that the classifier that applies feature extraction followed by rule pruning affords the better accuracy rate.

 Keywords: Binary association rule, brain tumor, feature extraction, MRI, pruning.


Received June 17, 2013; accepted January 17, 2014


Brain Tumor Segmentation in MRI Images Using Integrated Modified PSO-Fuzzy Approach

Krishna Priya Remamany1, Thangaraj Chelliah2, Kesavadas Chandrasekaran3, and Kannan Subramanian4

1Department of Electrical and Computer Engineering, Caledonian College of Engineering, Oman

2Anna University of Technology, India

3Department of Imaging Sciences and Interventional Radiology, SCTIMST, India

4Department of EEE, Kalasalingam University, India


Abstract: An image segmentation technique based on maximum fuzzy entropy, applied to Magnetic Resonance (MR) brain images to detect brain tumors, is presented in this paper. The proposed method performs segmentation by adaptive thresholding of the input MR brain images. The image is partitioned into two fuzzy regions whose Membership Functions (MFs) are the Z-function and the S-function. The optimal parameters of these fuzzy MFs are obtained using a Modified Particle Swarm Optimization (MPSO) algorithm, with maximum fuzzy entropy as the objective function. Across a number of examples, the performance is compared with existing entropy-based object segmentation approaches, and the superiority of the proposed MPSO method is demonstrated. The experimental results are also compared with the exhaustive search method and the Otsu segmentation technique. The results show that the proposed fuzzy entropy based segmentation method optimized using MPSO achieves maximum entropy, with proper segmentation of the tumor and minimum computational time.
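A minimal sketch of the thresholding idea, assuming the classic Zadeh S-function (with Z = 1 - S) and De Luca-Termini fuzzy entropy over a grey-level histogram; the exhaustive search below merely stands in for the MPSO optimizer.

```python
import math

def s_function(x, a, c):
    """Zadeh S-shaped membership with crossover b = (a + c) / 2.
    The Z-function of the other region is simply 1 - S."""
    b = (a + c) / 2.0
    if x <= a:
        return 0.0
    if x <= b:
        return 2.0 * ((x - a) / (c - a)) ** 2
    if x <= c:
        return 1.0 - 2.0 * ((x - c) / (c - a)) ** 2
    return 1.0

def fuzzy_entropy(hist, a, c):
    """De Luca-Termini entropy of the S-membership over a grey-level histogram."""
    total = sum(hist) or 1
    h = 0.0
    for g, n in enumerate(hist):
        mu = s_function(g, a, c)
        if 0.0 < mu < 1.0:
            h -= (n / total) * (mu * math.log(mu) + (1 - mu) * math.log(1 - mu))
    return h

def best_threshold(hist, levels=256):
    """Exhaustive search for (a, c) maximising fuzzy entropy (MPSO stand-in)."""
    return max(((a, c) for a in range(levels - 1) for c in range(a + 1, levels)),
               key=lambda p: fuzzy_entropy(hist, *p))
```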


Keywords: Fuzzy entropy, particle swarm optimization, MRI, segmentation.

Received June 9, 2012; Accepted April 18, 2013



Verifiable Multi Secret Sharing Scheme for 3D Models

Jani Anbarasi1 and Anandha Mala2

1Anna University, India

2Department of Computer Science and Engineering, Easwari Engineering College, India

Abstract: An efficient, computationally secure, verifiable (t, n) multi secret sharing scheme based on the YCH scheme is proposed for multiple 3D models. The (t, n) scheme shares the 3D secrets among n participants such that fewer than t shares cannot reveal the secrets. The feasibility and security of the proposed system are demonstrated on various 3D models, providing sufficient protection for them. The simulation results show that the secrets are retrieved from the shares without any loss.
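The YCH construction builds on Shamir's (t, n) threshold sharing, which can be sketched as follows for a single integer secret. The prime and the polynomial-based sharing are standard; the verification and multi-secret machinery of the actual scheme are omitted.

```python
import random

PRIME = 2 ** 127 - 1   # a Mersenne prime large enough for toy secrets

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation of the polynomial
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 over GF(prime) recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

For 3D models, each coordinate (or a serialized chunk of the mesh) would be shared this way, which is why reconstruction is lossless.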

Keywords: Visual secret sharing, 3D graphics, cryptography.

 Received March 2, 2013; accepted June 9, 2014


Preventing Collusion Attack in Android

Iman Kashefi1, Maryam Kassiri2, and Mazleena Salleh1

1Faculty of Computing, Universiti Teknologi Malaysia, Malaysia

2Faculty of Computer Engineering, Islamic Azad University, Iran

Abstract: Globally, the number of smartphone users has risen above a billion, and most use their phones for day-to-day activities; the security of smartphones has therefore become a great concern. Recently, Android, the most popular smartphone platform, has been targeted by attackers. Many severe attacks on Android are caused by malicious applications that acquire excessive privileges at install time. Moreover, some applications are able to collude to increase their privileges by sharing their permissions. This paper proposes a mechanism for preventing this kind of collusion attack on Android by detecting applications that are able to share their acquired permissions. Applying the proposed mechanism to a set of 290 applications downloaded from the official Android market, Google Play, increased the number of detected applications potentially able to conduct malicious activities by 12.90% compared with the existing detection mechanism. The results showed four applications among those detected that were able to collude to acquire excessive privileges and were entirely missed by the existing method.
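The core idea, detecting permission pooling among applications that can share privileges, can be sketched as below. Android apps signed with the same key and declaring the same `sharedUserId` run with the union of their permissions; the dangerous-combination list and field names here are illustrative assumptions, not the paper's policy.

```python
# Hypothetical policy: combinations that are risky when pooled.
DANGEROUS_COMBOS = [
    {"READ_SMS", "INTERNET"},
    {"READ_CONTACTS", "INTERNET"},
]

def collusion_risks(apps):
    """apps: dicts with 'name', 'shared_user_id', 'permissions' (a set).
    Flag shared-user groups whose pooled permissions contain a dangerous
    combination that no single member holds on its own."""
    pools = {}
    for app in apps:
        pools.setdefault(app["shared_user_id"], []).append(app)
    flagged = []
    for uid, group in pools.items():
        pooled = set().union(*(a["permissions"] for a in group))
        for combo in DANGEROUS_COMBOS:
            if combo <= pooled and not any(combo <= a["permissions"] for a in group):
                flagged.append(uid)     # only collusion achieves this combo
                break
    return flagged
```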


Keywords: Android security, collusion attacks, colluding applications, over-privileged applications.


Received July 19, 2012; accepted September 27, 2012


Intelligent Risk Analysis Model for Mining Adaptable Reusable Component

Iyapparaja Meenakshisundaram1 and Sureshkumar Sreedharan2

1School of Information Technology and Engineering, Anna University, India

2Vivekanandha College of Technology for Women, India

Abstract: Solutions to today's problems are usually sought along the easiest path, drawing on ordinary experience, and software engineers likewise look for better approaches in the software development cycle than the traditional one. Software, implemented in almost every machine, must be developed with many improvement techniques while obeying time and cost constraints. Adding to the simplification methodologies available in the development phases, the proposed Intelligent Risk Analysis Model (IRAM) reduces the limitations of object-oriented development for a new software product, with improvements in the time and budget needed. An object-oriented program comprises individual, exclusive objects with defined functionalities; recognizing the use of such objects in existing programs eliminates the need for new coding, so a component can be reused when nothing better can be designed. The methodology first verifies whether any component in the database of programs (e.g., C++, Java, Perl and Python) matches the stated requirements. Based on this analysis, a matched component is categorized as an Exact Match (EM), Partial Match (PM), or Rejected Match (RM), denoting its applicability to the new product. The correspondence analysis of a reused object depends on a tuple of four parameters: Expected Language (EL), Module Description (MD), Argument Description (AD), and Usage Threshold (UT). A component that matches exactly (EM) can be incorporated directly into the new software product; a component in the PM category is subjected to additional tests, allotted a Rank (R), described in an Intelligent Report (IR), and prepared for promotion to an EM. An RM component is eliminated from the list of candidates at once.


Keywords: Software engineering, software reusability, object oriented programming, IR, cohesion and coupling, regression test.

Received February 3, 2013; accepted September 9, 2014


Multiclass SVM based Spoken Hindi Numerals Recognition

Teena Mittal1 and Rajendra Kumar Sharma2

1Department of Electronics and Communication Engineering, Thapar University, India

2School of Mathematics and Computer Applications, Thapar University, India


Abstract: This paper presents recognition of isolated Hindi numerals using a multiclass Support Vector Machine (SVM). Acoustic features in terms of Linear Predictive Coding (LPC), Mel-Frequency Cepstral Coefficients (MFCC), and the combination of LPC and MFCC are considered as inputs to the recognition process. The extracted acoustic features are given as input to the SVM, and classification is performed in two steps. In the first step, a one-versus-all SVM classifier is used to identify the Hindi language; in the second step, ten one-versus-all classifiers are used to recognize the numerals. Linear, polynomial and RBF kernels are used to construct the SVMs. In the first phase, the best kernel strategy was explored for a fixed number of frames of the speech signal, with the highest recognition rate achieved using the linear kernel. Next, the number of frames used to calculate the LPCs and MFCCs was varied and recognition accuracy was measured. The highest recognition accuracy achieved in this study is 96.8%.

 Keywords: LPC, MFCC, Hindi Numerals, Speech Recognition, SVM.

 Received November 9, 2012; accepted March 9, 2014


Towards the Construction of a Comprehensive Arabic Lexical Reference System

 Hamza Zidoum, Fatma Al-Rasbi, and Muna Al-Awfi

Department of Computer Science, College of Science, Sultan Qaboos University, Oman

Abstract: Arabic is a Semitic language spoken by millions of people in 20 different countries. However, not much work has been done on online dictionaries or lexical resources for it. WordNet, a lexical database developed by Professor George Miller and his team at Princeton University, came to life over 20 years ago and has since proved widely successful and extremely necessary for today's demands, yet it is an example of a lexical resource that has not been developed to its full extent for Arabic. Accordingly, the motivation for developing an Arabic WordNet (AWN) is strong. This project addresses the nominal part of WordNet, that is, nouns as a part of speech, as the first step towards the construction of a comprehensive AWN.

Keywords: WordNet, synsets, Arabic processing, lexicon.

 Received March 10, 2012; accepted July 28, 2015

CARIM: An Efficient Algorithm for Mining Class-Association Rules with Interestingness Measures

Loan Nguyen1,2, Bay Vo3, and Tzung-Pei Hong4,5

1Division of Knowledge and System Engineering for ICT, Ton Duc Thang University, Vietnam

2Faculty of Information Technology, Ton Duc Thang University, Vietnam

3Faculty of Information Technology, Ho Chi Minh City University of Technology, Vietnam

4Department of CSIE, National University of Kaohsiung, Taiwan

5Department of CSE, National Sun Yat-sen University, Taiwan

Abstract: Classification based on association rules can often achieve higher accuracy than some traditional rule-based methods such as C4.5 and ILA. The right-hand side of a class-association rule is a value of the target (class) attribute. This study proposes a general algorithm for mining class-association rules based on a variety of interestingness measures. The proposed algorithm uses a tree structure that maintains the related information of itemsets in its nodes, thus speeding up rule generation. The algorithm can easily be extended to integrate several measures for ranking rules. Experiments are conducted to show the efficiency of the proposed approach under various settings.
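A minimal, tree-free sketch of class-association rule mining with the two classic interestingness measures, support and confidence. The paper's algorithm uses a tree structure and further measures; this only illustrates what a mined rule is, and it limits itemsets to size 2 for brevity.

```python
from itertools import combinations

def mine_cars(records, min_sup, min_conf):
    """records: list of (itemset frozenset, class label).
    Returns rules as (itemset, label, support, confidence)."""
    n = len(records)
    rules = []
    items = sorted({i for itemset, _ in records for i in itemset})
    labels = sorted({c for _, c in records})
    for size in (1, 2):                       # sketch: itemsets up to size 2
        for cand in combinations(items, size):
            cs = frozenset(cand)
            covered = [c for itemset, c in records if cs <= itemset]
            for label in labels:
                hits = sum(1 for c in covered if c == label)
                sup = hits / n
                if covered and sup >= min_sup:
                    conf = hits / len(covered)
                    if conf >= min_conf:
                        rules.append((cs, label, sup, conf))
    return rules
```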

Keywords: Accuracy, classification, class-association rule, interestingness measure, integration.


Received July 19, 2012; Accepted September 27, 2012

The Fuzzy Logic Based ECA Rule Processing for XML Databases

Thomson Fredrick1 and Govindaraju Radhamani2

1R&D Centre, Bharathiar University, Coimbatore, India

2Dr.G.R.D College of Science, Coimbatore, India

Abstract: Current needs of E-Commerce transactions require the development of XML database systems comparable to relational database systems. Fuzzy concepts are adapted to the field of XML Databases (DB) in order to deal with ambiguous and uncertain data. Incorporating fuzziness into Event Condition Action (ECA) rules improves the effectiveness of XML DBs, as it provides much flexibility in defining rules for the supported application. This paper presents an architecture that specifies how fuzzy logic based rules are processed in the context of XML database transactions, and proposes an algorithm for implementing fuzzy active-rule based triggers for XML. The proposed architecture provides new forms of interaction, in support of fuzzy ECA rules, between application programs and the XML database. A motivating example illustrates the use of a fuzzy trigger in a stock market brokering agency. Testing was done to compare the performance of fuzzy XML triggers and normal XML triggers; the results show that fuzzy ECA rule based triggers provide better output than normal ECA rule based triggers.

Keywords: XML DB, ECA rules, fuzzy ECA rules, fuzzy XQuery, fuzzy trigger.

Received August 30, 2012; Accepted March 20, 2014

Finger Knuckle Print Authentication Using AES and K-Means Algorithm

Muthukumar Arunachalam1 and Kannan Subramanian2

1Department of Electronics and Communication Engineering, Kalasalingam University, India

2Department of Electrical and Electronics Engineering, Kalasalingam University, India


Abstract: In general, identification and verification are done with passwords, PINs, etc., which can be easily cracked by hackers. Passwords alone do not provide effective security, so biometrics, a powerful and unique tool based on the anatomical and behavioural characteristics of human beings, can provide higher security for authentication. The Finger Knuckle Print (FKP) is a unique biometric anatomical feature of an individual. Biometric systems suffer from a variety of attacks; combining biometrics with cryptography is a major tool for avoiding them, and a bio-cryptosystem provides authentication as well as confidentiality of the data. This paper presents a biometric key generated from FKP key points using the k-means algorithm; a secret hash value is also generated using a Secure Hash Algorithm (SHA) function and encrypted with the extracted FKP key points by the symmetric Advanced Encryption Standard (AES) algorithm. The FKP key points are extracted using the Scale Invariant Feature Transform (SIFT). The encrypted secret hash value thus secures both the biometric data and the secret value, while the hash function protects the biometric data from malicious tampering and provides error-checking functionality.
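The key-generation step can be sketched as below: cluster scalar stand-ins for the SIFT key points with a tiny k-means, then hash the quantised centroids into a 256-bit value with SHA-256 from Python's `hashlib`. Real FKP key points are multi-dimensional and the AES encryption step is omitted; everything here is an illustrative assumption.

```python
import hashlib

def kmeans_1d(points, k, iters=20):
    """Tiny deterministic k-means on scalar feature values (SIFT stand-in)."""
    cents = sorted(points)[:: max(1, len(points) // k)][:k]   # spread-out init
    for _ in range(iters):
        groups = [[] for _ in cents]
        for p in points:
            groups[min(range(len(cents)), key=lambda i: abs(p - cents[i]))].append(p)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return sorted(cents)

def biometric_key(points, k=4):
    """Quantise the cluster centroids and hash them into a 256-bit key."""
    cents = kmeans_1d(points, k)
    material = ",".join(str(round(c, 1)) for c in cents)  # quantise for stability
    return hashlib.sha256(material.encode()).hexdigest()
```

Quantising the centroids before hashing is what makes the key reproducible across small measurement noise; a real system would tune that tolerance to the sensor.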

Keywords: Biometric cryptosystems, key point extraction, K-means algorithm, SIFT algorithm, AES, SHA function.

Received August 30, 2012; accepted April 23, 2013

Lightweight Anti-Censorship Online Network for Anonymity and Privacy in Middle Eastern Countries

Tameem Eissa and Gihwan Cho

Division of Electronic and Information Engineering, Chonbuk National University, Republic of Korea

Abstract: The Onion Router (TOR) online anonymity system is a network of volunteer nodes that allows Internet users to remain anonymous through consecutive encryption tunnels. Nodes are selected according to estimated bandwidth (bnd) values announced by the nodes themselves; some nodes may announce false values through inaccuracy or hacking intent. Furthermore, a network bottleneck may occur when running TOR in countries with low Internet speed. In this paper, we highlight the censorship challenges that Internet users face when using anti-censorship tools in such countries, and show that current anti-censorship solutions have limitations when deployed in countries with extensive Internet filtering and low Internet speed. To overcome these limitations, we propose a new online anonymity solution based on TOR. Network nodes are selected using a trust-based system, and most encryption and path-selection computation overhead is shifted to our network nodes. We also provide a new encryption framework in which the nodes with higher bnd and resources are chosen and verified carefully according to specific metrics, and we use an atomic encryption between entry and exit nodes (Ex) without revealing the secret components of either party. We demonstrate that our solution can provide anonymous browsing in countries with slow Internet, with fewer bottlenecks.

Keywords: Anonymity, censorship, TOR, anti-censorship, atomic encryption.

Received August 31, 2012; accepted May 6, 2013


An Effective Approach to Software Cost Estimation Based on Soft Computing Techniques

Marappagounder Shanker1 and Keppanagounder Thanushkodi2

1Research Scholar, Anna University, Chennai

2Akshaya College of Engineering and Technology, Coimbatore

Abstract: Employing estimation models in software engineering helps in predicting essential traits of future entities such as software development effort, software reliability, and programmer productivity. Of these models, those supporting software effort estimation have recently drawn substantial research attention. Estimation by analogy is one of the interesting techniques for estimating software effort, but it cannot handle categorical data accurately. A novel technique relying on reasoning by analogy, fuzzy logic, and linguistic quantifiers is proposed here for estimating effort when the software project is represented by either categorical or numerical data. Fuzzy logic based cost estimation models are more suitable when unclear or inaccurate information must be considered; fuzzy systems attempt to imitate the processes of the brain through a rule base. The proposed method uses a fuzzy logic based analogy approach to estimate cost and effort. The performance of the proposed scheme is analyzed using Mean Absolute Relative Error (MARE) and Mean Magnitude of Relative Error (MMRE) and validated against other existing techniques.
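The two evaluation measures can be sketched directly. MMRE is conventionally the mean of |actual - estimated| / actual; MARE definitions vary in the literature, and the version below (total absolute error over total actual effort) is one common reading, not necessarily the paper's.

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def mare(actual, predicted):
    """One common reading of Mean Absolute Relative Error:
    total absolute error over total actual effort."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / sum(actual)
```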

 Keywords: Cost estimation, effort estimation, analogy, fuzzy logic, MARE, cost constructive model.

Received October 18, 2012; accepted June 30, 2014


An Efficient Method for Contrast Enhancement of Real World Hyper Spectral Images

Shyam Lal1 and Rahul Kumar2

1ECE Department, National Institute of Technology Karnataka, India

2ECE Department, Moradabad Institute of Technology, India


Abstract: This paper proposes an efficient method for contrast enhancement of real world hyper spectral images. Contrast is an important characteristic by which the quality of an image is judged as good or poor. The proposed method consists of two stages: in the first stage, the poor-quality image is processed by automatic contrast adjustment in the spatial domain; in the second stage, the output of the first stage is further processed by adaptive filtering for image enhancement in the frequency domain. Simulation and experimental results on a benchmark real world hyper spectral image database demonstrate that the proposed method provides better results than other state-of-the-art contrast enhancement techniques. The method performs well on both dark and bright real world hyper spectral images by adjusting their contrast adaptively. It is a very simple and efficient approach and can be used in applications where images suffer from contrast problems.

Keywords: Adaptive contrast enhancement, real world hyper spectral image, image processing, histogram equalization.

Received December 17, 2012; accepted June 19, 2014


Adaptability Metric for Adaptation of the Dynamic Changes


Subbian Suganthi1 and Rethanaswamy Nadarajan2

1Department of Computer Technology and Applications, Coimbatore Institute of Technology, India

2Department of Applied Mathematics and Computational Sciences, PSG College of Technology, India

Abstract: Adapting to dynamic changes in user needs or in the environment is considered one of the important quality attributes of a system in a pervasive or ubiquitous environment. An aspect-oriented framework that modularizes dynamic changes using aspects is considered a solution for creating dynamically adaptable systems. This framework allows the system to reflect dynamic changes on the associated components through aspects without altering the structure of the components. To evaluate the adaptability of this framework, a new adaptability metric is proposed using the principles of coupling. In this work, coupling is defined as Conceptual coupling Between Aspects and Classes (CBAC), representing the semantic association, at the architecture level, between the aspects that represent dynamic changes and the components associated with those changes. The adaptability efficiency of the system, that is, its ability to reflect dynamic changes on the associated components, is measured using the proposed conceptual coupling metric. Based on the measures, it is concluded that the adaptability efficiency of the system increases with the coupling between the aspects and the components. The proposed CBAC metric is evaluated and demonstrated by measuring the adaptability to dynamic changes in the requirements of various software systems.

Keywords: software adaptability, modularization, aspect-oriented approach, dynamic changes, adaptability metric, coupling metric.

Received February 11, 2013; accepted May 6, 2013


A Hybrid Approach for Gene Selection and Classification using Support Vector Machine

Jaison Bennet1, Chilambuchelvan Arul Ganaprakasam1, and Nirmal Kumar2

1Department of Computer Science and Engineering, RMK Engineering College, India

2 Software Developer, Wipro Technologies, India.

Abstract: DNA microarray technology allows thousands of gene expressions to be generated on a single chip. Analyzing gene expression data plays a vital role in understanding diseases and discovering medicines, and classification of cancer based on gene expression data is a promising research area in bioinformatics and data mining. Not all genes contribute to efficient classification of samples; hence a robust feature selection method is required to identify the relevant genes that help classify samples effectively. Most existing feature selection methods are computationally expensive, and redundancy in gene expression data leads to poor classification accuracy, particularly for multi-class classification. This paper proposes an ensemble feature selection technique combining Recursive Feature Elimination and the Based Bayes Error Filter for gene selection, with the Support Vector Machine algorithm for classification. The proposed ensemble gene selection method yields classification performance comparable to existing classifiers and provides new insight into feature selection.

Keywords: based Bayes filter, classification, microarray, recursive feature elimination, support vector machine.

Received February 14, 2013; accepted August 12, 2014


New Bucket Join Algorithm for Faster Join Query Results

Hemalatha Gunasekaran and Thanushkodi Keppana Gowder

Akshaya College of Engineering, Anna University, India

Abstract: Join is the most expensive and most frequent operation in a database, and a significant number of join queries are executed in interactive applications, where the first few thousand results must be produced without delay. Current join algorithms are mainly based on hash join or sort-merge join, which are less suitable for interactive applications because these algorithms require pre-work before they can produce join results. The nested loop join produces results without delay, but it needs more comparisons, as it carries tuples that will not yield any join results until the end of the join operation. In this paper we present a new join algorithm, called bucket join, which overcomes the limitations of hash-based and sort-based algorithms. In this algorithm the tuples are divided into buckets without any pre-work. Matched tuples and tuples that will not produce join results are eliminated during each phase, so the number of comparisons required to produce the join results is considerably lower than for the other join algorithms. Thus, the bucket join algorithm can replace other early join algorithms in any situation where a fast initial response time is required, without any penalty in memory usage or I/O operations.
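
A minimal sketch of the bucketing idea follows; it partitions both inputs by join key so that non-matching buckets are eliminated wholesale. This is an illustration of the general principle only, and the paper's algorithm (its phased elimination and early result production) differs in detail:

```python
from collections import defaultdict

def bucket_join(r, s, n_buckets=8):
    """Partition both relations into buckets by join key, then compare
    only tuples that landed in the same bucket."""
    rb, sb = defaultdict(list), defaultdict(list)
    for key, payload in r:
        rb[hash(key) % n_buckets].append((key, payload))
    for key, payload in s:
        sb[hash(key) % n_buckets].append((key, payload))
    out = []
    for b in rb:
        if b not in sb:
            continue  # whole bucket eliminated: no possible matches
        for k1, p1 in rb[b]:
            for k2, p2 in sb[b]:
                if k1 == k2:
                    out.append((k1, p1, p2))
    return out
```

Compared with a nested loop join, tuples whose bucket has no counterpart never participate in any comparison.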

Keywords: bucket join, hash join, query results, nested loop join, sort merge join.

Received February 25, 2013; accepted June 9, 2014


Short Secret Exponent Attack on LSBS-RSA

Ravva Santosh1, Challa Narasimham2, and Pallam Shetty3

1Department of Information Technology, MVGR College of Engineering, India

2Department of Computer Science and Engineering, SR Engineering College, India

3Department of Computer Science and Systems Engineering, Andhra University, India

Abstract: LSBS-RSA is a variation of the RSA cryptosystem in which the modulus primes p and q share a large number of least significant bits. Like the original RSA, LSBS-RSA is vulnerable to the short secret exponent attack. Sun et al. [15] studied this problem and provided a bound for the secret exponent; however, their bound does not reduce to the optimal bound of 0.292 for the original RSA, which was provided by Boneh and Durfee. In this paper, we achieve a bound that reduces to the Boneh-Durfee optimal bound.

Keywords: lattice reduction, unravelled linearization, LSBS-RSA.

Received March 7, 2013; accepted June 9, 2014

 Full Text





Intrusion Detection using Artificial Neural Networks with Best Set of Features

Kaliappan Jeyakumar1, Thiagarajan Revathi2, and Sundararajan Karpagam1

1Department of Computer Science and Engineering, Kamaraj College of Engineering and Technology, India

2Department of Information Technology, Mepco Schlenk Engineering College, India

Abstract: An Intrusion Detection System (IDS) monitors the behavior of a given environment and identifies whether activities are malicious (intrusive) or legitimate (normal) based on features obtained from network traffic data. In the proposed method, instead of considering all features for intrusion detection and wasting time analyzing them, only the features relevant to a particular attack are selected, and intrusion detection is performed with the help of a supervised Neural Network (NN). Feature selection is done with the help of an information gain algorithm and a genetic algorithm. The Multi-Layer Perceptron (MLP) supervised NN in our proposed system is trained on the relevant features alone. This system improves the Detection Rate (DTR) for all types of attacks when compared with an IDS that uses all features, or features selected by a genetic algorithm, with an MLP NN as the classifier. Our proposed system detects intrusions with higher accuracy, especially for Remote to Local (R2L), User to Root (U2R) and Denial of Service (DoS) attacks.
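
The information gain criterion used for feature selection can be sketched directly from its definition: the reduction in class-label entropy after splitting on a feature. The sketch below assumes discrete feature values, as would be obtained after binning network traffic attributes:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_col, labels):
    """Information gain of a discrete feature with respect to class labels:
    H(labels) minus the expected entropy within each feature-value group."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_col, labels):
        groups.setdefault(v, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional
```

Ranking features by this score and keeping only the top few per attack class is the kind of filtering the abstract describes before training the MLP.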

Keywords: IDS, genetic algorithm, feature selection, NN, information gain.

Received March 13, 2013; accepted June 9, 2013

 Full Text



Real Time Implementation of Integer DCT based Video Watermarking Architecture

Amit Joshi1, Vivekanand Mishra2, and Rajendra Patrikar3

1Malaviya National Institute of Technology, India

2Sardar Vallabhbhai National Institute of Technology, India

3Visvesvaraya National Institute of Technology, India


Abstract: With recent developments in multimedia communication networks, the data integrity and security of original content is an area of concern. Video is one of the most popular objects shared across media, and video watermarking is the current state of research for resolving video ownership and authenticity issues. There has been substantial development in software-based video watermarking over the last few years. Prior work mainly focused on watermarking raw video, where the watermark is embedded in the uncompressed video. At present, however, video capturing devices produce their output in one of the video compression standards. Software watermarking introduces a measurable delay between video capture and watermark embedding, so it is not an ideal choice for real-time embedding. In this paper, a novel invisible and robust integer Discrete Cosine Transform (DCT) based video watermarking scheme is proposed. The proposed scheme is developed for real-time watermark embedding and can easily be adapted as a primary part of an H.264 encoder. The integer DCT, an essential part of the algorithm, is implemented with two different approaches, a fully pipelined architecture and a recursive architecture, for better speed and area optimization. The robustness of the algorithm against some video attacks has been improved by introducing scene change detection.
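
For reference, the H.264 4x4 forward core transform that such hardware implements is an integer approximation of the DCT, computed as Y = C·X·C^T with a fixed small-integer matrix C (the scaling step normally folded into quantization is omitted here). A software sketch, useful for checking a hardware model against:

```python
# H.264 4x4 integer core transform matrix (integer approximation of the DCT)
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def matmul(a, b):
    """4x4 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_integer_dct(block):
    """Forward core transform Y = C.X.C^T using integer arithmetic only."""
    ct = [list(row) for row in zip(*C)]
    return matmul(matmul(C, block), ct)
```

Because every operation is an integer add, subtract, or shift-friendly multiply, the transform maps naturally onto the pipelined and recursive architectures the paper compares.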

Keywords: H.264, integer DCT, parallel processing, real time watermarking, recursive architecture.

Received May 17, 2013; accepted November 10, 2013

Full Text


An Efficient Specific Update Search Domain based Glowworm Swarm Optimization for Test Case Prioritization

Beena Raman and Sarala Subramani

Department of Information Technology, Bharathiar University, India

Abstract: Software testing is an important activity carried out during the software development life cycle. Regression testing means re-executing test cases from existing test suites to ensure that modifications to the existing software have no adverse effects. During regression testing, new test cases are not created; previously created test cases are re-executed. Ideal regression testing would rerun all the test cases, but due to time and cost constraints only a subset of test cases is rerun, chosen using regression testing techniques such as test case minimization, test case selection and test case prioritization. In this paper, an approach to test case prioritization based on an efficient swarm intelligence technique, Glowworm Swarm Optimization (GSO), is proposed. This work introduces the concept of a specific update search domain applied at the glowworm position-update stage. Based on the Specific Update search domain based GSO (SU-GSO) approach, an optimal number of test cases to execute on the Software Under Test (SUT) is obtained. The objectives of this work are to maximize path coverage and fault coverage in order to obtain optimally prioritized test cases. The resulting solution guarantees an optimal ordering of test cases, and the performance of the proposed SU-GSO is compared with other optimization techniques such as Particle Swarm Optimization (PSO) and artificial Bee Colony Optimization (BCO).
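
The basic GSO dynamics that SU-GSO builds on can be sketched for a 1-D objective: each glowworm carries a luciferin level that tracks fitness, and moves a small step toward a brighter neighbour within its sensing radius. All parameter values below are illustrative defaults, and the paper's specific-update search domain modification is not reproduced here:

```python
import random

def gso_maximize(f, n=20, steps=60, step_size=0.05,
                 rho=0.4, gamma=0.6, lo=-2.0, hi=2.0, radius=1.0, seed=1):
    """Minimal 1-D Glowworm Swarm Optimization sketch."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    luc = [0.0] * n
    for _ in range(steps):
        # luciferin decays, then absorbs fitness at the current position
        luc = [(1 - rho) * l + gamma * f(x) for l, x in zip(luc, pos)]
        new_pos = pos[:]
        for i in range(n):
            brighter = [j for j in range(n)
                        if abs(pos[j] - pos[i]) < radius and luc[j] > luc[i]]
            if brighter:
                d = pos[rng.choice(brighter)] - pos[i]
                if d != 0:
                    new_pos[i] = pos[i] + step_size * (1 if d > 0 else -1)
        pos = new_pos
    return max(pos, key=f)
```

In the test case prioritization setting, positions would encode candidate orderings and f would combine path and fault coverage rather than a continuous objective.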

Keywords: regression testing, test case prioritization, GSO.

Received May 20, 2013; accepted August 10, 2014

Full Text



Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.