January 2018, No. 1

UDP based IP Traceback for Flooding DDoS Attack

Vijayalakshmi Murugesan and MercyShalinie Selvaraj

Department of Computer Science and Engineering, Thiagarajar College of Engineering, India

Abstract: Distributed denial of service (DDoS) attack has become a challenging threat in today's Internet. Adversaries often use spoofed IP addresses, which makes the defence process very difficult. The sophistication of the attack is increasing owing to the difficulty of tracing back its origin. Researchers have contributed many traceback schemes for finding the origin of such attacks. The majority of existing methods either mark the packets or log hash digests of the packets at the routers in the attack path, which is computation and storage intensive. The proposed IP traceback scheme is a UDP (User Datagram Protocol) based packet marking approach that requires computation and storage only at the edge router and the victim, and hence does not overload the intermediate routers in the attack path. Unlike existing traceback schemes, which require numerous packets to trace back an attacker, the proposed scheme requires only a single packet marked with trace information to identify an attacker. It supports incremental deployment, a desirable characteristic of a practical traceback scheme. The work was simulated with a real Internet dataset from CAIDA, and the storage requirement at the victim was found to be less than 1.2 MB, nearly 3413 times less than that of the existing related packet marking method. It was also implemented on an experimental DDoS testbed, and the efficacy of the system was evaluated.
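The single-packet idea in this abstract can be illustrated with a small sketch (not the authors' exact scheme; the field name `trace_mark` and the router table are illustrative assumptions): an edge router embeds its own identifier in each forwarded packet, so the victim can map one marked packet back to its ingress point.

```python
# Illustrative sketch of edge-router packet marking for traceback.
# Packets are modelled as plain dicts; "trace_mark" is a hypothetical
# field standing in for whatever header space the scheme uses.

def mark_packet(packet, edge_router_id):
    """Edge router: embed trace information into the packet."""
    marked = dict(packet)
    marked["trace_mark"] = edge_router_id
    return marked

def trace_origin(packet, router_table):
    """Victim: map the mark in a single packet back to the edge router."""
    return router_table.get(packet.get("trace_mark"), "unknown")

router_table = {0x0A000001: "edge-router-A", 0x0A000002: "edge-router-B"}
pkt = mark_packet({"src": "198.51.100.7", "payload": b"flood"}, 0x0A000002)
print(trace_origin(pkt, router_table))  # edge-router-B
```

Because the mark survives spoofed source addresses, one marked packet suffices to identify the ingress edge router, which is the property the abstract claims.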

Keywords: DDoS, Mitigation, IP Traceback, Packet Marking, Packet Logging, Forensics

Received May 30, 2014; accepted October 26, 2014


Full text 



Bag-of-Visual-Words Model for Fingerprint


Pulung Andono and Catur Supriyanto

Computer Science, University of Dian Nuswantoro, Indonesia

Abstract: In this paper, fingerprint classification based on the Bag-of-Visual-Words (BoVW) model is proposed. In BoVW, an image is represented as a vector of occurrence counts of features, or words. To extract the features, we use Speeded-Up Robust Features (SURF) as the feature descriptor and Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance the quality of the fingerprint images. Most fingerprint research focuses on Henry's classification rather than on the individual person as the target of classification. We evaluate clustering algorithms such as k-means, fuzzy c-means, k-medoids, and hierarchical agglomerative clustering within the BoVW model on the FVC2004 fingerprint dataset. Our experiments show that k-means outperforms the other clustering algorithms. The experimental results on fingerprint classification reach a performance of 90% when k-means is applied for feature descriptor clustering. The results also show that CLAHE improves the performance of fingerprint classification. The use of a public dataset in this paper opens opportunities for future research.
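The BoVW pipeline described here can be sketched in a few lines (random 2-D vectors stand in for SURF descriptors, and a small hand-rolled k-means stands in for the vocabulary learning; no fingerprint data is used): cluster pooled descriptors into a vocabulary, then represent each image as a histogram of word occurrences.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: returns the k cluster centres (the vocabulary)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bovw_histogram(descriptors, centers):
    """Assign each descriptor to its nearest visual word and count occurrences."""
    labels = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers))

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 2))      # pooled descriptors from all images
vocab = kmeans(train, k=8)             # visual vocabulary
image_desc = rng.normal(size=(30, 2))  # descriptors of one fingerprint image
hist = bovw_histogram(image_desc, vocab)
print(hist.sum())  # 30: every descriptor maps to exactly one word
```

The fixed-length histogram is what a downstream classifier consumes, regardless of how many descriptors each image produced.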

Keywords: Fingerprint classification, bag-of-visual-words model, clustering algorithm, speeded-up robust features, contrast limited adaptive histogram equalization.

Received May 1, 2015; accepted October 19, 2015



Full text   



GLoBD: Geometric and Learned Logic Algorithm for Straight or Curved Handwriting Baseline


Houcine Boubaker1, Aymen Chaabouni1, Haikal El-Abed2, and Adel Alimi1

1ReGIM-LAB: Research Groups in Intelligent Machines Laboratory,

University of Sfax, National School of Engineers ENIS, Tunisia

2German University College Riyadh,

German International Cooperation, Kingdom of Saudi Arabia

Abstract: This paper presents a geometric and logic algorithm for on-line Arabic handwriting baseline detection. It consists of two stages. The first, geometric, stage detects sets of nearly aligned points that are candidates to support the baseline, by considering the agreement between the alignment of the trajectory points and their tangent directions. The second, logic, stage uses topological conditions and rules specific to Arabic handwritten script to evaluate the relevance of each of the three most extended of the extracted sets of points as a baseline, and then corrects the first-stage detection result, which is based only on the size of each group of points. The system is also designed to extract the baseline of an inclined and/or irregularly aligned short handwritten sentence, thanks to the flexibility of the method used to constitute the sets of nearly aligned points. Applying this method iteratively in a relatively short neighbourhood window sliding along a long, curved handwritten script line permits extraction of its curved baseline.

Keywords: Online Arabic handwriting, baseline detection, topological conditions, baseline correction, curved baseline extraction.

Received June 3, 2014; accepted December 21, 2015


Full text   



A Framework for Recognition and Animation of Chess Moves Printed on a Chess Book

Süleyman Eken, Abdülkadir Karabaş, Hayrunnisa Sarı, and Ahmet Sayar

Department of Computer Engineering, Kocaeli University, Turkey

Abstract: The work presented in this paper proposes a set of techniques to animate chess moves printed in a chess book. These techniques include (1) extraction of chess moves from an image of the printed page, (2) recognition of the chess moves from the extracted image, and (3) display of the digitally encoded successive moves as an animation on a chessboard. Since all the moves are temporally related, the animations show the change of spatial patterns over time, making it easier to understand how the moves play out and who leads the game. In this study, we animate chess moves printed in Figurine Algebraic Notation (FAN). The proposed technique also eliminates false recognitions by validating candidate moves against the rules of chess.
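The validation step mentioned at the end of the abstract can be sketched with a grammar check (a simplification of the paper's chess-semantics validation: letter pieces K, Q, R, B, N stand in for FAN's figurine glyphs, and only the notation's syntax is checked, not board legality): recognized tokens that cannot be algebraic-notation moves are rejected as misrecognitions.

```python
import re

# Regex grammar for algebraic chess moves: castling, or an optional
# piece letter, optional disambiguation file/rank, optional capture,
# a destination square, optional promotion, optional check/mate mark.
MOVE_RE = re.compile(
    r"^(O-O(-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?)[+#]?$"
)

def looks_like_move(token):
    """Return True if an OCR token is syntactically a legal move string."""
    return MOVE_RE.match(token) is not None

print(looks_like_move("Nf3"))   # True
print(looks_like_move("exd5"))  # True
print(looks_like_move("O-O"))   # True
print(looks_like_move("Zq9"))   # False: rejected as a misrecognition
```

In the full system this syntactic filter would run before the semantic check that the move is actually playable in the current position.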

Keywords: Animating chess moves, chess character recognition, chess readings, chess document image analysis

Received April 22, 2015; accepted August 12, 2015



Full text   



A Physical Topology Discovery Method Based on AFTs of Down Constraint

Bin Zhang1, Xingchun Diao1, Kun Ding1, Hao Yan1, Donghong Qin1, Wei Zhang1 and Jian Tan2

1Department of Computer Science and Technology, Tsinghua University, China

2Department of Computer Software, Zhengzhou University, China

Abstract: Network physical topology discovery is a key issue for network management and applications, and physical topology discovery based on Address Forwarding Tables (AFTs) is a hot topic in current research. This paper defines three constraints on AFTs and proposes a tree-chopping algorithm based on AFTs satisfying the down constraint, which can discover the physical topology of a subnet accurately. The proposed algorithm dramatically decreases the demand for AFT completeness; relying only on the AFTs of down ports, it imposes the loosest constraint yet used for discovering physical topology. The algorithm can also be used in a switch domain spanning multiple subnets.

Keywords: Physical topology discovery, address forwarding table, network management.


Received January 27, 2015; accepted September 9, 2015


Full text  


Decision Based Detail Preserving Algorithm for the Removal of Equal and Unequal Probability Salt and Pepper Noise in Images and Videos

Vasanth Kishore Babu1, Kumar Karuppaiyan2, Nagarajan Govindan1, Ravi Natarajan1, Sundersingh Jebaseelan1, and Godwin Immanuel1

1Department of Electrical and Electronic Engineering, Sathyabama University, India

 2Department of Electronics and Instrumentation Engineering, Sathyabama University, India

Abstract: A novel vicinity-based algorithm for the elimination of equal and unequal probability salt and pepper noise with a fixed 3x3 kernel is proposed. The proposed method uses a tree-based switching mechanism for the replacement of corrupted pixels. Each processed pixel is checked for 0 or 255; if found, the pixel is considered noisy, otherwise it is termed non-noisy and left unaltered. If the pixel is noisy, its 4 neighbours are checked. If all 4 neighbours are noisy, the pixel is replaced by the mean of the 4 neighbours. If any of the 4 neighbours is not noisy, the corrupted pixel is replaced by the unsymmetrical trimmed mean. Under high-noise conditions, if all the elements of the current processing window are noisy, the global mean replaces the corrupted pixel. The proposed algorithm exhibits better performance, both quantitatively and qualitatively, than standard and existing algorithms at very high noise densities. It outclasses existing non-linear filters in terms of PSNR, IEF, MSE, and SSIM, and preserves the fine details of an image even at high noise densities. The algorithm works well for grayscale images, colour images, and video.
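The decision cascade described in this abstract can be sketched as follows (a simplification: the paper's full cascade distinguishes an all-noisy 4-neighbourhood from an all-noisy 3x3 window, which is collapsed here into a single global-mean fallback). Only pixels equal to 0 or 255 are treated as noisy; a noisy pixel is repaired from its clean 4-neighbours when any exist.

```python
import numpy as np

def denoise(img):
    """Decision-based salt-and-pepper repair on the image interior."""
    out = img.astype(float)
    h, w = img.shape
    gmean = img[(img != 0) & (img != 255)].mean()  # global mean of clean pixels
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            p = img[i, j]
            if p != 0 and p != 255:
                continue                           # non-noisy: left unaltered
            nb = np.array([img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1]])
            clean = nb[(nb != 0) & (nb != 255)]
            # trimmed mean of clean neighbours, else global-mean fallback
            out[i, j] = clean.mean() if clean.size else gmean
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                                    # one "salt" pixel
print(denoise(img)[2, 2])  # 100.0
```

Because clean pixels are never touched, edges and fine detail survive, which is the detail-preserving property the abstract emphasizes.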

Keywords: Unequal probability salt and pepper noise, unsymmetrical trimmed mean, edge preservation.

Received July 6, 2014; accepted December 16, 2014


Full text  



On the Security of Two Ownership Transfer Protocols and Their Improvements

Nasour Bagheri1, Seyed Aghili1, and Masoumeh Safkhani2

1 Electrical Engineering Department, Shahid Rajaee Teacher Training University, Iran

2 Computer Engineering Department, Shahid Rajaee Teacher Training University, Iran

Abstract: In recent years, Radio Frequency Identification (RFID) systems have been widely used in many applications. In some applications, the ownership of an RFID tag may change. As a solution, researchers have proposed several encryption-based ownership transfer protocols for RFID-tagged objects. In this paper, we consider the security of the Kapoor and Piramuthu [3] ownership transfer protocol and the Kapoor et al. [4] ownership transfer protocol. More precisely, we present de-synchronization attacks against these protocols. The success probability of each attack is 1, while the complexity is only two runs of the protocol. Finally, we present suggestions for improving the security of these protocols.

Keywords: RFID, cryptanalysis, ownership transfer protocol, de-synchronization attack.

Received February 4, 2014; accepted December 23, 2015


Full text  




Effective and Efficient Utility Mining Technique for Incremental Dataset

Kavitha JeyaKumar1, Manjula Dhanabalachandran1, and Kasthuri JeyaKumar2

1Department of Computer Science and Engineering, Anna University, India

2Department of Electronics and Communication Engineering, SRM University, India

Abstract: Traditional association rule mining, which is based on the frequency values of items, cannot meet the demands of the diverse factors in real-world applications. Utility mining therefore considers additional measures, such as profit or price, according to user preference. Although several algorithms have been proposed for mining high-utility itemsets, they suffer from producing large numbers of candidate itemsets, resulting in performance degradation in terms of execution time and space requirements. On the other hand, when data arrive intermittently, an incremental and interactive mining approach is needed to avoid unnecessary recalculation by reusing previous data structures and mining results. In this paper, an incremental algorithm for efficiently mining high-utility itemsets is proposed to handle this situation. It is based on Utility Pattern Growth (UP-Growth) for mining high-utility itemsets, with a set of effective strategies for pruning candidate itemsets, and on the Fast Update (FUP) approach, which first partitions itemsets into four parts according to whether they are high transaction-weighted utilization itemsets in the original and in the newly inserted transactions. Experimental results show that the proposed Fast Update Utility Pattern Tree (FUUP) approach achieves a good trade-off between execution time and tree complexity.
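The transaction-weighted utilization measure that the FUP-style partitioning relies on can be shown with a toy sketch (the item names, quantities, and profit table are illustrative, not from the paper): an itemset's TWU is the sum of the utilities of all transactions containing it, an upper bound used to prune candidates.

```python
def transaction_utility(tx, profit):
    """Utility of one transaction: sum of quantity * unit profit per item."""
    return sum(qty * profit[item] for item, qty in tx.items())

def twu(itemset, db, profit):
    """Transaction-weighted utilization: total utility of all transactions
    that contain every item of the itemset."""
    return sum(transaction_utility(tx, profit)
               for tx in db if set(itemset) <= set(tx))

profit = {"a": 5, "b": 2, "c": 1}                        # unit profits
db = [{"a": 1, "b": 2}, {"b": 4, "c": 3}, {"a": 2, "c": 1}]  # item -> quantity
print(twu(("a",), db, profit))  # 20: transaction utilities 9 + 11
```

An itemset whose TWU falls below the minimum utility threshold can be discarded along with all its supersets, which is what keeps the candidate set small.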

Keywords: Data mining, utility mining, incremental mining.

Received January 30, 2014; accepted October 14, 2014



Full text 



Intelligent Human Resource Information System (i-HRIS): A Holistic Decision Support Framework for HR Excellence

Abdul-Kadar Masum1, Loo-See Beh2, Abul-Kalam Azad3, and Kazi Hoque4

1Department of Administrative Studies and Politics, University of Malaya, Malaysia

2Department of Administrative Studies and Politics, University of Malaya, Malaysia

3Department of Applied Statistics, University of Malaya, Malaysia

4Department of Educational Management, Planning and Policy, University of Malaya, Malaysia

Abstract: Nowadays, Human Resource Information Systems (HRIS) play a strategic role in the decision-making process for effective and efficient Human Resource Management (HRM). For Human Resource (HR) decision making, most researchers propose expert systems or knowledge-based systems; unfortunately, both have limitations. In this paper, we propose a framework for an Intelligent Human Resource Information System (i-HRIS) that applies an Intelligent Decision Support System (IDSS) together with Knowledge Discovery in Databases (KDD) to improve structured, and especially semi-structured and unstructured, HR decision-making processes. The proposed HR IDSS stores and processes information with a set of Artificial Intelligence (AI) tools such as knowledge-based reasoning and machine learning. These AI tools discover useful information or knowledge from past data and experience to support decision making. We also investigate IDSS applications to HR problems that apply hybrid intelligent techniques, such as machine learning combined with a knowledge-based approach, for new knowledge extraction and prediction. In summary, the proposed framework consists of input subsystems, decision-making subsystems, and output subsystems, with ten HR application modules.

Keywords: HRIS, KDD, DSS, framework.

Received October 1, 2014; accepted August 12, 2015



Full text 



Financial Time Series Forecasting Using Hybrid Wavelet-Neural Model

Jovana Božić and Djordje Babić

School of Computing, University Union, Belgrade, Serbia

Abstract: In this paper, we examine and discuss results of financial time series prediction using a combination of wavelet transforms, neural networks, and statistical time series analysis techniques. The analysed hybrid model combines the capabilities of the wavelet packet transform and neural networks, which can capture hidden but crucial structural attributes embedded in the time series. The input data are decomposed into a wavelet representation at two different resolution levels. For each of the resulting time series, a neural network is created, trained, and used for prediction. To create an aggregate forecast, the individual predictions are combined with statistical features extracted from the original input. In addition to the conclusion that an increase in resolution level does not improve prediction accuracy, the analysis of the obtained results indicates that the suggested model is a satisfactory predictor. The results also indicate that denoising, when applied, generates more accurate results.
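The decompose-predict-recombine structure described above can be illustrated with a one-level Haar decomposition (a stand-in for the wavelet packet transform actually used in the paper): the series splits into approximation and detail sub-series that can each feed a separate predictor, and the split is perfectly invertible.

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar wavelet transform: pairwise sums and
    differences, scaled so the transform is orthonormal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

series = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
a, d = haar_decompose(series)
print(np.allclose(haar_reconstruct(a, d), series))  # True
```

In the hybrid model, one network is trained per sub-series, and the per-band forecasts are recombined through the inverse transform into the aggregate prediction.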

Keywords: Time-series forecasting, wavelet packet transform, neural networks.

Received November 23, 2014; accepted January 20, 2016


Full text  



Parameterized Matching Using Burrows-Wheeler Transform

Anjali Goel1, Rajesh Prasad2, Suneeta Agarwal3, and Amit Sangal4

1Department of Computer Science and Engineering, Ajay Kumar Garg Engineering College, India

2Department of Computer Science and Engineering, Yobe State University, Nigeria

3Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, India

4Department of Computer Science and Engineering, Sunder Deep Engineering College, India 

Abstract: Two strings P[1, ..., m] and T[1, ..., n], with m≤n, are said to be a parameterized match if one can be transformed into the other via some bijective mapping. Parameterized matching is used in software maintenance, plagiarism detection, and detecting isomorphism in graphs. In recent years, the Backward DAWG Matching (BDM) algorithm for exact string matching, based on the Directed Acyclic Word Graph (DAWG), has been combined with a compressed indexing technique, the Burrows-Wheeler Transform (BWT), to achieve lower search time and smaller space. In this paper, we develop a new efficient Parameterized Burrows-Wheeler Transform (PBWT) matching algorithm using the BWT indexing technique. The proposed algorithm requires less space than the existing parameterized suffix tree based algorithm.
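The indexing primitive the proposed PBWT builds on is the plain Burrows-Wheeler Transform, sketched below in its textbook O(n² log n) form (the parameterized encoding that precedes it in the paper is not reproduced here): sort all rotations of the terminated string and take the last column, which is compressible and invertible.

```python
def bwt(s, end="$"):
    """Burrows-Wheeler Transform: last column of the sorted rotations."""
    s += end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, end="$"):
    """Naive inverse: rebuild the rotation table column by column."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    return next(row for row in table if row.endswith(end))[:-1]

code = bwt("banana")
print(code)        # annb$aa
print(ibwt(code))  # banana
```

Production indexes build the same transform via suffix arrays in O(n) and answer pattern queries with backward search over it, which is what gives BWT-based matching its small footprint.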

Keywords: Suffix array, Burrows-Wheeler transform, backward DAWG matching, parameterized matching.

Received January 9, 2014; accepted December 23, 2014



Full text   





Clustering Based on Correlation Fractal Dimension over an Evolving Data Stream

Anuradha Yarlagadda1, Murthy Jonnalagedda2, and Krishna Munaga2

1Department of CSE, Jawaharlal Nehru Technological University, India

2Department of CSE, University College of Engineering Kakinada, India

Abstract: Online clustering of an evolving high-dimensional data stream is a major challenge for data mining applications. Although many clustering strategies have been proposed, the task remains demanding, since published algorithms fail to do well with high-dimensional datasets, to find arbitrarily shaped clusters, and to handle outliers. Knowing the fractal characteristics of a dataset can help abstract it and provide insightful hints in the clustering process. This paper presents a novel strategy, FractStream, for clustering data streams using the fractal dimension, basic window technology, and a damped window model. Core fractal clusters, progressive fractal clusters, and outlier fractal clusters are identified, aiming to reduce search complexity and execution time. Pruning strategies based on the weight associated with each cluster are also employed, reducing main memory usage. An experimental study over a number of datasets demonstrates the effectiveness and efficiency of the proposed technique.
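The fractal dimension driving this kind of clustering can be estimated with a box-counting sketch (a simple stand-in for the correlation fractal dimension named in the title, which uses pair counts rather than box occupancy): count occupied boxes at several scales and fit the slope of log(count) against log(scale). Points on a line should give a dimension near 1.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of points in the unit square."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        # grid the unit square into s x s boxes and count occupied ones
        boxes = {tuple(b) for b in np.floor(points * s).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# 1000 points on the diagonal of the unit square: a 1-dimensional set
line = np.stack([np.linspace(0, 0.999, 1000)] * 2, axis=1)
print(round(box_counting_dimension(line)))  # 1
```

A cluster whose points suddenly raise the set's fractal dimension does not "fit" the existing self-similar structure, which is the intuition such algorithms use to route points to core, progressive, or outlier clusters.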

Keywords: Cluster, data stream, fractal, self-similarity, sliding window, damped window.

Received January 24, 2014; accepted October 14, 2014


Full text  



New Six-Phase On-line Resource Management Process for Energy and SLA Efficient Consolidation in Cloud Data Centers

Ehsan Arianyan, Hassan Taheri, Saeed Sharifian, and Mohsen Tarighi

Department of Electrical & Electronics Engineering, Amirkabir University of Technology, Iran

Abstract: The rapid growth in demand for various services, combined with the dynamic and diverse nature of requests initiated in cloud environments, has led to the establishment of huge data centers that consume vast amounts of energy. On the other hand, to attract more users in dynamic business cloud environments, providers have to deliver high quality of service to their customers based on defined Service Level Agreement (SLA) contracts. Hence, to maximize their revenue, resource providers need to minimize both energy consumption and SLA violations simultaneously. This study proposes a new six-phase procedure for the on-line resource management process. More precisely, it proposes the addition of two new phases to the default on-line resource management process: a VM sorting phase and a condition evaluation phase. Moreover, this paper shows the deficiencies of present resource management methods, which fail to consider all effective system parameters and their relative importance, and which lack load prediction models. The results of simulations using the CloudSim simulator validate the applicability of the proposed algorithms in reducing energy consumption as well as decreasing SLA violations and the number of VM migrations in cloud data centers.

Keywords: Cloud computing, virtual machine, energy consumption, migration, CloudSim.

Received August 2, 2014; accepted December 20, 2015 


Full text 




Opinion within Opinion: Segmentation Approach for Urdu Sentiment Analysis

Muhammad Hassan and Muhammad Shoaib

Department of Computer Science and Engineering, University of Engineering and Technology, Pakistan

Abstract: In computational linguistics, sentiment analysis facilitates the classification of an opinion into a positive or a negative class. Urdu is widely used in different parts of the world, and classification of opinions given in Urdu is as important as for any other language. The literature contains very limited research on sentiment analysis of Urdu, and Bag-of-Words models dominate the methods used for this purpose. Bag-of-Words based models fail to classify a subset of complex sentiments: those containing more than one opinion. No known work identifies and utilizes sub-opinion level information. In this paper, we propose a method based on the sub-opinions within the text to determine the overall polarity of a sentiment in Urdu text. The proposed method classifies a sentiment in three steps. First, it segments the sentiment into two fragments using a set of hypotheses. Next, it calculates the orientation scores of these fragments independently. Finally, it estimates the polarity of the sentiment using the scores of the fragments. We developed a computational model to evaluate the proposed method empirically. The proposed method increases precision by 8.46%, recall by 37.25%, and accuracy by 24.75%, a significant improvement over existing techniques based on the Bag-of-Words model.
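The three steps above can be sketched with a toy fragment-based classifier (English tokens, the marker "but", the lexicon, and the double weight on the trailing fragment are all illustrative assumptions, standing in for the paper's Urdu lexicon, connectives, and segmentation hypotheses): split at a contrastive marker, score each fragment, then combine.

```python
# Toy polarity lexicon: positive scores for positive words, negative
# for negative words; unknown words score 0.
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2}

def fragment_score(fragment):
    """Step 2: orientation score of one fragment."""
    return sum(LEXICON.get(w, 0) for w in fragment.split())

def classify(sentiment, marker="but"):
    # Step 1: segment into two fragments at the contrastive marker.
    parts = sentiment.split(f" {marker} ", 1)
    scores = [fragment_score(p) for p in parts]
    # Step 3: combine; the fragment after the contrast carries more weight.
    total = scores[-1] * 2 + sum(scores[:-1])
    return "positive" if total > 0 else "negative"

print(classify("the screen is good but battery life is terrible"))  # negative
```

A flat Bag-of-Words score over the whole sentence would weigh both opinions equally; the fragment split is what lets the dominant sub-opinion decide the polarity.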

Keywords: Sentiment analysis, Urdu natural language processing, social media mining, Urdu discourse analysis.

Received December 7, 2014; accepted January 20, 2016


Full text 




Service Process Modelling and Performance Analysis for Composite Context Provisioning in IoT

Muhammad Khan1 and DoHyeun Kim2

1Computer Software Engineering Department, University of Engineering and Technology, Korea

2Department of Computer Engineering, Jeju National University, Korea

Abstract: The recent increase in research interest in a smart lifestyle has introduced a huge number of devices into our lives. Some devices, such as smart phones, are used directly, while others, such as proximity and light sensors, are mostly invisible to us. These devices are interconnected via the Internet and are used to sense an environment, detect patterns, and predict or forecast events. Sharing the data and information collected by these disparate devices with clients over the Internet is called provisioning. Owing to the disparity in hardware and software platforms for sensing devices, provisioning services are limited to contextual data from a single provider, and there is no generic process model for composite context provisioning from multiple providers. This paper presents a service-oriented process model for composite context provisioning. A step-by-step explanation is provided for each process involved, and a performance analysis is carried out using a prototype implementation of the model.

Keywords: Composite context, provisioning service, sensing, data collection, service-orientation.

Received April 28, 2015; accepted November 29, 2015


Full text 




Consensus-Based Combining Method for Classifier


Omar Alzubi1, Jafar Alzubi2, Sara Tedmori3, Hasan Rashaideh4, and Omar Almomani4

1Computer and Network Security, Al-Balqa Applied University, Jordan

2Computer Engineering Department, Al-Balqa Applied University, Jordan

3Computer Science Department, Princess Sumaya University, Jordan

4Information Technology,  Al-Balqa Applied University, Jordan

Abstract: In this paper, a new method for combining an ensemble of classifiers, called the Consensus-based Combining Method (CCM), is proposed and evaluated. As in most other combination methods, the outputs of multiple classifiers are weighted and summed into a single final classification decision. Unlike other methods, however, CCM adjusts the weights iteratively after comparing all of the classifiers' outputs. Ultimately, the weights converge to a final set, and the combined output reaches a consensus. The effectiveness of CCM is evaluated by comparing it with popular linear combination methods (the majority voting, product, and average methods). Experiments are conducted on 14 public datasets and on a blog spam dataset created by the authors. Experimental results show that CCM provides a significant improvement in classification accuracy over the product and average methods, and that CCM's classification accuracy is better than or comparable to that of majority voting.
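Consensus-style combining can be sketched as below. This is only in the spirit of CCM, with an assumed update rule (weights raised for members that agree with the current combined output, iterated to a fixed point); the paper's actual rule is not reproduced here.

```python
import numpy as np

def consensus_combine(outputs, iters=50, tol=1e-6):
    """outputs: (n_classifiers, n_classes) array of class probabilities.
    Returns the converged weights and the combined class distribution."""
    n = len(outputs)
    w = np.full(n, 1.0 / n)                  # start from uniform weights
    for _ in range(iters):
        combined = w @ outputs               # current consensus output
        # agreement = similarity of each classifier's output to the consensus
        agreement = outputs @ combined
        new_w = agreement / agreement.sum()  # renormalize to sum to 1
        if np.abs(new_w - w).max() < tol:
            break
        w = new_w
    return w, w @ outputs

outputs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
w, combined = consensus_combine(outputs)
print(combined.argmax())  # 0: the consensus favours the majority view
```

Note how the dissenting third classifier's weight shrinks each round: that iterative reweighting toward agreement is what distinguishes consensus combining from a fixed weighted average.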

Keywords: Artificial intelligence, classification, machine learning, pattern recognition, classifier ensembles, consensus theory, combining methods, majority voting, mean method, product method.

Received June 3, 2015; accepted January 13, 2016


Full text 




Missing Values Estimation for Skylines in Incomplete Database

Ali Awan1, Hamidah Ibrahim2, Nur Udzir3, and Fatima Sidi4

1Information and Communication Technology, International Islamic University Malaysia, Malaysia

2, 3, 4Computer Science and Information Technology, University Putra Malaysia, Malaysia

Abstract: Incompleteness of data is a common problem in many databases, including heterogeneous web databases, multi-relational databases, spatial and temporal databases, and data integration. Incompleteness introduces challenges in query processing, as providing accurate results that best meet the query conditions over an incomplete database is not a trivial task. Several techniques have been proposed to process queries over incomplete databases. Some of these techniques retrieve the query results based on the existing values rather than estimating the missing values. Such techniques are undesirable in many cases, as the dimensions with missing values might be the important dimensions of the user's query; besides, the output is incomplete and might not satisfy the user's preferences. In this paper, we propose an approach that estimates the missing values in skylines, to guide users in selecting the most appropriate skylines from several candidates. The approach mines attribute correlations to generate Approximate Functional Dependencies (AFDs) that capture the relationships between the dimensions, and uses the strength of the probabilistic correlations to estimate the missing values. The skylines with estimated values are then ranked, ensuring that the retrieved skylines are in the order of their estimated precision.
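The skyline operator that the approach ranks over can be sketched as follows (shown over complete points with smaller-is-better dimensions; the AFD-based estimation of missing dimensions, which is this paper's contribution, would run before this dominance test and is not reproduced): a point is in the skyline if no other point dominates it.

```python
def dominates(p, q):
    """p dominates q: at least as good in every dimension (smaller is
    better here) and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """All points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

hotels = [(50, 3), (60, 1), (40, 5), (70, 4)]  # (price, distance-to-beach)
print(skyline(hotels))  # [(50, 3), (60, 1), (40, 5)]
```

Here (70, 4) drops out because (50, 3) is both cheaper and closer; with missing values, whether such a dominance test even applies depends on the estimated entries, which is why the estimates are ranked by precision.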

Keywords: Skyline queries, preference queries, incomplete database, query processing, estimating missing values.

Received August 13, 2015; accepted November 29, 2015

Copyright © 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.