Image Processing and Bayesian Network Based

Fabric Defect Detection

Rajendran Thilepa1 and Raman Sivakumar2

1Department of Electrical and Electronics Engineering, Priyadarshini Engineering College, India

2Department of Electrical and Communication Engineering, Priyadarshini Engineering College, India

Abstract: This paper proposes a new approach to fabric defect detection based on image processing and Bayesian network techniques. The defect types considered are holes, scratches, stains, knots, gouts, missing yarn, and the no-defect condition. To analyse these defects, 35 fabric samples are processed through image processing routines programmed in Matlab. The output is then classified with a neural network, where an efficiency of more than 90% is achieved. The network output is passed on to a hardware setup built around a PIC16F877 microcontroller. To the best of our knowledge, no other generalised approach reported to date achieves this success rate. In addition, the proposed detection scheme is further evaluated in real time using a prototype automated inspection system.
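
The abstract does not give implementation details; as a rough illustration of the image-processing stage, a thresholding check of this kind can flag fabric patches whose pixel intensities fall outside the normal band (the thresholds and patch size below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def detect_defects(gray, dark_thresh=60, bright_thresh=200, min_pixels=25):
    # Flag a fabric patch as defective when enough pixels fall outside
    # the normal intensity band (e.g. holes read dark, stains read bright).
    dark = (gray < dark_thresh).sum()
    bright = (gray > bright_thresh).sum()
    return bool(dark >= min_pixels or bright >= min_pixels)
```

A defect-free patch of uniform mid-grey passes, while a patch containing a dark hole region is flagged.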

Keywords: Image processing, Matlab, Neural Network, Bayesian Network, PIC16F877 Microcontroller.

Received December 18, 2013; accepted December 23, 2014


Abductive Network Ensembles for Improved

Prediction of Future Change-Prone Classes in

Object-Oriented Software

Mojeeb Al-Khiaty1, Radwan Abdel-Aal2, and Mahmoud Elish1,3

1Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Saudi Arabia

2Computer Engineering Department, King Fahd University of Petroleum and Minerals, Saudi Arabia

3Computer Science Department, Gulf University for Science and Technology, Kuwait

Abstract: Software systems are subject to a series of changes due to a variety of maintenance goals. Some parts of a software system are more prone to changes than others. These change-prone parts need to be identified so that maintenance resources can be allocated effectively. This paper proposes the use of GMDH-based abductive networks for modeling and predicting the change proneness of classes in object-oriented software, using both software structural properties (quantified by the C&K metrics) and software change history (quantified by a set of evolution-based metrics) as predictors. The empirical results, derived from an experiment conducted on a case study of an open-source system, show that the proposed approach improves prediction accuracy compared to statistics-based prediction models.

Keywords: change-proneness, software metrics, abductive networks, ensemble classifiers.

Received June 2, 2015; accepted September 20, 2015

Chaotic Encryption Scheme Based on a Fast Permutation and Diffusion Structure

Jean Nkapkop1,2, Joseph Effa1, Monica Borda2, Laurent Bitjoka3, and Mohamadou Alidou4

1Department of Physics, University of Ngaoundéré, Cameroon

2Department of Communications, Technical University of Cluj-Napoca, Romania

3Department of Electrical Engineering, Energetics and Automatics, University of Ngaoundéré, Cameroon

4Department of Physics, University of Maroua, Cameroon

Abstract: The image encryption architecture presented in this paper employs a novel permutation and diffusion strategy based on sorting chaotic solutions of the Linear Diophantine Equation (LDE), which aims to reduce the computational time observed in Chong's permutation structure. In this scheme, the sequence generated by combining the Piecewise Linear Chaotic Map (PWLCM) with solutions of the LDE is first used as a permutation key to shuffle each sub-image. Secondly, the shuffled sub-image is masked using a diffusion scheme based on the Chebyshev map. Finally, to strengthen the encrypted image against statistical attack, the recombined image is shuffled again using the same permutation strategy applied in the first step. The proposed algorithm is simple and efficient, and its three phases provide the properties required of a secure image encryption algorithm. According to the NIST randomness tests, image sequences encrypted by the proposed algorithm pass all the statistical tests with high P-values. Extensive cryptanalysis has also been performed, and the results indicate that the scheme offers superior security and higher speed compared to existing algorithms.
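
As a sketch of the permutation step, the Piecewise Linear Chaotic Map can be iterated and its outputs ranked to yield a shuffling key; the seed and control parameter below are arbitrary, and the paper's combination with LDE solutions is omitted:

```python
def pwlcm(x, p):
    # Piecewise Linear Chaotic Map on (0, 1) with control parameter 0 < p < 0.5
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric about 0.5

def permutation_key(seed, p, n):
    # Iterate the map n times, then rank the chaotic values:
    # the resulting index order serves as a permutation key.
    xs, x = [], seed
    for _ in range(n):
        x = pwlcm(x, p)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])
```

Ranking (rather than quantizing) the chaotic sequence guarantees the key is a true permutation, so shuffling is always invertible.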

Keywords: Fast and secure encryption, chaotic sequence, Linear Diophantine Equation, NIST test.

Received March 17, 2015; accepted October 7, 2015


Constructing a Lexicon of Arabic-English Named

Entity using SMT and Semantic Linked Data

Emna Hkiri, Souheyl Mallat, and Mounir Zrigui

LaTICE Laboratory, Faculty of Sciences of Monastir, Tunisia

Abstract: Named entity recognition is the problem of locating and categorizing atomic entities in a given text. In this work, we used DBpedia linked datasets and combined existing open-source tools to generate a bilingual lexicon of Named Entities (NE) from a parallel corpus. To annotate NEs in the monolingual English corpus, we used linked data entities by mapping them to GATE gazetteers. To translate the entities identified by the GATE tool in the English corpus, we used Moses, a statistical machine translation system. The construction of the Arabic-English named entity lexicon is based on the results of the Moses translation. Our method is fully automatic and aims to support Natural Language Processing (NLP) tasks such as machine translation, information retrieval, text mining, and question answering. Our lexicon contains 48,753 pairs of Arabic-English NEs and is freely available for use by other researchers.

Keywords: Named Entity Recognition (NER), Named entity translation, Parallel Arabic-English lexicon, DBpedia, linked data entities, parallel corpus, SMT.

Received April 1, 2015; accepted October 7, 2015


Forecasting of Chaotic Time Series Using RBF Neural

Networks Optimized By Genetic Algorithms

Mohammed Awad

Faculty of Engineering and Information Technology, Arab American University, Palestine

Abstract: Time series forecasting is an important tool that supports planning for both individual and organizational decisions. The problem consists of forecasting future data from past and/or present data. This paper addresses time series forecasting from a given set of input/output data. We present a hybrid approach that combines Radial Basis Function Neural Networks (RBFNs) and Genetic Algorithms (GAs): the GA is used to optimize the centres c and widths r of the RBFN, while the weights w are optimized by a traditional algorithm. The method adapts the RBFN parameters through the GA, which improves their homogeneity during the optimization process and, in turn, the forecasting performance. The performance of the proposed method is evaluated on examples of the short-term Mackey-Glass time series. The results show that RBFNs whose parameters are optimized by GAs achieve a lower root mean square error than RBFNs whose parameters are found by traditional algorithms.
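
The division of labour described above (evolutionary search for centres and widths, a classical solver for the weights) can be sketched as follows; the Gaussian basis and the least-squares weight fit are standard choices standing in for whichever "traditional algorithm" the paper uses:

```python
import numpy as np

def rbf_design(x, centers, width):
    # Matrix of Gaussian basis activations: one row per sample, one column per center.
    d = x[:, None] - centers[None, :]
    return np.exp(-(d ** 2) / (2.0 * width ** 2))

def fit_weights(x, y, centers, width):
    # The weights are solved in closed form by linear least squares;
    # a GA would wrap this step, searching over the centers and width instead.
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def predict(x, centers, width, w):
    return rbf_design(x, centers, width) @ w
```

On a smooth target such as one period of a sine wave, a dozen evenly spaced centres already yield a very small root mean square error, which is the fitness signal a GA would exploit.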

Keywords: Time series forecasting, RBF neural networks, Genetic Algorithms, Hybrid Approach.

Received March 17, 2015; accepted October 7, 2015


Contextual Text Categorization: An Improved

Stemming Algorithm to Increase the Quality of

Categorization in Arabic Text

Said Gadri and Abdelouahab Moussaoui

 Department of Computer Science, University Ferhat Abbas of Setif, Setif, Algeria

Abstract: One of the methods used to reduce the size of the term vocabulary in Arabic text categorization is to replace the different variants (forms) of words by their common root, a process called root-based stemming. Root extraction is more difficult in Arabic than in other languages because Arabic is a very rich language with a complex and unusual morphological structure. Many algorithms have been proposed in this field. Some are based on morphological rules and grammatical patterns, and are therefore quite difficult to build, requiring deep linguistic knowledge. Others are statistical, and are thus simpler, relying only on calculation. In this paper we propose an improved stemming algorithm based on root extraction and the technique of n-grams, which returns the stems of Arabic words without using any morphological rules or grammatical patterns.
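
A statistical flavour of this idea can be sketched with character bigrams and the Dice coefficient: candidate roots are scored purely by n-gram overlap, with no morphological rules (the toy English wordlist below is illustrative only, not from the paper):

```python
def bigrams(word):
    # Set of overlapping two-character substrings of a word.
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(a, b):
    # Dice similarity over character bigrams: 2|A∩B| / (|A| + |B|).
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def nearest_root(word, roots):
    # Pick the candidate root whose bigram profile best matches the word.
    return max(roots, key=lambda r: dice(word, r))
```

The same scoring applies unchanged to Arabic strings, since it operates on raw characters.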

Keywords: Root extraction, information retrieval, bigrams, stemming, Arabic morphological rules, feature selection.

Received February 22, 2015; accepted August 12, 2015


An Architecture of Thin Client-Edge Computing Collaboration for Data Distribution and Resource Allocation in Cloud

Aymen Abdullah Alsaffar, Pham Phuoc Hung, and Eui-Nam Huh

Department of Computer Science and Engineering, Kyung Hee University, South Korea

Abstract: These days, thin-client devices continuously access the Internet to perform or receive a variety of services in the cloud. However, these devices may lack the capacity (e.g., processing power, CPU, memory, storage, battery, resource allocation) or the network resources needed to meet users' expectations of thin-client services. Furthermore, transferring large volumes of data over the network to centralized servers can burden the network, degrade quality of service, cause long response delays, and use network resources inefficiently. To address this, thin-client devices such as smart mobile devices should connect to edge computing: localized infrastructure near the user's location with more powerful computing and network resources. In this paper, we introduce a new method whose architecture is built on thin client-edge computing collaboration. We also present a new strategy for optimizing big data distribution in cloud computing, and we propose an algorithm that allocates resources to meet Service Level Agreement (SLA) and Quality of Service (QoS) requirements. Our simulation results show that the proposed approach improves resource allocation efficiency and outperforms other existing methods.

Keywords: Cloud computing, data distribution, edge computing, resource allocation, thin client.

Received January 19, 2015; accepted August 12, 2015


TDMCS: An Efficient Method for Mining Closed Frequent Patterns over Data Streams Based on Time Decay Model

Meng Han, Jian Ding, and Juan Li

 School of Computer Science and Engineering, Beifang University of Nationalities, China

Abstract: In some data stream applications, the information embedded in recently arrived data is more important than that in historical transactions. Because data streams change over time, the concept drift problem arises in data stream mining. Frequent pattern mining often generates useless and redundant patterns; to obtain a losslessly compressed result set, closed patterns are needed. This paper proposes a novel method for efficiently mining closed frequent patterns over data streams. The main contributions are: distinguishing the importance of recent transactions from historical ones using a time decay model combined with a sliding window model; designing the minimum support count-maximal support error threshold-decay factor (θ-ε-f) framework to cope with concept drift; using a closure operator to improve the efficiency of the algorithm; and introducing a new way to set the decay factor, the average decay factor f_average, to balance high recall against high precision. The performance of the proposed method is evaluated experimentally, and the results show that it is efficient and stable, suited to mining dense data streams with long patterns, works with sliding windows of different sizes, and is superior to comparable algorithms.
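
The time-decay weighting at the heart of such a model can be sketched in a few lines: each transaction's contribution to a pattern's support shrinks geometrically with its age, so recent arrivals dominate (the decay factor and threshold below are arbitrary illustrations):

```python
def decayed_support(timestamps, now, decay):
    # Support under a time decay model: a transaction seen at time t
    # contributes decay**(now - t), so older transactions count for less.
    return sum(decay ** (now - t) for t in timestamps)

def is_frequent(timestamps, now, decay, min_support):
    # A pattern stays frequent only while its decayed support clears the threshold.
    return decayed_support(timestamps, now, decay) >= min_support
```

A pattern seen only long ago therefore ages out of the frequent set automatically, which is how the decay model tracks concept drift without explicitly deleting old transactions.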

Keywords: data stream mining, frequent pattern mining, closed pattern mining, time decay model, sliding window, concept drift.

Received January 15, 2015; accepted August 12, 2015


Internal Model Control to Characterize

 Human Handwriting Motion

Ines Chihi, Afef Abdelkrim and Mohamed Benrejeb

Laboratory of Research in Automation (LA.R.A), Tunis, El Manar University,

National School of Engineers of Tunis, Tunisia

Abstract: The main purpose of this paper is to model the human handwriting process as an Internal Model Control (IMC) structure. The proposed approach characterizes this biological process from the activity of two forearm muscles, recorded as ElectroMyoGraphy (EMG) signals. To this end, an experimental setup was used to record the coordinates of a pen tip moving in the (x, y) plane together with the EMG signals during the handwriting act. Direct and inverse handwriting models are then proposed to establish the relationship between the forearm muscle activity and the pen-tip velocity. The Recursive Least Squares (RLS) algorithm is used to estimate the parameters of both models. Simulations show good agreement between the results of the proposed approach and the recorded data.
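
The parameter-estimation step can be sketched with a textbook Recursive Least Squares update; the regressors below form a generic linear model, not the paper's EMG-to-velocity structure:

```python
import numpy as np

def rls(regressors, outputs, forgetting=1.0, delta=1e6):
    # Textbook RLS: theta is refined one sample at a time;
    # P is the (scaled) inverse covariance of the regressors.
    n = len(regressors[0])
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, y in zip(regressors, outputs):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (forgetting + phi @ P @ phi)   # gain vector
        theta = theta + k * (y - phi @ theta)        # prediction-error correction
        P = (P - np.outer(k, phi @ P)) / forgetting
    return theta
```

With a forgetting factor below 1, older samples are progressively discounted, which suits slowly varying processes such as handwriting dynamics.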

Keywords: Human handwriting process, Internal Model Control structure (IMC), muscular activities, direct and inverse handwriting models, velocity of the pen-tip, Recursive Least Squares algorithm.

Received January 6, 2015; accepted September 22, 2015


Efficient Segmentation of Arabic Handwritten

Characters Using Structural Features

Mazen Bahashwan, Syed Abu-Bakar, and Usman Sheikh

Department of Electronics and Computer Engineering, Universiti Teknologi Malaysia, Malaysia

Abstract: Handwriting recognition is an important field with many practical applications, such as bank cheque processing, post office address processing, and zip code recognition. Most applications are developed exclusively for Latin characters. However, despite tremendous effort by researchers over the past three decades, Arabic handwriting recognition accuracy remains low because of the difficulty of determining correct segmentation points. This paper presents an approach for character segmentation of unconstrained handwritten Arabic words. First, we locate all possible character segmentation points based on structural features. Next, we develop a novel technique that creates several paths for each possible segmentation point; these paths are used to differentiate between types of segmentation points. Finally, we use heuristic rules and neural networks, drawing on the information related to the segmentation points, to select the correct ones. For evaluation, we applied our method to the IESK-arDB and IFN/ENIT databases, achieving success rates of 91.6% and 90.5%, respectively.

Keywords: Arabic handwriting, character segmentation and structural features.

Received November 17, 2014; accepted September 10, 2015


A Novel Swarm Intelligence Algorithm For The Evacuation Routing Optimization Problem

Jinlong Zhu1, Wenhui Li2, Huiying Li2, Qiong Wu2, and Liang Zhang2

1Department of Computer Science and Technology, ChangChun Normal University, China

2Department of Computer Science and Technology, Jilin University, China

Abstract: This paper presents a novel swarm intelligence optimization algorithm that combines the evolutionary method of particle swarm optimization with the filled function method to solve the evacuation routing optimization problem. The proposed algorithm divides the whole process into three stages. In the first stage, we use the global optimization of the filled function to obtain an optimal solution that sets the destination of all particles. In the second stage, we use the randomness and rapidity of particle swarm optimization to simulate the crowd evacuation. In the third stage, we propose three methods to manage competitive behaviour among the particles. The algorithm builds an evacuation plan from the dynamic wayfinding of the particles, working from a macroscopic and a microscopic perspective simultaneously. Three types of experimental scenes verify the effectiveness and efficiency of the proposed algorithm: a single room, a 4-room/1-corridor layout, and a multi-room, multi-floor building layout. The simulation examples demonstrate that the proposed algorithm can greatly improve evacuation clearance and congestion times, and the experimental results show that the method takes full advantage of multiple exits to maximize evacuation efficiency.

Keywords: PSO, filled function, global optimum, local optimum.

Received November 17, 2014; accepted September 10, 2015

Splay Thread Cooperation on Ray Tracing as A

Load Balancing Technique In Speculative

Parallelism And Gpgpu

Suma Shivaraju1 and Gopalan Pudur2

1Research Scholar, Bharathiar University, Coimbatore, India

2Professor, National Institute of Technology, Tiruchirappalli, India

Abstract: Introducing speculative parallelism into a computing model can improve performance, providing significant benefits by increasing both Instruction-Level Parallelism (ILP) and Thread-Level Parallelism (TLP). GPGPU is a computing approach in which the CPU and GPU work together to solve not only graphics problems but also general-purpose applications. Since the GPU executes data-parallel tasks, dynamic memory creation combined with self-adjusting splay trees allows increased throughput and better load balancing: frequently used nodes stay near the root, which benefits thread locality as well as caching and garbage collection. Ray tracing is a technique for rendering complex scenes into images, determining the colour and intensity of pixels and the distances between them. Multithreading is a promising technique that increases system performance through instruction-level parallelism and thread-level speculation. This paper proposes a new workload-balancing technique for graphics processors operating alongside the CPU. It achieves optimal results by combining speculation techniques with the Lorentz transformation, which is used to determine the colour and brightness of refracted or reflected rays and the relative distance between thread spawnings, resulting in time dilation and contraction. The GPUOCELOT compilation framework and simulator is used to execute the programs, and with amortized cost accounting it shows an increase in instruction performance.

Keywords: Load balancing, Graphics Processors, Splay trees, Optimization, Instruction Level Parallelism, Thread Level Speculation, Amortized cost, Speculative multithreading, Ray tracing, Lorentz transformation.

Received September 26, 2014; accepted February 10, 2015

The Veracious Counting Bloom Filter

Brindha Palanisamy1 and Senthilkumar Athappan2

1Research Scholar, Anna University, Chennai, India

2Professor and Head, Electrical and Electronics Engineering, Dr. Mahalingam College of Engineering and Technology, India

Abstract: Counting Bloom Filters (CBFs) are widely employed in many applications for fast membership queries. A CBF works on dynamic sets, rather than a static set, via item insertions and deletions. A CBF allows false positives, but not false negatives. The Bh-Counting Bloom Filter (Bh-CBF) and the Variable Increment Counting Bloom Filter (VI-CBF) were introduced to reduce the false positive probability, but they suffer from memory overhead and hardware complexity. In this paper, we propose a multilevel optimization approach, named the Veracious Bh-Counting Bloom Filter (VBh-CBF) and the Veracious Variable Increment Counting Bloom Filter (VVI-CBF), which partitions the counter vector into multiple levels to reduce the False Positive Probability (FPP) and limit the memory requirement. The experimental results show that, compared to the basic Bh-CBF and VI-CBF respectively, the false positive probability is reduced by 65.4% and 67.74%, and the total memory size by 20.26% and 41.29%.
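
The basic structure being extended here is the counting Bloom filter itself, sketched below; the paper's multilevel counter partitioning and the Bh/variable-increment schemes are omitted:

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, m, k):
        # m counters, k hash functions per item.
        self.m, self.k = m, k
        self.counters = [0] * m

    def _indexes(self, item):
        # Derive k counter indexes from k independent (salted) hash digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def delete(self, item):
        # Counters (not single bits) are what make deletion possible.
        for idx in self._indexes(item):
            self.counters[idx] -= 1

    def query(self, item):
        # May return a false positive, never a false negative.
        return all(self.counters[idx] > 0 for idx in self._indexes(item))
```

After a matching insert/delete pair, the touched counters return to zero, so a deleted item queries negative again.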

Keywords: Bloom Filter, false positive, Counting Bloom Filter, Intrusion Detection System.

Received August 3, 2014; accepted November 25, 2015

An MMDBM Classifier with CPU and CUDA GPU

Computing in Various Sorting Procedures

Sivakumar Selvarasu1, Ganesan Periyanagounder1, and Sundar Subbiah2

1Department of Mathematics, Anna University, India

2Department of Mathematics, Indian Institute of Technology, India

Abstract: A decision tree classifier called the Mixed Mode Database Miner (MMDBM), used to classify datasets with large numbers of records and attributes, is implemented with two sorting techniques (quick sort and radix sort) on both the Central Processing Unit (CPU) and in General-Purpose computing on Graphics Processing Units (GPGPU), and the results are discussed. The classifier is suitable for handling large numbers of both numerical and categorical attributes. The MMDBM classifier has been implemented on CUDA GPUs and the code is provided. We parallelized the two sorting algorithms on the GPU using the Compute Unified Device Architecture (CUDA) parallel programming platform developed by NVIDIA Corporation. In this paper, we discuss efficient parallel sorting procedures (quick sort and radix sort) in GPGPU computing and compare the GPU results with CPU computing. The main MMDBM result is used to compare the classifier against existing CPU computing results and GPU computing results. The GPU sorting algorithms provide fast, exact results with less processing time and offer sufficient support for real-time applications.
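
For reference, the sequential least-significant-digit radix sort that a GPU version would parallelize can be sketched as follows (CPU baseline only; the CUDA kernel structure is not reproduced here):

```python
def radix_sort(values, base=10):
    # LSD radix sort for non-negative integers: each pass distributes
    # values into `base` buckets by one digit, then concatenates the
    # buckets in order. Each pass is stable, which makes the whole sort correct.
    if not values:
        return []
    exp, largest = 1, max(values)
    while largest // exp > 0:
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[(v // exp) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
        exp *= base
    return values
```

The per-pass bucket counting is what maps naturally onto GPU threads: counts become a histogram, and the concatenation becomes a prefix-sum scatter.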

Keywords: Classification, Data Mining, CUDA, GPUs, Decision tree, Quick sort, Radix sort.

Received July 29, 2014; accepted April 12, 2015

Inter-Path OOS Packets Differentiation Based

Congestion Control for Simultaneous Multipath Transmission


Samiullah Khan and Muhammad Abdul Qadir

Department of Computer Science, Capital University of Science and Technology, Pakistan

Abstract: The increasing popularity and use of multimode devices for ubiquitous network access creates demand for exploiting simultaneous network connections. Unfortunately, the standard transport layer protocols apply single-homed congestion control mechanisms to multipath transmission. One major challenge in such multipath transmission is Receiver Buffer (RBuf) blocking, which prevents a high aggregation ratio over multiple paths. This study proposes a Simultaneous Multipath Transmission (SMT) scheme to avoid the RBuf blocking problem. Realistic simulation scenarios, involving intermediate nodes, cross traffic, scalability, and mixes of these, were designed to analyse SMT performance thoroughly. The results reveal that SMT overcomes RBuf blocking, improving aggregate throughput to up to 95.3% of the total bandwidth.

Keywords: Multipath transmission, RBuf blocking, out-of-sequence arrival, throughput, congestion window.

Received June 14, 2014; accepted September 16, 2015


Method-level Code Clone Detection for Java through Hybrid Approach

Egambaram Kodhai1 and Selvadurai Kanmani2

1Department of Computer Science and Engineering, Pondicherry Engineering College, India

2Department of Information Technology, Pondicherry Engineering College, India

Abstract: Software clone detection is an active research area in which several researchers have investigated techniques to automatically detect duplicated code in programs. However, existing techniques are limited to finding either structural or functional clones, and most detect only the first three clone types. In this paper, we propose a hybrid approach that combines a metric-based approach with textual analysis of the source code to detect both syntactic and functional clones in Java source code; the proposal detects all four types of clones. The detection process uses a set of metrics calculated for each clone type. A tool named CloneManager, based on this method, is developed in Java for high portability and platform independence. The clones detected by the tool are classified and clustered as clone clusters. The tool is evaluated on seven existing open-source Java projects and compared with existing approaches.
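
The metric-based half of such a hybrid can be sketched by fingerprinting each method with a small metric vector and grouping identical vectors as clone candidates; real detectors (including the paper's CloneManager) use richer metrics and a textual confirmation pass:

```python
from collections import defaultdict

def metric_vector(source):
    # Crude method-level metrics: line count, token count, branch-keyword count.
    tokens = source.split()
    branches = sum(tokens.count(k) for k in ("if", "for", "while", "switch"))
    return (len(source.splitlines()), len(tokens), branches)

def candidate_clone_clusters(methods):
    # Methods sharing a metric vector form one candidate cluster,
    # to be confirmed (or rejected) by textual comparison.
    clusters = defaultdict(list)
    for name, src in methods.items():
        clusters[metric_vector(src)].append(name)
    return [sorted(names) for names in clusters.values() if len(names) > 1]
```

Two methods that differ only in identifier names land in the same cluster, while a structurally different method does not.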

Keywords: Clone Detection, Functional Clones, Source code metrics, String-matching.

Received October 21, 2013; accepted June 24, 2014

Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.