July 2017, No. 4

A Probabilistic Approach to Building Defect Prediction Model for Platform-based Product Lines

Changkyun Jeon, Neunghoe Kim, and Hoh In

Department of Computer Science and Engineering, Korea University, Korea

Abstract: Determining when software testing should begin and what resources may be required to find and fix defects is complicated. Being able to predict the number of defects in an upcoming software product, given the current development team, enables project managers to make better decisions. A majority of reported defects are managed and tracked using a repository system, which tracks a defect throughout its lifetime. The defect life cycle (DLC) begins when a defect is found and ends when the resolution is verified and the defect is closed. Defects transition through different states as the project evolves through testing, debugging, and verification. All of these defect transitions should be logged in the defect tracking system (DTS). We construct a Markov chain-based defect prediction model for consecutive software products using defect transition history. During model construction, the state of each defect is modelled using the DLC states. The proposed model can predict defect trends, such as the total number of defects and the distribution of defect states, in consecutive products. The model is evaluated using an actual industrial mobile product software project and is found to be well suited for the selected domain.
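The projection described above can be sketched as a discrete-time Markov chain over DLC states. The states and all transition probabilities below are illustrative assumptions, not values from the paper; in practice the matrix would be estimated from DTS transition logs.

```python
# Sketch of a Markov-chain defect-state projection. The transition
# probabilities are hypothetical, for illustration only.

# Defect life-cycle (DLC) states
STATES = ["New", "Open", "Fixed", "Verified", "Closed"]

# Row-stochastic matrix P[i][j] = P(next state j | current state i),
# which would be estimated from defect-tracking-system (DTS) logs.
P = [
    [0.10, 0.80, 0.10, 0.00, 0.00],  # New
    [0.00, 0.30, 0.65, 0.05, 0.00],  # Open
    [0.00, 0.10, 0.20, 0.70, 0.00],  # Fixed
    [0.00, 0.05, 0.00, 0.15, 0.80],  # Verified
    [0.00, 0.00, 0.00, 0.00, 1.00],  # Closed (absorbing)
]

def step(dist, P):
    """One Markov step: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def project(dist, P, steps):
    for _ in range(steps):
        dist = step(dist, P)
    return dist

# Start with 100 newly reported defects and project 10 transitions ahead;
# mass gradually accumulates in the absorbing Closed state.
future = project([100.0, 0.0, 0.0, 0.0, 0.0], P, 10)
```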

Keywords: Defect prediction, defect life cycle, Markov chain, product line engineering, software engineering.

Received June 12, 2014; accepted September 21, 2015

 

Full text  

 

 

 

Smart City Application: Internet of Things (IoT) Technologies Based Smart Waste Collection Using Data Mining Approach and Ant Colony Optimization

Zeki ORALHAN1, Burcu ORALHAN2, and Yavuz YİĞİT3

 1Department of Electrical Electronics Engineering, Erciyes University, Kayseri, Turkey

2Department of Business Administration, Nuh Naci Yazgan University, Kayseri, Turkey

3Techno Park Center of Erciyes University, Kayseri, Turkey

Abstract: Globally, living in urban areas is now preferred over living in rural areas. This situation creates many problems for urban living, one of the biggest being waste management. Optimizing waste collection has become very important for a smart city. In this study, we aimed to optimize waste collection to reduce both the cost of collection and its polluting effect on the environment. We designed a garbage container with integrated sensors that measure the fill level, temperature, and carbon dioxide concentration inside the container. All of this information is transmitted to our waste management software, which is based on Internet of Things (IoT) technologies. The most efficient waste collection route, computed with an ant colony algorithm, is delivered to garbage truck drivers' cellular-enabled smart tablets. We used a data mining approach to forecast when a garbage container will reach its highest fill level and to plan garbage container placement. We applied this smart waste collection management system in a town in Kayseri, Turkey. In the first step, we deployed it on 200 garbage containers in a town with a population of 548,028 and an urban living ratio of 100%. Before the smart waste management system, the 200 garbage containers were collected by garbage trucks on a static route. After we applied the smart waste management system, the containers were collected on dynamic routes. The system significantly decreased the trucks' fuel cost, carbon emissions, traffic, truck wear, noise pollution, environmental pollution, and work hours, yielding approximately 30% direct cost savings in waste collection.
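The dynamic routing step can be sketched with a minimal ant colony optimization loop. The container coordinates, colony size, and pheromone parameters below are illustrative assumptions, not the paper's configuration.

```python
# Minimal ant-colony-optimization (ACO) sketch for ordering visits to
# garbage containers flagged as nearly full. All numbers are illustrative.
import math
import random

random.seed(42)

containers = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5), (6, 1)]  # (x, y) positions
n = len(containers)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

D = [[dist(containers[i], containers[j]) for j in range(n)] for i in range(n)]
pheromone = [[1.0] * n for _ in range(n)]
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0  # pheromone weight, heuristic weight, evaporation, deposit

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    tour = [0]                      # start at container 0 (depot assumption)
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        # probability ~ pheromone^ALPHA * (1/distance)^BETA (roulette wheel)
        weights = [(j, (pheromone[i][j] ** ALPHA) * ((1.0 / D[i][j]) ** BETA))
                   for j in unvisited]
        r = random.random() * sum(w for _, w in weights)
        for j, w in weights:
            r -= w
            if r <= 0:
                break
        tour.append(j)
        unvisited.remove(j)
    return tour

best = None
for _ in range(50):                 # 50 iterations, 8 ants each
    for tour in [build_tour() for _ in range(8)]:
        length = tour_length(tour)
        for k in range(n):          # shorter tours deposit more pheromone
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a][b] += Q / length
            pheromone[b][a] += Q / length
        if best is None or length < tour_length(best):
            best = tour
    for row in pheromone:           # evaporation
        for j in range(n):
            row[j] *= (1 - RHO)
```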

Keywords: Ant colony optimization, data mining, IoT smart device, smart city, smart waste management.

Received July 29, 2016; accepted December 29, 2016

 

Full text 

 

 

 

An Efficient Perceptual of CBIR System using MIL-SVM Classification and SURF Feature Extraction

Bhuvana Shanmugam1, Radhakrishnan Rathinavel2, Tamije Perumal1 and Subhakala Subbaiyan1

1 Department of Computer Science and Engineering, Sri Krishna College of Technology, India

2Vidhya Mandhir Institute of Technology, India

Abstract: The rapid increase in the use of color images in recent years has motivated the need for color image retrieval systems. A Content Based Image Retrieval (CBIR) system retrieves similar images from large image repositories based on color, texture, and shape. In CBIR, invariance to geometric transformations is one of the most desired properties. Speeded Up Robust Features (SURF) and a Multiple Instance Learning Support Vector Machine (MIL-SVM) are proposed for extracting invariant features and improving retrieval accuracy, respectively. The proposed system consists of the following phases: 1) image segmentation using quadtree segmentation; 2) feature extraction using SURF; 3) image classification using MIL-SVM; 4) codebook design using the Linde-Buzo-Gray (LBG) algorithm; 5) similarity measurement between the query image and the database images using Histogram Intersection (HI). In comparison with the existing approach, the proposed approach significantly improves retrieval accuracy from 74.5% to 86.3%.
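The Histogram Intersection step of the retrieval phase can be sketched directly; the 8-bin histograms below are toy values for illustration.

```python
# Histogram-intersection (HI) similarity for ranking database images
# against a query. Bin counts are illustrative.

def histogram_intersection(h_query, h_db):
    """HI(q, d) = sum_i min(q_i, d_i) / sum_i q_i; 1.0 means identical."""
    return sum(min(q, d) for q, d in zip(h_query, h_db)) / sum(h_query)

query = [12, 4, 0, 7, 9, 1, 3, 4]
database = {
    "img_a": [12, 4, 0, 7, 9, 1, 3, 4],   # identical to the query
    "img_b": [2, 10, 5, 1, 0, 8, 6, 8],
}

# Rank database images by decreasing similarity to the query.
ranked = sorted(database,
                key=lambda k: histogram_intersection(query, database[k]),
                reverse=True)
```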

Keywords: SURF, MIL-SVM, LBG, HI.

Received February 18, 2014; accepted September 9, 2014

 

Full text 

 

 


 

An Unsupervised Feed Forward Neural Network Method for Efficient Clustering

Roya Asadi1, Sameem Abdul Kareem1, Mitra Asadi2, Shokoofeh Asadi3

1 Department of Artificial Intelligence, University of Malaya, Malaysia

2Department of Research, Iranian Blood Transfusion Organization, Iran

3Department of Agricultural Management Engineering, Iran

Abstract: This paper presents a Real Unsupervised Feed Forward Neural Network (RUFFNN) clustering method with one-epoch training and data dimensionality reduction, designed to overcome critical problems in this area such as low training speed, low accuracy, and high memory complexity. The RUFFNN method trains a codebook of real weights by using the input data directly, without any random values. The Best Match Weight (BMW) vector is mined from the weight codebook, and the Total Threshold (TT) of each input datum is computed based on the BMW. Finally, the input data are clustered based on their exclusive TT. For evaluation purposes, the clustering performance of the RUFFNN was compared with several related clustering methods on various datasets. The accuracy of the RUFFNN was measured through the number of clusters and the quantity of correctly classified nodes. Superior clustering accuracies of 96.63%, 96.67%, and 59.36% were achieved for the Breast Cancer, Iris, and Spam datasets from the UCI repository, respectively. The memory complexity of the proposed method is O(m·n·sm), based on the number of nodes, the number of attributes, and the size of the attributes.

Keywords: Artificial neural network, feed forward neural network, unsupervised learning, clustering, real weight.

 

Received December 2, 2014; accepted August 16, 2015

 

Full text 

 

 


 

A New Way of Accelerating Web by Compressing Data with Back Reference-prefer Geflochtener

Kushwaha Singh1, Challa Krishna2, and Saini Kumar1

1Department of Computer Science and Engineering, Rajasthan Technical University, India

2Department of Computer Science and Engineering, India

Abstract: This research focuses on the synthesis of an iterative approach to improve the speed of the web, and on a new methodology for compressing large data with an enhanced backward reference preference. In addition, the proposed system is analysed through observations of experimental outcomes, group-benchmark compressions, and transmission time splays. The result is improved compression of textual data in web pages, which also hardens cryptanalysis of the data by reducing redundancies as much as possible. The approach removes unnecessary redundancies with 70% efficiency and compresses pages with a 23.75-35% compression ratio.
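The backward-reference idea at the core of LZ77/LZSS-style schemes can be sketched as follows. This is a toy coder, not the paper's scheme: the window size, minimum match length, and (offset, length) token format are simplifying assumptions.

```python
# Toy LZ77-style back-reference coder: repeats within a sliding window are
# replaced by (offset, length) references; everything else stays literal.

WINDOW = 64  # assumed window size, for illustration

def compress(text):
    i, out = 0, []
    while i < len(text):
        best_off, best_len = 0, 0
        for j in range(max(0, i - WINDOW), i):   # search the window
            length = 0
            while (j + length < i
                   and i + length < len(text)
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= 3:                         # reference only if it pays off
            out.append((best_off, best_len))
            i += best_len
        else:
            out.append(text[i])
            i += 1
    return out

def decompress(tokens):
    buf = []
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):
                buf.append(buf[-off])             # copy from earlier output
        else:
            buf.append(t)
    return "".join(buf)

tokens = compress("abcabcabcabcxyz")
restored = decompress(tokens)
```

Redundant runs collapse into references, so the token stream is shorter than the input whenever repeats dominate.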

 

Keywords: Backward references, shortest path technique, HTTP, iterative compression, web, LZSS and LZ77.

Received April 25, 2014; accepted August 13, 2014

 

Full text 

 

 


 

A New (k, n) Secret Image Sharing Scheme (SISS)

Amitava Nag1, Sushanta Biswas2, Debasree Sarkar2, and Partha Sarkar2

1Academy of Technology, West Bengal University of Technology, India.

2 Department of Engineering and Technological studies, University of Kalyani, India

Abstract: In this paper, a new (k, n) threshold Secret Image Sharing Scheme (SISS) is proposed. In the proposed scheme, the secret image is first partitioned into several non-overlapping blocks of k pixels. The k pixels of each block are treated as the vertices of a complete graph G. Each spanning tree of G is represented by k pixels along with a sequence and is used to form k pixels of a share image. The original secret image can be restored from k or more shares and cannot be reconstructed from (k-1) or fewer. The experimental results indicate that the proposed SISS is an efficient and safe method.
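The spanning-tree-to-sequence representation the scheme relies on is the classical Prüfer encoding, which maps a labeled tree on k vertices to a unique sequence of k-2 labels and back. A sketch with an illustrative 5-vertex tree:

```python
# Pruefer-sequence encoding/decoding of a labeled spanning tree.
# The example tree is illustrative, not taken from the paper.

def pruefer_encode(edges, k):
    """Encode a tree on vertices 0..k-1 as a sequence of k-2 labels."""
    adj = {v: set() for v in range(k)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seq = []
    for _ in range(k - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)  # smallest leaf
        neighbor = next(iter(adj[leaf]))
        seq.append(neighbor)
        adj[neighbor].remove(leaf)
        del adj[leaf]
    return seq

def pruefer_decode(seq, k):
    """Rebuild the unique tree from its Pruefer sequence."""
    degree = [1] * k
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in range(k) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [v for v in range(k) if degree[v] == 1]  # two vertices remain
    edges.append((u, w))
    return edges

tree = [(0, 1), (1, 2), (1, 3), (3, 4)]   # a spanning tree on 5 vertices
seq = pruefer_encode(tree, 5)
rebuilt = pruefer_decode(seq, 5)
```

Because the mapping is a bijection, a share can carry the sequence and the receiver can reconstruct exactly the intended spanning tree.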

Keywords: (k, n) threshold secret image sharing, complete graph, spanning tree, Prüfer sequence.

Received May 28, 2014; accepted March 3, 2015

 

Full text 

 

 


 

Image Compression based on Iteration-Free Fractal and using Fuzzy Clustering on DCT Coefficients

Sobia Mahalingam1, Valarmathi Lakshapalam1, and Saranya Ekabaram2

1Department of Computer Science and Engineering, Government College of Technology, India

2Department of Computer Science and Engineering, VSB College of Engineering Technical Campus, India

Abstract: In the proposed method, encoding time is reduced by combining an iteration-free fractal compression technique with a fuzzy c-means clustering approach to classify the domain blocks. In iteration-free fractal image compression, the mean image is used as the domain pool for range-domain mapping, which reduces the number of fractal matching operations. A Discrete Cosine Transform (DCT) coefficient is used as a new metric for comparing range and domain blocks. The fuzzy clustering approach further reduces the search space to a subset of the domain pool: based on fuzzy clustering in DCT space, the domain pool is grouped into three clusters and the search is confined to one of them. The proposed method has been tested on various standard images, and the encoding time is reduced by a factor of about 42 compared with the iteration-free fractal coding method, with only a slight degradation in image quality.
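The domain-pool grouping step can be sketched with a minimal fuzzy c-means loop on scalar per-block features. The feature values, fuzzifier m, and iteration count below are illustrative assumptions; a real implementation would cluster DCT-coefficient vectors of the domain blocks.

```python
# Minimal fuzzy c-means sketch: group per-block DCT-based features into
# three clusters, mirroring the domain-pool grouping. Values illustrative.

features = [0.9, 1.1, 1.0, 5.2, 4.8, 5.0, 9.9, 10.3, 10.1]  # one scalar per block
C, M, ITER = 3, 2.0, 30            # clusters, fuzzifier, sweeps
centers = [features[0], features[4], features[8]]  # deterministic spread init

for _ in range(ITER):
    # membership u[i][c] = 1 / sum_k (d_ic / d_ik)^(2/(M-1))
    u = []
    for x in features:
        d = [abs(x - c) or 1e-9 for c in centers]
        u.append([1.0 / sum((d[ci] / d[k]) ** (2 / (M - 1)) for k in range(C))
                  for ci in range(C)])
    # center update: mean of features weighted by membership^M
    centers = [sum((u[i][ci] ** M) * features[i] for i in range(len(features)))
               / sum(u[i][ci] ** M for i in range(len(features)))
               for ci in range(C)]

# assign each block to its highest-membership cluster
labels = [max(range(C), key=lambda ci: u[i][ci]) for i in range(len(features))]
```

At search time, a range block would then be compared only against domain blocks sharing its cluster label.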

Keywords: Fractal image compression, fuzzy clustering, DCT coefficients, contractive affine transformation.

Received May 6, 2014; accepted November 25, 2015

 

Full text 

 


 

Muzzle Classification Using Neural Networks

Ibrahim El-Henawy1, Hazem El-bakry2, and Hagar El-Hadad3

1Department of Information Systems, Zagazig University, Egypt

2Department of Information Systems, Mansoura University, Egypt

3Department of Information Systems, Beni-Suef University, Egypt.

Abstract: There are multiple techniques used in image classification, such as Support Vector Machines (SVM), Artificial Neural Networks (ANN), Genetic Algorithms (GA), fuzzy measures, and Fuzzy Support Vector Machines (FSVM). Muzzle classification based on such techniques has become widely known for guaranteeing the safety of cattle products and assisting in veterinary disease supervision and control. The aim of this paper is to use a neural network technique for image classification. First, the area of interest in the captured muzzle image is detected; then pre-processing operations such as histogram equalization and morphological filtering are used to increase contrast and remove noise. Next, a box-counting algorithm extracts the texture feature of each muzzle. This feature is used in the learning and testing stages of the neural network for muzzle classification. The experimental results show that after 15 input cases for each image in the neural training step, the testing result is correct and gives the correct muzzle detection. Therefore, neural networks can be applied to the classification of bovines for breeding and marketing registration systems.
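The box-counting feature extraction step can be sketched as estimating the box-counting (fractal) dimension of a binary texture image. The 8x8 test pattern below is illustrative; a real pipeline would run this on the pre-processed muzzle image.

```python
# Box-counting sketch: count occupied boxes at several scales and take the
# slope of log(count) vs log(1/size) as the texture feature.
import math

def box_count(img, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    n = len(img)
    count = 0
    for r in range(0, n, box):
        for c in range(0, n, box):
            if any(img[r + dr][c + dc]
                   for dr in range(box) for dc in range(box)):
                count += 1
    return count

def box_dimension(img, sizes=(1, 2, 4)):
    """Least-squares slope of log(count) against log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# 8x8 binary pattern: a diagonal line, whose box-counting dimension is 1
img = [[1 if r == c else 0 for c in range(8)] for r in range(8)]
dim = box_dimension(img)
```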

Keywords: Muzzle classification, image processing, neural networks.

Received November 7, 2014; accepted February 5, 2015

 

Full text 

 

 


 

A Bi-Level Text Classification Approach for SMS Spam Filtering and Identifying Priority Messages

Naresh Kumar Nagwani

Department of Computer Science and Engineering, National Institute of Technology Raipur, India.

Abstract: Short Message Service (SMS) traffic is increasing day by day: trillions of SMS messages are sent and received by billions of users every day, and spam messages are increasing at the same rate. A number of recent advancements have taken place in the field of SMS spam detection and filtering. The objective of this work is twofold: first, to identify and classify spam messages in a collection of SMS messages, and second, to identify priority or important messages among the filtered non-spam messages, thereby categorizing SMS messages for effective management and handling. The work is planned as two levels of binary classification: at the first level, SMS messages are categorized into two classes, spam and non-spam, using popular binary classifiers; at the second level, non-spam messages are further categorized into priority and normal messages. Four state-of-the-art text classification techniques, namely naïve Bayes, support vector machine, latent Dirichlet allocation, and non-negative matrix factorization, are used to categorize the SMS text messages at the different levels of classification. The proposed bi-level classification model is evaluated using the performance measures accuracy and F-measure. Combinations of classifiers at both levels are compared, and the experiments show that the support vector machine algorithm performs better for filtering spam messages and categorizing priority messages.
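The bi-level routing idea can be sketched with two cascaded binary classifiers. A tiny multinomial naïve Bayes stands in here for the paper's classifiers, and the training messages are invented examples, purely for illustration.

```python
# Two-level sketch: level 1 separates spam from non-spam; level 2 splits
# non-spam into priority vs. normal. Training data is invented.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns priors, per-label word counts, vocab."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        priors[label] += 1
        for w in text.lower().split():
            counts[label][w] += 1
            vocab.add(w)
    return priors, counts, vocab

def classify_nb(model, text):
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

level1 = train_nb([
    ("win cash prize now", "spam"),
    ("free offer claim prize", "spam"),
    ("meeting moved to monday", "ham"),
    ("urgent server down call me", "ham"),
])
level2 = train_nb([
    ("urgent server down call me", "priority"),
    ("urgent deadline reply now", "priority"),
    ("meeting moved to monday", "normal"),
    ("lunch at noon", "normal"),
])

def route(msg):
    """Cascade: filter spam first, then rank the survivors."""
    if classify_nb(level1, msg) == "spam":
        return "spam"
    return classify_nb(level2, msg)
```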

Keywords: SMS spam, priority sms, important sms, sms spam filtering, bi-level binary classification.

Received March 24, 2014; accepted August 13, 2014

Full Text

 


 

Using Machine Learning Techniques for Subjectivity Analysis based on Lexical and Non-Lexical Features

Hikmat Ullah Khan and Ali Daud

Department of Computer Science, COMSATS Institute of Information, Pakistan

Abstract: Machine learning techniques have been used to address various problems, and document classification is one of their main applications. Opinion mining has emerged as an active research domain due to its wide range of applications, such as multi-document summarization, opinion mining of documents, analysis of users' reviews, and improving answers to opinion questions in forums. Existing works classify documents using lexicon-based features only. In this work, four state-of-the-art machine learning techniques are applied to classify content as subjective or objective: subjective content contains opinionated information, while objective content contains factual information. The main contribution lies in the introduction of non-lexical and content-based features in addition to a conventional lexicon-based feature set. We compare the results of the four machine learning techniques and discuss their performance across diverse categories of lexical and non-lexical features. The comparative analysis uses standard performance evaluation measures, and the experiments are performed on a real-world dataset from an online forum covering diverse topics. It is shown that the proposed content-based and non-lexical thread-specific features play their role in the classification of subjective and non-subjective content.

Keywords: Machine Learning, classification, opinion mining, lexicon, non-lexical features.

Received December 28, 2014; accepted August 31, 2015

 

Full Text


 

 

Design and Implementation of a Diacritic Arabic Text-To-Speech System

Aissa Amrouche 1, Leila Falek 2 and Hocine Teffahi 3

1, 2, 3 Laboratory of Spoken Communication and Signal Processing, Electronics and Computer Science Faculty, University of Sciences and Technology HOUARI BOUMEDIENE, Algeria.

1 Scientific and Technical Research Centre for Development of Arabic Language, Algeria.

Abstract: The absence of diacritical marks from modern Arabic text generates a significant increase in ambiguity, which can cause confusion in the pronunciation of a written word, even though a reader with a certain level of Arabic knowledge can easily recover the missing diacritics using word context and knowledge of Arabic morphology and syntax. This paper describes the design and implementation of a Text-To-Speech (TTS) system for diacritized Arabic text. The goal of this project is to obtain a high-quality speech synthesizer based on unit selection using a bi-gram model, taking into account the particularities of the language. It takes diacritized Arabic text as input and produces the corresponding speech; the output is available as a male voice. The evaluation of our TTS system is based on subjective and objective tests. The final evaluation of the GArabic TTS system, regarding intelligibility, naturalness (listening), and quality (PESQ), is judged successful.

Keywords: Diacritics, Arabic language, diacritization, TTS, speech synthesis, unit selection, bi-gram model.

Received January 8, 2015; accepted April 23, 2015

 

Full Text

   

 

 

Pattern Recognition Using the Concept of Disjoint Matrix of MIMO System

Mezbahul Islam1, Rahmina Rubaiat1, Imdadul Islam1, Mostafizur Rahaman1, and Mohamed Ruhul Amin2

1Department of Computer Science and Engineering, Jahangirnagar University, Bangladesh

2Department of Electronics and Communications Engineering, East West University, Bangladesh

Abstract: In many applications, it is necessary to compare two or more images to verify their originality; verification of a fake organization logo or a fake official signature can be considered in this context. Cross correlation and the wavelet transform are widely used techniques for comparing two images, but they have low sensitivity to AWGN noise and to very small changes in the image. Learning algorithms, for example Principal Component Analysis (PCA), are prevalent for this purpose at the expense of processing time. In this paper, we apply the concept of the uncorrelated Multiple Input Multiple Output (MIMO) channel of a wireless link to different images, such that the received signal vectors corresponding to the largest six eigenvalues reflect the characteristics of the images. Three types of images are considered for identification: a human face with background, a fingerprint, and a human signature. The proposed model shows robustness in identifying images under rotation and noise contamination.

Keywords: Uncoupled multiple input multiple output links, eigenvalues, unitary matrix, noise, channel matrix.

 

Received September 9, 2014; accepted November 25, 2014

 

Full Text

 

Fuzzy Modeling for Handwritten Arabic Numeral Recognition

Dhiaa Musleh, Khaldoun Halawani and Sabri Mahmoud

Information and Computer Science Department, King Fahd University of Petroleum and Minerals

 Saudi Arabia

Abstract: In this paper we present a novel fuzzy technique for Arabic (Indian) online digit recognition. We use directional features to automatically build generic fuzzy models for Arabic online digits from the training data. The fuzzy models include the samples' trend lines and the upper and lower envelopes of the samples of each digit. Automatically generated weights for the different segments of the digits' models are also used, and the fuzzy intervals are automatically estimated from the training data. The fuzzy models are robust and can handle the variability in handwriting styles. The classification phase consists of two cascaded stages: the first stage classifies digits into zero/nonzero classes using five features (viz. length, width, height, height variance, and aspect ratio), and the second stage classifies digits 1 to 9 using fuzzy classification based on directional and segment histogram features. A Support Vector Machine (SVM) is used in the first stage and a syntactic fuzzy classifier in the second stage. A database containing 32,695 Arabic online digits is used in the experimentation. The results show that the first stage (zero/nonzero) achieved an accuracy of 99.55% and the second stage (digits 1 to 9) achieved an accuracy of 98.01%. The misclassified samples were evaluated subjectively, and the results indicate that humans could not classify approximately 35% of the misclassified digits.


Keywords: Automatic fuzzy modeling, Arabic online digit recognition, directional features, online digit structural features.

Received November 17, 2014; accepted April 12, 2015

Full Text

   

 

 

Medical Image Registration and Fusion Using Principal Component Analysis

Meisen Pan, Jianjun Jiang, Fen Zhang and Qiusheng Rong

College of Computer Science and Technology, Hunan University of Arts and Science, China

Abstract: Principal Component Analysis (PCA) is widely used in the field of medical image processing. In this paper, PCA is applied to register and fuse images. For registration, first, the centroids of the static and moving images are derived by computing the image moments and taken as the translation values; then the difference between the two rotation angles, produced by applying PCA to the covariance matrices of the image coordinates, is taken as the rotation value; finally, the moving image is aligned with the static one. The Iterative Closest Point (ICP) algorithm has some shortcomings worth improving, so we combine PCA with ICP to align the images: the translation and rotation values derived by PCA are used as the initial parameters of ICP, which further improves registration accuracy. The experimental results show that the combined method has a fairly simple implementation, low computational load, and good registration accuracy, and can efficiently avoid being trapped in local optima. For fusion, a sliding window is first moved across the images to be fused to construct sub-blocks of the same size; then the eigenvectors of the covariance matrix of each sub-block are obtained using PCA; finally, the absolute values of the eigenvectors are summed to compute the fusion coefficient of the central pixel of each sub-block, and the images are fused. The results reveal that the proposed fusion method is superior to traditional PCA-based image fusion.
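The registration initialization can be sketched in two parts: a centroid from first-order image moments, and a principal-axis angle from the 2x2 covariance of the foreground pixel coordinates. The tiny binary "image" below is illustrative.

```python
# Sketch: centroid via image moments plus PCA-style orientation angle.
import math

def centroid(img):
    """First-order moments: (m10/m00, m01/m00) over pixel intensities."""
    m00 = m10 = m01 = 0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            m00 += v
            m10 += r * v
            m01 += c * v
    return m10 / m00, m01 / m00

def principal_angle(img):
    """Orientation of the leading eigenvector of the coordinate covariance,
    via the closed form for a 2x2 symmetric matrix."""
    cr, cc = centroid(img)
    sxx = sxy = syy = 0.0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v:
                sxx += (r - cr) ** 2
                sxy += (r - cr) * (c - cc)
                syy += (c - cc) ** 2
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# A vertical bar along the row axis: principal angle 0, centroid at its middle.
static = [[0, 0, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0]]
angle = principal_angle(static)
cr, cc = centroid(static)
```

Running the same computation on the moving image and subtracting the two angles yields the initial rotation handed to ICP.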

Keywords: Centroids, image registration, principal component analysis, image fusion.

Received October 14, 2014; accepted May 19, 2015

Full Text

 

 


 

Speech Scrambling based on Independent Component Analysis and Particle Swarm Optimization

Nidaa Abbas1 and Jahanshah Kabudian2

1,2Computer Engineering and Information Technology Department, University of Razi, Iran

2College of IT, University of Babylon, Iraq

Abstract: The development of communication technologies and the use of computer networks have left data vulnerable to interception. For this reason, this paper proposes a scrambling algorithm based on Independent Component Analysis (ICA), with the descrambling process carried out using Particle Swarm Optimization (PSO). In the scrambling algorithm, the speech signal is segmented into two or three segments, which are then mixed to produce the scrambled speech. In the descrambling process, kurtosis and negentropy are proposed as fitness functions. The simulation results indicate that the scrambled speech has no residual intelligibility and that the descrambled speech quality is satisfactory. The performance of the scrambling algorithm has been tested on four metrics: Signal-to-Noise Ratio (SNR), Perceptual Evaluation of Speech Quality and Mean Opinion Score (PESQ-MOS), Linear Predictive Coding (LPC), and the Itakura-Saito distance. Input speech signals with a sampling frequency of 16 kHz were tested for both male and female speakers.
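The kurtosis fitness function can be sketched directly: speech-like signals are super-Gaussian (positive excess kurtosis), so a PSO particle's candidate unmixing can be scored by how non-Gaussian its output is. The two synthetic signals below are illustrative stand-ins for speech and noise.

```python
# Excess kurtosis as an ICA/PSO fitness: positive for peaky (speech-like)
# signals, negative for flat (uniform) ones, zero for a Gaussian.
import math
import random

def excess_kurtosis(x):
    """E[(x - mu)^4] / sigma^4 - 3."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (var ** 2) - 3.0

random.seed(0)
# a peaky, Laplacian-like signal (speech-like) vs. a uniform one
peaky = [random.expovariate(1.0) * random.choice((-1, 1)) for _ in range(5000)]
flat = [random.uniform(-1, 1) for _ in range(5000)]
```

A PSO descrambler would maximize this score over candidate unmixing parameters, preferring outputs that look speech-like.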

Keywords: ICA, Itakura-Saito distance, LPC, PSO, speech scrambling, SNR.

Received July 6, 2015; accepted August 16, 2015

Full Text


 

 

Saliency Detection for Content Aware Computer Vision Applications

 

Manipoonchelvi Pandivalavan and Muneeswaran Karuppiah

Department of Computer Science and Engineering, Mepco Schlenk Engineering College, India

Abstract: In recent years, there has been an increased scope for intelligent computer vision systems that analyse the content of multimedia data. These systems are expected to process a huge quantum of image data at high speed without compromising effectiveness. Such systems benefit from reducing the amount of visual information by selectively processing only a relevant portion of the input data. The core issue in building them is to discard irrelevant information and retain only a relevant subset of the input visual information. To address this issue, we propose a region-based computational visual attention model for saliency detection in images. The proposed model determines the salient object, or part of it, without prior knowledge of its shape and color. The proposed framework has three components. First, the input image is segmented into homogeneous regions, and smaller regions are merged with neighbouring regions based on color and spatial distance. Second, three attributes of each region, namely spatial position, color contrast, and size, are evaluated to distinguish salient objects or their parts. Finally, irrelevant background regions are suppressed and a region-level saliency map is generated based on the three attributes. The generated saliency map preserves the shape and precise location of salient regions, and hence can be used to create high-quality segmentation masks for high-level machine vision applications. Experimental results show that our proposed approach is qualitatively better than state-of-the-art approaches and quantitatively comparable to human perception.

Keywords: Content aware processing, saliency detection, computational visual attention.

 Received June 12, 2014; accepted September 15, 2014

Full Text


 

Simultaneously Identifying Opinion Targets and Opinion-bearing Words Based on Multi-features in Chinese Micro-blog Texts

Quanchao Liu, Heyan Huang and Chong Feng

Department of Computer Science and Technology, Beijing Institute of Technology, China

Abstract: We propose to simultaneously identify opinion targets and opinion-bearing words in Chinese micro-blog texts based on multiple features: opinion-bearing words are identified by means of an opinion-bearing word dictionary, and opinion targets are identified by considering multiple features relating opinion targets and opinion-bearing words; we then take a further step to optimize forwarding-based opinion target identification. We decompose our task into four phases: 1) construct an opinion-bearing word dictionary and identify the opinion-bearing words in a sentence from a Chinese micro-blog; 2) design multiple features related to opinion target identification, including token, Part-Of-Speech (POS), Word Distance (WD), Direct Dependency Relation (DDR), and SRL; 3) design three kinds of feature templates to identify feature-opinion pairs <opinion target, opinion-bearing word> in Chinese micro-blog texts; 4) combining the forwarding relations between individual micro-blogs, solve the problem of identifying opinion targets in short micro-blogs. Experiments with the labeled data of Natural Language Processing and Chinese Computing (NLP&CC) 2012 and 2013 show that our approach provides better performance than the baselines and most systems reported at NLP&CC 2012 and 2013.

Keywords: Opinion mining, opinion target identification, micro-blog, feature-opinion pairs, sentiment polarity.

 Received January 14, 2015; accepted July 12, 2015

Full Text


 
 
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 