November 2016, No.6

Iris-Pupil Thickness Based Method for Determining

Age Group of a Person

Asima Abbasi and Muhammad Khan

Shaheed Zulfikar Ali Bhutto Institute of Sciences and Technology, Pakistan

Abstract: Soft biometric attributes such as gender, ethnicity and age can be determined from iris images. Pupil size plays an important role in iris template aging. In this study, statistical experiments are performed to determine confidence intervals for the iris-pupil thickness of different age groups: children, youth and senior citizens. Significant group differences have been observed by applying statistical techniques such as Analysis of Variance (ANOVA) and Tukey's pairwise comparison test. The results of the study show that the proposed methodology can be employed to determine the age group of a person from available iris images. Based on these results, we argue that the performance of an iris recognition system can be enhanced by identifying the age group of a person from the iris image.
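The statistical comparison described above can be sketched with synthetic data; the group means, standard deviation and sample sizes below are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical iris-pupil thickness samples (arbitrary units) per age group
children = rng.normal(2.0, 0.2, 30)
youth = rng.normal(1.6, 0.2, 30)
seniors = rng.normal(1.2, 0.2, 30)

# One-way ANOVA: do the three group means differ significantly?
f_stat, p_value = stats.f_oneway(children, youth, seniors)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
```

A small p-value rejects the hypothesis of equal group means; Tukey's pairwise comparison (e.g., `statsmodels.stats.multicomp.pairwise_tukeyhsd`) then identifies which specific pairs of groups differ.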

Keywords: Iris recognition, feature extraction, iris aging, iris pupil ratio.

Received July 10, 2014; accepted April 2, 2015; Published online December 23, 2015

 

Parallel Particle Filters for Multiple Target Tracking

Sebbagh Abdennour and Tebbikh Hicham

Automatic and Computing Laboratory of Guelma (LAIG), 8 Mai 1945 Guelma University, Algeria

Abstract: The Multiple Target Tracking (MTT) problem is widely addressed in signal and image processing. When the state and measurement models are linear, several algorithms yield good performance on the MTT problem, among them the Multiple Hypothesis Tracker (MHT) and the Joint Probabilistic Data Association Filter (JPDAF). However, if the state and measurement models are nonlinear, these algorithms break down. In this paper we propose a method based on a bank of particle filters, whose objective is to estimate the trajectories of several targets using bearings-only measurements. The main idea of the algorithm is to combine the Multiple Model (MM) approach with Sequential Monte Carlo (SMC) methods. The result of this combination is a Nonlinear Multiple Model Particle Filter (NMMPF) algorithm able to estimate the trajectories of multiple targets.
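A minimal bootstrap (SIR) particle filter, the SMC building block behind such a filter bank, can be sketched as follows; the scalar random-walk model and noise levels are illustrative assumptions, not the paper's bearings-only setup:

```python
import numpy as np

rng = np.random.default_rng(1)
true_x, n = 2.0, 2000
particles = rng.uniform(-5.0, 5.0, n)        # initial particle cloud
for _ in range(20):
    particles += rng.normal(0.0, 0.05, n)    # propagate: random-walk state model
    z = true_x + rng.normal(0.0, 0.2)        # noisy measurement of the true state
    w = np.exp(-0.5 * ((z - particles) / 0.2) ** 2)  # Gaussian likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)   # resample by weight (SIR)
estimate = particles.mean()
```

After a few measurement updates the particle cloud concentrates around the true state; the same propagate-weight-resample loop works for nonlinear models where the Kalman-based MHT/JPDAF machinery fails.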

Keywords: MM approach, MTT, particle filtering.

Received August 21, 2014; accepted December 21, 2014; Published online December 23, 2015

 


Face Image Super Resolution via Adaptive-Block PCA

Lin Cao and Dan Liu

Department of Telecommunication Engineering, Beijing Information Science and Technology University, China

Abstract: A novel single face image Super Resolution (SR) framework based on adaptive-block Principal Component Analysis (PCA) is presented in this paper. The basic idea is the reconstruction of a High Resolution (HR) face image from a Low Resolution (LR) observation using a set of HR and LR training image pairs. In the proposed method, each HR image block is generated from the image blocks at the same position in the training images. The test face image and the training images are divided into many overlapping blocks; these blocks are classified according to their characteristics, PCA is applied directly to the non-flat blocks to extract optimal weights, and the hallucinated patches are reconstructed using the same weights. The final HR face image is formed by integrating the hallucinated patches. Experiments indicate that the new method produces HR faces of higher quality and requires less computation time than some recent face image SR techniques.
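The core weight-reuse idea (express the LR test block in terms of the same-position training blocks, then apply the same weights to their HR counterparts) can be sketched with least squares; the toy HR blocks below are a hypothetical construction, and the paper's PCA-based weight extraction is simplified here:

```python
import numpy as np

rng = np.random.default_rng(2)
# Columns: vectorized 4x4 LR blocks at one position from 10 training faces
L = rng.normal(size=(16, 10))
# Toy HR counterparts (each LR pixel expanded fourfold), perfectly tied to LR here
H = np.repeat(L, 4, axis=0)
# A test LR block that lies in the span of the training blocks
coeffs = rng.normal(size=10)
x_lr = L @ coeffs
# Extract reconstruction weights in LR space ...
w, *_ = np.linalg.lstsq(L, x_lr, rcond=None)
# ... and reuse the same weights on the HR blocks to hallucinate the HR patch
x_hr = H @ w
```

In the real framework the weights come from a PCA decomposition of the non-flat training blocks, and the hallucinated patches from all positions are blended into the final HR face.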

Keywords: SR, face image, adaptive-block, PCA.

Received October 30, 2013; accepted November 20, 2014; Published online December 23, 2015

 


KP-Trie Algorithm for Update and Search Operations

Feras Hanandeh1, Izzat Alsmadi2, Mohammed Akour3, and Essam Al Daoud4

1Department of Computer Information Systems, Hashemite University, Jordan

2, 3Department of Computer Information Systems, Yarmouk University, Jordan

4Computer Science Department, Zarqa University, Jordan

Abstract: The radix tree is a space-optimized data structure that performs data compression by merging cluster nodes that share the same branch: each node with only one child is merged with its child. Nevertheless, it cannot be considered speed-optimized, because the root is associated with the empty string. Moreover, values are not normally associated with every node; they are associated only with leaves and some inner nodes that correspond to keys of interest. Therefore, it takes time to traverse bit by bit to reach the desired word. In this paper we propose KP-Trie, a speed- and space-optimized data structure that results from both horizontal and vertical compression.
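For contrast with the bit-by-bit traversal described above, a plain character-level trie (the uncompressed baseline, not the proposed KP-Trie) with insert and search can be sketched as:

```python
def insert(root, word):
    """Insert a word into a dict-based trie; the '$' key marks end-of-word."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True

def search(root, word):
    """Return True iff the exact word was inserted."""
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = {}
for w in ["tree", "trie", "trip"]:
    insert(trie, w)
```

Horizontal and vertical compression, as in radix trees and the KP-Trie, shorten these one-character-per-node chains so lookups touch fewer nodes.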

Keywords: Trie, radix tree, data structure, branch factor, indexing, tree structure, information retrieval.

Received January 14, 2015; accepted March 23, 2015; Published online December 23, 2015


 

Test Case Prioritization for Regression Testing Using Immune Operator

Angelin Gladston1, Khanna Nehemiah1, Palanisamy Narayanasamy2, and Arputharaj Kannan2

1Ramanujan Computing Centre, Anna University, India

2Department of Information Science and Technology, Anna University, India

Abstract: Regression testing is a time-consuming, costly process of re-running existing test cases. As software evolves, the regression test suite grows in size. Test case prioritization techniques help by ordering test cases such that, amidst resource and time constraints, at least the test cases which cover the changes made in the software are executed. The Genetic Algorithm (GA) has been widely used for the test case prioritization problem; however, it suffers from slow convergence. In this work, an Immune Genetic Algorithm (IGA) is applied to test case prioritization so that it converges earlier. Our contributions in the Immune Prioritization Algorithm (IPA) include a method for vaccine selection, a zero-drop function and a probability selection function. The prioritized result of IPA is evaluated against GA, and the statement coverage, decision coverage and block coverage of the test cases prioritized using IPA are found to have improved. Further, IPA showed improved average fitness and optimal fitness values compared to GA.
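As a point of reference for what prioritization optimizes, a greedy additional-coverage ordering (a common baseline in the literature, not the paper's IPA) can be sketched as:

```python
def prioritize(test_coverage):
    """Order tests so each next test covers the most not-yet-covered statements.

    test_coverage: dict mapping test name -> set of covered statement ids.
    """
    remaining = dict(test_coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most new coverage over what is already covered
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical statement coverage for three tests
order = prioritize({"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1}})
```

Evolutionary approaches such as GA and IGA instead search the space of orderings with a coverage-based fitness function, which is where the immune operators speed up convergence.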

 

Keywords: Immune operator, vaccine, test case prioritization, regression testing, GA, IPA.

 

Received July 3, 2012; accepted April 29, 2013; Published online December 23, 2015

 


An Anti-Spam Filter Based on One-Class IB Method in

Small Training Sets

Chen Yang1, Shaofeng Zhao2, Dan Zhang3, and Junxia Ma1

1School of Software Engineering, Zhengzhou University of Light Industry, China

2Henan University of Economics and Law, China

3Geophysical Exploration Center of China Earthquake Administration, China

 

Abstract: We present an approach to email filtering based on a one-class Information Bottleneck (IB) method for small training sets. When the themes of emails change continually, the available training set that is highly relevant to the current theme will be small. Hence, we show how to estimate the learning algorithm and how to filter spam with small training sets. First, in order to preserve classification accuracy and avoid over-fitting while substantially reducing training set size, we frame learning as the computation of a one-class centroid averaged only over highly positive emails; second, we design a simple binary classification model that filters spam by comparing the similarity between emails and the centroid. Experimental results show that on small training sets our method significantly improves classification accuracy compared with currently popular methods such as Naive Bayes, AdaBoost and SVM.
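The centroid-plus-similarity classification step can be sketched as follows; the feature vectors, the cosine similarity measure and the 0.8 threshold are illustrative assumptions, not the paper's IB formulation:

```python
import numpy as np

def one_class_centroid_filter(spam_vectors, threshold=0.8):
    """Train on positive (spam) examples only; classify by cosine similarity
    to their centroid."""
    c = np.asarray(spam_vectors, dtype=float).mean(axis=0)
    def is_spam(x):
        x = np.asarray(x, dtype=float)
        sim = x @ c / (np.linalg.norm(x) * np.linalg.norm(c))
        return bool(sim >= threshold)
    return is_spam

# Hypothetical term-frequency vectors: spam concentrates on the first feature
is_spam = one_class_centroid_filter([[1.0, 0.0], [0.9, 0.1], [1.1, 0.0]])
```

Training on the positive class alone is what makes the approach viable when only a handful of theme-relevant spam examples are available.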

 

Keywords: IB method, one-class IB, anti-spam filter, small training sets.

 

Received September 5, 2014; accepted November 25, 2014


 

Metacognitive Awareness Assessment and Introductory Computer Programming Course Achievement at University

Siti Rum and Maizatul Ismail

Faculty of Computer Science and Information Technology, University of Malaya, Malaysia

Abstract: Computer programming is regarded as a difficult skill to learn, both by researchers and often by learners themselves. Metacognition has been identified as an important factor in becoming a successful learner of computer programming. Metacognition in educational psychology is generally described as the monitoring and controlling of one's own cognitive activities. Researchers have examined the Metacognitive Awareness Inventory (MAI) to identify how it relates to student academic achievement at schools and universities. In this work, an empirical study is conducted using the MAI with the objective of examining the correlation between metacognitive awareness and Grade Point Average (GPA) performance in introductory programming courses at universities in Malaysia. The experimental results indicate a positive relationship between metacognitive awareness and learning success in introductory programming courses.

Keywords: Novice programmer, metacognition, MAI, educational psychology, introductory computer programming.

Received November 13, 2013; accepted December 16, 2014; Published online December 23, 2015

 


Multiple-View Face Hallucination by a Novel Regression Analysis in Tensor Space

Parinya Sanguansat

Faculty of Engineering and Technology, Panyapiwat Institute of Management, Thailand

Abstract: In this paper, a novel multiple-view face hallucination method is proposed. The method reconstructs high-resolution face images in various poses (normal, up, down, left and right) from a single low-resolution face image in one of these poses. There are two steps in the proposed method. In the first step, a high-resolution face image in the same view as the observation is reconstructed by the position-patch face hallucination framework with an improved Locally Linear Embedding (LLE), in which the number of neighbours is adaptive. In the second step, the reconstructed image is used to generate the high-resolution images of the other views by a novel tensor regression technique. Experimental results on a well-known dataset show that the proposed method achieves better image quality than the baseline methods.

 

Keywords: Face hallucination, tensor regression, multiple views, super-resolution.

 

Received January 6, 2014; accepted December 16, 2014; Published online December 23, 2015

 


The Refinement Check of Added Dynamic Diagrams

Based on π-Calculus

Zhou Xiang1 and Shao Zhiqing2

 1Qingdao University, China

2East China University of Science and Technology, China

Abstract: As a semi-formal modeling tool, UML has semantic deficiencies which may cause confusion or even mistakes in the refinement of models. The π-calculus is a formal specification language based on process algebra which can give a strict semantic description of system behaviors. We seek to clearly define the semantics of refinement of a model through the π-calculus, and thus we are able to propose a formal verification method for the refinement. Employing this method, we can improve the efficiency of consistency verification while decreasing mistakes in the refinement process.

 

Keywords: π-calculus, UML, sequence diagram, statechart diagram, weak open bisimulation.

Received January 21, 2014; accepted December 22, 2014

 


A New Model for Software Inspection at the Requirements Analysis and Design Phases of Software Development

Navid Taba and Siew Ow

Department of Software Engineering, University of Malaya, Malaysia

Abstract: Software inspection models have undergone remarkable development over the past four decades, particularly in the field of automatic inspection of software code and electronic sessions. A small number of improvements have been made in the field of system analysis and design. The declining use of formal inspection models based on single checklists and physical or electronic sessions shows decreasing interest in them. As inspection in the system analysis phase is a human-centred activity, supporting inspectors with electronic tools will lead to higher efficiency of the inspection process. This paper proposes a comprehensive web-based tool aimed at accelerating the inspection process in the early phases of software development. In order to evaluate the efficiency of the proposed tool, two case studies were conducted to inspect the artifacts of six software projects from two software companies. Comparing the statistics on defects detected using this tool with those detected using the formal method demonstrates the efficiency of the proposed tool.

Keywords: Software inspection, software test, software engineering improvement, web-based solution, software inspection tool, inspection metrics.

Received September 2, 2013; accepted September 29, 2014; Published online December 23, 2015

 

 


RPLB: A Replica Placement Algorithm in Data Grid with Load Balancing

Kingsy Rajaretnam, Manimegalai Rajkumar, and Ranjith Venkatesan

Department of Computer Science and Engineering, Sri Ramakrishna Engineering College, India

Abstract: A data grid is an Internet-based infrastructure which facilitates the sharing and management of geographically distributed data resources. Data sharing in data grids is enhanced through dynamic data replication methodologies that reduce access latencies and bandwidth consumption. Replica placement creates and places duplicate copies of the most needed files in beneficial locations in the data grid network. To reduce the makespan (the total job execution time), storage consumption and effective network usage in data grids, a new method for replica placement is introduced. In the proposed method, all the nodes in the same region are grouped together and the replica is placed at the node with the highest degree and highest access frequency in the region. The node chosen to host the replica should be load-balanced in terms of access and storage. The proposed dynamic Replica Placement algorithm with Load Balancing (RPLB) is tested using the OptorSim simulator, developed by the European Data Grid project. Two variants of the proposed algorithm, RPLBfrequency and RPLBdegree, are also presented, along with a comparative analysis of all three algorithms. A Graphical User Interface (GUI) is designed as a front end to OptorSim to collect all values for the grid, job and parameter configuration files. Simulation results reveal that the proposed methodology performs better in terms of makespan, storage consumption and replication count than existing algorithms in the literature.
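The placement rule described above (highest degree and highest access frequency within a region) can be sketched as follows; the node records and the lexicographic tie-break are illustrative assumptions, and the access/storage load-balance check is omitted:

```python
def place_replica(region_nodes):
    """Choose the replica host: highest degree, ties broken by access frequency."""
    best = max(region_nodes, key=lambda n: (n["degree"], n["frequency"]))
    return best["name"]

# Hypothetical region: degree = links to neighbour nodes, frequency = file accesses
region = [
    {"name": "n1", "degree": 2, "frequency": 40},
    {"name": "n2", "degree": 4, "frequency": 25},
    {"name": "n3", "degree": 4, "frequency": 60},
]
host = place_replica(region)
```

The RPLBdegree and RPLBfrequency variants would rank by one criterion only; the combined key above corresponds to the full RPLB rule.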

Keywords: Replica placement, load balancing, effective network usage, data grid, data replication.

 

Received June 17, 2013; accepted April 28, 2014

 

An Intelligent Water Drop Algorithm for

Optimizing Task Scheduling in Grid Environment

Sornapandy Selvarani1 and Gangadharan Sadhasivam2

1Department of Information Technology, Tamilnadu College of Engineering, India

2Department of Computer Science and Engineering, PSG College of Technology, India

Abstract: The goal of grid computing is to provide powerful computing for complex scientific problems by utilizing and sharing the large-scale resources available in the grid. Efficient scheduling algorithms are needed to allocate suitable resources to each submitted task, so scheduling is one of the most important issues for achieving high-performance computing in a grid. This paper presents an approach to optimizing scheduling using the nature-inspired Intelligent Water Drops (IWD) algorithm. In the proposed approach, the IWD algorithm is adopted to improve the performance of task scheduling in a grid environment. The performance of the Ant Colony Optimization (ACO) algorithm for task scheduling is compared with the proposed IWD approach, and the results show that task scheduling using IWD can efficiently and effectively allocate tasks to suitable resources in the grid.

Keywords: Grid computing, IWD, task scheduling, ACO.

Received January 24, 2013; accepted March 19, 2014; Published online December 23, 2015

 


iHPProxy: Improving the Performance of HPProxy by Adding Extra Hot-Points

Ponnusamy Pichamuthu1 and Karthikeyan Eswaramurthy2

  1Department of Computer Science, Bharathiar University, India

 2Department of Computer Science, Government Arts College, Udumalpet, Bharathiar University, India

Abstract: In recent years, the interest of Internet users has turned to viewing videos such as Video-on-Demand (VoD), online movies, online sports, news, e-learning, etc. Researchers have employed proxy caching with replacement to provide immediate content delivery to the client on request. One important aspect of content delivery is continuous playability under random seek, even when the client jumps to a new location to continue watching from there. Continuous play is possible if the hit location is already cached; otherwise a delay occurs. Earlier work allowed a small deviation from the desired location in the backward direction to reduce this delay. Our earlier model, Hot-Point Proxy caching (HPProxy), also supports a shift to the nearest cached Group Of Pictures (GOP) in the backward direction for immediate playback, but in some cases the deviation was large. Hence, we propose a new model that keeps the deviation small by adding extra hot-points between the existing sub-level hot-points. Although this mechanism consumes additional cache memory, it increases the byte-hit ratio, satisfies the user requirement for random seek and provides better cache replacement.
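The backward shift to the nearest cached hot-point can be sketched with a binary search over sorted cached positions; the positions below are illustrative:

```python
import bisect

def nearest_cached(hot_points, seek):
    """Return the nearest cached hot-point at or before the seek position,
    or None when nothing earlier is cached (playback would then stall)."""
    i = bisect.bisect_right(hot_points, seek) - 1
    return hot_points[i] if i >= 0 else None

# Hypothetical cached hot-point positions (e.g., GOP start times in seconds)
cached = [0, 30, 60, 120]
```

Adding extra hot-points between the existing ones, as the paper proposes, tightens the gap between `seek` and the returned position, at the cost of extra cache memory.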

Keywords: Proxy caching, shift distance, HPProxy, cache replacement, multimedia streaming, VoD.

Received May 9, 2013; accepted July 8, 2013; Published online December 23, 2015

 


Modified Bee Colony Optimization for the Selection of Different Combination of Food Sources

 Saravanamoorthi Moorthi

Department of Mathematics, Bannari Amman Institute of Technology, India

Abstract: There is a trend in the scientific community to model and solve complex optimization processes by employing natural metaphors. In this area, Artificial Bee Colony (ABC) optimization models the natural food-foraging behaviour of real honey bees; the ABC algorithm is an optimization algorithm based on the intelligent behaviour of a honey bee swarm. In this work, ABC is used for solving multivariable functions with different combinations of food sources. That is, all routes are made known to the bees, the outputs are measured for all possible combinations, and the optimum value is selected based on the output.
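A heavily simplified ABC-style search (employed-bee phase only; the onlooker and scout phases of the full algorithm are omitted) can be sketched on the sphere benchmark function:

```python
import numpy as np

def abc_minimize(f, dim=2, n_food=10, iters=500, seed=3):
    """Each food source is perturbed along a difference vector to a random
    neighbour; improvements are kept (greedy selection), as in ABC's
    employed-bee phase."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-5.0, 5.0, (n_food, dim))
    vals = np.array([f(x) for x in foods])
    for _ in range(iters):
        for i in range(n_food):
            k = int(rng.integers(n_food))          # random neighbour food source
            phi = rng.uniform(-1.0, 1.0, dim)      # random step scaling
            cand = foods[i] + phi * (foods[i] - foods[k])
            v = f(cand)
            if v < vals[i]:                        # keep only improvements
                foods[i], vals[i] = cand, v
    return float(vals.min())

best = abc_minimize(lambda x: float(np.sum(x ** 2)))  # sphere benchmark
```

The difference-vector step shrinks as the food sources converge, which is what drives the swarm toward the optimum on smooth benchmark functions.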

 

Keywords: ABC algorithm, optimization, benchmark functions.

 

Received February 20, 2013; accepted June 5, 2013; Published online December 23, 2015

 


 

An Adaptive Weighted Fuzzy Mean Filter

Based on Cloud Model

 

Kannan Kanagaraj

 Department of Mechanical Engineering, Kamaraj College of Engineering and Technology, India

Abstract: This research proposes an Adaptive Weighted Fuzzy Mean Filter (AWFMF) based on the Cloud Model (CM) to remove salt and pepper noise from digital images. The performance of the proposed filter is compared with existing variants of median and switching filters using the Peak Signal-to-Noise Ratio (PSNR) and a quality index. The proposed filter is able to remove salt and pepper noise even at a 90% noise level with good detail preservation.
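A standard median filter (one of the baseline filter families the paper compares against, not the proposed AWFMF) illustrates salt-and-pepper removal; the flat test image and 20% noise level are illustrative:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(4)
clean = np.full((32, 32), 0.5)                 # flat gray test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2           # corrupt 20% of the pixels
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))  # salt (1) / pepper (0)
restored = median_filter(noisy, size=3)        # 3x3 median suppresses impulses
```

Plain median filtering degrades at very high noise densities, which is why adaptive and fuzzy-weighted variants such as the AWFMF are needed at the 90% level the paper targets.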

Keywords: Image denoising, salt and pepper noise, CM. 

Received January 21, 2013; accepted March 17, 2014; Published online December 23, 2015

 

Prediction of Part of Speech Tags for Punjabi using Support Vector

Machines 

Dinesh Kumar1 and Gurpreet Josan2

 1Department of Information Technology, DAV Institute of Engineering and Technology, India

2Department of Computer Science, Punjabi University, India

Abstract: Part-Of-Speech (POS) tagging is the task of assigning the appropriate POS, or lexical category, to each word in a natural language sentence. In this paper, we work on the automated annotation of POS tags for Punjabi. We collected a corpus of around 27,000 words, which includes text from various stories, essays, day-to-day conversations, poems, etc., and divided these words into files of different sizes for training and testing purposes. In our approach, we use a Support Vector Machine (SVM) for tagging Punjabi sentences. To the best of our knowledge, SVMs have never been used for tagging Punjabi text. The results show that the SVM-based tagger outperforms existing taggers. In the existing POS taggers for Punjabi, the tagging accuracy for unknown words is lower than that for known words, but our proposed tagger achieves high accuracy for unknown and ambiguous words as well. The average accuracy of our tagger is 89.86%, which is better than the existing approaches.
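The kind of per-token feature set an SVM tagger consumes can be sketched as follows; the specific features (lowercased word, 3-character suffix, context words) are illustrative assumptions, not the paper's feature set:

```python
def token_features(words, i):
    """Context-window features for the i-th token of a sentence."""
    w = words[i]
    return {
        "word": w.lower(),
        "suffix3": w[-3:],            # suffixes help generalize to unknown words
        "is_digit": w.isdigit(),
        "prev": words[i - 1].lower() if i > 0 else "<s>",
        "next": words[i + 1].lower() if i < len(words) - 1 else "</s>",
    }

feats = token_features(["He", "reads", "books"], 1)
```

Each feature dict is vectorized (e.g., one-hot encoded) and fed to a linear SVM trained one-vs-rest over the tagset; suffix and context features are what let such a tagger handle unknown words.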


Keywords: POS tagging, SVM, feature set, vectorization, machine learning, tagger, Punjabi, Indian languages.


Received September 18, 2013; accepted February 28, 2014; Published online December 23, 2015

 

Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 