
A Hybrid BATCS Algorithm to Generate Optimal Query Plan

Gomathi Ramalingam1 and Sharmila Dhandapani2

1Department of Computer Science and Engineering, Bannari Amman Institute of Technology, India.

2Department of Electronics and Instrumentation Engineering, Bannari Amman Institute of Technology, India.

Abstract: The enormous day-by-day increase in the number of web pages has driven progress in semantic web data management. The issues in semantic web data management are growing, and research improvements are needed to handle them. One of the most important issues is query optimization. Semantic web data stored in the form of Resource Description Framework (RDF) data can be queried using the popular query language SPARQL. As the size of the data increases, querying the RDF data becomes complicated. Querying RDF graphs involves multiple join operations, and optimizing those joins is NP-hard. Nature-inspired algorithms have become very popular in recent years for handling problems of high complexity. In this research, a hybrid BAT algorithm with Cuckoo Search (BATCS) is proposed to handle the problem of query optimization. The algorithm applies the echolocation behaviour of bats and hybridizes with cuckoo search if the best solution stagnates for a designated number of iterations. Experiments were conducted with benchmark data sets, and the results show that the algorithm performs efficiently in terms of query execution time.
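
A minimal Python sketch of the stagnation-triggered hybrid described in the abstract; the update rules, parameters, and toy objective are illustrative assumptions rather than the authors' exact formulation:

```python
import numpy as np

def batcs(objective, dim, n=20, iters=200, stall_limit=15, seed=0):
    """Hybrid bat/cuckoo sketch: plain bat-algorithm moves, switching to
    cuckoo-style heavy-tailed jumps when the best solution stagnates."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))                 # bat positions
    vel = np.zeros((n, dim))                           # bat velocities
    fit = np.apply_along_axis(objective, 1, pos)
    best, best_fit, stall = pos[fit.argmin()].copy(), fit.min(), 0

    for _ in range(iters):
        freq = rng.uniform(0, 2, (n, 1))               # echolocation pulse frequency
        vel += (pos - best) * freq
        cand = pos + vel
        cand_fit = np.apply_along_axis(objective, 1, cand)
        better = cand_fit < fit                        # greedy acceptance per bat
        pos[better], fit[better] = cand[better], cand_fit[better]
        if fit.min() < best_fit:
            best, best_fit, stall = pos[fit.argmin()].copy(), fit.min(), 0
        else:
            stall += 1                                 # best solution stagnated
        if stall >= stall_limit:                       # hybridize: cuckoo search step
            pos += 0.01 * rng.standard_cauchy((n, dim)) * (pos - best)
            fit = np.apply_along_axis(objective, 1, pos)
            stall = 0
    return best, best_fit

# Toy continuous objective standing in for a join-order cost model.
print(batcs(lambda x: float((x ** 2).sum()), dim=4))
```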

Keywords: Data management, Query optimization, Nature-inspired algorithms, Bat algorithm, Cuckoo Search algorithm

Received November 7, 2014; accepted August 3, 2015

 

Full text 

 

 

An Efficient Web Search Engine for Noisy Free Information Retrieval

Pradeep Sahoo1 and Rajagopalan Parthasarthy2

1Department of Computer Science and Engineering, Anna University, India.

2Department of Computer Science and Engineering, GKM College of Engineering and Technology, India.

Abstract: The vast growth, dynamic nature, and low quality of the World Wide Web make it very difficult to retrieve relevant information from the internet during query search. To resolve this issue, various web mining techniques are used. The biggest challenge in web mining is to remove noisy or unwanted information from a webpage, such as banners, video, audio, images, and hyperlinks that are not associated with the user query. To overcome these issues, a novel custom search engine with an efficient algorithm is proposed in this paper. The proposed Uniform Resource Locator (URL) pattern extractor algorithm extracts all relevant index pages from the web and ranks the indexes based on the user query. Then, a Noisy Data Cleaner (NDC) algorithm is applied to remove the unwanted content from the retrieved web pages. The results show that the proposed UPE+NDC algorithm provides very promising results for different datasets, with high precision and recall rates in comparison with existing algorithms.
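
A minimal Python sketch of the two-stage pipeline; the ranking score and the noisy-tag list are illustrative assumptions, not the paper's exact UPE and NDC rules:

```python
import re

NOISY_TAGS = ("script", "style", "iframe", "nav", "footer", "aside")

def url_pattern_extract(urls, query):
    """UPE sketch: keep URLs that contain query terms and rank them by
    how many terms they contain (illustrative scoring)."""
    terms = query.lower().split()
    scored = [(sum(t in u.lower() for t in terms), u) for u in urls]
    return [u for score, u in sorted(scored, reverse=True) if score > 0]

def noisy_data_clean(html):
    """NDC sketch: drop banner/script-style blocks and markup, keeping
    only the visible text of the page."""
    for tag in NOISY_TAGS:
        html = re.sub(rf"<{tag}\b.*?</{tag}>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", html)          # strip remaining tags
    return re.sub(r"\s+", " ", text).strip()      # collapse whitespace

urls = ["http://example.org/solar-energy", "http://example.org/ads/banner"]
print(url_pattern_extract(urls, "solar energy"))
print(noisy_data_clean("<html><script>x()</script><p>Solar  power</p></html>"))
```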

Keywords: web content extraction, relevant information, noise data elimination, noisy data cleaner algorithm, URL pattern extractor algorithm

Received November 27, 2014; accepted June 1, 2015

 

Full text 

 

                
 

Effective Technology Based Sports Training System Using Human Pose Model

Kannan Paulraj and Nithya Natesan

Department of Electronics and Communication Engineering, Panimalar Engineering College, India

Abstract: This paper investigates sports dynamics using human pose modelling from video sequences. To implement human pose modelling, a human skeletal model is developed using a thinning algorithm, and the feature points of the human body are extracted. The obtained feature points play an important role in analyzing the activities of a sports person. The proposed human pose model technique provides technology-based training to a sports person, whose performance can thus be gradually improved. The paper also aims at improving the computation time and efficiency of the 2D and 3D models.
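
A minimal sketch of the thinning step using scikit-image and SciPy: reduce a binary silhouette to a one-pixel skeleton and take end-points and branch-points as candidate feature points (joint labelling and the 2D/3D models are outside this sketch):

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_feature_points(binary_silhouette):
    """Thin a binary silhouette and pick end-points / branch-points of
    the skeleton as candidate body feature points."""
    skel = skeletonize(binary_silhouette.astype(bool))
    # count the 8-connected neighbours of every skeleton pixel
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int),
                          mode="constant") - skel
    endpoints = np.argwhere(skel & (neighbours == 1))   # limb extremities
    branches = np.argwhere(skel & (neighbours >= 3))    # joint candidates
    return endpoints, branches

# Toy silhouette: a plus sign whose skeleton has 4 end-points and 1 branch.
img = np.zeros((9, 9), dtype=np.uint8)
img[4, 1:8] = 1
img[1:8, 4] = 1
print(skeleton_feature_points(img))
```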

Keywords: Thinning Algorithm, Human activity, Motion Analysis, Feature Extraction

 

Received March 28, 2015; accepted September 9, 2015

 

Full text 

 

Vertical Links Minimized 3D NoC Topology and Router-Arbiter Design

Nallasamy Viswanathan1, Kuppusamy Paramasivam2, and Kanagasabapathi Somasundaram3

1Department of Electrical and Computer Engineering, Mahendra Engineering College, India.

2Department of Electrical and Computer Engineering, Karpagam College of Engineering, India.

3Department of Mathematics, Amrita Vishwa Vidyapeetham, India.

Abstract: The design of a topology and its router plays a vital role in a 3D NoC architecture. In this paper, we develop a partially vertically connected topology, called the 3D Recursive Network Topology (3D RNT), and study its performance using an analytical model. Delay per Buffer Size (DBS) and Chip Area per Buffer Size (CABS) are the parameters considered for the performance evaluation. Our experimental results show that vertical links are cut down by up to 75% in the 3D RNT compared to the 3D Fully connected Mesh Topology (3D FMT), at the cost of increasing DBS by 8%; moreover, 10% lower CABS is observed in the 3D RNT. Further, a Programmable Prefix router-Arbiter (PPA) is designed for the 3D NoC and its performance is analyzed. The results of the experimental analysis indicate that the PPA has lower delay and area (gate count) compared to a Round Robin Arbiter with prefix network (RRA).
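
A back-of-the-envelope sketch of the headline number: counting vertical links in a fully connected 3D mesh versus a partially connected variant that keeps one via per 2x2 tile group (an illustrative placement rule, not the paper's exact 3D RNT construction):

```python
def vertical_links_fmt(x, y, z):
    """3D fully connected mesh: every (x, y) tile has a via between
    every pair of adjacent layers."""
    return x * y * (z - 1)

def vertical_links_partial(x, y, z):
    """Illustrative partially connected variant: one via per 2x2 tile
    group, which removes ~75% of the vertical links."""
    vias_per_layer = ((x + 1) // 2) * ((y + 1) // 2)
    return vias_per_layer * (z - 1)

for dims in [(4, 4, 2), (8, 8, 4)]:
    full, part = vertical_links_fmt(*dims), vertical_links_partial(*dims)
    print(dims, f"FMT={full}", f"partial={part}",
          f"reduction={100 * (1 - part / full):.0f}%")
```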

Keywords: Network topology, Vertical links, Network Calculus, Arbiter, Latency, Chip area.

Received June 26, 2014; accepted July 7, 2015

 

Hidden Markov Random Fields and Particle Swarm Combination for Brain Magnetic Resonance Image Segmentation

El-Hachemi Guerrout, Ramdane Mahiou, and Samy Ait-Aoudia

ESI - Ecole nationale Supérieure en Informatique, Algeria

Abstract: The interpretation of brain images is a crucial task in the practitioner's diagnosis process. Segmentation is one of the key operations for providing decision support to physicians. There are several methods to perform segmentation. We use Hidden Markov Random Fields (HMRF) to model the segmentation problem. This elegant model leads to an optimization problem. The Particle Swarm Optimization (PSO) method is used to achieve brain magnetic resonance image segmentation. Setting the parameters of the HMRF-PSO method is a task in itself. We conduct a study of the choice of parameters that give a good segmentation. The segmentation quality is evaluated on ground-truth images using the Dice coefficient, also called the Kappa index. The results show the superiority of the HMRF-PSO method compared to methods such as the classical MRF and MRF using variants of Ant Colony Optimization (ACO).
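
The quality measure is standard and easy to state; a minimal sketch of the Dice (Kappa) index on binary masks:

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice (Kappa) index between a segmentation and the ground truth:
    2|A n B| / (|A| + |B|), where 1.0 means a perfect match."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

a = np.array([[1, 1, 0], [0, 1, 0]])   # segmentation mask
b = np.array([[1, 1, 0], [0, 0, 1]])   # ground truth mask
print(dice_coefficient(a, b))          # 2*2 / (3+3) = 0.667
```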

Keywords: Brain image segmentation, Hidden Markov Random Field, Particle Swarm Optimization, Dice coefficient.

Received June 5, 2015; accepted October 19, 2015

 

 Full text  

 

Energy Consumption Improvement and Cost Saving by Cloud Broker in Cloud Datacenters

Ahmad Reza Karamolahi1, Abdolah Chalechale2, and Mahmoud Ahmadi2

1B2B social sales network group, Iran

2Computer and Information Technology Department, Razi University, Iran

Abstract: Using a single Cloud datacenter in a Cloud network can have several disadvantages for users, from excess energy consumption to increased user dissatisfaction with the service and the price of provided services. The Cloud broker, as an intermediary between users and datacenters, can play a key role in enhancing users' satisfaction and reducing the energy consumption of datacenters that are located in geographically different areas. In this paper, we provide an algorithm that assigns datacenters to users by rating the various datacenters. The algorithm has been simulated with CloudSim and results in high levels of user satisfaction, cost-effectiveness, and improved energy consumption. We show that this algorithm saves 44% of energy consumption and 7% of user cost in the sample simulation space.
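
A minimal sketch of a rating-based assignment, assuming illustrative attributes and weights; the paper's actual rating formula is not reproduced here:

```python
def rate_datacenters(datacenters, w_energy=0.5, w_cost=0.3, w_latency=0.2):
    """Broker sketch: score each datacenter on normalized energy use,
    service cost, and latency, then assign the user to the best one.
    Lower is better for every attribute, so the score is a penalty."""
    def norm(key):
        hi = max(dc[key] for dc in datacenters)
        return {dc["name"]: dc[key] / hi for dc in datacenters}
    e, c, l = norm("energy"), norm("cost"), norm("latency")
    score = {n: w_energy * e[n] + w_cost * c[n] + w_latency * l[n]
             for n in e}
    return min(score, key=score.get)

dcs = [
    {"name": "dc-east", "energy": 120, "cost": 0.9, "latency": 35},
    {"name": "dc-west", "energy": 90,  "cost": 1.1, "latency": 60},
]
print(rate_datacenters(dcs))
```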

Keywords: Cloud network, Cloud broker, Energy optimizing, Cost saving.

Received June 10, 2015; accepted December 9, 2015

 

Full text  

 

 

Advanced Architecture for Java Universal Message Passing (AA-JUMP)

Adeel-ur-Rehman1 and Naveed Riaz2

1National Centre for Physics (NCP), Pakistan

2SEECS, National University of Science and Technology, Pakistan

Abstract: The Architecture for Java Universal Message Passing (A-JUMP) is a Java-based message passing framework. A-JUMP offers programmers the flexibility to write parallel applications using multiple programming languages. There is also a provision to use various network protocols for message communication. The results for standard benchmarks like ping-pong latency, Embarrassingly Parallel (EP) code execution, and JGF Crypt lead to the conclusion that, for cases where the data size is smaller than 256 KB, the numbers are comparable with some of its predecessor models like MPICH2 and MPJ Express. However, once the packet size exceeds 256 KB, the performance of the A-JUMP model is severely hampered. Taking that peculiar behaviour into account, this paper presents a strategy devised to cope with the performance limitation observed in the base A-JUMP implementation, giving birth to an Advanced A-JUMP (AA-JUMP) methodology while keeping the basic workflow of the original model intact. AA-JUMP aims to improve the performance of A-JUMP while preserving its traits of portability, simplicity, and scalability, which are the key features offered by today's HPC-oriented frameworks. Head-to-head comparisons between the two message passing versions reveal a 40% performance boost, suggesting AA-JUMP is a viable approach for both parallel and distributed computing domains.
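
The abstract does not detail the AA-JUMP strategy itself. One common remedy for a throughput collapse above a fixed packet size is to split large payloads into bounded chunks; the sketch below is purely a hypothetical illustration (the `send` callable and message fields are invented names, not A-JUMP APIs):

```python
CHUNK = 256 * 1024   # the 256 KB threshold reported for base A-JUMP

def send_chunked(send, payload: bytes):
    """Hypothetical remedy sketch: split a large message into <=256 KB
    chunks so each transfer stays inside the regime where the base
    framework performs well. `send` stands in for a point-to-point
    send primitive."""
    total = (len(payload) + CHUNK - 1) // CHUNK
    for i in range(total):
        part = payload[i * CHUNK:(i + 1) * CHUNK]
        send({"seq": i, "of": total, "data": part})

sent = []
send_chunked(sent.append, b"x" * (600 * 1024))
print([(m["seq"], len(m["data"])) for m in sent])   # 3 chunks: 256K, 256K, 88K
```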

Keywords: A-JUMP, Java, Universal Message Passing, MPI, Distributed Computing

Received February 15, 2015; accepted December 21, 2015

 

Full text 

 

 

Performance Analysis of Security Requirements Engineering Framework by Measuring the Vulnerabilities

Salini Prabhakaran1 and Kanmani Selvadurai2

1Department of Computer Science and Engineering, Pondicherry Engineering College, India

2Department of Information Technology, Pondicherry Engineering College, India

Abstract: In developing security-critical web applications, specifying security requirements is important, since 75% to 80% of all attacks happen at the web application layer. We adopted security requirements engineering methods to identify security requirements at the early stages of the software development life cycle so as to minimize vulnerabilities in the later phases. In this paper, we present an evaluation of the Model Oriented Security Requirements Engineering (MOSRE) framework and the Security Requirements Engineering Framework (SREF) by implementing the security requirements identified through each framework while developing the respective web application. We also developed a web application without using any security requirements engineering method, in order to demonstrate the importance of the security requirements engineering phase in the software development life cycle. The developed web applications were scanned for vulnerabilities using a web application scanning tool. The evaluation was done in two phases of the software development life cycle: requirements engineering and testing. From the results, we observed that the number of vulnerabilities detected in the web application developed by adopting the MOSRE framework is lower than in the web applications developed by adopting SREF or no security requirements engineering method at all. Thus, this study leads requirements engineers to use the MOSRE framework to elicit security requirements efficiently and to trace security requirements from the requirements engineering phase to the later phases of the software development life cycle for developing secure web applications.

Keywords: Requirements Engineering, Security Mechanism, Security Requirements, Security Requirements Engineering, Web Applications and Vulnerabilities.

Received December 15, 2014; accepted April 5, 2015

 

Full text 

 


 

A Novel Approach for Face Recognition Using Fused GMDH-Based Networks

El-Sayed El-Alfy1, Zubair Baig2, and Radwan Abdel-Aal1

1College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals, Saudi Arabia

2School of Science and Security Research Institute, Edith Cowan University, Australia

Abstract: This paper explores a novel approach for automatic human recognition from multi-view frontal facial images taken at different poses. The proposed computational model is based on the fusion of Group Method of Data Handling (GMDH) neural networks trained on different subsets of facial features and with different complexities. To demonstrate the effectiveness of this approach, its performance is evaluated and compared, using eigen-decomposition for feature extraction and reduction, with a variety of GMDH-based models. The experimental results show that high recognition rates, close to 98%, can be achieved with very low average false acceptance rates, below 0.12%. Performance is further investigated on different feature set sizes, and it is found that with smaller feature sets (as few as 8 features), the proposed GMDH-based models outperform other classifiers, including those using radial-basis functions and support vector machines. Additionally, the capability of the GMDH algorithm to select the most relevant features during model construction makes it attractive for building much simpler models of polynomial units.
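
A minimal sketch of fusing the label outputs of several networks, each trained on a different feature subset, by majority vote; the paper's actual fusion rule may differ:

```python
from collections import Counter

def fuse_predictions(model_outputs):
    """Fusion sketch: combine the labels predicted by several models by
    majority vote per test sample; ties fall back to the first model
    listed (Counter preserves insertion order)."""
    fused = []
    for votes in zip(*model_outputs):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three networks voting on four face images (labels are subject ids).
net_a = ["s1", "s2", "s3", "s4"]
net_b = ["s1", "s2", "s9", "s4"]
net_c = ["s7", "s2", "s3", "s4"]
print(fuse_predictions([net_a, net_b, net_c]))   # ['s1', 's2', 's3', 's4']
```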

Keywords: Face recognition, abductive machine learning, neural computing, GMDH-based ensemble learning.

Received May 30, 2015; accepted November 29, 2015

 

Full text  

 

 

Complementary Approaches Built as Web Service for Arabic Handwriting OCR Systems via Amazon Elastic MapReduce (EMR) Model

Hassen Hamdi1, Maher Khemakhem2, and Aisha Zaidan1

1Department of Computer Science, Taibah University, Kingdom of Saudi Arabia

2Faculty of Computing and Information Technology, University of King Abdul-Aziz, Kingdom of Saudi Arabia

Abstract: Arabic Optical Character Recognition (OCR) as a web service represents a major challenge for handwritten document recognition. A variety of approaches, methods, algorithms, and techniques have been proposed in order to build powerful Arabic OCR web services. Unfortunately, these methods have not succeeded in this mission for large quantities of Arabic handwritten documents. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate. Designing and implementing these sophisticated complementary approaches and techniques as web services is complex; they require strong computing power to reach an acceptable recognition speed, especially for large document collections. One possible solution to this problem is to benefit from distributed computing architectures such as cloud computing. This paper describes the design and implementation of Arabic Handwriting Recognition as a web service (AHRweb service) based on the complementary K-Nearest Neighbor/Support Vector Machine (K-NN/SVM) approach via the Amazon Elastic MapReduce (EMR) model. The experiments were conducted in a cloud computing environment with a real large-scale handwriting dataset: the Institute for Communications Technology (IFN)/Ecole Nationale d'Ingénieurs de Tunis (ENIT) IFN/ENIT database. J-Sim (Java Simulator) was used as a tool to generate and analyze statistical results. Experimental results show that the Amazon Elastic MapReduce (EMR) model constitutes a very promising framework for enhancing the performance of large-scale AHRweb services.
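
A sketch of one common way K-NN and SVM complement each other, using scikit-learn for brevity: trust K-NN when its neighbour vote is confident, otherwise defer to the SVM. The confidence threshold and hand-off rule are assumptions, and the MapReduce distribution layer is omitted:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

class KnnSvmHybrid:
    """Complementary K-NN/SVM sketch: K-NN answers when its neighbour
    vote share clears a threshold; ambiguous samples go to the SVM."""
    def __init__(self, k=5, threshold=0.8):
        self.knn = KNeighborsClassifier(n_neighbors=k)
        self.svm = SVC()
        self.threshold = threshold

    def fit(self, X, y):
        self.knn.fit(X, y)
        self.svm.fit(X, y)
        return self

    def predict(self, X):
        proba = self.knn.predict_proba(X)          # neighbour vote shares
        knn_labels = self.knn.classes_[proba.argmax(axis=1)]
        svm_labels = self.svm.predict(X)
        confident = proba.max(axis=1) >= self.threshold
        return np.where(confident, knn_labels, svm_labels)

X = np.random.default_rng(0).normal(size=(40, 8))   # stand-in features
y = (X[:, 0] > 0).astype(int)                       # stand-in classes
print(KnnSvmHybrid().fit(X, y).predict(X[:5]))
```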

Keywords: Arabic handwriting, Complementary approaches and techniques, K-NN/SVM, web service, Amazon Elastic MapReduce.

 

Received April 25, 2015; accepted January 3, 2016

 

Full text 

 

 

Arabic Character Extraction and Recognition using Traversing Approach

Abdul Khader Saudagar and Habeeb Mohammed

College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University, Saudi Arabia

Abstract: The intention behind this research is to present original work undertaken on Arabic character extraction and recognition to attain a higher recognition rate. Copious techniques for character and text extraction have been proposed in earlier decades, but very few of them shed light on the Arabic character set. From the literature survey, it was found that a 100% recognition rate has not been attained by earlier implementations. The proposed technique is novel and is based on traversing the characters in a given text, marking their directions, viz. North-South (NS), East-West (EW), NorthEast-SouthWest (NE-SW), and NorthWest-SouthEast (NW-SE), in an array and comparing them with the pre-defined codes of every character in the dataset. The experiments were conducted on Arabic news videos and documents taken from the Arabic Printed Text Image (APTI) database, and the results achieved are very promising, with a recognition rate of 98.1%. The proposed algorithm can replace the existing algorithms used in present Arabic Optical Character Recognition (AOCR) systems.
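
A minimal sketch of the traversal coding idea: walk a stroke's pixel chain, record the run-length-collapsed direction codes, and match them against pre-defined templates. The coding granularity, template table, and glyph names are hypothetical:

```python
# Map a step between consecutive stroke pixels (row, col deltas) to the
# direction labels used in the abstract.
DIRECTIONS = {
    (1, 0): "NS", (-1, 0): "NS",
    (0, 1): "EW", (0, -1): "EW",
    (1, 1): "NW-SE", (-1, -1): "NW-SE",
    (1, -1): "NE-SW", (-1, 1): "NE-SW",
}

def direction_code(path):
    """Traverse a character stroke (list of pixel coordinates) and emit
    the sequence of directions, collapsing repeats."""
    codes = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        d = DIRECTIONS[(r1 - r0, c1 - c0)]
        if not codes or codes[-1] != d:
            codes.append(d)
    return codes

# Hypothetical pre-defined codes for two glyph shapes.
TEMPLATES = {"alif": ["NS"], "hook": ["NS", "EW"]}

def recognize(path):
    code = direction_code(path)
    return next((ch for ch, t in TEMPLATES.items() if t == code), None)

print(recognize([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))   # 'hook'
```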

Keywords: Accuracy, Arabic Optical Character Recognition and Text Extraction.

Received March 14, 2015; accepted August 16, 2015

 

Full text  

 

 

A Multimedia Web Service Matchmaker

Sid Ahmed Djallal Midouni1,2, Youssef Amghar1, and Azeddine Chikh2

1Université de Lyon, CNRS INSA-Lyon, France

2Department of Computer Science, Université Abou Bekr Belkaid-Tlemcen, Algeria

Abstract: The full service approach for composing MaaS services in multimedia data retrieval, which we proposed in a previous work, is based on a four-phase process: description, matching, clustering, and restitution. In this article, we show how MaaS services are matched to meet user needs. Our matching algorithm consists of two steps: (1) the domain matching step is based on calculating similarity degrees between the domain description of MaaS services and user queries; (2) the multimedia matching step compares the multimedia description of MaaS services with user queries. The multimedia description is defined as a SPARQL query over a multimedia ontology. An experiment in the medical domain was used to evaluate the solution. The results indicate that using both domain and multimedia matching considerably improves the performance of multimedia data retrieval systems.
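
A minimal sketch of step (1), computing a similarity degree between a service's domain description and the query with a simple bag-of-words cosine; the paper's actual measure over SAWSDL descriptions is not reproduced, and the threshold is an assumption:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Similarity degree between two bag-of-words descriptions."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def domain_match(services, query, threshold=0.3):
    """Keep MaaS services whose domain description is similar enough to
    the query; step (2), multimedia matching over the SPARQL
    description, would filter this list further."""
    return [s["name"] for s in services
            if cosine_similarity(s["domain"], query) >= threshold]

services = [
    {"name": "mri-images", "domain": "brain mri scan imaging"},
    {"name": "stock-feed", "domain": "financial market quotes"},
]
print(domain_match(services, "brain imaging service"))   # ['mri-images']
```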

Keywords: semantic web services, information retrieval, service description, SAWSDL, service matching.

Received July 27, 2015; accepted September 12, 2015

 

Full text 

 

 
 
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 