May 2013, No. 3

Fast Window Based Stereo Matching for 3D Scene Reconstruction

Mohammad Chowdhury and Mohammad Bhuiyan
Department of Computer Science and Engineering, Jahangirnagar University, Bangladesh


Stereo correspondence matching is a key problem in many applications, such as computer and robot vision, where it determines the Three-Dimensional (3D) depth information of objects that is essential for 3D reconstruction. This paper presents a 3D reconstruction technique built on a fast stereo correspondence matching method that is robust against additive noise. The additive noise is eliminated using a fuzzy filtering technique. Experiments with ground-truth images demonstrate the effectiveness of the proposed algorithms.
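The window-based matching the abstract refers to can be sketched generically as follows. This is an illustrative Sum of Absolute Differences (SAD) implementation, not the authors' exact window cost; the window size and disparity range are arbitrary choices.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=3):
    """For each left-image pixel, slide a win x win window across candidate
    disparities in the right image and keep the shift with the lowest
    Sum of Absolute Differences (SAD)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            best_cost, best_d = float("inf"), 0
            patch_l = left[y - half:y + half + 1,
                           x - half:x + half + 1].astype(int)
            for d in range(min(max_disp, x - half) + 1):
                patch_r = right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1].astype(int)
                cost = int(np.abs(patch_l - patch_r).sum())
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a stereo pair where the right image is the left shifted by d pixels, the recovered disparity map is constant at d in the interior of the image.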

Keywords: Stereo correspondence, disparity, window cost, stereo vision, fuzzy filtering, 3D model.
Received July 11, 2011; accepted December 20, 2011

An Efficient Approach for Effectual Mining of Relational Patterns from Multi-Relational Database

Vimal Dhanasekar1 and Tamilarasi Angamuthu2
1Department of MCA, SAN International Information School, Anna University of Technology, Coimbatore
2Department of MCA, Kongu Engineering College, India


Data mining is a challenging and promising research topic owing to its strong application potential and the broad availability of massive quantities of data in databases. The growing significance of data mining in real-world practice demands ever more sophisticated solutions when the data comprises a huge number of records stored across the various tables of a relational database. One possible solution is multi-relational pattern mining, a form of data mining that operates on data stored in multiple tables. Multi-relational pattern mining is an emerging research area that has received considerable attention among researchers due to its wide range of applications. In the proposed work, we develop an efficient approach for effectual mining of relational patterns from a multi-relational database. Initially, the multi-relational database is represented using a tree-based data structure without changing its relations. A tree pattern mining algorithm is then devised and applied to the constructed data structure to extract the frequent relational patterns. The experimentation is carried out on a customer order database, and the comparative results demonstrate that the proposed approach is effective and efficient in mining relational patterns.

Keywords: Data mining, multi-relational data mining, relational pattern, tree pattern mining, multi-relational database, customer order database.
Received July 11, 2011; accepted December 20, 2011

Modeling Human Dialogue Transactions in the Context of Declarative Knowledge

Igor Chimir1, Ameed Ghazi2, and Waheeb Abu-Dawwas3
1Department of Computer Science, Odessa State Environmental University, Ukraine
2Computer Department, Teachers College, King Saud University, Saudi Arabia
3Department of Management Information Systems, Qassim University, Saudi Arabia


The paper investigates and models dialogue and dialogue transactions. An ontological model of human dialogue interaction, underlying the follow-up reasoning, is obtained from an analysis of human-human dialogues and illustrated by one of Plato's dialogues, Protagoras. Attention is then focused on one type of human-human dialogue, called erotetic dialogue, and on the structure of the erotetic dialogue transaction from the viewpoint of knowledge interchange within the transaction. A spectrum of formal models, oriented towards discovering the inner logical structure of the erotetic transaction, is offered. The distinguishing feature of all the models is their orientation towards language-independent entities representing the declarative knowledge associated with a transaction's elements.

Keywords: Natural dialogue ontology, erotetic dialogue, dialogue transaction, declarative knowledge, language of ternary description.
Received July 11, 2011; accepted December 20, 2011

Developing a GIS-Based MCE Site Selection Tool in ArcGIS Using COM Technology

Khalid Eldrandaly
Associate Professor of Information Systems, College of Computers and Informatics, Zagazig University, Egypt
Site selection is a complex process for owners and analysts. It involves not only technical requirements but also economic, social, environmental, and political demands that may result in conflicting objectives. Site selection is the process of finding locations that meet desired conditions set by the selection criteria. Geographic Information Systems (GIS) and Multi-Criteria Evaluation (MCE) techniques are the two tools commonly employed to solve such problems. However, each suffers from serious shortcomings and cannot be used alone to reach an optimum solution, which poses the challenge of integrating the two. Developing and using GIS-based MCE tools for site selection is a complex process that needs well-trained GIS developers and analysts, who are often not available in most organizations. In this paper, a GIS-based multi-criteria evaluation site selection tool is developed in ArcGIS 9.3 using COM technology to achieve software interoperability. The tool can be used by engineers and planners with different levels of GIS and MCE knowledge to solve site selection problems. A typical case study is presented to demonstrate its application. In addition, the paper presents a comprehensive discussion of the site selection process and its characteristics.

Keywords: Site selection, GIS, MCE, AHP, OWA, AHP-OWA.
Received February 9, 2011; accepted May 24, 2011

Enhancements of a Three-Party Password-Based Authenticated Key Exchange Protocol

Shuhua Wu1,2,3, Kefei Chen1, and Yuefei Zhu3
1Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
2State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing, China
3Department of Network Engineering, Information Science and Technology Institute, Zhengzhou, China
This paper discusses the security of a simple and efficient three-party password-based authenticated key exchange protocol recently proposed by Huang. Our analysis shows that her protocol is still vulnerable to two kinds of attacks: (1) undetectable on-line dictionary attacks, and (2) key-compromise impersonation attacks. We then propose an enhanced protocol that defeats the attacks described and yet remains reasonably efficient.

Keywords: Password-based, authenticated key exchange, three-party, dictionary attack.
Received June 2, 2010; accepted March 1, 2011

Medical Image Segmentation using a Multi-Agent System Approach

Mahsa Chitsaz and Woo Seng
Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
Image segmentation is an invaluable task in many domains, such as quantification of tissue volumes, medical diagnosis, anatomical structure study, and treatment planning. It remains a challenging problem for two reasons. Firstly, most image segmentation solutions are problem-specific. Secondly, medical image segmentation methods generally face restrictions because the objects of interest in medical images have very similar gray levels and textures. The goal of this work is to design a framework that simultaneously extracts several objects of interest from Computed Tomography (CT) images by using prior knowledge. Our method uses the properties of agents in a multi-agent environment. The input image is divided into several sub-images, and each local agent works on one sub-image, marking each of its pixels as belonging to a specific region by means of the given prior knowledge. A moderator agent then checks the outcome of all the agents' work to produce the final segmented image. Experimental results on CT images show a segmentation accuracy of around 91% and a running time of about 7 seconds.
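The divide-and-moderate workflow can be sketched as below. The intensity priors, the row/column splitting scheme, and the threshold-based labelling are illustrative stand-ins for the paper's agents, not the authors' actual prior knowledge.

```python
import numpy as np

# Hypothetical prior knowledge: intensity range for each tissue class.
PRIORS = {1: (0, 80), 2: (81, 160), 3: (161, 255)}

def local_agent(sub_image):
    """A local agent labels every pixel of its sub-image using the
    shared intensity priors."""
    labels = np.zeros(sub_image.shape, dtype=np.int32)
    for cls, (lo, hi) in PRIORS.items():
        labels[(sub_image >= lo) & (sub_image <= hi)] = cls
    return labels

def moderator(image, splits=2):
    """The moderator divides the image into sub-images, hands each to a
    local agent, then stitches the results into the final segmentation."""
    rows = np.array_split(image, splits, axis=0)
    return np.vstack([
        np.hstack([local_agent(s) for s in np.array_split(r, splits, axis=1)])
        for r in rows
    ])
```

Because each agent only reads its own sub-image, the per-agent work is trivially parallelizable, which is the point of the multi-agent decomposition.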

Keywords: Medical image segmentation, agent, multi-agent system.
Received June 9, 2010; accepted March 1, 2011

Submesh Allocation in 2D-Mesh Multicomputers: Partitioning at the Longest Dimension of Requests

Sulieman Bani-Ahmad
Department of Information Technology, Al-Balqa Applied University, Al-Salt, Jordan
Two adaptive non-contiguous allocation strategies for 2D-mesh multicomputers are proposed in this paper, the first based on first-fit and the second on best-fit. That is, for a given request, the proposed first-fit-based approach tries to find a free submesh using the well-known first-fit strategy; if it fails, the request at hand is partitioned into two sub-requests that are independently allocated, again using first-fit. Partitioning is performed gradually at the longest dimension of the parallel request. This partitioning mechanism aims at (i) lifting the condition of contiguity while (ii) maintaining a good level of contiguity. Gradual partitioning of a request produces two sub-requests, one of which is relatively big and as close as possible to square-shaped, thus reducing the communication latency caused by non-contiguity. Using extensive simulations, we evaluated the proposed strategies and compared them with previous contiguous and non-contiguous strategies. The simulation outcomes clearly show that the proposed allocation schemes produce the best Average Response Time (ART) and Average System Utilization (ASU), and also relatively low communication overhead.
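The first-fit-with-partitioning idea can be sketched as follows. The halving rule at the longest dimension and the lack of rollback on partial failure are simplifications for illustration; the paper's gradual partitioning and best-fit variant are not reproduced here.

```python
import numpy as np

def first_fit(grid, w, h):
    """Return the top-left corner of a free w x h submesh in the mesh
    occupancy grid, scanning in row-major order (classic first-fit),
    or None if no contiguous fit exists."""
    H, W = grid.shape
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            if not grid[y:y + h, x:x + w].any():
                return (x, y)
    return None

def allocate(grid, w, h):
    """Try contiguous first-fit; on failure, split the request at its
    longest dimension into two halves and allocate each independently.
    Returns a list of placed (x, y, w, h) pieces, or None on failure
    (rollback of partial placements is omitted in this sketch)."""
    spot = first_fit(grid, w, h)
    if spot is not None:
        x, y = spot
        grid[y:y + h, x:x + w] = 1
        return [(x, y, w, h)]
    if w == h == 1:
        return None  # a single node cannot be split further
    if w >= h:
        parts = [((w + 1) // 2, h), (w - (w + 1) // 2, h)]
    else:
        parts = [(w, (h + 1) // 2), (w, h - (h + 1) // 2)]
    placed = []
    for pw, ph in parts:
        if pw == 0 or ph == 0:
            continue
        sub = allocate(grid, pw, ph)
        if sub is None:
            return None
        placed.extend(sub)
    return placed
```

On a mesh where only isolated rows are free, a 4x2 request that cannot be placed contiguously is recursively partitioned until every piece fits, so the request still succeeds non-contiguously.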

Keywords: Multicomputer, 2D mesh, contiguous allocation, non-contiguous allocation, request partitioning.
Received July 28, 2010; accepted March 1, 2011

Applying Artificial Neural Network and eXtended Classifier System for Network Intrusion Detection

Wafa’ AlSharafat
Prince Hussein Bin Abdullah College of Information Technology, Al Al-Bayt University, Jordan
Due to the increasing incidence of cyber attacks, building effective intrusion detection systems is essential for protecting information systems security, yet it remains an elusive goal and a great challenge. Current Intrusion Detection Systems (IDS) examine all data features to detect intrusion or misuse patterns, although some of the features may be redundant or of low importance to the detection process. The purpose of this study is to identify the important input features for building an IDS with a better Detection Rate (DR). To that end, a two-stage design is proposed. In the first stage, we filter the feature set to find the best combination of features for each type of network attack, implemented using an Artificial Neural Network (ANN). Next, we design an IDS using the eXtended Classifier System (XCS), with an internal modification of the classifier generator to gain a better detection rate. In the experiments, we choose KDD 99 as the dataset for training and evaluating the proposed work. The experimental results show that XCS with these modifications achieves promising performance compared with other intrusion detection systems.

Keywords: Feature selection, genetic algorithms, XCS, KDD 99, and ANN.
Received June 19, 2010; accepted March 1, 2011

Query Dispatching Tool Supporting Fast Access to Data Warehouse

Anmar Aljanabi1, Alaa Alhamami2, and Basim Alhadidi3
 1Computer Sciences Department, University of Technology, Iraq
2Computer Sciences and Informatics College, Amman Arab University, Jordan
3Information Technology Department, Prince Abdullah Bin Ghazi Faculty of Science and Information
Technology, Al-Balqa’ Applied University, Jordan
Data warehousing hastens the process of retrieving the information needed for decision making. The spider-web of connections between end-users and data marts increases traffic, load, and delay in accessing the requested information. In this research, we have developed a query dispatching tool that provides fast, organized access to the information within the data marts and, ultimately, the data warehouse. The dispatching tool takes a query, analyzes it, and decides to which data mart the query should be forwarded. This research is based on Ralph Kimball's methodology. The results show that the dispatching tool reduces the time spent checking the data dictionary on the logical side of the data warehouse to determine the intended data mart, and hence minimizes execution time.
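The dispatching step can be sketched as a data-dictionary lookup over the tables a query touches. The dictionary contents, mart names, and the regex-based table extraction are all illustrative assumptions, not the tool's actual implementation.

```python
import re

# Hypothetical data dictionary: which data mart owns which tables.
DATA_DICTIONARY = {
    "sales_mart": {"orders", "order_items"},
    "customer_mart": {"customers", "addresses"},
}

def dispatch(query):
    """Extract the table names referenced after FROM/JOIN and return the
    data mart(s) that own them, so the query is forwarded directly to the
    relevant mart(s) instead of being broadcast to all of them."""
    tables = set(re.findall(r"\b(?:from|join)\s+(\w+)", query, re.IGNORECASE))
    return sorted(mart for mart, owned in DATA_DICTIONARY.items()
                  if owned & tables)
```

A query touching only customer tables is routed to the customer mart alone, which is where the execution-time saving comes from.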

Keywords: Data warehouse, metadata, query dispatching tool, and execution time.
Received October 12, 2010; accepted May 24, 2011

A Technique for Handling Range and Fuzzy Match Queries on Encrypted Data

Shaukat Ali, Azhar Rauf, and Huma Javed
Department of Computer Science, University of Peshawar, Peshawar, Khyber Pakhtunkhwa, Pakistan
Data is an important asset of today's dynamically operating organizations and their businesses, and it is usually stored in databases. An important issue for IT professionals is to secure such data from unauthorized access and intruders. Many levels of security are used to protect business-centric data, and among these, data encryption is the final layer. Although encryption makes it difficult to breach this level of security, it has the potential disadvantage of performance degradation, particularly for queries that require operations on encrypted data. This work proposes to let users query an encrypted column directly without decrypting it, which improves the performance of the SELECT query. In this technique the query retrieves only those records fulfilling the user's search criteria, and the data is decrypted on the fly. The proposed algorithm handles range, fuzzy match, and exact match queries, and it produces no false positive hits. Experimental findings show that the algorithm is more efficient than the state-of-the-art technique.

Keywords: Database security, encryption, performance.
Received June 27, 2010; accepted March 1, 2011

Semantic Method for Query Translation

Mohd Amin Mohd Yunus, Roziati Zainuddin, and Noorhidawati Abdullah
Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
Cross-Language Information Retrieval (CLIR) produces highly ambiguous results because of polysemy, in which the same word may have different meanings according to the context of the sentence; a semantic approach addresses this problem. This paper presents a semantic technique applied to queries for retrieving more relevant results in CLIR, concentrating on Arabic, Malay, and English query translation (a dictionary-based method) to retrieve documents according to the translated queries. A semantic ontology significantly improves and expands a single query with additional synonyms and related words, allowing the query to retrieve relevant documents across language boundaries. This study therefore investigates the English-Malay-Arabic query translation approach, and vice versa, against keywords and query words based on the totals retrieved and relevant. Keyword and query-word retrieval are evaluated in the experiments in terms of precision and recall. To produce more significant results, the semantic technique is applied to improve the performance of CLIR.

Keywords: Semantic ontology, semantic query, cross language information retrieval, dictionary-based.
Received September 2, 2010; accepted May 24, 2011

A Dynamic Linkage Clustering using KD-Tree

Shadi Abudalfa1 and Mohammad Mikki2
1The University College of Applied Sciences, Palestine
2The Islamic University of Gaza, Palestine
Some clustering algorithms calculate the connectivity of each data point to its cluster based on density reachability. These algorithms can find arbitrarily shaped clusters, but they require parameters to which clustering performance is highly sensitive. We develop a new dynamic linkage clustering algorithm using a kd-tree. The proposed algorithm requires no parameters and avoids the worst-case running-time bound that affects many similar algorithms in the literature. Experimental results are presented to demonstrate the effectiveness of the proposed algorithm, and we compare it with a well-known similar algorithm from the literature. We describe the proposed algorithm and its performance in detail, along with promising avenues for future research.

Keywords: Data clustering, density-based clustering algorithm, KD-tree, dynamic linkage clustering, DBSCAN.
Received February 26, 2011; accepted July 28, 2011

Attack Tree Based Information Security Risk Assessment Method Integrating Enterprise Objectives with Vulnerabilities

Bugra Karabey and Nazife Baykal
Middle East Technical University, Informatics Institute, Inonu Bulvari, 06531, Ankara, Turkey
Quantitative and qualitative approaches exist for analyzing and mitigating information security risks, but their most critical shortcoming is that the outcome mainly addresses the needs and priorities of the technical community rather than of management. Enterprise management needs this information as a decision-making aid for asset allocation and for prioritizing mitigation efforts, so ideally the outcome of an information security risk method should be synchronized with the enterprise objectives in order to act as a useful decision tool for management. In modelling the threat domain, attack trees are frequently utilized; however, attack tree modelling is costly in terms of effort and time and has inherent scalability issues. This article therefore outlines our design-science-research-based work on an information security risk assessment method that addresses these two issues: inclusion of enterprise objectives and model scalability.
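Attack trees of the kind the abstract mentions are conventionally evaluated bottom-up: an OR node takes the attacker's cheapest option, an AND node requires every child and so sums their costs. The dict-based node shape and the costs below are illustrative, not the authors' model.

```python
def tree_cost(node):
    """Evaluate the attacker's minimum cost for an attack tree given as
    nested dicts: leaves carry a 'cost', inner nodes an 'op' of 'OR' or
    'AND' plus 'children'."""
    kind, children = node.get("op"), node.get("children", [])
    if not children:
        return node["cost"]  # leaf: cost of a single exploit step
    child_costs = [tree_cost(c) for c in children]
    return min(child_costs) if kind == "OR" else sum(child_costs)
```

This recursive evaluation is also where the scalability concern shows up: real enterprise trees have many nodes, and every change to the model requires re-walking the whole tree.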

Keywords: Enterprise information security, enterprise modelling, risk assessment, risk assessment method, resource based view, attack trees, risk management.
Received May 4, 2011; accepted July 28, 2011

Implementing a New Approach for Enhancing Performance and Throughput in a Distributed Database

Khaled Maabreh1 and Alaa al Hamami2
1Faculty of Information Technology and Computer Science, Zarqa University, Jordan
2Graduate College of Computer Studies, Amman Arab University for Graduate Studies, Jordan
A distributed database system consists of a number of sites over a network and holds a huge amount of data that is used by a large number of users. The lock manager coordinates the use of database resources among distributed transactions. Because a distributed transaction consists of several participants executing over the sites, all participants must guarantee that any change to the data will be permanent in order to commit the transaction.
Because the number of users is continuously growing and the data must be available at all times, this research applies a new method for reducing the size of lockable entities, allowing several transactions to access the same database row simultaneously while its other attributes remain available to other users. This is achieved by extending the granularity hierarchy tree one more level down, to the attribute level. The experimental results show that attribute-level locking increases throughput and enhances system performance.
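The effect of pushing the lock granularity down to attributes can be sketched with a toy lock manager. The class, its method names, and the (table, row, attribute) key shape are illustrative assumptions, not the paper's implementation; deadlock handling and lock modes are omitted.

```python
import threading

class AttributeLockManager:
    """Sketch of attribute-level locking: instead of locking a whole row,
    each (table, row, attribute) triple gets its own lock, so transactions
    touching different columns of the same row can proceed concurrently."""

    def __init__(self):
        self._guard = threading.Lock()       # protects the lock table itself
        self._locks = {}                     # (table, row, attr) -> owner txn

    def acquire(self, txn, table, row, attr):
        """Grant the lock if the attribute is free or already held by txn."""
        with self._guard:
            key = (table, row, attr)
            owner = self._locks.get(key)
            if owner is None or owner == txn:
                self._locks[key] = txn
                return True
            return False  # conflict: another transaction holds this attribute

    def release_all(self, txn):
        """Release every attribute lock held by txn (e.g., at commit)."""
        with self._guard:
            self._locks = {k: v for k, v in self._locks.items() if v != txn}
```

Two transactions updating different attributes of the same row both succeed immediately, whereas row-level locking would have blocked the second one; this is the source of the throughput gain the abstract reports.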

Keywords: Granularity hierarchy tree, locks, attribute level, concurrency control, data availability.
Received March 16, 2011; accepted July 28, 2011
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.