January 2012, No. 1

Routing for Wireless Mesh Networks with Multiple Constraints Using Fuzzy Logic

Mala Chelliah1, Siddhartha Sankaran1, Shishir Prasad1, Nagamaputhur Gopalan1, Balasubramanian Sivaselvan2
1Department of Computer Science and Engineering, National Institute of Technology, Tiruchirapalli
2IIITDM, Chennai
Abstract: Since wireless mesh networks are ad hoc in nature, routing protocols designed for ad hoc networks, such as AODV, are often applied to them, considering only the shortest route to the destination. Because data transfer in wireless mesh networks is to and from the access point (AP), these protocols lead to congested routes and overloaded APs. To reduce congestion, traffic-balancing routing protocols, which choose routes based on the medium usage along the route, have been used. Routing, however, is a multi-constraint problem. To make routing decisions based on more than one constraint, namely buffer occupancy, node energy, and hop count, and to provide an efficient routing method for wireless mesh networks, a fuzzy multi-constraint AODV routing protocol is proposed in this paper. Simulation results in ns-2 verify that it performs better than single-constraint routing.
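As an illustration of the idea in the abstract, the three constraints can be fuzzified and combined into a single route desirability score. The membership function shapes and the min-style rule combination below are illustrative assumptions, not the paper's tuned fuzzy system:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def route_score(buffer_occupancy, node_energy, hop_count, max_hops=10):
    """Fuzzy desirability of a route: prefer low buffer use, high residual
    energy, and few hops. All inputs normalized to [0, 1]."""
    low_buffer = triangular(buffer_occupancy, -0.5, 0.0, 0.8)
    high_energy = triangular(node_energy, 0.2, 1.0, 1.5)
    few_hops = triangular(hop_count / max_hops, -0.5, 0.0, 1.0)
    # Mamdani-style AND (min) over the three antecedents
    return min(low_buffer, high_energy, few_hops)

# A lightly loaded, short, energy-rich route scores higher than a congested one
good = route_score(buffer_occupancy=0.1, node_energy=0.9, hop_count=2)
bad = route_score(buffer_occupancy=0.7, node_energy=0.3, hop_count=8)
```

Scoring candidate routes this way lets route selection trade off congestion and energy rather than hop count alone.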

Keywords: Mesh networks, multi-constraint routing, traffic balancing, congestion, AODV, and power-aware routing.

Received June 19, 2008; accepted November 25, 2008


An Intelligent Approach of Sniffer Detection

Abdul Nasir Khan, Kalim Qureshi, and Sumair Khan
Department of Computer Science, COMSATS Abbottabad, Pakistan

Abstract: ARP cache poisoning and putting a host's Network Interface Card (NIC) in promiscuous mode are two forms of sniffer attack. The ARP cache poisoning attack is effective in an environment that is not broadcast in nature (such as a switched LAN), while the promiscuous-mode attack is effective in an environment that is broadcast in nature (such as hub, bus, or access-point LANs). Sniffing is a malicious activity performed by a network user that puts network security at risk, so sniffer detection is essential to maintaining network security. Sniffer detection techniques can be divided into two main categories: techniques in the first category detect a sniffing host that runs its NIC in promiscuous mode, while techniques in the second category detect a sniffing host that uses ARP cache poisoning. The network configuration is hidden from users, who have no information about the nature of the network. A user may therefore invoke a sniffer detection technique that is not effective in that environment, which may result in sharing private and confidential information with malicious users. In this paper we design an intelligent invocation module that checks the nature of the environment automatically and invokes the appropriate sniffer detection technique for it. With the help of this invocation module, it is possible to detect passive as well as active sniffing hosts in both environments.

Keywords: Network security, sniffer, ARP cache poisoning, and IP packet routing.

Received January 7, 2009; accepted March 9, 2009


Least Recently Plus Five Least Frequently Replacement Policy (LR+5LF)

Adwan AbdelFattah1 and Aiman Abu Samra2
1Computer Science Department, The Arab American University of Jenin, Palestine
2Computer Engineering Department, The Islamic University of Gaza, Palestine
Abstract: In this paper we present a new block replacement policy, with an efficient algorithm for combining two important policies, Least Recently Used (LRU) and Least Frequently Used (LFU). The implementation of the proposed policy is simple and requires only limited calculation to determine the victim block. We propose our own models to implement the LRU and LFU policies. The new policy gives each block in the cache two weighting values corresponding to the LRU and LFU policies, and a simple algorithm is then used to compute an overall value for each block. A comprehensive comparison is made between our policy and the LRU, First In First Out (FIFO), V-WAY, and Combined LRU and LFU (CRF) policies. Experimental results show that the LR+5LF replacement policy significantly reduces the number of cache misses. We modified the SimpleScalar simulator version 3 under Linux Ubuntu 9.04 and used the SPEC CPU2000 benchmark to simulate the policy. The simulation results show that giving a higher weighting to the LFU component gives this policy the best performance characteristics among the compared policies. Substantial improvement in miss rate was achieved on the level-1 instruction cache and the level-2 cache.
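A minimal sketch of the victim-selection idea, assuming a recency rank for the LRU weight and a raw access count for the LFU weight, with the LFU term multiplied by five as the policy's name suggests (the paper's exact weighting model may differ):

```python
def choose_victim(blocks):
    """blocks: list of dicts with 'last_used' (logical access time) and
    'freq' (hit count). Each block gets an LRU weight (its recency rank,
    0 = least recently used) and an LFU weight (its access count); the
    LFU weight is scaled by 5, and the block with the lowest combined
    value is chosen for eviction."""
    by_recency = sorted(range(len(blocks)), key=lambda i: blocks[i]['last_used'])
    lru_weight = {idx: rank for rank, idx in enumerate(by_recency)}
    combined = [lru_weight[i] + 5 * blocks[i]['freq'] for i in range(len(blocks))]
    return combined.index(min(combined))  # index of the victim block

blocks = [
    {'last_used': 10, 'freq': 8},   # hot block: recent and frequent
    {'last_used': 2,  'freq': 1},   # cold block: old and rarely used
    {'last_used': 9,  'freq': 2},
]
victim = choose_victim(blocks)      # the cold block is evicted
```

Because both weights are cheap to maintain per block, the combined value can be computed with the limited calculation the abstract describes.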

Keywords: Cache memory, replacement policy, LRU, LFU, and miss rate.

Received January 29, 2010; accepted September 20, 2010


Approximating I/O Data Using Wavelet Neural Networks: Control the Position of Mother Wavelet

Mohammed Awad
Faculty of Engineering and Information Technology, Arab American University, Palestine
Abstract: In this paper, we deal with the problem of function approximation from a given set of input/output data. The problem consists of analyzing training examples so that we can predict the output of the model for new inputs. We present a new method for approximating functions of I/O data using Wavelet Neural Networks (WNN). The method is based on an efficient procedure for optimizing the position of a single function, called the mother wavelet, of the WNN; it uses the objective output of the WNN to move the position of the mother wavelet. The method calculates the error committed in each mother wavelet's area using the real output of the WNN, trying to concentrate more mother wavelets in those input regions where the error is larger, thus attempting to homogenize the contribution of each mother wavelet to the error. This method improves the performance of the resulting approximation system compared with other models derived from traditional algorithms.

Keywords: Wavelet neural networks, function approximation, and mother wavelet position control.

Received April 4, 2009; accepted January 3, 2010


A Framework for Distributed Pattern Matching Based on Multithreading

Najib Kofahi and Ahmed Abusalama
Department of Computer Science, Yarmouk University, Jordan
Abstract: Despite the dramatic evolution in high-performance computing, we still need new efficient algorithms to speed up the search process. In this paper we present a framework for a data-distributed, multithreaded string matching approach in a homogeneous distributed environment. The main idea of this approach is to have multiple agents search the text concurrently, each from a different position; by searching the text from different positions, the required pattern can be found more quickly than by searching from a single position. Concurrent search can be achieved by two techniques. The first uses multithreading on a single processor, where each thread is responsible for searching one part of the text; the concurrency of this technique is based on time sharing, so it provides an illusion of concurrency rather than true concurrency. The second uses a multiprocessor machine or distributed processors, where all processors search the text truly concurrently. Our approach combines the two techniques into a hybrid that takes advantage of both. The proposed approach handles both exact string matching and approximate string matching with k mismatches. Experimental results demonstrate that this approach is an efficient solution to the problem on a homogeneous clustered system.
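The multithreaded half of the hybrid scheme can be sketched as follows: the starting positions of the text are split into ranges, one per thread, and each thread scans its range for matches with at most k mismatches. The chunking strategy and thread count are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def count_mismatches(text, pattern, start):
    """Number of positions where pattern disagrees with text[start:start+m]."""
    return sum(1 for j, ch in enumerate(pattern) if text[start + j] != ch)

def search_chunk(text, pattern, lo, hi, k):
    """Scan window starts in [lo, hi); report starts with <= k mismatches."""
    m = len(pattern)
    return [i for i in range(lo, min(hi, len(text) - m + 1))
            if count_mismatches(text, pattern, i) <= k]

def parallel_search(text, pattern, k=0, workers=4):
    """Data-distributed search: starting positions are partitioned into
    ranges, each handled by one thread."""
    n = len(text)
    step = max(1, n // workers)
    ranges = [(lo, lo + step) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: search_chunk(text, pattern, r[0], r[1], k),
                         ranges)
    return sorted(i for part in parts for i in part)

hits = parallel_search("abracadabra", "abra", k=0)    # exact matches at 0 and 7
fuzzy = parallel_search("abracadabra", "abrx", k=1)   # one mismatch allowed
```

In the distributed half of the hybrid, the same per-range scan would run on separate processors instead of threads, with the ranges shipped to worker nodes.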

Keywords: Pattern matching, online search algorithms, multithreading, concurrency, JavaSpaces technology, and distributed processing.

Received April 7, 2009; accepted November 5, 2009


Prioritized Heterogeneous Traffic-Oriented Congestion Control Protocol for WSNs

Muhammad Monowar, Obaidur Rahman, AlSakib Pathan, and Choong Hong
 Department of Computer Engineering, Kyung Hee University, South Korea
Abstract: Due to the availability of multiple sensing units on a single radio board of modern sensor motes, some sensor networks need to handle heterogeneous traffic within the same application. This diverse traffic can have different priorities in terms of transmission rate, required bandwidth, packet loss, etc. Because of the multi-hop transmission characteristic of this prioritized heterogeneous traffic, congestion is very common and, unless handled effectively, can thwart the application objectives. To address this challenge, in this paper we propose a Prioritized Heterogeneous Traffic-oriented Congestion Control Protocol (PHTCCP), which performs hop-by-hop rate adjustment to control congestion and ensure an efficient rate for the prioritized diverse traffic. The protocol could also be applied to healthcare infrastructure. We exploit a cross-layer approach to perform congestion control. Our protocol uses intra-queue and inter-queue priorities along with weighted fair queuing to ensure feasible transmission rates for heterogeneous data, and it guarantees efficient link utilization through dynamic transmission rate adjustment. We present detailed analysis and simulation results, with a description of our protocol, to demonstrate its effectiveness in handling prioritized heterogeneous traffic in Wireless Sensor Networks (WSNs).
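The inter-queue priority idea can be sketched as a weighted round-robin over per-class queues, a common approximation of weighted fair queuing. The traffic class names and weights below are illustrative, not PHTCCP's actual parameters:

```python
from collections import deque

def weighted_fair_schedule(queues, weights, budget):
    """Drain per-priority queues in a weighted round-robin. 'queues' maps a
    traffic class to a deque of packets, 'weights' to the number of packets
    that class may send per round; 'budget' is the total link capacity in
    packets. Returns the transmission order."""
    sent = []
    while budget > 0 and any(queues.values()):
        for cls, w in weights.items():
            for _ in range(min(w, budget)):
                if queues[cls]:
                    sent.append(queues[cls].popleft())
                    budget -= 1
            if budget == 0:
                break
    return sent

queues = {
    'critical': deque(['c1', 'c2', 'c3']),
    'periodic': deque(['p1', 'p2', 'p3']),
}
# Per round, critical traffic is served twice for each periodic packet
order = weighted_fair_schedule(queues, {'critical': 2, 'periodic': 1}, budget=6)
```

In the protocol itself the per-hop rate adjustment would change the budget dynamically based on downstream congestion feedback.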

Keywords: Heterogeneous, congestion, Inter-queue, intra-queue, scheduler, and sensor.

Received July 27, 2009; accepted May 20, 2010


Multiviews Reconstruction for Prosthetic Design

Nasrul Mahmood1, Camallil Omar2, and Tardi Tjahjadi3
1,2Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Malaysia
3School of Engineering, University of Warwick, United Kingdom
Abstract: Existing methods that use a fringe projection technique for prosthetic design produce good results for the trunk and lower limbs; however, the devices used for this purpose are expensive. This paper investigates the use of an inexpensive passive method involving 3D surface reconstruction from video images taken at multiple views. A method that focuses on fitting the reference model of an object to the target data is presented. For an upper dummy limb, the fit of the model to the data shows satisfactory results. The results of 15 measurements of different lengths on the reconstructed and the actual dummy limb are highly correlated. The methodology developed is shown to be useful for prosthetic designers as an alternative to taking a manual impression during design.

Keywords: 3D surface reconstruction, orthotics, and prosthetics.

Received April 7, 2009; accepted March 9, 2010


Design Mini-Operating System for Mobile Phone

Dhuha Albazaz
Computer Sciences Department, University of Mosul, Iraq
Abstract: Due to the development witnessed in the field of mobile phones and their operating systems, together with the increase in the number of users and the many businesses that rely on them, a large number of programmers have started to develop special operating systems for these phones and to build applications that meet users' demands and support a great number of businesses. The proposed work designs a mini operating system for managing some special features of the mobile phone. The operating system designed in this work is based on multitasking and multithreading, mixing preemptive and cooperative modes. The functions and features chosen were those related to message management for sending and receiving SMS. A photo-album application is also included for managing and displaying images of different formats stored in the mobile memory, along with a contacts application for displaying names and phone numbers. This system can be considered a starting point for establishing an integrated operating system for mobile phones. As the internal memory of a mobile phone is small, J2ME, a language with a small output size, has been used to program this operating system. J2ME relies on a virtual machine, an implementation of the KVM, in its operation. The language is characterized by its multiple channels and is considered appropriate for all low-memory sets.

Keywords: Mobile, smartphone, operating system, multithreading, and pre-emptive.

Received August 18, 2009; accepted March 9, 2010


An Investigation of Design Level Class Cohesion Metrics

Kuljit Kaur and Hardeep Singh
Department of Computer Science and Engineering, Guru Nanak Dev University, Amritsar, India
Abstract: Design-level class cohesion metrics are based on the assumption that if all the methods of a class have access to similar parameter types, then they all process closely related information. A class with a large number of parameter types common to its methods is more cohesive than a class with fewer parameter types common to its methods. In this paper we review design-level class cohesion metrics with a special focus on metrics that use the similarity of the parameter types of a class's methods as the basis of its cohesiveness. Three metrics fall in this category: Cohesion Among Methods of a Class (CAMC), Normalized Hamming Distance (NHD), and Scaled NHD (SNHD). Keeping in mind the anomalies in the definitions of the existing metrics, a variant of the existing metrics, named NHD Modified (NHDM), is introduced. A major point of difference is that the NHD metric counts a disagreement only when class methods, taken as pairs, disagree on a parameter type that one method uses but the other does not; it ignores the case in which both methods of a pair do not use a parameter type. NHD indirectly counts that case as an agreement, whereas NHDM considers it a disagreement. An automated metric collection tool is used to collect the metrics data from an open-source Java software program containing 884 classes. The metrics data are then subjected to statistical analysis. The NHDM metric shows the greatest variation in data values in comparison to the other metrics in the group. NHDM is strongly correlated with CAMC. Unlike previous studies, no significant correlation is found between CAMC and NHD.
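The agreement-counting distinction between NHD and NHDM can be made concrete on the usual method-by-parameter-type binary matrix. The sketch below averages pairwise agreement fractions; it illustrates the counting rule from the abstract rather than reproducing the paper's exact normalization:

```python
from itertools import combinations

def nhd(matrix, strict=False):
    """matrix[i][j] = 1 if method i uses parameter type j. The metric
    averages, over all method pairs, the fraction of parameter types the
    pair agrees on. With strict=False (NHD-style), both-use and both-don't-
    use count as agreement; with strict=True (the NHDM variant described in
    the abstract), shared non-use counts as a disagreement, so only shared
    use is an agreement."""
    k = len(matrix)
    n_types = len(matrix[0])
    if k < 2:
        return 1.0
    total = 0
    for a, b in combinations(matrix, 2):
        if strict:
            total += sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
        else:
            total += sum(1 for x, y in zip(a, b) if x == y)
    pairs = k * (k - 1) // 2
    return total / (pairs * n_types)

# Three methods over four parameter types
m = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
]
loose = nhd(m)               # shared non-use inflates apparent cohesion
tight = nhd(m, strict=True)  # NHDM-style: only shared use counts
```

On this matrix the NHD-style score is 0.5 while the strict score is far lower, showing how shared non-use alone can make a loosely related class look cohesive.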

Keywords: Design metrics, class cohesion metrics, product quality, cohesion among methods of a class, normalized hamming distance, scaled NHD, and NHD modified.

Received September 10, 2009; accepted March 9, 2010


A Robust Segmentation Approach for Noisy Medical Images Using Fuzzy Clustering With Spatial Probability

Zulaikha Beevi1 and Mohamed Sathik2
1Assistant Professor, Department of IT, National College of Engineering, Tirunelveli, Tamilnadu, India
2Associate Professor, Department of Computer Science, Sadakathullah Appa College, Tirunelveli-11
Abstract: Image segmentation plays a major role in medical imaging applications. Over the last decades, developing robust and efficient algorithms for medical image segmentation has been a demanding area of growing research interest. The renowned unsupervised clustering method, the Fuzzy C-Means (FCM) algorithm, is extensively used in medical image segmentation. Despite its pervasive use, conventional FCM is highly sensitive to noise because it segments images on the basis of intensity values alone. In this paper, an effective approach for the segmentation of noisy medical images is presented. The proposed approach utilizes a histogram-based Fuzzy C-Means clustering algorithm for the segmentation of medical images. To improve robustness against noise, the spatial probability of the neighboring pixels is integrated into the objective function of FCM. To further increase robustness, the noisy medical images are denoised with an effective denoising algorithm prior to segmentation. A comparative analysis is made between conventional FCM and the proposed approach. The experimental results show that the proposed approach attains reliable segmentation accuracy regardless of noise level, and that it is more efficient and robust against noise than conventional FCM.
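The effect of a spatial term can be sketched on a 1-D intensity signal: standard FCM memberships are computed from intensity alone, then reweighted by the average membership of each pixel's neighbors. The neighborhood-averaging rule below is a simple stand-in for the paper's spatial-probability term, not its actual objective function:

```python
def fcm_memberships(values, centers, m=2.0):
    """Standard FCM membership update: u_ic = 1 / sum_j (d_ic/d_jc)^(2/(m-1))."""
    u = []
    for x in values:
        dists = [abs(x - c) + 1e-9 for c in centers]  # avoid divide-by-zero
        u.append([1.0 / sum((dc / dj) ** (2.0 / (m - 1)) for dj in dists)
                  for dc in dists])
    return u

def spatial_smooth(u, window=1):
    """Reweight each pixel's memberships by the mean membership of its
    neighborhood, then renormalize: an isolated noisy pixel is pulled
    toward the cluster its neighbors belong to."""
    n, c = len(u), len(u[0])
    out = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        probs = [sum(u[j][k] for j in range(lo, hi)) / (hi - lo) for k in range(c)]
        raw = [u[i][k] * probs[k] for k in range(c)]
        s = sum(raw)
        out.append([r / s for r in raw])
    return out

# A 1-D "image": dark region, one noisy bright pixel, then a bright region
pixels = [10, 12, 11, 200, 14, 210, 205, 208]
u = fcm_memberships(pixels, centers=[12, 205])
u_spatial = spatial_smooth(u)   # the noisy pixel's dark membership increases
```

In the full 2-D algorithm the neighborhood would be a window around each pixel, and the reweighted memberships would feed back into the center updates.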

Keywords: Image segmentation, medical images, Magnetic Resonance Imaging (MRI), clustering, FCM, histogram, membership function, spatial probability, denoising, Principal Component Analysis (PCA), and Local Pixel Grouping (LPG).

Received December 21, 2009; accepted May 20, 2010


Arabic Speaker-Independent Continuous Automatic Speech Recognition Based on a Phonetically Rich and Balanced Speech Corpus

Mohammad Abushariah1,2, Raja Ainon1, Roziati Zainuddin1, Moustafa Elshafei3, and Othman Khalifa4
1Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
2King Abdullah II School for Information Technology, University of Jordan, Jordan
3Department of Systems Engineering, King Fahd University of Petroleum and Minerals, Saudi Arabia
4Faculty of Engineering, International Islamic University Malaysia, Malaysia
Abstract: This paper proposes an efficient and effective framework for the design and development of a speaker-independent continuous automatic Arabic speech recognition system based on a phonetically rich and balanced speech corpus. The speech corpus contains a total of 415 sentences recorded by 40 (20 male and 20 female) Arabic native speakers from 11 different Arab countries representing the three major regions (Levant, Gulf, and Africa) of the Arab world. The proposed Arabic speech recognition system is based on the Carnegie Mellon University (CMU) Sphinx tools; the Cambridge HTK tools were also used at some testing stages. The speech engine uses 3-emitting-state Hidden Markov Models (HMM) for tri-phone-based acoustic models. Based on experimental analysis of about 7 hours of training speech data, the best acoustic model uses a continuous observation probability model with 16 Gaussian mixture distributions, with the state distributions tied to 500 senones. The language model contains both bi-grams and tri-grams. For similar speakers but different sentences, the system obtained word recognition accuracies of 92.67% and 93.88% and Word Error Rates (WER) of 11.27% and 10.07%, with and without diacritical marks respectively. For different speakers with similar sentences, the system obtained word recognition accuracies of 95.92% and 96.29% and WERs of 5.78% and 5.45%, with and without diacritical marks respectively. For different speakers and different sentences, the system obtained word recognition accuracies of 89.08% and 90.23% and WERs of 15.59% and 14.44%, with and without diacritical marks respectively.

Keywords: Arabic automatic speech recognition, Arabic speech corpus, phonetically rich and balanced, acoustic model, and statistical language model.

Received December 22, 2009; accepted May 20, 2010


CFS: A New Dynamic Replication Strategy for Data Grids

Feras Hanandeh1, Mutaz Khazaaleh2, Hamidah Ibrahim3, and Rohaya Latip3
1Prince Al- Hussein bin Abdullah II Faculty of Information Technology, Hashemite University, Jordan
2Irbid College, Al-Balqa Applied University, Jordan
3Faculty of Computer Science and Information Technology, University Putra Malaysia, Malaysia
Abstract: Data grids are currently proposed solutions to large-scale data management problems, including efficient file transfer and replication. Large amounts of data and the world-wide distribution of data stores contribute to the complexity of the data management challenge. Recent architecture proposals and prototypes deal with dynamic replication strategies for a high-performance data grid. This paper describes a new dynamic replication strategy called Constrained Fast Spread (CFS), which aims to alleviate the main problems encountered in current replication strategies, such as neglect of the storage capacity of the nodes. The CFS strategy enhances the Fast Spread strategy by considering the feasibility of replicating the requested replica on each node in the network.

Keywords: Grid computing, dynamic replication strategies.

Received March 1, 2010; accepted May 20, 2010



An Automated Real-Time People Tracking System Based on KLT Features Detection

Nijad Al-Najdawi1, Sara Tedmori2, Eran Edirisinghe3 and Helmut Bez3
1Information Technology Department, Al-Balqa Applied University, Jordan
2Department of Computer Science, Princess Sumaya University for Technology, Jordan
3Department of Computer Science, Loughborough University, UK
Abstract: The advancement of technology allows video acquisition devices to achieve better performance, increasing the number of applications that can effectively utilize digital video. Compared to still images, video sequences provide more information about how objects and scenarios change over time. Tracking humans is of interest for a variety of applications, including surveillance, activity monitoring, and gait analysis. Many efficient object tracking algorithms have been proposed in the literature; however, some of these algorithms are semi-automatic, requiring human intervention, and most of the fully automated ones are not applicable to real-time applications. This paper presents a low-cost automatic object tracking algorithm suitable for use in real-time video-based systems. The novelty of the proposed system is that it uses a simplified version of the Kanade-Lucas-Tomasi (KLT) technique to detect features of both continuous and discontinuous nature. As discontinuous feature selection is subject to noise and would result in non-optimal feature-based object tracking, the authors propose the use of a Kalman filter to seek optimal estimates in tracking. The integrated tracking system is capable of handling shadows and is based on a dynamic background subtraction strategy that minimises errors and quickly adapts to scene changes. Experimental results demonstrate the system's capability to accurately track objects in real-time applications where scenes are subject to noise, particularly resulting from occlusions and sudden illumination variations.
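The smoothing role the Kalman filter plays over noisy feature positions can be sketched with a 1-D constant-velocity filter. The process and measurement noise values are illustrative assumptions, not the paper's parameters, and a real tracker would run one such filter per coordinate of each tracked feature:

```python
def kalman_track(measurements, q=1e-3, r=4.0):
    """1-D constant-velocity Kalman filter over noisy feature positions.
    State is (position, velocity); q is process noise, r is measurement
    noise. Returns the filtered position estimates."""
    x, v = measurements[0], 0.0
    # 2x2 state covariance stored as four scalars
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
    out = []
    for z in measurements:
        # predict step (dt = 1): position advances by velocity, covariance grows
        x += v
        p00, p01, p10, p11 = (p00 + p01 + p10 + p11 + q,
                              p01 + p11,
                              p10 + p11,
                              p11 + q)
        # update step: blend the prediction with the measured position z
        innovation = z - x
        s = p00 + r                      # innovation variance
        k0, k1 = p00 / s, p10 / s        # Kalman gains for position, velocity
        x += k0 * innovation
        v += k1 * innovation
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
        out.append(x)
    return out

# Noisy observations of a feature moving roughly 1 px per frame
smoothed = kalman_track([0.5, 0.8, 2.3, 2.9, 4.4, 4.8])
```

The filter's predicted position also gives the tracker a search window for the next frame, which is what makes noisy discontinuous features usable.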

Keywords: Object tracking, Kalman filter, feature selection, and KLT.

Received March 13, 2010; accepted May 20, 2010

Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.