Tuesday, 17 June 2008 00:42

Parallelization of the Resampling Image Filter

Mourad Mahboub1 and Djamal Lachachi2

1Sciences Faculty, Abou Bakr Belkaid University, Algeria

2 Engineering Sciences Faculty, Abou Bakr Belkaid University, Algeria

 

Abstract: When resampling a digital image with uniform cubic B-splines, each output pixel is computed by applying a filter to 16 neighboring pixels of the original image, or more precisely of an auxiliary matrix C. Matrix C is computed in a time proportional to the number of pixels of the image to be resampled; that time is substantially smaller than the computational time of the filtering part. In this paper, an adapted Cholesky factorization is presented; it allows the computation of the matrix C and of the resampling filter over the 16 neighboring pixels, as well as a parallel computation of the filter. This parallel approach reduces the overall computational time, partly through better memory management.

Keywords: Image resampling, uniform cubic B-spline, interpolation, filter of 16 pixels, speedup, efficiency.

Received March 3, 2006; accepted June 2, 2006
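The 16-pixel filtering step the abstract refers to can be sketched as follows (an illustrative Python sketch, not the paper's implementation; the coefficient matrix C is assumed to have been computed already by the adapted Cholesky factorization, which is not shown):

```python
import numpy as np

def bspline3(t):
    """Uniform cubic B-spline basis evaluated at t (support [-2, 2])."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t**2 + 3 * t**3) / 6
    if t < 2:
        return (2 - t)**3 / 6
    return 0.0

def resample_pixel(C, x, y):
    """One output pixel: filter the 4x4 block of coefficients of C
    surrounding the (generally non-integer) sample point (x, y)."""
    ix, iy = int(np.floor(x)), int(np.floor(y))
    val = 0.0
    for j in range(-1, 3):          # 4 rows of neighbours
        for i in range(-1, 3):      # 4 columns of neighbours
            val += C[iy + j, ix + i] * bspline3(x - (ix + i)) * bspline3(y - (iy + j))
    return val
```

Since the cubic B-spline basis functions sum to one at every point, a constant matrix C reproduces the same constant value at any sample position, which is a quick sanity check on the filter.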

 
Tuesday, 17 June 2008 00:40

Experimenting N-Grams in Text Categorization

Abdellatif Rahmoun and Zakaria Elberrichi

Faculty of Computer and Information Technology, University of King Faisal, KSA

Abstract: This paper deals with automatic supervised classification of documents. The suggested approach is based on a vector representation of the documents centred not on words but on character n-grams for varying n. The effects of this method are examined in several experiments using the multivariate chi-square to reduce dimensionality, the cosine and Kullback-Leibler distances, and two benchmark corpora, the Reuters-21578 newswire articles and the 20 Newsgroups data, for evaluation. The evaluation was done using the macro-averaged F1 function. The results show the effectiveness of this approach compared to the bag-of-words and stem representations.

Keywords: Text categorization, n-grams, multivariate chi-square, cosine measure, Reuters-21578, 20 Newsgroups.

Received April 5, 2006; accepted June 1, 2006
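As a minimal illustration of the representation the abstract describes (a toy sketch, not the authors' system; the chi-square dimensionality reduction is omitted), a character n-gram profile and the cosine measure can be written as:

```python
from collections import Counter
import math

def char_ngrams(text, n):
    """Character n-gram frequency profile of a document."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(p[g] * q[g] for g in p if g in q)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0
```

A document is then classified, for instance, by comparing its profile against per-category centroid profiles and picking the closest one.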

 
Tuesday, 17 June 2008 00:38

Fingerprint Recognition Using Zernike Moments

Hasan Abdel Qader, Abdul Rahman Ramli, and Syed Al-Haddad

Faculty of Computer Engineering, University Putra Malaysia, Malaysia

 

Abstract: In this paper, we present a fingerprint matching approach based on localizing the matching regions in fingerprint images. The location of such a Region Of Interest (ROI) is determined using only information related to core points, based on novel feature vectors extracted for each fingerprint image by Zernike Moment Invariants (ZMI) as the shape descriptor. Zernike moments are selected as the feature extractor because of their robustness to image noise, their geometrical invariance and their orthogonality. These features are used to identify corresponding ROIs between two fingerprint impressions by computing the Euclidean distance between feature vectors. The matching is invariant under translation, rotation and scaling, and the experimental results obtained on the FVC2002 DB1 database confirm that Zernike moments are able to match fingerprint images with high accuracy.

Keywords: Fingerprint matching, Region Of Interest, ZMI, feature extractor.

Received April 5, 2006; accepted May 31, 2006
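A sketch of the Zernike moment magnitude used as a rotation-invariant feature (illustrative only; the paper's exact ROI extraction and normalization steps are not shown):

```python
import math
import numpy as np

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm(rho), defined for n - |m| even."""
    m = abs(m)
    return sum(
        (-1) ** s * math.factorial(n - s)
        / (math.factorial(s)
           * math.factorial((n + m) // 2 - s)
           * math.factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

def zernike_moment(img, n, m):
    """|Z_nm| of a square image mapped onto the unit disk; the magnitude
    is rotation invariant, which is what makes it useful for matching."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (2 * xs - (w - 1)) / (w - 1)
    y = (2 * ys - (h - 1)) / (h - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                     # keep samples inside the unit disk
    R = np.vectorize(lambda r: radial_poly(r, n, m))(rho)
    Z = (n + 1) / math.pi * np.sum(img[mask] * R[mask] * np.exp(-1j * m * theta[mask]))
    return abs(Z)
```

Two ROIs are then compared by the Euclidean distance between their vectors of such magnitudes, as the abstract describes.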

 
Tuesday, 17 June 2008 00:37

An Intelligent MCDM Approach for Selecting the Suitable Expert System Building Tool

Khalid Eldrandaly

Information Systems Department, Zagazig University, Egypt

 

Abstract: Expert Systems (ES), a promising branch of Artificial Intelligence (AI), have achieved considerable success in recent years. This area of AI has concentrated on the construction of high-performance programs in specialized professional domains. Building a new expert system is a major investment, and choosing the right expert system building tool or shell is critical to the success or failure of that investment. The selection of a suitable tool requires the consideration of a comprehensive set of factors and the balancing of multiple objectives in determining the suitability of a particular tool for building a defined expert system application. Because of the complexity of the problem, a number of tools must be deployed to arrive at the proper solution. A new decision-making approach is presented in which Expert Systems and Multi-Criteria Decision Making (MCDM) techniques are integrated systematically to solve the expert system building tool selection problem. To implement the proposed approach, a prototype system was developed in which ES and an MCDM method, the Analytic Hierarchy Process (AHP), were successfully integrated using Component Object Model (COM) technology to achieve software interoperability among the system's components. A typical example is also presented to demonstrate the application of the prototype system.

Keywords: Expert systems, MCDM, AHP, ES building tools selection, ES shells.

Received March 28, 2006; accepted July 23, 2006
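The AHP step of such an approach can be illustrated as follows (a hypothetical toy example; the matrix entries and the idea of comparing three unnamed tools are invented for illustration, not taken from the paper):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector of a reciprocal pairwise-comparison matrix,
    using the geometric-mean approximation of the principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return gm / gm.sum()                             # normalize to sum to 1

# Hypothetical judgments comparing three candidate ES tools on one criterion
# (Saaty's 1-9 scale; entry [i][j] says how much tool i is preferred to tool j).
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_priorities(A)   # relative suitability weights of the three tools
```

The tool with the largest weight is the recommended choice for that criterion; a full AHP run repeats this per criterion and aggregates.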

 
Monday, 16 June 2008 07:18

Malaysian Vehicle License Plate Recognition

Othman Khalifa, Sheroz Khan, Rafiqul Islam, and Ahmad Suleiman

Kulliyyah of Engineering, International Islamic University Malaysia, Malaysia

 

Abstract: Vehicle license plate recognition is an image-processing technology used to identify vehicles by their license plates. This technology can be used in various security and traffic applications, such as finding stolen cars, controlling access to car parks and gathering traffic flow statistics. In this paper, an approach to license plate localization and recognition is presented. A method is proposed to recognize license plates under varied environmental conditions, with no assumptions about the orientation of the plate or its distance from the camera. To solve the license plate localization problem, a simple texture-based approach relying on edge information is used. Character segmentation is performed by connected components analysis on the license plate image, and a simple multi-layer perceptron neural network is used to recognize the characters. Simulation results show the method to be efficient for real-time plate recognition.

Keywords: LPR, license plate, license plate recognition, OCR.

Received March 14, 2006; accepted July 13, 2006
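The character segmentation step rests on connected components analysis; a minimal 4-connected labeling sketch (illustrative only, not the paper's code) is:

```python
def connected_components(binary):
    """4-connected component labeling of a binary image (list of lists of
    0/1), as used to isolate candidate character blobs on a plate image.
    Returns the component count and a same-shaped label map."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                current += 1                     # start a new component
                stack = [(sy, sx)]
                labels[sy][sx] = current
                while stack:                     # flood-fill its pixels
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return current, labels
```

Each labeled blob is then cropped and passed to the recognizer (a multi-layer perceptron in the paper's case).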

 
Monday, 16 June 2008 07:16

Automated Student’s Courses Registration Using Computer-Telephony Integration

Maged Fahmy

Computer Department, King Faisal University, Saudi Arabia

 

Abstract: This research project aims to introduce automated student course registration using computer-telephony integration. The number of students joining both undergraduate and graduate studies is increasing rapidly in most universities. Manual registration crowds a huge number of students into the registration halls, and registration employees are overloaded. Online registration techniques help a lot, but many problems remain: with a huge number of students trying to access the university web site at the same time, access through the Internet becomes a very slow and tedious process. In this research, Computer Telephony Integration (CTI) technology is used to solve these problems by enabling students to register their courses using their telephones. Telephony Application Programming Interface (TAPI) controls are used to develop a CTI application for accessing and updating registration databases. The analysis, design, implementation, and testing of the system are included.

Keywords: Computer telephony integration, registration, databases, software engineering.

Received February 19, 2006; accepted April 12, 2006

 
Monday, 16 June 2008 07:15

SAPIENCE: A Simulator of Interactive Pedagogical Activities for Distance Learning Environment

Tahar Bouhadada and Mohamed-Tayeb Laskri

Laboratory Research on Computing (GRIA/LRI), University of Annaba, Algeria

 

Abstract: Several works on the engineering of Learning Active Situations (LAS) gave birth to tools for distance-learning scriptwriting based on collaboration and interactivity. This paper describes a distance-learning environment that integrates a simulator of interactive pedagogical activities dedicated to the teaching of compilation. The achieved prototype, SAPIENCE, is a framework based on controlled simulation. The software uses the notion of interactivity, in its different kinds, to build LAS founded on the pedagogical script forms and the script monitor concepts. The teacher possesses tools enabling him/her to work out LAS according to the learner's profile, to adapt the pedagogical strategies, to supervise the registered learners across sessions in a personalized way or in groups, and thus to control the learners' activities and to provide the most adapted pedagogical assistance.

Keywords: Pedagogical simulator, Learning Active Situation (LAS), pedagogical script, controlled simulation.

Received February 18, 2006; accepted April 23, 2006

Monday, 16 June 2008 07:13

Improved Vector Quantization Approach for Discrete HMM Speech Recognition System

Mohamed Debyeche1, Jean-Paul Haton2, and Amrane Houacine1

1Faculty of Electronics and Computer Sciences, USTHB, Algeria

2LORIA/INRIA-Lorraine, France

 

Abstract: This paper presents an improved Vector Quantization (VQ) approach for discrete Hidden Markov Models (HMMs). The improved VQ approach performs an optimal distribution of VQ codebook components over HMM states. This technique, which we named Distributed Vector Quantization (DVQ) of hidden Markov models, succeeds in unifying the acoustic micro-structure and the phonetic macro-structure when estimating HMM parameters. The DVQ technique is implemented in two variants: the first uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second exploits the classification behavior of Neural Networks (NN-DVQ) for the same purpose. The proposed variants are compared with the HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system while maintaining the decoding speed of the models.

Keywords: Arabic language, hidden Markov model, vector quantization, neural network, speech recognition.

Received February 7, 2006; accepted April 26, 2006
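The K-means codebook training underlying the K-means-DVQ variant can be sketched as follows (a generic VQ sketch; the paper's distributed allocation of codewords over HMM states is not shown):

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """K-means VQ codebook: k centroids minimizing squared distortion
    over the training acoustic vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each vector to its nearest codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Map each acoustic vector to the index of its nearest codeword,
    producing the discrete observation symbols consumed by the HMM."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)
```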

 
Monday, 16 June 2008 07:08

Deductive Inference in the Context of the Dialogue Process

Igor Chimir1 and Waheeb Abu-Dawwas2

1Department of Information Technology, Odessa State Environmental University, Ukraine

2Department of Computer Science, University of Petra, Jordan

 

Abstract: The paper is devoted to investigating the commonality between the dialogue process and the process of deductive inference. Attention is focused on the goal-oriented interactive process between two agents (active and reactive) involved in a step-by-step question-answering dialogue, which is considered an activity directed at solving a certain problem. A key element of the architecture of a Dialogue Problem Solver is the Dialogue Knowledge Base (DiKB). The paper demonstrates how the inference process of classical rule-based systems can be emulated by navigation within a DiKB, and some positive features of such emulation are discussed. The final section presents a formal theory of the DiKB structure from the point of view of the inference process. In conclusion, the idea of a knowledge base agent with a distributed architecture is discussed.

Keywords: Question-answering dialogue, logic of questions and answers, dialogue knowledge base, rule-based system.

Received February 7, 2006; accepted April 18, 2006
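The classical rule-based inference that the paper emulates by DiKB navigation can be sketched, in its standard forward-chaining form, as follows (illustrative code, not the authors' formalism):

```python
def forward_chain(rules, facts):
    """Minimal forward-chaining inference. Each rule is a pair
    (premises, conclusion); rules fire until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)       # rule fires, derive the conclusion
                changed = True
    return facts
```

In the DiKB view, each firing corresponds to one question-answer step of the dialogue rather than an explicit rule application.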

 
Monday, 16 June 2008 07:04

A Multi-Agent System for POS-Tagging Vocalized Arabic Texts

Chiraz Ben Othmane Zribi, Aroua Torjmen, and Mohamed Ben Ahmed

RIADI laboratory, University of La Manouba, Tunisia

 

Abstract: In this paper, we address the problem of Part-Of-Speech (POS) tagging of Arabic texts with vowel marks. After describing the specificities of the Arabic language and the difficulties they induce for POS-tagging, we propose an approach that combines several methods (stochastic and rule-based). For the implementation of these methods and of the global POS-tagging system, we adopted a multi-agent architecture in which five tagger agents work in parallel, each applying its own method, to propose for each word in a sentence the suitable tag among those proposed by the morphological analyzer. The tagger agents cooperate with each other and with an unknown-words solver agent to resolve unknown words, and a voting agent then decides which tag to assign to each word. Finally, we present the experimental protocol used to evaluate the system and the obtained results, which we consider very satisfactory.

Keywords: Natural Language Processing (NLP), Arabic language, morphological analyzer, part-of-speech tagging, hybrid methods, multi-agent system.

Received February 3, 2006; accepted April 22, 2006
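The final voting step can be illustrated by a simple majority vote over the tags proposed per word (a toy sketch; the actual agents, tagset and tie-breaking strategy of the paper may differ):

```python
from collections import Counter

def vote(tag_proposals):
    """Combine per-word tag sequences proposed by several tagger agents
    by majority vote; ties go to the earliest-listed agent's proposal."""
    tags = []
    for proposals in zip(*tag_proposals):   # one tuple of candidate tags per word
        counts = Counter(proposals)
        best = max(proposals, key=lambda t: (counts[t], -proposals.index(t)))
        tags.append(best)
    return tags
```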

 
Monday, 16 June 2008 07:02

Adaptive Contention Window Scheme for WLANs

Elwathig Elhag and Mohamed Othman

Faculty of Computer and Information Technology, University Putra Malaysia, Malaysia

 

Abstract: In this paper, a new backoff algorithm is proposed to enhance the performance of the IEEE 802.11 Distributed Coordination Function (DCF), which employs the Binary Exponential Backoff (BEB) algorithm. We present simulation results showing that the new algorithm outperforms the BEB algorithm as well as previously proposed enhancement algorithms. A salient feature of our algorithm is that it performs well whether the number of active stations is large or small, that is, in both heavy and light contention cases. Furthermore, the adaptive window adjustment algorithm is simpler than previously proposed enhancement schemes in that no live measurement of WLAN traffic activity is needed and no constant packet size is assumed.

Keywords: IEEE 802.11, wireless local area networks, WLANs, DCF, binary exponential backoff, ACW.

Received February 2, 2006; accepted April 30, 2006
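For reference, the standard BEB rule and a hypothetical adaptive alternative can be sketched as follows (the paper's exact adjustment rule is not given in the abstract, so the adaptive variant and its `alpha` parameter are illustrative assumptions):

```python
CW_MIN, CW_MAX = 32, 1024   # typical 802.11 contention window bounds

def beb_next_cw(cw, collided):
    """Standard binary exponential backoff of IEEE 802.11 DCF:
    double the contention window on collision, reset on success."""
    return min(2 * cw, CW_MAX) if collided else CW_MIN

def adaptive_next_cw(cw, collided, alpha=0.8):
    """Hypothetical adaptive variant: shrink the window gently on success
    instead of resetting to CW_MIN, so the station keeps a memory of
    recent contention and avoids re-colliding under heavy load."""
    return min(2 * cw, CW_MAX) if collided else max(int(alpha * cw), CW_MIN)
```

The gentle decrease is what lets an adaptive scheme behave well in both heavy contention (window stays large) and light contention (window decays back toward CW_MIN).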

 
Monday, 16 June 2008 07:00

Dimensionality Reduction in Time Series: A PLA-Block-Sorting Method

Bachir Boucheham

Department of Informatics, University of Skikda, Algeria

 

Abstract: We address the problem of data reduction in time series through a combination of two newly developed algorithms. The first is a modified version of the Douglas-Peucker Algorithm (DPA) for short-term redundancy reduction. The second is an alternative to the classical statistical methods for long-term redundancy reduction and is based on block sorting, a technique inspired by the relatively recent Burrows and Wheeler Algorithm (BWA). The novel reduction scheme was applied to ECG time series using the public MIT-BIH ECG database. Results show that the novel scheme is highly competitive with the best performing existing techniques (SPIHT, TSVD, CCSP-ORD-VLC and others).

Keywords: Data reduction, time series, long-term compression, Douglas-Peucker algorithm, block sorting.

Received February 2, 2006; accepted April 16, 2006
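The short-term stage builds on the Douglas-Peucker Algorithm; its classical, unmodified form can be sketched as follows (the paper's modification and the block-sorting stage are not shown):

```python
import math

def douglas_peucker(points, eps):
    """Classical Douglas-Peucker polyline simplification: keep the point
    farthest from the chord if its distance exceeds eps, then recurse
    on the two halves; otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # perpendicular distance of each interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:imax + 1], eps)
    right = douglas_peucker(points[imax:], eps)
    return left[:-1] + right   # drop the duplicated split point
```

Applied to an ECG trace, this removes short-term redundancy (near-collinear samples) while preserving the waveform's salient extrema.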

Monday, 16 June 2008 06:58

Updating Search Engines Using Meta-Updates

Ezz Hattab

Faculty of IS and Technology, Arab Academy for Banking and Financial Sciences, Jordan

Abstract: Web search engines provide an extremely valuable service by indexing web content. However, much of this content is fluid; it changes, moves, and occasionally disappears, which creates a novel challenge in keeping search engines up-to-date. This paper investigates how to keep search engines up-to-date by proposing a meta-updates technique. Meta-updates keep useful information about the behavior of a page, to be provided to the spider of the search engine. The proposed technique saves much of the overhead of the complex probability calculations suggested in related works.

Keywords: Web content management, web content updates, search engine freshness.

Received January 25, 2006; accepted September 14, 2006

 
Monday, 16 June 2008 06:51

Overview of Some Algorithms of Off-Line Arabic Handwriting Segmentation

Toufik Sari and Mokhtar Sellami

LRI Laboratory, University of Badji Mokhtar-Annaba, Algeria

 

Abstract: We present in this paper an overview of work carried out in the field of automatic segmentation of off-line Arabic handwriting. Arabic writing is cursive in nature, whether printed or handwritten. The shapes of characters vary considerably according to their positions within the word, and word shapes change depending on whether letters are horizontally or vertically ligatured, i.e., superposed. This variability makes the decomposition of words into letters very delicate and not always reliable, which explains the lack of robust commercial systems. The objective of this paper is to survey the different techniques for off-line Arabic handwriting segmentation proposed in the literature.

Keywords: Arabic handwriting segmentation, contour following, topological rules, ligatures.

Received December 28, 2005; accepted September 27, 2006
