
Cyber Security Using Arabic CAPTCHA Scheme

Bilal Khan1, Khaled Alghathbar1, 2, Muhammad Khurram Khan1, Abdullah Alkelabi2,
 and Abdulaziz Alajaji2
1Center of Excellence in Information Assurance, King Saud University, Saudi Arabia
2Department of Information Systems, King Saud University, Saudi Arabia
 

Abstract:
Bots are programs that crawl through websites and make automatic registrations. CAPTCHAs using Latin script are widely used to prevent automated bots from abusing online services on the World Wide Web. However, many of the existing English-based CAPTCHAs have inherent problems and cannot assure the security of these websites. This paper proposes a method that focuses on the use of Arabic script in CAPTCHA generation. The proposed scheme uses specific Arabic font types to generate the CAPTCHA, exploiting the limitations of Arabic OCR in reading Arabic text. The proposed scheme is beneficial in Arabic-speaking countries and is very useful in protecting internet resources. A survey was conducted to assess the usability of the scheme, with satisfactory results. In addition, experiments were carried out to evaluate the robustness of the scheme against OCR; the results were encouraging. Moreover, a comparative study of the proposed CAPTCHA and a Persian CAPTCHA scheme shows its advantages over the Persian scheme.
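
As a minimal illustration of the kind of generator described above (not the authors' implementation), the sketch below renders a random Arabic string in a specific font and adds light noise. It assumes Pillow is installed and that an Arabic TrueType font exists at the hypothetical path fonts/arabic.ttf, and it omits the text-shaping step needed to render fully connected Arabic script.

```python
# Minimal sketch of an Arabic text CAPTCHA generator (not the authors' implementation).
# Assumes Pillow is installed and an Arabic TrueType font exists at the path below.
import random
from PIL import Image, ImageDraw, ImageFont

ARABIC_LETTERS = "ابتثجحخدذرزسشصضطظعغفقكلمنهوي"
FONT_PATH = "fonts/arabic.ttf"   # hypothetical path to a specific Arabic font

def generate_captcha(length=5, size=(240, 80)):
    text = "".join(random.choice(ARABIC_LETTERS) for _ in range(length))
    image = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(FONT_PATH, 40)
    # Render the challenge text; a production scheme would also distort it
    # and apply proper Arabic shaping so the letters join correctly.
    draw.text((20, 20), text, fill="black", font=font)
    # Add light pixel noise to hinder OCR segmentation.
    for _ in range(400):
        x, y = random.randrange(size[0]), random.randrange(size[1])
        draw.point((x, y), fill="gray")
    return text, image

if __name__ == "__main__":
    answer, img = generate_captcha()
    img.save("captcha.png")
    print("Expected answer:", answer)
```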

Keywords: CAPTCHA, Arabic, automated-bots, cyber security, spam.

 
Received July 11, 2010; accepted March 1, 2011

Selection of Distinctive SIFT Feature Based on its Distribution on Feature Space and Local Classifier for Face Recognition

Sung-Kil Lim, and Hyon-Soo Lee
Department of Computer Engineering Graduate School, Kyung Hee University, Korea
 

Abstract:
This paper investigates a face recognition system based on Scale Invariant Feature Transform (SIFT) features and their distribution in feature space. The system takes advantage of SIFT, which possesses strong robustness to expression, accessory, pose and illumination variations. Since each SIFT keypoint is used as a face feature and the keypoints are distributed in a complicated manner in feature space, we partition the feature space with a Self Organizing Map (SOM) and adopt a local Multilayer Perceptron (MLP) for each node on the map to improve the classification performance. Moreover, the distinctive features among all SIFT keypoints in each face class are defined and extracted based on the feature distribution on the SOM. Finally, the face is recognized through the proposed scoring method, which depends on the classification results of these distinctive features. In the experiments, the proposed method gave a higher face recognition rate than other methods, including matching-based and holistic feature-based methods, on three well-known databases.
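
A minimal sketch of the idea of partitioning SIFT descriptors with a SOM and attaching a local MLP to each map node is given below; it assumes opencv-python, minisom and scikit-learn are available, and the simple keypoint-voting score stands in for the paper's own scoring method.

```python
# Sketch of SOM-partitioned SIFT descriptors with one local MLP per map node
# (an illustration of the idea, not the authors' exact pipeline).
# Assumes opencv-python, minisom and scikit-learn are installed.
from collections import defaultdict
import numpy as np
import cv2
from minisom import MiniSom
from sklearn.neural_network import MLPClassifier

def sift_descriptors(gray_image):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_image, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def train(face_images, labels, map_size=(4, 4)):
    desc_list, desc_labels = [], []
    for img, lab in zip(face_images, labels):
        d = sift_descriptors(img)
        desc_list.append(d)
        desc_labels.extend([lab] * len(d))
    all_desc = np.vstack(desc_list)
    som = MiniSom(map_size[0], map_size[1], 128, sigma=1.0, learning_rate=0.5)
    som.train_random(all_desc, 5000)              # partition the feature space
    buckets = defaultdict(lambda: ([], []))
    for d, lab in zip(all_desc, desc_labels):
        x, y = buckets[som.winner(d)]
        x.append(d); y.append(lab)
    mlps = {}
    for node, (x, y) in buckets.items():
        if len(set(y)) > 1:                       # local classifier where a node sees several identities
            mlps[node] = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(x, y)
    return som, mlps

def recognize(som, mlps, test_image):
    votes = defaultdict(int)
    for d in sift_descriptors(test_image):
        clf = mlps.get(som.winner(d))
        if clf is not None:
            votes[clf.predict([d])[0]] += 1       # simple keypoint voting as the score
    return max(votes, key=votes.get) if votes else None
```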

Keywords: Face recognition, SIFT, distinctive features.
 
Received December 14, 2010; accepted March 1, 2011

Environment Recognition for Digital Audio Forensics Using MPEG-7 and Mel
 Cepstral Features


Ghulam Muhammad1, 2 and Khaled Alghathbar1
1Center of Excellence in Information Assurance, King Saud University, Saudi Arabia
2Department of Computer Engineering, King Saud University, Saudi Arabia

Abstract:
Environment recognition from digital audio for forensics applications is a growing area of interest. However, compared to other branches of audio forensics, it is less researched. In particular, little attention has been given to detecting the environment from files in which foreground speech is present, which is a common forensics scenario. In this paper, we perform several experiments focusing on the problems of environment recognition from audio, particularly for forensics applications. Experimental results show that the task is easier when audio files contain only environmental sound than when they contain both foreground speech and background environment sound. We propose a full set of MPEG-7 audio features combined with Mel Frequency Cepstral Coefficients (MFCCs) to improve the accuracy. In the experiments, the proposed approach significantly increases the recognition accuracy of environmental sound even in the presence of a high amount of foreground human speech.
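
The sketch below illustrates the general idea of a combined frame-level feature vector; it assumes librosa and numpy are installed, and a few low-level spectral descriptors stand in for the MPEG-7 audio features, which librosa does not provide directly.

```python
# Sketch of a combined feature vector for environment classification: MFCCs plus
# spectral descriptors standing in for MPEG-7 low-level audio features.
# Assumes librosa and numpy are installed.
import numpy as np
import librosa

def extract_features(wav_path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    flatness = librosa.feature.spectral_flatness(y=y)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    # Stack per-frame features, then summarise each file by mean and std,
    # giving one fixed-length vector per recording for a classifier.
    frames = np.vstack([mfcc, centroid, flatness, rolloff])
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])
```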


Keywords: Audio forensics, environment recognition, MPEG-7 audio, MFCC.

 
Received March 8, 2010; accepted October 24, 2010

Effect of Resampling Steepness on Particle
Filtering Performance in Visual Tracking

Zahidul Islam, Chi-Min Oh, and Chil-Woo Lee
 School of Electronics and Computer Engineering, Chonnam National University, South Korea

 

Abstract:
This paper presents a proficiently developed resampling algorithm for particle filtering. In any filtering algorithm adopting the concept of particles, especially in visual tracking, resampling is an essential process that determines the algorithm's performance and accuracy in the implementation step. Resampling is usually a linear function of the particle weights, which determines the number of particles copied. If many particles are used to prevent sample impoverishment, however, the system becomes computationally too expensive. For better real-time performance with high accuracy, we introduce a Steep Sequential Importance Resampling (S-SIR) algorithm that requires fewer, highly weighted particles by introducing a nonlinear function into the resampling method. Using the proposed algorithm, we have obtained remarkable results for visual tracking with only a few particles instead of many. Dynamic parameter setting boosts the steepness of resampling and reduces computational time without degrading performance. Since resampling does not depend on any particular application, the S-SIR analysis is appropriate for any type of particle filtering algorithm that adopts a resampling procedure. We show that the S-SIR algorithm can improve the performance of a complex visual tracking algorithm using only a few particles compared with a traditional SIR-based particle filter.
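
The abstract does not give the exact nonlinear function, so the sketch below assumes a simple power-law sharpening of the weights (controlled by a steepness exponent) ahead of standard systematic resampling, purely to illustrate the effect.

```python
# Sketch of "steep" resampling: weights are sharpened by a steepness exponent
# before standard systematic resampling, so fewer, highly weighted particles
# dominate the resampled set. The exact nonlinear function of S-SIR is not given
# in the abstract; the power law below is an illustrative assumption.
import numpy as np

def steep_systematic_resample(particles, weights, steepness=2.0):
    w = np.asarray(weights, dtype=float) ** steepness   # nonlinear sharpening
    w /= w.sum()
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    cumulative = np.cumsum(w)
    indices = np.minimum(np.searchsorted(cumulative, positions), n - 1)
    return particles[indices]

# Example: with high steepness, the top-weighted particles are copied most often.
particles = np.arange(10)
weights = np.array([0.30, 0.25, 0.15, 0.10, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01])
print(steep_systematic_resample(particles, weights, steepness=3.0))
```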


Keywords: Resampling, particle filter, multi-part colour histogram, steepness parameter, object tracking.
 
Received December 27, 2010; accepted March 1, 2011

An Enhanced Mechanism for Image Steganography Using Sequential Colour Cycle Algorithm

Lip Yee Por, Delina Beh, Tan Fong Ang, and Sim Ying Ong
1Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
2Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Malaysia


 

Abstract:
Several problems arise among the existing LSB-based image steganographic schemes due to distortion in the stego-image and limited payload capacity. Thus, a scheme has been developed with the aim of improving the payload of the secret data while keeping the quality of the produced stego-image within an acceptable threshold. This study modifies the current LSB substitution algorithm by introducing a new algorithm, namely the sequential colour cycle. To achieve higher security, multi-layered steganography can be performed by embedding the secret data into multiple layers of cover-images. The performance evaluation shows that the proposed algorithm achieves an embedding ratio of 1:2 while the image quality does not fall below the distortion threshold.
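
The sketch below illustrates LSB embedding that cycles sequentially through the R, G and B channels of consecutive pixels; the exact cycle, capacity and layering of the proposed scheme are assumptions here, and the cover image should be saved in a lossless format such as PNG.

```python
# Sketch of LSB embedding that cycles sequentially through the R, G, B channels
# of consecutive pixels; the exact cycle used by the proposed scheme is assumed.
# Assumes Pillow and numpy are installed.
import numpy as np
from PIL import Image

def embed(cover_path, message, stego_path):
    img = np.array(Image.open(cover_path).convert("RGB"))
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    flat = img.reshape(-1, 3)                 # view over the pixel array
    if len(bits) > len(flat):
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(bits):
        channel = i % 3                       # sequential colour cycle: R, G, B, R, ...
        flat[i, channel] = (flat[i, channel] & 0xFE) | int(bit)
    Image.fromarray(img).save(stego_path)     # use a lossless format such as PNG

def extract(stego_path, message_length_bytes):
    flat = np.array(Image.open(stego_path).convert("RGB")).reshape(-1, 3)
    bits = "".join(str(flat[i, i % 3] & 1) for i in range(message_length_bytes * 8))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```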


Keywords: Image steganography, steganography, least significant bit, data hiding, information hiding.

 
Received August 18, 2010; accepted October 24, 2010

Arabic Expert System Shell

Venus Samawi1, Akram Mustafa1, and Abeer Ahmad2
1Department of Computer Science, Al-albayt University, Jordan
2Department of Computer Science, AL-Nahrain University, Iraq
 

Abstract:
Most expert system designers suffer from knowledge acquisition complications. Expert system shells contain facilities that can simplify knowledge acquisition and make domain experts themselves responsible for knowledge structuring and encoding. The aim of this research is to develop an Arabic Expert System Shell (AESS) for diagnosing diseases based on natural language. The suggested AESS mainly consists of two phases. The first phase is responsible for automatically acquiring human expert knowledge. The acquired knowledge is analyzed by an Arabic morphological system, which analyzes the given Arabic phrase and finds the required keywords (roots). The suggested system is provided with the required domain dictionary to be used by the Arabic morphological system. The second phase concerns the design of the inference engine together with a natural-language end-user interface that uses a backward chaining method. When AESS was tested by experts and end users, its performance in constructing the Knowledge Base (KB) and diagnosing problems was found to be very accurate (the diagnostic ability of AESS is 99%). Merging the morphological system with knowledge acquisition is very effective in constructing the target KB without any duplicate or inconsistent rules. The same technique could be used to build an expert system shell based on any other natural language (English, French, etc.); the only difference is building a morphological system suitable for that language, in addition to the desired domain dictionary.
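
A minimal backward-chaining sketch is shown below; the rules and keyword roots are illustrative English stand-ins for the Arabic roots produced by the morphological analysis phase, not the AESS knowledge base.

```python
# Minimal sketch of backward chaining over rules whose conditions are keyword
# roots produced by the morphological analysis phase (illustrative only; the
# goal names and roots are hypothetical stand-ins for the Arabic knowledge base).

RULES = {
    # goal: list of alternative condition sets (keyword roots)
    "influenza": [{"fever", "cough", "ache"}],
    "fever": [{"high_temperature"}],
}

def backward_chain(goal, facts, rules=RULES):
    """Return True if the goal can be derived from the facts via the rules."""
    if goal in facts:
        return True
    for conditions in rules.get(goal, []):
        if all(backward_chain(c, facts, rules) for c in conditions):
            return True
    return False

print(backward_chain("influenza", {"high_temperature", "cough", "ache"}))  # True
```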


Keywords: Expert system, knowledge acquisition, knowledge engineering, diagnosing expert system, Arabic morphological system.
 
Received October 25, 2010; accepted March 1, 2011

Joint Routing, Scheduling and Admission
 Control Protocol for WiMAX Networks

Raja Prasad and Pentamsetty Kumar
Department of Electronics and Communications, MLR Institute of Technology, Hyderabad, India
 

Abstract:
In WiMAX networks, routing and scheduling are tightly coupled; the routing and scheduling problem therefore differs from that of 802.11-based mesh networks, where the two can be designed and operated separately. Standard problems in wireless systems include bandwidth allocation and Connection Admission Control (CAC). In this paper, we design a joint routing, scheduling and admission control protocol for WiMAX networks. In the adaptive scheduling, packets from traffic classes of different priorities are transmitted in their allotted slots adaptively, depending on the channel condition. A bandwidth estimation technique is combined with route discovery and route setup in order to find the best route. The admission control technique is based on estimating the bandwidth utilization of each traffic class, with the constraint that the delay requirement of real-time flows must be satisfied. The currently available bandwidth is estimated for all nodes; for a new incoming flow, the requested bandwidth is estimated and a decision is made whether to admit the flow. Simulation results show that the proposed protocol achieves better throughput and channel utilization while reducing the blocking probability and delay.
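
The sketch below illustrates the admission decision described above; the traffic-class names, data fields and delay model are illustrative assumptions rather than the protocol's exact specification.

```python
# Sketch of the admission-control decision described above: a new flow is admitted
# only if its requested bandwidth fits in the estimated available bandwidth and,
# for real-time classes, the estimated delay stays within the flow's bound.
# Class names, fields and the delay model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Flow:
    traffic_class: str        # e.g., "UGS", "rtPS", "nrtPS", "BE"
    requested_bw: float       # kbit/s
    delay_bound_ms: float     # only meaningful for real-time classes

def admit(flow, link_capacity, used_bw_per_class, estimated_delay_ms):
    available = link_capacity - sum(used_bw_per_class.values())
    if flow.requested_bw > available:
        return False                                   # not enough bandwidth
    if flow.traffic_class in ("UGS", "rtPS") and estimated_delay_ms > flow.delay_bound_ms:
        return False                                   # real-time delay bound violated
    used_bw_per_class[flow.traffic_class] = (
        used_bw_per_class.get(flow.traffic_class, 0.0) + flow.requested_bw
    )
    return True

usage = {"UGS": 2000.0, "rtPS": 1500.0, "BE": 500.0}
print(admit(Flow("rtPS", 800.0, 40.0), link_capacity=6000.0,
            used_bw_per_class=usage, estimated_delay_ms=25.0))   # True
```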

Keywords: Routing, scheduling, admission, channel, WiMAX networks, bandwidth.
 
Received March 26, 2011; accepted July 28, 2011

Exploiting Hybrid Methods for Enhancing
Digital X-Ray Images

Yusuf Abu Sa'dah1, Nijad Al-Najdawi1, and Sara Tedmori2
1Department of Computer Science, Al-Balqa Applied University, Jordan
2Department of Computer Science, Princess Sumaya University for Technology, Jordan

 

Abstract:
The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. This paper presents a novel hybrid method for enhancing digital X-ray radiograph images by seeking optimal combinations of spatial- and frequency-domain image enhancement. The selected methods from the spatial domain include the negative transform, histogram equalization and the power-law transform. The selected enhancement methods from the frequency domain include Gaussian low- and high-pass filters and Butterworth low- and high-pass filters. Over 80 possible combinations have been tested, some of which yielded optimal enhancement compared to the original image, according to radiologists' subjective assessments. Medically, the proposed methods have clarified the vascular impression in the hilar regions of regular X-ray images. This can help radiologists in diagnosing vascular pathology, such as pulmonary embolism in the case of a thrombus lodged in the pulmonary trunk, which appears as a filling defect. The proposed method results in more detailed images, hence giving radiologists additional information about thoracic cage details including the clavicles, ribs, and costochondral junctions.
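
As an illustration, the sketch below applies one spatial + frequency pairing from the families listed above (histogram equalization followed by a frequency-domain Butterworth high-pass filter); it assumes opencv-python and numpy, and the particular pairing and cutoff are examples, not the combination judged optimal by the radiologists.

```python
# Sketch of one spatial + frequency combination from the families listed above:
# histogram equalization followed by a Butterworth high-pass filter applied in
# the frequency domain. The pairing and cutoff are illustrative assumptions.
import numpy as np
import cv2

def butterworth_highpass(shape, cutoff, order=2):
    rows, cols = shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    V, U = np.meshgrid(v, u)
    d = np.sqrt(U ** 2 + V ** 2)
    d[d == 0] = 1e-6                       # avoid division by zero at the centre
    return 1.0 / (1.0 + (cutoff / d) ** (2 * order))

def enhance(xray_gray):
    spatial = cv2.equalizeHist(xray_gray)              # spatial-domain step
    f = np.fft.fftshift(np.fft.fft2(spatial.astype(float)))
    filtered = f * butterworth_highpass(spatial.shape, cutoff=30)
    out = np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage (hypothetical file names):
# img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("enhanced.png", enhance(img))
```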



Keywords: X-Ray, radiography, image enhancement, spatial domain, frequency domain.
 
Received June 19, 2010; accepted March 1, 2011

An Efficient Group Key Agreement Scheme for
Mobile Ad-Hoc Networks

Yang Yang1, 2, Yupu Hu2, Chunhui Sun2, Chao Lv2, and Leyou Zhang3
1College of Mathematics and Computer Science, Fuzhou University, China
2Department of Telecommunication, Xidian University, China
3Department of Science, Xidian University, China
   
 

Abstract:
Mobile Ad-hoc Networks (MANETs) are considered the most promising terminal networks in future wireless communications and are characterized by flexibility and fast, easy deployment, which make them an interesting technology for various applications. Group communication is one of the main concerns in MANETs. To provide secure group communication in wireless networks, a group key is required so that efficient symmetric encryption can be performed. In this paper, we propose a constant-round group key agreement scheme to enable secure group communication, which adopts the Identity Based Broadcast Encryption (IBBE) methodology. When a new Ad-hoc network is constructed, the suggested scheme requires no message exchange to establish a group key if the receivers' identities are known to the broadcaster, an advantage over most existing key agreement schemes. The proposed scheme can build a new group and establish a new group key with ease when a member joins or leaves. In addition, our scheme is computationally efficient, and only one bilinear pairing computation is required for each group member to obtain his/her session key. A notable property of the scheme is that the communication cost remains unchanged as the group size grows. Furthermore, we show that the new scheme is provably secure without random oracles. Thus, the scheme can not only meet the security demands of large mobile Ad-hoc networks but also improve execution performance.


Keywords: Group key agreement scheme, identity based, broadcast encryption, data security, public key cryptography, standard model.

 
Received October 6, 2010; accepted March 1, 2011


A Framework of Summarizing XML
Documents with Schemas

Teng Lv1 and Ping Yan2
1Teaching and Research Section of Computer, Army Officer Academy, China
2School of Science, Anhui Agricultural University, China


 

Abstract:
eXtensible Markup Language (XML) has become one of the de facto standards for data exchange and representation in many applications. An XML document is often too complex and large for a human being to understand and use; a summarized version of the original document is useful in such cases. Three criteria are given to evaluate the final summarized XML document: document size, information content, and information importance. A framework for summarizing an XML document based on both the document itself and its schema is given; the schema is used because it implies much important semantic and structural information. In our framework, redundant data are first removed using abnormal functional dependencies and the schema structure. Then the tags and values of the XML document are summarized based on the document itself and the schema. Our framework is a semi-automatic approach that helps users summarize an XML document, in the sense that some parameters must be specified by the users. Experiments show that the summarized XML document produced by the framework achieves a good balance of document size, information content, and information importance compared with the original one.
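
The sketch below illustrates one summarization step in this spirit: duplicate sibling subtrees are removed and long text values are truncated under a user-supplied parameter; the schema-based analysis and functional dependencies used by the actual framework are omitted.

```python
# Sketch of one summarization step in the spirit of the framework: duplicate
# sibling subtrees are removed and long text values are truncated, with the
# retention controlled by a user-supplied parameter. The real framework also
# exploits schema information and functional dependencies, omitted here.
import xml.etree.ElementTree as ET

def summarize(element, max_text_len=40):
    seen = set()
    for child in list(element):
        signature = ET.tostring(child)
        if signature in seen:
            element.remove(child)          # drop structurally redundant duplicates
        else:
            seen.add(signature)
            summarize(child, max_text_len)
    if element.text and len(element.text) > max_text_len:
        element.text = element.text[:max_text_len] + "..."
    return element

doc = ET.fromstring(
    "<library><book><title>XML in a Nutshell</title></book>"
    "<book><title>XML in a Nutshell</title></book></library>")
print(ET.tostring(summarize(doc)).decode())   # the duplicate <book> is removed
```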



Keywords: XML, document summarization, schema, key, functional dependency.


 
Received June 14, 2010; accepted March 1, 2011

Intelligent Approach for Data Collection in Wireless Sensor Networks

Yujin Lim1 and Sanggil Kang2
1Department of Information Media, University of Suwon, Korea
2Department of Computer Science and Information Engineering, Inha University, Korea


 

Abstract:
In wireless sensor networks, one of the most important issues is data collection from sensors to the sink. Many researchers employ a mathematical formula to select the next forwarding node in a network-wide manner. We are motivated by the observation that the surrounding environments of nodes differ in time and space: because the different situations of nodes are not considered when selecting the next forwarding node, the performance of data collection is degraded. In this paper, we present an intelligent approach for data collection in sensor networks. We model a nonlinear cost function for determining the next forwarder according to the input types, i.e., whether the inputs used to generate the function's output are correlated or uncorrelated. In our method, the correlated inputs are represented by a weighted sum in a dependent fashion, whereas the uncoupled inputs are treated in an independent fashion in the nonlinear function. The weights in the function are determined in the direction that maximizes the reliability of data collection. In the experimental section, we show that our method outperforms other conventional methods with respect to the efficiency of data collection from sensors to the sink.
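
The sketch below illustrates a next-forwarder score of this shape; the particular inputs (link quality, residual energy, distance progress), weights and logistic form are illustrative assumptions, not the authors' exact cost function.

```python
# Sketch of a next-forwarder score in the spirit of the abstract: correlated
# inputs enter through one weighted sum, an uncorrelated input enters through
# its own term, and a nonlinearity combines them. The particular inputs and the
# logistic form are illustrative assumptions, not the authors' exact function.
import math

def forwarder_score(link_quality, residual_energy, distance_progress,
                    w_correlated=(0.6, 0.4), w_independent=0.5):
    # link quality and residual energy are treated as correlated inputs
    correlated = w_correlated[0] * link_quality + w_correlated[1] * residual_energy
    # distance progress toward the sink is treated as an independent input
    independent = w_independent * distance_progress
    return 1.0 / (1.0 + math.exp(-(correlated + independent)))   # logistic output

candidates = {
    "node_a": forwarder_score(0.9, 0.7, 0.8),
    "node_b": forwarder_score(0.6, 0.9, 0.5),
}
print(max(candidates, key=candidates.get))   # pick the highest-scoring neighbour
```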

Keywords: Coupled input, decoupled input, wireless sensor network, intelligent data collection.


 
Received February 8, 2010; accepted January 3, 2011

Automated Retrieval of Semantic Web Services:
A Matching Based on Conceptual Indexation

Hadjila Fethallah and Chikh Mohammed Amine
Computer Science Department, UABT University, Algeria

 

Abstract:
Web services are taking an important place in the distributed computing field, as well as in electronic business. In this paper, we present initial research that deals with the issue of automated service retrieval. To this end, we propose an approach that exploits the service interface (inputs/outputs) and the domain ontology in order to conceptually index web services. After that, we compute a similarity score between the request and the indexed web services using the cosine measure. An experiment based on the OWLS-TC test collection is described to evaluate the system. The obtained results are very encouraging and confirm the suitability of the solution.
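
The sketch below illustrates matching a request against conceptually indexed services with the cosine measure; the concept vocabulary and index vectors are hypothetical, whereas in the paper they are derived from the service inputs/outputs and the domain ontology.

```python
# Sketch of matching a request against conceptually indexed services with the
# cosine measure. The concept vocabulary and the index vectors are hypothetical;
# in the paper they are derived from service inputs/outputs and a domain ontology.
import numpy as np

CONCEPTS = ["Book", "Price", "Author", "ISBN"]          # hypothetical ontology concepts

SERVICE_INDEX = {                                       # concept weights per service
    "BookPriceService": np.array([1.0, 1.0, 0.0, 0.5]),
    "AuthorLookupService": np.array([1.0, 0.0, 1.0, 0.0]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(request_vector, top_k=2):
    scores = {name: cosine(request_vector, vec) for name, vec in SERVICE_INDEX.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

request = np.array([1.0, 1.0, 0.0, 0.0])                # "find the price of a book"
print(retrieve(request))
```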



Keywords: Web service research, similarity measure, ontologies, OWLS, information retrieval.
 
Received October 17, 2010; accepted January 3, 2011

QoS-Based Performance and Resource Management in 3G Wireless Networks
in Realistic Environments

Aymen Issa Zreikat
Department of Information Technology, Mu’tah University, Jordan

 

Abstract:
Third generation networks, such as the Universal Mobile Telecommunication System (UMTS), offer multimedia applications and services that meet end-to-end quality of service requirements. The load factor in the uplink is critical; it is one of the important parameters that have a direct impact on resource management as well as on cell performance. In this paper, the fractional load factor in the uplink and the total downlink power are derived taking into account multi-path propagation in different environments. The analysis is based on varying parameters that affect the Quality of Service (QoS) as well as the performance, such as the service activity factor, the energy-to-noise ratio (Eb/N0), the interference factor, and the non-orthogonality factor of the codes. The impact of these parameters on the performance and the capacity, as well as on the total throughput of the cell, is also investigated. It is shown that, in addition to the above parameters, the type of environment has a major effect on the noise rise in the uplink as well as on the total power in the downlink. The investigation covers different types of services, i.e., voice (conversational, 12.2 kbit/s) and packet-switched services (streaming at 64 and 128 kbit/s, and interactive at 384 kbit/s). Additionally, the results obtained in this paper are compared with similar results in the literature.
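
For reference, the sketch below computes the textbook WCDMA uplink load factor and the resulting noise rise from the per-service parameters mentioned above (Eb/N0, bit rate, activity factor, other-cell interference); it is the standard form of these relations, not the paper's full derivation.

```python
# Sketch of the standard WCDMA uplink load-factor and noise-rise relations used
# in this kind of analysis (textbook form; not the paper's full derivation).
import math

W = 3.84e6   # WCDMA chip rate, chips per second

def user_load(eb_n0_db, bit_rate, activity):
    eb_n0 = 10 ** (eb_n0_db / 10.0)
    return 1.0 / (1.0 + W / (eb_n0 * bit_rate * activity))

def uplink_load(users, other_cell_interference=0.55):
    # users: list of (Eb/N0 in dB, bit rate in bit/s, service activity factor)
    return (1.0 + other_cell_interference) * sum(user_load(*u) for u in users)

def noise_rise_db(load):
    return -10.0 * math.log10(1.0 - load)

# Example: 30 voice users (12.2 kbit/s) and 5 streaming users (64 kbit/s).
users = [(5.0, 12200, 0.67)] * 30 + [(2.0, 64000, 1.0)] * 5
eta = uplink_load(users)
print(f"uplink load factor = {eta:.2f}, noise rise = {noise_rise_db(eta):.1f} dB")
```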


Keywords: 3G wireless networks, QoS, radio resource management, UMTS, load factor, noise rise.
 
Received August 6, 2010; accepted January 3, 2011