
I2MANET Security Logical Specification Framework

Yasir Mohamed  and Azween Abdullah
Department of Computer Sciences and Information Technology, Universiti Teknologi PETRONAS, Malaysia

 
Abstract: This paper presents an immune-inspired logical specification framework for securing Mobile Ad hoc Networks (I2MANETs). The framework simulates the human immune system in its first response, second response, adaptability, distributability, and many other immune features and properties. It is able to monitor, detect, classify, and block corrupted packets transferred between nodes in a distributed environment. Scalability and bandwidth conservation are explicitly addressed in the framework. The framework can be applied to many applications that depend on ad hoc technology, such as emergency and health-care systems and m-commerce.
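Immune-inspired detection of non-self packets is often illustrated by negative selection. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual algorithm; the bit-string "packet signatures" and all names are invented for the example.

```python
import random

def generate_detectors(self_set, n, length, seed=1):
    """Negative selection: generate random binary patterns and keep only
    those that match no 'self' (normal-traffic) signature."""
    rng = random.Random(seed)
    detectors = set()
    while len(detectors) < n:
        candidate = "".join(rng.choice("01") for _ in range(length))
        if candidate not in self_set:
            detectors.add(candidate)
    return detectors

self_packets = {"0101", "1100", "0011"}        # signatures of normal traffic
detectors = generate_detectors(self_packets, n=5, length=4)

# A packet matching any detector would be flagged as non-self (corrupted).
suspicious = [p for p in ["0101", "1111", "1100"] if p in detectors]
```

Because every detector survived censoring against the self set, normal packets can never be flagged, mirroring the immune system's tolerance of self.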


Keywords:  Wireless ad-hoc networks, mobile agent system, immune-based security, and framework specification logic.


Received July 18, 2009; accepted August 10, 2010


YAMI: Incremental Mining of Interesting Association Patterns

Eiad Yafi1, Ahmed Sultan Al-Hegami2, Afshar Alam1, and Ranjit Biswas3
1Department of Computer Science, Jamia Hamdard University, India
2Faculty of Computers and Information Technology, Sana’a University, Yemen
3Department of Computer Engineering, Jadavpur University, India
 
Abstract: Association rules mining is an important problem in data mining. The massively increasing volume of data in real-life databases has motivated researchers to design novel and incremental algorithms for association rules mining. In this paper, we propose an incremental association rules mining algorithm that integrates a shocking interestingness criterion during the process of building the model. A new interestingness measure, called the shocking measure, is introduced. One of the main features of the proposed approach is that it captures the user's background knowledge, which is monotonically augmented. The incremental model, which reflects the changing data and the user's beliefs, is attractive because it makes the overall KDD process more effective and efficient. We implemented the proposed approach, experimented with it on some public datasets, and found the results quite promising.
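The abstract does not give the formula for the shocking measure (SHR), so the following sketch uses a hypothetical stand-in: the deviation of a rule's newly observed confidence from the confidence the user believed. All rule names, thresholds, and the default belief of 0.5 are illustrative assumptions.

```python
def shock_score(belief_conf, observed_conf):
    """Degree to which an observed rule confidence contradicts prior belief
    (illustrative stand-in for the paper's shocking measure)."""
    return abs(observed_conf - belief_conf)

def interesting(beliefs, rules, threshold=0.3):
    """Keep rules whose confidence deviates sharply from the user's beliefs."""
    return [r for r in rules
            if shock_score(beliefs.get(r[0], 0.5), r[1]) >= threshold]

beliefs = {("bread", "butter"): 0.9}           # user expects this association
rules = [(("bread", "butter"), 0.25),           # new increment contradicts it
         (("milk", "eggs"), 0.55)]              # close to the default belief
shocking = interesting(beliefs, rules)
```

As the data grows incrementally, only rules that contradict the (monotonically augmented) belief store are surfaced, keeping the reported pattern set small.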


Keywords: Knowledge Discovery in Databases (KDD), data mining, incremental association rules, domain knowledge, interestingness, and Shocking Rules (SHR).


Received February 4, 2010; accepted August 10, 2010


A Dynamic ID-Based Authentication Scheme for M2M Communication of Healthcare Systems

Tien-Dung Nguyen and Eui-Nam Huh
 Department of Computer Engineering, Kyung Hee University, Korea

 
Abstract: M2M (machine-to-machine) applications bringing intelligence to ubiquitous environments have been in existence for many years. However, their provisioning using mobile technologies raises new security challenges. Security services such as authentication and key establishment are critical in M2M, especially for healthcare systems. We propose a simple M2M service architecture, applicable to any hospital, that considers the mobility of doctors and patients. An efficient security scheme with dynamic ID-based authentication using pairwise key distribution is applied in the M2M system. Security analysis shows that the scheme assures high security under shared-key and Sybil attacks.
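The core of any dynamic-ID scheme is that a device never transmits a fixed identity. The sketch below shows the general idea only, under assumed names; it is not the authors' protocol. A keyed MAC over the real identity and a per-session nonce yields an unlinkable pseudonym that the holder of the pairwise key can still verify.

```python
import hashlib
import hmac

def dynamic_id(pairwise_key, device_id, nonce):
    """A fresh per-session pseudonym: HMAC of the real ID and a nonce, so an
    eavesdropper cannot link two sessions to the same device."""
    return hmac.new(pairwise_key, device_id + nonce, hashlib.sha256).hexdigest()

key = b"shared-pairwise-key"       # hypothetical pre-distributed pairwise key
id1 = dynamic_id(key, b"sensor-42", b"nonce-1")
id2 = dynamic_id(key, b"sensor-42", b"nonce-2")   # same device, new pseudonym
```

The verifier, knowing the pairwise key and the candidate device IDs, recomputes the HMAC to authenticate; a Sybil node without the key cannot forge a valid pseudonym.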


Keywords: M2M, u-healthcare, WSN, and authentication.

Received March 24, 2010; accepted October 24, 2010


A Markov Random Field Model and Method to Image Matching

Mohammed Ouali1, Holger Lange2, and Kheireddine Bouazza1
1Department of Computer Science, Faculty of Sciences, Université d'Oran, Algeria
2R&D Group, General Dynamics Canada, Canada

 
Abstract: In this paper, the correspondence problem is solved by minimizing an energy functional using a stochastic approach. Our procedure generally follows Geman and Geman's Gibbs sampler for Markov Random Fields (MRFs). We propose a transition generator to generate and explore states; the generator allows constraints such as the epipolar, uniqueness, and order constraints to be imposed. We also propose to embed occlusions in the model. The energy functional is designed to take into account resemblance, continuity, and the number of occlusions. The disparity and occlusion maps, as modeled by their energy functional, i.e., as a Gibbs-Boltzmann distribution, are viewed as an MRF in which the matching solution is an optimal state.
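A heavily simplified sketch of this style of energy minimization, reduced to a 1-D scanline with only the resemblance (data) and continuity terms and no occlusions; the toy images, the temperature, and all parameter values are illustrative, not the paper's implementation.

```python
import math
import random

def energy(d, left, right, lam=1.0):
    """Data term (intensity resemblance) plus a continuity (smoothness) term."""
    data = sum(abs(left[i] - right[min(i + d[i], len(right) - 1)])
               for i in range(len(left)))
    smooth = sum(abs(d[i] - d[i - 1]) for i in range(1, len(d)))
    return data + lam * smooth

def sample_disparity(left, right, dmax=2, iters=300, temp=0.5, seed=0):
    """Resample disparities site by site, accepting uphill moves with
    Boltzmann probability exp(-dE/T), in the spirit of a Gibbs sampler."""
    rng = random.Random(seed)
    d = [0] * len(left)
    for _ in range(iters):
        i = rng.randrange(len(left))
        old = d[i]
        e_old = energy(d, left, right)
        d[i] = rng.randint(0, dmax)
        delta = energy(d, left, right) - e_old
        if delta > 0 and rng.random() >= math.exp(-delta / temp):
            d[i] = old                      # reject the uphill move
    return d

left, right = [3, 7, 3, 7], [0, 3, 7, 3, 7]   # right image shifted by one pixel
d = sample_disparity(left, right)
```

With the right image shifted by one pixel, the constant disparity map [1, 1, 1, 1] has zero energy, so the sampler is drawn toward it as the optimal state.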


Keywords: Disparity, MRF, image matching, stereo constraints, resemblance, epipolar geometry, uniqueness, and continuity.


Received May 10, 2010; accepted January 3, 2011


A Privacy-Preserving Classification Method Based on Singular Value Decomposition

Guang Li and Yadong Wang
Department of Computer Science and Engineering, Harbin Institute of Technology, China

 
Abstract: With the development of data mining technologies, privacy protection has become a challenge for data mining applications in many fields. To solve this problem, many privacy-preserving data mining methods have been proposed. One important type of such methods is based on Singular Value Decomposition (SVD). The SVD-based method provides perturbed data instead of original data, and users extract original data patterns from the perturbed data. The original SVD-based method perturbs all samples to the same degree. However, in reality, different users have different requirements for privacy protection, and different samples are not equally important for data mining. Thus, it is better to perturb different samples to different degrees. This paper improves the SVD-based data perturbation method so that it can perturb different samples to different degrees. In addition, we propose a new privacy-preserving classification mining method using our improved SVD-based perturbation method and sample selection. The experimental results indicate that, compared with the original SVD-based method, the proposed method is more effective in balancing data privacy and data utility.
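The per-sample idea can be sketched in a few lines: rebuild each row from a different rank truncation of the SVD, so rows asking for stronger privacy get a lower rank and hence a larger distortion. This is an illustrative reading of the approach, not the authors' exact algorithm; the tiny matrix and the per-row ranks are made up.

```python
import numpy as np

def svd_perturb(X, ranks):
    """Replace row i of X with its rank-ranks[i] SVD reconstruction;
    a smaller rank means a stronger perturbation of that sample."""
    X = np.asarray(X, dtype=float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = np.empty_like(X)
    for i, k in enumerate(ranks):
        out[i] = (U[i, :k] * s[:k]) @ Vt[:k]
    return out

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
# The middle sample requests stronger privacy: rank-1 reconstruction only.
P = svd_perturb(X, ranks=[2, 1, 2])
```

Full-rank rows are reproduced exactly (no perturbation), while the rank-1 row is pulled toward the dominant singular direction, hiding its individual values but keeping the global pattern miners rely on.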


Keywords: Privacy preservation, data mining, singular value decomposition, and sample selection.

Received June 3, 2010; accepted January 3, 2011


Estimating Quality of JavaScript

Sanjay Misra1 and Ferid Cafer2
1Department of Computer Engineering, Atilim University, Turkey
2Servus Bilgisayar, Turkey

 
Abstract: This paper proposes a complexity metric for JavaScript, since JavaScript is the most popular scripting language and runs in all major web browsers. The proposed metric, the JavaScript Cognitive Complexity Measure (JCCM), is intended to assess the design quality of scripts. The metric has been evaluated theoretically, validated empirically through real test cases, and compared with other similar metrics. The theoretical and empirical validation and the comparative study demonstrate the worth and robustness of the metric.
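Cognitive complexity metrics of this family typically sum weights assigned to control structures. The abstract does not give JCCM's actual weights or formula, so the sketch below uses purely illustrative weights over a token scan of a toy script; it shows the flavor of such a measure, not JCCM itself.

```python
import re

# Illustrative cognitive weights for control structures (not JCCM's values).
WEIGHTS = {"if": 2, "else": 2, "for": 3, "while": 3, "switch": 2,
           "function": 2, "try": 2, "catch": 2}

def cognitive_weight(js_source):
    """Sum the illustrative weights of control-structure keywords found in
    a script; a plain sequence of statements counts as 1."""
    tokens = re.findall(r"\b[A-Za-z_]\w*\b", js_source)
    return sum(WEIGHTS.get(t, 0) for t in tokens) or 1

snippet = "function f(n){ for(var i=0;i<n;i++){ if(i%2){ g(i); } } }"
score = cognitive_weight(snippet)   # function(2) + for(3) + if(2)
```

A real metric would parse the script rather than scan tokens (the regex would miscount keywords inside strings or comments), but the weighted-sum structure is the same.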


Keywords: Software engineering, software quality, software metrics, and JavaScript.

Received July 21, 2010; accepted January 3, 2011


Efficient Management of Schema Versioning in Multi-Temporal Databases

Zouhaier Brahmia1, Mohamed Mkaouar2, Salem Chakhar3, and Rafik Bouaziz1
1Faculty of Economic Sciences and Management, University of Sfax, Tunisia
2Faculty of Mathematical, Physical and Natural Sciences of Tunis, University of Tunis El Manar, Tunisia
3LAMSADE Laboratory, University of Paris-Dauphine, France

 
Abstract: To guarantee a complete data history in temporal databases, database management systems have to manage both the evolution of the schema over time, through schema versioning, and the evolution of data defined under different schema versions. This paper proposes a new approach for schema versioning in multi-temporal databases. It allows an efficient management of schema versions and their underlying data, through a smooth conversion of the temporal database. When creating a new schema version, the basic idea is to forbid (i) any automatic transfer of data defined under previous schema versions to the new version, in order to avoid data loss and ambiguity in the interpretation of the temporal intervals of data, and (ii) any change to the structures of previous schema versions, in order to allow legacy applications to remain operational after the schema evolution.
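The "no transfer, no restructuring" idea can be made concrete with a toy sqlite3 session. The table and column names are invented for illustration and the example is only a sketch of the principle, not the paper's conversion mechanism.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Schema version 1: employees with a valid-time interval.
db.execute("CREATE TABLE emp_v1 (name TEXT, vt_start TEXT, vt_end TEXT)")
db.execute("INSERT INTO emp_v1 VALUES ('Ali', '2009-01-01', '2010-12-31')")

# Schema version 2 adds a salary column -- as a NEW table. Data under v1 is
# neither moved nor restructured, so legacy applications keep working.
db.execute("CREATE TABLE emp_v2 "
           "(name TEXT, salary REAL, vt_start TEXT, vt_end TEXT)")
db.execute("INSERT INTO emp_v2 VALUES ('Ali', 900.0, '2011-01-01', '9999-12-31')")

# Old data remains queryable under its original structure.
rows_v1 = db.execute("SELECT name FROM emp_v1").fetchall()
```

A query spanning the full history would union both versions, each row interpreted under the schema version it was recorded with, which is exactly why automatic transfer would blur the temporal intervals.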

Keywords: Schema evolution, schema versioning, temporal databases, multi-temporal databases, application time, and database conversion.

Received July 31, 2010; accepted October 24, 2010

Secure and Efficient SIP Authentication Scheme for Converged VoIP Networks

Qiong Pu1,2 and Shuhua Wu3
1School of Electronics and Information Engineering, Tongji University, China
2Science Institute, Information Engineering University, China
3Department of Network Engineering, Information Science and Technology Institute, China

 
Abstract: The Session Initiation Protocol (SIP) is commonly used to establish Voice over IP (VoIP) calls. Most recently, Yoon et al. proposed a new secure and efficient SIP authentication scheme for converged VoIP networks based on Elliptic Curve Cryptography (ECC). In this paper, we first demonstrate that this recently proposed SIP authentication scheme is insecure against off-line password-guessing attacks. We then propose an enhanced SIP authentication scheme that enjoys provable security while remaining simple and efficient. The result is therefore better suited as a candidate SIP authentication scheme.
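Why off-line password guessing is fatal can be shown in a few lines. The sketch below is a deliberately weak, made-up authenticator (not Yoon et al.'s scheme): once an attacker has captured one transcript whose only secret input is the password, every dictionary candidate can be tested locally, with no further contact with the server.

```python
import hashlib

def transcript(password, nonce):
    """A deliberately weak authenticator: hash of the password and a nonce
    that travels in the clear (illustrative, not any published scheme)."""
    return hashlib.sha256(password + nonce).hexdigest()

# Values an eavesdropper can capture from one protocol run.
nonce = b"server-nonce"
observed = transcript(b"rose", nonce)

# Off-line guessing: replay the computation over a dictionary until one
# candidate reproduces the observed value -- no on-line queries needed.
dictionary = [b"admin", b"1234", b"rose", b"voip"]
cracked = next(pw for pw in dictionary if transcript(pw, nonce) == observed)
```

Provably secure password-based schemes avoid this by ensuring no transcript value is a deterministic, verifiable function of the password alone.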


Keywords: Voice over internet protocol, session initiation protocol, elliptic curve, and authentication.

Received August 30, 2010; accepted October 24, 2010


Single Image Face Recognition Using Laplacian of Gaussian and Discrete Cosine Transforms

Muhammad Sharif1, Sajjad Mohsin1, Muhammad Younas Javed2, and Muhammad Atif Ali1
1Department of Computer Sciences, COMSATS Institute of Information Technology, Pakistan
2Department of Computer Engineering, National University of Science and Technology, Pakistan

 
Abstract: This paper presents a single-image face recognition approach based on the Laplacian of Gaussian (LOG) and the Discrete Cosine Transform (DCT). The proposed concept addresses a major area of concern in face recognition, i.e., the single-image-per-person problem, where only one image per person is available at training time. To address the problem, the paper makes use of the filtering and transform properties of LOG and DCT to recognize faces. As opposed to conventional methods, the proposed idea works at the pre-processing stage by filtering images up to four levels and then using the filtered image as input to the DCT for feature extraction based on the mid-frequency values of the image. The covariance matrix is then computed from the mean of the DCT, and principal component analysis is performed. Finally, a distinct feature vector for each image is computed using the top eigenvectors in conjunction with the two LOG and DCT images. The experimental comparison for LOG (DCT) was conducted on standard datasets such as ORL, Yale, PIE, and MSRA, and shows that the proposed technique provides better recognition accuracy than previous conventional single-image-per-person methods such as (PC)2A, PCA, 2DPCA, and B-2DPCA. With over 97% recognition accuracy, the paper thus contributes a new, enriched feature extraction method at the pre-processing stage to address the limitations of facial recognition systems.
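The LOG-then-DCT pre-processing step can be sketched with SciPy. The band boundaries, the sigma, and the toy 8x8 "face" are illustrative assumptions; the paper's four-level filtering and PCA stages are omitted.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_laplace

def log_dct_features(img, sigma=1.0, lo=2, hi=10):
    """LoG-filter the image, take its 2-D DCT, and keep the mid-frequency
    band lo <= i+j < hi as the feature vector (band limits are illustrative)."""
    filtered = gaussian_laplace(np.asarray(img, dtype=float), sigma=sigma)
    coeffs = dctn(filtered, norm="ortho")
    i, j = np.indices(coeffs.shape)
    return coeffs[(i + j >= lo) & (i + j < hi)]

face = np.zeros((8, 8))
face[3:5, 3:5] = 1.0          # toy image standing in for a face crop
features = log_dct_features(face)
```

Discarding the lowest frequencies removes illumination bias while discarding the highest removes noise, which is the usual motivation for keeping only mid-frequency DCT coefficients.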


Keywords: Single image, face, recognition, DCT, LOG, and mid frequency values.


Received August 31, 2010; accepted October 24, 2010


Effective Unsupervised Arabic Word Stemming: Towards an Unsupervised Radicals Extraction

Ahmed Khorsi
Department of Computer Science, College of Computer and Information Science, Al-Imam Mohammed Ibn Saud Islamic University, Kingdom of Saudi Arabia

 
Abstract: This paper presents a new, totally unsupervised stemming approach for classical Arabic that is 90% effective. The stemming is meant to be a preparatory step toward unsupervised root (i.e., radicals) extraction. As learning input, our stemming system requires no linguistic knowledge, only plain classical Arabic text. Once the learning input is analyzed, the system is able to extract the strongest segment of a given length, namely the stem. We start with a definition of the targeted stem; then we show how our system achieves about 90% true positives after learning from fewer than 15,000 words. Unlike other unsupervised approaches, ours does not assume the input text is perfect, and it deals efficiently with occasional (in practice, very frequent) misspellings. The test corpus we used is a definitive reference for classical Arabic, and its labeling was done rigorously by a team of experts.
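One simple reading of "the strongest segment of a given length" is the word's most corpus-frequent character segment. The sketch below illustrates that idea on transliterated toy words; the corpus, the segment length, and the strength criterion are all illustrative assumptions, not the paper's exact definitions.

```python
from collections import Counter

def segment_counts(words, length):
    """Corpus frequency of every character segment of the given length."""
    counts = Counter()
    for w in words:
        for i in range(len(w) - length + 1):
            counts[w[i:i + length]] += 1
    return counts

def strongest_segment(word, counts, length):
    """Stem candidate: the word's most corpus-frequent segment."""
    segs = [word[i:i + length] for i in range(len(word) - length + 1)]
    return max(segs, key=lambda s: counts[s]) if segs else word

# Toy transliterated corpus built around the root k-t-b ("to write").
corpus = ["katab", "kataba", "maktub", "kitab", "yaktubu"]
counts = segment_counts(corpus, 3)
stem = strongest_segment("maktub", counts, 3)
```

Because frequency is gathered over the whole corpus, a few misspelled words barely shift the counts, which hints at why such a statistic is robust to imperfect input text.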


Keywords: Computational morphology, machine learning, natural language processing, classical Arabic, and Semitic languages.



Received September 7, 2010; accepted October 24, 2010


Automatic Mapping of MPEG-7 Descriptions to Relational Database

Ala’a Al-zoubi1 and Mohammad Al-zoubi2
1Department of Computer Information Systems, Irbid National University, Jordan
2Department of Computer Graphics, Princess Sumaya University for Technology, Jordan

 
Abstract: MPEG-7 is an important standard for the description of multimedia content, and a variety of applications based on MPEG-7 media descriptions are expected to be set up in the near future. Basically, MPEG-7 media descriptions are XML documents following media description schemes defined with a variant of XML Schema. Efficient storage mechanisms for large amounts of MPEG-7 descriptions are therefore required. Because of its many advantages, a relational DBMS is a natural choice for storing such XML documents. However, existing RDBMS-based XML storage solutions cannot fulfill all the requirements of MPEG-7 description management. In this paper, we present a new automatic approach, called RelMPEG-7, for mapping MPEG-7 media descriptions to a relational database. RelMPEG-7 automatically generates the relational database tables, along with the appropriate column datatypes and constraints, by automatically analyzing the XML Schema to extract the necessary information. This approach is intended to meet almost all the essential requirements for storing MPEG-7 documents in a relational database, and to complement existing solutions.
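The core transformation (XML Schema complex types to relational tables with typed columns) can be sketched with the standard library. The type mapping, the toy schema, and the one-table-per-complexType rule are illustrative simplifications, not RelMPEG-7's actual mapping rules.

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"
TYPE_MAP = {"xs:string": "VARCHAR(255)", "xs:int": "INTEGER",
            "xs:float": "REAL", "xs:boolean": "BOOLEAN"}   # illustrative map

def schema_to_ddl(xsd_text):
    """Emit one CREATE TABLE per complexType; each nested element becomes
    a column whose SQL type comes from TYPE_MAP."""
    root = ET.fromstring(xsd_text)
    ddl = []
    for ct in root.iter(XSD_NS + "complexType"):
        cols = [f'{el.get("name")} {TYPE_MAP.get(el.get("type"), "TEXT")}'
                for el in ct.iter(XSD_NS + "element")]
        ddl.append(f'CREATE TABLE {ct.get("name")} ({", ".join(cols)});')
    return ddl

xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="MediaTitle">
    <xs:sequence>
      <xs:element name="Title" type="xs:string"/>
      <xs:element name="Year" type="xs:int"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>"""
statements = schema_to_ddl(xsd)
```

A full mapper must also handle inheritance, repeated elements (which need child tables and foreign keys), and facet constraints, which is where the approach's real work lies.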


Keywords: MPEG-7, multimedia database, and mapping XML to RDB schema.

Received October 24, 2010; accepted January 3, 2011


Combining Neural Networks for Arabic Handwriting Recognition

Chergui Leila1, Kef Maamar2, and Chikhi Salim3
1Department of Computer Sciences, University Larbi Ben Mhidi, Algeria
2Department of Computer Sciences, University Hadj Lakhdar, Algeria
3Department of Computer Sciences, University Mentouri, Algeria

 
Abstract: Combining classifiers is an approach that has been shown to be useful on numerous occasions when striving for further improvement over the performance of individual classifiers. In this paper, we present a Multiple Classifier System (MCS) for off-line Arabic handwriting recognition. The MCS combines three neural recognition systems: a Fuzzy ART network, used for the first time in Arabic OCR; a multi-layer perceptron; and radial basis functions. We use various feature sets based on Tchebichef, Hu, and Zernike moments. For deriving the final decision, different combining schemes are applied. The best combination ensemble has a recognition rate of 90.10%, significantly higher than the 84.31% achieved by the best individual classifier. To demonstrate the high performance of the classification system, the results are compared with three studies using the IFN/ENIT database.
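The simplest of the combining schemes mentioned, majority voting, fits in a few lines. The per-classifier label lists below are invented stand-ins for the outputs of the three recognizers; the abstract does not specify which combining rule won.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by majority vote per sample."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical label outputs of the three recognizers on three samples.
fuzzy_art = ["alif", "ba", "ta"]
mlp       = ["alif", "ta", "ta"]
rbf       = ["ba",   "ba", "ta"]
combined = majority_vote([fuzzy_art, mlp, rbf])
```

Voting helps exactly when the individual classifiers make uncorrelated errors, which is why the ensemble of three structurally different networks can beat its best member.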



Keywords: Multiple classifier system, Arabic recognition, neural networks, Tchebichef moments, Hu moments, and Zernike moments.


Received December 20, 2010; accepted March 1, 2011
