Monday, 30 July 2012 04:07

A Survey: Linear and Nonlinear PCA Based Face Recognition Techniques

Jamal Shah, Muhammad Sharif, Mudassar Raza, and Aisha Azeem
Department of Computer Sciences, COMSATS Institute of Information Technology, Pakistan

 

Abstract:
Face recognition is considered to be one of the most reliable biometrics when security issues are taken into consideration. For this, feature extraction becomes a critical problem. Different methods are used for the extraction of facial features, which are broadly classified into linear and nonlinear subspace methods. Among the linear methods are Linear Discriminant Analysis (LDA), Bayesian methods (MAP and ML), Discriminative Common Vectors (DCV), Independent Component Analysis (ICA), Tensorfaces (Multi-Linear Singular Value Decomposition (SVD)), Two-Dimensional PCA (2D-PCA), Two-Dimensional LDA (2D-LDA), etc., but Principal Component Analysis (PCA) is considered to be one of the classic methods in this field. Based on this, a brief comparison of the PCA family is drawn, of which PCA, Kernel PCA (KPCA), Two-Dimensional PCA (2DPCA) and Two-Dimensional Kernel PCA (2DKPCA) are of major concern. Based on the literature review, the recognition performance of the PCA family is analyzed using the YALE, YALE-B, ORL and CMU databases. Concluding remarks about the testing criteria set by different authors in the literature reveal that the kernel variants of PCA (KPCA and 2DKPCA) produce better results than simple PCA and 2DPCA on the aforementioned datasets.
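
A minimal sketch of the kind of comparison the survey describes, contrasting linear PCA (eigenfaces) with a nonlinear kernel PCA projection followed by nearest-neighbour matching. The data loader, split ratio, component count and classifier are illustrative assumptions, not the evaluation protocol of the surveyed papers.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def recognize(X, y, kernel=None, n_components=50):
    """X: (n_samples, n_pixels) flattened face images, y: subject labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    if kernel is None:
        projector = PCA(n_components=n_components)                        # linear eigenfaces
    else:
        projector = KernelPCA(n_components=n_components, kernel=kernel)   # nonlinear KPCA
    Z_train = projector.fit_transform(X_train)
    Z_test = projector.transform(X_test)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)       # nearest-neighbour matching
    return clf.score(Z_test, y_test)

# Example usage with a hypothetical loader for YALE/ORL-style data:
# X, y = load_faces()
# print(recognize(X, y), recognize(X, y, kernel="rbf"))
```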


Keywords: Linear, non-linear, PCA, two dimensional PCA (2DPCA), two dimensional kernel PCA (2DKPCA), facial feature extraction, face recognition, survey.
 
Received November 21, 2010; accepted May 24, 2011
Monday, 30 July 2012 04:03

Bi-Level Weighted Histogram Equalization for Scalable Brightness Preservation and Contrast Enhancement for Images

Shanmugavadivu Pichai1, Balasubramanian Krishnasamy2, and Somasundaram Karuppanagounder1
1Department of Computer Science and Applications, Gandhigram Rural Institute-Deemed University, India
2Department of Computer Applications, PSNA College of Engineering and Technology, India

 

Abstract:
A new technique, Bi-Level Weighted Histogram Equalization (BWHE), is proposed in this paper for better brightness preservation and contrast enhancement of any input image. This technique applies a bi-level weighting procedure to Brightness preserving Bi-Histogram Equalization (BBHE) to enhance the input images. The core idea of this method is to first segment the histogram of the input image into two sub-histograms based on its mean, and then to apply weighting constraints to each sub-histogram separately. Finally, those two histograms are equalized independently and their union produces a brightness-preserved and contrast-enhanced output image. This technique is found to preserve the brightness and enhance the contrast of input images better than its contemporary methods.
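
A minimal sketch of mean-separated bi-histogram equalization with a simple weighting step, in the spirit of the method described above. The power-law weighting function is an illustrative assumption; the paper's exact weighting constraints may differ.

```python
import numpy as np

def bi_level_weighted_he(img, gamma=0.75):
    """img: 2D uint8 grayscale array; returns an equalized uint8 array."""
    mean = int(img.mean())
    out = np.zeros(img.shape)
    for lo, hi, mask in [(0, mean, img <= mean), (mean + 1, 255, img > mean)]:
        if not mask.any():
            continue
        vals = img[mask].astype(int)
        hist = np.bincount(vals - lo, minlength=hi - lo + 1).astype(float)
        hist = hist ** gamma                       # weighting constraint (assumed power-law form)
        cdf = np.cumsum(hist) / hist.sum()
        lut = lo + cdf * (hi - lo)                 # equalize within this sub-range only
        out[mask] = lut[vals - lo]
    return out.astype(np.uint8)                    # union of the two equalized sub-histograms
```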


Keywords: Contrast enhancement, brightness preservation, histogram equalization, peak signal to noise ratio, absolute mean brightness error, structural similarity index measure.
 
Received December 2, 2011; accepted May 22, 2011
  

Full Text

Monday, 30 July 2012 03:59

Novel Compression System for Hue-Saturation and Intensity Color Space

Noura Semary1, Mohiy Hadhoud1, Hatem Abdul-Kader1, and Alaa Abbas2
1Faculty of Computers and Information, Menofia University, Egypt
2Faculty of Electronic Engineering, Menofia University, Egypt

 

Abstract:
Common compression systems treat color image channels similarly. Nonlinear color models like Hue-Saturation-Value/Brightness/Luminance/Intensity (HSV/HSB/HSL/HSI) have special features in each channel. In this paper, a new hybrid compression system is proposed for encoding color images in the HSI color model. The proposed encoding system handles each channel with a suitable compression technique to obtain encoded images of smaller size and higher decoding quality than the traditional encoding methods. Three encoding techniques are combined in the proposed system: an object compression technique for the hue channel, Luma (Y) Intensity (I) Difference (D) coding for the saturation channel, and the standard JPEG2000 encoding technique for the intensity channel. The results demonstrate the proposed architecture and give a considerable compression ratio with good decoding quality.
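
A minimal sketch of the per-channel dispatch idea: convert to an HSV/HSI-style space and route each channel to its own encoder. The stand-in encoders below are illustrative assumptions; the paper uses an object-based coder for hue, YID-based coding for saturation, and JPEG2000 for intensity.

```python
import numpy as np
from skimage.color import rgb2hsv

def encode_hsi(rgb, encoders):
    """rgb: float array in [0, 1], shape (H, W, 3); encoders: dict of callables."""
    hsv = rgb2hsv(rgb)                      # HSV stands in for the HSI model here
    return {
        "hue":        encoders["hue"](hsv[..., 0]),
        "saturation": encoders["saturation"](hsv[..., 1]),
        "intensity":  encoders["intensity"](hsv[..., 2]),
    }

# Example with trivial stand-in encoders (uniform quantization to 4/5/8 bits):
quant = lambda bits: (lambda ch: np.round(ch * (2 ** bits - 1)).astype(np.uint8))
# streams = encode_hsi(img, {"hue": quant(4), "saturation": quant(5), "intensity": quant(8)})
```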


Keywords: Hue, saturation, intensity, encoding, compression.
 
Received March 18, 2010; accepted July 28, 2011
  

Full Text

Monday, 30 July 2012 03:55

A Hybrid Image Compression Scheme Using DCT and Fractal Image Compression

Chandan Rawat1 and Sukadev Meher2
1Department of Electronics and Communication Engineering, National Institute of Technology, India
2Department of Electronics and Computer Engineering, National Institute of Technology, India

 

Abstract:
Digital images are used in many domains. A large amount of data is needed to represent digital images, so their transmission and storage can be time-consuming and infeasible. Hence, the information in the images is compressed by extracting only the visible elements. Image compression thus reduces storage and transmission costs: the size of a graphics file is reduced in bytes without degrading the quality of the image beyond an acceptable level. Several methods such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), etc., are used for compressing images, but these methods introduce blocking artifacts. In order to overcome this difficulty and to compress the image efficiently, a combination of DCT and fractal image compression techniques is proposed. DCT is employed to compress the color image, while fractal image compression is employed to avoid the repetitive compression of analogous blocks. Analogous blocks are found using the Euclidean distance measure. The given image is then encoded by means of the Huffman encoding technique. The implementation results show the effectiveness of the proposed scheme in compressing color images. A comparative analysis is also performed to show that the proposed system is competitive in compressing images in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Universal Image Quality Index (UIQI) measurements.
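
A minimal sketch of two pieces the abstract mentions: 8x8 block DCT and detection of "analogous" blocks by Euclidean distance, so that a repeated block could be coded once and referenced elsewhere. Quantization, zig-zag scanning, Huffman coding and the full fractal step are omitted; block size and threshold are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def block_dct(img, bs=8):
    """2D DCT of non-overlapping bs x bs blocks of a grayscale image."""
    h, w = img.shape[0] // bs * bs, img.shape[1] // bs * bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def analogous_blocks(coeffs, threshold=50.0):
    """For each block, the index of an earlier block within `threshold`
    Euclidean distance, or its own index if none is close enough."""
    flat = coeffs.reshape(-1, coeffs.shape[-2] * coeffs.shape[-1])
    refs = []
    for i, b in enumerate(flat):
        d = np.linalg.norm(flat[:i] - b, axis=1) if i else np.array([])
        refs.append(int(d.argmin()) if d.size and d.min() < threshold else i)
    return refs
```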


Keywords: Image compression, DCT, fractal image compression, quantization, zig-zag scanning, huffman coding.
 
Received May 23, 2010; accepted, 2011
  

Full Text

Monday, 30 July 2012 03:51

Software Protection via Hiding Function Using Software Obfuscation

Venus Samawi1 and Adeeb Sulaiman2
1Department of Computer Science, Al Al-Bayt University, Jordan
2College of Administrative Science, Applied Science University, Kingdom of Bahrain

 

Abstract:
An Application Service Provider (ASP) is a business that makes computer-based services available to clients (typically small and medium-sized businesses) over a network. The usual ASP sells a large application to large enterprises, but also provides a pay-as-you-go model for smaller clients. One of the main problems with the ASP model is insufficient security to resist attacks and to guarantee pay-as-you-go charging. Function hiding can be used to protect algorithms and to assure charging clients on a per-usage basis. Encryption functions that can be executed without prior decryption (the function hiding protocol) give a good solution to the problems of software protection. The function hiding protocol faces a problem if the same encryption scheme is used both for encrypting some data about the function and for the output of the encrypted function: in such a case, an attacker could reveal the encrypted data easily, thereby compromising its confidentiality. This paper aims to develop a software protection system based on the function hiding protocol combined with software obfuscation that overcomes these problems. The suggested system is a multi-client system that allows charging clients on a per-usage basis (pay-as-you-go) and satisfies both confidentiality and integrity for both the ASP and the client.
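
A toy illustration, not the paper's protocol, of the core idea of computing on encrypted data without prior decryption: textbook RSA is multiplicatively homomorphic, so a server can multiply two hidden values it cannot read. Real function-hiding schemes are far more involved; the keys here are tiny and insecure on purpose.

```python
p, q, e = 61, 53, 17                       # toy parameters, insecure by design
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent (Python 3.8+ modular inverse)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 12
c = (enc(a) * enc(b)) % n                  # server-side work on ciphertexts only
assert dec(c) == (a * b) % n               # client recovers f(a, b) = a * b
```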


Keywords: Software protection, function hiding, software obfuscation, ASP.
 
Received July 10, 2011; accepted May 22, 2012
  

Full Text

Monday, 30 July 2012 03:21

Modeling Fuzzy Data with Fuzzy Data Types in Fuzzy Database and XML Models

Li Yan
School of Software, Northeastern University, China

 

Abstract:
Various fuzzy data models, such as fuzzy relational databases, fuzzy object-oriented databases, fuzzy object-relational databases and fuzzy XML, have been proposed in the literature in order to represent and process fuzzy information in databases and XML. But little work has been done on modeling fuzzy data types. Actually, in the fuzzy data models each fuzzy value is associated with a fuzzy data type, and explicit representations of fuzzy data types are the foundation of fuzzy data processing. To fill this gap, in this paper we propose several fuzzy data types, including fuzzy simple data types, fuzzy collection data types and fuzzy defined data types. We further investigate how to declare the fuzzy data types in the fuzzy object-oriented database model and in fuzzy XML Schema. The proposed fuzzy data types can meet the requirements of modeling fuzzy data in fuzzy databases and fuzzy XML.
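
A conceptual sketch, not the paper's notation, of a fuzzy simple data type as a possibility distribution over crisp values, with a fuzzy collection built on top of it; in the paper these would be declared in a fuzzy object-oriented model or fuzzy XML Schema rather than as Python classes.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FuzzySimple:
    """A fuzzy value: each candidate crisp value carries a membership degree in [0, 1]."""
    dist: Dict[str, float] = field(default_factory=dict)

    def support(self, alpha: float = 0.0) -> List[str]:
        """Crisp values whose membership degree exceeds the alpha threshold."""
        return [v for v, mu in self.dist.items() if mu > alpha]

@dataclass
class FuzzyCollection:
    """A fuzzy collection data type: a list of fuzzy simple values."""
    items: List[FuzzySimple] = field(default_factory=list)

# Example: an attribute (say, "age") known only imprecisely.
age = FuzzySimple({"30": 0.7, "31": 1.0, "32": 0.6})
print(age.support(alpha=0.65))   # ['30', '31']
```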


Keywords: Database models, fuzzy data, fuzzy data types, fuzzy databases, fuzzy XML, modeling.
 
Received March 17, 2011; accepted May 24, 2011
  

Full Text

Monday, 30 July 2012 03:18

Efficient High Dimension Data Clustering using Constraint-Partitioning K-Means Algorithm

Aloysius George
Managing Director, Research & Development, India

 

Abstract:
With the ever-increasing size of data, clustering of large dimensional databases poses a demanding task that should satisfy both the requirements of computation efficiency and result quality. In order to achieve both, clustering of a feature space rather than the original data space has gained importance among data mining researchers. The Constraint-Partitioning K-Means clustering algorithm, applied directly to high dimensional data sets, does not cluster effectively or efficiently because of the intrinsic sparsity of high dimensional data, and it produces indefinite and inaccurate clusters. Hence, we carry out two steps for clustering a high dimension dataset. Initially, we perform dimensionality reduction on the high dimension dataset using Principal Component Analysis as a preprocessing step to data clustering. Later, we apply the Constraint-Partitioning K-Means clustering algorithm to the dimension-reduced dataset to produce good and accurate clusters. The performance of the approach is evaluated with high dimensional datasets such as the Parkinson's dataset and the Ionosphere dataset. The experimental results show that the proposed approach is very effective in producing accurate and precise clusters.
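
A minimal sketch of the two-step pipeline: PCA for dimensionality reduction, then clustering of the reduced data. Plain KMeans stands in for the constraint-partitioning variant, whose constraint handling is not shown here, and the Iris dataset stands in for the Parkinson's and Ionosphere data.

```python
from sklearn.datasets import load_iris          # stand-in dataset for illustration
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_reduced = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_reduced)
print(labels[:10])                              # cluster assignments on the reduced data
```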


Keywords: Clustering, dimensionality reduction, principal component analysis, constraint-partitioning k-means algorithm, clustering accuracy, parkinson's dataset, ionosphere dataset.
 
Received May 4, 2011; accepted July 28, 2011
  

Full Text

Monday, 30 July 2012 03:14

Interactive Query Expansion Using Concept-Based Directions Finder Based on Wikipedia

Yuvarani Meiyappan1 and SrimanNarayana Iyengar2
1Lead, Education & Research, Infosys Limited, India
2School of Computing Science and Engineering, VIT University, India

 

Abstract:
Despite the advances in information retrieval, search engines still return imprecise or poor results, mainly due to the quality of the query being submitted. Formulating a query that expresses their information need has always been challenging for users. In this paper, we propose an interactive query expansion methodology using a Concept-Based Directions Finder (CBDF). The approach determines the directions in which the search can be continued by the user, using Explicit Semantic Analysis (ESA) for a given query. The CBDF identifies the relevant terms, with a corresponding label, for each of the directions found, based on the content and link structure of Wikipedia. The relevant terms identified, along with their labels, are suggested to the user for query expansion through the proposed new visual interface. The visual interface, named terms mapper, accepts the query and displays the potential directions and a group of relevant terms along with the label for the direction chosen by the user. We evaluated the results of the proposed approach and the visual interface for the identified queries. The experimental results show that the approach produces a good Mean Average Precision (MAP) for the queries chosen.
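
A minimal ESA-style sketch, not the paper's CBDF pipeline: represent the query in a TF-IDF concept space built from Wikipedia-like article texts, rank the closest concepts as candidate "directions", and surface their titles as labels for expansion. The tiny corpus below is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {                       # stand-ins for Wikipedia articles
    "Jaguar (animal)": "jaguar big cat feline predator rainforest prey",
    "Jaguar Cars":     "jaguar car automobile luxury vehicle engine brand",
    "Apple Inc.":      "apple company iphone computer software hardware",
}
titles = list(concepts)
vec = TfidfVectorizer()
C = vec.fit_transform(list(concepts.values()))     # concept-by-term matrix

query = "jaguar speed"
q = vec.transform([query])
scores = cosine_similarity(q, C).ravel()
directions = sorted(zip(titles, scores), key=lambda t: -t[1])[:2]
print(directions)                                  # candidate directions with labels
```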


Keywords: Interactive query expansion, term suggestion, direction finder, term extractor, web search, Wikipedia.
 
Received August 13, 2011; accepted December 30, 2011
  

Full Text

Monday, 30 July 2012 03:09

An Improved Version of the Visual Digital Signature Scheme

Abdullah Jaafar and Azman Samsudin
School of Computer Sciences, Universiti Sains Malaysia, Malaysia

 

Abstract:
The issue of authenticity in data transfer is very important in many communications. In this paper, we propose an improved version of the visual digital signature scheme with enhanced security. The improvement is based on Yang's non-expansion visual cryptography technique and Boolean operations. The security of the improved version of the visual digital signature scheme is assured by the K-SAT (3-SAT and 4-SAT) NP-hard problem, in contrast with the security of the existing scheme, which is based on the difficulty of solving random Boolean OR operations. Besides the improvement in security, the proposed scheme is also more efficient in generating shares, compared to the existing scheme in which the probability of generating black shares is high.
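
A minimal sketch of the underlying Boolean-share idea, a (2,2) XOR scheme without pixel expansion, rather than the paper's signature construction: one random share plus the XOR of the secret with it reconstructs the secret exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)         # stand-in binary signature image

share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)   # random share
share2 = secret ^ share1                                         # second share hides the secret
assert np.array_equal(share1 ^ share2, secret)                   # combining (XOR) recovers it
```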


Keywords: Digital signature, non-expansion visual cryptography, boolean operation, visual share.
 
Received November 11, 2011; accepted May 22, 2012
  

Full Text

Monday, 30 July 2012 02:55

The Statistical Quantized Histogram Texture Features Analysis for Image Retrieval Based on Median and Laplacian Filters in the DCT Domain

Fazal Malik and Baharum Baharudin
Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia
 

Abstract:
An effective Content-Based Image Retrieval (CBIR) system is based on efficient feature extraction and accurate retrieval of similar images. Images enhanced by proper filtering methods can also play an important role in image retrieval in a compressed frequency domain, since currently most images are represented in a compressed format using the Discrete Cosine Transform (DCT) block transformation. In compression, some crucial information is lost, and the perceptual information that remains, which carries significant energy, is what is available for retrieval in a compressed domain. In this paper, statistical texture features are extracted from the enhanced images in the DCT domain using only the DC and the first three AC coefficients of the DCT blocks of an image, which carry the most significant information. We study the effect of filters on image retrieval using texture features. We perform an experimental comparison of the results in terms of accuracy for the median, median-with-edge-extraction and Laplacian filters, using quantized histogram texture features in the DCT domain. Experiments on the Corel database using the proposed approach give improved results on the basis of filters; more specifically, the Laplacian filter with sharpened images gives good performance in the retrieval of JPEG images as compared to the median filter in the DCT frequency domain.
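
A minimal sketch of the feature step described above: take the DC and the first three AC coefficients (zig-zag order) of each 8x8 DCT block and build quantized histograms from them. The bin count is an assumption, and the median/Laplacian filtering stage is omitted.

```python
import numpy as np
from scipy.fft import dctn

ZIGZAG4 = [(0, 0), (0, 1), (1, 0), (2, 0)]        # DC plus first three AC positions

def dct_histogram_features(img, bs=8, bins=32):
    """Quantized histograms of the DC and first three AC coefficients of bs x bs DCT blocks."""
    h, w = img.shape[0] // bs * bs, img.shape[1] // bs * bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
    feats = []
    for r, c in ZIGZAG4:
        vals = coeffs[..., r, c].ravel()
        hist, _ = np.histogram(vals, bins=bins)
        feats.append(hist / max(hist.sum(), 1))   # normalized quantized histogram
    return np.concatenate(feats)                  # feature vector for similarity matching
```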


Keywords: CBIR, median filter, laplacian filter, statistical texture features, quantized histograms, DCT.
 
Received 25, 2012; accepted May 22, 2012
  

Full Text

Monday, 30 July 2012 02:55

Implementation and Comparative Analysis of the Fault Attacks on AES

Saleem Raza, Najmus Saqib Malik, Azfar Shakeel, and Majid Iqbal Khan
Department of Computer Science, COMSATS Institute of Information Technology, Pakistan

 

Abstract:
This research presents a survey, analysis, comparison and implementation of one of the most threatening new kinds of cryptographic attacks, known as fault attacks or implementation attacks, against the Advanced Encryption Standard (AES) algorithm. The AES algorithm is used in various applications and is considered the most secure against conventional cryptanalytic attacks, which exploit algebraic or mathematical weaknesses in crypto-systems. Fault attacks are based on interrupting the execution of the algorithm in such a way that it produces faulty cipher output, which can then be analysed to break the algorithm. This research surveys various fault attacks and provides detailed implementations of three of them for demonstration purposes. It maps the complex mathematical analysis into programming algorithms for ease of implementation. Finally, it compares various types of attacks based on our devised criteria of efficiency, flexibility and usability of the attack methods.


Keywords: Fault attack, AES, cryptanalysis.
 
Received March 26, 2012; accepted May 22, 2012
  

Full Text

Monday, 30 July 2012 02:54

A Robust Multiwavelet-Based Watermarking Scheme for Copyright Protection of Digital Images Using Human Visual System

Padmanabhareddy Vundela1 and Varadarajan Sourirajan2
1Department of Information Technology, Vardhaman College of Engineering, India
2Department of Electrical & Electronic Engineering, S.V. University College of Engineering, India

 

Abstract:
The contemporary period of information technology facilitates simple duplication, manipulation and distribution of digital data. This has made it essential that the rightful ownership of digital images be protected efficiently. For content owners and distributors, content authentication of digital images and copyright protection have become necessary concerns. A potential solution to this issue is offered by digital watermarking. To ensure efficient copyright protection, a watermarking scheme should possess characteristics such as robustness and imperceptibility. Integrating Human Visual System (HVS) models within the watermarking scheme helps to attain effective copyright protection. Currently, wavelet-domain watermarking schemes are the main focus of watermarking research. In contrast to prior works, an imperceptible and efficient wavelet-based watermarking scheme to safeguard the copyright of images is presented here. By making a few modifications to our prior works, we present a new watermarking scheme that incorporates HVS models for watermark embedding. Additionally, we apply the GHM multiwavelet transform in the watermarking process. Based on the distance computed using the Hausdorff distance measure, the image components for embedding are selected, and a new watermark embedding procedure is designed that multiplies the embedding strength by a random matrix generated from a key image; this key image is used as a primary element in both the embedding and extraction processes. Correlation coefficient computation is used in the watermark extraction process. The experimental results illustrate the robustness and imperceptibility of the proposed approach. From the results, we can see that the proposed watermarking process achieves a correlation value of 0.9848 even when the watermarked image is affected by Gaussian noise.
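
A minimal sketch of correlation-based wavelet-domain watermarking in the spirit described above: embed a key-seeded random pattern into detail coefficients with an embedding strength, then detect it by correlation. The discrete wavelet transform ('haar') stands in for the GHM multiwavelet, and the HVS model and Hausdorff-based component selection are omitted.

```python
import numpy as np
import pywt

def embed(img, key=42, strength=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    w = np.random.default_rng(key).standard_normal(cD.shape)   # key-seeded watermark pattern
    return pywt.idwt2((cA, (cH, cV, cD + strength * w)), "haar"), w

def detect(img, w):
    _, (_, _, cD) = pywt.dwt2(img.astype(float), "haar")
    return np.corrcoef(cD.ravel(), w.ravel())[0, 1]             # correlation coefficient

# Example usage on a smooth synthetic host image:
# x = np.linspace(0, np.pi, 128)
# host = 128 + 100 * np.outer(np.sin(x), np.cos(x))
# marked, w = embed(host)
# print(detect(marked, w), detect(host, w))   # noticeably higher correlation on the marked image
```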


Keywords: Digital watermarking, copyright protection, HVS, robust, discrete wavelet transform, GHM multiwavelet transform, canny edge detection algorithm, hausdorff distance measure, correlation coefficient.
 
Received November 15, 2010; accepted May 24, 2011
  

Full Text

Monday, 30 July 2012 02:48

Semantic Adaptation of Multimedia Documents

Azze-Eddine Maredj and Nourredine Tonkin
Research Center on Scientific and Technical Information (CERIST), Ben Aknoun-Algiers, Algeria

 

Abstract:
A multimedia document may need to be presented on different platforms; for this, adaptation of its content is necessary. In this contribution, we make some proposals to improve and extend the semantic approach based on conceptual neighborhood graphs, in order to best preserve the proximity between the adapted and the original documents and to handle models that define delays and distances.


Keywords: Multimedia document adaptation, semantic adaptation, conceptual neighborhood graph, relaxation graph.
 
Received September 11, 2011; accepted May 22, 2012
  

Full Text
