
Frequency of Occurrence Analysis Attack and Its Countermeasure

Lip Yee Por
Faculty of Computer Science and Information Technology, University of Malaya, Malaysia

 

Abstract:
This paper addresses a newly discovered security threat, the Frequency of Occurrence Analysis (FOA) attack, in the searchmetics password authentication scheme. A countermeasure that utilises a Metaheuristic Randomisation Algorithm (MRA) is proposed to address the FOA attack. The proposed algorithm is presented, and an offline FOA attack simulation tool is developed to verify the effectiveness of the proposed method. In addition, a shoulder surfing test is conducted to evaluate how well the proposed method mitigates shoulder surfing attacks. The experimental results show that MRA is able to prevent FOA attacks and mitigate shoulder surfing attacks. Moreover, the proposed method provides a larger password space compared to the benchmark scheme.



Keywords: FOA, MRA, picture-based password, graphical authentication, shoulder surfing.
 
Received November 28, 2010; accepted May 24, 2011

Null Steering of Dolph-Chebychev Arrays Using Taguchi Method

Abdelmadjid Recioui and Hamid Bentarzi
Laboratory Signals and Systems, Department of Electrical Engineering and Electronics, Faculty of Engineering, University of Boumerdes Independence street Algeria
 

Abstract:
Dolph-Chebychev arrays are known to exhibit the best compromise between sidelobe level and directivity. However, they place a constraint on the null locations: any attempt to impose nulls or make them deeper will impact the directivity/sidelobe level trade-off. In this work, null placement in Dolph-Chebychev arrays through element position perturbation is carried out using the Taguchi method while preserving the array aperture. Several examples are considered for single, double, multiple and broad null placement to demonstrate the ability of the Taguchi method to explore the search space and reach the global optimum.
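A candidate set of perturbed element positions in this kind of nulling scheme is scored by evaluating the array factor magnitude at the desired null angle. The sketch below, with illustrative uniform positions and an assumed isotropic-element linear array, shows that evaluation step; the paper's actual search over perturbations uses Taguchi's orthogonal arrays, which are not reproduced here.

```python
# Evaluate the array factor of a linear array at a given angle, the
# quantity a position-perturbation nulling search would minimise at the
# null direction. Positions are in wavelengths; weights default to uniform.
import cmath, math

def array_factor(positions_wavelengths, theta_deg, weights=None):
    """|AF| of a linear array: AF = sum_n w_n * exp(j*2*pi*x_n*cos(theta))."""
    theta = math.radians(theta_deg)
    n = len(positions_wavelengths)
    w = weights or [1.0] * n
    af = sum(w[i] * cmath.exp(1j * 2 * math.pi * positions_wavelengths[i]
                              * math.cos(theta))
             for i in range(n))
    return abs(af)

# Uniform half-wavelength spacing, 8 elements, symmetric about the origin.
positions = [(i - 3.5) * 0.5 for i in range(8)]
broadside = array_factor(positions, 90.0)                    # main beam
null_level = array_factor(positions, math.degrees(math.acos(0.25)))
```

For this uniform array the first natural null falls where cos(theta) = 1/4; a perturbation search would instead drive `null_level` toward zero at an arbitrarily imposed angle.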

Keywords: Pattern nulling, dolph-chebychev arrays, taguchi method, element position perturbation.

 
Received May 22, 2010; accepted March 1, 2011

An n² + n MQV Key Agreement Protocol

Li-Chin Hwang1, Cheng-Chi Lee2, and Min-Shiang Hwang3
1Department of Computer Science and Engineering, National Chung Hsing University, Taiwan, R.O.C
2Department of Library and Information Science, Fu Jen Catholic University, Taiwan, R.O.C
3Department of Computer Science and Information Engineering Asia University, Taiwan, R.O.C
 

Abstract:
In this paper, a novel scheme for generating (n² + n) common secret keys in one session is proposed, with which two parties can encrypt and decrypt their communicated messages using a symmetric-key cryptosystem. The proposed scheme is based on the difficulty of the discrete logarithm problem. All the session keys are secure against known-key attacks, man-in-the-middle attacks, replay attacks and forgery attacks. The security and efficiency of the proposed scheme are presented. Compared with other schemes, the proposed scheme can generate more session keys in one session and is therefore more efficient.
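The core idea behind multiple-key agreement can be seen in a toy Diffie-Hellman sketch: if each party contributes n ephemeral public values, the n x n pairwise combinations already yield n² shared secrets. This is an illustrative simplification only; the paper's MQV construction additionally binds long-term authentication keys (the source of the extra n keys and the resistance to man-in-the-middle attacks), and real deployments use standardised groups and key derivation.

```python
# Toy multiple-key agreement: n ephemeral DH values per side give n^2
# pairwise shared secrets K[i][j] = g^(x_i * y_j) mod p.
import secrets

p = 2**127 - 1   # demo modulus (Mersenne prime); illustrative only
g = 2

def ephemerals(n):
    """Generate n (private, public) ephemeral Diffie-Hellman pairs."""
    priv = [secrets.randbelow(p - 2) + 1 for _ in range(n)]
    pub = [pow(g, x, p) for x in priv]
    return priv, pub

n = 3
a_priv, a_pub = ephemerals(n)   # Alice's ephemerals
b_priv, b_pub = ephemerals(n)   # Bob's ephemerals

# Each side derives the same n^2 keys from its own private values and
# the peer's public values.
alice_keys = [[pow(y, x, p) for y in b_pub] for x in a_priv]
bob_keys = [[pow(x, y, p) for y in b_priv] for x in a_pub]
```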

Keywords: Authenticated key agreement, cryptography, multiple-key agreement, MQV.
 
Received June 21, 2010; accepted March 1, 2011

A New Mixed Binarization Method Used in A Real Time Application of Automatic Business Document and Postal Mail Sorting

Djamel Gaceb, Véronique Eglin and Frank Lebourgeois
 LIRIS Laboratory, National Institute of Applied Science (INSA of Lyon), France
 

Abstract:
Binarization is applied at the first stage of the segmentation process and has a very strong impact on the performance of systems for the automatic sorting of company documents and mail. We begin this paper with a complete study of the different existing binarization mechanisms, which were developed to meet the needs of specific applications. These conventional approaches present weaknesses that are crucial to overcome, and unfortunately they remain unsuitable for our real-time application. Separating the thresholding stage from the text-zone location stage considerably increases the computation time and leads to over-segmentation of noise and paper texture in empty zones of the image. Indeed, none of the traditional methods (whether global or local) efficiently meets all the required conditions. We have optimized this stage by applying a local threshold only near text zones, which are located by the cumulated gradients method combined with multi-resolution analysis and mathematical morphology. We demonstrate the consistent performance of the proposed method on several types of business documents and mail with wide-ranging content and image quality.
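The idea of thresholding only near located text zones can be sketched minimally as follows. The mask and the window statistics here are simplified stand-ins (the paper locates text via cumulated gradients, multi-resolution and morphology); the point is that pixels outside the mask are never thresholded, which is what saves time and avoids over-segmenting empty paper texture.

```python
# Local binarization restricted to a text-locator mask: masked pixels are
# thresholded against their local window mean; unmasked pixels stay white.

def local_binarize(image, mask, window=1):
    """image, mask: 2-D lists of equal size. Returns a 2-D list of 0/1,
    where 0 marks ink (pixel darker than its local mean) inside the mask."""
    h, w = len(image), len(image[0])
    out = [[1] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # outside text zones: skip, stays white
            ys = range(max(0, y - window), min(h, y + window + 1))
            xs = range(max(0, x - window), min(w, x + window + 1))
            vals = [image[j][i] for j in ys for i in xs]
            out[y][x] = 0 if image[y][x] < sum(vals) / len(vals) else 1
    return out

# Tiny synthetic patch: a dark "text" stroke on a brighter background.
patch = [
    [200, 200, 200, 200],
    [200,  40,  60, 200],
    [200,  50,  45, 200],
    [200, 200, 200, 200],
]
mask = [[1] * 4 for _ in range(4)]
binary = local_binarize(patch, mask)
```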

Keywords: Binarization, text zones location, real time processing, automatic sorting of company documents and mail.
 
Received November 9, 2010; accepted May 24, 2011

Study the Best Approach Implementation and Codec Selection for VoIP Over Virtual Private Network

Mohd Ismail
Department of MIIT, University of Kuala Lumpur, Malaysia
 

Abstract:
In this research, we propose an architectural solution for implementing VoIP over Virtual Private Network (VPN) technology between branches in a campus environment. The objective of this evaluation is to measure VoIP performance quality over VPN technology. The study analyzes VPN performance with software-based solutions (on Windows and Linux operating systems) and with a hardware device (e.g., Juniper), in terms of the quality of service delivered by VoIP conversations between branches. It focuses on voice quality prediction, namely i) the performance of VoIP activity and ii) delay and packet loss. The main aim of implementing VoIP over VPN in a campus environment is to determine the best solution, between the software-based approach and the hardware device, for use in an operational environment. Based on the findings, VoIP over VPN through a hardware device delivers better performance than the software-based approach in terms of CPU utilization, MOS, delay and jitter.

Keywords: MOS, operating system, VoIP, VPN, system performance.
 
Received December 4, 2010; accepted May 24, 2011

Identification of Factors that Affect the Transition Time between CMMI Levels from Geographical Region Perspective: An Empirical Study

Fahad H. Alshammari and Rodina Ahmad
Department of Software Engineering, University of Malaya 50603, Kuala Lumpur, Malaysia
 

Abstract:
The software industry has become increasingly concerned with Software Process Improvement (SPI). Numerous studies have developed SPI standards and models, or identified factors that affect SPI success. However, these studies did not answer questions about the effect of geographical region on the transition time between Capability Maturity Model Integration (CMMI) levels, or about why there are obvious differences in organizations' transition times between CMMI levels. The objective of this research is to identify the impact of geographical region on the factors that affect the transition time between CMMI levels. We conducted 18 interviews in 15 different software companies to extract the factors, and compared them with those in the literature to avoid redundancy; based on this, we designed a questionnaire. We sent out 236 requests to participants, and 92 responded from 30 companies. We asked the participants to rank each factor on a five-point scale (high, medium, low, zero and not sure) to determine its effect. We identified 11 factors from both data sets that affect the transition time between CMMI levels, and also identified one new factor (staff turnover) that was not reported in the literature.

Keywords: Software process improvement; CMMI; factors; transition time; empirical study.
 
Received October 29, 2010; accepted March 1, 2011

A Knowledge-Based System for GIS Software Selection

Khalid Eldrandaly1 and Soad Naguib2
1Associate Professor of Information Systems, College of Computers and Informatics, Zagazig University, Egypt
2Lecturer, College of Computers and Informatics, Zagazig University, Egypt
 

Abstract:
Building a new GIS project is a major investment, and choosing the right GIS software package is critical to the success or failure of that investment. Because of the complexity of the problem, a number of decision-making tools must be deployed to arrive at the proper solution. In this study, a new decision-making approach to the GIS software selection problem is proposed by integrating expert systems with Multi-Criteria Decision Making techniques. To implement the proposed approach, a prototype knowledge-based system was developed in which expert systems and the Analytic Hierarchy Process (AHP) are integrated using Component Object Model (COM) technology. A typical case study is also presented to demonstrate the application of the prototype system.
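The AHP step of such a system can be sketched briefly: criteria are compared pairwise on Saaty's 1-9 scale, and the priority weights are the principal eigenvector of the comparison matrix. The criteria and comparison values below are hypothetical illustrations, not the paper's; a real system would elicit them from domain experts via the expert-system component.

```python
# AHP priority computation by power iteration: approximate the principal
# eigenvector of a pairwise comparison matrix, normalised to sum to 1.

def ahp_priorities(matrix, iterations=100):
    """Return the normalised principal eigenvector of a square
    pairwise-comparison matrix (the AHP criterion weights)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

# Hypothetical GIS-selection criteria: cost, functionality, vendor support.
# matrix[i][j] = preference of criterion i over criterion j (Saaty scale).
comparisons = [
    [1.0, 1 / 3, 3.0],
    [3.0, 1.0, 5.0],
    [1 / 3, 1 / 5, 1.0],
]
weights = ahp_priorities(comparisons)
```

With these illustrative judgements, functionality receives the largest weight, followed by cost, then vendor support.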

Keywords: GIS software selection, expert systems, AHP, Knowledge-based systems, multicriteria decision making.
 
Received September 26, 2010; accepted March 1, 2011

Using Quantum-Behaved Particle Swarm Optimization for Portfolio Selection Problem

Saeed Farzi1, Alireza Rayati Shavazi2, and Abbas Rezaei Pandari3
1Faculty of Computer Engineering, Islamic Azad University – Branch of Kermanshah, Kermanshah, Iran
2M.A. Graduated (Management, Financial), Isfahan University, Isfahan, Iran
3M .A. Graduated (Industrial Management), Tarbiat Modares University, Tehran, Iran
 

Abstract:
Swarm-based methods are popular for optimizing combinatorial problems such as the portfolio selection problem. In this paper, we propose an approach based on Quantum-Behaved Particle Swarm Optimization (QPSO) for the portfolio selection problem. Particle Swarm Optimization (PSO) is a well-known population-based swarm intelligence algorithm; QPSO combines the classical PSO philosophy with quantum mechanics to improve PSO's performance. In portfolio selection, investors simultaneously consider contradictory objectives such as rate of return, risk and liquidity. We employed the QPSO model to select the best portfolio among the top 50 Tehran Stock Exchange companies, optimizing the objectives of rate of return, systematic and non-systematic risk, return skewness, liquidity and Sharpe ratio. Finally, the results were compared with Markowitz's classic model and a Genetic Algorithm (GA) model. Although the return of the QPSO portfolio was less than that of Markowitz's classic model, QPSO had clear advantages in decreasing risk, fully covered the rate of return, and proposed more versatile portfolios than the other models. We therefore conclude that, for selecting the best portfolio, the QPSO model can lead to better results and may help investors make the best portfolio selection.
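The QPSO update that distinguishes the method from classical PSO can be sketched compactly: each particle is re-sampled around an attractor between its personal best and the global best, with a spread proportional to its distance from the swarm's mean best position. The sphere function below is a toy stand-in for the paper's multi-objective portfolio criterion; the contraction parameter `beta` and swarm sizes are illustrative choices, not the paper's settings.

```python
# Minimal Quantum-behaved PSO sketch minimising a toy objective.
import math, random

random.seed(1)

def qpso(objective, dim, swarm=20, iters=200, lo=-5.0, hi=5.0, beta=0.75):
    """Return (best position, best value) after a QPSO run."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pval = [objective(x) for x in X]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        # mean of all personal bests ("mainstream thought" point)
        mbest = [sum(p[d] for p in pbest) / swarm for d in range(dim)]
        for i in range(swarm):
            for d in range(dim):
                phi = random.random()
                attract = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - random.random()          # u in (0, 1]
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1 / u)
                X[i][d] = attract + step if random.random() < 0.5 else attract - step
            v = objective(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

best, value = qpso(lambda x: sum(v * v for v in x), dim=4)
```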

Keywords:  Swarm algorithm, portfolio selection, genetic algorithms, risk, return.
 
Received January 2, 2010; accepted August 10, 2010

An Intelligent Model for Visual Scene Analysis and Compression

Amjad Rehman and Tanzila Saba
Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia, Malaysia
 

Abstract:
This paper presents an improved approach for indicating visually salient regions of an image based upon a known visual search task. The proposed approach combines a robust model of instantaneous ("bottom-up") visual attention with a pixel probability map derived from the automatic detection of a previously seen object (task-dependent, i.e., "top-down"). The objects to be recognized are parameterized quickly in advance by a viewpoint-invariant spatial distribution of Speeded-Up Robust Features (SURF) interest points. The bottom-up and top-down object probability images are fused to produce a task-dependent saliency map. The proposed approach is validated using observer eye-tracker data collected under an object search-and-count task, and shows 13% higher overlap with true attention areas under task compared to bottom-up saliency alone. The combined saliency map is further used to develop a new intelligent compression technique that extends Discrete Cosine Transform (DCT) encoding. The proposed approach is demonstrated throughout on surveillance-style footage.
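The fusion step can be sketched in a few lines: combine a bottom-up saliency map with a top-down object-probability map into one task-dependent map. The pixelwise product with renormalisation shown here is one simple fusion rule, assumed for illustration; the paper's exact combination scheme may differ.

```python
# Fuse two equal-sized 2-D maps by pixelwise product, rescaled to [0, 1].

def fuse(bottom_up, top_down):
    """Task-dependent saliency: high only where both maps agree."""
    fused = [[b * t for b, t in zip(brow, trow)]
             for brow, trow in zip(bottom_up, top_down)]
    peak = max(max(row) for row in fused)
    if peak > 0:
        fused = [[v / peak for v in row] for row in fused]
    return fused

bu = [[0.2, 0.8], [0.5, 0.1]]   # instantaneous (bottom-up) saliency
td = [[0.9, 0.1], [0.6, 0.0]]   # detected-object probability (top-down)
task_map = fuse(bu, td)
```

Note how the bottom-up peak at (0, 1) is suppressed because the object detector assigns it low probability, while (1, 0), moderately salient and likely object, dominates.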

Keywords:  Visualization, discrete cosine transform, image compression, scene analysis.
 
Received May 27, 2010; accepted January 3, 2011

An Arabic Lemma-Based Stemmer for Latent Topic Modeling

Abderrezak Brahmi1, Ahmed Ech-Cherif2, and Abdelkader Benyettou2
1Department of Computer Sciences, Abdelhamid Ibn Badis University, Mostaganem, Algeria
2Department of Computer Sciences, USTO-MB University, Oran, Algeria
 

Abstract: 
Developments in Arabic information retrieval have not kept pace with the increasing use of the Arabic Web during the last decade. Semantic indexing in a language with highly inflectional morphology, such as Arabic, is not a trivial task and requires text analysis in the original language. Apart from cross-language retrieval methods and a few limited studies, the main efforts in developing semantic analysis methods and topic modeling have not included Arabic text. This paper describes our approach to analyzing semantics in Arabic texts. A new lemma-based stemmer is developed and compared to a root-based one for characterizing Arabic text. The Latent Dirichlet Allocation (LDA) model is adapted to extract Arabic latent topics from various real-world corpora. In addition to the interesting subjects discovered in press articles from the 2007-2009 period, experiments show that classification performance with lemma-based stemming in the topic space improves compared to classification with root-based stemming.

Keywords: Arabic stemming, topic model, semantic analysis, classification, test collection.
 
Received October 22, 2010; accepted May 24, 2011

An Effective Similarity Measure via Genetic Algorithm for Content Based Image Retrieval with Extensive Features

Baddeti Syam1 and Yarravarapu Srinivasa Rao2
1Associate Professor and HOD, Department of ECE, Mandava Institute of Engineering and Technology, Jaggayyapet -521 275, Andhra Pradesh, India.
2Senior Associate Professor, Instrument Technology Department, AU College of Engineering, Andhra University, Visakhapatnam, Andhra Pradesh, India.

 
Abstract: Recently, the construction of large datasets has been facilitated by developments in data storage and image acquisition technologies. Managing these datasets efficiently requires the development of suitable information systems, and Content-Based Image Retrieval (CBIR) is commonly utilized in many of them. Based on image content, CBIR extracts images relevant to a given query image from large image databases. Most CBIR systems in the literature extract only concise feature sets, which limits retrieval efficiency. In this paper, extensive features are extracted from the database images and stored in a feature library. The extensive feature set comprises a shape feature along with the color, texture and contourlet features utilized in previous work. When a query image is given, its features are extracted in the same fashion. Subsequently, a Genetic Algorithm-based similarity measure is computed between the query image features and the database image features; the Squared Euclidean Distance (SED) aids the similarity measure in determining the Genetic Algorithm's fitness. From this Genetic Algorithm-based similarity measure, the database images relevant to the given query image are retrieved. The proposed CBIR technique is evaluated by querying different images, and retrieval efficiency is assessed through precision-recall values for the retrieval results.
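The SED comparison at the heart of the retrieval step can be sketched as follows. The feature vectors are hypothetical placeholders for the paper's colour, texture, shape and contourlet features, and the GA layer that uses SED as its fitness is omitted; this shows only the distance-based ranking it ultimately drives.

```python
# Rank database images by Squared Euclidean Distance to a query's features.

def sed(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def retrieve(query, database, k=2):
    """Return the ids of the k database images closest to the query."""
    ranked = sorted(database, key=lambda item: sed(query, item[1]))
    return [img_id for img_id, _ in ranked[:k]]

# Hypothetical 3-dimensional feature vectors for three database images.
database = [
    ("img_a", [0.9, 0.1, 0.4]),
    ("img_b", [0.2, 0.8, 0.5]),
    ("img_c", [0.85, 0.15, 0.35]),
]
top = retrieve([0.9, 0.1, 0.4], database)
```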

Keywords: Content Based Image Retrieval (CBIR), Genetic Algorithm (GA), Squared Euclidean Distance (SED), shape feature, similarity measure.
 

  Received October 4, 2010; accepted May 24, 2011


Texture Image Segmentation Using A New Descriptor and Mathematical Morphology

Idrissi Sidi and Samir Belfkih
Faculty of Science and Technology, Sidi Mohamed Ben Abdellah University, Morocco

 
Abstract: In this paper we present a new texture descriptor based on the shape operator defined in differential geometry. We then describe the texture feature analysis process based on the spectral histogram, followed by a new algorithm for texture segmentation that uses this descriptor, statistics based on the spectral histogram, and mathematical morphology. Many results are presented to illustrate the effectiveness of our approach.

Keywords: Texture segmentation, spectral histogram, differential geometry, and mathematical morphology.
 

  Received December 19, 2010; accepted May 24, 2011
