January 2015, No. 1

A New Perspective on Principal Component Analysis using Inverse Covariance

Tauseef Gulrez1 and Abdullah Al-Odienat2
1Department of Computing, Macquarie University, Australia
2Electrical Engineering Department, Mutah University, Jordan

Abstract: In this paper we propose a new orthogonal projection approximation method for extracting the principal eigenvectors of data using inverse covariance regularization, which we call Inverse Covariance Principal Component Analysis (ICPCA). The basic idea is to map an input space into a feature space via inverse covariance factorization and then compute the principal components in the extracted feature space. The performance of the proposed method is demonstrated quantitatively and qualitatively on the well-known Essex University image database. The comparison shows that the proposed method outperforms the competing eigenvalue decomposition (EVD) method (classical Principal Component Analysis) in both variance coverage and execution time.
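To ground the comparison, here is a minimal Python sketch of the classical EVD-based PCA baseline that ICPCA is measured against; the inverse covariance factorization itself is not specified in the abstract, so only the reference method is shown, on illustrative random data.

```python
# Classical PCA via eigenvalue decomposition (EVD) of the covariance matrix,
# the baseline the paper compares against. ICPCA's inverse covariance
# factorization is not reproduced here.
import numpy as np

def pca_evd(X, k):
    """Project X (samples x features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center the data
    C = np.cov(Xc, rowvar=False)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # EVD of the symmetric covariance
    top = np.argsort(eigvals)[::-1][:k]     # indices of the k largest eigenvalues
    return Xc @ eigvecs[:, top]             # projection onto principal directions

X = np.random.default_rng(0).normal(size=(100, 8))
print(pca_evd(X, k=2).shape)                # (100, 2)
```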

Keywords: Pattern recognition, machine learning, dimensionality reduction, inverse covariance.

Received December 31, 2012; accepted December 24, 2013

 

A Fuzzy-Based Scheme for Sanitizing Sensitive Sequential Patterns

 

Faisal Shahzad1, Sohail Asghar2 and Khalid Usmani2

1Faculty of Computing, Mohammad Ali Jinnah University, Pakistan

2Faculty of Computer Science, University Institute of Information Technology, PMAS-AAUR, Pakistan

 

Abstract: Rapid advances in technology have led to the generation and analysis of huge amounts of data in databases; examples include bank records, web logs, cell phone records and network traffic records. This raises the challenge of transforming such data into useful information, and data mining is a vital technique for achieving this task. The aim of data mining is to extract knowledge from data, and sequential pattern mining is an important area within it. Sequential data contains events, and events contain items; the order between items does not matter. Whenever we extract sequential information, there is always a threat of revealing sensitive sequential patterns, so a need arises to protect them. To fulfil this need, privacy preservation data mining techniques are used; their aim is to extract information from data without revealing sensitive information. In this research we propose a technique based on the FP-growth approach that applies anti-monotone and monotone constraints to identify sensitive sequential patterns, and then modifies the data using the concept of fuzzy sets.
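As a rough illustration of the fuzzy-set ingredient of such a scheme, the sketch below grades a pattern's support with a triangular membership function. The function shape and thresholds are assumptions made for illustration, not values taken from the paper.

```python
# Triangular fuzzy membership: one common way to grade how strongly a
# pattern's support belongs to a fuzzy set (e.g. "medium sensitivity").
# The feet a, c and peak b below are illustrative, not the paper's values.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# support of 0.45 graded against a set peaking at 0.5 over [0.2, 0.8]
print(triangular(0.45, 0.2, 0.5, 0.8))  # ~0.833
```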

 

Keywords: Data mining, privacy preserving data mining (PPDM), sequential pattern mining (SPM), FP growth, anti-monotone, monotone, fuzzy logic.

 

Received July 6, 2012; accepted December 23, 2012

Full Text

 

 

A Novel Approach for Software Architecture Recovery Using Particle Swarm Optimization

 

Ibrar Hussain1, Aasia Khanum2, Abdul Qudus Abbasi3 and Muhammad Younus Javed4

  1,4College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Pakistan

2Department of Computer Science, Forman Christian College Lahore, Pakistan

  3Department of Information Technology, Quaid-i-Azam University, Pakistan 

Abstract: Software systems evolve and change over time due to changing business needs, with the result that at some stage the original design and architecture descriptions may no longer give an exact representation of the actual software system. Accurate understanding of software architecture is very important for software maintenance because it helps in estimating the scope of change, re-usability, cost, and risk involved in a change. In some cases, for instance in legacy systems, an accurate architectural description may not even exist, and it becomes necessary to extract it from source code. Software clustering is the process of decomposing a large software system into sub-systems on the basis of similarity between units in the sub-systems, essentially a depiction of the architecture. Software clustering, however, is an NP-hard problem that can be efficiently handled with the help of meta-heuristic approaches. Particle Swarm Optimization (PSO) is an evolutionary meta-heuristic search based on the flocking behavior of biological species, and can be used to solve the software clustering problem. This paper provides a novel framework for software clustering using PSO. The proposed algorithm is examined using three industrial software systems. Comparison of results with another mainstream meta-heuristic shows that the PSO approach performs better in terms of computational effort, consistency, and quality of results.
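For readers unfamiliar with PSO, the sketch below shows the standard velocity and position update (inertia, cognitive and social terms) on a toy objective. The clustering encoding and the modularization fitness used in the paper are not reproduced; all parameter values and the objective are illustrative.

```python
# Minimal continuous PSO: inertia + cognitive + social velocity update.
# A toy sphere function stands in for the paper's clustering fitness.
import numpy as np

rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights

fitness = lambda x: np.sum(x**2, axis=1)     # toy objective (minimize)
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), fitness(pos)  # per-particle bests
gbest = pbest[np.argmin(pbest_val)]          # swarm best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = fitness(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest)                                 # near the optimum at the origin
```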

 

Keywords: Software clustering, software architecture, software maintenance, software evolution, search based software engineering, PSO.

Received September 21, 2012; accepted September 18, 2014

Full Text

* According to the request of authors, we updated the affiliation.

 

A Bi-Dimensional Empirical Mode Decomposition Based Watermarking Scheme

 

Souad Amira-Biad1, Toufik Bouden1, Mokhtar Nibouche2 and Ersin Elbasi3

1Department of Automatic Control, Jijel University, Algeria

2Department of Applied Sciences, Faculty of Environment and Technology, University of the West of England (UWE), UK

3The Scientific and Technological Research Council of Turkey, Turkey

 

Abstract: An invisible, robust, non-blind watermarking scheme for digital images is presented. The proposed algorithm combines the discrete wavelet transform and the Bi-dimensional Empirical Mode Decomposition (BEMD). Unlike previous works, where the watermark bits are embedded directly into the wavelet coefficients, the proposed scheme embeds the watermark into the wavelet coefficients of the mean trend obtained by performing the BEMD on the host image, using Singular Value Decomposition (SVD). The watermarked image has very good perceptual transparency. The extraction algorithm is a non-blind process that uses the original image as a reference for retrieving the watermark. The proposed algorithm is robust against rotation, translation, compression and noise addition, and yields a high Peak Signal to Noise Ratio (PSNR) for the watermarked image. The results obtained on different images under various attacks are satisfactory in terms of imperceptibility and robustness.
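The sketch below illustrates SVD-based embedding in the spirit of the scheme: the watermark perturbs the singular values of a coefficient matrix. The BEMD mean-trend extraction and the DWT stage are abstracted away, and the scaling factor alpha and the toy matrices are illustrative assumptions.

```python
# SVD-based embedding sketch: add the watermark to the singular values of a
# (sub-band) coefficient matrix and re-synthesize. The BEMD/DWT stages of
# the paper are not reproduced; alpha is an illustrative strength factor.
import numpy as np

def svd_embed(coeffs, watermark, alpha=0.05):
    """Embed `watermark` into the singular values of `coeffs`."""
    U, S, Vt = np.linalg.svd(coeffs, full_matrices=False)
    Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark,
                                full_matrices=False)
    watermarked = U @ np.diag(Sw) @ Vt       # re-synthesize with new spectrum
    return watermarked, (Uw, Vtw, S)         # side info for non-blind extraction

rng = np.random.default_rng(2)
host = rng.normal(size=(64, 64))             # stand-in for a wavelet sub-band
mark = rng.normal(size=(64, 64))
wm, side = svd_embed(host, mark)
print(np.abs(wm - host).mean())              # small average distortion
```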

 

Keywords: Watermarking, discrete wavelet transform, BEMD, SVD.

 

Received September 20, 2013; accepted February 25, 2014

Full Text

 

New Algorithm for QMF Bank Design and Its Application in Speech Compression Using DWT

 

Noureddine Aloui, Chafik Barnoussi and Adnane Cherif

Innov'Com Laboratory, Faculty of Sciences of Tunis, University of Tunis El-Manar, Tunisia

Abstract: This paper presents a new algorithm for designing Quadrature Mirror Filter (QMF) banks using windowing techniques. In the proposed algorithm, the cutoff frequency of the prototype filter is iteratively varied until the magnitude response at frequency ω = 0.5π is approximately equal to 0.707, the ideal perfect-reconstruction condition. The designed QMF banks are used as mother wavelets for a speech compression algorithm based on the discrete wavelet transform. The evaluation tests prove the efficiency of the proposed algorithm in speech compression using wavelets. Comparison of the proposed algorithm with other existing algorithms for designing QMF banks shows an important reduction in reconstruction error and in the number of iterations.
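A minimal sketch of the iterative design idea described above, assuming a bisection search on the prototype cutoff and a Hamming window; the filter length, search interval and tolerance are illustrative choices, not the paper's.

```python
# Iteratively vary the prototype lowpass cutoff until the magnitude response
# at w = 0.5*pi reaches 1/sqrt(2) (~0.707). Bisection is one simple way to
# run the iteration; window, length and tolerance are illustrative.
import numpy as np
from scipy.signal import firwin, freqz

N, tol = 32, 1e-6
lo, hi = 0.3, 0.7                            # cutoff search interval (x Nyquist)
target = 1 / np.sqrt(2)

for _ in range(100):
    wc = 0.5 * (lo + hi)
    h = firwin(N, wc, window="hamming")      # prototype lowpass filter
    _, H = freqz(h, worN=[0.5 * np.pi])      # response at w = 0.5*pi rad/sample
    mag = abs(H[0])
    if abs(mag - target) < tol:
        break
    lo, hi = (wc, hi) if mag < target else (lo, wc)

print(wc, mag)                               # converged cutoff and |H(0.5*pi)|
```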

Keywords: QMF, speech compression, discrete wavelet transform, windowing techniques.

Received January 7, 2013; accepted February 12, 2014

Full Text

 

A Mapping from BPMN Model to JADEX Model

 

Sana Nouzri and AbdelAziz El Fazziki

Computer Science Department, University Cadi Ayyad, Morocco

 

Abstract: The challenge for any enterprise is the evolution of its Information System (IS) to respond to unexpected requests. Faced with a complex IS, or any requested change, many enterprises undergo profound transformations. Today, an agile IS must be equipped to provide the flexibility and adaptability the enterprise needs to remain competitive. This paper argues that business process modeling enriched with agent concepts is the best means for modeling and implementing information systems, and the combination of these two technologies gives rise to a new agent-oriented approach. This work focuses on the development of Multi-Agent Systems (MAS) through a set of transformation rules: the transition from one model to another is ensured by automated rules. The proposed development process is based on different meta-models (BPMN, AML, and JADEX) and on automated transformation rules written in the ATL language. Finally, the proposal is illustrated with a case study.
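To give a flavor of the transformation, the sketch below lists the kind of BPMN-to-agent correspondences such rules encode. The paper's actual rules are written in ATL; the pairs below are plausible examples only, not the paper's rule set.

```python
# Illustrative BPMN-concept -> JADEX/agent-concept pairs of the kind a
# model-to-model transformation encodes. These mappings are plausible
# examples, not the ATL rules from the paper.
BPMN_TO_JADEX = {
    "Pool": "Agent",
    "Lane": "Capability",
    "Task": "Plan",
    "Message flow": "Message event",
    "Exclusive gateway": "Plan branching condition",
}

for bpmn, jadex in BPMN_TO_JADEX.items():
    print(f"{bpmn:18s} -> {jadex}")
```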

Keywords: IS, model driven architecture, MAS, business process modeling, automated transformation rules.

Received December 10, 2013; accepted February 12, 2014

Full Text

 

Evaluating Bias in Retrieval Systems for Recall Oriented Documents Retrieval

 

Sanam Noor1 and Shariq Bashir2

1Department of Computer Science, University of Peshawar, Pakistan

2Centre of Science and Engineering, New York University Abu Dhabi, UAE

 

Abstract: Evaluation of retrieval systems has always been a focus of research. Most retrieval systems appear to be more effective for precision-oriented retrieval than for recall-oriented retrieval. Since precision-oriented and recall-oriented tasks differ, a system that is effective for precision-oriented retrieval need not be good for recall-oriented retrieval as well, and evaluation is necessary to determine whether a given method is suitable for recall-oriented document retrieval. We evaluate different retrieval systems for recall-oriented document retrieval, with a main focus on finding bias in retrieval systems. Seven retrieval systems are evaluated: four use query expansion techniques, while the other three retrieve documents without query expansion. Patent documents are used for analyzing the effectiveness of the retrieval systems. Accessibility of documents is measured by retrievability, and the Lorenz curve and Gini coefficient are used for measuring bias. Our experimental results show that tf-idf is the least biased, while the exact match method shows high retrievability inequality; among the query expansion techniques, language modelling shows the least inequality.
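A short sketch of the bias measure used above: retrievability scores across the collection are summarized by the Gini coefficient, where 0 means every document is equally retrievable and 1 means maximal inequality. The scores below are made up for illustration.

```python
# Gini coefficient of a collection's retrievability scores, computed from
# the Lorenz-curve formulation. Toy scores are used for illustration.
import numpy as np

def gini(scores):
    """Gini coefficient of non-negative retrievability scores."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([1, 1, 1, 1]))    # 0.0  -> every document equally retrievable
print(gini([0, 0, 0, 10]))   # 0.75 -> highly biased system
```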

Keywords: Retrieval systems evaluation, search systems bias analysis, retrievability measurement, patent retrieval.

Received February 28, 2013; accepted September 16, 2013

Full Text

 

An Approach for Detecting Spam in Arabic Opinion Reviews

 

Ahmed Abu Hammad1 and Alaa El-Halees2

1Department of Computer Science & Information Technology, College of Science & Technology-Khanyounis, Palestine

2Faculty of Information Technology, Islamic University-Gaza, Palestine

Abstract: Despite the rapidly increasing amount of information available on the Internet, little quality control exists, especially over user-generated content. Manually scanning through large amounts of user-generated content is time-consuming and sometimes impossible; in this case, opinion mining is a better alternative. Although opinion reviews are recognized to contain valuable information for a variety of applications, the lack of quality control attracts spammers, who have found many ways to profit from spamming. Moreover, the spam detection problem is complex because spammers continually invent fresh methods that cannot be easily recognized. Approaches exist for English, but a new approach is needed for the Arabic language in order to identify Arabic spam reviews; to the best of our knowledge, there is still no published study on detecting spam in Arabic reviews. In this research, we propose a new approach for spam detection in Arabic opinion reviews by merging methods from data mining and text mining into one mining classification approach. Our work builds on the state-of-the-art achievements in Latin-script spam detection techniques while keeping in mind the specific nature of the Arabic language. In addition, we overcome the drawbacks of the class imbalance problem by using sampling techniques. The experimental results show that the proposed approach is effective in identifying Arabic spam opinion reviews; in the best case, the F-measure improves to 99.59%.
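A hedged sketch of the general recipe, not the paper's exact pipeline: text features, a standard classifier, and random oversampling of the minority class to counter imbalance. The tiny English toy corpus stands in for the Arabic data, and the feature and classifier choices are assumptions.

```python
# TF-IDF features + Naive Bayes, with random oversampling of the minority
# (spam) class. All choices here are illustrative stand-ins; the paper's
# Arabic-specific preprocessing is not reproduced.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.utils import resample

docs = ["good product works well", "great value fast shipping",
        "buy cheap pills now", "excellent quality recommended"]
labels = np.array([0, 0, 1, 0])              # 1 = spam (minority class)

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# oversample the minority class until the training set is balanced
spam_idx = np.where(labels == 1)[0]
ham_idx = np.where(labels == 0)[0]
extra = resample(spam_idx, n_samples=len(ham_idx) - len(spam_idx),
                 random_state=0)
idx = np.concatenate([ham_idx, spam_idx, extra])

clf = MultinomialNB().fit(X[idx], labels[idx])
print(clf.predict(vec.transform(["cheap pills great deal"])))  # likely [1]
```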

 

Keywords: Opinion mining, Arabic opinion mining, spam review, spam detection.

Received May 17, 2013; accepted June 25, 2013

Full Text

 

An Effective Soft Error Detection Mechanism Using Redundant Instructions

 

Seyyed Amir Asghari1 and Hassan Taheri2

1Computer Engineering and Information Technology Department, Amirkabir University of Technology, Iran

2Electrical Engineering Department, Amirkabir University of Technology, Iran

 

Abstract: Computer systems that operate in the space environment are subject to various radiation phenomena that lead to soft errors and can cause unpredictable behavior in computer-based systems. Commercial Off-The-Shelf (COTS) equipment, which is commonly used in space missions, cannot tolerate some threats such as Single Event Upsets (SEU); therefore, such equipment must be hardened against possible threats. In this paper, a software instruction-level method called SEDRI (Soft Error Detection using Redundant Instructions) is presented to detect soft errors that influence control flow and program data. The method is evaluated by fault injection on several C benchmark programs. The experimental results show that without protection against control flow and data errors, 34% of injected faults affect and damage the program; using our method, this rate decreases to about 11%. Compared to previously presented techniques, the SEDRI method offers considerable improvements in performance and memory overhead, i.e., 46% and 55% respectively, while its fault coverage decreases by only about 9%.
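The sketch below illustrates the general instruction-duplication principle in high-level form: a computation runs on primary and shadow copies, and the results are compared before use. SEDRI itself inserts such redundancy at the assembly-instruction level; this Python rendering is purely conceptual.

```python
# Conceptual instruction duplication: execute on primary and shadow copies,
# and treat a mismatch before the result is used as a detected soft error.
# SEDRI works at the instruction level; this is an illustrative analogue.
class SoftErrorDetected(Exception):
    pass

def redundant_add(a, b, a_shadow, b_shadow):
    r = a + b                         # primary instruction
    r_shadow = a_shadow + b_shadow    # duplicated (shadow) instruction
    if r != r_shadow:                 # comparison inserted before use
        raise SoftErrorDetected("primary and shadow results disagree")
    return r

x, x_shadow = 3, 3                    # shadow copy kept in separate storage
y, y_shadow = 4, 4
print(redundant_add(x, y, x_shadow, y_shadow))  # 7
```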

 

Keywords: Control flow checking, data error, fault coverage, soft error, software-based error detection.

Received October 28, 2012; accepted January 18, 2013 

Full Text

 

Towards Intelligence Engineering in Agent-Based Systems

 

 Shiva Vafadar and Ahmad Barfourosh

Computer Engineering & IT Faculty, Amirkabir University of Technology, Iran

 

Abstract: In this paper, we consider intelligence as an explicit requirement in modern information systems and introduce intelligence engineering as the application of a systematic and disciplined approach to the development, operation, and maintenance of intelligence. After categorizing intelligence sub-characteristics based on an extensive quantitative analysis, we choose learning as the most frequent sub-characteristic of intelligence and present a process for specifying learning in intelligent systems. This process is based on the Organizational Multiagent System Engineering (O-MaSE) process framework and extends it to cover learning analysis concepts. More precisely, by using the O-MaSE meta-model and learning analysis meta-models, alongside applying the process to develop a system, we study the potential of the process framework for specifying learning concepts. Based on the discovered shortcomings of the process, we propose extensions that enrich it for specifying learning, namely in requirements gathering, goal modeling, knowledge modeling and environment specification. The applicability of the extended O-MaSE is evaluated by applying it to a book trading system. With this systematic process, software development benefits from more specific assumptions about the capabilities expected for learning, and moves towards more sophisticated engineering practices that prevent implicit or ad-hoc activities when developing features of intelligence.

 

Keywords: Intelligence, learning, O-MaSE, intelligence engineering, agent-oriented software engineering.

 

Received May 16, 2012; accepted August 16, 2013

Full Text

 

A Greedy Approach for Coverage-Based Test Suite Reduction

 Preethi Harris and Nedunchezhian Raju

Faculty of Information Technology, Sri Ramakrishna Engineering College, India

Abstract: Software testing is an activity for finding the maximum number of yet undiscovered errors with optimum time and effort. As software evolves, the size of the test suite also grows, with new test cases being added to it. However, due to time and resource constraints, rerunning all the test cases in the test suite every time the software is modified is not possible. To deal with these issues, the test suite size should be kept manageable. In this paper a novel approach is presented for selecting a subset of test cases that exercises a given set of requirements for data flow testing. To assess the effectiveness of the proposed algorithm, the existing Harrold, Gupta and Soffa (HGS) and Bi-Objective Greedy (BOG) algorithms were both applied to the generated test suites, and the results obtained for the proposed algorithm were compared with these state-of-the-art algorithms. The performance evaluation shows that, compared to the existing approaches, the proposed algorithm selects near-optimal test cases that satisfy the maximum number of testing requirements without compromising coverage.
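A minimal sketch of greedy coverage-based reduction: repeatedly pick the test that covers the most still-unsatisfied requirements. The toy test-to-requirement mapping is illustrative; HGS, BOG and the paper's own algorithm add further selection criteria beyond this baseline.

```python
# Greedy set-cover reduction: keep selecting the test case that covers the
# largest number of not-yet-covered requirements. Toy data for illustration.
def greedy_reduce(coverage):
    """coverage: dict mapping test id -> set of requirements it satisfies."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3", "r4"}, "t3": {"r4"}}
print(greedy_reduce(suite))   # ['t2', 't1'] covers all four requirements
```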

 

Keywords: Software testing, test cases, test suite, requirements, coverage and adequacy criterion. 

 

Received April 2, 2012; accepted May 16, 2013

Full Text

 

Combining Tissue Segmentation and Neural Network for Brain Tumor Detection

Selvaraj Damodharan1 and Dhanasekaran Raghavan2

1Department of Electronics and Communication Engineering, Sathyabama University, India

2Syed Ammal Engineering College, Anna University, India

Abstract: The decisive goal in a large number of image processing applications is to extract the significant features from image data, from which a description, interpretation, or understanding of the scene can be provided by the machine. The segmentation of brain tumors from magnetic resonance images is a vital but time-consuming task performed by medical experts. In this paper, we present an effective brain tumor detection technique based on a neural network and our previously designed brain tissue segmentation. The technique comprises the following major steps: 1) pre-processing of the brain images, 2) segmentation of pathological tissue (tumor), normal tissues (white matter and gray matter) and fluid (cerebrospinal fluid), 3) extraction of the relevant features from each segmented tissue, and 4) classification of the tumor images with a neural network. The experimental results are evaluated by means of the quality rate on normal and abnormal MRI images. The performance of the proposed technique has been validated and compared using standard evaluation metrics, namely sensitivity, specificity and accuracy, for the neural network, K-NN and Bayesian classification techniques. The obtained results show that the neural network yields better classification results than the other techniques.
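For reference, the quoted evaluation metrics can be computed from a binary confusion matrix with tumor as the positive class, as in the sketch below; the counts are made up for illustration.

```python
# Sensitivity, specificity and accuracy from a binary confusion matrix
# (tumor = positive class). The counts are illustrative, not the paper's.
def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)           # true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

print(metrics(tp=45, tn=40, fp=5, fn=10))  # (0.818..., 0.888..., 0.85)
```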


Keywords: Brain MRI image, cerebrospinal fluid, white matter, gray matter, tumor region, feature extraction, neural network.

Received May 24, 2012; accepted August 20, 2013

Full Text

 

Investigation on Iris Recognition System Adopting Cryptographic Techniques

    Shanmugam Selvamuthukumaran1, Shanmugasundaram Hariharan2, and Thirunavkarasu Ramkumar3

1,3Department of Computer Applications, A.V.C.College of Engineering, India

2Department of Computer Science and Engineering, TRP Engineering College, India

Abstract: In a progressively digital society, the demand for secure identification has led to amplified development of biometric systems, driven by the fact that such systems recognize unique features possessed by each individual. Iris recognition systems have been widely adopted and accepted as one of the most effective ways to positively identify people and thereby provide a secure environment. Although a variety of approaches for iris recognition exist, this paper focuses on examining the matching phase of the iris component using a cryptographic technique. The performance of the matching phase is analyzed in detail, and it is shown that the proposed optimization technique, namely Optimized Iris Matching using Cyclic Redundancy Check (CRC), is more effective than other approaches. We also show that the proposed approach improves overall iris recognition system performance by a factor of 10. The experimental investigations and the presented results reveal a significant improvement in False Accept Rate (FAR) and False Rejection Rate (FRR).
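A heavily hedged sketch of one way a CRC can serve iris matching: as a cheap pre-check that rejects codes whose checksums differ before any full comparison. Real iris matchers use Hamming-distance thresholds rather than exact equality, and how the paper actually integrates the CRC into the matching phase is not detailed in the abstract; the sketch is an assumption for illustration only.

```python
# CRC as a fast pre-filter for exact-duplicate iris codes: differing
# checksums reject a pair without a full comparison. An illustrative
# assumption, not the paper's actual matching scheme.
import binascii

def crc_prefilter_match(code_a: bytes, code_b: bytes) -> bool:
    if binascii.crc32(code_a) != binascii.crc32(code_b):
        return False                  # cheap rejection of non-matching codes
    return code_a == code_b           # full comparison only on a CRC hit

enrolled = bytes([0b10110010, 0b01011101])   # toy binary iris codes
probe = bytes([0b10110010, 0b01011101])
print(crc_prefilter_match(enrolled, probe))  # True
```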


Keywords: Iris recognition, security, optimization, biometric, cryptography, CRC.

  

Received March 9, 2012; accepted July 25, 2013 

Full Text

 
 
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 