Friday, 30 September 2016 16:05

An Efficient Age Estimation System with Facial Makeover Images Based on Key Points Selection

Tamilb Vairavan1 and Kaliyaperumal Vani2

1Department of Electronics and Instrumentation, R.M.K Engineering College, India

2Department of Information Science and Technology, College of Engineering, Anna University, India

Abstract: Age is one of the essential factors in establishing a person's identity. Estimation of human age is a procedure adopted by anthropologists, archaeologists and forensic scientists. Compared with other cognition problems, age estimation from face images is still very challenging. Predicting age from facial images with makeup is an interesting task in digital entertainment, and estimating age from a facial image in general remains an intriguing and exigent task. Aging changes both shape and texture, and it is irreversible, uncontrollable and personalized. The efficiency of an age estimation system degrades in the presence of facial makeovers. The main objective of this research is to estimate the age of a human from a facial image with makeup. Initially, the face image is normalized by employing a face detection algorithm. After detecting the face exactly, unique features (key points) such as texture, shape and regions are extracted from the image. Estimating the age of a person with different makeovers is not an easy task; to overcome this difficulty, we have to identify the uniqueness of each image of the same person. The eye region remains largely unchanged regardless of the makeup applied, so the eyes are the same for a person across different makeovers. For the region-based key points, the eye portion is segmented from the detected face image. The shape feature is extracted with the Active Appearance Model (AAM). Finally, based on the feature library, the image is classified into a particular age group using an Artificial Neural Network (ANN), and the age is then predicted. The proposed approach will be implemented in MATLAB and is planned to be evaluated on various facial makeover images.
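As an illustration of the final classification stage described in the abstract, the sketch below trains a small feed-forward neural network to map a fused key-point feature vector (e.g., AAM shape plus eye-region texture descriptors) to an age group. It is a minimal sketch in Python rather than the authors' MATLAB implementation; the feature dimensions, number of age groups and two-layer architecture are illustrative assumptions.

```python
# Minimal sketch: classify fused face features into age groups with a small MLP.
# Feature size, number of age groups and architecture are illustrative assumptions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_age_group_mlp(X, y, n_groups, hidden=32, lr=0.05, epochs=500, seed=0):
    """X: (n_samples, n_features) fused key-point features; y: age-group labels."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_groups));   b2 = np.zeros(n_groups)
    Y = np.eye(n_groups)[y]                         # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                    # hidden layer
        P = softmax(H @ W2 + b2)                    # class probabilities
        G2 = (P - Y) / len(X)                       # output-layer gradient
        G1 = (G2 @ W2.T) * (1 - H ** 2)             # back-propagated gradient
        W2 -= lr * H.T @ G2; b2 -= lr * G2.sum(0)
        W1 -= lr * X.T @ G1; b1 -= lr * G1.sum(0)
    return (W1, b1, W2, b2)

def predict_age_group(model, X):
    W1, b1, W2, b2 = model
    return softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 16))                  # stand-in fused feature vectors
    y = rng.integers(0, 4, 120)                     # 4 hypothetical age groups
    model = train_age_group_mlp(X, y, n_groups=4)
    print(predict_age_group(model, X[:5]))
```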

Keywords: Age estimation system, AAM, ANN, LGXP.

Received July 1, 2013; accepted March 20, 2014

Full Text 

Tuesday, 26 April 2016 07:07

Expert Ranking using Reputation and Answer Quality of Co-existing Users

Muhammad Faisal1, Ali Daud2, and Abubakr Akram3

1,2Faculty of Basic and Applied Science, International Islamic University, Pakistan

1,3COMSATS Institute of IT, Pakistan

Abstract: Online discussion forums provide knowledge sharing facilities to online communities. Usage of online discussion forums has increased tremendously due to the variety of services they offer and the ability of common users to ask questions and provide answers. With the passage of time, these forums can accumulate huge contents. Some of the posted discussions may not contain quality content and may reflect users' personal opinions about a topic, which may contradict a relevant answer. These low-quality discussions indicate the existence of unprofessional users. Therefore, it is imperative to rank experts in online forums. Most of the existing expert-ranking techniques consider only a user's social network authority and content relevancy features as parameters for evaluating user expertise, but user reputation as a group member among thread repliers is not considered. In this context, a novel solution for expert ranking in online discussion forums is proposed. We propose two expert-ranking techniques: the first is based on users and their co-existing users' reputation in different threaded discussions, and the second is based on users' answer quality and their category specialty features. Furthermore, we extended an expertise-rank technique with our proposed feature sets. An experimental study based on a real dataset shows that the proposed techniques perform better than existing techniques.
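A minimal sketch of the co-existing-user idea, not the paper's ExpRank formulations: each user's reputation is repeatedly updated from the mean reputation of the users they co-occur with in threads, damped toward an answer-count prior. The damping factor and the use of answer counts as a quality proxy are assumptions.

```python
# Sketch: propagate reputation through co-existing repliers of the same threads.
from collections import defaultdict

def co_reply_reputation(threads, damping=0.5, iters=20):
    """threads: list of lists of user ids replying in the same thread."""
    answers = defaultdict(int)
    neighbours = defaultdict(set)
    for repliers in threads:
        for u in repliers:
            answers[u] += 1
            neighbours[u].update(v for v in repliers if v != u)
    total = sum(answers.values())
    score = {u: answers[u] / total for u in answers}   # answer-count prior
    for _ in range(iters):
        new = {}
        for u in score:
            co = neighbours[u]
            co_mean = sum(score[v] for v in co) / len(co) if co else 0.0
            new[u] = damping * (answers[u] / total) + (1 - damping) * co_mean
        score = new
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    threads = [["alice", "bob"], ["alice", "carol", "dave"], ["bob", "carol"]]
    print(co_reply_reputation(threads))
```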

Keywords: Co-existing user, expertise rank, ExpRank-CRF, ExpRank-COM, ExpRank-FB, ExpRank-AQCS.

Received October 14, 2014; accepted July 9, 2015

Sunday, 20 March 2016 06:09

 

A Quantitative Evaluation of Change Impact Reachability and Complexity across Versions of Aspect Oriented Software

Senthil Suganantham, Chitra Babu, and Madhumitha Raju

Department of Computer Science and Engineering, SSN College of Engineering, India

Abstract: Software developed using a proven methodology exhibits an inherent capability to readily accept changes during its evolution. This constant phenomenon of change is managed through software maintenance. By modelling software with the Aspect Oriented Software Development (AOSD) methodology, the designer can build highly modularized software that accommodates changes with less impact than a non-AOSD approach. Software metrics play a vital role in indicating the degree of inter-dependencies among functional components and provide valuable feedback about the impact of changes on reusability, maintainability and reliability. During maintenance, software adapts to changes in requirements, and hence it is important to assess the impact of these changes across different versions of the software. This paper focuses on analysing the impact of changes towards maintenance for a set of Aspect Oriented (AO) applications taken as case studies. Existing versions of three AO benchmark applications have been chosen, and a set of metrics is defined to analyze the impact of changes made across the different versions. An AO Software Change Impact Analyzer (AOSCIA) tool was also developed to study the impact of the changes across the selected versions. It was found that the impact of changes and the related ripple effect is smaller for AO modules than for Object Oriented (OO) modules. Hence, we deduce that maintainability is improved by adopting the AO methodology.
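The sketch below illustrates one of the measures discussed above, change-propagation reachability over a module dependency graph; it is not the AOSCIA tool. Given the modules changed between two versions, it computes the set of modules reachable through reverse dependencies, i.e., the potential ripple effect. Module names and the dependency structure are hypothetical.

```python
# Sketch: change-propagation reachability (ripple effect) over a dependency graph.
from collections import deque, defaultdict

def impact_set(dependencies, changed):
    """dependencies: dict module -> modules it depends on; changed: changed modules."""
    reverse = defaultdict(set)                 # who depends on whom (reverse edges)
    for mod, deps in dependencies.items():
        for d in deps:
            reverse[d].add(mod)
    impacted, queue = set(), deque(changed)
    while queue:
        m = queue.popleft()
        for dependant in reverse[m]:
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return impacted

if __name__ == "__main__":
    deps = {"Logging": [], "Billing": ["Logging"], "UI": ["Billing"], "Report": ["Billing"]}
    print(sorted(impact_set(deps, ["Logging"])))   # modules potentially impacted
```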

Keywords: AOSD, change propagation reachability, cognitive complexity, software metrics, software maintenance.

Received September 29, 2013; accepted May 9, 2014.

Full Text


Sunday, 20 March 2016 06:07

A Study on Multi-screen Sharing System Using H.264/AVC Encoder

Shizhe Tan and Fengyuan Zhang

Department of Electronic Engineering, Ocean University of China, China

Abstract: H.264/AVC is a standard for video compression developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC JTC1 Moving Picture Experts Group (MPEG). However, the computational complexity of H.264/AVC contributes significantly to the delay of a multi-screen sharing system. In this paper, the motion estimation algorithms provided in x264 are analysed and an optimized algorithm is proposed; this optimized algorithm eliminates a great deal of unnecessary computation. Furthermore, the paper designs and implements a multi-screen sharing system with the improved encoder. Experimental results show that the proposed method increases the encoding speed and decreases the delay time, while incurring little, if any, loss in quality.
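A minimal sketch of the kind of early-termination motion search the abstract alludes to: a SAD-based small-diamond search that stops as soon as the matching cost falls below a threshold, skipping the remaining candidate positions. The threshold, step pattern and block size are illustrative assumptions and do not reproduce the paper's exact x264 modifications.

```python
# Sketch: SAD-based small-diamond motion search with early termination.
import numpy as np

def sad(block, ref, y, x):
    h, w = block.shape
    return np.abs(block.astype(int) - ref[y:y+h, x:x+w].astype(int)).sum()

def diamond_search(block, ref, y0, x0, max_steps=16, early_stop=64):
    """Return the (dy, dx, cost) motion vector found for `block` at (y0, x0) in `ref`."""
    h, w = block.shape
    best_y, best_x = y0, x0
    best_cost = sad(block, ref, y0, x0)
    for _ in range(max_steps):
        if best_cost <= early_stop:          # early termination: cost already small
            break
        improved = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # small diamond pattern
            y, x = best_y + dy, best_x + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                c = sad(block, ref, y, x)
                if c < best_cost:
                    best_cost, best_y, best_x, improved = c, y, x, True
        if not improved:                     # centre is a local minimum, stop
            break
    return best_y - y0, best_x - x0, best_cost

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = (200 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)).astype(np.uint8)
    cur = np.roll(ref, (2, -3), axis=(0, 1))               # simulate a small motion
    print(diamond_search(cur[16:32, 16:32], ref, 16, 16))  # expected near (-2, 3)
```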

Keywords: Multi-screen sharing system, H.264/AVC, motion estimation, delay time.

Received August 25, 2013; accepted October 26, 2014.

Full Text

Sunday, 20 March 2016 06:05

Medical Image Segmentation With Fuzzy C-Means and Kernelized Fuzzy C-Means Hybridized on PSO and QPSO

Anusuya Venkatesan1 and Latha Parthiban2

1Department of Information Technology, Saveetha School of Engineering, India

2Department of Computer Science, Pondicherry University, India

Abstract: Medical image segmentation is a key step towards medical image analysis. The objective of medical image segmentation is to delineate Regions Of Interest (ROI) in the images. Hybridization of nature-inspired algorithms with soft computing provides accurate image segmentation results in less computation time. In this work, various algorithms for medical image segmentation that help medical practitioners achieve better diagnosis and treatment are discussed, and the following globally optimized clustering techniques are proposed: Fuzzy C-Means optimized with Particle Swarm Optimization (FCMPSO), Kernelized Fuzzy C-Means optimized with PSO (KFCMPSO), Fuzzy C-Means optimized with Quantum PSO (FCMQPSO) and KFCMQPSO, to extract ROI from medical images. The experiments were conducted on Magnetic Resonance Imaging (MRI) images, and analyses were carried out with respect to average intra-cluster distance, elapsed/computation time and the Davies Bouldin Index (DBI). The conventional FCM is noted to be more sensitive to noise and shows poor segmentation performance on images corrupted by noise. The experimental results showed that the proposed hybridizations of FCM and KFCM with PSO and QPSO perform well with good convergence speed; the time to convergence is found to be approximately three units less than that of the other algorithms.
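For reference, the sketch below shows the plain Fuzzy C-Means membership/centre iteration that the proposed hybrids start from; it is the unoptimised baseline, not the FCMPSO/KFCMQPSO algorithms themselves, and the sample data are synthetic.

```python
# Sketch: standard Fuzzy C-Means updates (baseline for the PSO/QPSO hybrids).
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns (centres, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None] # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (d ** (2 / (m - 1)))
        new_U /= new_U.sum(axis=1, keepdims=True)      # re-normalise memberships
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centres, U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
    centres, U = fcm(X, c=2)
    print(np.round(centres, 2))        # two centres near (0, 0) and (6, 6)
```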

Keywords: Medical image segmentation, clustering, FCM, KFCM, PSO, QPSO, DBI.

Received October 31, 2013; accepted July 13, 2014

Full Text

Sunday, 20 March 2016 05:58

A Novel Algorithm for Enhancing Search Results by Detecting Dissimilar Patterns Based on Correlation Method

Poonkuzhali Sugumaran1, Kishore Kumar Ravi1, and Thirumurugan Shanmugam2

1Department of Information Technology, Rajalakshmi Engineering College, India

2Department of Information Technology, College of Applied Science-Sohar, Oman

Abstract: The dynamic collection and voluminous growth of information on the web pose great challenges for retrieving relevant information. Though most researchers have focused their work on information retrieval and web mining, the focus is still only on retrieving similar patterns, leaving out dissimilar patterns that are likely to contain the outlying data. This paper therefore concentrates on mining web content outliers, which extracts the dissimilar web documents from a group of documents of the same domain. Mining web content outliers indirectly helps in promoting business activities and improving the quality of search results. Existing algorithms for web content outlier mining are developed for structured documents, whereas the World Wide Web (WWW) contains mostly unstructured and semi-structured documents. Therefore, there is a need to develop a technique to mine outliers for unstructured and semi-structured document types. In this research work, a novel statistical approach based on a correlation method is developed for retrieving relevant web documents through an outlier detection technique. In addition, this method also identifies redundant web documents. Removal of both redundant and outlying documents improves the quality of search results catering to user needs. Evaluation of the correlation method using Normalized Discounted Cumulative Gain (NDCG) gives search results above 90%. The experimental results show that this methodology gives better results in terms of accuracy, recall and specificity than existing methodologies.
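A minimal sketch of the correlation idea, under assumed thresholds and tokenisation: documents are represented as term-frequency vectors, documents with low Pearson correlation to the collection's mean profile are flagged as outliers (dissimilar patterns), and near-perfectly correlated pairs are flagged as redundant. This is illustrative, not the paper's exact statistical procedure.

```python
# Sketch: flag dissimilar (outlier) and redundant documents via Pearson correlation.
import numpy as np
from collections import Counter

def term_matrix(docs):
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w, n in Counter(d.lower().split()).items():
            M[r, index[w]] = n
    return M

def flag_outliers_and_redundant(docs, outlier_thr=0.2, redundant_thr=0.95):
    M = term_matrix(docs)
    centroid = M.mean(axis=0)                          # mean term-frequency profile
    corr_to_centroid = [np.corrcoef(row, centroid)[0, 1] for row in M]
    outliers = [i for i, c in enumerate(corr_to_centroid) if c < outlier_thr]
    redundant = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
                 if np.corrcoef(M[i], M[j])[0, 1] > redundant_thr]
    return outliers, redundant

if __name__ == "__main__":
    docs = ["stock market shares trading prices",
            "market prices shares stock trading",
            "recipe for chocolate cake and icing"]
    print(flag_outliers_and_redundant(docs))           # doc 2 outlier; docs 0,1 redundant
```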

Keywords: Correlation, dissimilar patterns, outliers, redundant, relevance, term frequency, web content outliers.

Received January 19, 2014; accepted May 21, 2014.

Full Text

 

Tuesday, 08 March 2016 09:00

A Web and Software-Based Approach Blending Social Networks for Online Qur’anic Arabic Learning

Matin Abdullah1, Al-Sakib Pathan2, and Imad Al Shaikhli2

1Department of Computer Science and Engineering, BRAC University, Bangladesh

2Department of Computer Science, International Islamic University Malaysia, Malaysia

Abstract: About 80 percent of the world's Muslim population are non-native speakers of the Arabic language. Since it is obligatory for all Muslims to recite the Qur'an in Arabic during prayers, an extraordinary social phenomenon has taken place in some parts of the Muslim world: Muslims are taught the complex phonological rules of the Arabic language in the context of the Qur'an, and they recite the "sounds" of the Qur'an often understanding very little. This has given rise to a demographic segment of adult learners whose main learning goal is recalling a closed set of syntactic rules and vocabulary in the context of the Qur'an while reciting or listening to it, so that they can reconstruct a meaning in their native language. Despite the availability of some resources for learning the language for this specific purpose, according to our detailed investigation, no work has explored the possibilities of emerging adaptive and intelligent systems for collaborative learning to address this challenge. The goals of this work are: (a) to determine the applicability of learner corpus research, declarative memory modelling and social learning motivation to the learners' specific pedagogical objectives; (b) to use the Design-Based Research (DBR) methodology to optimize the design of such a system in a real-life setting and observe how the different variables and elements work out. We present here a prototype used to gather requirements for such a system by bootstrapping a user community. The compiled data were used to design an initial architecture of an intelligent and adaptive Qur'anic Arabic learning system.

Keywords: Arabic, language, processing, qur’an, requirement, software, tools, user, web-based.

Received January 29, 2014; accepted September 9, 2014

Tuesday, 08 March 2016 08:58

Logical Schema-Based Mapping Technique to Reduce Search Space in the Data Warehouse for Keyword-Based Search

Fiaz Majeed and Muhammad Shoaib

Department of Computer Science and Engineering, University of Engineering and Technology, Pakistan

Abstract: Data warehouse systems are used for decision-making purposes. Online Analytical Processing (OLAP) tools are commonly used to query such systems and analyse the results. It is a complex task for non-technical users (executives, managers, etc.) to query the data warehouse using an OLAP tool, since it requires knowledge of the schema. For such data warehouse users, a natural language interface is a viable solution that transparently accesses the data to fulfil their requirements. Since a data warehouse contains several times more data than operational systems (and this grows with incremental refreshes), keyword-based searching in such systems cannot be performed in the same way as in database-based natural language systems. Existing natural language interfaces to data warehouses commonly explore keywords in the data instances directly, which takes considerable time to generate results. This paper proposes a Logical Schema-based Mapping (LSM) technique to reduce the search space in the data warehouse data instances. It maps the natural language query keywords to the logical schema of the data warehouse to identify the relevant elements prior to searching the data instances. The retrieved matches for a keyword are ranked based on six criteria proposed in this paper. Further, an algorithm built upon the proposed criteria is presented. A targeted search in the data instances is then performed efficiently after the identification of schema elements. In-depth experiments have been carried out on a real dataset to evaluate the system with respect to completeness, accuracy and performance parameters. The results show that the LSM technique outperforms existing systems.
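A minimal sketch of the schema-mapping step, not the paper's full six-criteria ranking: query keywords are matched against the names of (hypothetical) fact and dimension attributes first, so that only the identified columns would later be searched in the data instances. The schema, scoring function and threshold are assumptions.

```python
# Sketch: map query keywords to logical schema elements before searching data.
from difflib import SequenceMatcher

SCHEMA = {                                    # hypothetical star-schema elements
    "sales_fact": ["amount", "quantity", "discount"],
    "dim_product": ["product_name", "category", "brand"],
    "dim_time": ["year", "quarter", "month"],
}

def map_keywords_to_schema(keywords, schema=SCHEMA, min_score=0.6):
    matches = []
    for kw in keywords:
        for table, columns in schema.items():
            for col in columns:
                score = SequenceMatcher(None, kw.lower(), col.lower()).ratio()
                if score >= min_score:
                    matches.append((kw, f"{table}.{col}", round(score, 2)))
    # rank candidate schema elements per keyword by similarity score
    return sorted(matches, key=lambda t: (t[0], -t[2]))

if __name__ == "__main__":
    print(map_keywords_to_schema(["product", "monthly", "amount"]))
```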

Keywords: Database systems, natural language processing for data warehouse, information systems, data warehousing, natural language interface, keyword-based query processing.

Received January 22, 2014; accepted October 14, 2014

Tuesday, 08 March 2016 08:57

A Novel Hybrid Chemical Reaction Optimization Algorithm with Adaptive Differential Evolution Mutation Strategies for Higher Order Neural Network Training

Sibarama Panigrahi

School of Computer Science, National Institute of Science and Technology, India

Abstract: In this paper, an application of a hybrid Chemical Reaction Optimization (CRO) algorithm with adaptive Differential Evolution (DE) mutation strategies for training Higher Order Neural Networks (HONNs), especially the Pi-Sigma Network (PSN), is presented. In contrast to traditional CRO algorithms, the reactant size (population size) remains fixed throughout all iterations, which makes it easier to implement. In addition, four DE mutation strategies (DE/rand/1, DE/best/1, DE/rand/2 and DE/best/2) with adaptive selection of control parameters are used as inter-molecular reactions, along with one intra-molecular reaction. The proposed algorithm combines the diversification property of inter-molecular reactions following DE/rand mutation strategies with the intensification property of the intra-molecular reaction and of inter-molecular reactions following DE/best mutation strategies, thereby improving the chances of reaching the global optimum in fewer iterations. The performance of the proposed algorithm for HONN training is evaluated on a well-known neural network training benchmark, the parity-p classification problems. The results obtained from the proposed algorithm for training HONNs have been compared with results from the following algorithms: the basic CRO algorithm, CRO-HONNT and the most popular variants of the DE algorithm (DE/rand/1/bin, DE/best/1/bin). It is observed that the application of the proposed hybridized algorithm to HONN training (DE-CRO-HONNT) performs statistically better than the other algorithms, considering both classification accuracy and the number of generations taken to attain the solutions.
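For reference, the sketch below shows how the four DE mutation strategies named above generate a mutant vector from a population of candidate weight vectors (such as HONN weights). The CRO bookkeeping (molecular structures, energies, reaction selection) is omitted; only the mutation step is illustrated, and the toy objective is an assumption.

```python
# Sketch: the four DE mutation strategies (DE/rand/1, DE/best/1, DE/rand/2, DE/best/2).
import numpy as np

def de_mutate(pop, best, F=0.5, strategy="rand/1", rng=None):
    """pop: (NP, D) population (NP >= 5); best: (D,) current best; returns a mutant."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pop), size=5, replace=False)   # distinct random individuals
    r1, r2, r3, r4, r5 = pop[idx]
    if strategy == "rand/1":
        return r1 + F * (r2 - r3)
    if strategy == "best/1":
        return best + F * (r1 - r2)
    if strategy == "rand/2":
        return r1 + F * (r2 - r3) + F * (r4 - r5)
    if strategy == "best/2":
        return best + F * (r1 - r2) + F * (r3 - r4)
    raise ValueError(f"unknown strategy: {strategy}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.normal(size=(10, 5))                      # 10 candidate weight vectors
    best = pop[np.argmin((pop ** 2).sum(axis=1))]       # best under a toy objective
    for s in ("rand/1", "best/1", "rand/2", "best/2"):
        print(s, np.round(de_mutate(pop, best, strategy=s, rng=rng), 3))
```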

Keywords: CRO, DE, HONN, training algorithm, PSN.

Received August 18, 2013; accepted September 21, 2014

Full Text           

Tuesday, 08 March 2016 08:52

Unified Author Ranking based on Integrated Publication and Venue Rank

Ammad Usmani and Ali Daud

Department of Computer Science and Software Engineering, International Islamic University, Pakistan

Abstract: Author ranking can be used to determine the authenticity of authors in a particular domain. Several methods for author ranking, focusing on the number of publications and the number of citations, have been proposed. In this paper, we propose ranking algorithms for publications, conferences, journals and their respective authors. In publication ranking, both incoming and outgoing citations are considered. If a publication is published in a well-reputed venue (conference or journal), it is expected to have a high number of citations. Consequently, due importance is given to venues, and their scores are computed from the popularity of their publications. Both the publication ranking and the venue scores are used to rank authors, so that authors who have published in well-reputed venues gain added benefit. We use multiple features to rank publications and venues effectively; these scores are then further used for ranking authors, instead of just using the number of citations. Results of a comparative study show a significant improvement in author ranking due to the inclusion of the proposed features.
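A minimal sketch of the pipeline described above, under assumed data structures and weights: publication scores are computed with a simple PageRank-style iteration over the citation graph, venue scores from the popularity of their publications, and author scores from both. The exact features and weighting in the paper differ.

```python
# Sketch: publication rank -> venue score -> author rank.
from collections import defaultdict

def publication_rank(citations, d=0.85, iters=30):
    """citations: dict paper -> list of papers it cites (outgoing citations)."""
    papers = set(citations) | {p for outs in citations.values() for p in outs}
    score = {p: 1.0 / len(papers) for p in papers}
    incoming = defaultdict(list)
    for src, outs in citations.items():
        for dst in outs:
            incoming[dst].append(src)
    for _ in range(iters):
        score = {p: (1 - d) / len(papers) +
                    d * sum(score[q] / max(len(citations.get(q, [])), 1)
                            for q in incoming[p])
                 for p in papers}
    return score

def rank_authors(citations, paper_venue, paper_authors):
    pub = publication_rank(citations)
    venue_pubs = defaultdict(list)
    for p, v in paper_venue.items():
        venue_pubs[v].append(pub[p])
    venue = {v: sum(s) / len(s) for v, s in venue_pubs.items()}   # venue popularity
    author = defaultdict(float)
    for p, authors in paper_authors.items():
        for a in authors:
            author[a] += pub[p] * (1 + venue[paper_venue[p]])     # venue bonus
    return sorted(author.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    citations = {"p1": ["p2"], "p2": [], "p3": ["p1", "p2"]}
    paper_venue = {"p1": "J1", "p2": "C1", "p3": "J1"}
    paper_authors = {"p1": ["ann"], "p2": ["bob", "ann"], "p3": ["cal"]}
    print(rank_authors(citations, paper_venue, paper_authors))
```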

Keywords: Publication, venue, ranking, author ranking, PageRank.

Received July 21, 2014; accepted February 10, 2015

Wednesday, 24 February 2016 08:30

Audiovisual Speaker Identification Based on Lip and Speech Modalities

Fatma Chelali and Amar Djeradi

Faculty of Electronics Engineering and Computer Science, University of Science and Technology Houari Boumedienne, Algeria

Abstract: In this article, we present a bimodal speaker identification method that integrates both acoustic and visual features, where the two audiovisual stream modalities are processed in parallel. We also propose a fusion technique that combines the two modalities to make the final recognition decision. Experiments are conducted on an audiovisual dataset containing the 28 Arabic syllables pronounced by ten speakers. Results show the importance of the visual information, provided by the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), in addition to the audio information corresponding to the Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP). Furthermore, artificial neural networks such as the Multilayer Perceptron (MLP) and Radial Basis Function (RBF) network were investigated and tested successfully on this dataset, showing good recognition performance with serial concatenation of the acoustic and visual vectors.
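A minimal sketch of the serial (feature-level) fusion mentioned above: each modality's vector (e.g., MFCC/PLP audio features and DCT/DWT lip features) is normalised and concatenated before identification. The nearest-centroid matcher here is only a stand-in for the MLP/RBF networks used in the paper, and the feature dimensions are assumptions.

```python
# Sketch: serial concatenation of audio and visual feature vectors, then matching.
import numpy as np

def fuse(audio_vec, visual_vec):
    a = (audio_vec - audio_vec.mean()) / (audio_vec.std() + 1e-9)
    v = (visual_vec - visual_vec.mean()) / (visual_vec.std() + 1e-9)
    return np.concatenate([a, v])          # serial concatenation of both modalities

def identify(fused, speaker_centroids):
    """speaker_centroids: dict speaker -> fused feature centroid."""
    return min(speaker_centroids,
               key=lambda s: np.linalg.norm(fused - speaker_centroids[s]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(size=13)            # stand-in MFCC vector
    visual = rng.normal(size=11)           # stand-in DCT lip-region vector
    probe = fuse(audio, visual)
    centroids = {f"speaker_{i}": rng.normal(size=24) for i in range(3)}
    centroids["speaker_1"] = probe + rng.normal(0, 0.05, 24)   # make one speaker close
    print(identify(probe, centroids))      # expected: speaker_1
```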

Keywords: Audiovisual speaker recognition, DCT, DWT, PLP, MFCC.

Full Text

Monday, 22 February 2016 07:17

Joint Image Denoising and Demosaicking by Low Rank Approximation and Color Difference Model

Xia Zhai1, Weiwei Guo2, Yongqin Zhang3, Jinsheng Xiao4, and Xiaoguang Hu5

1Arts Department, Henan University of Economics and Law, China

2China Unicom in Chongqing Branch, China

3School of Information Science and Technology, Northwest University, China

4School of Electronic Information, Wuhan University, China

5School of Criminal Science and Technology, Peopleʼs Public Security University of China, China

Abstract: Digital cameras generally use a single image sensor whose surface is covered by a Color Filter Array (CFA). The CFA limits each sensor pixel to sampling one of the three primary color values (red, green or blue), while the other two missing color values are acquired by a post-processing procedure called demosaicking. From the noisy CFA data, the full color image is reconstructed through an imaging pipeline of demosaicking and denoising. However, image denoising in the RGB space has a high computational cost. In this paper, to increase efficiency and color fidelity, we propose a novel joint denoising and demosaicking strategy to reconstruct the noiseless full color image from the input noisy CFA data. A low-rank approximation technique is first used to remove the noise from the CFA data. Then, image demosaicking using both the color difference space and signal correlation is applied to the denoised CFA data to obtain the noiseless full color image. The experimental results show that the proposed algorithm not only improves the quality of the full color image but also outperforms existing state-of-the-art methods both subjectively and objectively.
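A minimal sketch of the low-rank denoising step, assuming similar patches have already been grouped into the rows of a matrix: small singular values are truncated to obtain a rank-k approximation. Patch grouping, the CFA layout and the colour-difference demosaicking stage are omitted, and the energy threshold is an arbitrary illustrative choice.

```python
# Sketch: denoise a group of similar patches by truncated-SVD low-rank approximation.
import numpy as np

def low_rank_denoise(patch_matrix, keep_energy=0.95):
    """patch_matrix: (n_patches, patch_size) rows of similar noisy patches."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, keep_energy)) + 1   # smallest rank keeping energy
    return (U[:, :k] * s[:k]) @ Vt[:k]                  # rank-k approximation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(0, 1, 40), np.ones(16))     # rank-1 "patch group"
    noisy = clean + rng.normal(0, 0.05, clean.shape)
    denoised = low_rank_denoise(noisy, keep_energy=0.9)
    print(round(float(np.abs(noisy - clean).mean()), 4),     # error before denoising
          round(float(np.abs(denoised - clean).mean()), 4))  # error after denoising
```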

Keywords: Denoising, image demosaicking, CFA, low rank approximation, color difference model.

Monday, 22 February 2016 07:13

Enforcement of Rough Fuzzy Clustering Based on Correlation Analysis

 Revathy Subramanion1, Parvathavarthini Balasubramanian2, and Shajunisha Noordeen3

1Research Scholar, Sathyabama University, India

2Department of Master of Computer Applications, Anna University, India

3Post Graduate Scholar, Sathyabama University, India

Abstract: Clustering is a standard approach for analysing data and constructing separated groups of similar items. The most widely used robust soft clustering methods are fuzzy, rough and rough fuzzy clustering. The prominent feature of soft clustering leads to combining rough and fuzzy sets. Rough Fuzzy C-Means (RFCM) incorporates the lower approximation and boundary estimation of rough sets, together with the fuzzy membership of fuzzy sets, into the c-means algorithm; however, the widespread RFCM requires more computation. To avoid this, this paper proposes the Fuzzy to Rough Fuzzy Link Element (FRFLE), which is used as a key factor to derive the rough fuzzy clustering from the fuzzy clustering result. Experiments with synthetic, standard and different benchmark datasets show the automation process of the FRFLE value, and a comparison between the results of general RFCM and RFCM using FRFLE is presented. Moreover, the performance analysis shows that the proposed RFCM algorithm using FRFLE requires less computation time than the traditional RFCM algorithms.

Keywords: Soft clustering, FCM, RCM, RFCM, FRFLE.

Monday, 22 February 2016 07:10

Frequency Model Based Crossover Operators for Genetic Algorithms Applied To the Quadratic Assignment Problem

 

Hachemi Bennaceur and Zakir Ahmed

Department of Computer Science, Al Imam Mohammad Ibn Saud Islamic University, Saudi Arabia

Abstract: The quadratic assignment problem aims to find an optimal assignment of a set of facilities to a set of locations that minimizes an objective function depending on the flow between facilities and the distance between locations. In this paper, we investigate a Genetic Algorithm (GA) using new crossover operators to guide the search towards unexplored regions of the search space. First, we define a frequency model which keeps in memory a frequency value for each pair of facility and location. Then, relying on the frequency model, we propose three new crossover operators to enhance the genetic algorithm for the quadratic assignment problem. The first and second, called Highest Frequency crossover (HFX) and Greedy Highest Frequency crossover (GHFX), are based only on the frequency values, while the third, called Highest Frequency Minimum Cost crossover (HFMCX), combines the frequency values with the cost induced by assigning facilities to locations. Experimental results comparing the proposed crossover operators to three efficient crossover operators, namely One Point crossover (OPX), Swap Path crossover (SPX) and Sequential Constructive crossover (SCX), show the effectiveness of our proposed operators in terms of both solution quality and computational time.
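A minimal sketch, loosely following the frequency-model idea and not the paper's exact HFX/GHFX/HFMCX operators: a frequency matrix counts how often each facility has been assigned to each location, and the crossover builds a child by preferring, for each facility, the parent's location that has been explored less often, repairing duplicates at the end.

```python
# Sketch: a frequency-guided crossover for QAP permutations (facility -> location).
import numpy as np

def frequency_crossover(parent1, parent2, freq, rng=None):
    """parent1, parent2: permutations (facility i -> location p[i]); freq: (n, n) counts."""
    rng = rng or np.random.default_rng()
    n = len(parent1)
    child = [-1] * n
    used = set()
    for fac in rng.permutation(n):
        options = [loc for loc in (parent1[fac], parent2[fac]) if loc not in used]
        if options:
            loc = min(options, key=lambda l: freq[fac, l])   # less-explored assignment
        else:
            loc = min(set(range(n)) - used, key=lambda l: freq[fac, l])  # repair
        child[fac] = loc
        used.add(loc)
    for fac, loc in enumerate(child):                        # update exploration counts
        freq[fac, loc] += 1
    return child

if __name__ == "__main__":
    n = 5
    freq = np.zeros((n, n), dtype=int)
    p1, p2 = [0, 1, 2, 3, 4], [4, 3, 2, 1, 0]
    print(frequency_crossover(p1, p2, freq, np.random.default_rng(0)))
```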

Keywords: Frequency model, quadratic assignment problem, GA, HFX.

 

Monday, 22 February 2016 07:04

Analysis of Hybrid Router-Assisted Reliable Multicast Protocols in Lossy Networks


Lakhdar Derdouri1, Congduc Pham2, and Mohamed Benmohammed3

1ReLa(CS)2 Laboratory, Larbi Ben M’hidi, Algeria

2LIUPPA Laboratory, Pau University, France

3LIRE Laboratory, Constantine 2 University, Algeria

Abstract: Router-assisted concepts have been proposed in many research areas, including reliable multicast protocols. These concepts can limit the implosion problem and provide repair locality in an effective way by assigning the role of local repair to the specific router closest to the point of packet loss. Several router-assisted reliable multicast protocols have been proposed in the literature. However, the extent of the reliability benefit of combining the sender-initiated and receiver-initiated protocol classes is not known. This paper quantifies the reliability gain of combining classes for reliable multicasting in lossy networks. We define the delivery delay, the bandwidth consumption and the buffer requirements as the performance metrics for reliability. We then use simulations to study the impact of multicast group size and loss rate on the performance of combining protocol classes. Our numerical results show that combining classes significantly improves the delivery delay, reduces the consumption of network bandwidth and minimizes the buffer size at the routers compared to the receiver-initiated class alone. The performance gains increase as the size of the network and the loss rate increase, making the combined-classes approach more scalable with respect to these parameters.

Keywords: Router-assisted, reliable multicast, sender-initiated, receiver-initiated, delay, bandwidth, buffer requirements.

 

Monday, 22 February 2016 06:47

A Novel Authentication Mechanism Protecting Users’ Privacy in Pervasive Systems

 

Mohammed Djedid and Abdallah Chouarfia

Faculty of Science, University of Sciences and Technology of Oran-Mohammed Boudiaf-, Algeria

Abstract: Transparency of the system and its integration into the natural environment of the user are some of the important features of pervasive computing. But these characteristics, considered the strongest points of pervasive systems, are also their weak points in terms of the user's privacy. Privacy in pervasive systems involves more than the confidentiality of communications and concealing the identity of virtual users. The physical presence and behaviour of the user in the pervasive space cannot be completely hidden and can reveal his/her identity and affect his/her privacy. This paper shows that applying state-of-the-art techniques for protecting the user's privacy is still insufficient. A new solution named the shadow protocol is proposed, which allows users to authenticate and interact with the surrounding devices within a ubiquitous computing environment while preserving their privacy.

Keywords: Pervasive systems, identification/authentication, privacy.
