March 2018, No. 2
A Message from the Editor

Idle Time Estimation for Bandwidth-Efficient Synchronization in Replicated Distributed File System

Fidan Kaya Gülağız, Süleyman Eken, Adnan Kavak, and Ahmet Sayar

Department of Computer Engineering, Kocaeli University, Turkey

Abstract: Synchronization is a promising approach to solving consistency problems in replicated distributed file systems. Synchronization can be repeated periodically, with a fixed time interval or with an interval that is adjusted adaptively. In this paper, we propose a policy-based, performance-efficient distributed file synchronization approach in which synchronization occurs at varying, adaptively adjusted time intervals. The study is based on tracing network idle times by measuring and clustering Round Trip Time (RTT) values. K-means clustering is used to classify RTT values as idle, normal, or busy. To estimate the most suitable synchronization intervals, the measured RTT values are assigned to these classes with an algorithm similar to the Additive-Increase/Multiplicative-Decrease (AIMD) feedback control of the Transmission Control Protocol (TCP). The efficiency and feasibility of the proposed technique are examined in a distributed file synchronization application within the scope of the Fatih project, one of the most important educational projects in Turkey.
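The idle-time detection idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the step size, back-off factor, interval bounds, and the exact mapping of the idle/busy classes onto additive and multiplicative adjustments are assumptions.

```python
import random

def kmeans_1d(values, k=3, iters=100):
    """Simple 1-D k-means on RTT samples; returns sorted centers
    (idle < normal < busy)."""
    centers = sorted(random.sample(values, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = sorted(sum(g) / len(g) if g else centers[i]
                         for i, g in enumerate(groups))
    return centers

def next_interval(interval, rtt, centers, step=5.0, factor=2.0, lo=5.0, hi=300.0):
    """AIMD-style update of the synchronization interval (seconds)
    from one RTT sample."""
    idle_c, _, busy_c = centers
    nearest = min(centers, key=lambda c: abs(rtt - c))
    if nearest == idle_c:   # network idle: synchronize sooner (additive change)
        return max(lo, interval - step)
    if nearest == busy_c:   # network busy: back off multiplicatively
        return min(hi, interval * factor)
    return interval         # normal load: keep the current interval
```

A synchronizer would feed each measured RTT through `next_interval` to schedule the next transfer into an idle period of the link.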

Keywords: Idle time detection algorithm, cloud traffic, round trip time, K-means clustering, distributed file synchronization, policy-based synchronization.

Received October 4, 2015; accepted January 3, 2016


Image Processing in Differential Digital Holography (DDH)

Kresimir Nenadic, Tomislav Galba, and Irena Galic

Faculty of Electrical Engineering, Computer Science and Information Technology, University of Osijek, Croatia

Abstract: Dust accumulating on Charge-Coupled Device/Complementary Metal-Oxide-Semiconductor (CCD/CMOS) sensors can cause problems in detecting defects on observed objects in some industrial production settings. This paper describes Differential Digital Holography (DDH) and the observed effect of cancelling the negative impact of dust on the optical sensor. The laboratory setup for recording digital holograms is described and shown graphically later in the paper, and the DDH method is presented step by step. Furthermore, the negative effect of dust accumulating on the CCD/CMOS sensor, and its cancellation by the DDH method, are explained. The DDH method comprises both hardware and software parts: digital hologram recording takes place in hardware, while all image (i.e., digital hologram) processing is performed by intensive calculations on the processor. Experiments were conducted and graphical results are shown.

Keywords: CCD/CMOS image sensors, digital holography, dust, holographic optical components, image processing.

Received April 15, 2015; accepted November 29, 2016


An Optimized Model for Visual Speech Recognition Using HMM

Sujatha Paramasivam1 and Radhakrishnan Murugesanadar2

1Department of Computer Science and Engineering, Sudharsan Engineering College, India

2Department of Civil Engineering, Sethu Institute of Technology, India

Abstract: Visual Speech Recognition (VSR) identifies spoken words from visual data alone, without the corresponding acoustic signals. It is useful in situations where conventional audio processing is ineffective (e.g., very noisy environments) or impossible (e.g., when audio signals are unavailable). In this paper, an optimized model for VSR is introduced that uses a simple geometric projection method for mouth localization, reducing computation time. A 16-point distance method and a chain code method are used to extract the visual features, and their recognition performance is compared using a Hidden Markov Model (HMM) classifier. To optimize the model, the most prominent features are selected from a large set of extracted visual attributes using the Discrete Cosine Transform (DCT). The experiments were conducted on an in-house database of 10 digits (1 to 10) taken from 10 subjects and tested with 10-fold cross validation. The model is also evaluated using the metrics specificity, sensitivity, and accuracy. Unlike other models in the literature, the proposed method is more robust to subject variations, with high sensitivity and specificity for the digits 1 to 10. The results show that the combination of the 16-point distance method and DCT gives better results than the 16-point distance method or the chain code method alone.

Keywords: Visual speech recognition, feature extraction, discrete cosine transform, chain code, hidden markov model.

Received March 20, 2015; accepted August 31, 2015


A Fuzzy Based Matrix Methodology for Evaluation and Ranking of Data Warehouse Conceptual Models Metrics

Naveen Dahiya1, Vishal Bhatnagar2, and Manjeet Singh3

1Maharaja Surajmal Institute of Technology, C-4, Janakpuri, India

2Ambedkar Institute of Advanced Communication Technology and Research, India

3YMCA University of Science and Technology, Sector-6, India

Abstract: The authors present a methodology for ranking data warehouse conceptual model metrics based on the opinions of experts, using a fuzzy inference technique. The fuzzy-based approach yields a precise ranking methodology owing to its ability to handle the imprecise data involved in ranking metrics and the ambiguity involved in the expert decision-making process. The proposed work ranks quality metrics already proposed and validated by Manuel Serrano along certain identified parameters, based on expert opinion and on evaluation of a criteria matrix using the permanent function. The results obtained are also compared with the actual expert ranking; they are better because imprecise human thinking is taken into consideration during the calculation, giving more realistic results.

Keywords: Fuzzy, data warehouse, conceptual models, quality metrics, criteria matrix.

Received October 23, 2014; accepted July 7, 2015


DragPIN: A Secured PIN Entry Scheme to Avert


Rajarajan Srinivasan

 School of Computing, SASTRA University, India

Abstract: Personal Identification Numbers (PINs) are widely used to authenticate users for financial transactions. PINs are entered at Automatic Teller Machines (ATMs), for card payments at Point of Sale (POS) counters, and for e-banking services. When PINs are keyed in by users, they are vulnerable to shoulder surfing and keylogging attacks. Entering PINs through virtual keyboards mitigates keylogging attacks but elevates the risk of shoulder surfing. A number of shoulder-surfing-resistant keyboard schemes have been proposed, but many of them offer inadequate security and poor usability, and they demand substantial user intelligence, training, memory, or additional devices for entering the PIN. Keeping in mind that securing the PIN should not come at the cost of user inconvenience, a new scheme based on key sliding is proposed in this paper. Two variations of the scheme are presented, based on manual and automatic sliding of keys and on indirect user entry of PINs. Our proposed schemes are simple and easy to adopt, and they are sufficiently strong against attacks. Our extensive analysis and user study of the schemes demonstrate their security and usability.

Keywords: PIN, shoulder surfing, keylogging, virtual keyboard, user authentication, e-banking, man-in-the-middle attacks.

Received September 24, 2014; accepted August 12, 2015


Fuzzy Logic based Decision Support System for Component Security Evaluation

Shah Nazir1, Sara Shahzad1, Saeed Mahfooz1, and Muhammad Nazir2

1Department of Computer Science, University of Peshawar, Pakistan

2Institute of Business and Management Sciences, University of Agriculture, Pakistan

Abstract: Software components are imperative parts of a system and play a fundamental role in its overall function. A component is considered secure if it provides a high degree of security, which shields it against unauthorized use: unauthorized users may otherwise access and modify components within a system, and such access and modification ultimately affect the functionality and efficiency of the system. With the increase in software development activities, the security of software components is becoming an important issue. In this study, a fuzzy logic based model is presented that handles the ISO/IEC 18028-2 security attributes for component security evaluation. For this purpose, an eight-input, single-output model based on the Mamdani fuzzy inference system is proposed. This component security evaluation model helps software engineers during component selection under conditions of uncertainty and ambiguity.
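A Mamdani-style evaluation can be sketched with triangular memberships and a tiny rule base. This is a toy two-input version for illustration only; the paper's model has eight ISO/IEC 18028-2 inputs, and the membership shapes, rules, and singleton-centroid defuzzification used here are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_security(confidentiality, integrity):
    """Toy Mamdani inference: two attributes in [0, 1] -> security score."""
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    # rule base: AND = min; consequents are singletons at 0.2 / 0.5 / 0.8
    rules = [
        (min(low(confidentiality), low(integrity)), 0.2),    # weak security
        (min(low(confidentiality), high(integrity)), 0.5),   # moderate
        (min(high(confidentiality), low(integrity)), 0.5),   # moderate
        (min(high(confidentiality), high(integrity)), 0.8),  # strong security
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, out in rules)
    return num / den if den else 0.0
```

With eight inputs the rule base grows accordingly, but the fuzzify/infer/defuzzify pipeline stays the same.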

Keywords: Software component, component security, fuzzy logic.

Received May 1, 2015; accepted November 29, 2015


Revisiting Constraint Based Geo Location: Improving Accuracy through Removal of Outliers

Sameer Qazi and Muhammad Kadri

College of Engineering, Karachi Institute of Economics and Technology, Pakistan

Abstract: IP-based geolocation has become increasingly important to network administrators for a number of reasons. The first and foremost is to determine cyber attackers' geographical locations in order to prevent, contain, and thwart further attacks. A secondary reason may be to target advertisements to users based on their geographical location. IP addresses indicate a user's location only at a coarse level, such as the country or city. For a more refined estimate of a user's location within a city, network measurements as simple as the round trip times of pings to the user can give a more precise estimate. Recently, several researchers have proposed constraint-based optimization approaches that estimate a user's location from a set of landmark hosts using multi-lateration. In this work, we show that the optimization in such schemes is sometimes plagued by outliers, which can greatly deteriorate the estimated location. We provide anecdotal evidence for this, propose ways to alleviate the problem by detecting, masking, or removing the outlier measurements, and show that location error improvements of several hundreds or thousands of kilometres are possible.
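The multi-lateration-with-outlier-removal idea can be sketched as a least-squares fit followed by removal of the worst-fitting constraint. This is an illustrative sketch, not the authors' scheme: the gradient-descent fit, the 2x-median rejection rule, and all constants are assumptions.

```python
import math

def fit_position(landmarks, iters=3000, lr=0.05):
    """Least-squares multi-lateration; landmarks = [(x, y, measured_distance)]."""
    x = sum(l[0] for l in landmarks) / len(landmarks)
    y = sum(l[1] for l in landmarks) / len(landmarks)
    for _ in range(iters):
        gx = gy = 0.0
        for lx, ly, d in landmarks:
            r = math.hypot(x - lx, y - ly)
            if r == 0.0:
                continue
            e = r - d                      # signed violation of this constraint
            gx += 2.0 * e * (x - lx) / r   # gradient of e**2 w.r.t. x
            gy += 2.0 * e * (y - ly) / r
        x -= lr * gx
        y -= lr * gy
    return x, y

def fit_without_worst(landmarks):
    """Mask the worst-fitting measurement when it dwarfs the rest, then refit."""
    x, y = fit_position(landmarks)
    res = [abs(math.hypot(x - lx, y - ly) - d) for lx, ly, d in landmarks]
    med = sorted(res)[len(res) // 2]
    if len(landmarks) > 3 and max(res) > 2.0 * med:
        kept = [l for l, e in zip(landmarks, res) if e < max(res)]
        return fit_position(kept)
    return x, y
```

In practice each landmark distance would be derived from an RTT-to-distance conversion, which is exactly where outlier measurements enter.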

Keywords: Geo location of internet hosts, constraint-based optimization, location error.

Received November 17, 2014; accepted March 23, 2015


Hybrid Algorithm with Variants for Feed Forward Neural Network

Thinakaran Kandasamy1 and Rajasekar Rajendran2

1Sri Venkateswara College of Engineering and Technology, Anna University, India

2Excel Engineering College, Anna University, India

Abstract: The Levenberg-Marquardt back-propagation algorithm, as a Feed forward Neural Network (FNN) training method, has some limitations associated with overfitting and local optimum problems, and it is suitable only for small networks. This research uses a hybrid evolutionary algorithm based on Particle Swarm Optimization (PSO) for FNN training. The algorithm includes a number of components that give it an advantage in the experimental study. Variants such as the size of the swarm, the acceleration coefficients, the constriction factor, and the velocity of the swarm are proposed to improve convergence speed as well as accuracy. Integrating these components in different ways in the hybrid algorithm produces effective optimization of the back-propagation algorithm. This hybrid evolutionary algorithm based on PSO can also be used for complex neural network structures.
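The constriction-factor variant mentioned above can be illustrated with a generic PSO minimizer; in the paper it would minimize the FNN training error, while here it is applied to an arbitrary objective. The swarm size, iteration count, and Clerc-Kennedy coefficients below are illustrative assumptions.

```python
import math
import random

def pso_minimize(f, dim, swarm=20, iters=300, c1=2.05, c2=2.05):
    """PSO with the Clerc-Kennedy constriction factor (one studied variant)."""
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest = pbest[min(range(swarm), key=lambda i: pbest_f[i])][:]
    gbest_f = f(gbest)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # constriction chi damps the velocity, avoiding divergence
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest_f[i], pbest[i] = fi, pos[i][:]
                if fi < gbest_f:
                    gbest_f, gbest = fi, pos[i][:]
    return gbest, gbest_f
```

For FNN training, `dim` would be the total number of weights and `f` would decode a particle into the network and return its training error.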

Keywords: Back propagation, hybrid algorithm, levenberg-marquardt, particle swarm optimization, variants of PSO algorithm.

Received August 31, 2014; accepted April 12, 2015


Pair Programming for Software Engineering Education: An Empirical Study

Kavitha Karthiekheyan1, Irfan Ahmed2, and Jalaja Jayalakshmi1

1Department of Computer Applications, Kumaraguru College of Technology, India

2Department of Computer Applications, Sri Krishna College of Engineering and Technology, India

Abstract: As an iterative and incremental methodology, agile software development has helped a great deal in evolving solutions through self-organizing, cross-functional teams. Pair programming is an agile software development technique in which two programmers work together at one computer to develop the required software. This paper reports the results of a pair programming exercise carried out with one hundred and twelve postgraduate students, who developed small applications as part of their software development laboratory course at Kumaraguru College of Technology (KCT) during the academic years 2012-2013 and 2013-2014. The objective of this research is to investigate the effect of adopting pair programming as a pedagogical tool in an Indian higher-education setting. Purposeful pair programming modules were deployed in various phases of software development, and the results revealed that pair programming is not only a useful approach to teaching computer programming but also facilitates effective knowledge sharing among students. Further, the effectiveness of pair programming was realized to a greater extent during the design and coding phases of software development. Practicing pair programming also enables students to develop their collaborative skills, which are crucial in an industrial working environment.

Keywords: Agile software development, collaborative learning, knowledge sharing, pair programming, software engineering, education.

Received December 13, 2014; accepted April 26, 2015


Comparison of Dimension Reduction Techniques on High Dimensional Datasets

Kazim Yildiz1, Yilmaz Camurcu2, and Buket Dogan1

1Department of Computer Engineering, Marmara University, Turkey

2Department of Computer Engineering, Fatih Sultan Mehmet Waqf University, Turkey

Abstract: High dimensional data has become very common with the rapid growth of data stored in databases and other repositories, making clustering an urgent problem. Well-known clustering algorithms are inadequate in high dimensional spaces because of the so-called curse of dimensionality, so dimensionality reduction techniques have been used to obtain accurate clustering results and to improve clustering time. In this work, different dimensionality reduction techniques were combined with the Fuzzy C-Means clustering algorithm, with the aim of reducing the complexity of high dimensional datasets and generating more accurate clustering results. The results were compared in terms of cluster purity, cluster entropy, and mutual information. The dimension reduction techniques were also compared in terms of Central Processing Unit (CPU) usage, memory usage, and elapsed CPU time. The experiments showed that the proposed approach produces promising results in high dimensional spaces.
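The Fuzzy C-Means step can be sketched in plain Python, applied after a dimension reduction technique has projected the data down (here to two dimensions). The fuzzifier m and iteration count are conventional defaults, not values from the paper.

```python
import math
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=100, eps=1e-9):
    """Plain Fuzzy C-Means on 2-D points; returns (centers, memberships)."""
    n = len(points)
    u = [[random.random() + eps for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]      # rows sum to 1
    centers = [(0.0, 0.0)] * c
    for _ in range(iters):
        for j in range(c):                  # centers: means weighted by u^m
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers[j] = tuple(sum(w[i] * points[i][d] for i in range(n)) / tot
                               for d in range(2))
        for i in range(n):                  # memberships from inverse distances
            dist = [max(eps, math.dist(points[i], centers[j])) for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((dist[j] / dist[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return centers, u
```

Cluster purity and entropy would then be computed from the hardened labels (each point assigned to its maximum-membership cluster).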

Keywords: High dimensional data, clustering, dimensionality reduction, data mining.

Received October 23, 2014; accepted December 21, 2015


Recognition of Spoken Bengali Numerals Using MLP, SVM, RF Based Models with PCA Based Feature Summarization

Avisek Gupta and Kamal Sarkar

Department of Computer Science and Engineering, Jadavpur University, India

Abstract: This paper presents a method of automatic recognition of Bengali numerals spoken in noise-free and noisy environments by multiple speakers with different dialects. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction, and Principal Component Analysis is used as a feature summarizer to form the feature vector from the MFCC data for each digit utterance. Finally, we use Support Vector Machines, Multi-Layer Perceptrons, and Random Forests to recognize the Bengali digits and compare their performance. In our approach, we treat each digit utterance as a single indivisible entity, and we attempt to recognize it using features of the digit utterance as a whole. This approach can therefore be easily applied to spoken digit recognition tasks for other languages as well.
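The PCA-based feature summarization can be sketched as follows: an utterance yields a variable number of MFCC frames, and the first principal component of those frames is one simple way to obtain a fixed-length vector. This is an illustrative reading of the summarization step, not the authors' exact procedure.

```python
import math

def summarize_frames(frames, iters=200):
    """First principal component of a frame sequence, via power iteration,
    as one fixed-length summary vector for a whole utterance."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[k] for f in frames) / n for k in range(d)]
    centered = [[f[k] - mean[k] for k in range(d)] for f in frames]
    # d x d covariance matrix of the centered frames
    cov = [[sum(row[a] * row[b] for row in centered) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d                      # power iteration for the top eigenvector
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return v
```

The resulting fixed-length vectors are what the SVM, MLP, and RF classifiers would consume, one per digit utterance.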

Keywords: Speech recognition, isolated digits, principal component analysis, support vector machines, multi-layered perceptrons, random forests.

Received July 2, 2014; accepted March 15, 2015


Assessment of Ensemble Classifiers Using the Bagging Technique for Improved Land Cover Classification of Multispectral Satellite Images

Hassan Mohamed1, Abdelazim Negm1, Mohamed Zahran2, and Oliver Saavedra3

1Department of Environmental Engineering, Egypt-Japan University of Science and Technology, Egypt

2Department of Geomatics Engineering, Benha University, Egypt

3Department of Civil Engineering, Tokyo Institute of Technology, Japan

Abstract: This study evaluates an approach to Land-Use Land-Cover (LULC) classification using multispectral satellite images. The proposed approach uses the Bagging Ensemble (BE) technique with Random Forest (RF) as the base classifier, improving classification performance by reducing errors and prediction variance. A pixel-based supervised classification technique is developed with Principal Component Analysis (PCA) for feature selection from the attributes available in a Landsat 8 image. These attributes include the coastal, visible, near-infrared, short-wave infrared, and thermal bands, in addition to the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). The study is performed in a heterogeneous coastal area divided into five classes: water, vegetation, grass-lake-type, sand, and building. To evaluate the classification accuracy of BE with RF, it is compared to BE with Support Vector Machine (SVM) and Neural Network (NN) base classifiers. The results are evaluated in terms of commission errors, omission errors, and overall accuracy. BE with RF outperformed the SVM and NN base classifiers with 93.3% overall accuracy, while BE with SVM and NN yielded 92.6% and 92.1% overall accuracy, respectively. In addition, omission and commission errors were reduced by using BE with the RF and NN classifiers.
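Bagging itself is straightforward to sketch: train each base classifier on a bootstrap resample and combine by majority vote. The decision-stump base learner and 1-D data below are stand-ins for illustration; the paper uses RF, SVM, and NN base classifiers on multispectral pixel features.

```python
import random

def train_stump(data):
    """Best threshold classifier for 1-D samples [(x, label)], labels 0/1."""
    best = None
    for thr, _ in data:
        for sense in (0, 1):  # sense 0: predict 1 when x >= thr; sense 1: flipped
            correct = sum(1 for x, y in data
                          if y == (int(x >= thr) if sense == 0 else int(x < thr)))
            if best is None or correct > best[0]:
                best = (correct, thr, sense)
    _, thr, sense = best
    return lambda x: int(x >= thr) if sense == 0 else int(x < thr)

def bagging_predictor(data, n_estimators=11):
    """Bagging: train stumps on bootstrap resamples; majority vote to predict."""
    models = [train_stump([random.choice(data) for _ in data])
              for _ in range(n_estimators)]
    return lambda x: int(sum(m(x) for m in models) * 2 > n_estimators)
```

Because each model sees a perturbed sample of the training data, the vote averages out individual errors, which is the variance-reduction effect the abstract refers to.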

Keywords: Bagging, classification, ensemble, Landsat satellite imagery.

Received May 25, 2015; accepted January 13, 2016


Real Time Facial Expression Recognition for Nonverbal Communication

Md. Sazzad Hossain1 and Mohammad Abu Yousuf2

1Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Bangladesh

2Institute of Information Technology, Jahangirnagar University, Bangladesh

Abstract: This paper presents a system that can understand and react appropriately to human facial expressions for nonverbal communication. The notable events this system handles are the detection of human emotions, eye blinking, head nodding, and head shaking. The key step is to recognize a human face and assign acceptable labels. The system uses the recently developed OpenCV Haar feature-based cascade classifier for face detection because it can detect faces at any angle. Emotion recognition is divided into several phases: segmentation of facial regions, extraction of facial features, and classification of features into emotions. The first phase identifies facial regions from real-time video; the second identifies features that can be used to recognize facial expressions; finally, an artificial neural network classifies the identified features into five basic emotions. The system also detects eye blinking accurately, and it works in active scenes where the eyes move freely and the head and the camera move independently in all directions. Finally, the system identifies natural head nodding and shaking in real time using optical flow motion tracking, and finds the direction of the head during head movement for nonverbal communication.

Keywords: Haar-cascade classifier, facial expression, artificial neural network, template matching, Lucas-Kanade optical flow.

Received April 4, 2015; accepted November 29, 2015


Immunity Inspired Cooperative Agent based Security System

Praneet Saurabh and Bhupendra Verma

Department of Computer Science and Engineering, Technocrats Institute of Technology, India

Abstract: The Artificial Immune System (AIS) has evolved substantially since its inception and is used to solve complex problems in different domains, one of which is computer security. Computer security has emerged as a key research area because of ever-growing attacks and attack methodologies. Various security concepts and products have been developed to overcome this alarming situation, but these systems somehow fall short of providing the desired protection against new and ever-increasing threats. The AIS, inspired by the Human Immune System (HIS), is considered an excellent source of inspiration for developing computer security solutions, since the HIS protects the body from various external and internal threats very effectively. This paper presents the Immunity Inspired Cooperative Agent based Security System (IICASS), which uses an Enhanced Negative Selection Algorithm (E-RNS) that incorporates fine tuning of detectors and detector power in the negative selection algorithm. These features let IICASS evolve and facilitate better and more correct coverage of self and non-self. Collaboration and communication between different agents make the system dynamic and adaptive, which helps it discover anomalies correctly, with a degree of severity. Experimental results demonstrate that IICASS shows remarkable resilience in detecting novel unseen attacks with a lower false positive rate.
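The negative selection idea at the core of such systems can be sketched in a few lines: candidate detectors are generated at random and kept only if they match no "self" sample, and any sample later covered by a detector is flagged as non-self. The 2-D feature space, detector radius, and counts below are illustrative assumptions, not the E-RNS tuning used by IICASS.

```python
import math
import random

def generate_detectors(self_set, n_detectors=200, radius=1.5, max_tries=100000):
    """Negative selection: keep random detectors that cover no self sample."""
    detectors = []
    tries = 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        d = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        # a candidate survives only if it lies outside every self region
        if all(math.dist(d, s) > radius for s in self_set):
            detectors.append(d)
    return detectors

def is_anomalous(sample, detectors, radius=1.5):
    """A sample is non-self when some detector covers it."""
    return any(math.dist(sample, d) <= radius for d in detectors)
```

E-RNS's "fine tuning of detectors and detector power" would correspond here to adapting each detector's position and radius rather than using one fixed radius.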

Keywords: Anomaly, human immune system, artificial immune system, agent.

Received June 3, 2014; accepted May 24, 2016


Incorporating Unsupervised Machine Learning Technique on Genetic Algorithm for Test Case Optimization

Maragathavalli Palanivel and Kanmani Selvadurai 

 Department of Information Technology, Pondicherry Engineering College, India

Abstract: Search-based software testing uses random or directed search techniques to address testing problems. This paper discusses test case selection and prioritization by combining genetic and clustering algorithms. Test cases are generated using a genetic algorithm, and prioritization is performed using a group-wise clustering algorithm that assigns priorities to the generated test cases, thereby reducing the size of the test suite. Test case selection chooses suitable test cases in order of their importance with respect to the test goals. The objectives considered for criteria-based optimization are to optimize the test suite for better condition coverage, to improve the fault detection capability, and to minimize execution time. Experimental results show significant improvement over the existing clustering technique: condition coverage up to 93%, fault detection capability up to 85.7%, and a minimal execution time of 4100 ms.

Keywords: Test case selection and prioritization, group-wise clustering.

Received August 14, 2014; accepted August 31, 2015


A Signaling System for Quality of Service (QoS)-Aware Content Distribution in Peer-to-Peer Overlay Networks

Sasikumar Kandasamy1, Sivanandam Natarajan2, and Ayyasamy Sellappan3

1Department of Computer Science and Engineering, Tamilnadu College of Engineering, India

2Department of Computer Science and Engineering, Karpagam College of Engineering, India

3Department of Information Technology, Tamilnadu College of Engineering, India

Abstract: Peers are used to limit and expand the available facilities for different kinds of devices, which should be able to fetch data according to user demand and the available resources. Several factors, such as latency, bandwidth, memory size, CPU speed, and reliability, can affect the Quality of Service (QoS) of a peer-to-peer (P2P) network. In this paper, we propose a signaling system for QoS-aware content distribution in peer-to-peer overlay networks, where the signaling system is controlled through a set of data so that it can operate dynamically. The flow of signals in the system enables devices to choose their own paths according to application requirements. The system is able to reduce traffic and utilize the available resources.

Keywords: Signaling, bandwidth, delay, transmission, buffer, cache.

Received October 3, 2013; accepted July 6, 2014


Cipher Text Policy Attribute Based Broadcast Encryption for Multi-Privileged Groups

Muthulakshmi Angamuthu1, Akshaya Mani2, and Anitha Ramalingam3

1Department of Mathematics, PSG College of Technology, India

2Department of Computer Science, Georgetown University, USA

3Department of Applied Mathematics and Computational Sciences, PSG College of Technology, India

Abstract: In the current globalization scenario, many group communication applications have become vital, and users subscribe not to a single resource but to multiple resources, ending up in multi-privileged groups. In some group communication applications, it is desirable to encrypt content without exact knowledge of the set of intended receivers. Attribute based encryption offers this ability, enforcing access policies defined on attributes within the encryption process. In such schemes, the encryption keys and/or ciphertexts are labelled with sets of descriptive attributes defined for the system users, and a particular user's private key can decrypt a ciphertext only if the two match. This paper presents a ciphertext policy attribute based broadcast encryption scheme for multi-privileged groups of users. The proposed scheme is proved secure in the random oracle model.

Keywords: Attribute based broadcast encryption, decisional bilinear diffie hellman problem and decisional diffie hellman problem, multi-privileged groups, cipher text policy.

Received June 8, 2014; accepted March 15, 2015


Progressive Visual Cryptography with Friendly and Size Invariant Shares

Young-Chang Hou1, Zen-Yu Quan2, and Chih-Fong Tsai2

1Department of Information Management, Tamkang University, Taiwan

2Department of Information Management, National Central University, Taiwan

Abstract: Visual cryptography is an important data encoding method in which a secret image is encoded into n pieces of noise-like shares. As long as over k of the n shares are stacked, the secret image can be decoded directly by the naked human eye; this cannot be done with fewer than k shares. This is called the (k, n)-threshold Visual Secret Sharing (VSS) scheme. Progressive Visual Cryptography (PVC) differs from traditional VSS in that the hidden image is gradually decoded by superimposing two or more shares: as more and more shares are stacked, the outline of the hidden image becomes clearer. In this study, we develop an image sharing method based on the theory of PVC, which utilizes meaningful, non-expanded shares. Using four elementary matrices (C0-C3) as building blocks, our dispatching matrices (M0-M3) are designed to be expandable so that the contrast in both the shares and the restored image can be adjusted to user needs. In addition, the recovered pixels in the black region of the secret image are guaranteed to be black, which improves the display quality of the restored image; the image content can thus be displayed more clearly than with previous methods.
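For contrast with the progressive, size-invariant construction studied here, the classic (2, 2) scheme with 2x2 pixel expansion shows how stacking alone decodes a secret. This sketch is background illustration only, not the paper's method.

```python
import random

def make_shares(secret):
    """Classic (2, 2) visual secret sharing with 2x2 pixel expansion.
    secret: 2-D list of 0 (white) / 1 (black); returns two noise-like shares."""
    patterns = [(1, 0, 0, 1), (0, 1, 1, 0)]   # complementary subpixel layouts
    h, w = len(secret), len(secret[0])
    s1 = [[0] * (2 * w) for _ in range(2 * h)]
    s2 = [[0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            p = random.choice(patterns)
            # white pixel: same pattern on both shares (stack stays half black);
            # black pixel: complementary pattern (stack becomes fully black)
            q = p if secret[r][c] == 0 else tuple(1 - b for b in p)
            for k in range(4):
                s1[2 * r + k // 2][2 * c + k % 2] = p[k]
                s2[2 * r + k // 2][2 * c + k % 2] = q[k]
    return s1, s2

def stack(s1, s2):
    """Overlaying transparencies: a subpixel is black if black on either share."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

Each share block is half black regardless of the secret, so a single share leaks nothing; the paper's contribution is achieving such stacking-based decoding without the pixel expansion and with meaningful-looking shares.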

Keywords: Visual cryptography, progressive visual cryptography, secret sharing, unexpanded share, meaningful (Friendly) share.

Received April 8, 2015; accepted October 7, 2015


A Hybrid Template Protection Approach using Secure Sketch and ANN for Strong Biometric Key Generation with Revocability Guarantee

Tran-Khanh Dang1,2, Van-Quoc-Phuong Huynh1,2, and Hai Truong2

1Institute for Application Oriented Knowledge Processing (FAW), Johannes Kepler University Linz, Austria

2Computer Science and Engineering, HCMC University of Technology, Vietnam

Abstract: Nowadays, biometric recognition is widely applied in security applications because of its safety and convenience. However, unlike passwords or tokens, biometric features are naturally noisy and cannot be revoked once they are compromised. Overcoming these two weaknesses is an essential and principal demand. Using a hybrid approach, we propose a scheme that combines an Artificial Neural Network (ANN) with the Secure Sketch concept to generate strong keys from a biometric trait while guaranteeing revocability, template protection, and noise tolerance. The ANN, with its high noise tolerance, enhances recognition by learning the distinct features of a person and assures the revocable and non-invertible properties of the transformed template. The error correction ability of the Secure Sketch construction significantly reduces the false rejection rate for the enrollee. To assess the scheme's security, the average remaining entropy of the generated keys is measured. Empirical experiments with standard datasets demonstrate that our scheme achieves a good trade-off between security and recognition performance when applied to face biometrics.

Keywords: Biometric cryptography, biometric template protection, ANN, Secure Sketch, remaining entropy.

Received July 8, 2015; accepted October 7, 2015


Parallel HMM-Based Approach for Arabic Part of Speech Tagging

Ayoub Kadim and Azzeddine Lazrek

Department of Computer Science, Faculty of Science, Cadi Ayyad University, Morocco

Abstract: In this paper, we try to go beyond the classical use of the Hidden Markov Model for part of speech tagging, particularly for the Arabic language. Most available Arabic tagging systems and tagsets are derived from English and do not exploit the linguistic richness of Arabic. Our proposed tagging system consists of two Hidden Markov Models working in parallel: in addition to the main model, a second model serves as a reference for low-probability tags. A dual corpus is required to train both models; to this end, we restructure the Nemlar Arabic corpus and extract a new tagset from diacritics and grammatical rules. The approach is implemented in the Java programming environment, and several experiments are conducted to evaluate it. The promising results of this approach, as well as its limitations, are discussed in depth, and possible future enhancements are highlighted. This work opens the door to new research perspectives, particularly for Arabic language processing and, more generally, for applications of Hidden Markov Models.
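The decoding step underlying both models is standard Viterbi. The toy two-tag example below is illustrative only; the paper's Arabic tagset, trained probabilities, and the parallel consultation of the second model are not reproduced here.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: most likely tag sequence for the word list."""
    # V[t][s] = (best probability of reaching state s at time t, back-pointer)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states)
            V[t][s] = (prob, prev)
    state = max(states, key=lambda s: V[-1][s][0])   # best final state
    path = [state]
    for t in range(len(obs) - 1, 0, -1):             # follow back-pointers
        state = V[t][state][1]
        path.append(state)
    return path[::-1]
```

In the paper's setup, words whose best path falls below a probability threshold in the main model would be re-examined against the second, parallel model.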

Keywords: Part of speech tagging, hidden Markov model, Viterbi algorithm, natural language processing, corpus, arabic language.

Received May 31, 2014; accepted December 21, 2015


Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.