September 2017, No 5

SAK-AKA: A Secure Anonymity Key of Authentication and Key Agreement protocol for LTE network

Shadi Nashwan

Department of Computer Science and Information, Aljouf University, Saudi Arabia

Abstract: The 3rd Generation Partnership Project (3GPP) has proposed the Authentication and Key Agreement (AKA) protocol, called the EPS-AKA protocol, to achieve the security requirements of the Evolved Packet System (EPS) in the Long Term Evolution (LTE) network. Nevertheless, the EPS-AKA protocol still has some drawbacks in the authentication process because it inherits weaknesses from its predecessor protocols. This paper proposes a Secure Anonymity Key of Authentication and Key Agreement (SAK-AKA) protocol for the LTE network to enhance the security level of the EPS-AKA protocol. The proposed protocol uses the same security architecture as EPS-AKA without adding extra cost to the system. Specifically, the SAK-AKA protocol increases the difficulty of defeating the authentication messages by completely concealing the IMSI with perfect forward secrecy. An extensive security analysis proves that the SAK-AKA protocol is secure against the drawbacks of the current EPS-AKA protocol. Moreover, a performance analysis in terms of bandwidth consumption, authentication transmission overhead and storage consumption demonstrates that the SAK-AKA protocol is relatively more efficient than the current EPS-AKA protocol.

Keywords: 3GPP, LTE network, IPsec protocol, UMTS-AKA protocol, EPS-AKA protocol.

Received March 21, 2017; accepted May 28, 2017



An Improved Statistical Model of Appearance under Partial Occlusion

1Qaisar Abbas and 2Tanzila Saba

1College of Computer and Information Sciences, Al Imam Muhammad Ibn Saud Islamic University, Saudi Arabia

2College of Computer and Information Sciences, Prince Sultan University, Saudi Arabia

Abstract: Appearance Models (AMs) are widely used in many applications related to face recognition, expression analysis and computer vision. Despite their popularity, AMs lose accuracy under partial occlusion. To solve the partial occlusion problem, the authors previously developed the Robust Normalization Inverse Compositional Image Alignment (RNICIA) algorithm. However, the RNICIA algorithm is inefficient due to its high complexity, and ineffective due to the poor selection of the robust error function and scale parameter, which depend on a particular training dataset. In this paper, an Improved Statistical Model of Appearance (ISMA) method is proposed to overcome these limitations by integrating a perceptual-oriented uniform Color Appearance Model (CAM) and the Jensen-Shannon Divergence (JSD). To reduce the number of iteration steps, and thereby the computational complexity, the probability distribution of each occluded and un-occluded image region is measured. The ISMA method is tested using a convergence measure on 600 facial images, with the degree of occlusion varying from 10% to 50%. The experimental results indicate that the ISMA method achieves more than 95% convergence compared to the RNICIA algorithm; thus the performance of appearance models under partial occlusion is significantly improved.


Keywords: Computer vision, appearance model, partial occlusion, robust error functions, CIECAM02 appearance model.

Received September 28, 2014; accepted March 2, 2015



Rule Schema Multi-Level for Local Patterns Analysis: Application in Production Field

Salim Khiat1, Hafida Belbachir2, and Sid Rahal3

1Computer Sciences Department, University of Science and Technology Mohamed Boudiaf, Oran, Algeria

2The Science and Technology University USTO, Algeria

3System and Data Laboratory (LSSD)

Abstract: Recently, Multi-Database Mining (MDBM) for association rules has been recognized as an important and timely research area in the Knowledge Discovery in Databases (KDD) community. It consists of mining different databases in order to obtain frequent patterns, which are forwarded to a centralized place for global pattern analysis. Various synthesizing models [8,9,13,14,15,16] have been proposed to build global patterns from the forwarded patterns. It is desired that the rules synthesized from such forwarded patterns closely match the mono-mining results, i.e., the results that would be obtained if all the databases were put together and mined as one. When a pattern is present at a site but fails to satisfy the minimum support threshold, it is not allowed to take part in the pattern synthesizing process. This process can therefore lose some interesting patterns that could help the decision maker make the right decisions. To address this problem, we propose to integrate the users' knowledge in the local and global mining processes. To that end, we describe the users' beliefs and expectations by multi-level rule schemas and integrate them both in local association rule mining and in the synthesizing process. In this situation we obtain true global patterns of selected items, as there is no need to estimate them. Furthermore, a novel Condensed Patterns Tree (CP_TREE) structure is defined to store the candidate patterns for all organization levels, which improves processing time and reduces space requirements. In addition, the CP_TREE structure facilitates the exploration and projection of the candidate patterns at different levels. Finally, we conduct experiments on real-world databases from the production field and demonstrate the effectiveness of the CP_TREE structure in terms of processing time and space requirements.

Keywords: Schema, association rules, exceptional rules, global rules, ontology.

Received July 22, 2014; accepted August 12, 2015



Features Modelling in Discrete and Continuous Hidden Markov Models for Handwritten Arabic Words Recognition

Amine Benzenache1, Hamid Seridi1, and Herman Akdag2

1LabSTIC, University of 8 Mai 1945 of Guelma, Algeria

2LIASD, University Paris 8, France

Abstract: Arabic writing is inherently cursive, difficult to segment and highly variable. To overcome these problems, we propose two holistic approaches for the recognition of handwritten Arabic words from a limited vocabulary, based on Hidden Markov Models (HMMs): discrete with wk-means, and continuous. In the suggested approach, each word of the lexicon is modelled by a discrete or continuous HMM. After a series of pre-processing steps, the word image is segmented from right to left into successive frames of fixed or variable size in order to generate a sequence of statistical and structural parameter vectors, which is submitted to the two classifiers to identify the word. To illustrate the efficiency of the proposed systems, significant experiments are carried out on the IFN/ENIT benchmark database.

Keywords: Recognition of handwritten Arabic words, holistic approach, DHMMs, CHMMs, k-means, wk-means, Viterbi algorithm, modified EM algorithm.



Received September 22, 2014; accepted April 23, 2015



Service-Oriented Process Modelling for Device Control in Future Networks

Muhammad Sohail Khan and DoHyeun Kim

Computer Engineering Department, Jeju National University, South Korea

Abstract: Recent advancements in the fields of electronics, information and communication technologies have paved a pathway towards a world-wide future network of connected smart devices. Researchers from industry and academia are taking ever more interest in the realization of an infrastructure in which sensing and actuating devices interact seamlessly in order to provide valuable services to humankind and to other systems. So far, the major focus of research has been the connectivity, management and control of sensing devices, and little attention has been given to the control of actuating devices in such an environment. This paper presents a generic process model for an actuating-device control service in future networks. A prototype implementation of the proposed model based on the presented platform is described, along with a performance analysis of the proposed model.

Keywords: Process modelling, future networks, device profile, device control.


Received October 16, 2014; accepted April 26, 2015



Generalization of Impulse Noise Removal

Hussain Dawood1, Hassan Dawood2, and Ping Guo3

1Faculty of Computing and Information Technology, University of Jeddah, Saudi Arabia

2Department of Software Engineering, University of Engineering and Technology, Pakistan

3Image Processing and Pattern Recognition Laboratory, Beijing Normal University, China

Abstract: In this paper, a generalization for the identification and removal of impulse noise is proposed. To remove salt-and-pepper noise, an Improved Directional Weighted Median Filter (IDWMF) is proposed, in which the number of optimal directions is increased from four to eight in order to preserve edges and identify the noise effectively. A Modified Switching Median Filter (MSMF) is proposed to replace the identified noisy pixels, in which two special cases are considered. To remove random-valued impulse noise, we propose an efficient identification and removal algorithm named Local Noise Identifier and Multi-texton Removal (LNI-MTR). We use the local statistics of the four neighbouring pixels and the central pixel to identify a noisy pixel in the current sliding window; a pixel identified as noisy is replaced using the multi-texton information in the current sliding window. Experimental results show that the proposed methods can not only identify impulse noise efficiently, but also preserve the detailed information of an image.
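To illustrate the switching idea the abstract builds on, here is a minimal sketch of a basic switching median filter for salt-and-pepper noise. It is not the authors' IDWMF/MSMF; it simply treats extreme grey levels as noise candidates and replaces only those, which is the behaviour the proposed filters refine.

```python
# Minimal sketch of a switching median filter for salt-and-pepper noise.
# Pixels at the extreme intensities (0 or 255) are treated as noise
# candidates and replaced by the median of their 3x3 neighbourhood;
# all other pixels are left untouched, preserving image detail.

def switching_median_filter(image, low=0, high=255):
    """image: 2-D list of grey levels; returns a filtered copy."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            p = image[r][c]
            if p != low and p != high:
                continue  # not a salt-and-pepper candidate
            # collect the 3x3 neighbourhood (clipped at the borders)
            window = [image[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            window.sort()
            out[r][c] = window[len(window) // 2]
    return out
```

A plain median filter would smooth every pixel; switching only the detected candidates is what preserves detail, and the IDWMF/MSMF of the paper replace the crude 0/255 test with directional statistics.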

Keywords: Directional weighted median filter, multi-texton, impulse noise, random-valued impulse noise, salt-and-pepper noise, noise identification, modified switching median filter.


Received September 22, 2014; accepted April 23, 2015



A Novel Approach for Sentiment Analysis of Punjabi Text using SVM

Amandeep Kaur and Vishal Gupta

Department of Computer Science and Engineering, Panjab University, India


Abstract: Opinion mining, or sentiment analysis, aims to identify and classify the sentiments, opinions and emotions expressed in text. Over the last decade, in addition to English, many Indian languages have attracted research interest in this field. For this paper, we compared many approaches developed to date and also reviewed previous research on Indian languages such as Telugu, Hindi and Bengali. We developed a hybrid system for sentiment analysis of Punjabi text by integrating a subjective lexicon, N-gram modelling and a support vector machine. Our work includes the generation of corpus data, a stemming algorithm, the generation of a Punjabi subjective lexicon, feature-set development, and the training and testing of the support vector machine. Our technique achieves good accuracy on the test data. We also reviewed the results of previous approaches to validate the accuracy of our system.
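Two of the building blocks named in the abstract, n-gram extraction and subjective-lexicon scoring, can be sketched in a few lines. The tiny lexicon below is hypothetical and in English for readability; a real system would map stemmed Punjabi words to polarities before feeding the features to the SVM.

```python
# Hedged sketch of two components of the hybrid pipeline: word n-gram
# extraction and a subjective-lexicon polarity score. The lexicon
# entries here are made-up placeholders, not the paper's resource.

def ngrams(tokens, n):
    """Return the list of word n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def lexicon_score(tokens, lexicon):
    """Sum the polarity (+1 / -1 / 0) of each token under the lexicon."""
    return sum(lexicon.get(t, 0) for t in tokens)

tokens = "the movie was very good".split()
bigrams = ngrams(tokens, 2)           # includes ('very', 'good')
lexicon = {"good": 1, "bad": -1}      # hypothetical entries
score = lexicon_score(tokens, lexicon)  # positive overall
```

In a hybrid system such as the one described, the n-gram counts become SVM features while the lexicon score is either an extra feature or a fallback signal.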


Keywords: Sentiment analysis, subjective lexicon, Punjabi language, n-gram modeling, support vector machine.


Received June 17, 2014; accepted December 16, 2014



Combination of Multiple Classifiers for Off-Line Handwritten Arabic Word Recognition

Rachid Zaghdoudi and Hamid Seridi

Laboratory of Science and Information Technologies and Communication, University of 08 May 1945, Algeria

Abstract: This study investigates the combination of different classifiers to improve Arabic handwritten word recognition. Features based on the Discrete Cosine Transform (DCT) and Histogram of Oriented Gradients (HOG) are computed to represent the handwritten words. The dimensionality of the HOG features is reduced by applying Principal Component Analysis (PCA). Each set of features is separately fed to two different classifiers, a Support Vector Machine (SVM) and a Fuzzy K-Nearest Neighbor (FKNN), giving a total of four independent classifiers. A set of fusion rules is applied to combine the outputs of the classifiers. The proposed scheme, evaluated on the IFN/ENIT database of Arabic handwritten words, reveals that combining the classifiers improves recognition rates which, in some cases, outperform state-of-the-art recognition systems.
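The fusion step the abstract mentions is often one of a few fixed rules. The sketch below shows the common sum, product and max rules over per-class score vectors; the paper does not specify which rules it uses, so this is a generic illustration, assuming the classifier outputs have been normalised to comparable ranges.

```python
# Generic fixed fusion rules over classifier outputs. Each classifier
# contributes one list of per-class scores (e.g. SVM decision values or
# FKNN memberships); the combined scores are reduced to an arg-max class.

def fuse(score_lists, rule="sum"):
    """score_lists: one list of per-class scores per classifier."""
    n_classes = len(score_lists[0])
    combined = []
    for c in range(n_classes):
        col = [scores[c] for scores in score_lists]  # class c, all classifiers
        if rule == "sum":
            combined.append(sum(col))
        elif rule == "product":
            p = 1.0
            for v in col:
                p *= v
            combined.append(p)
        elif rule == "max":
            combined.append(max(col))
        else:
            raise ValueError("unknown rule: " + rule)
    return combined.index(max(combined))  # index of the winning class
```

For example, `fuse([[0.7, 0.2, 0.1], [0.4, 0.5, 0.1]], "sum")` sums to [1.1, 0.7, 0.2] and picks class 0, even though the second classifier alone preferred class 1.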

Keywords: Handwritten Arabic word recognition, classifier combination, support vector machine, fuzzy k-nearest neighbor, discrete cosine transform, histogram of oriented gradients.

Received September 22, 2014; accepted August 31, 2015



Enhanced Clustering-Based Topic Identification of Transcribed Arabic Broadcast News

Ahmed Jafar1, Mohamed Fakhr1, and Mohamed Farouk2

1Department of Computer Science, Arab Academy for Science and Technology, Egypt

2Department of Engineering Math and Physics, Faculty of Engineering, Egypt

Abstract: This research presents an enhanced topic identification of transcribed Arabic broadcast news using clustering techniques. The enhancement includes applying a new stemming technique, "rule-based light stemming", to balance the negative effects of the stemming errors associated with light stemming and root-based stemming. A new possibilistic-based clustering technique is also applied to evaluate the degree of membership that every transcribed document has with regard to every predefined topic, hence detecting documents causing topic confusions that negatively affect the accuracy of the topic-clustering process. The evaluation showed that using rule-based light stemming in combination with spectral clustering achieved the highest accuracy, and this accuracy increased further after excluding confusing documents.

Keywords: Arabic speech transcription, topic clustering.

Received June 17, 2014; accepted January 27, 2015



A Metrics Driven Design Approach for Real Time Environment Application


Mahmood Ahmed and Muhammad Shoaib

Department of Computer Science and Engineering, University of Engineering and Technology Lahore, Pakistan

Abstract: The design of real-time environment applications is a more exigent task for designers than non-real-time application design. The stringent timing requirements for task completion must be handled at design time, and design complexity increases manifold when object-oriented design methods are used and task deadlines are introduced at the design stage. Many design methodologies are available for real-time systems, but to the best of our knowledge none addresses all the problems of real-time system design, especially the issues of deadline inheritance and the dynamic behavior of the system when deadlines are introduced at early stages of the design. Most methodologies leave the task of handling timing constraints to the implementation phase, at the programming-language level. In this paper we propose a design approach, incorporating our novel design-metrics verification, for measuring the design of real-time environment applications. The metrics are measured for the design of a real-time weapon delivery system, and it is illustrated how design quality can be assessed before implementation.

Keywords: Deadlines, timed state statecharts, design metrics, real time systems

Received November 26, 2011; accepted June 11, 2012



Diagnosis of Leptomeningeal Metastases Disease in MRI Images by Using Image Enhancement


Mehmet Gül1, Sadık Kara1, Abdurrahman Işıkdoğan2, and Yusuf Yarar3

1Biomedical Engineering Institute, Fatih University, Istanbul

2Hospital of Oncology, Dicle University, Diyarbakır

3Selahaddin’i Eyyubi Hospital, Diyarbakır


Abstract: Leptomeningeal Metastases (LM) disease is an advanced stage of some complicated cancers in which tumor cells contaminate the Cerebrospinal Fluid (CSF). Tumors may be of macroscopic or microscopic size, and surgical treatment is riskier than for other cancers; consequently, early diagnosis of LM is important. Different methods are used to diagnose LM disease, such as CSF examination and imaging examinations with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). CSF examination results are more accurate than those of CT or MRI imaging systems, but imaging results are obtained earlier than CSF examination results. Some details in MRI images are hidden, and if a proper image enhancement method is used these details are revealed, allowing LM disease to be diagnosed earlier with accurate results. In this study, several image enhancement methods were evaluated. The results of the Logarithmic Transformation (LT) method and the Power-Law Transformation (PLT) method were almost the same, p=0.000 (p<0.001), which is statistically highly significant. The Contrast Stretching (CS) method gave p=0.031 (p<0.05), which is statistically significant. The results of the other four methods were insignificant: the Image Negatives Transformation (INT) method, the thresholding transformations method, the Gray-Level Slicing (GLS) method and the Bit-Plane Slicing (BPS) method.
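The two best-performing transformations in the study are standard point operations on grey levels. As a reference for readers, here is a minimal sketch of both for 8-bit intensities in [0, 255]; the scaling constants are the conventional textbook choices, not values from the paper.

```python
# Sketch of the two enhancement methods that performed best above:
# logarithmic transformation s = c * log(1 + r), with c chosen so that
# r = 255 maps to s = 255, and power-law (gamma) transformation
# s = c * (r / 255)^gamma scaled back to [0, 255].

import math

def log_transform(r, c=255 / math.log(256)):
    """Logarithmic transformation of a grey level r in [0, 255]."""
    return c * math.log(1 + r)

def power_law(r, gamma, c=1.0):
    """Power-law (gamma) transformation of a grey level r in [0, 255]."""
    return c * (r / 255.0) ** gamma * 255.0
```

A gamma below 1 brightens dark regions (useful for revealing hidden detail), while a gamma above 1 darkens them; the log transform compresses the dynamic range of bright pixels in a similar spirit.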

Keywords: Cerebrospinal Fluid (CSF) examination, Computed Tomography (CT), Image Enhancement methods, Leptomeningeal Metastases, Magnetic Resonance Imaging (MRI).

Received March 23, 2015; accepted August 12, 2015



Analysis and Performance Evaluation of Cosine Neighbourhood Recommender System

Kola Periyasamy1, Jayadharini Jaiganesh1, Kanchan Ponnambalam1, Jeevitha Rajasekar1, and Kannan Arputharaj2

1Department of Information Technology, Anna University, India.

2Department of Information Science and Technology, Anna University, India.

Abstract: The growth of technology and innovation leads to large and complex data, coined Big Data. As the quantity of information increases, it becomes more difficult to store and process, and a greater problem is finding the right data within this enormous volume. The data are processed to extract what is required and to recommend high-quality items to the user. A recommender system analyses user preferences to recommend items; a problem arises when Big Data must be processed for the recommender system. Several technologies are available with which big data can be processed and analyzed; Hadoop is a framework that supports the manipulation of large and varied data. In this paper, a novel Cosine Neighbourhood Similarity measure is proposed to calculate ratings for items and to recommend items to the user, and the performance of the recommender system is evaluated under different evaluators, which shows that the proposed similarity measure is more accurate and reliable.
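The baseline the proposed measure refines is plain cosine similarity between user rating vectors. Here is a minimal neighbourhood-based sketch under simplifying assumptions (dense rating vectors, unrated items stored as 0); the ratings in the usage note are invented for illustration and are not from the paper.

```python
# Hedged sketch of cosine-similarity collaborative filtering: rate an
# unseen item as the similarity-weighted average of neighbours' ratings.

import math

def cosine(u, v):
    """Cosine similarity of two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(target, neighbours, item):
    """Weighted average of the neighbours' ratings for one item index."""
    weighted = norm = 0.0
    for ratings in neighbours:
        w = cosine(target, ratings)
        weighted += w * ratings[item]
        norm += abs(w)
    return weighted / norm if norm else 0.0
```

For example, `predict([5, 0], [[5, 4], [1, 2]], 1)` weights the first neighbour (similar taste) more heavily and predicts a rating between 3 and 4 for the second item.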

Keywords: Big Data, Recommender System, Cosine Neighbourhood Similarity, Recommender Evaluator.

Received April 28, 2014; accepted June 12, 2014




An Approach for Instance Based Schema Matching with Google Similarity and Regular Expression

Osama Mehdi, Hamidah Ibrahim, and Lilly Affendey

Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia

Abstract: Instance-based schema matching is the process of comparing instances from different heterogeneous data sources to determine the correspondences between schema attributes. It is an alternative choice when schema information is not available, or is available but worthless for matching purposes. Different strategies have been used by various instance-based schema matching approaches for discovering correspondences between schema attributes: neural networks, machine learning, information-theoretic discrepancy and rule-based strategies. Most of these approaches treat instances, including instances with numeric values, as strings, which prevents discovering common patterns or performing statistical computations on the numeric instances. As a consequence, matches go unidentified, especially for numeric instances. In this paper, we propose an approach that addresses this limitation of the previous approaches. Since we fully exploit only the instances of the schemas for this task, we rely on strategies that combine the strength of Google similarity as a web semantic measure and of regular expressions for pattern recognition. The results show that our approach is able to find 1-1 schema matches with high accuracy, in the range of 93%-99% in terms of Precision (P), Recall (R) and F-measure (F). Furthermore, the results show that our proposed approach outperforms the previous approaches even though only a sample of the instances is used, instead of considering all instances during the instance-based schema matching process as in previous works.
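The regular-expression side of such an approach can be sketched as follows: assign each instance a coarse pattern class so that numeric instances from two schemas are compared as patterns rather than raw strings. The pattern set below is illustrative only, not the authors' actual rules.

```python
# Hedged sketch of regex-based instance typing for schema matching:
# classify each instance, then score two attributes by the fraction of
# aligned instances that fall into the same pattern class.

import re

PATTERNS = [
    ("date",    re.compile(r"^\d{4}-\d{2}-\d{2}$")),
    ("decimal", re.compile(r"^-?\d+\.\d+$")),
    ("integer", re.compile(r"^-?\d+$")),
]

def pattern_class(instance):
    """Return the first matching pattern name, or 'string'."""
    for name, rx in PATTERNS:
        if rx.match(instance):
            return name
    return "string"

def attribute_match(col_a, col_b):
    """Fraction of aligned instance pairs sharing a pattern class."""
    hits = sum(pattern_class(a) == pattern_class(b)
               for a, b in zip(col_a, col_b))
    return hits / max(len(col_a), len(col_b))
```

Under this scheme the columns ["12", "7"] and ["301", "45"] match perfectly as integers, whereas a pure string comparison would find nothing in common; a web-similarity measure would then handle the textual attributes the patterns cannot type.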


Keywords: Schema matching, instance based schema matching, Google similarity, regular expression.

Received April 24, 2014; accepted August 31, 2015



Interactive Video Retrieval Using Semantic Level Features and Relevant Feedback

Sadagopan Padmakala1 and Ganapathy AnandhaMala2

1Department of Computer Science, Anna University, India.

2Department of CSE, Easwari Engineering College, India.

Abstract: In recent years, much of the literature has addressed content-based video retrieval using different feature sets. However, most works concentrate on extracting low-level features, and relevant videos can be missed if interaction with the user is not considered. Greater semantic richness is also needed to obtain the most relevant videos. To handle these challenges, we propose an interactive video retrieval system consisting of the following steps: 1) video structure parsing, 2) video summarization, and 3) video indexing and relevance feedback. First, input videos are divided into shots using a shot detection algorithm. Then three features, color, texture and shape, are extracted from each frame in the video summarization process. Once the video is summarized with the feature set, an index table is constructed from these features so that the input query can be matched easily. In the matching process, the query video is matched against the index table using a semantic matching distance to obtain relevant videos. Finally, in the relevance feedback phase, each retrieved video is presented to the user to determine whether it is relevant; if it is, more videos similar to it are returned to the user. The proposed system is evaluated in terms of precision, recall and F-measure. Experimental results show that it is competitive with standard methods published in the literature.


Keywords: Shot detection, color, shape, texture, video retrieval, relevance feedback.

Received January 31, 2013; accepted June 17, 2014



An SNR Unaware Large Margin Automatic Modulations Classifier in Variable SNR Environments

Hamidreza Hosseinzadeh and Farbod Razzazi

Department of Electrical and Computer Engineering, Science and Research Branch, Islamic Azad University, Iran

Abstract: Automatic classification of the modulation type of detected signals is an intermediate step between signal detection and demodulation, and is also an essential task for an intelligent receiver in various civil and military applications. In this paper, a new two-stage partially supervised classification method is proposed for Additive White Gaussian Noise (AWGN) channels with unknown Signal-to-Noise Ratios (SNRs), in which system adaptation to the environment SNR and signal classification are combined. Adaptation to the environment SNR enables us to construct a classifier that is blind to the SNR. In the classification phase of the algorithm, a passive-aggressive online learning algorithm is applied to identify the modulation type of input signals. Simulation results show that the accuracy of the proposed algorithm approaches that of a well-trained system at the target SNR, even at low SNRs.
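The passive-aggressive learner mentioned in the abstract has a compact closed-form update, shown below for the binary case. This is the standard PA-I-style rule without a regularisation cap, as a generic sketch rather than the paper's exact configuration.

```python
# Minimal sketch of the binary Passive-Aggressive (PA) online update:
# if the margin constraint y * (w . x) >= 1 is violated, move w just
# enough to satisfy it (step size tau = loss / ||x||^2); otherwise
# remain passive.

def pa_update(w, x, y):
    """y in {-1, +1}; updates the weight list w in place, returns the loss."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)          # hinge loss on this example
    if loss > 0.0:
        tau = loss / sum(xi * xi for xi in x)
        for i, xi in enumerate(x):
            w[i] += tau * y * xi           # aggressive corrective step
    return loss

w = [0.0, 0.0]
pa_update(w, [1.0, 0.0], +1)  # first example forces a full-margin step
```

After the step above, presenting the same example again incurs zero loss, which is exactly the "just enough" property that makes PA learners attractive for online adaptation to a changing SNR environment.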

Keywords: Automatic modulation classification, pattern recognition, partially supervised classification, passive-aggressive classifier, SNR unaware classification.

Received January 27, 2015; accepted March 9, 2014


Multi-criteria Selection of the Computer Configuration for Engineering Design

Jasmina Vasović, Miroslav Radojičić, Stojan Vasović, and Zoran Nešić

Faculty of Technical Sciences, University of Kragujevac, Serbia

Abstract: The problem of choosing a PC configuration is a Multi-Criteria Decision Making (MCDM) problem. The paper presents an integrated approach to interdependent PC configuration selection problems using multiple-criteria decision-making methods and the Delphi technique. The research is based on the concept of expert groups, an extended approach to the Delphi method, and appropriate statistical procedures and software-supported tools. This provides the conditions for a decision maker, the manager, to connect all data and relations into one rational whole through multicriteria rating of alternative solutions; subsequently, by using appropriate software-supported multicriteria decision-making methods, the decision maker can solve the optimisation problem by selecting the most favourable alternative with regard to the established criteria and preferences. The application of the proposed approach is illustrated through an example of selecting the best computer-system configuration for simulation in engineering design in Serbian companies. The main contribution of the paper is the presented methodological multicriteria approach, which integrates the adequate methods and processes. The presented methodology opens the possibility of wide application in solving the problem of selecting computer configurations for different purposes.

Keywords: Computer configurations, PROMETHEE method, Delphi technique, information technology projects.

Received February 27, 2014; accepted August 16, 2015


Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.