
Hybrid Metaheuristic Algorithm for Real Time Task Assignment Problem in Heterogeneous Multiprocessors

Poongothai Marimuthu, Rajeswari Arumugam, and Jabar Ali

Department of Electronics and Communication Engineering, Coimbatore Institute of Technology, India

Abstract: The assignment of real-time tasks to heterogeneous multiprocessors is very difficult in scenarios that require high performance. The main problem in a heterogeneous multiprocessor system is task assignment, because the execution time of each task varies from one processor to another. Hence, finding a task assignment to heterogeneous processors without exceeding the processors' capacity is in general an NP-hard problem. In order to meet the constraints of real-time systems, a Hybrid Max-Min Ant colony optimization algorithm (H-MMAS) is proposed in this paper. The Max-Min Ant System (MMAS) is extended with a local search heuristic to improve the task assignment solution. The local search maximizes the number of tasks assigned as well as minimizing the energy consumption. The performance of the proposed H-MMAS algorithm is compared with the Modified BPSO, ACO and MMAS algorithms in terms of the average number of tasks assigned, normalized energy consumption, quality of solution and average CPU time. The experimental results show that the proposed algorithm outperforms MMAS, Modified BPSO and ACO for the consistency matrix. In the case of the inconsistency matrix, H-MMAS performs better than Modified BPSO and similarly to ACO and MMAS, but with an improvement in normalized energy consumption.
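To make the MMAS mechanics concrete, the following is a minimal sketch of a Max-Min Ant System for task-to-processor assignment: pheromone trails bounded by tau_min/tau_max, selection probability proportional to pheromone times an inverse-execution-time heuristic, and reinforcement of the iteration-best solution only. All parameter values and the fitness choice (number of tasks assigned) are illustrative assumptions; this is not the authors' exact H-MMAS, which additionally applies a local search step.

```python
import random

def mmas_assign(exec_time, capacity, n_ants=20, n_iter=50,
                rho=0.1, tau_min=0.01, tau_max=1.0, seed=1):
    """Minimal Max-Min Ant System sketch for task-to-processor assignment.

    exec_time[t][p]: execution time of task t on processor p (heterogeneous).
    capacity[p]: total time budget of processor p. Maximizes tasks assigned.
    Illustrative sketch only, not the paper's H-MMAS.
    """
    rng = random.Random(seed)
    n_tasks, n_procs = len(exec_time), len(capacity)
    tau = [[tau_max] * n_procs for _ in range(n_tasks)]  # pheromone trails
    best = []
    for _ in range(n_iter):
        iter_best = []
        for _ in range(n_ants):
            load = [0.0] * n_procs
            sol = []
            for t in range(n_tasks):
                # processors that still have capacity for task t
                cand = [p for p in range(n_procs)
                        if load[p] + exec_time[t][p] <= capacity[p]]
                if not cand:
                    continue
                # probability proportional to pheromone x heuristic (1/exec time)
                w = [tau[t][p] / exec_time[t][p] for p in cand]
                p = rng.choices(cand, weights=w)[0]
                load[p] += exec_time[t][p]
                sol.append((t, p))
            if len(sol) > len(iter_best):
                iter_best = sol
        if len(iter_best) > len(best):
            best = iter_best
        # evaporate with lower bound, then reinforce only the
        # iteration-best solution with an upper bound (the MMAS rule)
        for t in range(n_tasks):
            for p in range(n_procs):
                tau[t][p] = max(tau_min, (1 - rho) * tau[t][p])
        for t, p in iter_best:
            tau[t][p] = min(tau_max, tau[t][p] + 1.0 / n_tasks)
    return best
```

The pheromone bounds are what distinguish MMAS from plain ACO: they prevent early stagnation on one assignment, which is why a local search can still improve the returned solution.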

Keywords: Multiprocessors, Task assignment, Heterogeneous processors, Ant Colony Optimization, Real time systems.

Received September 21, 2014; accepted December 21, 2015


Fall Motion Detection with Fall Severity Level Estimation by Mining Kinect 3D Data Stream

Orasa Patsadu1, Bunthit Watanapa1, Piyapat Dajpratham2, and Chakarida Nukoolkit1

1School of Information Technology, King Mongkut’s University of Technology Thonburi, Thailand

2Faculty of Medicine Siriraj Hospital, Mahidol University, Thailand

Abstract: This paper proposes an integrative model of fall motion detection and fall severity level estimation. For fall motion detection, a continuous stream of data representing time-sequential frames of fifteen body joint positions was obtained from Kinect's 3D depth camera. A set of features is then extracted and fed into the designated machine learning model. Compared with existing models that rely on depth image inputs, the proposed scheme resolves background ambiguity of the human body. The experimental results demonstrated that the proposed fall detection method achieved an accuracy of 99.97% with zero false negatives and was more robust than the state-of-the-art approach using depth images. Another key novelty of our approach is the framework, called Fall Severity Injury Score (FSIS), for determining the severity level of a fall as a surrogate for the seriousness of injury to three selected risk areas of the body: head, hip and knee. The framework is based on two crucial pieces of information from the fall: 1) the velocity of the impact position and 2) the kinetic energy of the fall impact. Our proposed method is beneficial to caregivers, nurses and doctors in giving first aid, diagnosis and treatment to the subject, especially in cases where the subject loses consciousness or is unable to respond.
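The two severity cues named above can be computed directly from the joint stream. A minimal sketch, assuming a fixed inter-frame interval and a known effective mass for the body segment (both assumptions; the paper's FSIS formulation may differ):

```python
def impact_velocity(positions, dt):
    """Speed (m/s) of a joint just before impact, estimated from its last
    two 3D positions in consecutive Kinect frames, dt seconds apart."""
    (x0, y0, z0), (x1, y1, z1) = positions[-2], positions[-1]
    d = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return d / dt

def kinetic_energy(mass, v):
    """Kinetic energy E = 0.5 * m * v^2 carried into the impact."""
    return 0.5 * mass * v * v
```

For example, a head joint dropping 0.6 m between two frames 0.2 s apart gives an impact speed of 3 m/s, and the kinetic energy then scales with the assumed segment mass.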

Keywords: Kinect 3D data stream, fall motion detection, fall severity level estimation, machine learning, smart home system.

Received August 25, 2015; accepted December 27, 2015


Intelligent Replication for Distributed Active

Real-Time Database Systems

Rashed Salem1, Safa'a Saleh2, and Hattem Abdul-kader1

1Information Systems Department, Menoufia University, Egypt

2Information Systems Department, Taibah University, KSA

Abstract: Recently, the demand for real-time databases has been increasing. Most real-time systems are inherently distributed in nature and need to handle data in a timely fashion. Obtaining data from remote sites may take a long time, rendering the temporal data invalid. This results in a large number of tardy transactions, with their catastrophic effects. Replication is one solution to this problem, as it allows transactions to access temporal data locally. This helps transactions to meet their timing requirements, which demand predictable resource usage. To improve predictability, the DeeDS prototype was introduced to avoid the delay resulting from disk access, network communication and distributed commit processing. DeeDS advocates an in-memory database, full replication and local transaction commitment, but full replication consumes system resources, causing a scalability problem. In this work, we introduce IReIDe, a new replication protocol that supports replication for DeeDS and addresses the scalability problem using an intelligent clustering technique. The results show the ability of IReIDe to reduce the consumed system resources and maintain scalability.

Keywords: Replication, Real-time, DRTDBS, DeeDS, Clustering.

Received February 17, 2015; accepted October 7, 2015



Vision-Based Human Activity Recognition Using


Mahmoud Elmezain

Faculty of Science and Computer Engineering, Taibah University, KSA

Computer Science Division, Faculty of Science, Tanta University, Egypt

Abstract: In this paper, an innovative approach for human activity recognition that relies on affine-invariant shape descriptors and motion flow is proposed. The first phase of this approach employs background modelling using an adaptive Gaussian mixture to distinguish moving foregrounds from their moving cast shadows. The extracted features are then derived from the 3D spatio-temporal action volume, including elliptic Fourier descriptors, Zernike moments, mass center and optical flow. Finally, the discriminative model of Latent-Dynamic Conditional Random Fields (LDCRFs) performs the training and testing of actions using the combined features, forming a robust view-invariant model. Our experiments on the Weizmann action dataset demonstrate that the proposed approach is robust and more resilient to problematic phenomena than previously reported methods. It can also be applied without sacrificing real-time performance in many practical action applications.

Keywords: Action recognition, Invariant elliptic Fourier, Invariant Zernike moments, Latent-dynamic Conditional Random Fields.

Received August 15, 2015; accepted January 11, 2016


Incorporating Unsupervised Machine Learning Technique on Genetic Algorithm for Test Case Optimization

P. Maragathavalli and S. Kanmani

Department of Information Technology, Pondicherry Engineering College, India

Abstract: Search-based software testing uses random or directed search techniques to address testing problems. This paper discusses test case selection and prioritization by combining genetic and clustering algorithms. Test cases are generated using a genetic algorithm, and prioritization is performed using a group-wise clustering algorithm that assigns priorities to the generated test cases, thereby reducing the size of the test suite. Test case selection picks suitable test cases according to their importance with respect to the test goals. The objectives considered for criteria-based optimization are to optimize the test suite for better condition coverage, to improve the fault detection capability and to minimize the execution time. Experimental results show significant improvement when compared to the existing clustering technique: condition coverage up to 93% and fault detection capability up to 85.7%, with a minimal execution time of 4100 ms.

Keywords: test case selection and prioritization, group-wise clustering.

Received August 14, 2014; accepted August 31, 2015


A New Chaos-based Image Encryption Algorithm

Ming Xu

College of Mathematics and Information Science, Hebei Normal University, China

Department of Mathematics and Physics, Shijiazhuang Tiedao University, China

Abstract: In this paper, we propose a new image encryption algorithm based on the compound chaotic image encryption algorithm. The original algorithm cannot resist a chosen-plaintext attack and has weak statistical security, but our new algorithm resists the chosen-plaintext attack through a simple improvement. The improvement is novel and transplantable: it can also be used to enhance the resistance of other image encryption algorithms to differential attack. The experimental results show that the new algorithm has higher security, while its encryption speed is nearly the same as that of the original.
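For readers unfamiliar with chaos-based ciphers, the general idea can be sketched with a logistic-map keystream XORed into the image bytes. This is a generic illustration of the technique class, with assumed key parameters (x0, r); it is not the compound algorithm or the paper's improvement:

```python
def logistic_keystream(n, x0=0.7, r=3.99):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x).
    The initial value x0 and parameter r act as the secret key."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)          # one chaotic iteration
        ks.append(int(x * 256) & 0xFF)  # quantize state to a byte
    return ks

def chaos_xor(data, x0=0.7, r=3.99):
    """XOR the data bytes with the chaotic keystream. Applying the
    function twice with the same key recovers the plaintext."""
    ks = logistic_keystream(len(data), x0, r)
    return bytes(b ^ k for b, k in zip(data, ks))
```

A plain XOR keystream like this is exactly what a chosen-plaintext attack breaks (the keystream is plaintext XOR ciphertext), which is why schemes in this family add plaintext-dependent diffusion, as the paper's improvement does.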

Keywords: Chaotic, Image encryption, Chosen-plaintext attack, Transplantable.

Received July 7, 2015; accepted January 13, 2016



HierarchicalRank: Webpage Rank Improvement Using HTML Tag Level Similarity

Dilip Sharma and Deepak Ganeshiya

Department of Computer Engineering and Applications, GLA University, India

Abstract: In past research, two types of ranking algorithms have been introduced: query-dependent algorithms, which work online, and query-independent algorithms, which work offline. The PageRank algorithm works offline, independent of the query, while the HITS algorithm works online, dependent on the query. One problem with these algorithms is that the division of rank is based on the number of inlinks, outlinks and other parameters used in hyperlink analysis, which may or may not depend on webpage content, leading to the problem of topic drift. Previous research focused on solving this problem using the popularity of the outlink webpages. In this paper, a novel algorithm for popularity measurement is proposed, based on the similarity between the query and the hierarchical text extracted from the source and target webpages using an HTML tag importance parameter. The results of the proposed method are compared with the PageRank algorithm and with Topic Distillation with Query-Dependent Link Connections and Page Characteristics.
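The tag-importance idea can be sketched as a weighted term-overlap score between a query and page text grouped by HTML tag. The tag weights below are purely illustrative assumptions, not the values used in HierarchicalRank:

```python
# Assumed importance weights per HTML tag (illustrative only)
TAG_WEIGHTS = {"title": 1.0, "h1": 0.8, "h2": 0.6, "p": 0.3}

def tag_weighted_similarity(query, page):
    """Similarity between a query and a page whose text is grouped by
    HTML tag (page: dict tag -> text), weighting the fraction of query
    terms found under each tag by that tag's importance."""
    q = set(query.lower().split())
    score, norm = 0.0, 0.0
    for tag, text in page.items():
        w = TAG_WEIGHTS.get(tag, 0.1)   # unknown tags get a small weight
        words = set(text.lower().split())
        if words and q:
            score += w * len(q & words) / len(q)
        norm += w
    return score / norm if norm else 0.0
```

A match inside `<title>` thus contributes far more to the popularity measure than the same match inside a `<p>` block, which is the intuition behind ranking by hierarchical tag level.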

Keywords: Web mining, Web graph, Hyperlink analysis, connectivity, PageRank, HTML tags.

Received July 21, 2014; accepted October 14, 2014



Bilateral Multi-Issue Negotiation Model for a Kind of Complex Environment*

Jun Hu1, Li Zou1,2, and Ru Xu1,3

1College of Computer Science and Electronic Engineering, Hunan University, China

2The Second Hospital, University of South China, China

3Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, China

Abstract: There are many uncertain factors in bilateral multi-issue negotiation in complex environments, such as unknown opponents and time constraints. The key to negotiation in a complex environment is the Agent's negotiation strategy. We use Gaussian process regression and dynamic risk strategies to predict the opponent's concessions: according to the utility of the opponent's offer and the risk function, we predict the opponent's concessions and then set our Agent's concession rate based on the opponent's concession strategy. We run the Agent on the GENIUS platform and analyze the experimental results. The results show that applying a dynamic risk strategy in the negotiation model is superior to other risk strategies.
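The opponent-modelling step can be sketched as plain Gaussian process regression over the utilities of the opponent's past offers; the kernel choice (RBF) and hyperparameters here are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def gp_predict(t_obs, u_obs, t_new, length=2.0, noise=1e-6):
    """Gaussian process regression with an RBF kernel.

    t_obs: times of the opponent's past offers; u_obs: their utilities
    (to us); returns the predicted posterior mean utility at t_new.
    """
    t_obs = np.asarray(t_obs, float)[:, None]
    t_new = np.asarray(t_new, float)[:, None]
    # RBF kernel between two column vectors of times
    k = lambda a, b: np.exp(-0.5 * (a - b.T) ** 2 / length ** 2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    # posterior mean: k(t*, T) @ K^{-1} @ u
    return k(t_new, t_obs) @ np.linalg.solve(K, np.asarray(u_obs, float))
```

The negotiating Agent would extrapolate this mean toward the deadline to estimate the opponent's future concession, then tune its own concession rate and risk function accordingly.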

Keywords: Multi-issue negotiation, Gaussian process regression, Dynamic risk strategy, Concession strategy.

Received October 16, 2014; accepted September 9, 2015



A Hybrid BATCS Algorithm to Generate Optimal Query Plan

Gomathi Ramalingam1 and Sharmila Dhandapani2

1Department of Computer Science and Engineering, Bannari Amman Institute of Technology, India.

2Department of Electronics and Instrumentation Engineering, Bannari Amman Institute of Technology, India.

Abstract: The enormous day-by-day increase in the number of web pages has driven progress in semantic web data management. The issues in semantic web data management are growing, and research improvements are needed to handle them. One of the most important issues is query optimization. Semantic web data stored as Resource Description Framework (RDF) data can be queried using the popular query language SPARQL. As the size of the data increases, querying the RDF data becomes complicated. Querying RDF graphs involves multiple join operations, and optimizing those joins is NP-hard. Nature-inspired algorithms have recently become popular for handling problems of high complexity. In this research, a hybrid BAT algorithm with Cuckoo Search (BATCS) is proposed to handle the query optimization problem. The algorithm applies the echolocation behaviour of bats and hybridizes with Cuckoo Search if the best solution stagnates for a designated number of iterations. Experiments were conducted with benchmark data sets, and the algorithm proves efficient in terms of query execution time.

Keywords: Data management, Query optimization, Nature Inspired algorithms, Bat algorithm, Cuckoo Search algorithm

Received November 7, 2014; accepted August 3, 2015



An Efficient Web Search Engine for Noisy Free Information Retrieval

Pradeep Sahoo1 and Rajagopalan Parthasarthy2

1Department of Computer Science and Engineering, Anna University, India.

2Department of Computer Science and Engineering, GKM College of Engineering and Technology, India.

Abstract: The vast growth, dynamic nature and low quality of the World Wide Web make it very difficult to retrieve relevant information from the internet during a query search. To resolve this issue, various web mining techniques are used. The biggest challenge in web mining is removing noisy or unwanted information from the webpage, such as banners, video, audio, images and hyperlinks that are not associated with the user query. To overcome these issues, a novel custom search engine with efficient algorithms is proposed in this paper. The proposed Uniform Resource Locator (URL) Pattern Extractor (UPE) algorithm extracts all relevant index pages from the web and ranks the indexes based on the user query. Then, a Noisy Data Cleaner (NDC) algorithm is applied to remove unwanted content from the retrieved web pages. The results show that the proposed UPE+NDC algorithm provides very promising results on different datasets, with high precision and recall rates in comparison with existing algorithms.

Keywords: web content extraction, relevant information, noise data elimination, noisy data cleaner algorithm, URL pattern extractor algorithm

Received November 27, 2014; accepted June 1, 2015



Effective Technology Based Sports Training System

Using Human Pose Model 

Kannan Paulraj and Nithya Natesan

Department of Electronics and Communication Engineering, Panimalar Engineering College, India

Abstract: This paper investigates sports dynamics using human pose modelling from video sequences. To implement human pose modelling, a human skeletal model is developed using a thinning algorithm, and the feature points of the human body are extracted. The obtained feature points play an important role in analyzing the activities of a sports person. The proposed human pose model technique provides technology-based training to a sports person so that performance can be gradually improved. The paper also aims at improving the computation time and efficiency of the 2D and 3D models.

Keywords: Thinning Algorithm, Human activity, Motion Analysis, Feature Extraction


Received March 28, 2015; accepted September 9, 2015



Vertical Links Minimized 3D NoC Topology and Router-Arbiter Design

Nallasamy Viswanathan1, Kuppusamy Paramasivam2, and Kanagasabapathi Somasundaram3

1Department of Electrical and Computer Engineering, Mahendra Engineering College, India.

2Department of Electrical and Computer Engineering, Karpagam College of Engineering, India.

3Department of Mathematics, Amrita Vishwa Vidyapeetham, India.

Abstract: The design of a topology and its router plays a vital role in a 3D NoC architecture. In this paper, we develop a partially vertically connected topology, called the 3D Recursive Network Topology (3D RNT), and study its performance using an analytical model. Delay per Buffer Size (DBS) and Chip Area per Buffer Size (CABS) are the parameters considered for the performance evaluation. Our experimental results show that vertical links are cut down by up to 75% in the 3D RNT compared with the 3D Fully connected Mesh Topology (3D FMT), at the cost of an 8% increase in DBS; in addition, 10% lower CABS is observed in the 3D RNT. Further, a Programmable Prefix router-Arbiter (PPA) is designed for the 3D NoC and its performance is analyzed. The results of the experimental analysis indicate that the PPA has lower delay and area (gate count) compared with a Round Robin Arbiter with prefix network (RRA).

Keywords: Network topology, Vertical links, Network Calculus, Arbiter, Latency, Chip area.

Received June 26, 2014; accepted July 7, 2015


Hidden Markov Random Fields and Particle Swarm

Combination for Brain Magnetic Resonance Image Segmentation


El-Hachemi Guerrout, Ramdane Mahiou, and Samy Ait-Aoudia

ESI - Ecole Nationale Supérieure en Informatique, Algeria

Abstract: The interpretation of brain images is a crucial task in the practitioner's diagnosis process. Segmentation is one of the key operations providing decision support to physicians. There are several methods to perform segmentation. We use Hidden Markov Random Fields (HMRF) to model the segmentation problem. This elegant model leads to an optimization problem, and the Particle Swarm Optimization (PSO) method is used to achieve brain magnetic resonance image segmentation. Setting the parameters of the HMRF-PSO method is a task in itself; we conduct a study of the choice of parameters that give a good segmentation. The segmentation quality is evaluated on ground-truth images using the Dice coefficient, also called the Kappa index. The results show the superiority of the HMRF-PSO method compared with methods such as classical MRF and MRF using variants of Ant Colony Optimization (ACO).
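The evaluation metric is straightforward to state in code. A minimal Dice (Kappa) index for one tissue class, given a segmentation and its ground truth as same-length binary label sequences:

```python
def dice_coefficient(seg, truth):
    """Dice index 2|A∩B| / (|A| + |B|), where A and B are the pixel sets
    labelled positive in the segmentation and the ground truth."""
    a = {i for i, v in enumerate(seg) if v}
    b = {i for i, v in enumerate(truth) if v}
    return 2 * len(a & b) / (len(a) + len(b))
```

A value of 1.0 means perfect overlap with the ground truth; values above roughly 0.7 are commonly read as good agreement in segmentation studies.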

Keywords: Brain image segmentation, Hidden Markov Random Field, Particle Swarm Optimization, Dice coefficient.

Received June 5, 2015; accepted October 19, 2015



Energy Consumption Improvement and Cost Saving

by Cloud Broker in Cloud Datacenters

Ahmad Reza Karamolahi1, Abdolah Chalechale2, and Mahmoud Ahmadi2

1B2B social sales network group, Iran

2Computer and Information Technology Department, Razi University, Iran

Abstract: Using a single Cloud datacenter in a Cloud network can have several disadvantages for users, from excess energy consumption to increased user dissatisfaction with the service and the price of provided services. The Cloud broker, as an intermediary between users and datacenters, can play a key role in enhancing users' satisfaction and reducing the energy consumption of datacenters located in geographically different areas. In this paper, we provide an algorithm that assigns a datacenter to users by rating the various datacenters. The algorithm has been simulated with CloudSim and results in high levels of user satisfaction, cost-effectiveness and improved energy consumption. We show that this algorithm can save 44% of energy consumption and 7% of cost for users in the sample simulation space.

Keywords: Cloud network, Cloud broker, Energy optimization, Cost saving.

Received June 10, 2015; accepted December 9, 2015



Advanced Architecture for Java Universal Message Passing (AA-JUMP)

Adeel-ur-Rehman1 and Naveed Riaz2

1National Centre for Physics (NCP), Pakistan

2SEECS, National University of Science and Technology, Pakistan

Abstract: The Architecture for Java Universal Message Passing (A-JUMP) is a Java-based message passing framework. A-JUMP offers programmers the flexibility to write parallel applications using multiple programming languages. There is also a provision to use various network protocols for message communication. The results for standard benchmarks like ping-pong latency, Embarrassingly Parallel (EP) code execution and JGF Crypt led us to conclude that, for cases where the data size is smaller than 256K bytes, the numbers are comparable with predecessor models like MPICH2 and MPJ Express. But when the packet size exceeds 256K bytes, the performance of the A-JUMP model is severely hampered. Taking that peculiar behaviour into account, this paper describes a strategy devised to cope with the performance limitation observed in the base A-JUMP implementation, giving birth to an Advanced A-JUMP (AA-JUMP) methodology while keeping the basic workflow of the original model intact. AA-JUMP aims to improve the performance of A-JUMP while preserving its traits of portability, simplicity and scalability, which are the key features offered by today's flourishing HPC-oriented frameworks. Head-to-head comparisons between the two message passing versions reveal a 40% performance boost, suggesting AA-JUMP is a viable approach for parallel as well as distributed computing domains.

Keywords: A-JUMP, Java, Universal Message Passing, MPI, Distributed Computing

Received February 15, 2015; accepted December 21, 2015



Performance Analysis of Security Requirements Engineering Framework by Measuring the Vulnerabilities

Salini Prabhakaran1 and Kanmani Selvadurai2

1Department of Computer Science and Engineering, Pondicherry Engineering College, India

2Department of Information Technology, Pondicherry Engineering College, India

Abstract: To develop security-critical web applications, specifying security requirements is important, since 75% to 80% of all attacks happen at the web application layer. We adopted security requirements engineering methods to identify security requirements in the early stages of the software development life cycle so as to minimize vulnerabilities in the later phases. In this paper, we present an evaluation of the Model Oriented Security Requirements Engineering (MOSRE) framework and the Security Requirements Engineering Framework (SREF) by implementing the security requirements identified through each framework while developing a respective web application. We also developed a web application without using any security requirements engineering method, in order to demonstrate the importance of the security requirements engineering phase in the software development life cycle. The developed web applications were scanned for vulnerabilities using a web application scanning tool. The evaluation was done in two phases of the software development life cycle: requirements engineering and testing. From the results, we observed that the number of vulnerabilities detected in the web application developed with the MOSRE framework is lower than in the web applications developed with SREF or without any security requirements engineering method. Thus, this study leads requirements engineers to use the MOSRE framework to elicit security requirements efficiently and to trace security requirements from the requirements engineering phase to later phases of the software development life cycle for developing secure web applications.

Keywords: Requirements Engineering, Security Mechanism, Security Requirements, Security Requirements Engineering, Web Applications and Vulnerabilities.

Received December 15, 2014; accepted April 5, 2015



A Novel Approach for Face Recognition Using Fused GMDH-Based Networks

El-Sayed El-Alfy1, Zubair Baig2, and Radwan Abdel-Aal1

1College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals, Saudi Arabia

2School of Science and Security Research Institute, Edith Cowan University, Australia

Abstract: This paper explores a novel approach for automatic human recognition from multi-view frontal facial images taken at different poses. The proposed computational model is based on fusing Group Method of Data Handling (GMDH) neural networks trained on different subsets of facial features and with different complexities. To demonstrate the effectiveness of this approach, its performance is evaluated and compared, using eigen-decomposition for feature extraction and reduction, with a variety of GMDH-based models. The experimental results show that high recognition rates, close to 98%, can be achieved with very low average false acceptance rates of less than 0.12%. Performance is further investigated for different feature set sizes, and it is found that with smaller feature sets (as few as 8 features) the proposed GMDH-based models outperform other classifiers, including those using radial-basis functions and support vector machines. Additionally, the capability of the group method of data handling algorithm to select the most relevant features during model construction makes it attractive for building much simplified models of polynomial units.

Keywords: Face recognition, abductive machine learning, neural computing, GMDH-based ensemble learning.

Received May 30, 2015; accepted November 29, 2015



Complementary Approaches Built as Web Service for Arabic Handwriting OCR Systems via Amazon Elastic MapReduce (EMR) Model

Hassen Hamdi1, Maher Khemakhem2, and Aisha Zaidan1

1Department of Computer Science, Taibah University, Kingdom of Saudi Arabia

2Faculty of Computing and Information Technology, University of King Abdul-Aziz, Kingdom of Saudi Arabia

Abstract: Arabic Optical Character Recognition (OCR) as a web service represents a major challenge for handwritten document recognition. A variety of approaches, methods, algorithms and techniques have been proposed in order to build powerful Arabic OCR web services. Unfortunately, these methods have not succeeded in this mission for large quantities of Arabic handwritten documents. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate. Designing and implementing these recent sophisticated complementary approaches and techniques as web services is complex; they require strong computing power to reach an acceptable recognition speed, especially for large quantities of documents. One possible solution to this problem is to benefit from distributed computing architectures such as cloud computing. This paper describes the design and implementation of Arabic Handwriting Recognition as a web service (AHRweb service) based on the complementary K-Nearest Neighbor (K-NN)/Support Vector Machine (SVM) approach via the Amazon Elastic MapReduce (EMR) model. The experiments were conducted in a cloud computing environment with a real large-scale handwriting dataset, the Institute for Communications Technology (IFN)/Ecole Nationale d’Ingénieur de Tunis (ENIT) IFN/ENIT database. J-Sim (Java Simulator) was used as a tool to generate and analyze statistical results. The experimental results show that the Amazon Elastic MapReduce (EMR) model constitutes a very promising framework for enhancing large AHRweb service performance.

Keywords: Arabic handwriting, Complementary approaches and techniques, K-NN/SVM, Web service, Amazon Elastic MapReduce.


Received April 25, 2015; accepted January 3, 2016



Arabic Character Extraction and Recognition using Traversing Approach

Abdul Khader Saudagar and Habeeb Mohammed

College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University, Saudi Arabia

Abstract: The intention behind this research is to present original work undertaken on Arabic character extraction and recognition aimed at attaining a higher recognition rate. Copious techniques for character and text extraction were proposed in earlier decades, but very few of them shed light on the Arabic character set. From the literature survey, it was found that a 100% recognition rate has not been attained by earlier implementations. The proposed technique is novel and is based on traversing the characters in a given text and marking their directions, viz. North-South (NS), East-West (EW), NorthEast-SouthWest (NE-SW), NorthWest-SouthEast (NW-SE), etc., in an array and comparing them with pre-defined codes for every character in the dataset. The experiments were conducted on Arabic news videos and documents taken from the Arabic Printed Text Image (APTI) database, and the results achieved from this research are very promising, with a recognition rate of 98.1%. The proposed algorithm can replace the existing algorithms used in present Arabic Optical Character Recognition (AOCR) systems.

Keywords: Accuracy, Arabic Optical Character Recognition and Text Extraction.

Received March 14, 2015; accepted August 16, 2015



A Multimedia Web Service Matchmaker

Sid Ahmed Djallal Midouni1,2, Youssef Amghar1, and Azeddine Chikh2

1Université de Lyon, CNRS INSA-Lyon, France

2Département d'informatique, Université Abou Bekr Belkaid-Tlemcen, Algérie

Abstract: The full-service approach for composing MaaS services in multimedia data retrieval, which we proposed in a previous work, is based on a four-phase process: description, matching, clustering and restitution. In this article, we show how MaaS services are matched to meet user needs. Our matching algorithm consists of two steps: (1) the domain matching step is based on calculating similarity degrees between the domain description of MaaS services and user queries; (2) the multimedia matching step compares the multimedia description of MaaS services with user queries. The multimedia description is defined as a SPARQL query over a multimedia ontology. An experiment in the medical domain was used to evaluate the solution. The results indicate that using both domain and multimedia matching considerably improves the performance of multimedia data retrieval systems.

Keywords: semantic web services, information retrieval, service description, SAWSDL, service matching.

Received July 27, 2015; accepted September 12, 2015




Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.