ACIT'2016 Proceedings

Reinforcing Arabic Language Text Clustering: Theory and Application

Fawaz S. Al-Anzi and Dia AbuZeina
Department of Computer Engineering, Kuwait University

Abstract: This paper presents a novel approach for automatic Arabic text clustering. The proposed method combines two well-known information retrieval techniques: latent semantic indexing (LSI) and the cosine similarity measure. The standard LSI technique generates textual feature vectors based on word co-occurrences; the proposed method instead generates the feature vectors from the cosine measures between the documents. The goal is to obtain high-quality textual clusters based on semantically rich features for the benefit of linguistic applications. The performance of the proposed method was evaluated using an Arabic corpus that contains 1,000 documents belonging to 10 topics (100 documents per topic). For clustering, we used the expectation-maximization (EM) unsupervised clustering technique to cluster the corpus's documents into ten groups. The experimental results show that the proposed method outperforms the standard LSI method by about 15%.
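
A minimal sketch of this pipeline with scikit-learn (the library, the LSI dimensionality, and the hypothetical corpus loader are my assumptions; the abstract does not specify an implementation):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.mixture import GaussianMixture

    docs = load_arabic_corpus()   # hypothetical loader for the 1,000-document corpus

    tfidf = TfidfVectorizer().fit_transform(docs)               # term-document weights
    lsi = TruncatedSVD(n_components=100).fit_transform(tfidf)   # standard LSI space

    # The proposed twist: a document's feature vector is its cosine similarity
    # to every other document, computed in the LSI space.
    features = cosine_similarity(lsi)

    # GaussianMixture is fitted by EM; ten components match the ten topics.
    labels = GaussianMixture(n_components=10, random_state=0).fit_predict(features)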

Keywords: Arabic Text, Clustering, Latent Semantic Indexing, Expectation-Maximization.

Full Text


A Dynamic Warehouse Design Based on Simulated Annealing Algorithm

Dr. Murtadha M. Hamad1 and Yusra A. Turky1
1College of Computer - Anbar University – Anbar – Iraq

Abstract: The amount of information available to large-scale enterprises is growing rapidly. New information is generated continuously by operational systems, and decision support functions in a warehouse, such as On-Line Analytical Processing (OLAP), involve hundreds or thousands of complex aggregate queries over large amounts of data. A data warehouse can be seen as a set of materialized views defined over some relations. In this paper, when a query is posed, suitable materialized views are used together with base tables in order to produce the best views and tables for constructing any new query. To achieve and implement the Dynamic Warehouse Design, three complex OLAP queries with join and aggregation operations were created; views were created and updated, using the Windows task scheduler and batch files, whenever the base tables were updated; a lattice of views was built using a multiple-view processing plan; and a simulated annealing (SA) algorithm was developed and introduced for query rewriting, dynamically substituting suitable views for tables and recommending the best tables and views for the user to construct a suitable query. The main goals of this work are to show the utilization of derived data, such as materialized views, for run-time re-optimization of aggregate queries (quick response time); effectiveness, transparency, and accuracy are important factors in the success of any data warehouse.
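
A minimal sketch of the simulated-annealing loop described above, applied to choosing a set of views to materialize (the cost model, neighbor move, storage budget, and cooling schedule are illustrative assumptions, not details from the paper):

    import math, random

    def anneal_view_selection(views, cost, size, budget,
                              temp=1000.0, cooling=0.95, steps=2000):
        """cost(S) estimates total query time if the views in S are materialized."""
        current = frozenset()
        best = current
        for _ in range(steps):
            candidate = current ^ {random.choice(views)}   # toggle one view in/out
            if sum(size[v] for v in candidate) <= budget:  # respect storage budget
                delta = cost(candidate) - cost(current)
                # accept improvements always, worsenings with probability e^(-delta/T)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    current = candidate
                    if cost(current) < cost(best):
                        best = current
            temp *= cooling                                # gradually cool
        return best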

Keywords: On-Line Analytical Processing (OLAP), Materialized view, Dynamic Warehouse, Simulated annealing, Data warehouse.

Full Text


A Comparative Study among Cryptographic Algorithms: Blowfish, AES and RSA

¹Wafaa A. N. A. AL-Nbhany, ²Ammar Zahary
¹The Arab Academy for Banking and Financial Sciences – Sana'a
Computer Information Systems Department
²Faculty of Computer and IT, Sana'a University
Sana'a, Republic of Yemen

Abstract: Encryption in information technology is the conversion of data and files from their known form into another form, so that the original form cannot be recovered without knowing the key used in the encryption process. Encryption algorithms differ in their properties, and the appropriate algorithm is chosen according to the nature of the application, since each application suits a specific algorithm. In this paper, a comparative study was conducted for three algorithms: AES, RSA, and Blowfish. Several performance metrics were used, such as symmetric/asymmetric key, key size in bits, encryption speed, decryption speed, and file size, to determine the properties of each algorithm. Results show that the RSA algorithm is very slow. Blowfish and AES process small files very quickly; however, for large files the algorithms' speeds differ, and the symmetric Blowfish algorithm is faster than the AES and RSA algorithms. Symmetric algorithms provide high security and high speed for encryption and decryption, while asymmetric algorithms provide high security but with more processing time.
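
A minimal timing harness of the kind such a comparison needs, sketched with the pycryptodome package (the cipher modes, key sizes, and 1 MiB payload are my assumptions; the paper does not state its test setup):

    import time
    from Crypto.Cipher import AES, Blowfish
    from Crypto.Random import get_random_bytes

    payload = get_random_bytes(1 << 20)   # 1 MiB; a multiple of both block sizes

    def encryption_time(make_cipher):
        cipher = make_cipher()
        start = time.perf_counter()
        cipher.encrypt(payload)
        return time.perf_counter() - start

    aes_time = encryption_time(lambda: AES.new(get_random_bytes(16), AES.MODE_CBC))
    bf_time = encryption_time(lambda: Blowfish.new(get_random_bytes(16), Blowfish.MODE_CBC))
    print(f"AES: {aes_time:.4f} s, Blowfish: {bf_time:.4f} s")
    # RSA is omitted here: as an asymmetric cipher it encrypts only key-sized
    # blocks, consistent with the abstract's finding that it is very slow.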

Keywords: Encryption, Decryption, Symmetric Key, Asymmetric Key, Advanced Encryption Standard (AES), Blowfish.

Full Text


Teaching and learning sustainably with Web 2.0 Technologies

Benefits, Barriers and Best Practices

Aoued Boukelif, Hasnia Merzoug and Fatiha Faty Aiboud
University of Sidi Bel Abbes, Algeria, University of Algiers, France,
Mohamed Boudiaf University of Science and Technology of Oran, Algeria
This email address is being protected from spambots. You need JavaScript enabled to view it. , This email address is being protected from spambots. You need JavaScript enabled to view it.

Abstract: A whole new range of web-based tools and services now provides learners with the opportunity to create their own digital learning materials, personal learning environments, and social networks. These tools offer an opportunity for new design models for education and training that will better prepare citizens and workers for a knowledge-based society. This paper provides insightful guidelines and tips for teaching with Web 2.0 technologies.

In recent years, ITs have fascinated many teachers and learners, but do they really bring advantages to the learning process? To what extent are they used? To reach what objectives? What are the new roles for teachers? This paper deals with the innovative uses of ITs in teaching and addresses these many issues.

The purpose of this paper is to explore best practices in teaching with Web 2.0 technologies as well as the benefits and barriers associated with the use of Web 2.0. The major benefits of using Web 2.0 technologies in teaching include interaction, communication and collaboration, knowledge creation, ease of use and flexibility, and writing and technology skills. The major barriers university instructors encounter in teaching with Web 2.0 technologies include uneasiness with openness, technical problems, and time.

The paper is organized as follows: first, current and emerging uses of ITs and multimedia in university teaching and learning are discussed; then two innovative teaching techniques are introduced, namely active pedagogy and problem-based learning. The paper concludes with the issue of integrating ITs in university teaching.

Keywords: ITs, teaching, learning, tools, pedagogy, digital workspace, assessment.

Full Text


AVL Tree: an Efficient Retrieval Engine for a Classified Fingerprint Database

Ahmed B. Elmadani
Department of Computer Science, Faculty of Science, Sebha University

Abstract: Fingerprints are used to identify humans and to solve crimes. They are used to authenticate persons in order to allow them to gain access to their financial and personal resources, or to identify them in big databases. This requires a fast search engine to reduce the time consumed in searching big fingerprint databases; choosing the search engine is therefore an important issue in reducing search time. This paper investigates existing search engine methods and presents the advantages of the AVL tree method over other methods, examining search speed and the time consumed to retrieve a fingerprint image.

Keywords: Fingerprint Databases, Fingerprint Classification, AVL Tree, Access Method Algorithms.

Full Text



Using Q-Gram and Fuzzy Logic Algorithms for Eliminating Data Warehouse Duplications

Dr. Murtadha M. Hamad1 and Salih S. Sami1
1College of Computer - Anbar University – Anbar – Iraq

Abstract: Context: The duplication problem, or record linkage, has many applications in real life, appearing in a wide range of areas such as detecting similar web documents, detecting plagiarism, and many other applications. A proper choice of method enhances data quality, which helps systems make the right decisions and plays a considerable part in improving the economic interest and suitability of logistics projects. Problem: Duplicate records introduce ambiguity when several records refer back to the same customer, whether the records differ by major changes to the data or only by simple variations. Objectives: The aim of this paper is to find an optimal solution for duplicate record detection and elimination using fuzzy logic (FL) and Q-grams, providing a data warehouse without duplicates; this minimizes the size of the DW, reduces the time needed to search it, and enhances the decision support system. Approach: The approach has two phases: first, similar records are found using Q-gram similarity; second, records are classified as duplicates or not using fuzzy logic. A similarity threshold of 0.68 was identified, chosen based on the results obtained; if the similarity between records exceeds this threshold, the pair is passed to the fuzzy logic algorithm, which in turn determines whether the record is a duplicate. The proposed work has an accuracy of 96%.
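
To make the first phase concrete, here is a minimal sketch of a Q-gram similarity between two field values (the Dice-style score and q = 2 are my assumptions; the abstract only states that Q-gram similarity is used with a 0.68 threshold):

    def qgrams(s, q=2):
        s = "#" * (q - 1) + s + "#" * (q - 1)   # pad so edge characters contribute
        return {s[i:i + q] for i in range(len(s) - q + 1)}

    def qgram_similarity(a, b, q=2):
        ga, gb = qgrams(a.lower(), q), qgrams(b.lower(), q)
        return 2 * len(ga & gb) / (len(ga) + len(gb))   # Dice coefficient over grams

    # Pairs scoring above the paper's 0.68 threshold go on to the fuzzy-logic classifier.
    print(qgram_similarity("Mohammed", "Mohamed"))      # high: likely the same customer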

Keywords: Duplicate Elimination, Similarity score, Q-Gram, Fuzzy logic, Key Generation.

Full Text


Empirical Study of Analysts' Practices in Packaged Software Implementation at Small Software Enterprises

Issam Jebreen and Ahmad Al-Qerem
Faculty of Information Technology
Zarqa University

Abstract: This study investigates the practices of requirements engineering (RE) for packaged software implementation (PSI), as enacted by small packaged software vendors (SPSVs). The research findings indicate that RE practice for PSI introduced new methods of documentation; was not as concerned as general RE practice with looking for domain constraints or with collecting requirements and viewpoints from multiple sources; was more likely to involve live software demonstrations and screenshots to validate user needs; and was more likely to involve the compilation of a user manual. In PSI, prioritising requirements is not a basic practice; instead, analysts collect requirements in a circular process, with managers then directing analysts regarding which requirements to direct most attention toward.

Keywords: Requirements engineering; Packaged software implementation; ERP; Analysts' practices; SMEs.

Full Text


Web-based Decision Support Systems

Challenges and Opportunities

Dr. Arwa Yahya Al-Eryani
Assistant Professor
Saba University – Faculty of Computer and Information Technology – Yemen

Abstract: Modern technologies, especially web technologies, have offered better opportunities for decision support systems while at the same time posing many challenges. This study identifies the opportunities and challenges of web-based decision support systems compared with traditional decision support systems, and reviews the most important web-based DSS technologies in comparison with their counterparts in traditional DSS. Drawing on a set of previous studies, the study concludes that web-based decision support systems offer a great many opportunities and enormous capabilities, but also pose challenges that make some organizations hesitant to use them; organizations should therefore adopt what suits them, since high-level technologies are costly to apply if they are not exploited as they should be.

Keywords: traditional decision support systems, web-based decision support systems, opportunities, challenges.

Full Text 


An Automatic Grading System based on Dynamic Corpora

Djamal Bennouar
Bouira University, Algeria

Abstract: Assessment is a key component of the teaching and learning process. Assessing a student's answer to an open-ended question, even a short-answer question, is a difficult and time-consuming activity. Current Automatic Grading Systems (AGS) do their work using static corpora, and building an efficient corpus for a course is a real challenge. The underlying subjectivity in grading short answers may seriously affect the quality of a corpus, and a specific course context defined by a teacher, together with a time-dependent grading strategy, may make the construction of traditional course corpora very difficult. This paper presents an AGS for short answers based on dynamically built, up-to-date corpora. The corpora comprise two kinds of corpus: one related to the reference answer and one related to the student answer. Each corpus is automatically generated by applying a set of semantic and syntactic teacher indications to the reference and student answers. The teacher indications are introduced by the teacher in a process of predicting possible student answers. The grading process of the proposed AGS finds the most similar answers in the two corpora in order to determine the most appropriate grade for a student answer.

Keywords: Computer Aided Assessment, Automatic Grading System, short answer, corpus, answer predicting, text similarity.

Full Text


A Customers' Discrete Choice Model for Competition over Incomes in Telecommunications Market

M'hamed Outanoute1, Mohamed Baslam1, Belaid Bouikhalene2
1Sultan Moulay Slimane University, Faculty of Sciences and Techniques, Beni Mellal, Morocco
2Sultan Moulay Slimane University, Polydisciplinary Faculty, Beni Mellal, Morocco

Abstract: Customers churn between service providers (SPs) due to better prices, better quality of service, or better reputation. Each provider is assumed to seek a maximized revenue, which depends on the strategy of its competitors. Several works in the literature are based on price as the only decision parameter, whereas other parameters, such as quality of service (QoS), have a decisive impact on registering with one operator rather than another. We formulate the interaction between SPs as a non-cooperative game. First, each SP chooses the QoS to guarantee and the corresponding price. Second, each customer chooses his SP and may churn to another, or alternatively switch to a "no subscription" state, depending on the observed price and QoS. In this work, we use a Markov chain to model users' decisions, which depend on the strategic actions of the SPs, and we adopt a logit model to represent the transition rates. Finally, we provide extensive numerical results to show the importance of taking price and QoS as joint decision parameters.
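
For reference, a standard logit choice rule of the kind invoked here (notation assumed, not taken from the paper) makes the probability that a customer migrates to provider $i$ depend on the utilities $u_j$ (combining price and QoS) offered by all providers:

    P_i = \frac{e^{u_i/\mu}}{\sum_j e^{u_j/\mu}}

where $\mu > 0$ tunes customer rationality: a small $\mu$ concentrates the choice on the highest-utility provider, while a large $\mu$ makes switching nearly random.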

Keywords: Pricing, QoS, Migrating customers, Service providers' competition, Logit model.

Full Text


Coupling of Geographic Location-based Service and Routing for Wireless Sensor Networks

Rania Khadim1,2, Ansam Ennaciri2, Mohammed Erritali2 and Abdelhakime Maaden1
1 Laboratory of Mathematics and Applications
2 TIAD Laboratory, Department of Computer Sciences
Faculty of Sciences and Technics, Sultan Moulay Slimane University,
BENI MELLAL, MOROCCO
{khadimrania, ennaciri.ansam}@gmail.com

Abstract: Geographic routing protocols use location information when they need to route packets. Location information is maintained by location-based services provided by network nodes in a distributed way. Routing and location services are closely related but are used separately; therefore, the overhead of the location-based service is not considered when the geographic routing overhead is evaluated. Our aim is to combine routing protocols with location-based services in order to reduce communication establishment latency and routing overhead. To reduce the location overhead, we propose a combination of GPSR (Greedy Perimeter Stateless Routing) and location-based services (Grid and Hierarchical Location Services, GLS/HLS). GPSR takes care of routing packets, and GLS or HLS is called to get the destination position when the target node's position is unknown or not fresh enough. In order to implement this concept, we have proposed a patch for the NS-2 simulator which mixes GPSR, GLS and HLS according to our proposal. We have undertaken a set of experiments considering two performance criteria: the location overhead (number of sent location requests and the consumed location bandwidth) and the network performance (i.e., the packet delivery ratio and the average latency).

Keywords: WSN, Geographic Routing Protocols, Location-based services.

Full Text


A Comparison Study between RCCAR and Conventional Prediction Techniques for Resolving Context Conflicts in Pervasive Context-Aware Systems

Asma Al-Shargabi, Francois Siewe, Ammar Zahary
Faculty of Computing and Information Technology, University of Science and Technology, Sana'a, Yemen; Faculty of Technology, De Montfort University, Leicester, United Kingdom
Faculty of Computer and IT, Sana'a University, Yemen

Abstract: In pervasive computing environments, context-aware systems face many challenges in maintaining high-quality performance. One challenge facing context-aware systems is conflicting values coming from different sensors for different reasons. These conflicts affect the quality of context and, as a result, the quality of service as a whole. This paper extends our previous work published in [15], in which we presented an approach for resolving context conflicts in context-aware systems called RCCAR (Resolving Context Conflicts Using Association Rules). RCCAR was implemented and verified in [15]; this paper conducts further experiments to explore the performance of RCCAR in comparison with traditional prediction methods. The basic prediction methods tested include simple moving average, weighted moving average, single exponential smoothing, double exponential smoothing, and ARMA. Experiments were conducted using Weka 3.7.7 and Excel; the results show better achievements for RCCAR than for the conventional prediction methods. Further research is recommended to reduce the cost of RCCAR.
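
For reference, minimal versions of three of the baseline predictors named above (window sizes and the smoothing constant are arbitrary illustrative choices):

    def simple_moving_average(xs, n=3):
        return sum(xs[-n:]) / n                    # mean of the last n observations

    def weighted_moving_average(xs, weights=(1, 2, 3)):
        tail = xs[-len(weights):]
        return sum(w * x for w, x in zip(weights, tail)) / sum(weights)

    def single_exponential_smoothing(xs, alpha=0.5):
        s = xs[0]
        for x in xs[1:]:
            s = alpha * x + (1 - alpha) * s        # blend new observation with history
        return s

    readings = [21.0, 21.4, 22.1, 21.8, 22.5]      # e.g., one sensor's recent values
    print(simple_moving_average(readings),
          weighted_moving_average(readings),
          single_exponential_smoothing(readings))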

Keywords: RCCAR, Pervasive Computing, Context-Aware System (CAS), Context Conflicts, Prediction.

Full Text


Towards Accurate Real-Time Traffic Sign Recognition Based on Unsupervised Deep Learning of Spatial Sparse Features: A perspective

Ahmad M. Hasasneh1, Yousef-Awwad Daraghmi2, and Nabil M. Hasasneh3
1Department of Information Technology, Palestine Ahliya University, Palestine
2Computer Engineering Department, Palestine Technical University, Palestine
3Computer Science Department, Hebron University, Palestine

Abstract: Learning a good generative model is of utmost importance for problems in computer vision, image classification, and image processing. In particular, learning features from tiny image patches and then performing further tasks, like traffic sign recognition, can be very useful. In this paper we propose to use Deep Belief Networks, based on Restricted Boltzmann Machines and a direct use of tiny images, to produce an efficient local sparse representation of the initial data in the feature space. Such a representation is assumed to be linearly separable, and therefore a simple classifier, like softmax regression, is suitable to achieve accurate and fast real-time traffic sign recognition. However, to achieve localized features, data whitening, or at least local normalization, is a prerequisite for these approaches. The low computational cost and the accuracy of the model enable us to use it on smart phones for accurately recognizing traffic signs and alerting drivers in real time. To our knowledge, this is the first attempt to show that tiny-image feature extraction using a deep architecture is a simpler alternative approach for traffic sign recognition that deserves to be considered and investigated.

Keywords: Traffic Sign Recognition, Image Processing, Image Classification, Computer Vision, Restricted Boltzmann Machines, Deep Belief Networks, Softmax Regression, Sparse Representation.

Full Text


Combining Knowledge Based System with Information Technology based Project Management

Nour Eldin Mohamed Elshaiekh, Arockiasamy Soosaimanickam
Faculty members, Department of Information Systems, College of Economics, Management and Information Systems, University of Nizwa, Sultanate of Oman.

Abstract: The growing trend toward better project management is essential and widespread, particularly for information technology (IT) based projects. Many institutions today have new or renewed techniques and methods for handling project management. The Internet, computer devices, software systems, applications, knowledge, and interdisciplinary practice form a worldwide effort that has completely changed project management activities. Most IT projects need computer applications to work more efficiently and with greater quality, to increase reliability, improve productivity, and avoid failures that may occur. A knowledge-based system (KBS) is a kind of computer database that supports projects and uses knowledge to solve complex problems. Combining this kind of system with IT project management will help integrate these systems to support IT project management in order to improve the quality, management, and reliability of these projects. The main purpose of this paper is to explore the ongoing effect of combining knowledge-based systems with IT project management. It provides a review of the idea of knowledge-based project management and discusses the various reasons for managing IT projects in combination with knowledge-based systems. Additionally, the paper surveys a number of studies to classify the practices of KBS and IT projects. Finally, the paper concludes that knowledge-based systems can be a good tool for IT project management.

Keywords: Knowledge Management, Knowledge-Based Systems, Project Management, IT Projects.

Full Text 


A comparative study of liver disorders prediction based on Neuro-Fuzzy and Metaheuristic approaches

Fatima Bekaddour1, Chikhi Salim1, Bekaddour Okkacha2
1 MISC Laboratory, Mentouri University, Constantine, Algeria
2 Computer Science Department, Tlemcen University, Algeria

Abstract: In this work, we propose applying some well-known metaheuristics to enhance the performance of a medical classifier. The IHBA (Improved Homogeneity-Based Algorithm), Simulated Annealing (SA), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA) metaheuristics have been applied in conjunction with a neuro-fuzzy system to minimize the value of an objective function proposed in our previous work. We validate our computational results on the liver disorders dataset obtained from the UCI repository. Results show that the IHBA approach achieves the best performance, and both SA and PSO outperform the GA metaheuristic and the standard neuro-fuzzy model.

Keywords: Metaheuristics, Neuro-Fuzzy, IHBA, HBA, SA, PSO, GA, Bupa Liver Disorders, Medical Informatics.

Full Text


Oily Fingerprint Image Enhancement Using Fuzzy Morphology

Abdelwahed Motwakel1, Adnan Shaout2
1College of Post Graduate Studies, Sudan University of Science and Technology
2The Electrical and Computer Engineering Department, The University of Michigan-Dearborn

Abstract: The quality of a fingerprint image greatly affects the performance of minutiae extraction and the matching process in a fingerprint identification system. In order to improve the performance of the fingerprint identification system, a fuzzy morphology technique is proposed in this paper to enhance oily fingerprint images. Experimental results using the DB_ITS_2009 database [15] indicate that the proposed method does enhance the quality of fingerprint images. The proposed method increased the fingerprint identification rate from 82% to 96% when compared with an adaptive pre-processing method based on binary image morphology.
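
For reference, one common formulation of fuzzy grayscale morphology (with min and max as the fuzzy intersection and union; the abstract does not give the paper's exact operators) defines the fuzzy dilation and erosion of an image $A$ by a structuring element $B$, both valued in $[0, 1]$, as:

    (A \oplus B)(x) = \sup_y \min\big(A(y),\ B(x - y)\big)
    (A \ominus B)(x) = \inf_y \max\big(A(y),\ 1 - B(y - x)\big)

Enhancement schemes of this kind typically combine such operators into openings and closings to repair ridge structure in low-quality (e.g., oily) impressions.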

Keywords: Oily fingerprint image; fuzzy morphology; fuzzy dilation; fuzzy erosion; image enhancement.

Full Text


Hybrid Arabic Handwritten Character Recognition Using PCA and ANFIS

1Omar Balola Ali, 2Adnan Shaout
1Sudan University for Sciences and Technology, Sudan
2The University of Michigan – Dearborn, USA

Abstract: In this paper we present a two-phase method for an isolated Arabic handwritten character recognition system. The proposed system is a hybrid that uses the principal component analysis (PCA) feature technique and a neuro-fuzzy classifier. The Adaptive Neural Network Fuzzy Inference System (ANFIS) was used at all levels of the character recognition stages, with different learning algorithms and nonlinear outputs. The proposed system is applied to the Sudan University of Science and Technology Arabic Recognition Group (SUST-ARG) data set. The work was divided into two stages. In the first stage, the system was applied to 34 Arabic characters and achieved a 96.2% recognition rate on the test data. In the second stage, a dedicated classifier for each group was created to recognize and classify the characters within that group, achieving a 99.5% recognition rate on the test data.

Keywords: Isolated Handwritten Arabic Character Recognition, Principal Component Analysis (PCA), Feature Extraction, Adaptive Neural Network Fuzzy Inference System (ANFIS).

Full Text


TCDR based on Efficiency and Accuracy of the Intelligent Systems

Azmi Shawkat Abdulbaqi1, AbdAbrahem Mosslah2, Reyadh Hazim Mahdi3
1Dept. of Computer Science, College of Computer & Information Technology, University of Anbar, Iraq
2Dept. of Fiqih, College of Islamic Science, University of Anbar, Iraq
3Dept. of Computer Science, College of Science, University of Mustanseriah, Iraq

Abstract: According to recent World Health Organization (WHO) reports, one person dies of oral cancer every hour of every day in the United States. Oral cancer is a term used to describe any tumor that appears in the oral cavity. The tumor may be primary, originating in the oral tissues, or a secondary mouth tumor. Tongue cancer is one of the common oral diseases. In this paper, a tongue cancer detection and recognition (TCDR) system using a Radial Basis Function (RBF) neural network, a MultiLayer Perceptron (MLP), and a Genetic Algorithm (GA) is proposed. The proposed system consists of three main steps: first, pre-processing is applied to the input image (mouth, gum, and tongue images); second, the features of the tumor tissue are extracted and used as input parameters to the hybrid algorithm; finally, the proposed algorithm performs the classification to obtain the results.

Keywords: Radial Basis Function (RBF) Neural Network, MultiLayer Perceptron (MLP), Feature Extraction, Genetic Algorithm (GA), Canny Edge Detection (CED).

Full Text 


Visualizing Testing Results for Software Projects

Ahmed Fawzi Otoom, Maen Hammad, Nadera Al-Jawabreh, Rawan Abu Seini 
Department of Software Engineering
Faculty of Prince Al-Hussein Bin Abdullah II for Information Technology
The Hashemite University, Zarqa, Jordan
{aotoom, mhammad}@hu.edu.jo

Abstract: The key benefit of software visualization is to help in program understanding and in reducing the complexity of software systems. Test cases are essential artifacts for performing testing activities, and a large number of test cases is needed to cover the different aspects of the code. This paper proposes a visualization approach to represent test case results and their relationship to object-oriented software systems. The proposed visualization helps testers and program managers get a clear and quick understanding of the test cases, the tested code, and the results of testing. It represents test cases and source code at different views: method, class, package, and system. The test cases are colored according to their execution results. We applied the proposed approach to two Java classes to illustrate the benefits and usefulness of the proposed views.

Keywords: software visualization, software testing, program comprehension.

Full Text



A comparative study of context-management approaches for the Internet of Things

Farida Retima 1, Saber Benharzallah 2, Laid Kahloul 3, Okba Kazar 4
1,2,3,4Smart Computer Sciences Laboratory, Biskra University 07000, Algeria

Abstract: Currently, context management solutions for the Internet of Things are the subject of numerous studies, achieved by extending context managers designed for ambient environments. In the literature, there are different approaches enabling context management for the Internet of Things (IoT). This paper studies the best-known of these approaches according to our defined criteria: heterogeneity, mobility, influence of the physical world, scalability, security, privacy, quality of context, autonomous deployment of entities, multi-scale characterization, and interoperability.

Keywords: Context management, Internet of things, Context-awareness, Context manager, middleware.

Full Text


eLearner Experience Model

Rawad Hammad, Mohammed Odeh, and Zaheer Khan
Software Engineering Research Group, Faculty of Environment and Technology, University of the West of England, Bristol, United Kingdom

Abstract: In the literature, many e-learning artefacts have been developed and promoted based on their ability to enhance learning and the e-learner experience. However, the literature lacks a precise definition of e-learner experience. This paper discusses an e-learner experience model along with its roots in (i) e-learning domain research and (ii) user experience/usability. It also proposes a definition of the e-learner experience model based on the particularities of e-learning. The proposed model has been derived from a state-of-the-art literature review. It consists of different constructs, and this paper presents an analysis of these constructs to measure their effectiveness in evaluating e-learner experience in an e-learning environment. Preliminary assessment of the proposed model indicates promising results to be investigated further as future work.

Keywords: e-learner experience, e-learning evaluation, learner modelling, user experience, usability, Technology-Enhanced Learning/e-learning.

Full Text


A Simple Skew Correction Method for Sudanese License Plates

Musab Bagabir1 and Mohamed Elhafiz2
1Faculty of Computer Studies, The National Ribat University, Khartoum, Sudan
2College of Computer Science and Information Technology, Sudan University for Science and Technology, Khartoum, Sudan

Abstract: License plate character segmentation is an important phase of vehicle license plate recognition systems, and a skewed license plate negatively affects the accuracy and efficiency of character segmentation. This paper presents a simple skew correction method designed mainly for Sudanese vehicle license plate recognition. The proposed method involves several steps: contrast enhancement and binary conversion, filtering of unwanted regions, and computation of the skew angle. In order to analyze the performance and efficiency of the proposed method, a number of experiments were carried out on a new dataset of images. The test results demonstrate that the proposed method is efficient enough to be used in a license plate recognition system.
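
A minimal sketch of the skew-angle step with OpenCV (the Otsu binarization and the minimum-area-rectangle angle estimate are common choices I am assuming here, not necessarily the paper's):

    import cv2
    import numpy as np

    gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    coords = np.column_stack(np.where(bw > 0)).astype(np.float32)  # foreground pixels
    angle = cv2.minAreaRect(coords)[-1]        # tilt of the tightest bounding box
    if angle < -45:                            # angle conventions vary by OpenCV version
        angle = -(90 + angle)
    else:
        angle = -angle

    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    deskewed = cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)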

Keywords: Contrast-Limited Adaptive Histogram Equalization; Morphological Operation; Binary Conversion; Vehicle License Plate Recognition.

Full Text


Design Patterns for Dialog Boxes in User Interface Mobile Applications

Mohamed Khlaif, Sana Alghazal, Raja Awami
Benghazi University IT Faculty, Libyan Arab Jamahiriya

Abstract: The aim of this study is to investigate the emerging challenges accompanying the ongoing development of information technology, especially in the field of smart phone applications, which leave IT experts, designers, manufacturers, and researchers needing better and more effective solutions. One of these challenges arises when the software (SW) keyboard is shown and hidden in the UI applications of a PDA, PC, or any other mobile device. The keyboard is shown when the user wants to enter text, which leads to the SW keyboard occupying part of the application area, so the application has less room for its normal interaction. The main aim of this research is to use the Model-View-Controller (MVC) design pattern to solve this problem and to make the interaction with dialog boxes when entering text on a mobile UI easier, more effective, and more practical.

Keywords: User Interface (UI), Personal Digital Assistant (PDA), Software (SW), Personal Computer (PC), Information Technology (IT), Global Positioning System (GPS), Model-View-Controller (MVC).

Full Text


Exploiting Multilingual Wikipedia to improve Arabic Named Entity Resources

Mariam Biltawi, Arafat Awajan, Sara Tedmori, and Akram Al-Kouz
King Hussein Faculty of Computing Sciences
Princess Sumaya University for Technology
Amman, Jordan

Abstract: This paper focuses on the creation of Arabic named entity gazetteer lists by exploiting Wikipedia and using the Naïve Bayes classifier to classify named entities into the three main categories: person, location, and organization. The process of building the gazetteer starts with automatically creating the training and testing corpora. The training corpus consists of Arabic text, whereas the testing corpus is derived from an English text using the Stanford named entity recognizer. A check is then performed for whether each English named entity exists as a Wikipedia page title; if it does, a check for a parallel Arabic page is conducted. Finally, the Naïve Bayes classifier is applied to verify or assign a new named entity tag to the Arabic named entity. Due to the lack of available resources, the proposed system is evaluated manually by calculating accuracy, recall, and precision. Results show an accuracy of 53%.
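
A minimal sketch of the classification step with scikit-learn (the character n-gram features and the toy training data are my assumptions; the paper does not describe its feature set):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training set: Arabic named entities labelled with the three categories.
    names = ["محمد علي", "عمان", "جامعة الأميرة سمية"]
    labels = ["person", "location", "organization"]

    model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # sub-word cues
        MultinomialNB(),
    )
    model.fit(names, labels)
    print(model.predict(["الأردن"]))   # tag a new named entity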

Keywords: Arabic named entity resources; Naïve Bayes classifier; Wikipedia.

Full Text


Region Coded Hashing based Template Security Scheme for Multi-biometric System

Arvind Selwal 1, 2, Sunil Kumar Gupta 3 and Surender 4
1 Department of Computer Engineering, I.K. Gujral Punjab Technical University, Jalandhar, India
2Department of Computer Science & IT, Central University of Jammu, Jammu, India
3 BCET, (Autonomous Institute of Punjab Govt. & IKGPTU), Gurdaspur, India
4 Guru Teg Bahadur College, Bhawanigarh, Sangrur, Punjab, India

Abstract: Biometric-based systems (BMS) have proved to be an enormously superior and accurate authentication mechanism compared with conventional methods. The accuracy of recognition systems is further enhanced by using multi-modal biometric systems (MBS), at additional cost. BMS-based human recognition works by extracting unique feature points from the raw biological trait of the user, captured through a sensor. The important feature points are stored in a system database and referred to as the feature template of the enrolled user. Template security, before or after storage in the database, has become an important design issue and has attracted the attention of many researchers. In this paper, a novel region coded hashing (RCH) based template security scheme for a multi-instance biometric system is presented. The proposed scheme is based on two instances of a biometric trait, i.e., the left- and right-hand index fingerprints. In the enrolment phase, biological information captured from the sensing devices is used to extract important feature points using feature extractors. The extracted real-valued fingerprint feature vectors are passed through the proposed RCH scheme to obtain transformed templates, and the transformed vectors are fused at the matching score level. The proposed template security scheme achieves good overall performance, with a false accept rate (FAR) of 0.2% and a false reject rate (FRR) of 0.05%.

Keywords: Multi-biometrics, fingerprint, fusion, template security, feature transformation, hashing, region codes.

Full Text


A comparative study of open source digital library software

Jaouad OUKRICH1, Belaid BOUIKHALENE2,and Noureddine ASKOUR1
1MPA, Sultan Moulay Slimane, Beni-MellalUniversity, Morocco
1Laboratory LIDST, Polydisciplinary Faculty, Department of Mathematics, Sultan Moulay Slimane, Beni-MellalUniversity, Morocco
2MPA, Sultan Moulay Slimane, Beni-MellalUniversity, Morocco

Abstract: The purpose of this paper is to evaluate five Open Source Digital Library Software (OSDLS) packages written in PHP, in order to determine which packages offer more in terms of services and ease of use. We conducted a comparative study of the features of these five OSDLS packages based on their completeness and maturity. We found that the PMB (PhpMyBibli) software satisfies the main functional requirements of a library management system.

Keywords: Open source digital library software, OSDLS, comparative study.

Full Text


An Efficient Ranking Algorithm for Scientific Research Papers

Yaser Al-Lahham, Fathi Al-Hatta, Mohammad Hassan
Zarqa University, Jordan

Abstract: A large body of scientific research papers of variable quality has become available online, and it is generally hard for researchers and students to locate the information they want using traditional search engines, which return large numbers of documents. This makes it necessary to develop an efficient and effective ranking algorithm to solve this problem. One of the most famous ranking algorithms is PageRank (PR), originally used to rank web pages. This algorithm needs to be modified in order to rank scientific research papers efficiently, since they differ from web pages. This paper proposes adding factors that enhance PageRank results when applied to research papers, making it less biased toward old papers and taking author ranks into account. The results show that the proposed method improves PageRank with respect to the recall and precision measures.
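
A compact power-iteration PageRank over a paper citation graph, with a hook for the kind of extra factors proposed above (the age and author-rank boosts are illustrative assumptions, not the paper's exact formula):

    import numpy as np

    def paper_rank(cites, boost, d=0.85, iters=100):
        """cites[i] lists the papers that paper i cites; boost[i] >= 0 biases paper i."""
        n = len(cites)
        rank = np.full(n, 1.0 / n)
        teleport = np.asarray(boost, dtype=float)
        teleport /= teleport.sum()
        for _ in range(iters):
            incoming = np.zeros(n)
            for i, outs in enumerate(cites):
                for j in outs:
                    incoming[j] += rank[i] / len(outs)
            rank = (1 - d) * teleport + d * incoming   # dangling mass dropped for brevity
        return rank

    # Toy graph: papers 0 and 1 cite paper 2; boosts could encode recency and author rank.
    print(paper_rank(cites=[[2], [2], []], boost=[1.0, 1.2, 0.8]))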

Keywords: Scientific Research Papers' Ranking, PageRank, Search Engines.

Full Text


Prediction of Suicidal Ideation in Twitter Data using Machine Learning algorithms

Marouane Birjali1, Abderrahim Beni-Hssane1, and Mohammed Erritali2
1LAROSERI Laboratory, Department of Computer Sciences, University of Chouaib Doukkali, Faculty of Sciences, El Jadida, Morocco
2TIAD Laboratory, Department of Computer Sciences, University of Sultan Moulay Slimane, Faculty of Sciences and Technologies, Béni Mellal, Morocco

Abstract: The rise of social networks and the large amount of data they generate has led researchers to study how to exploit them to identify hidden knowledge. Suicide is a serious mental health problem that demands our attention, and controlling and preventing it is not an easy task. In this paper, we propose a suicidal ideation detection system for predicting suicidal acts using Twitter data, which can automatically analyze the sentiment of tweets. We then use data mining tools to extract useful information for classifying the tweets collected from Twitter using machine learning classification algorithms. Experimental results show the effectiveness of our method for detecting suicidal acts, in terms of recall, precision, and accuracy on sentiment analysis.

Keywords: Twitter; machine learning; suicide; tweets; sentiment analysis.

Full Text


Word Boundary Detection in Tifinagh using MaxEnt and n-gram algorithms

Mohamed Biniz1, Rachid El Ayachi2, Mohamed Fakir3
Laboratory of Information Processing and Decision Support (TIAD),
Faculty of Sciences and Technics, Sultan Moulay Slimane University

Abstract: This article proposes the use of Maximum Entropy (MaxEnt) and n-gram algorithms for the automatic segmentation of words in Tifinagh. Maximum Entropy is a probability distribution widely used for a variety of natural language processing tasks, such as sentence boundary detection and part-of-speech tagging. The maximum entropy formulation has a unique solution that can be found by the Generalized Iterative Scaling algorithm. Our experimental results show that the character-based maximum entropy model considerably improves the quality of word segmentation in Tifinagh.
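
For reference, the conditional maximum entropy model commonly used for such boundary decisions (standard form; notation assumed) scores a label $y$ (boundary or no boundary) for a character context $x$ as:

    p(y \mid x) = \frac{1}{Z(x)} \exp\Big(\sum_i \lambda_i f_i(x, y)\Big), \qquad Z(x) = \sum_{y'} \exp\Big(\sum_i \lambda_i f_i(x, y')\Big)

where the $f_i$ are binary features of the surrounding characters and the weights $\lambda_i$ are fitted, as the abstract notes, by Generalized Iterative Scaling.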

Keywords: MaxEnt, Tifinagh, WBD, Token, NLP.

Full Text


Data values of MOOCs in university education

1Z. Harmouch, 2K. Ghoulam, 3B. Bouikhalene and 4H. Mouncif
1,2Department of Mathematics and Informatics, Sultan Moulay Slimane University, FST, Beni Mellal, Morocco
3,4Department of Mathematics and Informatics, Sultan Moulay Slimane University, FP, Beni Mellal, Morocco

Abstract: The term MOOC (Massive Open Online Course) refers to online teaching platforms offering open courses that can enroll hundreds, thousands, or even tens of thousands of students simultaneously. More than ever, the amount of data on learners, teachers, and courses has exploded in the world of MOOCs. In addition, more data about learners are available from several sources, including social networking platforms such as Facebook, Twitter, and YouTube. The purpose of this paper is not to analyze the MOOC phenomenon in general, but to give an original view of a MOOC through its students, their profiles, and their activities during the course. To cope with this massive data, we use Big Data analytics to explore and analyze the behavior of learners, their profiles, and their activities during a course launched on the educational platform of Sultan Moulay Slimane University. We describe the communities of students, their socio-economic profiles, their motivations, and their activities on the Facebook group created for this course, and we also study how exchanges on this social network are structured during the course.

Keywords: MOOC, Big Data, social networks, Facebook, online education.

Full Text


Segmentation of text/graphic from handwritten mathematical documents using Gabor filter

Yassine CHAJRI, Abdelkrim MAARIR, Belaid BOUIKHALENE
Sultan Moulay Slimane University, Morocco

Abstract: Most handwritten mathematical documents contain graphics in addition to mathematical text, so these documents must be segmented into homogeneous areas to facilitate their digitization. Text/graphics segmentation aims at dividing the document into two blocks: the first contains the text and the second the graphical objects. In this paper, we focus on document segmentation based on texture, and precisely on frequency methods. These methods are well suited to characterizing texture and allow the detection of characteristic frequencies and orientations. We first present the main steps of our system (pre-processing, feature extraction using Gabor filters, post-processing, and text/graphics segmentation), and then discuss and interpret the results obtained by our system.

Keywords: Handwritten mathematical document; segmentation; graphics; Gabor filter; co-occurrence matrix; Haralick.

Full Text


REPLICA PLACEMENT STRATEGY BASED ON ANALYTIC HIERARCHY PROCESS IN HETEROGENEOUS CLOUD DATA STORAGE

Mohammed  Radi
Alaqsa University, Palestinian Territory

Abstract: Cloud data storage platforms have attracted increasing attention in data management. Data replication is a well-known technique that reduces access latency, thus improving cloud data storage availability and performance. Nonetheless, replica placement is among the significant issues in data replication, affecting data availability and access performance considerably. Replica placement algorithms determine where data replicas are located in the data storage system. Replica placement is a classical Multi-Attribute Decision Making (MADM) problem, and the analytic hierarchy process (AHP) is a structured technique for organizing and analyzing complex decisions based on mathematics and psychology. In this study, we present a replica placement algorithm based on AHP (RPAHP) for heterogeneous cloud data storage. Simulation results show that the RPAHP strategy performs better than other replica creation strategies in terms of average response time.
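
A minimal sketch of the AHP weighting step (the three criteria and the pairwise judgments below are hypothetical; the paper's actual attributes are not listed in the abstract):

    import numpy as np

    # Hypothetical pairwise comparisons of three placement criteria,
    # e.g., node load vs. network latency vs. free storage.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = eigvecs[:, k].real
    w = w / w.sum()                      # principal eigenvector = criterion weights

    # Saaty's consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    CI = (eigvals.real[k] - 3) / (3 - 1)
    CR = CI / 0.58                       # RI = 0.58 for n = 3
    print(w, CR)                         # judgments are acceptable when CR < 0.1

Candidate nodes are then scored by the weighted sum of their attribute values, and the replica is placed on the best-scoring node.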

Keywords: cloud data storage, data replication, replica placement algorithms, analytic hierarchy process, CloudAnalyst.

Full Text


A new approach for 3D object form detection using a new SUSAN descriptor algorithm

Ilhame AGNAOU1, Belaid BOUIKHALENE2
1Laboratory of Information Processing and Decision Support, Sultan Moulay Slimane University, Beni Mellal, Morocco
2Laboratory LIRST, Department of Mathematics and Computer Science, Polydisciplinary Faculty, Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: This paper falls within the context of object recognition, in particular the detection of 3D objects and their free forms by local descriptors of interest points that identify them. Several problems remain to be solved in this area, related to the large amount of information involved and to invariance to scale and viewing angle. In this context, our purpose is to recognize a 3D object from the detection of its interest points, and to extract characteristics from the detection of each object to facilitate its retrieval in a database.

For this reason, we propose a new noise-robust detector that includes criteria for extracting the interest points of 3D objects by specifying their free forms. This detector is based on the SUSAN detector and uses differential measures to compare it with others.

Keywords:recognition, detection, 3D objects, detector, descriptor, interest points.

Full Text


Predicting Learners' Performance in an E-Learning Platform Based on Decision Tree Analysis

Badr HSSINA1, Abdelkrim MERBOUHA2, and Belaid BOUIKHALENE1
1TIAD Laboratory, Computer Sciences Department, Sultan Moulay Slimane University, FST, Beni-Mellal, Morocco
2LMACS Laboratory, Mathematics Department, Sultan Moulay Slimane University, FST, Beni-Mellal, Morocco

Abstract: The ability to predict learners' performance on an e-learning platform is a decisive factor in current educational systems. Learning through decision trees uses sophisticated and efficient algorithms based on predictive models. A decision tree is a decision support tool for assessing the value of a characteristic of a population based on the observation of other characteristics of the same population. As our research focuses on helping a tutor monitor learners' activities on e-learning systems (B. Hssina et al. 2015 [1]; B. Hssina et al. 2014 [2][3]), we propose a predictive model based on the ID3, C4.5, and CART algorithms to predict the level of learners along their learning path in a training course. The most efficient algorithm is chosen through a comparative study of the different decision tree algorithms, which leads us to confirm that the most powerful and most convenient is C4.5. The data used are harvested from the e-learning platform on which the learners are enrolled.
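
For reference, the split criteria behind ID3 and C4.5 (standard definitions, not specific to this paper): ID3 chooses the attribute $A$ with the largest information gain over a sample set $S$, while C4.5 normalizes the gain by the split information, which reduces the bias toward many-valued attributes:

    H(S) = -\sum_c p_c \log_2 p_c, \qquad \mathrm{Gain}(S, A) = H(S) - \sum_v \frac{|S_v|}{|S|} H(S_v)

    \mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{-\sum_v \frac{|S_v|}{|S|} \log_2 \frac{|S_v|}{|S|}}

where $p_c$ is the proportion of class $c$ in $S$ and $S_v$ is the subset of $S$ taking value $v$ on attribute $A$.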

Keywords:E-learning, Data mining, Decision trees.

Full Text


Analyzing competition over visibility in social networks

Khadija Touya1, Mohamed Baslam1, Rachid El Ayachi1 and Mostafa Jourhmane2
1Department of Computer Science, Sultan Moulay Slimane University, Morocco
2Department of Mathematics, Sultan Moulay Slimane University, Morocco

Abstract: Social networks have evolved considerably in the last few years. These structures, made up of individuals tied by one or more specific types of interdependency, are the window through which members express their opinions and thoughts by sending posts to their own walls or to others' timelines. When new content arrives, it is placed at the top of the timeline, pushing away older messages. This situation causes a permanent competition over visibility among subscribers. Our study models this competition as a non-cooperative game: each source has to choose the posting frequencies that ensure its visibility. Exploring the theory of concave games, we seek a situation of equilibrium, where no player can benefit by deviating from its current strategy. We formulate the named game, analyze it, and prove that there is exactly one Nash equilibrium, to which all players' best responses converge. We finally provide some numerical results, considering a system of two sources with a specific frequency space, and analyze the effect of different parameters on sources' visibility on the walls of social networks.
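
For reference, the equilibrium notion used here (standard definition): a profile of posting frequencies $x^* = (x_1^*, \ldots, x_n^*)$ is a Nash equilibrium when no source $i$ can increase its utility $u_i$ by unilaterally changing its own frequency:

    x_i^* \in \arg\max_{x_i} u_i(x_i, x_{-i}^*) \quad \text{for every player } i

For concave games (each $u_i$ concave in $x_i$ over a convex, compact strategy space), Rosen's theorem guarantees existence of such a point, with uniqueness under diagonal strict concavity, which is the setting the abstract appeals to.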

Keywords: Social networks, game theory, Nash equilibrium, best response, utility function, concave game.

Full Text


A Hybrid Approach of Semantic Similarity Calculation for Content-based Recommendation of Text Documents on an E-learning Platform

Badr HSSINA1, Abdelkrim MERBOUHA2, and Belaid BOUIKHALENE1
1TIAD Laboratory, Computer Sciences Department, Sultan Moulay Slimane University, FST, Beni-Mellal, Morocco
2LMACS Laboratory, Mathematics Department, Sultan Moulay Slimane University, FST, Beni-Mellal, Morocco

Abstract: In the online learning sector, electronic recommendation of items is crucial: because the amount of information is so large, effective guidance means targeting the items a user should consult. To address this problem, we propose an innovative document recommendation system based on semantic indexing. In this context, we have created a system that computes semantic similarity between text documents to support their semantic recommendation. Semantic recommendation of documents is a promising field of research because it guarantees quick and targeted access to information. The aim of our work is to guide learners and suggest resources on the basis of their learning experience. Our approach is to build a content-based semantic recommendation system: a system that returns, from a set of documents, those relevant to a learner, that is to say, the documents that are semantically similar to a document chosen by the learner. Experimental evaluations using WordNet show that our system improves the accuracy of semantic recommendation of text documents to learners.
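
A minimal sketch of one way to score word-level semantic similarity with WordNet via NLTK (the path-similarity measure and max-over-synsets strategy are my assumptions; the paper's exact measure is not stated in the abstract):

    from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet') once

    def word_similarity(word_a, word_b):
        """Best path similarity over all synset pairs of the two words."""
        scores = [s1.path_similarity(s2)
                  for s1 in wn.synsets(word_a)
                  for s2 in wn.synsets(word_b)]
        scores = [s for s in scores if s is not None]   # cross-POS pairs return None
        return max(scores, default=0.0)

    print(word_similarity("course", "lesson"))

Document-level similarity can then aggregate such word scores over the two documents' terms before ranking candidates for recommendation.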

Keywords: Recommendation system, semantic similarity, WordNet.

Full Text


Analyzing social media with InfoSphere BigSheets

Marouane Birjali1, Abderrahim Beni-Hssane1, and Mohammed Erritali2
1LAROSERI Laboratory, Department of Computer Sciences, University of Chouaib Doukkali, Faculty of Sciences, El Jadida, Morocco
2TIAD Laboratory, Department of Computer Sciences, University of Sultan Moulay Slimane, Faculty of Sciences and Technologies, Béni Mellal, Morocco

Abstract: Nowadays the term Big Data has become a buzzword in every organization due to the ever-growing generation of data in everyday life. Big data is high in volume and velocity, and also high in variety: it can be structured, semi-structured, or unstructured. Big data analytics can reveal issues hidden in data that would otherwise be too costly to process, such as users' transactions and the social and geographical data issues faced by industry. In this paper, we investigate Twitter, the largest social networking platform, whose data increases at a high rate every day and is considered big data. This data is processed and analyzed using the InfoSphere BigInsights tool, which brings the power of Hadoop to real time, and the analysis is visualized with BigSheets charts.

Keywords: Social media, Twitter data, Big Data, Hadoop, InfoSphere, BigSheets.

Full Text


Decision Support System for identification and classification of Whewellite and

Weddellite crystals in human urine using data mining algorithms

A. Ait Ider1, D. Naji1 C.Tcheka2, A. Ben Ali3, A. Merbouha1 and M. Mbarki1
1University of Sultan Moulay Slimane, Faculty of Science and Technology, P.B 523, Beni Mellal, Morocco
2University of Yaoundé 1, Faculty of Science, P.B 812, Yaounde, Cameroun
3National Control Laboratory of Medicines, Ministry of Health, Madinat Al Irfane, P.B 6206, Rabat, Morocco

Abstract: The majority of the calculi analyzed from patients are composed of calcium oxalate (CaOx) monohydrate (whewellite, Wh) and CaOx dihydrate (weddellite, Wd). The urinary calculi were identified by chemical and morphological analysis of 106 urine samples from human volunteers. Crystalluria was assessed by polarized-light optical microscopy. Apparent oxaluria and urinary calcium were determined by conventional volumetric assays. The database contains 63 men and 43 women, among whom 30 cases (28.3%) had CaOx monohydrate crystals (Wh), 27 cases (25.5%) had CaOx dihydrate crystals (Wd), and 49 cases (46.2%) were not affected by this disease (Nc). The aim of this paper is to compare the performance of different algorithms, Rnd Tree (Random Forest), C4.5, and Artificial Neural Networks (ANNs), in order to develop a simple system to predict and classify urinary calculi. The results show that the Rnd Tree algorithm is a very powerful model and was superior to the other techniques in prediction, with an average correct classification rate of 97.39%. The accuracy of the system was determined by comparing the correct classification rates of the different algorithms.
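A minimal scikit-learn sketch of the kind of comparison described, using random stand-in data in place of the real urine-sample features; sklearn's entropy-criterion tree stands in for C4.5, which is an approximation, not the authors' exact tool chain.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((106, 6))     # placeholder biochemical features
y = rng.integers(0, 3, 106)           # 3 classes: Wh, Wd, Nc

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5).mean())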

Keywords: calcium oxalate; urinary stones; data mining algorithms; Intelligent Diagnosis System.

Full Text


Enhancing RGB Color Image Encryption-Decryption Using One-Dimensional Matrix

Mohammad Rasmi1, Fadi Al-salameen2, Mohamad M. Al-Laham3, Anas Al-Fayomi4
1Department of Software Engineering, Zarqa University, Zarqa, 13132, Jordan
2Department of Computer Science, Zarqa University, Zarqa, 13132, Jordan
3MIS Department, Al-Balqa Applied University,Salt,19117, Jordan
4ISD, UNRWA, Amman, Jordan

Abstract: This research presents an effective technique for image encryption which employs the Red, Green, and Blue (RGB) components of an RGB color image. The proposed technique uses matrix multiplication and inverse matrices for encryption and decryption. The effectiveness of the technique lies in minimizing both the encryption-decryption time and the squared error between the original and the decrypted image. The technique was evaluated using many images of different sizes, and the experimental results show that the encryption time is greatly reduced compared with the "RGB Color Image Encryption-Decryption Using Gray Image" method. The proposed technique achieves a high confidentiality level by applying confusion and diffusion sequentially with a square matrix key and two vector keys. These keys are generated randomly, which makes hacking the image very difficult.
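The following Python sketch illustrates only the general matrix-multiplication idea, not the authors' exact scheme: a unimodular integer key (determinant 1) guarantees an exact integer inverse, so decryption is lossless.

import numpy as np

rng = np.random.default_rng(0)
n = 4
# Unimodular integer key K = L @ U (unit triangular factors, det = 1).
Lm = np.tril(rng.integers(-3, 4, (n, n)), -1) + np.eye(n, dtype=int)
Um = np.triu(rng.integers(-3, 4, (n, n)), 1) + np.eye(n, dtype=int)
K = Lm @ Um
K_inv = np.rint(np.linalg.inv(K)).astype(int)   # exact because det(K) = 1

# Encrypt n-pixel blocks of one flattened color channel by K, decrypt by K^-1.
channel = rng.integers(0, 256, (n, 16))          # toy data: 16 blocks of n pixels
cipher = K @ channel
recovered = K_inv @ cipher
assert np.array_equal(recovered, channel)        # lossless round trip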

Keywords: RGB color image, encryption, decryption, matrix multiplication.

Full Text


An Efficient Hybrid Feature Selection Method based on Rough Set Theory for Short Text Representation

Mohammed Bekkali, Abdelmonaime Lachkar
L.I.S.A, Dept. of Electrical and Computer Engineering, ENSA, USMBA, Fez, Morocco

Abstract: With the rapid development of the Internet and telecommunication industries, various forms of information, such as short texts, play an important role in people's daily lives. Short texts suffer from the curse of dimensionality because of their sparse and noisy nature, and feature selection is a good way to address this problem. Feature selection is the process of extracting feature subsets that are the most representative of the original feature set; it is thus an important step in improving the performance of any text mining task.

In this paper, a hybrid feature selection method is proposed to improve Arabic short text representation. It combines Rough Set Theory (RST), a mathematical tool for dealing with vagueness and uncertainty, with Latent Semantic Analysis (LSA), a theory for extracting and representing the contextual-usage meaning of words. The proposed method has been tested, evaluated, and compared within an Arabic short text categorization system in terms of the F1-measure. The experimental results show the merit of our proposal.
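The LSA half of the proposal can be sketched with scikit-learn as below; the RST reduct computation is not shown (standard RST libraries are less common), and the toy English corpus merely stands in for the Arabic short texts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Placeholder corpus; the paper's experiments use Arabic short texts.
docs = ["weather storm rain", "rain flood weather",
        "match goal team", "team player goal"]

tfidf = TfidfVectorizer().fit_transform(docs)   # sparse term-document features
lsa = TruncatedSVD(n_components=2)              # latent semantic space
Z = lsa.fit_transform(tfidf)                    # dense low-dimensional vectors
print(Z.shape)                                  # (4, 2)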

Keywords: Arabic language, short text, feature selection, Rough Set Theory, Latent Semantic Analysis.

Full Text


Handwritten Gujarati Digit Recognition using Sparse Representation Classifier

Kamal Moro, Mohammed Fakir and Belaid Bouikhalene
Department of Computer Sciences, Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: We present in this paper a framework for handwritten Gujarati digit recognition using a Sparse Representation Classifier. The classifier assumes that a test sample can be represented as a linear combination of the training samples from its own class. Hence, a test sample can be represented using a dictionary constructed from the training samples. The sparsest linear representation of the test sample in terms of this dictionary can be computed efficiently through ℓ1-minimization and exploited to classify the test sample. This is a novel approach for Gujarati optical character recognition, and it achieves an accuracy of 80.33%. This result is promising and should be investigated further.
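A minimal sketch of the sparse-representation idea, using greedy Orthogonal Matching Pursuit from scikit-learn as a stand-in for exact ℓ1-minimization; the random data are placeholders for vectorized digit images.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 100))       # dictionary: 100 training samples, 256-dim
D /= np.linalg.norm(D, axis=0)            # l2-normalize the atoms
y = rng.integers(0, 10, 100)              # digit class of each atom
test = rng.standard_normal(256)           # vectorized test digit

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(D, test)                          # sparse code x with D @ x ~ test
x = omp.coef_

# Classify by the class whose atoms give the smallest reconstruction residual.
residuals = {c: np.linalg.norm(test - D[:, y == c] @ x[y == c]) for c in range(10)}
print("predicted digit:", min(residuals, key=residuals.get))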

Keywords: Sparse Representation Classifier, Gujarati Optical Character Recognition, Handwritten character recognition, Digit recognition.

Full Text


Arabic Handwritten Script Recognition System Based on HOG and Gabor Features

Ansar Hani1, Mohamed Elleuch2, Monji Kherallah3
1 Faculty of Economics and Management of Sfax, University of Sfax, Tunisia
2 National School of Computer Science (ENSI), University of Manouba, Tunisia
3 Faculty of Sciences, University of Sfax, Tunisia

Abstract: Handwriting recognition is among the most thriving applications in the pattern recognition field; despite being quite mature, it still raises many research questions, and Arabic handwritten script remains a particular challenge. In this paper, we investigate Support Vector Machines (SVM) for Arabic handwritten script recognition. The proposed method takes handcrafted features as input and proceeds with a supervised learning algorithm. As the designed feature, the Histogram of Oriented Gradients (HOG) is used to extract feature vectors from textual images. A multi-class SVM with an RBF kernel was chosen and tested on the Arabic handwriting database IFN/ENIT. The performance of the feature extraction method is compared with Gabor filters, showing the effectiveness of the HOG descriptor. We present simulation results that demonstrate the good performance of the suggested SVM-based system.
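A minimal Python sketch of the HOG-plus-SVM pipeline using scikit-image and scikit-learn; image sizes, HOG parameters, and labels are illustrative assumptions, not the paper's settings.

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Placeholder data: 28x28 grayscale character images with class labels.
images = np.random.rand(200, 28, 28)
labels = np.random.randint(0, 10, 200)

feats = np.array([hog(img, orientations=9, pixels_per_cell=(7, 7),
                      cells_per_block=(2, 2)) for img in images])

clf = SVC(kernel="rbf", C=10, gamma="scale")   # multi-class via one-vs-one
clf.fit(feats, labels)
print(clf.predict(feats[:5]))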

Keywords: SVM, Arabic handwritten recognition, handcrafted feature, IFN/ENIT, HOG.

Full Text


A Hybrid Range-free Localization Algorithm for ZigBee Wireless Sensor Networks

Tareq Alhmiedat and Amer Abu Salem
Tabuk University, Saudi Arabia
Zarqa University, Jordan

Abstract: Target localization and tracking in Wireless Sensor Networks (WSNs) have received considerable attention recently, driven by the need to achieve reasonable localization accuracy at the minimum possible cost. A wide range of localization approaches has emerged recently; however, most existing approaches require an extra sensor, consume significant power, are unusable indoors, or suffer from high localization error. This paper presents the research and development of a hybrid range-free WSN localization system that combines the hop-count scheme with the Received Signal Strength Indicator (RSSI). The proposed system is efficient indoors, where it offers reasonable localization accuracy (0.36 meter) and achieves low power consumption. A number of real experiments have been conducted to test the efficiency of the proposed system.
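One way to picture the hybrid idea is the sketch below: RSSI is inverted through an assumed log-distance path-loss model, and the hop count caps the estimate. All constants are hypothetical, not the paper's calibration.

RSSI_D0 = -40.0   # assumed RSSI (dBm) at the reference distance d0 = 1 m
N_EXP = 2.4       # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    # Invert the log-distance model rssi = RSSI_D0 - 10*N_EXP*log10(d / 1 m).
    return 10 ** ((RSSI_D0 - rssi_dbm) / (10 * N_EXP))

def hybrid_estimate(rssi_dbm, hops, radio_range=5.0):
    # RSSI gives the fine-grained estimate; the hop count bounds it above,
    # since a node h hops away is at most h * radio_range away.
    return min(rssi_to_distance(rssi_dbm), hops * radio_range)

print(round(hybrid_estimate(-62.0, hops=2), 2))   # distance estimate in meters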

Keywords: Localization, Tracking, ZigBee, Wireless Sensor Network.

Full Text


Design and development of Practical Works

for Remote Laboratories

Abdelmoula ABOUHILAL, Mohamed MEJDAL, Abdessamad MALAOUI
Laboratoire Interdisciplinaire de Recherche en Sciences et Techniques (LIRST)
Sultan Moulay Slimane University, Morocco

Abstract: This paper presents the design and testing of two architectures of practical works for remote laboratories using low-cost embedded systems. These practical works are intended for third-year bachelor students in renewable energy and in electronics. The first architecture is based on an Arduino microcontroller controlling an irrigation system powered by photovoltaic panels. The second is based on a powerful mini PC, the PcDuino, for remotely controlling a practical work on logic integrated circuits. These architectures can be generalized to other disciplines. A simple interface was developed to allow students and instructors to access the practical works easily. This approach to low-cost remote laboratories shows benefits in Moroccan universities, especially in open-access faculties, due to the huge number of registered students and the lack of rooms and materials.

Keywords: Remote labs; embedded systems; PcDuino; remote practical works; e-learning.

Full Text


Improved Hierarchical Classifiers for Multi-Way Sentiment Analysis

Aya Nuseir1, Mohammed Al-Kabi2, Mahmoud Al-Ayyoub1, Ghasan Kanaan3 and Riyad Al-Shalabi3
1 Jordan University of Science and Technology, Irbid, Jordan
2 Information Technology Department, Al-Buraimi University College, Buraimi, Oman
3 Amman Arab University, Amman, Jordan

Abstract: Sentiment Analysis (SA) is the computational study of the sentiments expressed in text toward entities (such as news, products, services, organizations, events, etc.) using NLP tools. The conveyed sentiments can be quantified using a simple positive/negative model. A more fine-grained approach, known as Multi-Way SA (MWSA), uses a ranking system such as the 5-star scale. Since rankings close to each other can be confused in such systems, it has been suggested that Hierarchical Classifiers (HCs) can outperform traditional Flat Classifiers (FCs) for MWSA. Unlike FCs, which address the entire classification problem at once, HCs use tree structures whose nodes are simple classifiers customized to address a subset of the classification problem. This study extensively explores the use of HCs for MWSA by studying six different hierarchies. We compare these hierarchies using four well-known classifiers (SVM, Decision Tree, Naive Bayes, and KNN) and several measures, including Precision, Recall, F1, Accuracy, and Mean Square Error (MSE). The experiments are conducted on the LABR dataset, consisting of 63K book reviews in Arabic. The results show that some of the proposed HCs yield a significant improvement in accuracy. Specifically, while the best Accuracy and MSE for an FC are 45.77% and 1.61, respectively, the best Accuracy and MSE for an HC are 72.64% and 0.53, respectively. The results also show that, in general, KNN benefited the most from hierarchical classification.
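A minimal sketch of one possible two-level hierarchy for 5-star MWSA (a root separating negative/neutral/positive, with leaves refining each polar branch); this is an illustrative structure with random stand-in features, not necessarily one of the paper's six hierarchies.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 50))       # placeholder document features
y = rng.integers(1, 6, 500)              # star ratings 1..5

coarse = np.where(y <= 2, 0, np.where(y == 3, 1, 2))   # neg / neutral / pos
root = LinearSVC().fit(X, coarse)
neg = LinearSVC().fit(X[y <= 2], y[y <= 2])            # refines 1 vs 2
pos = LinearSVC().fit(X[y >= 4], y[y >= 4])            # refines 4 vs 5

def predict(x):
    x = x.reshape(1, -1)
    branch = root.predict(x)[0]
    return 3 if branch == 1 else (neg if branch == 0 else pos).predict(x)[0]

print(predict(X[0]))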

Keywords: Sentiment Analysis; Arabic Text Processing; Hierarchical Classifiers, Multi-Way Sentiment Analysis.

Full Text


Effect of technology on the lives of 9-year-old children

Nazir S. Hawi, Maya Samaha
Department of Computer Science, Notre Dame University-Louaize, Lebanon

Abstract: With the advancement of technology, researchers are increasingly concerned about its impact on individuals, organizations, and society. In particular, the side effects of screen time have been a mounting issue not only for researchers but also for parents and educators. Although there have been several attempts to understand the phenomenon, none has focused on a specific age. For this key reason, this study was conducted as a head-start investigation in the Arab world. The target population was the cohort of 9-year-old children enrolled in private schools. The study sought to determine whether screen time is a predictor of participants' academic performance, physical activity time, and other behaviors. Exactly 1,175 families participated by completing a paper questionnaire, offering information about their lived experiences with the technology that is taking hold of their children's minds and possibly shaping their lives. Unexpectedly, the results revealed that screen time variables contributed a substantively small, but statistically significant, amount of explained variance to physical activity, academic performance, aggressive behavior, and sleep deprivation. Thus, there is a strong empirical basis for concluding that where side effects of screen time occur, they are not detrimental to children's lives in particular or to culture in general. The computer science community should be aware of this reality and push forward the design of technologies that enhance children's overall wellbeing, toward building better societies.

Keywords: Technology, screen time, academic performance, physical activity, social behavior, children, Arab world.

Full Text


Euclidean & Geodesic Distance between Facial Feature Points in

Two-Dimensional Face Recognition System

Rachid AHDID1, Khaddouj TAIFI1, Said SAFI1 and Bouzid MANAUT2
1Department of Mathematics and Informatics, Sultan Moulay Slimane University, Beni Mellal, Morocco
2Department of Physics, Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: In this paper, we present two feature extraction methods for two-dimensional face recognition. Our approaches are based on detecting facial feature points and then computing the Euclidean distance between all pairs of these points in the first method (ED-FFP), and the geodesic distance in the second (GD-FFP). These measures are employed as inputs to commonly used classification techniques such as Neural Networks (NN), k-Nearest Neighbors (KNN), and Support Vector Machines (SVM). To test the present methods and evaluate their performance, a series of experiments was performed on two-dimensional face image databases (ORL and Yale). The experimental results also indicate that feature extraction is computationally more efficient using the geodesic distance than the Euclidean distance.
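The ED-FFP feature construction can be sketched in a few lines; the geodesic variant (GD-FFP) would instead require shortest paths over the face surface or a landmark graph and is not shown here.

import numpy as np
from scipy.spatial.distance import pdist

# Placeholder: 15 detected facial landmarks as (x, y) pixel coordinates.
landmarks = np.random.rand(15, 2) * 100

# ED-FFP feature vector: the Euclidean distance between every pair of points,
# i.e. 15*14/2 = 105 values fed to NN / KNN / SVM classifiers.
features = pdist(landmarks, metric="euclidean")
print(features.shape)   # (105,)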

Keywords: Face recognition, landmarks, Euclidean Distance, Geodesic Distance, Neural Networks, k-Nearest Neighbor and Support Vector Machines.

Full Text


Optimized MIH design for Handover over Next Generation Wireless Networks

Amina Gharsallah1, Faouzi Zarai2,and Mahmoud Neji1
1MIRACL laboratory, University of Sfax, Tunisia
2LETI laboratory, University of Sfax, Tunisia

Abstract: Next Generation Wireless Networks (NGWNs) are expected to be heterogeneous networks that integrate different Radio Access Technologies (RATs), such as 3GPP's Long Term Evolution (LTE) and the IEEE 802.11 wireless local area network (WLAN). The integration of heterogeneous wireless networks poses several challenges, one of the major ones being vertical handover management. A vertical handover occurs when a Mobile Node (MN) decides to switch between networks of different technologies. In this paper, we present a new scheme for vertical handover management in NGWNs. We adopt the Media Independent Handover (MIH) architecture, proposed by the IEEE 802.21 working group, to provide the information required by handover and mobility management entities. The performance analysis shows that the proposed approach uses network resources efficiently by switching between LTE and WLAN to offer the best connectivity to users. The resource utilization ratio increases to 99%, and integrating LTE with WLAN using the proposed algorithm reduces both the call dropping probability (less than 0.15 for Voice over IP service when switching to the LTE network) and the call blocking probability for data service (less than 0.24).

Keywords: Next Generation Wireless Networks, Vertical handover, Handover decision, Media Independent Handover (MIH).

Full Text


Comparative Study of Blind Channel Equalization

Said Elkassimi1, Said Safi1, Bouzid Manaut2
1Department of Mathematics and Informatics, Polydisciplinary Faculty,
Sultan Moulay Slimane University, Beni Mellal, Morocco
2Laboratoire Interdisciplinaire de Recherche en Science et Technique (LIRST),
Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: In this paper we present two algorithms for blind channel equalization. These algorithms are compared with adaptive filter algorithms such as the Constant Modulus Algorithm (CMA), Fractional Space CMA (FSCMA), and the Sign Kurtosis Maximization Adaptive Algorithm (SKMAA). Simulation results in a noisy environment and for different numbers of symbols show that the presented algorithms give good results compared to the CMA, FSCMA, and SKMAA algorithms. The channel equalization is performed using the ZF and MMSE algorithms.
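For reference, a minimal NumPy sketch of the baseline CMA equalizer on a 4-QAM signal; the multipath channel, step size, and filter length are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(1)
N = 5000
s = (2*rng.integers(0, 2, N) - 1) + 1j*(2*rng.integers(0, 2, N) - 1)  # 4-QAM symbols
h = np.array([1.0, 0.4 + 0.3j, 0.2])                    # assumed multipath channel
x = np.convolve(s, h)[:N]
x += 0.01 * (rng.standard_normal(N) + 1j*rng.standard_normal(N))      # channel noise

L, mu = 11, 5e-4
R = np.mean(np.abs(s)**4) / np.mean(np.abs(s)**2)       # CMA dispersion constant (= 2)
w = np.zeros(L, complex); w[L // 2] = 1.0               # center-spike initialization
for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]                        # regressor, newest sample first
    y = np.vdot(w, u)                                   # equalizer output y = w^H u
    w -= mu * np.conj((np.abs(y)**2 - R) * y) * u       # stochastic-gradient CMA update
print("dominant tap:", np.round(w[np.argmax(np.abs(w))], 3))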

Keywords: Blind equalization algorithms, adaptive filter, CMA, FSCMA, SKMAA, 4-QAM.

Full Text


On Spatially Coherent Noise Field in Narrowband Direction Finding

Youssef Khmou, Said Safi
Department of Mathematics and Informatics, Sultan Moulay Slimane University, Morocco

Abstract: When studying the radiation coming from far-field sources using an array of sensors, besides the internal thermal noise, the received wave field is always perturbed by an external noise field. This noise can be coherent to some degree in both time and space, temporally incoherent but spatially coherent, spatially incoherent but temporally correlated, or incoherent in both domains. Treating the received data therefore requires considering the nature of the perturbing field in order to make accurate measurements, such as the powers of point sources, their locations, and the types of waveforms, which can be deterministic or random. In this paper, we study temporally white, spatially coherent noise fields and propose a new spatial coherence function based on the Lorentz function. After briefly describing some existing models, we numerically study the effect of the spatial coherence length on resolving the angular locations of closely spaced radiating sources using spectral techniques, which divide into beamforming and subspace-based methods; the study is made in comparison with temporally and spatially white noise of the same power as the proposed model, so as to draw precise conclusions. Finally, we discuss the possibility of extending the spatially coherent noise field to two-dimensional geometries such as circular arrays.
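A sketch of how such a noise field can be simulated, assuming the Lorentzian (Cauchy-type) form 1/(1 + (d/L)^2) for the coherence function; the paper's exact parameterization may differ.

import numpy as np

M, Lc, T = 8, 2.0, 400        # sensors, assumed coherence length, snapshots
d = np.abs(np.arange(M)[:, None] - np.arange(M)[None, :])   # inter-sensor spacing
C = 1.0 / (1.0 + (d / Lc) ** 2)        # assumed Lorentzian coherence matrix
A = np.linalg.cholesky(C)              # C = A A^H (Cauchy kernel is positive definite)
W = (np.random.randn(M, T) + 1j * np.random.randn(M, T)) / np.sqrt(2)
noise = A @ W                          # temporally white, spatially coherent snapshots
print(np.round((noise @ noise.conj().T / T).real[0, :4], 2))  # ~ first row of C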

Keywords: Spatial coherence function, narrowband, direction of arrival, Lorentz function, coherence length, white noise field.

Full Text


Open e-Test: Remote Evaluation of a Large Population of Students

at University of Oran 1 Ahmed Ben Bella (Algeria)

A.Bengueddach1, C.Boudia1, H.Haffaf1, and B.Beldjilali1
1Department of Computer Science, University of Oran 1 Ahmed Ben Bella, El-M'Naouer, Algeria.

Abstract: In order to ensure quality programs and courses in the learning process, the evaluation of student learning has been proposed as the best way forward. Evaluation provides important information that helps university administrators and instructors estimate the effectiveness of teaching and the quality of the learning taking place. However, evaluation is hard when the class size is very large and the existing system is paper-based: it is time-consuming, monotonous, inflexible, and imposes a very hectic working schedule. In this work we examine issues related to remote evaluation in higher education at the University of Oran 1 Ahmed Ben Bella (Algeria), focusing our study on second-year students in computer science. The main contribution of this paper is a quick and immediate tool that helps evaluate large numbers of students and meets the requirements of both teachers and students. This tool is a web-based online assessment system; it allows teachers to create tests and answer keys, and it reduces paperwork while promoting student learning by checking students' answers and computing their scores.

Keywords: Higher education, innovative teaching practice, e-Assessment, students' evaluation learning, remote evaluation, online assessment system.

Full Text


Vehicular Ad-hoc Network application for Urban Traffic Management based on Markov Chains

Ahmed Adart1, Hicham Mouncif2, Mohamed Naimi1
1Flows and Transfers Modeling Laboratory (LAMET), Faculty of Sciences and Technologies
2Mathematics and Computer Sciences Department, Polydisciplinary Faculty
Sultan Moulay Slimane University, Beni-Mellal
E-mails: {a.adart, h.mouncif}@usms.ma

Abstract: Urban traffic management problems have taken an important place in most transportation research fields, hence the emergence of the vehicular ad-hoc network (VANET) as an essential part of the intelligent transportation system (ITS); it serves to improve and facilitate traffic management and control, and ultimately the overall driving experience. Indeed, the concept of the smart city, or city of the future, has become a new paradigm for urban planning and management; a smart city is considered a complex system made up of services, citizens, and resources. The ITS concept, in turn, is implemented to deal with problems such as traffic congestion, energy consumption, and the property damage and human losses caused by transport accidents. In this paper we propose an approach for urban traffic management in smart cities based on Markov chains, using VANET technology units to optimize traffic flow while monitoring vehicles in an urban area in real time from their starting points to their destinations.
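As a toy illustration of the Markov-chain machinery, the sketch below uses a hypothetical transition matrix for one road segment's congestion state; the states, probabilities, and forecast horizon are assumptions, not the paper's model.

import numpy as np

# Hypothetical 3-state road-segment model: free, dense, congested.
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.35, 0.60]])          # row-stochastic transition matrix

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print(dict(zip(["free", "dense", "congested"], np.round(pi, 3))))

# 4-step forecast when the segment is currently "dense".
state = np.array([0.0, 1.0, 0.0])
print(np.round(state @ np.linalg.matrix_power(P, 4), 3))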

Keywords: VANET, Smart City, Intelligent Transportation System, Markov Decision Process, Markov Chains.

Full Text


An Efficient Method based on Deep Learning Approach for Arabic Text Categorization

Fatima-Zahra El-Alami1, Said Ouatik El Alaoui1
1Laboratoire Informatique et Modélisation, FSDM, USMBA, Morocco

Abstract: In this paper we propose an efficient method based on deep learning for Arabic Text Categorization (ATC), which is considered a challenging task. We explore a deep learning approach to enhance text representation, using a deep stacked autoencoder to produce a high-level abstract representation. We take advantage of the short reproduced codes, which enable us to capture implicit semantics; in addition, this representation reduces the dimensionality of the representation space. We conducted several experiments on the CNN Arabic news dataset, using two types of stemmers: the Light and Khoja stemmers. Three machine learning techniques are explored: the Decision Tree (DT), Naïve Bayes (NB), and the Support Vector Machine (SVM). The results show that the proposed method achieves good performance in an Arabic text categorization system.
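A minimal Keras sketch of a stacked autoencoder whose bottleneck code serves as the compact document representation later fed to DT/NB/SVM; the layer sizes and random stand-in data are assumptions, not the paper's architecture.

import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(1000, 2000).astype("float32")   # stand-in doc-term matrix

inp = layers.Input(shape=(2000,))
h = layers.Dense(256, activation="relu")(inp)
code = layers.Dense(32, activation="relu", name="code")(h)   # compact code
h2 = layers.Dense(256, activation="relu")(code)
out = layers.Dense(2000, activation="sigmoid")(h2)

auto = models.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X, X, epochs=5, batch_size=64, verbose=0)           # reconstruct the input

encoder = models.Model(inp, auto.get_layer("code").output)
X_code = encoder.predict(X, verbose=0)    # 32-d features for DT / NB / SVM
print(X_code.shape)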

Keywords: Arabic text categorization, deep learning, deep autoencoder, RBM, implicit semantic.

Full Text


Fusion of Singular Value Decomposition (SVD) and DCT-PCA for Face Recognition

El Mahdi BARRAH1, Said SAFI1, and Abdessamad MALAOUI1
1Interdisciplinary Laboratory of Research in Sciences and Technologies (LIRST),
Sultan Moulay Slimane University, Béni Mellal, Morocco

Abstract: In this paper, we propose the fusion of two methods, namely principal component analysis (PCA) in the DCT domain and singular value decomposition (SVD). Experimental results performed on the standard ORL database show that the proposed approach achieves advantages in terms of both identification rate and processing time.

Keywords: Principal component analysis (PCA), Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD), ORL database.

Full Text


Research and Reviews in Arabic Question Answering: principal approaches and systems with classification

Wided Bakari1, Patrice Bellot2, and Mahmoud Neji1
1Faculty of Economics and Management, 3018 Sfax, Tunisia; MIR@CL, Sfax, Tunisia
2Aix-Marseille University, F-13397, Marseille Cedex 20, LSIS, Marseille, France

Abstract: As we live in a world of knowledge, technology is advancing at an ever-increasing pace. Our thirst for knowledge drives us to pose questions to various search engines. A question-answering system is a similar kind of engine: it retrieves valid and accurate answers to a user question asked in natural language rather than as a query. Since the Internet includes websites in different languages, the need for such systems is especially high in the Arabic context. The community of Arabic language users is still obliged to search manually for precise answers to their questions, a tedious task given the great amount of available Web information. Indeed, Arabic has a complex structure, which makes natural language processing (NLP) difficult to apply. Much research on Arabic NLP does exist, but it is not as mature as that for other languages. This paper provides a comprehensive and comparative overview of Arabic question-answering technology. Specifically, it introduces the principal existing approaches and discusses the different proposed systems with a classification.

Keywords: Arabic language, question-answering, approaches, literature review, classification.

Full Text


Decision Making System for Scientific Research using Data Mining Algorithm

Mohamed Elmohadab1, Belaid Bouikhalene1 and Said Safi1
1Department of Mathematics and Computer Science, Sultan Moulay Slimane University, Morocco

Abstract: Recently, the governance of information systems through the use of decision aids has posed a serious challenge for the leadership of universities. In this paper, we present a decision aid system for scientific research based on a set of decision aid algorithms: Naive Bayes, Decision Tree, and OneR. As an illustration, we treat the case of a public university.

Keywords: Scientific Research, data mining, decision aid, business intelligence, Odoo.

Full Text


Roads Extraction and Mapping from Aerial and Satellite Images

Abdelkrim Maarir1,2, Belaid Bouikhalene1,2 and Yassine Chajri1
1Laboratory of Information Processing and Decision Support, Department of Computer Science,
Sultan Moulay Slimane University, Beni Mellal, Morocco
2Laboratory LIRST, Department of Mathematics and Computer Science, Polydisciplinary Faculty,
Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: Automatic detection of man-made objects from aerial and satellite images is a very important research field for understanding changes in our environment, and it provides an important source of information for many applications such as infrastructure, map generation, traffic planning, and cartography. This study describes and evaluates an automated method for road extraction and mapping. Road extraction follows these steps: the first step is pre-processing the images with bilateral filtering to reduce noise, increase the contrast along contours, and improve the quality of the initial image; the second step applies Statistical Region Merging to segment the image into homogeneous regions; in the third step, roads are detected using a modified active contour with an adaptive initial contour to localize road regions, followed by edge detection and linking. For road centerline extraction and mapping, skeleton centerlines of the roads are computed using the fast-marching distance transform. The proposed method was tested on several high-resolution images, and the experimental results show that it can extract and map roads with an accuracy of 93.36%.
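As a simplified stand-in for the fast-marching centerline step (not the authors' exact implementation), morphological skeletonization of a binary road mask yields the one-pixel-wide centerline:

import numpy as np
from skimage.morphology import skeletonize

# Placeholder binary road mask, as produced by the segmentation stage.
mask = np.zeros((60, 60), dtype=bool)
mask[28:33, :] = True             # a horizontal road, 5 px wide

centerline = skeletonize(mask)    # 1-px-wide road centerline
print(np.count_nonzero(centerline), "centerline pixels")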

Keywords: Roads extraction, statistical region merging, active contour, bilateral filtering, fast marching distance transform, aerial and satellite images.

Full Text


Energy Enhancement of Ad-hoc On-demand Distance Vector Routing Protocol for Maximum Lifetime in MANET

Mohamed ER-ROUIDI1, Houda MOUDNI1, Hassan FAOUZI1, Hicham MOUNCIF1, Abdelkrim MERBOUHA2
1Department of Computer Science, Sultan Moulay Slimane University, Beni Mellal, Morocco
2Department of Mathematics, Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: Nowadays, the Mobile Ad hoc NETwork (MANET) is used in more and more domains. Nevertheless, it still suffers from various types of restrictions, the biggest of which is energy consumption. In establishing routes, the classical routing protocols proposed by the Internet Engineering Task Force (IETF) search for the shortest path in terms of the number of hops and do not take into consideration the energy level or the lifetime of the intermediate nodes. In this paper, we propose a solution called Enhanced Energy-AODV (EE-AODV), an enhancement of the Ad-hoc On-demand Distance Vector (AODV) routing protocol. In our proposed protocol, we add energy consumption to the route selection criteria of AODV in order to obtain satisfactory connectivity and network lifetime. Simulation results show that EE-AODV outperforms AODV by significantly reducing energy dissipation, and also improves parameters affected by the energy issue, such as the Packet Delivery Ratio (PDR) and Normalized Routing Load (NRL).

Keywords: Ad-hoc, MANET, Energy, AODV, Routing protocol.

Full Text


Anomaly Traffic Detection Based on GPLVM and SVM

Houda Moudni1, Mohamed Er-rouidi1, Hassan Faouzi1, Hicham Mouncif2, and Benachir El Hadadi2
1Faculty of Sciences and Technology, Sultan Moulay Slimane University, Beni Mellal, Morocco
2Polydisciplinary Faculty, Sultan Moulay Slimane University, Beni Mellal, Morocco

Abstract: An Intrusion Detection System (IDS) is a security management technique designed to automatically detect possible intrusions in a network or a host. However, the performance of existing IDSs is unsatisfactory, since novel kinds of attacks appear constantly. Hence, it is necessary to develop an intrusion detection system with improved accuracy, a high detection rate, and reduced training time. We thus propose a new adaptive intrusion detection method based on the Gaussian Process Latent Variable Model (GPLVM) and Support Vector Machines (SVMs). In this paper, the GPLVM is applied to reduce the feature dimensionality of network connection records. After the dimensionality reduction step, samples are classified using the SVM algorithm to distinguish normal traffic from intrusions. Experimental results on the KDD Cup'99 dataset illustrate the effectiveness of the proposed method. A comparative analysis of our IDS against an intrusion detection system based on GPLVM and an Artificial Neural Network (ANN) is also performed.

Keywords: IDS, network security, GPLVM, SVM, KDD Cup 99 dataset.

Full Text


Managing reuse across MPLs through Partial Derivation

Guendouz Amina and Bennouar Djamal
CS Department, Saad Dahlab University, Blida, Algeria
LIMPAF Lab, Bouira University, Bouira, Algeria

Abstract: Software Product Lines (SPLs) provide systematic reuse only within a particular field, and in some fields a single SPL is no longer sufficient to fulfill requirements because of the large amount of variability involved. A set of separate but still interdependent SPLs, commonly known as a Multiple Product Line (MPL), is then built to handle this issue. However, reuse between those SPLs must be managed in order to preserve the information they have in common. In this paper, we propose an approach to systematize reuse across multiple SPLs. Our approach relies on the partial derivation and integration of interdependent SPLs at early development stages, thus avoiding the inter-SPL reuse challenges encountered during the derivation step.

Keywords: MPLs, partial derivation, SPLs integration, feature models.

Full Text


Automatic detection and segmentation of brain tumor based on an adaptive mean shift over Riemannian manifolds

Mohamed Gouskir, Mohammed Boutalline, Benachir Elhadadi
Mohammed Amine Zyad, Belaid Bouikhalene
Sultan Moulay Slimane University, Morocco

Abstract: In this paper, we propose a fully automated approach, based on the mean shift algorithm over Riemannian manifolds, for brain tumor detection and segmentation in magnetic resonance images (MRI). The approach is based on the geometric median and the geodesic distance. We propose the median shift to overcome the limitation that the mean is not necessarily a point of the set. Compared to the Euclidean distance, the geodesic distance better describes data points distributed on a manifold and produces efficient results for image segmentation. Coupled with the k-means algorithm, the proposed framework can cluster the brain image into three regions (gray matter, white matter, and cerebrospinal fluid). We applied this approach to brain tissue clustering and brain tumor segmentation, and validated it on synthetic MRI.
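The geometric median mentioned above can be computed with the classic Weiszfeld iteration; here is a minimal NumPy sketch of the Euclidean case (the Riemannian version would replace straight-line distances with geodesic ones).

import numpy as np

def geometric_median(pts, iters=100, eps=1e-7):
    # Weiszfeld iteration: the point minimizing the sum of Euclidean distances.
    m = pts.mean(axis=0)                       # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - m, axis=1)
        d = np.where(d < eps, eps, d)          # guard against division by zero
        w = 1.0 / d
        m_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

pts = np.array([[0, 0], [1, 0], [0, 1], [10, 10]], float)
print(geometric_median(pts))    # robust to the outlier, unlike the mean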

Keywords: Mean shift, Geometric median, Riemannian manifolds, brain image segmentation, and Geodesic distance.

Full Text


Positive definite kernels for identification and equalization of Indoor Broadband Radio Access Network

Mohammed Boutalline1, Mohamed Gouskir2, Jilali Antari3, Belaid Bouikhalene1, Said Safi1
1Laboratory for Interdisciplinary Research in Science and Techniques, Polydisciplinary Faculty,
Sultan Moulay Slimane University, Beni Mellal, Morocco
2Laboratory of Sustainable Development, Faculty of Science and Techniques,
Sultan Moulay Slimane University, Beni Mellal, Morocco
3Polydisciplinary Faculty of Taroudant, Ibn Zohr University, 32/S Agadir 80000, Morocco

Abstract: We consider a transmission system in which the transmitted symbols are the subject of inquiry. Kernel-based algorithms are of great importance for many problems. Channel identification and equalization are performed by a proposed algorithm based on a positive definite kernel method for the multi-carrier code division multiple access (MC-CDMA) system. Two practical frequency-selective fading channels are considered, namely the broadband radio access network channels BRAN A and BRAN B standardized by ETSI. To design the proposed algorithm, we focused on positive definite kernels. Numerical simulations show that the algorithm achieves good performance for different Signal-to-Noise Ratios (SNR). We use zero forcing (ZF) and minimum mean square error (MMSE) equalizers for equalizing the MC-CDMA system.

Keywords: Positive kernel method, identification, equalization, MC-CDMA systems, BRAN channel.

Full Text


The role of e-learning in Algerian universities

in the development of a knowledge society

Pr BOUKELIF Aoued
ICT's Research Team
Communication Networks, Architectures
and Multimedia laboratory
University of S.B.A

Abstract: Many factors and actors have to be taken into account in building Arab knowledge societies: government, the private sector, and information, professional, and education institutions. In this paper, the focus is on the role of e-learning in Arab universities in the development of a knowledge society, illustrated by a case study of Algerian universities. One of the basic requirements of education in the 21st century is to prepare populations for participation in a knowledge-based economy, including its social and cultural perspectives. The times in which we live are a new era: the era of information development, or the era of knowledge as it is called, which paved the way for the emergence of a new global community called the "knowledge society". This represents a challenge to education systems in various international communities, causing a significant change in the role of educational institutions, especially after the advent of the Internet in teaching and learning in developed countries and the emergence of so-called web-based learning environments. E-learning is a cornerstone for building inclusive knowledge societies that grasp the opportunities offered by ICT by placing the individual at their center. According to a draft Arab research strategy, the Arab world needs to focus in the coming years on higher education quality and entrepreneurship education, to bridge the gap between education supply and labor market demand, and to tackle graduate unemployment, which was a factor driving the 2011 uprisings.

Full Text


Simulation of covering location models for

emergency vehicles management: A comparison study

Sarah Ibri and Djahida Ali Fedila
Computer Science Department, Chlef University, Chlef, Algeria

Abstract: The efficiency of emergency management services (EMS) consists in responding as quickly as possible to arriving emergency calls. This objective may be attained by ensuring good preparedness for the population, i.e., good coverage of the geographical zones under control. Since a well-covered zone means that vehicles are available close by to answer any call arriving from that zone, reducing the response time to emergencies comes down to choosing the best positions for the available vehicles. In this work, we use simulation to compare the long-run performance of different coverage models for locating emergency vehicles. The comparison takes into account both the response time to emergency calls and the preparedness of the zones. The results show that using a binary weighted coverage measure gives better-quality solutions for both response time and coverage.

Keywords: decision support system, simulation, optimization, crisis response management, linear programming.

Full Text


Query Length and its Impact on Arabic Information Retrieval Performance

Suleiman Mustafa and Reham Bany-Younis
Yarmouk University, Jordan

Abstract: This paper reports the results of an investigation into the impact of query length on Arabic retrieval performance. Thirty queries were used, each phrased at three different lengths (short, medium, and long), giving ninety different queries. A corpus of one thousand documents on herbal medication was used, and expert judgments determined each document's relevance to each query. The main finding of this research is that using shorter queries improves both precision and recall. Given the absence of other results to compare with and the lack of agreement on how length affects retrieval, we conclude that the results should be viewed in light of the type of dataset used and how the queries were formulated and categorized.

Keywords: Arabic Information Retrieval, Query Length, Retrieval Performance.

Full Text

 

