ACIT'2015 Proceedings

Smart Sump-Pump Using Fuzzy Logic Controller

Adnan Shaout and Husein Dakroub
ECE Department, University of Michigan

Abstract: In the USA, a home can be the greatest asset a citizen owns. Homes can be very costly to maintain, and damage can mean a hefty payment for homeowners. The Midwest region of the United States can have excessive rainfalls that exceed one inch per hour for many hours at a time. The public drainage system is outdated in many parts of the Midwest region, which can lead to horrific flooding of home basements. To prevent water from seeping into basements, the sump pump was invented in 1946 to pump water out of the basement. These pumps can be very costly, consume a great amount of energy, and work very inefficiently. This paper presents a fuzzy controller modeled in Matlab that would enhance the efficiency and structure of a sump pump system. The paper depicts different defuzzification methods and showcases the efficiency of this fuzzy system compared to a non-fuzzy system.

Keywords: Flood, sump pump, fuzzy logic controller, efficiency, matlab.

Full Text


Fingerprint Image Quality Analysis Using Fuzzy Logic Technique

Abdelwahed Motwakel1 and Adnan Shaout2
1College of Post Graduate Studies, Sudan University of Science and Technology, Sudan
2Electrical and Computer Engineering Department, The University of Michigan-Dearborn

Abstract: The quality of a fingerprint image greatly affects the performance of minutiae extraction and the matching process in a fingerprint identification system, so much research has been devoted to analysing fingerprint image quality. In this paper, we present a novel technique for analysing fingerprint image quality using fuzzy logic on four extracted features: the local clarity score (LCS), global clarity score (GCS), ridge-valley thickness ratio (RVTR), and contrast. The proposed fuzzy logic system uses the Mamdani fuzzy rule model, which can analyse and determine the type of each fingerprint image (oily, dry, or neutral) based on the extracted feature values and fuzzy inference rules.
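
The abstract does not give the membership functions or rule base; as a rough illustration of how a Mamdani-style classification over the four features might be sketched, the Python snippet below uses invented membership functions and rules (the thresholds and rules are assumptions, not the authors' design).

```python
# Illustrative Mamdani-style fuzzy classifier for fingerprint image quality.
# The four features (LCS, GCS, RVTR, contrast) follow the abstract, but the
# membership functions and rules below are hypothetical, for illustration only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Map a normalised feature value in [0, 1] to low/medium/high degrees."""
    return {
        "low": tri(value, -0.5, 0.0, 0.5),
        "medium": tri(value, 0.0, 0.5, 1.0),
        "high": tri(value, 0.5, 1.0, 1.5),
    }

def classify(lcs, gcs, rvtr, contrast):
    """Fire a few Mamdani-style rules (min for AND, max over classes)."""
    f = {name: fuzzify(v) for name, v in
         [("lcs", lcs), ("gcs", gcs), ("rvtr", rvtr), ("contrast", contrast)]}
    rules = {
        # Hypothetical rules: thick ridges with low contrast -> oily, etc.
        "oily": min(f["rvtr"]["high"], f["contrast"]["low"]),
        "dry": min(f["rvtr"]["low"], f["contrast"]["high"]),
        "neutral": min(f["lcs"]["high"], f["gcs"]["high"],
                       f["contrast"]["medium"]),
    }
    return max(rules, key=rules.get), rules

if __name__ == "__main__":
    label, degrees = classify(lcs=0.7, gcs=0.65, rvtr=0.8, contrast=0.2)
    print(label, degrees)   # e.g. ('oily', {...})
```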

Keywords: Fingerprint image quality, local clarity score, global clarity score, ridge-valley thickness ratio, contrast, fuzzy inference system.

Full Text


Performance Comparison of Neuro-Fuzzy Cloud Intrusion Detection Systems

Sivakami Raja1 and Saravanan Ramaiah2
1Department of Information Technology, PSNA College of Engineering and Technology, India
2Department of Computer Science and Engineering, RVS Educational Trust's Group of Institutions, India

Abstract: Cloud computing is a subscription-based service through which we can obtain networked storage space and computer resources. Since access to the cloud is through the internet, data stored in clouds are vulnerable to attacks from external as well as internal intruders. In order to preserve the privacy of data in the cloud, several intrusion detection techniques, authentication methods, and access control policies are being used. Common intrusion detection systems are largely unsuited to deployment in cloud environments because of the openness and specific nature of such environments. In this paper, we compare soft computing approaches based on type-1, type-2, and interval type-2 fuzzy-neural systems for detecting intrusions in a cloud environment. Using standard benchmark data from the Cloud Intrusion Detection Dataset (CIDD), derived from the DARPA Intrusion Detection Evaluation of the MIT Lincoln Laboratory, experiments are conducted and the results are presented in terms of mean square error.

Keywords: Fuzzy neural networks, Hybrid intelligent systems, Intrusion detection, Partitioning algorithms, Pattern analysis.

Full Text


A Novel Approach for Optimization of Feature Selection

Duha Al-Darras, Suhail Odeh, and Henry Chaya
Department of Computer Information Systems, Bethlehem University, Palestine

Abstract: The accuracy of many classification problems is crucial. The number of features in collected data is increasing, and finding the best features to use to increase classification accuracy is a necessity. There are several methods of feature selection, but none of them gives the absolute best solution, and most of them fall into the trap of local optima. This paper presents a new method that searches for the absolute best solution, or a solution that gives a higher classification accuracy rate, using a novel approach that divides the features into two groups. The method then finds the combination from the two groups that gives the maximum accuracy rate. The purpose of this method is to select and find the best features, individually or in groups.
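
The abstract does not specify the grouping rule or the search procedure; the sketch below only illustrates the general idea of splitting the features into two groups and scoring group combinations with a 1-nearest-neighbour classifier (scikit-learn), on a stand-in dataset. The even split and the exhaustive pairing are assumptions.

```python
# Illustrative sketch: split features into two groups and score combinations
# with a 1-NN classifier. The even split and exhaustive pairing are
# assumptions for illustration, not the authors' exact procedure.
from itertools import combinations

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)        # stand-in dataset
n_features = X.shape[1]
group_a = list(range(n_features // 2))                # first group of features
group_b = list(range(n_features // 2, n_features))    # second group of features

def score(feature_idx):
    """Mean cross-validated accuracy of 1-NN on the selected columns."""
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, X[:, feature_idx], y, cv=5).mean()

best = max(
    ((a + b, score(list(a + b)))
     for r_a in range(1, len(group_a) + 1)
     for r_b in range(1, len(group_b) + 1)
     for a in combinations(group_a, r_a)
     for b in combinations(group_b, r_b)),
    key=lambda t: t[1],
)
print("best feature combination:", best[0], "accuracy:", round(best[1], 3))
```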

Keywords: Feature selection, machine learning, 1-knn, optimization, genetic algorithm.

Full Text


Alternatives Organizer in Group Decision Making: An Ontology Approach

Bakhta Nachet, Abdelkader Adla, and Mohammed Frendi
Department of Computer Science, University of Oran 1 Ahmed Ben Bella

Abstract: Group Decision Support Systems (GDSS) provide a means by which a large number of organizational decision makers, who may be in different locations, can efficiently and effectively participate in the group decision-making process. In this process, the alternatives amongst which a decision must be made can range from a few to a few thousand. The facilitator (or the decision makers) needs to narrow the possibilities down to a reasonable number and to categorize and classify the alternatives, especially where the alternatives can be put into numerical terms. Even when this is not the case, facilitation support such as ontology-based frameworks potentially offers these capabilities and can assist the decision maker in presenting the alternatives in a form that facilitates the decision. Because of the problems, frustrations, and great amount of time involved in organizing alternatives, we introduce an ontology-based approach and a software tool that supports the facilitator in addressing the cognitive load associated with the alternatives-organizing stage by synthesizing and organizing group alternatives. The resulting alternatives-organizing tool is based on ontologies built using the Web Ontology Language (OWL), which facilitates the sharing and integration of decision-making information between multiple decision makers.

Keywords: GDSS, ontology, concept categorization, group facilitation, owl.

Full Text


Optimization of the Position-Finding Step of the PCM-oMaRS Algorithm with Statistical Information

Ammar Suhail Balouch
Department of Computer Science, University of Rostock, Germany

Abstract: The PCM-oMaRS algorithm guarantees a maximal reduction of the steps needed to compute the exact median in distributed datasets and has shown that the exact median can be computed effectively, with reduced blocking time and without recursive or iterative methods. The algorithm provides more efficient execution not only on distributed datasets but also on local datasets with enormous amounts of data. The steps of the PCM-oMaRS algorithm cannot be reduced any further, but one of its steps can be optimized. The most important step of the algorithm is the one in which the position of the exact median is determined, and for this step we have developed a strategy that determines the position of the exact median more efficiently. Our aim in this paper is to maximize the best cases of the algorithm; this is achieved by splitting the count of all values smaller than or equal to the temporary median into two groups. The first group contains only the values smaller than the temporary median, and the second group contains the values equal to it. With this division we obtain additional best cases of the PCM-oMaRS algorithm and reduce the number of values required to compute the exact median. The complexity cost of the algorithm is discussed further in this article, together with statistical information based on our implementation tests.
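
The paper's full procedure is not reproduced in this abstract; the fragment below is only a minimal sketch of the counting idea, splitting the "smaller than or equal to" count around a temporary median into strictly-smaller and equal counts across distributed partitions. The partitions and the temporary median are illustrative assumptions.

```python
# Minimal sketch of the counting idea only: for a temporary median, count per
# partition how many values are strictly smaller than it and how many are equal
# to it, then locate the position of the exact median from the global counts.
# The partitions and the choice of temporary median are illustrative assumptions.

partitions = [
    [7, 1, 5, 5, 9],       # data held at site 1
    [5, 2, 8, 5],          # data held at site 2
    [3, 5, 10, 6, 4],      # data held at site 3
]
temp_median = 5            # temporary (candidate) median

smaller = sum(sum(1 for v in part if v < temp_median) for part in partitions)
equal = sum(sum(1 for v in part if v == temp_median) for part in partitions)

n = sum(len(part) for part in partitions)
target = (n + 1) // 2      # 1-based position of the (lower) median

if smaller < target <= smaller + equal:
    print("temporary median is the exact median:", temp_median)
elif smaller >= target:
    print("exact median lies among values smaller than", temp_median)
else:
    print("exact median lies among values larger than", temp_median)
```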

Keywords: Median, parallel computation, algorithm, optimization, big data, evaluation, analysis, complexity costs.

Full Text


A Rule-based English to Arabic Machine Translation Approach

Ahmad Farhat and Ahmad Al-Taani
Department of Computer Science, Yarmouk University, Irbid, Jordan

Abstract: In this study, we propose a rule-based English-to-Arabic machine translation system for translating simple English declarative sentences into well-structured Arabic sentences. The proposed system translates sentences containing gerunds, infinitives, prepositions, and direct and indirect objects. The system is implemented using a bilingual dictionary designed in SQL Server. A major goal of this system is to serve as a stand-alone tool that can also be integrated with general English-Arabic machine translation systems. The proposed system is evaluated using 70 different simple English declarative sentences written by English language experts. Experimental results showed the effectiveness of the proposed MT system in translating simple English declarative sentences into Arabic. Results are compared with two well-known commercial systems, Google Translate and Systran. The proposed system reached an accuracy of 85.71%, while Google Translate reached 31.42% and Systran reached 20% on the same test sample.

Keywords: Machine translation, rule-based approach, bilingual dictionary, natural language processing.

Full Text


A Novel Method to Evaluate Romanization Systems: The Case of Romanizing Arabic Proper Nouns

Mohammed Al-Kabi1, Hanan Abu Obied2, Izzat Alsmadi3, and Maryam Nuser2
1 Computer Science Department, Zarqa University, Jordan
2CIS Department, Yarmouk University, Jordan
3 Computer Science Department, University of New Haven, USA

Abstract: The transliteration of Arabic proper nouns into other languages is usually based on the phonetic translation of these nouns into their Latin counterparts. Most dictionaries do not include most of these nouns, although some may have meanings. Transliteration is essential to the Natural Language Processing (NLP) field in general and to machine translation systems, cross-language information retrieval systems, and Web search engines in particular, since most submitted queries are proper nouns. Romanization, also known as Latinization, refers to the representation of names and technical terms with the Roman (Latin) alphabet. Romanization is accomplished using a number of methods: machine Romanizers are based either on a dictionary or on rules to Romanize different proper nouns. The Romanization process is not trivial, and there are usually several Roman variations of a single Arabic proper noun, which causes many problems for search engines and databases. This study uses as a case study a dataset of around 5,000 Arabic proper nouns, accumulated over years of indexing the authors of Arabic books at the library of one of the public universities in the Middle East. Many methods have been proposed to evaluate Romanization systems. In this paper, we present a new automated evaluation method, called "Back Romanization", to automatically evaluate the effectiveness of different Romanization systems regardless of the source and destination languages used.
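
The evaluation method itself is not detailed in this abstract; the sketch below only illustrates the round-trip idea behind "Back Romanization", with romanize() and back_romanize() as hypothetical placeholders and a three-name toy dataset standing in for the real system and data.

```python
# Sketch of the "Back Romanization" evaluation idea: romanize an Arabic proper
# noun, map the Roman form back to Arabic, and score how often the round trip
# recovers the original. romanize() and back_romanize() are hypothetical
# placeholders for a real Romanization system and its inverse mapping.

def romanize(arabic_name: str) -> str:
    """Placeholder for the Romanization system under evaluation."""
    table = {"محمد": "Mohammed", "خالد": "Khaled", "ياسمين": "Yasmin"}
    return table.get(arabic_name, "")

def back_romanize(roman_name: str) -> str:
    """Placeholder for the reverse (Roman-to-Arabic) mapping."""
    table = {"Mohammed": "محمد", "Khaled": "خالد", "Yasmin": "ياسمين"}
    return table.get(roman_name, "")

def back_romanization_accuracy(names):
    """Fraction of names recovered exactly by the round trip."""
    recovered = sum(1 for n in names if back_romanize(romanize(n)) == n)
    return recovered / len(names)

dataset = ["محمد", "خالد", "ياسمين"]   # toy stand-in for the ~5,000-noun dataset
print(f"round-trip accuracy: {back_romanization_accuracy(dataset):.2%}")
```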

Keywords: Romanization system evaluation; arabic romanizers evaluation; rule-based romanization; dictionary-based romanization; arabic names romanization.

Full Text


Word Sense Disambiguation for Arabic Text Categorization

Said Ouatik El Alaoui1,3, Meryeme Hadni1, Abdelmonaime Lachkar2, and Driss Aboutajdine3
1Department of Computer Science, FSDM, USMBA, Morocco
2Department of Electrical and Computer Engineering, E.N.S.A, USMBA, Morocco
3LRIT- CNRST URAC29, FSR, Mohammed V-Agdal University, Morocco

Abstract: In this paper, we present two contributions to Arabic word sense disambiguation. In the first, we propose to use two external resources, Arabic WordNet (AWN) and WordNet (WN), based on a term-to-term Machine Translation System (MTS). The second contribution relates to the disambiguation strategy: it consists of choosing the nearest concept for an ambiguous term, based on its relationships with the other concepts in the same local context. To evaluate the accuracy of the proposed method, several experiments have been conducted using two feature selection methods, Chi-Square and CHIR, and two machine learning techniques, Naïve Bayes (NB) and Support Vector Machine (SVM). The obtained results illustrate that the proposed method greatly increases the performance of our Arabic text categorization system.

Keywords: Word Sense Disambiguation, Arabic Text Categorization System, Arabic WordNet, Machine Translation System.

Full Text


Constraints Aware and User Friendly Exam Scheduling System

Mohammad Al-Haj Hassan1 and Osama Al-Haj Hassan2
1Computer Science Department, Zarqa University, Jordan
2Computer Science Department, Isra University, Jordan

Abstract: Scheduling is a crucial task for schools, universities, and industries. It is a vital task for any system that utilizes resources to fulfill certain criteria, and such utilization usually involves several conflicting constraints that scheduling has to take into account. Exam scheduling is essential for schools and universities if exam periods are to run smoothly. In this paper, we present an exam scheduling system that employs a graph-coloring scheduling technique. We focus on two aspects: first, the constraints our system handles; second, the system's user-friendly interface.
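
The abstract names graph coloring as the scheduling technique but gives no algorithmic details; the following sketch shows one common greedy colouring of an exam conflict graph (exams sharing a student get different time slots). The example enrolment data and the greedy ordering are assumptions, not the authors' implementation.

```python
# Greedy colouring of an exam conflict graph: exams that share at least one
# student must receive different time slots (colours). The example enrolment
# data and the largest-degree-first ordering are illustrative assumptions.
from collections import defaultdict

enrolments = {                      # student -> exams taken
    "s1": ["math", "physics"],
    "s2": ["math", "chemistry"],
    "s3": ["physics", "chemistry", "biology"],
    "s4": ["biology", "english"],
}

# Build the conflict graph.
conflicts = defaultdict(set)
for exams in enrolments.values():
    for a in exams:
        for b in exams:
            if a != b:
                conflicts[a].add(b)

# Colour exams greedily, most-constrained exams first.
slots = {}
for exam in sorted(conflicts, key=lambda e: len(conflicts[e]), reverse=True):
    used = {slots[n] for n in conflicts[exam] if n in slots}
    slots[exam] = next(s for s in range(len(conflicts)) if s not in used)

print(slots)   # e.g. {'physics': 0, 'chemistry': 1, 'math': 2, ...}
```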

Keywords: Exam Scheduling, user friendly, constraints, optimization, conflict, graph coloring.

Full Text


The Effect of Horizontal Database Table Partitioning on Query Performance

Salam Matalqa1 and Suleiman Mustafa2
1Faculty of Information Technology and Computer Sciences, Yarmouk University, Jordan
2Department of Computer Information Systems, Yarmouk University, Jordan

Abstract: Achieving optimal performance for database applications is a primary objective for database designers and a primary requirement for database end users. Partitioning is one of the techniques used by designers to improve the performance of database access. The purpose of this study was to investigate the effect of horizontal table partitioning on query response time using three partitioning strategies: zero partitioning, list partitioning, and range partitioning. Three tables extracted from the Student Information System (SIS) at Yarmouk University in Jordan were used in this research. Variation in table size was used to determine when partitioning has an impact (if any) on access performance. A set of 12 queries was run over a database of three different sizes. The results indicated that partitioning provided better response time than zero partitioning; on the other hand, the range and list partitioning strategies showed little performance difference across the different database sizes.

Keywords: Table partitioning, horizontal partitioning, range partitioning, list partitioning, database performance.

Full Text


A Two-Layer Approach Model for Industry Electricity Demand in Malaysia

Noor Azina Ismail and Syamnd Mirza Abdullah
Department of Applied Statistics, University of Malaya, Kuala Lumpur, Malaysia

Abstract: The main goal of this study is to propose a novel two-layer (hybrid) approach as an electricity prediction model for Malaysia. The work focuses on the consumption rate of electricity in the industrial sector. The new hybrid approach combines a linear approach (multiple linear regression) and a nonlinear approach (back-propagation artificial neural network) with principal component analysis. The time span of the input dataset fed to the proposed model is from 1992 to 2013. The independent variables included in this study are population size (Pop), Gross Domestic Product (GDP), Gross National Product (GNP), exports, imports, and the price of electricity. The proposed combination approach can address the accuracy and reliability of electricity demand prediction models. The Mean Absolute Percentage Error (MAPE) is used as the performance indicator to evaluate the accuracy of the new hybrid approach. After testing the model, the obtained accuracy rate is 0.97. The new model is used to predict industrial demand rates for the next five years (2015-2020).

Keywords: Electricity prediction, industry sector in malaysia, two-layer approach, principal component regression, back propagation neural network.

Full Text


Principal Component Regression with Artificial Neural Network to Improve Prediction of Electricity Demand

Noor Azina Ismail and Syamnd Mirza Abdullah
Department of Applied Statistics, University of Malaya, Kuala Lumpur, Malaysia

Abstract: Planning for electricity demand is a key factor in the successful development of any country. Such success can only be achieved if the demand for electricity is predicted correctly and accurately. This study introduces a new hybrid approach that combines Principal Component Regression (PCR) and Back-Propagation Neural Network (BPNN) techniques in order to improve the accuracy of electricity demand prediction. The study includes 13 factors related to electricity demand, and data for these factors have been collected in Malaysia. The new combination (PCR-BPNN) first solves the problem of collinearity in the input dataset and hence improves the reliability of the results. The work also addresses the errors recorded at the output stage of electricity prediction models due to changes in the patterns of the input dataset. The accuracy and reliability of the results have been improved by the proposed model. The proposed model is validated by comparing the values of three performance indicators for PCR-BPNN with the performance of three major prediction models. Results show that PCR-BPNN outperforms the other types of electricity prediction models.
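
The abstract does not specify the exact pipeline; a minimal sketch of the general PCR-plus-neural-network idea, using scikit-learn's PCA and MLPRegressor on synthetic data, might look as follows. The data, the number of components, and the network size are assumptions, not the authors' configuration.

```python
# Minimal sketch of a PCR + back-propagation network pipeline on synthetic
# data. The data, the number of principal components, and the network size
# are assumptions for illustration, not the authors' configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))                 # 13 demand-related factors
X[:, 5:] += 0.8 * X[:, :8]                     # inject some collinearity
y = 100 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

# 1) Remove collinearity with PCA (the principal component regression step).
X_std = StandardScaler().fit_transform(X)
components = PCA(n_components=5).fit_transform(X_std)

# 2) Fit a back-propagation neural network on the principal components.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(components[:150], y[:150])

# 3) Evaluate with MAPE, the indicator used in the companion study above.
pred = model.predict(components[150:])
mape = np.mean(np.abs((y[150:] - pred) / y[150:])) * 100
print(f"MAPE on held-out data: {mape:.2f}%")
```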

Keywords: Electricity demand, accuracy and reliability, principal component analysis, multiple linear regression, back-propagation neural network.

Full Text


BLN-Gram-TF-ITF as a Language Independent Feature for Authorship Identification and Paragraph Similarity

Nawaf Ali1 and Roman Yampolskiy2
1Al-Isra University, Jordan
2University of Louisville, United States

Abstract: Authors tend to leave traits in their writings, even if they try not to. By analyzing these traits through textual features, one can construct an authorial profile that distinguishes one person's writing from another's; this is known as authorship identification. BLN-Gram-TF-ITF has been implemented as a new feature for identifying authors by analyzing samples of their writings. New experiments demonstrate that the BLN-Gram-TF-ITF feature is also language independent and can be used to measure paragraph similarity within a book or between different books.

Keywords: Authorship; TFIDF; N-grams; Stylometry; Text features; Text similarity.

Full Text


Interactive Visual Search System Based on Machine Learning Algorithm

Anas Al-Fayoumi and Mohammad Hassan
Department of Computer Science, Zarqa University, Jordan

Abstract: This paper presents a tool that enables non-technical end users to use free-form queries to explore large-scale datasets with a simple, interactive, direct technique. The proposed approach is based on the effective integration of different techniques, such as data mining, visualization, and Human-Computer Interaction (HCI). The proposed model has been incorporated in a prototype developed as a web-based application using different programming languages and software tools. The system has been tested on a real dataset, and the obtained results indicate the efficiency of the approach.

Keywords: Visualization, features extraction, data mining, visual data mining, machine learning.

Full Text


IHadoop: Improving MapReduce Performance

Mohammed Qunper, Osama Badawy, and Mohammed Kholief
College of Computing and Information Technology, Arab Academy for Science, Technology, and Maritime Transport, Egypt

Abstract: The performance of Hadoop depends on many factors; data partitioning is one of them, and node specification is another. In the real world, data are often highly skewed, which can waste job time. In this paper we study the skew problem in the map and reduce phases, where the map phase needs to collect blocks and the reduce phase needs to collect key groups. We outline our solution for improving the map and reduce phases by decreasing the preparation work of the mapper and reducer: we select locality-based partitioning and extend it, using node specifications to decrease map time and to address power loss, and making key grouping more robust to decrease reduce time.

Keywords: Hadoop, mapreduce, big data, cloud computing, parallel computing.

Full Text


Facial Recognition Under Expression Variations

Mutasem Alsmadi
Department of MIS, University of Dammam

Abstract: Researchers in different fields such as image processing, neural sciences, computer programming, and psychophysics have investigated a number of problems related to facial recognition by machines and humans since 1975. Automatic recognition of human emotions from facial expressions is an important but difficult problem. This study introduces a novel automatic approach to analyzing and recognizing human facial expressions and emotions using a metaheuristic algorithm that hybridizes iterated local search and genetic algorithms with the back-propagation algorithm (ILSGA-BP). The back-propagation algorithm (BP) was used for training and testing on the features extracted from the right eye, left eye, and mouth using radial curves and cubic Bézier curves, while the metaheuristic algorithm (MA) was used to enhance and optimize the initial weights of the traditional back-propagation algorithm. The FEEDTUM facial expression database was used for the training and testing processes, with seven different emotions: surprise, happiness, disgust, neutral, fear, sadness, and anger. Experiments were conducted to compare the results obtained using features extracted from the radial curves, the cubic Bézier curves, and their combination. The comparison shows the superiority of the combination of the radial curves and the cubic Bézier curves, with percentages ranging between 87% and 97%, over the radial curves alone (between 80% and 97%) and the cubic Bézier curves alone (between 83% and 97%). Moreover, based on the features extracted using the radial curves, the cubic Bézier curves, and their combination, the experimental results show that the proposed ILSGA-BP algorithm outperformed the BP algorithm, with overall accuracies of 88%, 89%, and 93.4% respectively, compared to 83%, 82%, and 85% respectively for the BP algorithm.

Keywords: Face recognition, cubic bézier curves, radial curves, features extraction, metaheuristic algorithm and back-propagation algorithm.

Full Text


Content-Based Image Retrieval Based on Integrating Region Segmentation and Colour Histogram

Duraisamy Yuvaraj1 and Shanmugasundaram Hariharan2
1,2Dept. of Computer Science and Engineering
1M.I.E.T Engineering College, India
2T.R.P Engineering College, India

Abstract: Developments in multimedia technology and the increasing number of image retrieval functions and capabilities have led to the rapid growth of CBIR techniques. Colour histograms can be compared in terms of speed and efficiency. We present a modified approach based on a composite colour image histogram. A major research perspective in CBIR emphasizes matching similar objects based on shape, colour, and texture, using computer vision techniques to extract image features. The colour histogram is perhaps the most popular feature due to its simplicity, although image retrieval using colour histograms has both advantages and limitations. This paper presents some recommendations for improving CBIR systems using unlabelled images. The experimental results, obtained using Matlab, show that the region-based histogram and the colour histogram were effective as far as performance is concerned.

Keywords: Image analysis, content based image retrieval, retinal imaging, Gray scale and Semantic description.

Full Text


Algorithm for Answer Extraction Based on Pattern Learning

Muthukrishnan Ramprasath1 and Shanmugasundaram Hariharan2
1J.J. College of Engineering and Technology, India
2TRP Engineering College, India

Abstract: The rapid growth of information available on the internet has prompted the development of diverse tools for searching and browsing large document collections. Information retrieval (IR) systems act as vital tools for identifying relevant documents for user queries posted to search engines. Special kinds of IR systems, such as Google, Yahoo, and Bing, retrieve information relevant to user questions from the web. Question Answering Systems (QAS) play an important role in identifying the correct answer to a user question by relying on many IR tools. In this paper, we propose a method for answer extraction based on a pattern learning algorithm. The answer extraction component provides a precise answer to the user question. The proposed QA system uses a pattern learning algorithm consisting of the following components: question transformation, question and answer pattern generation, pattern learning, pattern-based answer extraction, and answer evaluation. Experiments were conducted with different types of questions on TREC datasets. Our system used different ranking metrics in the experiments to find the correct answer to a user question, and the experimental results were investigated and compared across the different question types.

Keywords: Question answering system, pattern learning, question transformation, answer extraction, trec data set.

Full Text


Selection of Relevance Feedback Technique in the Context of CBIR

Mawloud Mosbah and Bachir Boucheham
Department of Informatics, University 20 Août 1955 of Skikda, Algeria

Abstract: In this paper, we address the relevance feedback mechanism in the context of CBIR. A review of the literature shows that many techniques exist for re-ranking using relevance feedback information. We therefore propose a CBIR prototype that selects the appropriate technique for a given submitted query and its relevance feedback information. Experiments conducted on the Wang database (COREL1K), using colour moments as the signature and Euclidean distance as the matching formula, show the clear superiority of the results achieved by adopting the selection process.
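
The abstract names colour moments as the signature and Euclidean distance as the matching formula; a minimal sketch of that retrieval baseline (not of the selection mechanism itself) is shown below. The random arrays stand in for real database images and are an illustrative assumption.

```python
# Baseline CBIR matching as described in the abstract: a colour-moment
# signature per image and Euclidean distance for ranking. The random arrays
# below stand in for real database images (illustrative assumption).
import numpy as np

def colour_moments(image):
    """Mean, standard deviation, and skewness of each colour channel."""
    feats = []
    for c in range(image.shape[2]):
        channel = image[:, :, c].astype(float).ravel()
        mean = channel.mean()
        std = channel.std()
        skew = np.cbrt(((channel - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

rng = np.random.default_rng(1)
database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(20)]
query = rng.integers(0, 256, size=(64, 64, 3))

signatures = np.stack([colour_moments(img) for img in database])
q = colour_moments(query)

distances = np.linalg.norm(signatures - q, axis=1)   # Euclidean matching
ranking = np.argsort(distances)                      # best matches first
print("top-5 database images:", ranking[:5].tolist())
```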

Keywords: CBIR, relevance feedback, selection, precision, recall.

Full Text


Evaluating Association between Research and Social Networks

Mohammed Al-Kabi1, Heider Wahsheh2, and Izzat Alsmadi3
1Computer Science Department, Zarqa University, Jordan
2Computer Science Department, King Khaled University, Saudi Arabia
3Computer Science Department, University of New Haven, USA

Abstract: Web 2.0 enables its users to generate their own content, which has led to the emergence of a new era of online social networks. Nowadays the percentage of internet users on online social networks exceeds 50%, and the top three online social networks are Facebook, LinkedIn, and Twitter respectively. Twitter is among the top 10 websites worldwide, with more than 300 million active users. This study aims to discover whether there is any correlation between research networks and social networks. The motivation behind this paper is to discover whether researchers in Jordan use social networks designed for general social purposes, such as Facebook and Google+, as communication tools to discuss topics related to their specialty; more specifically, whether IT researchers in Jordan use Facebook and Twitter as part of their research networks. Google Scholar is used to identify IT researchers in Jordan, and networking information about those researchers is collected from Facebook and Twitter. Results showed that while most of those researchers have individual pages on social networks, those accounts are largely used for social, and possibly professional, but not research purposes. Although there are other social networks, such as ResearchGate and LinkedIn, that are professional networks used by academics and students, researchers in general should make better use of social networks.

Keywords: Social networks; Research networks; Twitter; Facebook; LinkedIn; ResearchGate.

Full Text


Embedded Voice Synthesiser and Sensors in Navigation Aid System for Blind People

Mohamed Fezari1, Rachid Sammouda2, and Salah Bensaoula1
1Department of electronics BP.121BADJI Mokhtar, Annaba University, Algeria
2Computer Science Department, King Saud University, Saudi Arabia

Abstract: The World Health Organization estimates that there are 38 million blind and 285 million visually impaired people worldwide, mainly in developing countries. People who are visually impaired often encounter physical and information barriers that limit their accessibility and mobility. In this research work, we describe the design of a navigation aid system for visually impaired and blind persons. It is based on a new type of microcontroller with a speech synthesiser module (VR-Stamp). The system is a portable, self-contained device that will allow blind and visually impaired individuals to travel through familiar and unfamiliar routes without the assistance of guides. The proposed system provides information to the user about urban walking routes, using spoken words to indicate what decisions to make, and detects obstacles during travel. It was tested inside the university by students, and the obtained results are encouraging.

Keywords: Blind navigation aid system, embedded system design, microcontroller, speech synthesizer, distance measurement, ultrasonic sensors.

Full Text


Emoticon-based Feedback Tool for e-Learning Platforms

Tarek Boutefara1,2 and Latifa Mahdaoui3
1Computer Science Department, University of Mohammed Seddik Benyahia, Algeria
2Doctoral School STIC, National High School of Computer Sciences, Algeria
3RIIMA Laboratory, Computer Science Department, USTHB, Algeria

Abstract: Emotions are strongly related to cognitive and perception processes, which is why a learner's emotional state can easily affect his or her learning performance. Detecting and recognizing the learner's emotional state is a very recent and active research field. In this paper, we propose a simplified approach to emotional state detection; we try to avoid existing complex approaches by relying on today's user habits, such as the participation and feeling-sharing habits strongly present in social networks and the Web 2.0 concept in general. The result is an explicit system that allows the learner to express his or her emotional state during a learning session.

Keywords: Affective computing, emotion recognition, web 2.0, moodle, emoticon.

Full Text


A Prototype for a Standard Arabic Sentiment Analysis Corpus

Mohammed Al-Kabi1, Mahmoud Al-Ayyoub2, Izzat Alsmadi3, and Heider Wahsheh4
1Computer Science Department, Zarqa University, Jordan
2Computer Science Department, Jordan University of Science and Technology, Jordan
3Computer Science Department, University of New Haven, USA
4Computer Science Department, King Khaled University, Saudi Arabia

Abstract: Researchers in the field of Arabic sentiment analysis (SA) need a relatively large standard Arabic sentiment analysis corpus to conduct their studies. A number of Arabic datasets exist; however, they suffer from certain limitations, such as the small number of reviews or topics they contain, their restriction to Modern Standard Arabic (MSA), or not being publicly available. Therefore, this study aims to establish a flexible and relatively large standard Arabic sentiment analysis corpus that can serve as a pillar and cornerstone for building larger Arabic corpora. In addition to MSA, this corpus contains reviews written in the five main Arabic dialect groups (Egyptian, Levantine, Arabian Peninsula, Mesopotamian, and Maghrebi). Furthermore, the corpus has five other types of reviews (English, mixed MSA and English, French, mixed MSA and emoticons, and mixed Egyptian and emoticons). The corpus is released for free to be used by researchers in this field, and it is characterized by its flexibility, allowing users to add, remove, and revise its contents. The total number of topics and reviews in this initial copy is 250 and 1,442, respectively. The collected topics are distributed equally among five domains (classes): Economy, Food-Lifestyle, Religion, Sport, and Technology, with 50 topics per domain. The corpus was built manually to ensure the highest quality for researchers in this field.

Keywords: Sentiment analysis; opinion mining; making of arabic corpus; arabic reference corpus; maktoob yahoo!

Full Text


A Mobile Application for Safer-Intelligent Driving and Vehicle Preventive Maintenance Using Vehicle OBD Data

Adnan Shaout and Abdur Rafay Mir
Electrical and Computer Engineering Department, The University of Michigan-Dearborn
Dearborn, Michigan 48128

Abstract: The purpose of this paper is to develop an app that can read vehicle diagnostic trouble codes and real-time data through the on-board diagnostics (OBD-II) interface. A regular driver might not be able to comprehend all of that data, which could confine the use of such an app to those with knowledge of OBD-II. Keeping this in mind, a warning/notification feature was added to the smartphone app that notifies the driver of any malfunction or parameter breach with a warning sign and a beep. The goal of this paper is to achieve maximum awareness among drivers and vehicle owners. We also aim to make every driver more educated about their own vehicle and its maintenance through the use of this technology, which will not only help them save time and money but also make vehicles safer, more reliable, and more fun to drive.

Keywords: OBD-II, mobile application, intelligent driving, vehicle preventive maintenance.

Full Text


Minimal Energy Consumption in MANET Using Cluster Head Selection and PLGP-M

W.R. Salem Jeyaseelan1 and Shanmugasundaram Hariharan2
1J.J. College of Engg. Tech., India
2TRP Engg. College (SRM Group), India

Abstract: Lack of infrastructure, route discovery management, and energy consumption monitoring in MANETs pose very big challenges for researchers working in this field. Existing protocols such as Link State Routing, Destination Sequence Distance Vector, Low Energy Adaptive Clustering Hierarchy, and Parno Luk Gaustad Perrig (PLGP) are not well suited to decentralized schemes because of the energy constraints of the nodes, which need to be taken into consideration when designing routing protocols. This paper presents the design and implementation, using the NS-2 simulator, of optimal route discovery management using Cluster Head Selection (CHS) and of the construction of a less energy-consuming route using the PLGP-MANET (PLGP-M) algorithm. Our proposed system gives effective route discovery management, which saves energy. It is also observed that the CHS algorithm in ad-hoc networks maximizes the lifetime of the nodes in order to minimize maintenance, while PLGP-M reduces energy consumption by finding the optimal routing path and maximizes overall performance.

Keywords: Ad-hoc networks, energy-consumption, manet, ns-2, plgp, routing.

Full Text


Integrated Replication-Checkpoint Fault Tolerance Approach for Mobile Agents “IRCFT”

Suzanne Sweiti and Amal Al. Dweik
College of Information Technology and Computer Engineering, Palestine Polytechnic University, Palestine

Abstract: Mobile agents offer a flexibility that is evident in distributed computing environments. However, agent systems are subject to failures resulting from bad communication, breakdown of the agent server, security attacks, lack of system resources, network congestion, and deadlock situations. If any of these occur, mobile agents may be totally or partially lost or damaged while execution is being carried out. Reliability must therefore be addressed by the mobile agent technology paradigm. This paper introduces a novel fault tolerance approach, “IRCFT”, to detect agent failures and to recover services in mobile agent systems. Our approach makes use of checkpointing and replication, where different agents cooperate to detect agent failures. We describe the design of our approach and discuss different failure scenarios and their corresponding recovery procedures.

Keywords: Mobile agents, fault tolerance, reliability, checkpointing, replication.

Full Text


A Personalized Hybrid Web Recommender System

A. Janet Rajeswari1 and Shanmugasundaram Hariharan2
1Research and Development Centre, Bharathiar University, India
2Department of CSE, TRP Engineering College, India

Abstract: Personalized recommender systems have attracted a wide range of attention among researchers in recent years. There has been a huge demand for the development of web search applications that gain knowledge of users' choices. A strong knowledge base, the type of search approach, and several other factors account for a good personalized web search engine. This paper presents the state of the art, challenges, and other issues in this context, thereby motivating the need for an improved personalized system. The paper describes an approach integrating news feeds and user opinions on web news using content-based, collaborative, and hybrid approaches. The experiments carried out show the effectiveness of the proposed system on a popular dataset, MovieLens.

Keywords: Personalization, web search, recommender system, user.

Full Text


Cloud-based Video Capture Handoff in the Intelligent Transportation System

Fekri Abduljalil
Alandalus University for Science and Technology

Abstract: Handoff is the process of transferring a data session from one cell to another in a cellular system, or from one channel to another in the same cell. An Intelligent Transportation System (ITS) is the integration of information system applications and communications technologies to improve transportation safety and mobility and to enhance system productivity. In this paper, a cloud-based video capture handoff scheme is proposed so that the vehicular cloud can be utilized to offer video capture as a service. The proposed scheme transfers a video capture session from one vehicle to another and handles the continuity of video capture with minimum service disruption. The proposed service is implemented using OPNET.

Keywords: Vehicular network, cloud computing, video capture, handoff.

Full Text


Oak Ridge Air Quality Index Computation: A Way for Monitoring Pollution in Annaba City

Mohamed Fezari, Radhwan Hattab, and Ahmed Al-Dahoud
Faculty of Engineering, Badji Mokhtar Annaba University, Algeria
University of Bradford, Bradford, UK

Abstract: AQI (air quality index) nodes based on a microcontroller have been designed for air quality monitoring in some sensitive areas of Annaba City, in the east of Algeria. The previous design computed the IAP (index of atmospheric purity) at the central unit because of the processing speed of the microcontroller; the new design, however, is based on the DSPIC-30 as the micro-system. The computation of the AQI is done on the node in order to reduce the data transmitted to the central unit. The nodes were tested in three main zones where the AQI values are quite different. We use dust and three major gases responsible for air pollution (CO, NO2, and LPG) to compute the AQI; the computation is done on site by the microcontrollers based on the AQI computation equations. In this work we cover the design of electronic nodes for air quality monitoring and the wireless transmission of the fused data. A GUI has been designed to simulate the WSN monitoring of environmental air quality.

Keywords: Pollution sensors, air quality index, environment monitoring, wireless sensor network.

Full Text


Ideal Software Architecture for the Automotive Industry

Adnan Shaout and Gamal Waza
Electrical and Computer Engineering Department, University of Michigan-Dearborn, Dearborn, Michigan, USA

Abstract: An ideal software architecture is the foundation for solving software engineering problems. In this paper, all of the automotive industry's current and future challenges are analyzed, and an ideal software architecture for the automotive industry is proposed. After gathering all the requirements, software architecture styles are evaluated against the unique environment found in the automotive industry. Research results show that the component-based and layered architectural styles were the most suitable for the automotive industry, while some other architectural styles were suitable for particular layers and/or components. One of the most interesting outcomes of this research was the introduction of run-time adaptation to the automotive software architecture, handled by introducing a Dynamic Configuration Manager module in the middleware layer. Finally, an ideal architecture design that addresses all current and future concerns is presented.

Keywords: Software for the automotive industry, software architecture, dynamic configuration manager, layered software.

Full Text


Modelling Software Fault Debugging Complexity under Imperfect Debugging Environment

Omar Shatnawi
Department of Computer Science, Al al-Bayt University, Jordan

Abstract: The fault debugging process is influenced by various factors, not all of which are deterministic in nature, such as the debugging effort, debugging efficiency, the debuggers' skill, and the debugging methods and strategies. In order to address these realistic factors influencing the debugging process, we propose an integrated nonhomogeneous Poisson process (NHPP) based software reliability model. The integrated modelling approach incorporates the effects of an imperfect fault debugging environment, fault debugging complexity, and the debuggers' learning phenomenon. The debugging phase is assumed to be composed of three processes, namely fault detection, fault isolation, and fault removal. The software faults are categorized into three types, namely simple, hard, and complex, according to their debugging complexity. As the debugging progresses, the fault removal rate changes to capture the learning process of the debuggers. In order to relax the assumption of an ideal debugging environment, two types of imperfect debugging phenomena are incorporated. Incorporating imperfect fault debugging phenomena in software reliability modelling is very important for reliability measurement, as it is related to the efficiency of the debugging team. Accordingly, the total debugging process is the superposition of the three debugging activity processes. Such a modelling approach can capture the variability in the software reliability growth curve due to the debugging complexity of the faults, depending on the debugging environment, which enables management to plan and control their debugging activities to tackle each type of fault. Actual test datasets cited from real software development projects have been used to demonstrate the proposed model.
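
The abstract does not reproduce the model's equations; as background only, the simplest NHPP software reliability growth model (the classic Goel-Okumoto exponential model, not the integrated model proposed in the paper) has mean value function m(t) = a(1 - e^(-bt)). The sketch below evaluates it and the corresponding conditional reliability for illustrative parameter values.

```python
# Background illustration only: the Goel-Okumoto exponential NHPP model (a
# classic single-fault-type model, not the integrated model proposed in the
# paper). m(t) is the expected number of faults removed by time t, and
# R(x | t) is the conditional reliability over a mission of length x.
import math

def mean_value(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """R(x | t) = exp(-(m(t + x) - m(t))) for an NHPP."""
    return math.exp(-(mean_value(t + x, a, b) - mean_value(t, a, b)))

# Illustrative parameters: a = expected total faults, b = detection rate.
a, b = 120.0, 0.05
for t in (10, 30, 60):
    print(f"t={t:3d}  expected faults removed={mean_value(t, a, b):6.1f}  "
          f"R(5 | t)={reliability(5, t, a, b):.3f}")
```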

Keywords: Software reliability engineering, software testing and debugging, non-homogenous Poisson process, imperfect debugging, fault debugging complexity.

Full Text


Verification of Abstract Data Types (A Hybrid Approach)

Nahed Ahmed Ali1 and Ali Mili2
1Department of Computer Science, University of Gezira, Sudan
2New Jersey Institute of Technology, New Jersey, USA

Abstract: Recently, software products have taken on increasingly critical functions and have grown extremely rapidly, becoming large and complex; this has made the need to verify their quality grow markedly. The challenge of verifying software products of such size and complexity remains: many research efforts have attempted to solve this problem, but their attempts have failed. In this paper, the researchers discuss a hybrid approach for verifying the correctness of abstract data types against axiomatic specifications written systematically, using a notation similar to trace specifications, called axiomatic specifications. This approach focuses on using analytical and empirical techniques in a complementary way, so as to avoid the drawbacks of using each technique on its own.

Keywords: Abstract data types, program verification, software testing, Hoare logic, axiomatic specifications.

Full Text


 Visualising Domain Ontologies Using Mind Maps to Enhance Requirements Engineering

Rami Zayed1, Mario Kossmann2, and Mohammed Odeh3
1University of the West of England, United Kingdom
2Airbus, United Kingdom
3University of the West of England, United Kingdom

Abstract: This paper sheds light on the relevance of domain ontologies in the contexts of Systems Engineering (SE) and Software Engineering (SwE), and in particular on their importance for the Requirements Engineering (RE) process in both contexts. It explains both the importance of and the obstacles to visualising ontology content in a meaningful way, so that effective communication with stakeholders based on relevant domain ontologies can be greatly enhanced. One powerful yet flexible way to visualise ontologies is by means of mind maps. However, automatic data transfer between the OWL format and a standardised mind map format that can be viewed and edited by commercial mind mapping tools had not been enabled until recently. The OntoREM Mind-Mapper (OMM) tool was developed to bridge this gap between domain ontologies specified in OWL and mind mapping formats, in order to enhance the Ontology-driven Requirements Engineering Methodology (OntoREM). In addition to this primary goal, the OMM tool also enables the visualisation of any other ontology specified in line with the OWL notation, subject to compatibility with the baseline of the OWL standard used.

Keywords: Ontology; mind mapping; ontology visualisation; requirements engineering.

Full Text


The OntoREM Mind-Mapper Software for Visualising OWL Ontologies

Rami Zayed1, Mario Kossmann2, and Mohammed Odeh3
1University of the West of England, United Kingdom
2Airbus, United Kingdom
3University of the West of England, United Kingdom

Abstract: The OntoREM Mind-Mapper (OMM) tool was developed at the University of the West of England in cooperation with Airbus in order to enhance and automate important aspects of the Requirements Engineering (RE) process as it is implemented in the Ontology-driven RE Methodology (OntoREM) – a novel, knowledge-driven methodology that has been applied to a number of case studies in the aerospace industry over the last years. The development of the OMM tool addressed opportunities for automation of previously manual interfaces between different tool environments that were used within the OntoREM approach, and as such greatly enhanced both the performance and reliability of OntoREM: process times could be significantly reduced, while errors related to manual data transfers could be eliminated. Furthermore, the visualisation of domain knowledge that is stored in domain ontologies (OWL format) was made possible in a user-friendly and customisable way, even of other types of ontologies that are specified in the OWL format, but are not directly related to the OntoREM approach. Using an MVC implementation for the development of the OMM tool led to a high degree of flexibility and increased the compatibility with different types of mind mapping tools available on the market today.

Keywords: Mind-mapping, ontology, ontorem methodology, requirements engineering, owl language, visualisation.

Full Text


 New Algorithms to Find Reliability and Unreliability Functions of the Consecutive k-out-of-n: F Linear and Circular System

Imad Nashwan
Al Quds Open University, Palestine

Abstract: The consecutive k-out-of-n: F linear and circular system consists of n components, and the system fails if at least k consecutive components are in the failure state. In this paper, new algorithms for finding the reliability and unreliability functions of the consecutive k-out-of-n: F linear and circular system are obtained. In this context, the Index Structure Function (ISF) and two equivalence relations are defined to partition the failure space and the functioning space of the consecutive k-out-of-n: F circular system into finitely many pairwise disjoint classes, where the reliability and unreliability functions are the summations of the reliabilities and unreliabilities of these equivalence classes. For the linear consecutive k-out-of-n: F system, boundary conditions are given to omit specified failure states from the failure space of the circular case in order to obtain the failure states of the linear case, which facilitates the evaluation of the reliability and unreliability functions of the consecutive k-out-of-n: F linear system.
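
The paper's algorithms are not reproduced in this abstract; as a reference point only, the sketch below computes the reliability of small linear and circular consecutive k-out-of-n: F systems by brute-force enumeration over component states. Equal component reliability p is an assumption, and enumeration is feasible only for small n, but it is useful for checking faster algorithms.

```python
# Brute-force reference (not the paper's algorithm): reliability of linear and
# circular consecutive k-out-of-n:F systems by enumerating all 2^n component
# states. Equal component reliability p is an illustrative assumption.
from itertools import product

def system_fails(states, k, circular=False):
    """True if at least k consecutive components are failed (state 0)."""
    # Appending the first k-1 states makes boundary-crossing runs contiguous.
    seq = list(states) + (list(states[:k - 1]) if circular else [])
    run = 0
    for s in seq:
        run = run + 1 if s == 0 else 0
        if run >= k:
            return True
    return False

def reliability(n, k, p, circular=False):
    """Sum the probability of every working state vector (1 = working)."""
    total = 0.0
    for states in product((0, 1), repeat=n):
        if not system_fails(states, k, circular):
            prob = 1.0
            for s in states:
                prob *= p if s == 1 else (1.0 - p)
            total += prob
    return total

if __name__ == "__main__":
    n, k, p = 8, 2, 0.9
    print("linear  :", round(reliability(n, k, p, circular=False), 6))
    print("circular:", round(reliability(n, k, p, circular=True), 6))
```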

Keywords: Consecutive k-out-of-n: F systems, System reliability, equivalence relation, modular arithmetic.

Full Text

