
Model Transformations Carried by the Traceability Framework for Enterprises in Software Industry

 

Gullelala Jadoon1, Muhammad Shafi2, and Sadaqat Jan3

1Department of Information Technology, University of Haripur, Pakistan

2Faculty of Computing and Information Technology, Sohar University, Oman

3University of Engineering and Technology, Mardan, Pakistan

Abstract: The development paradigm in the software engineering industry has shifted from a programming-oriented approach to model-oriented development. At present, model-based development is becoming an established method for enterprises to construct software systems and services proficiently. In Capability Maturity Model Integration (CMMI) Level 2, i.e., Managed, we need to sustain bi-directional traces of the transformed models for the administration of user requirements and demands. An organization achieves this goal by applying the specific practices suggested by the CMMI Level 2 process area of Requirements Management (RM). Maintaining traces is very challenging for software developers and testers, particularly during the evaluation and upgrading phases of development. In our previous research work, we proposed a traceability framework for model-based development of applications for software enterprises. This work extends that research by presenting meta-model transformations aligned with the Software Development Life Cycle (SDLC). These meta-models are capable of maintaining trace information through relations. The proposed technique is also verified using a generalized illustration of an application. This transformation practice will give software designers a foundation for maintaining traceability links in model-driven development.
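As a rough illustration of the kind of bi-directional trace links such meta-models maintain, the sketch below stores forward and backward links between model elements from different SDLC phases. All class and field names are hypothetical and are not taken from the paper's meta-models.

# Minimal sketch of bidirectional trace links between model elements across
# SDLC phases. All names (ModelElement, TraceRepository, phase labels) are
# hypothetical illustrations, not the paper's actual meta-model.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ModelElement:
    phase: str      # e.g., "requirements", "design", "implementation"
    name: str

@dataclass
class TraceRepository:
    links: set = field(default_factory=set)   # pairs of (source, target)

    def add_link(self, source: ModelElement, target: ModelElement):
        self.links.add((source, target))

    def forward(self, element):
        # Trace from an upstream element to the elements derived from it.
        return {t for (s, t) in self.links if s == element}

    def backward(self, element):
        # Trace a downstream element back to its origins.
        return {s for (s, t) in self.links if t == element}

repo = TraceRepository()
req = ModelElement("requirements", "REQ-01 user login")
cls = ModelElement("design", "LoginController")
repo.add_link(req, cls)
print(repo.forward(req), repo.backward(cls))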

Keywords: Requirements Management, traceability, Model-driven, SDLC, CMMI.

Received February 23, 2020; accepted June 9, 2020


Design and Simulation of Spectrum Access and Power Management Protocol for Dynamic Access Networks

 

Ala'eddin A. Masadeh1, Haythem Bany Salameh2,3, Ahmad Abu-El-Haija4

1Al-Balqa Applied University, Al-Salt, Jordan

2Al Ain University, Al Ain, UAE

3Yarmouk University, Irbid, Jordan

4Jordan University of Science and Tech., Irbid, Jordan

Abstract: This work investigates the problem of managing transmission power and assigning channels for multi-channel, single-radio Cognitive Radio Ad-Hoc Networks (CRAHNs). The considered network consists of M primary users and N secondary users, where the secondary users can use the licensed channels opportunistically when they are not utilized by the primary users. The secondary users have the capability of sensing the licensed channels and determining their occupation status. They are also able to control their transmission power such that the transmitted data can be received with high quality-of-service at the lowest possible transmission power and with minimum interference among the secondary users. This also contributes to increasing the frequency spatial reuse of the licensed channels by the secondary users when the channels are unoccupied, which increases the network throughput. This work proposes a channel assignment algorithm that aims at assigning the unoccupied licensed channels among secondary users efficiently, and a transmission power control scheme that aims at tuning the transmission power used by the secondary users to maximize the network throughput. The results show an enhancement achieved by the proposed protocol when it is integrated into the considered network, seen through an increase in network throughput and a decrease in access delay. In this context, the Network Simulator 2 (NS2) was used to verify our proposed protocol, which indicates a significant enhancement in network performance.
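The toy sketch below illustrates the two ingredients the abstract describes: greedy assignment of idle channels to non-interfering secondary-user links, and selection of the lowest power level meeting an SNR target. The power levels, noise value, and simple d^-2 gain model are illustrative assumptions, not the paper's protocol or its NS2 implementation.

# Toy sketch: greedily assign idle licensed channels to secondary-user (SU)
# links so that neighbouring links do not share a channel, then pick the
# lowest transmit power that still meets an SNR target. All constants and the
# d^-2 path-loss model are illustrative assumptions only.
NOISE = 1e-9                                  # W
SNR_TARGET = 10.0                             # linear
POWER_LEVELS = [0.01, 0.05, 0.1, 0.5, 1.0]    # W

def assign_channels(links, idle_channels, neighbours):
    """links: list of link ids; neighbours: dict link -> set of interfering links."""
    assignment = {}
    for link in links:
        used_nearby = {assignment[n] for n in neighbours.get(link, set()) if n in assignment}
        free = [ch for ch in idle_channels if ch not in used_nearby]
        if free:
            assignment[link] = free[0]
    return assignment

def min_power(distance):
    """Smallest power level whose received SNR meets the target (d^-2 path loss)."""
    gain = 1.0 / (distance ** 2)
    for p in POWER_LEVELS:
        if p * gain / NOISE >= SNR_TARGET:
            return p
    return None   # link not feasible even at maximum power

links = ["A-B", "C-D"]
neighbours = {"A-B": {"C-D"}, "C-D": {"A-B"}}
print(assign_channels(links, idle_channels=[1, 2, 3], neighbours=neighbours))
print(min_power(distance=50.0))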

 

Keywords: Cognitive radio network, primary user, secondary user, licensed spectrum, unlicensed spectrum, medium access control.

Received January 28, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/2



Incorporating Intelligence for Overtaking Moving Threatening Obstacles

Mohammed Shuaib1 and Zarita Zainuddin2

1Department of Computer Sciences and Information, Imam Mohammad Ibn Saud Islamic University, KSA

2School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia

Abstract: Crowd management and fire safety studies indicate that correctly predicting the threat caused by fire is crucial behavior that could lead to survival. Incorporating intelligence into exit choice models to accomplish evacuation simulations involving such behavior is essential. Escaping from a moving source of panic, such as fire, is a tremendously frightening experience during an evacuation. Predicting the dynamics of fire spreading and of exit clogging are intelligent aspects that help individuals follow the correct behaviors for their evacuation. This article proposes an intelligent approach to accomplishing typical evacuations. The agents are provided with the ability to find optimal routes that enable them to overtake the spreading fire. Fire and safe floor fields are proposed to provide the agents with the capability of determining intermediate points that compose optimal routes toward the chosen exit. The instinctive human behavior of keeping away from the fire, to protect oneself from a sudden unexpected attack, is introduced as an essential factor arising in emergency situations. Simulations are conducted in order to examine the simulated evacuees' behavior regarding overtaking the fire and to test the efficiency of making smart and effective decisions during emergency evacuation scenarios.
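For readers unfamiliar with floor fields, the sketch below computes a generic static floor field: breadth-first distances from every free grid cell to the nearest exit, which agents descend to reach an exit. This is only an illustration of the general technique; the paper's fire field, safe floor field, and precaution-time factor are more elaborate.

# Minimal sketch of a static floor field on a grid: BFS distance from every
# free cell to the nearest exit. Agents move toward decreasing values.
from collections import deque

def safe_floor_field(grid, exits):
    """grid: 2D list, 0 = free cell, 1 = wall; exits: list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    queue = deque()
    for r, c in exits:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and dist[nr][nc] > dist[r][c] + 1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(safe_floor_field(grid, exits=[(0, 2)]))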

 

Keywords: Evacuation simulation, fire spread, precaution time, safe floor field.

 

Received February 25, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/3

Enhanced Android Malware Detection and Family Classification Using Conversation-Level Network Traffic Features

Mohammad Abuthawabeh and Khaled Mahmoud

King Hussein School of Computing Sciences, Princess Sumaya University for Technology, Jordan


Abstract: Signature-based malware detection algorithms are facing challenges in coping with the massive number of threats in the Android environment. In this paper, conversation-level network traffic features are extracted and used in a supervised model. This model was used to enhance the process of Android malware detection, categorization, and family classification. The model employs the ensemble learning technique in order to select the most useful features among the extracted features. A real-world dataset called CICAndMal2017 was used in this paper. The results show that the Extra-Trees classifier achieved the highest weighted accuracy among the classifiers, with 87.75%, 79.97%, and 66.71% for malware detection, malware categorization, and malware family classification, respectively. A comparison with another study that uses the same dataset was made. This study achieved a significant enhancement in malware family classification and malware categorization. For malware family classification, the enhancement was 39.71% in precision and 41.09% in recall. The enhancement for Android malware categorization was 30.2% in precision and 31.14% in recall.
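A minimal sketch of this kind of pipeline, assuming a scikit-learn setup with ensemble-based feature selection followed by an Extra-Trees classifier, is shown below. The file name and feature layout are hypothetical and do not reflect the actual CICAndMal2017 extraction.

# Hedged sketch of a supervised pipeline over conversation-level features:
# ensemble-based feature selection, then Extra-Trees classification.
# "conversations.csv" and the "label" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("conversations.csv")          # hypothetical extracted features
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Feature selection driven by Extra-Trees importances, then the final classifier.
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0)).fit(X_tr, y_tr)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print(classification_report(y_te, clf.predict(selector.transform(X_te))))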

Keywords: Information Security, Android Malware, Network Traffic Analysis, Conversation-level Features, and Machine Learning.

Received February 19, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/4

Robotic Path Planning and Fuzzy Neural Networks

 

Nada Mirza
College of Engineering, Al Ain University, UAE

 

Abstract: Fuzzy logic has gained considerable attention due to its capacity for handling data in a much simpler way. It is applied to decrease the intricacy of existing solutions and to provide solutions to new problems as well. Neural networks, on the other hand, are distinguished by their robust processing and adaptive capabilities in dynamic environments. This paper reviews the primary ideas and contributions of neural networks and fuzzy logic in the field of robotic path planning. Several hybrid techniques, which are being utilized to bring the dream of mobile robots to reality, are discussed.
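As a pointer to what a fuzzy path-planning rule looks like in practice, the sketch below evaluates two triangular membership functions over obstacle distance and blends two steering rules. The breakpoints and rule outputs are arbitrary illustrative values, not drawn from any surveyed system.

# Tiny fuzzy steering illustration: triangular memberships over obstacle
# distance and a weighted (centroid-style) combination of two rules.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_angle(obstacle_distance_m):
    near = tri(obstacle_distance_m, 0.0, 0.0, 1.5)   # "obstacle is near"
    far = tri(obstacle_distance_m, 1.0, 3.0, 3.0)    # "obstacle is far"
    # Rule 1: IF near THEN turn hard (45 deg); Rule 2: IF far THEN go straight (0 deg).
    weights = near + far
    return (near * 45.0 + far * 0.0) / weights if weights else 0.0

print(steering_angle(0.5), steering_angle(2.5))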

Keywords: Neural networks, fuzzy logic, obstacle avoidance, path planning.

Received February 29, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/5

Mitigating Insider Threats on the Edge: A Knowledgebase Approach

 

Qutaibah Althebyan1,2
1College of Engineering, Al Ain University, UAE

 

2Software Engineering Department, Jordan University of Science and Technology, Jordan

Abstract: Insider threats, posed by internal cloud users, cause very serious problems, which in turn lead to devastating attacks on both individuals and organizations. Although most of the attention in the real world is on outsider attacks, the most damaging attacks come from insiders. In cloud computing, the problem becomes worse: the number of insiders is maximized and, hence, the amount of data that can be breached and disclosed is also maximized. Consequently, insider threats in the cloud ought to be one of the topmost issues to be handled and settled. Classical defenses against insider threats may fall short, as it is not easy to track both the activities of insiders and the amount of knowledge an insider can accumulate through his or her privileged accesses. Such accumulated knowledge can be used to disclose critical information that the insider is not privileged to access, through expected dependencies among different data items residing in one or more nodes of the cloud. This paper provides a solution suited to the specialized nature of the above-mentioned problem. The solution takes advantage of knowledge bases by tracking the accumulated knowledge of insiders through building Knowledge Graphs (KGs) for each insider. It also takes advantage of Mobile Edge Computing (MEC) by building a fog layer in which a mitigation unit, residing on the edge, handles insider threats as close as possible to where the insiders reside. As a consequence, this gives continuous, real-time reactions to insider threats and, at the same time, lessens the overhead in the cloud. The MEC model presented in this paper utilizes a knowledgebase approach in which insiders' knowledge is tracked and modeled. In case an insider's knowledge accumulates to a level that is expected to cause some potential disclosure of private data, an alarm is raised so that appropriate actions can be taken to mitigate the risk. The knowledgebase approach involves generating Knowledge Graphs (KGs) and Dependency Graphs (DGs), from which a Threat Prediction Value (TPV) is evaluated to estimate the risk upon which alarms for potential disclosure are raised. Experimental analysis has been conducted using the CloudExp simulator, and the results show the ability of the proposed model to raise alarms for potential insider risks in real time with high precision.
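A highly simplified sketch of the knowledgebase idea follows: accessed items are expanded through known dependencies and an alarm is raised once a toy threat value crosses a threshold. The dependency data, threshold, and formula are illustrative assumptions, not the paper's KG/DG model, TPV definition, or CloudExp setup.

# Toy insider-knowledge tracker: expand accessed items through dependencies,
# then compare the fraction of inferable sensitive items against a threshold.
DEPENDENCIES = {"salary_band": {"salary"}, "dept_budget": {"salary"}}   # item -> items it helps infer
SENSITIVE = {"salary", "medical_record"}
TPV_THRESHOLD = 0.5   # illustrative threshold, not the paper's

def accumulated_knowledge(accessed):
    known = set(accessed)
    frontier = list(accessed)
    while frontier:                      # transitive closure over dependencies
        item = frontier.pop()
        for inferred in DEPENDENCIES.get(item, set()):
            if inferred not in known:
                known.add(inferred)
                frontier.append(inferred)
    return known

def threat_prediction_value(accessed):
    known = accumulated_knowledge(accessed)
    return len(known & SENSITIVE) / len(SENSITIVE)

tpv = threat_prediction_value({"salary_band", "dept_budget"})
if tpv >= TPV_THRESHOLD:
    print(f"ALARM: insider TPV={tpv:.2f}")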

Keywords: Insider Threats, Fog, Mobile Edge, Cloud, Knowledge Graph, Dependency Graph, Database.

Received February 29, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/6



Discovery of Arbitrary-Shapes Clusters Using DENCLUE Algorithm

Mariam Khader1 and Ghazi Al-Naymat2,1

1Department of Computer Science, Princess Sumaya University for Technology, Jordan

2Department of IT, Ajman University, UAE

Abstract: One of the main requirements in clustering spatial datasets is the discovery of clusters with arbitrary shapes. Density-based algorithms satisfy this requirement by forming clusters as dense regions in the space that are separated by sparser regions. DENCLUE is a density-based algorithm that generates a compact mathematical form of arbitrary-shaped clusters. Although DENCLUE has proved its efficiency, it cannot handle large datasets because of its high computational complexity. Several attempts have been made to improve the performance of the DENCLUE algorithm, including DENCLUE 2. In this study, an empirical evaluation is conducted to highlight the differences between the first DENCLUE variant, which uses the hill-climbing search method, and the DENCLUE 2 variant, which uses the fast hill-climbing method. The study aims to provide a base for further enhancements of both algorithms. The evaluation results indicate that DENCLUE 2 is faster than DENCLUE 1; however, the first DENCLUE variant outperforms the second in discovering arbitrary-shaped clusters.
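For orientation, the sketch below shows the two ingredients the comparison rests on: a Gaussian kernel density estimate and a fixed-step hill climb toward a density attractor. The bandwidth, step size, and stopping rule are arbitrary choices; DENCLUE 2's fast hill climbing adapts the step instead of using a fixed one.

# Bare-bones DENCLUE-style ingredients: Gaussian kernel density and a
# fixed-step gradient ascent toward the nearest density attractor.
import numpy as np

def density(x, data, h=1.0):
    """Gaussian kernel density estimate at point x."""
    diffs = data - x
    return np.exp(-np.sum(diffs ** 2, axis=1) / (2 * h ** 2)).sum()

def hill_climb(x, data, h=1.0, step=0.1, iters=100):
    """Follow the density gradient to a local maximum (density attractor)."""
    x = x.astype(float)
    for _ in range(iters):
        diffs = data - x
        weights = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * h ** 2))
        grad = (weights[:, None] * diffs).sum(axis=0)
        norm = np.linalg.norm(grad)
        if norm < 1e-9:
            break
        x = x + step * grad / norm
    return x

data = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
print(hill_climb(np.array([0.5, 0.5]), data))   # converges near the first cluster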

Keywords: Clustering, DENCLUE, Density Clustering, Hill-Climbing.

Received January 24, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/7

Default Prediction Model: The Significant Role of Data Engineering in the Quality of Outcomes

Ahmad Al-Qerem1, Ghazi Al-Naymat2,3, Mays Alhasan3, and Mutaz Al-Debei4

1Computer Science Department, Zarqa University, Jordan

2Department of Information Technology, Ajman University, United Arab Emirates

3King Hussein School of Computing Sciences, Princess Sumaya University for Technology, Jordan

4Management Information Systems Department, The University of Jordan, Jordan

Abstract: For financial institutions and the banking industry, it is crucial to have predictive models for their core financial activities, especially those activities that play major roles in risk management. Predicting loan default is one of the critical issues that banks and financial institutions focus on, as huge revenue loss could be prevented by predicting a customer's ability not only to pay back, but also to do so on time. Customer loan default prediction is the task of proactively identifying customers who are most likely to stop paying back their loans. This is usually done by dynamically analyzing customers' relevant information and behaviors, so that the bank or financial institution can estimate the borrower's risk. Many different machine learning classification models and algorithms have been used to predict customers' ability to pay back loans. In this paper, three classification methods (Naïve Bayes, Decision Tree, and Random Forest) are used for prediction, comprehensive pre-processing techniques are applied to the dataset in order to obtain better data by fixing some of the main data issues, such as missing values and imbalanced data, and three different feature selection algorithms are used to enhance accuracy and performance. The results of the competing models varied after applying the data preprocessing techniques and feature selections. The results were compared using the F1 accuracy measure. The best model achieved an improvement of about 40%, whilst the least performing model achieved an improvement of only 3%. This implies the significance and importance of data engineering (e.g., data preprocessing techniques and feature selection) in machine learning exercises.
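A minimal scikit-learn sketch of this kind of experiment, assuming a hypothetical loan table with a binary default label, is given below: imputation and class weighting address missing values and imbalance, univariate selection stands in for the feature selection stage, and the models are compared by F1. The genetic and PSO-based selection mentioned in the keywords is not shown.

# Illustrative comparison of the three classifiers under a shared
# preprocessing pipeline. "loans.csv" and the "default" column are
# hypothetical; the dataset is assumed to have at least 10 predictors.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loans.csv")                      # hypothetical loan dataset
X, y = df.drop(columns=["default"]), df["default"]

models = {
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(class_weight="balanced", random_state=0),
    "random_forest": RandomForestClassifier(class_weight="balanced", random_state=0),
}
for name, model in models.items():
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # handle missing values
        ("select", SelectKBest(f_classif, k=10)),       # simple feature selection
        ("clf", model),
    ])
    scores = cross_val_score(pipe, X, y, scoring="f1", cv=5)
    print(name, scores.mean())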

Keywords: Default Prediction, Classification, Pre-processing, Prediction, Feature Selection, Genetic Algorithm, PSO Algorithm, Naïve Bayes, Decision Tree, SVM, Random Forest, Banking, Risk Management.

Received February 29, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/8

Identity Identification and Management in the Internet of Things

Zina Houhamdi1 and Belkacem Athamena2

1Software Engineering Department, College of Engineering, Al Ain University, UAE

2Business Administration Department, College of Business, Al Ain University, UAE

Abstract: Users now agree on the necessity of a continuous Internet connection regardless of place, manner, and time. Nowadays, several elite services are accessible to people over the Internet of Things (IoT), a heterogeneous network defined by machine-to-machine communication. Although devices are used to establish the communication, the users can be considered the actual producers of the input data and consumers of the output data. Consequently, users should be viewed as smart objects in the IoT; therefore, user identification, authentication, and authorization are required. However, the user identification process is complicated, because users are reluctant to share their confidential and private data. On the other hand, this private data should be usable by some of their devices. Accordingly, an equitable mechanism to identify users and manage their identities is necessary. In addition, the user plays an extremely important role in establishing the rules needed for identity identification and in ensuring the continuity of the receptive services. The main purpose of this paper is to develop a new framework for an Identity Management System (IdMS) for the IoT. The primary contributions of this paper are: the proposal of a device recognition algorithm for user identification, the proposal of a new format for the identifier, and a theoretical framework for the IdMS.

 

Keywords: Authentication, identification algorithm, identity management, internet of things, single thing sign-on, heterogeneous.

Received February 22, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/9

DoS and DDoS Attack Detection Using Deep Learning and IDS

Mohammad Shurman1, Rami Khrais2, and Abdulrahman Yateem1

1Jordan University of Science and Technology, Network Engineering and Security Department, Jordan

2Jordan University of Science and Technology, Computer Engineering Department, Jordan

Abstract: In recent years, Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks have spread greatly, with attackers making online systems unavailable to legitimate users by sending huge numbers of packets to the target system. In this paper, we propose two methodologies to detect Distributed Reflection Denial of Service (DrDoS) attacks in the IoT. The first methodology uses a hybrid Intrusion Detection System (IDS) to detect IoT-DoS attacks. The second methodology uses deep learning models based on Long Short-Term Memory (LSTM), trained with the latest dataset for such kinds of DrDoS attacks. Our experimental results demonstrate that the proposed methodologies can detect malicious behaviour, keeping the IoT network safe from DoS and DDoS attacks.
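A minimal Keras sketch of such an LSTM detector is shown below, assuming fixed-length windows of per-packet features; the window length, feature count, and training settings are placeholders rather than the paper's architecture or dataset preprocessing.

# Binary LSTM classifier over windows of packet features (1 = attack, 0 = benign).
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES = 20, 8    # hypothetical window of 20 packets, 8 features each

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data just to show the expected shapes.
X = np.random.rand(100, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1], verbose=0))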

Keywords: Deep learning, DoS, DrDoS, IDS, IoT, LSTM.

Received February 29, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/10



On Detection and Prevention of Zero-Day Attack Using Cuckoo Sandbox in Software-Defined Networks

Huthifh Al-Rushdan1, Mohammad Shurman2, and Sharhabeel Alnabelsi3,4

1Computer Engineering Department, Jordan University of Science and Technology, Jordan

2Network Engineering and Security Department, Jordan University of Science and Technology, Jordan

3Computer Engineering Department, Al-Balqa Applied University, Jordan

4Computer Engineering Department, AL Ain University, United Arab Emirates

Abstract: Network attackers may identify a network vulnerability within less than one day; this kind of attack is known as a zero-day attack. Such a vulnerability, undiscovered by vendors, empowers the attacker to affect or damage network operation, because vendors have less than one day to fix the newly exposed vulnerability. Existing defense mechanisms against zero-day attacks focus on prevention efforts, in which unknown or new vulnerabilities typically cannot be detected. To the best of our knowledge, protection mechanisms against zero-day attacks have not been widely investigated for Software-Defined Networks (SDNs). Thus, in this work we are motivated to develop a new zero-day attack detection and prevention mechanism for SDNs by modifying the Cuckoo sandbox tool. The mechanism is implemented and tested under a UNIX system. The experimental results show that our proposed mechanism successfully stops zero-day malware by isolating the infected clients, in order to prevent the malware from spreading to other clients. Moreover, the results show the effectiveness of our mechanism in terms of detection accuracy and response time.

Keywords: Zero-day attack, Malware, Controller, Intrusion Detection System, Cuckoo Sandbox, Software-Defined Networks.

Received March 1, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/11

Identification of Ischemic Stroke by Marker Controlled Watershed Segmentation and Feature Extraction

Mohammed Ajam, Hussein Kanaan, Lina El Khansa, and Mohammad Ayache

Department of Biomedical Engineering, Islamic University of Lebanon, Beirut, Lebanon

Abstract: In this paper, we describe a method that identifies ischemic stroke in Computed Tomography (CT) brain images by extracting statistical and textural features. First, preprocessing of the CT images is performed, followed by image enhancement. Segmentation of the CT images is performed by Marker Controlled Watershed. After the segmentation, we compute the Grey Level Co-occurrence Matrix (GLCM) and extract the textural and statistical features. The disadvantage of watershed is the over-segmentation caused by noise, which is solved by Marker Controlled Watershed, as shown experimentally. The extracted features are contrast, correlation, standard deviation, variance, homogeneity, energy, and mean. We observed in our results that the values of homogeneity, energy, and mean are higher in normal CT images than in abnormal CT images (ischemic stroke), whereas the contrast, correlation, standard deviation, and variance of normal CT images are lower than those of the abnormal ones.
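The GLCM feature extraction step can be sketched with scikit-image as below (using the graycomatrix/graycoprops spelling of recent releases); the synthetic array merely stands in for a segmented CT region, and the distance and angle settings are illustrative assumptions.

# GLCM second-order features plus first-order statistics from a grayscale region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

region = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder for a segmented CT region

glcm = graycomatrix(region, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
# First-order statistics computed directly from the region.
features.update(mean=region.mean(), std=region.std(), variance=region.var())
print(features)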

Keywords: Ischemic Stroke, Watershed, Grey Level Co-occurrence Matrix, Textural and Statistical features.

Received February 27, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/12

Streaming Video Classification Using Machine Learning

Adnan Shaout and Brennan Crispin

Electrical and Computer Engineering Department, University of Michigan-Dearborn, Michigan


Abstract: This paper presents a method using neural networks and a Markov Decision Process (MDP) to identify the source and class of video streaming services. The paper presents the design and implementation of an end-to-end pipeline for training a machine learning system that can take in packets collected over a network interface and classify the data stream as belonging to one of five streaming video services: YouTube, YouTube TV, Netflix, Amazon Prime, or HBO.
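As a rough sketch of the classification stage only, the snippet below trains a small neural network over hypothetical per-flow statistics and maps predictions to the five services; the feature vector, data, and model size are assumptions, and the paper's packet capture and MDP components are not represented.

# Toy classifier mapping flow statistics to one of the five streaming services.
import numpy as np
from sklearn.neural_network import MLPClassifier

SERVICES = ["YouTube", "YouTube TV", "Netflix", "Amazon Prime", "HBO"]

# Hypothetical features per flow: mean/std packet size, mean/std inter-arrival
# time, bytes per second -- 5 numbers per sample (random stand-in data).
X = np.random.rand(500, 5)
y = np.random.randint(0, len(SERVICES), size=500)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X, y)
print(SERVICES[clf.predict(X[:1])[0]])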

Keywords: Machine Learning, Neural Networks, Deep Packet Inspection, MDP, Video Streaming, AI.

Received February 29, 2020; accepted June 9, 2020

https://doi.org/10.34028/iajit/17/4A/13
 