Key Parts of Transmission Line Detection Using Improved YOLO v3
Tu Renwei1, Zhu Zhongjie1, Bai Yongqiang1, Gao Ming2, and Ge Zhifeng2
1College of Information and Intelligence Engineering, Zhejiang Wanli University, China
2Ninghai Power Supply Company Limited, State Grid Corporation of Zhejiang, China
Abstract: Unmanned Aerial Vehicle (UAV) inspection has become one of the main methods for transmission line inspection, but it still has shortcomings such as slow detection speed, low efficiency, and poor performance in low-light environments. To address these issues, this paper proposes a deep learning detection model based on You Only Look Once (YOLO) v3. On the one hand, the neural network structure is simplified: the three feature-map scales of YOLO v3 are pruned to two to meet the specific detection requirements. Meanwhile, the K-means++ clustering method is used to calculate the anchor values for the dataset to improve detection accuracy. On the other hand, 1000 sets of power tower and insulator images are collected; the dataset is expanded by flipping and scaling, and further enriched with different illumination conditions and viewing angles. The experimental results show that the model based on the improved YOLO v3 improves detection accuracy by 6.0%, reduces FLOPs by 8.4%, and increases detection speed by about 6.0%.
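As a quick illustration of the anchor-computation step described above, the sketch below (not the authors' code; box sizes, the number of anchors, and all parameters are assumptions) clusters ground-truth box widths and heights with K-means++ seeding and a 1 − IoU distance, producing six anchors for a two-scale detection head.

```python
# Illustrative sketch: YOLO anchor boxes via K-means++ seeding with a 1 - IoU distance.
import numpy as np

def iou_wh(boxes, centers):
    # boxes: (N, 2) widths/heights, centers: (K, 2); IoU assuming a shared top-left corner
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + centers[None, :, 0] * centers[None, :, 1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k=6, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = [boxes[rng.integers(len(boxes))]]            # K-means++ seeding
    while len(centers) < k:
        d = 1.0 - iou_wh(boxes, np.array(centers)).max(axis=1)
        centers.append(boxes[rng.choice(len(boxes), p=d / d.sum())])
    centers = np.array(centers)
    for _ in range(iters):                                  # Lloyd iterations with 1 - IoU distance
        assign = iou_wh(boxes, centers).argmax(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = boxes[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]        # anchors sorted by area

# Example: random (width, height) pairs standing in for labelled tower/insulator boxes
anchors = kmeanspp_anchors(np.random.default_rng(1).uniform(10, 300, size=(500, 2)), k=6)
print(anchors)
```

Six anchors correspond to three anchors per scale on the two retained feature maps.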
Keywords: Deep learning, YOLO v3, electric tower, insulator.
Received October 31, 2019; accepted February 4, 2021
Transmit and Receive Antenna Selection Based Resource Allocation for Self-Backhaul 5G Massive MIMO HetNets
Farah Akif1, Aqdas Malik1, Ijaz Qureshi2, and Ayesha Abassi1
1Department of Electrical Engineering, International Islamic University, Pakistan
2Department of Electrical Engineering, Air University, Pakistan
Abstract: With the advancement of wireless communication technology, ease of accessibility and increasing coverage area are major challenges for service providers. Network densification through the integration of Small cell Base Stations (SBS) in Heterogeneous Networks (HetNets) promises to improve network performance for cell-edge users. Since providing wired backhaul for small cells is not cost-effective or practical, the third-Generation Partnership Project (3GPP) has developed a self-backhaul architecture known as Integrated Access and Backhaul (IAB) for Fifth Generation (5G) networks. This allows Main Base Station (MBS) resources to be shared between SBS and MBS users. However, fair and efficient division of MBS resources remains a problem to be addressed. We develop a novel transmit antenna selection/partitioning technique that takes advantage of the IAB 5G standard for Massive Multiple Input Multiple Output (MIMO) HetNets. Transmit antenna resources are divided between access for MBS users and wireless backhaul for the SBS. We develop a Genetic Algorithm (GA) based Transmit Antenna Selection (TAS) scheme and compare it with random selection, eigenvalue-based selection, and bandwidth partitioning. Our analysis shows that GA-based TAS is able to converge to an optimum antenna subset that provides better rate coverage. Furthermore, we demonstrate the advantage of TAS-based partitioning over bandwidth partitioning and show that user association can be controlled through the number of antennas reserved for access or backhaul.
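The sketch below illustrates the GA-based antenna partitioning idea in miniature: a binary chromosome marks each MBS antenna as reserved for access or backhaul, and the GA searches for a good split. The fitness model (per-antenna channel gains, min-rate objective) is purely hypothetical and not the authors' system model.

```python
# Illustrative GA sketch for transmit antenna partitioning (toy fitness model).
import numpy as np

rng = np.random.default_rng(0)
M = 64                                   # MBS antennas
g_access = rng.exponential(1.0, M)       # toy per-antenna gains towards MBS users
g_backhaul = rng.exponential(1.0, M)     # toy per-antenna gains towards the SBS backhaul

def fitness(chrom):
    # Toy rate model: capacity grows with the aggregate gain on each side of the split.
    r_acc = np.log2(1.0 + g_access[chrom == 1].sum())
    r_bh = np.log2(1.0 + g_backhaul[chrom == 0].sum())
    return min(r_acc, r_bh)              # reward a fair access/backhaul division

def ga(pop_size=40, gens=200, pm=0.02):
    pop = rng.integers(0, 2, (pop_size, M))
    for _ in range(gens):
        f = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(f)[-pop_size // 2:]]          # truncation selection
        cuts = rng.integers(1, M, pop_size // 2)
        kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]))
                         for i, c in enumerate(cuts)])          # one-point crossover
        kids ^= (rng.random(kids.shape) < pm).astype(int)       # bit-flip mutation
        pop = np.vstack((parents, kids))
    return pop[np.argmax([fitness(c) for c in pop])]

best = ga()
print("antennas reserved for access:", int(best.sum()), "of", M)
```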
Keywords: Antenna selection, Massive MIMO, heterogeneous networks, genetic algorithm.
Received January 9, 2020; accepted January 13, 2021
A QoS-Based Medium Access Control Protocol for WBANs and Its Performance Evaluation
Abdullah Sevin and Cuneyt Bayilmis
Department of Computer Engineering, Sakarya University, Turkey
Abstract: Nowadays, Wireless Body Area Networks (WBANs) are used in many fields. WBANs consist of small sensor nodes that communicate wirelessly and provide services in the personal area. Quality of Service (QoS) is an essential issue for WBANs because human life is at stake, and QoS problems can only be solved with a robust Medium Access Control (MAC) protocol; many MAC protocols have been developed for WBANs for this purpose. The ISO/IEEE 11073 health informatics standard defines the exchange of personal health information and aims to provide interoperability between medical technologies. This paper presents a QoS-aware MAC protocol, based on a cross-layer architecture, that supports the ISO/IEEE 11073 communication standard. We designed a slot assignment scheme, a prioritization mechanism, and an admission control mechanism to provide QoS. The performance of the proposed MAC protocol is compared with the IEEE 802.15.4 and IEEE 802.15.6 protocols in terms of end-to-end delay, packet loss ratio, and throughput, and the proposed protocol outperforms both. It is observed that the proposed protocol does not exceed a 45 ms delay, sustains up to 81% traffic load, and has a maximum error rate of 0.162%.
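As a rough illustration of the slot assignment and admission control ideas mentioned above, the hypothetical sketch below admits requests in priority order until the superframe is full; the real protocol, its traffic classes, and its ISO/IEEE 11073 message handling are more involved.

```python
# Hypothetical priority-driven slot assignment sketch (not the proposed protocol).
from dataclasses import dataclass

@dataclass
class Request:
    node_id: int
    priority: int      # e.g., 0 = emergency vitals, 1 = periodic vitals, 2 = best effort
    slots_needed: int

def assign_slots(requests, slots_per_superframe=16):
    schedule, free = [], slots_per_superframe
    # Admission control: serve requests in priority order, reject what does not fit.
    for req in sorted(requests, key=lambda r: r.priority):
        if req.slots_needed <= free:
            schedule.append((req.node_id, req.slots_needed))
            free -= req.slots_needed
        # else: rejected for this superframe; the node retries later
    return schedule, free

reqs = [Request(1, 2, 6), Request(2, 0, 4), Request(3, 1, 5), Request(4, 2, 8)]
print(assign_slots(reqs))   # emergency and periodic traffic admitted before best effort
```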
Keywords: Wireless body area networks, quality of service, ISO/IEEE 11073, medium access control protocol, cross-layer.
Received March 28, 2020; accepted January 19, 2021
https://doi.org/10.34028/iajit/18/6/3
Lightweight Secure MQTT for Mobility Enabled e-health Internet of Things
Adil Bashir1 and Ajaz Hussain Mir2
1Islamic University of Science and Technology, Jammu and Kashmir, India
2National Institute of Technology Srinagar, Jammu and Kashmir, India
Abstract: The Internet of Things (IoT) is a smart interconnection of miniature sensors enabling the association of large numbers of smart objects, ranging from assisted living and e-health to smart cities. IoT devices are equipped with limited resources in terms of power, memory, and processing capability, and therefore present novel security challenges. The purpose of this paper is to design an energy-efficient security mechanism for an IoT-based e-health system in which medical data is encrypted using lightweight cryptographic operations. The proposed scheme provides end-to-end data confidentiality for a mobility-enabled e-health IoT system. The security scheme is simple and can be computed quickly on resource-constrained motes while providing the required security services. Further, the mobility of patients is managed securely without the need for frequent reconfiguration during their movement within hospital/home premises. The evaluation results demonstrate that the proposed scheme reduces energy utilization to 17.84% of that of Certificate-Based Datagram Transport Layer Security (CB-DTLS) and increases the longevity of motes by 5.6 times. Energy consumption for configuration handover during mobility is handled by resource-rich devices, which makes the scheme efficient in managing sensor mobility. This work can be used as a basis for future research on securing patient data in an e-health system using energy-efficient cryptographic operations.
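To make the idea of lightweight payload protection concrete, the stdlib-only sketch below XORs the MQTT payload with a SHA-256-derived keystream under a pre-shared key. This is not the authors' scheme and is for illustration only; a vetted lightweight cipher should be used in any real deployment.

```python
# Illustrative lightweight symmetric encryption of an MQTT payload (pedagogical only).
import hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(8)                                   # fresh nonce per message
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct                                       # publish this as the MQTT payload

def decrypt(key: bytes, payload: bytes) -> bytes:
    nonce, ct = payload[:8], payload[8:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = hashlib.sha256(b"pre-shared e-health key").digest()
msg = b'{"patient":17,"hr":72,"spo2":98}'
assert decrypt(key, encrypt(key, msg)) == msg
```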
Keywords: Internet of Things, sensors, MQTT, lightweight security, energy efficiency, e-health.
Received April 4, 2020; accepted September 27, 2020
A Secure Cellular Automata Integrated Deep Learning Mechanism for Health Informatics
Kiran Sree Pokkuluri1 and SSSN Usha Devi Nedunuri2
1Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), India
2Department of Computer Science and Engineering, University College of Engineering-JNTU, India
Abstract: Health informatics has gained greater focus as the role of data analytics has become vital over the last two decades. Many machine learning based models have evolved to process the huge volumes of data involved in this sector. Deep Learning (DL) augmented with Non-Linear Cellular Automata (NLCA) is becoming a powerful tool with great potential for processing big data, helping to develop systems that offer parallelization, rapid data storage, and computational power with improved security. This paper provides a novel and robust mechanism based on deep learning augmented with non-linear cellular automata, with greater security and adaptability, for health informatics. The proposed mechanism is adaptable and can address many open problems in medical informatics, bioinformatics, and medical imaging. The security parameters considered in this model are confidentiality, authorization, and integrity. The method is evaluated for performance and reports an average accuracy of 89.32%. Precision, sensitivity, and specificity are used to measure the accuracy of the model.
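For readers unfamiliar with cellular automata, the minimal sketch below evolves a 1-D lattice under a non-additive (non-linear) local rule. It is purely illustrative; the paper's NLCA construction and its coupling to the deep learning model are not publicly specified here.

```python
# Minimal 1-D non-linear cellular automaton update (illustrative only).
import numpy as np

def nlca_step(state):
    left, right = np.roll(state, 1), np.roll(state, -1)
    # A non-linear rule: XOR of the left neighbour with the OR of the cell and
    # its right neighbour (this is Wolfram's rule 30).
    return left ^ (state | right)

rng = np.random.default_rng(0)
state = rng.integers(0, 2, 64)
for _ in range(10):                # evolve the lattice for a few steps
    state = nlca_step(state)
print(state)
```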
Keywords: Deep learning, health informatics, cellular automata, neural network.
Received May 12, 2020; accepted March 16, 2021
Machine Learning Model for Credit Card Fraud Detection - A Comparative Analysis
Pratyush Sharma, Souradeep Banerjee, Devyanshi Tiwari, and Jagdish Chandra Patni
School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
Abstract: In today's world, we are on an express train to a cashless society, which has led to a tremendous escalation in the use of credit card transactions. The flip side is that fraudulent activities are on the increase; therefore, implementation of a methodical fraud detection system is indispensable to cardholders as well as card-issuing banks. In this paper, we use different machine learning algorithms, namely random forest, logistic regression, Support Vector Machine (SVM), and neural networks, to train models on a given dataset and present a comparative study of the accuracy and other measures achieved by each algorithm. Based on a comparative analysis of the F1 score, we determine which algorithm is best suited to the task. Our study concluded that the Artificial Neural Network (ANN) performed best, with an F1 score of 0.91.
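A minimal comparison harness in this spirit is sketched below; synthetic imbalanced data stands in for the card-transaction dataset, and the hyper-parameters are assumptions rather than the study's settings.

```python
# Illustrative F1 comparison of the four classifier families named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)                  # imbalanced, fraud-like labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: F1 = {f1_score(y_te, model.predict(X_te)):.3f}")
```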
Keywords: Machine learning, credit card fraud detection, random forest, accuracy, neural network, SVM.
Received June 10, 2020; accepted February 17, 2021
Gabor and Maximum Response Filters with Random Forest Classifier for Face Recognition in the Wild
Yuen-Chark See1, Eugene Liew1, and Norliza Mohd Noor2
1Department of Electrical and Electronic Engineering, Universiti Tunku Abdul Rahman, Malaysia
2Razak Faculty of Technology and Informatics, Universiti Teknologi Malaysia, Malaysia
Abstract: Research on face recognition has been evolving for decades, and numerous approaches have been developed with highly desirable outcomes in constrained environments. In contrast, face recognition in an unconstrained environment, where varied facial poses, occlusion, aging, and image quality pose vast challenges, remains an unresolved problem. Many current techniques do not perform well when evaluated on unconstrained databases, yet most real-world applications need good face recognition performance in unconstrained environments. This paper presents a face recognition system aimed at enhancing performance in an unconstrained environment. The proposed system implements a fusion of Gabor filters and Maximum Response (MR) filters with a Random Forest classifier. The Gabor filters are a hybrid of Gabor magnitude filters and Oriented Gabor Phase Congruency (OGPC) filters: the Gabor magnitude filters produce the magnitude response, while the OGPC filters produce the phase response of the Gabor filters. The MR filters comprise edge- and bar-anisotropic filter responses and isotropic filter responses. In the face feature selection process, Monte Carlo Uninformative Variable Elimination Partial Least Squares Regression (MC-UVE-PLSR) is used to select the optimal face features in order to minimize computational cost without compromising recognition accuracy. A Random Forest is used to classify the generated feature vectors. The algorithm's performance is evaluated on two unconstrained facial image databases: Labelled Faces in the Wild (LFW) and Unconstrained Facial Images (UFI). The proposed technique produces encouraging results on both databases, recording face recognition rates that are comparable with other state-of-the-art algorithms.
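The sketch below illustrates only the Gabor-magnitude portion of such a pipeline with OpenCV, pooling filter responses into a feature vector for a Random Forest; the OGPC and MR filter stages, the MC-UVE-PLSR selection step, and all filter parameters are omitted or assumed.

```python
# Gabor-magnitude feature sketch with a Random Forest classifier (parameters illustrative).
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gabor_magnitude_features(img, scales=(7, 11, 15), orientations=8):
    feats = []
    for ksize in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
            resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])   # simple pooling per filter response
    return np.array(feats)

# Toy usage with random images; real input would be aligned LFW/UFI face crops.
rng = np.random.default_rng(0)
X = np.array([gabor_magnitude_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(40)])
y = rng.integers(0, 2, 40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```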
Keywords: Face recognition, labelled faces in the wild, unconstrained facial images.
Received November 7, 2019; accepted February 7, 2021
Text Summarization Technique for Punjabi Language Using Neural Networks
Arti Jain1, Anuja Arora1, Divakar Yadav2, Jorge Morato3, and Amanpreet Kaur1
1Department of Computer Science and Engineering, Jaypee Institute of Information Technology, India
2Department of Computer Science and Engineering, National Institute of Information Technology, India
3Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Spain
Abstract: In the contemporary world, the use of digital content has risen exponentially. Newspaper and web articles, status updates, advertisements, etc., have become an integral part of our daily routine. Thus, there is a need to build an automated system that summarizes such large text documents in order to save time and effort. Although summarizers exist for languages such as English, where work started in the 1950s and has by now reached a mature stage, several languages still need special attention, such as Punjabi. The Punjabi language is highly rich in morphological structure compared to English and other foreign languages. In this work, we provide a three-phase extractive summarization methodology using neural networks that produces a concise summary of a single Punjabi text document. The methodology incorporates a pre-processing phase that cleans the text, a processing phase that extracts statistical and linguistic features, and a classification phase. The classification neural network applies a sigmoid activation function and gradient-descent optimization for weighted error reduction to generate the resultant output summary. The proposed summarization system is applied to a monolingual Punjabi text corpus from the Indian Languages Corpora Initiative phase-II. The precision, recall, and F-measure achieved are 90.0%, 89.28%, and 89.65% respectively, which is reasonably good in comparison with the performance of other existing Indian language summarizers.
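A minimal sketch of the classification phase is given below: a one-layer sigmoid scorer trained by gradient descent maps per-sentence feature vectors to inclusion scores, and the top-ranked sentences form the extractive summary. The feature set, labels, and selection ratio are assumptions; the paper's feature extraction is not shown.

```python
# Sigmoid sentence scorer trained by gradient descent for extractive summarization (sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_scorer(X, y, lr=0.1, epochs=500):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                          # gradient of the cross-entropy loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def summarize(sentences, X, w, b, ratio=0.3):
    scores = sigmoid(X @ w + b)
    k = max(1, int(len(sentences) * ratio))
    keep = sorted(np.argsort(scores)[-k:])    # keep top-k, preserve original order
    return [sentences[i] for i in keep]

# Toy usage: 6 sentences, 4 statistical/linguistic features each, labels from a human summary
rng = np.random.default_rng(0)
X, y = rng.random((6, 4)), np.array([1, 0, 1, 0, 0, 1])
w, b = train_scorer(X, y)
print(summarize([f"sent-{i}" for i in range(6)], X, w, b))
```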
Keywords: Extractive method, Indian languages corpora initiative, natural language processing, neural networks, Punjabi language, text summarization.
Received May 31, 2020; accepted January 6, 2021
A New Two-step Ensemble Learning Model for Improving Stress Prediction of Automobile Drivers
May Al-Nashashibi1, Wa’el Hadi2, Nuha El-Khalili3, Ghassan Issa4, and Abed Alkarim AlBanna1
1Computer Science, University of Petra, Jordan
2Information Security, University of Petra, Jordan
3Software Engineering, University of Petra, Jordan
4School of IT, Skyline University, UAE
Abstract: Commuting amid a significant volume of traffic congestion has been acknowledged as one of the key factors causing stress. Significant levels of stress whilst driving have a profoundly negative effect on the actions and ability of a driver, which can result in risks, hazards, and accidents. As such, there is a recognized need to determine drivers' levels of stress and accordingly predict the key causes of high stress. The objective of this work is to provide an ensemble machine learning framework for determining the stress levels of drivers. The study also provides a fresh dataset gathered from 14 different drivers while driving in Amman, Jordan. Data was gathered via a wearable biomedical instrument attached to the driver on a continuous basis in order to collect physiological data. The gathered data was categorised into two groups: 'Yes', representing the presence of stress, and 'No', representing its absence. Importantly, to circumvent the negative impact of the minority class of driver instances on stress prediction, an oversampling technique was applied. A two-step ensemble classifier was developed by bringing together the outputs of random forest, decision tree, and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) classifiers, which were then input into a Multi-Layer Perceptron neural network. The experimental findings highlight that the suggested framework is more precise and more scalable than the individual classifiers in terms of accuracy, g-mean, and sensitivity.
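A stacking-style sketch of the two-step idea is shown below with scikit-learn: base learners feed a Multi-Layer Perceptron meta-learner. RIPPER has no scikit-learn implementation, so this sketch uses only random forest and decision tree as base classifiers, and synthetic imbalanced data stands in for the physiological recordings.

```python
# Two-step (stacking) ensemble sketch: base classifiers feed an MLP meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8, 0.2],
                           random_state=0)              # imbalanced stress / no-stress labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```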
Keywords: Ensemble learning, stress prediction, oversampling, data mining algorithms.
Received June 23, 2020; accepted January 6, 2021
Solving Point Coverage Problem in Wireless Sensor Networks Using Whale Optimization Algorithm
Mahnaz Toloueiashtian, Mehdi Golsorkhtabaramiri*, and Seyed Yaser Bozorgi Rad
Department of Computer Engineering, Babol Branch, Islamic Azad University, Babol, Iran
Abstract: Today, dynamic power management methods that reduce the energy use of sensor networks after their design and deployment are of paramount importance. In Wireless Sensor Networks (WSNs), coverage and detection quality are key aspects of service quality, alongside the reduction of power consumption. The aim of the coverage problem is to have every point in the targeted area monitored by at least one node, and it is divided into three categories: border, area, and point coverage. In point coverage, which is our interest here, the problem is to cover specific target points scattered over the environment, whose positions are known in advance. In this paper, a new metaheuristic algorithm based on the Whale Optimization Algorithm (WOA) is proposed. The proposed algorithm searches for the Best Solution (BS) using three operations: exploration, spiral attack, and siege (encircling) attack. Several scenarios, including medium, hard, and complex problems, are designed to evaluate the proposed technique, and it is compared with the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) in terms of time complexity, coverage quality, network lifetime, and energy consumption. The simulation results show that the proposed algorithm performs better than the compared ones in most scenarios.
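The sketch below applies the standard WOA update rules (encircling, spiral, exploration) to a toy point coverage instance: each whale encodes the 2-D positions of all sensors, and fitness is the fraction of target points within the sensing radius of at least one sensor. Area size, sensor count, radius, and the scalar-per-dimension control vectors are assumptions, not the paper's settings.

```python
# Whale Optimization Algorithm sketch for point coverage (toy parameters).
import numpy as np

rng = np.random.default_rng(0)
AREA, N_SENSORS, RADIUS = 100.0, 10, 15.0
targets = rng.uniform(0, AREA, (30, 2))            # target points to be covered

def coverage(whale):
    sensors = whale.reshape(N_SENSORS, 2)
    d = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= RADIUS)

def woa(pop_size=30, iters=200, b=1.0):
    dim = 2 * N_SENSORS
    pop = rng.uniform(0, AREA, (pop_size, dim))
    best = max(pop, key=coverage).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # linearly decreasing control parameter
        for i in range(pop_size):
            r, l = rng.random(dim), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * r
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):          # siege: encircle the best solution
                    pop[i] = best - A * np.abs(C * best - pop[i])
                else:                              # exploration around a random whale
                    rand = pop[rng.integers(pop_size)]
                    pop[i] = rand - A * np.abs(C * rand - pop[i])
            else:                                  # spiral (bubble-net) attack
                pop[i] = np.abs(best - pop[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], 0, AREA)
            if coverage(pop[i]) > coverage(best):
                best = pop[i].copy()
    return best, coverage(best)

best, cov = woa()
print(f"covered fraction of target points: {cov:.2f}")
```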
Keywords: Wireless sensor networks, point coverage, network lifetime, optimization, WOA.
Received July 10, 2020; accepted December 1, 2020
Human Activity Recognition Based on Transfer Learning with Spatio-Temporal Representations
Saeedeh Zebhi, SMT Almodarresi, and Vahid Abootalebi
Electrical Engineering Department, Yazd University, Iran
Abstract: A Gait History Image (GHI) is a spatial template that accumulates regions of motion into a single image in which moving pixels are brighter than others. A new descriptor named Time-Sliced Averaged Gradient Boundary Magnitude (TAGBM) is also designed to capture the temporal variations of motion. The spatial and temporal information of each video can be condensed using these templates. Based on this idea, a new method is proposed in this paper. Each video is split into N and M groups of consecutive frames, and the GHI and TAGBM are computed for each group, resulting in spatial and temporal templates. Transfer learning with fine-tuning is used to classify these templates. The proposed method achieves recognition accuracies of 96.50%, 92.30%, and 97.12% on the KTH, UCF Sport, and UCF-11 action datasets, respectively. It is also compared with state-of-the-art approaches, and the results show that the proposed method performs best.
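A sketch of the transfer-learning stage is shown below: a VGG-16 backbone pre-trained on ImageNet is fine-tuned to classify template images into action classes (here six, as in KTH). Template computation, the choice of trainable layers, head size, and learning rate are all assumptions.

```python
# VGG-16 fine-tuning sketch for classifying GHI/TAGBM templates (assumed hyper-parameters).
import tensorflow as tf

NUM_CLASSES = 6
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
for layer in base.layers[:-4]:
    layer.trainable = False              # fine-tune only the last convolutional block

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(template_images, labels, epochs=10, validation_split=0.2)
model.summary()
```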
Keywords: Deep learning, tuning, VGG-16, action recognition.
Received August 5, 2020; accepted January 6, 2021
Enhanced Long Short-Term Memory (ELSTM) Model for Sentiment Analysis
Dimple Tiwari1 and Bharti Nagpal2
1Ambedkar Institute of Advanced Communication Technologies and Research, Guru Gobind Singh Indraprastha University, India
2NSUT East Campus (Ambedkar Institute of Advanced Communication Technologies and Research), India
Abstract: Sentiment analysis processes extensive collections of reviews and predicts people's opinions towards a particular topic, which is helpful for decision-makers. Machine learning and deep learning are standard techniques that make the process of sentiment analysis simpler and more popular. In this research, deep learning is used to analyze people's sentiments. It performs automatic feature extraction, which provides better performance, a more vibrant appearance, and more reliable results than conventional feature-based techniques. Traditional approaches relied on complicated manual feature extraction and were not able to provide reliable results. Therefore, this study aims to improve the performance of the deep learning approach by combining automatic feature extraction with manual feature extraction techniques. The Enhanced Long Short-Term Memory (ELSTM) model is proposed, applying hyper-parameter tuning to the standard Long Short-Term Memory (LSTM) network to obtain better results. Based on the results, a novel sentiment analysis model and a novel algorithm are proposed to set a benchmark in the field of textual classification and to describe the procedure of the developed model, respectively. The results of the ELSTM model are presented as training and testing accuracy curves. Finally, a comparative study confirms the superior performance of the proposed ELSTM model.
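The sketch below builds a plain LSTM sentiment classifier in Keras; the hyper-parameters marked as tunable (embedding size, LSTM units, dropout, learning rate) are the kind adjusted to obtain an "enhanced" variant, but the specific values used in the paper are assumptions.

```python
# LSTM sentiment classifier sketch with tunable hyper-parameters (assumed values).
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20000, 200
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),                        # tunable: 128
    tf.keras.layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2),      # tunable: 64, 0.2
    tf.keras.layers.Dense(1, activation="sigmoid"),                    # positive / negative
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),                # tunable: 1e-3
              loss="binary_crossentropy", metrics=["accuracy"])
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
#                     epochs=10, batch_size=64)   # accuracy curves come from `history`
model.summary()
```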
Keywords: Deep learning, convolutional neural network, recurrent neural network, long short-term memory, term frequency-inverse document frequency, GloVe, natural language processing.
Received August 24, 2020; accepted April 7, 2021
Application of Intuitionistic Fuzzy Evaluation Method in Aircraft Cockpit Display Ergonomics
Hui Liu, Chengli Sun, and Jiliang Tu
School of Information Engineering, Nanchang Hangkong University, China
Abstract: The ergonomic level of cockpit display design can be improved by establishing an objective and effective method for evaluating the ergonomics of the cockpit display. Given the fuzziness inherent in ergonomic evaluation, a new intuitionistic fuzzy evaluation method is proposed in this work, based on the Intuitionistic Fuzzy Ordered Weighted Geometric Average (IFOWGA) operator and the possible degree function. Firstly, an intuitionistic fuzzy evaluation matrix that accounts for the hesitation degree in the experts' judgements is constructed as the basis of the evaluation. Secondly, the intuitionistic fuzzy evaluation values are obtained by aggregating the evaluation matrix with the IFOWGA operator, and these values are ranked with the possible degree ranking function to obtain the ergonomic evaluation level. Finally, an evaluation example based on the cockpit display of a certain aircraft is given to verify the effectiveness of the proposed approach. Six alternatives of the evaluation result are obtained by aggregation with the IFOWGA operator. Applying the possible degree function to rank the alternatives, the ergonomic evaluation grade of the aircraft cockpit display is at the second level, and the variation of the intuitionistic fuzzy values is already small when the number of experts exceeds 16. The results show that the ergonomic level of the cockpit display can be objectively and scientifically evaluated by the proposed quantitative method, which provides a theoretical basis and practical means for improving the ergonomic level of cockpit display design.
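For reference, the commonly cited form of the intuitionistic fuzzy ordered weighted geometric aggregation over intuitionistic fuzzy values α_j = (μ_j, ν_j) with weight vector w is sketched below, where σ orders the arguments; the paper's exact operator definition and its possible degree function may differ in detail.

```latex
\mathrm{IFOWGA}_{w}(\alpha_1,\dots,\alpha_n)
  = \left(\;\prod_{j=1}^{n} \mu_{\sigma(j)}^{\,w_j},\;
          1-\prod_{j=1}^{n}\bigl(1-\nu_{\sigma(j)}\bigr)^{w_j}\right)
```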
Keywords: Ergonomic evaluation, intuitionistic fuzzy, ordered weighted geometric average operator, possible degree function.
Received July 12, 2019; accepted September 27, 2020
Data Streams Oriented Outlier Detection Method: A Fast Minimal Infrequent Pattern Mining
ZhongYu Zhou and DeChang Pi
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China
Abstract: Outlier detection is a common method for analyzing data streams. Most existing outlier detection methods compute distances between points to solve specific outlier detection problems. However, these methods are computationally expensive and cannot process data streams quickly. Outlier detection based on pattern mining resolves these issues, but the existing methods are inefficient and cannot meet the requirement of mining data streams quickly. To improve efficiency, a new outlier detection method is proposed in this paper. First, a fast minimal infrequent pattern mining method is proposed to mine minimal infrequent patterns from data streams. Second, an efficient outlier detection algorithm based on minimal infrequent patterns is proposed to detect outliers in data streams by mining these patterns. The proposed algorithm is validated on real telemetry data from a satellite in orbit. The experimental results show that the proposed method not only can be applied to satellite outlier detection but also is superior to the existing methods.
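The toy sketch below illustrates the underlying notion: an itemset is a minimal infrequent pattern if its support falls below the threshold while every proper subset remains frequent, and transactions containing such a pattern are flagged as outliers. The paper's fast mining algorithm, streaming setting, and telemetry encoding are not reproduced here.

```python
# Toy outlier detection via minimal infrequent patterns (brute-force, illustrative only).
from itertools import combinations

def support(pattern, transactions):
    return sum(pattern <= t for t in transactions) / len(transactions)

def minimal_infrequent_patterns(transactions, min_sup=0.3, max_len=3):
    items = sorted(set().union(*transactions))
    mips = []
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            p = frozenset(combo)
            if support(p, transactions) < min_sup and \
               all(support(p - {i}, transactions) >= min_sup for i in p):
                mips.append(p)          # infrequent, but all proper subsets are frequent
    return mips

def outliers(transactions, mips):
    return [i for i, t in enumerate(transactions) if any(p <= t for p in mips)]

data = [frozenset(t) for t in (["a", "b"], ["a", "b"], ["a", "b", "c"],
                               ["a", "b"], ["a", "d"], ["a", "b"])]
mips = minimal_infrequent_patterns(data)
print("minimal infrequent patterns:", mips)
print("outlier transaction indices:", outliers(data, mips))
```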
Keywords: Data streams, binary search, minimal infrequent pattern, outlier detection, pattern mining.
Received October 3, 2019; accepted September 17, 2020