Unified Inter-Letter Steganographic Algorithm, A Text-based Data
Hiding Method
Ahmad Esmaeilkhah1,
Changiz Ghobadi1, Javad Nourinia1, and Maryam Majidzadeh2
1Electrical Engineering Department, Urmia University, Iran
2Department
of Electrical and Computer Engineering, Technical and Vocational University,
Iran
Abstract: This paper presents a novel text-based steganographic algorithm with
enhanced functionality with respect to previously proposed methods. Through
careful selection of one of the standard Unicode space characters, and to resist
Visual and Reverse Extraction attacks, two additional modes of operation have
been added to the original Inter-Letter Steganographic (InLetSteg) algorithm and
merged into a single method, called the Unified Inter-Letter Steganographic
method (UILS). UILS embeds data into the host text using a variable step-size,
and the developed mathematical model can statistically estimate the approximate
length of host text required to embed given data. In addition, the general
mathematical model of UILS makes it customizable to real-world applications.
The statistical parameters used throughout this work are calculated for English
host texts, but are easily calculable for other languages with similar
alphabets and notational structures. Finally, the outputs of the software
implementation of UILS are experimentally examined with 60 participants and the
results are discussed.
Keywords: Steganography, UILS, InLetSteg, Reverse
extraction attack, Unicode space character, Inter-letter spacing.
Received June 6, 2018; accepted July 21, 2019
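The space-substitution idea behind InLetSteg/UILS can be illustrated with a deliberately simplified sketch: hiding one bit per word gap (rather than inter-letter, and with a fixed step size) by choosing between two visually similar Unicode space characters. The thin space U+2009 as the second symbol is an arbitrary choice for this sketch, not the character selected in the paper.

```python
NORMAL = "\u0020"   # ordinary space              -> encodes bit 0
THIN = "\u2009"     # thin space, visually close  -> encodes bit 1

def embed(host: str, bits: str) -> str:
    # replace successive word gaps with the space character encoding each bit
    out, i = [], 0
    for ch in host:
        if ch == NORMAL and i < len(bits):
            out.append(THIN if bits[i] == "1" else NORMAL)
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("host text too short for payload")
    return "".join(out)

def extract(stego: str, n_bits: int) -> str:
    # read the first n_bits word gaps back as bits
    bits = []
    for ch in stego:
        if ch in (NORMAL, THIN):
            bits.append("1" if ch == THIN else "0")
            if len(bits) == n_bits:
                break
    return "".join(bits)
```

Normalising the stego text's spaces back to U+0020 recovers the original host, which is why such schemes are fragile to re-encoding but invisible to casual reading.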
A Self-Healing Model for QoS-aware Web Service Composition
Doaa Elsayed1, Eman Nasr3, Alaa
El Ghazali4, and Mervat Gheith2
1Department of Information Systems and Technology, Cairo
University, Egypt
2Department of Computer Science, Cairo University,
Egypt
3Independent Researcher, Egypt
4Department of
Computer and Information Systems, Sadat Academy for Management Sciences, Egypt
Abstract: In the Web Service
Composition (WSC) domain, Web Services (WSs) execute in a highly dynamic
environment; as a result, the Quality of Service (QoS) of a WS is constantly
evolving, which requires tracking the global optimization over time to
satisfy the users’ requirements. In order to make a WSC adapt to such QoS
changes of WSs, we propose a self-healing model for WSC. Self-healing is the
automatic detection and repair, by the composite WS itself, of failures due
to QoS changes, without interrupting the WSC or requiring human intervention. To the
best of our knowledge, almost all the existing self-healing models in this
domain substitute the faulty WS with an equivalent one without paying attention
to the WS selection processes to achieve global optimization. They focus only
on the WS substitution strategy. In this paper, we propose a self-healing model
where we use our hybrid approach to find the optimal WSC by using Parallel
Genetic Algorithm based on Q-learning, which we integrate with K-means
clustering (PGAQK). The components of this model are organized according to
IBM’s Monitor, Analyse, Plan, Execute, and Knowledge (MAPE-K) reference model.
The PGAQK approach is incorporated as a module in the Execute component. A WS
substitution strategy has also been applied in this model, which substitutes the
faulty WS with another equivalent one from a list of candidate WSs using the
K-means clustering technique. K-means clustering is used to prune the WSs in
the search space to find the best WSs for the environment changes. We implemented
this model on the .NET Framework using the C# programming language. A series of
comparative experiments showed that the proposed model outperforms an improved GA
in achieving global optimization. Our proposed model can also dynamically
substitute faulty WSs with other equivalent ones in a time-efficient
manner.
Keywords: Web service composition, self-healing, quality of service, user requirements, K-means clustering.
Received June 29, 2018; accepted
January 28, 2020
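The K-means pruning-plus-substitution step can be sketched as follows. This is an illustrative toy, not the paper's PGAQK implementation: the service names and QoS vectors (response time in ms, availability) are invented, and a faulty service is replaced by the fastest member of its own QoS cluster.

```python
def kmeans(points, centroids, iters=20):
    # plain Lloyd iterations from the given initial centroids
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

services = {            # (response time ms, availability) -- invented QoS values
    "A": (120, 0.99), "B": (130, 0.98), "C": (400, 0.90),
    "D": (450, 0.88), "E": (110, 0.97),
}
cents, clusters = kmeans(list(services.values()), [(100.0, 1.0), (500.0, 0.8)])

# the faulty service "B": find its cluster, then substitute the fastest peer
faulty = services["B"]
idx = min(range(len(cents)),
          key=lambda c: sum((a - b) ** 2 for a, b in zip(faulty, cents[c])))
candidates = [n for n, q in services.items() if q in clusters[idx] and n != "B"]
best = min(candidates, key=lambda n: services[n][0])
```

Clustering restricts the substitution search to services with a similar QoS profile, which is the pruning role K-means plays in the model above.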
Synthesizing
Conjunctive and Disjunctive Linear Invariants by K-means++ and SVM
Shengbing Ren and
Xiang Zhang
School of Computer
Science and Engineering, Central South University, China
Abstract: The problem of synthesizing adequate inductive
invariants lies at the heart of automated software verification. The
state-of-the-art machine learning algorithms for synthesizing invariants have
gradually shown their excellent performance. However, synthesizing disjunctive
invariants is a difficult task. In this paper, we propose a method, k++
Support Vector Machine (k++SVM), which integrates k-means++ and SVM to synthesize
conjunctive and disjunctive invariants. First, given a program, we execute it
to collect program states. Next, k++SVM adopts
k-means++ to cluster the positive samples and then applies SVM to distinguish
each positive sample cluster from all negative samples to synthesize the
candidate invariants. Finally, a set of theories founded on Hoare logic is
adopted to check whether the candidate invariants are true invariants. If the
candidate invariants fail the check, we sample more states and repeat
our algorithm. The experimental results show that k++SVM is compatible with the
algorithms for Intersection Of Half-space (IOH) and more efficient than the
tool of Interproc. Furthermore, it is shown that our method can synthesize
conjunctive and disjunctive invariants automatically.
Keywords: Software verification,
conjunctive invariant, disjunctive invariant, k-means++, SVM.
Received September 5, 2018; accepted January 28,
2020
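The cluster-then-separate structure of the approach can be sketched on one-dimensional program states. In this toy version the "good"/"bad" states are invented, and the interval hull of each positive cluster stands in for the per-cluster SVM separator; the candidate invariant is the disjunction of those per-cluster conjunctions.

```python
import random

def kmeanspp_1d(xs, k, rng):
    # k-means++ seeding, then Lloyd iterations, on 1-D samples
    centers = [rng.choice(xs)]
    while len(centers) < k:
        d2 = [min((x - c) ** 2 for c in centers) for x in xs]
        r, acc = rng.random() * sum(d2), 0.0
        for x, w in zip(xs, d2):
            acc += w
            if acc >= r:
                centers.append(x)
                break
    for _ in range(25):
        groups = [[] for _ in centers]
        for x in xs:
            groups[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else centers[j] for j, g in enumerate(groups)]
    return groups

rng = random.Random(0)
pos = list(range(0, 11)) + list(range(50, 61))   # two disjoint "good" regions
neg = list(range(20, 41))                        # "bad" states in between
clusters = kmeanspp_1d([float(x) for x in pos], 2, rng)

# one conjunctive candidate (an interval, i.e., two half-spaces) per positive
# cluster; the overall candidate invariant is their disjunction
invariant = [(min(g), max(g)) for g in clusters]
holds = lambda x: any(lo <= x <= hi for lo, hi in invariant)
```

A checker in the style of the paper would then verify the disjunction against all negative samples and, on failure, sample more states and repeat.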
Recurrence Quantification Analysis of Glottal Signal as non Linear Tool for Pathological Voice Assessment and Classification
Mohamed Dahmani and Mhania Guerti
Laboratoire
Signal et Communications, Ecole Nationale Polytechnique, Algiers, Algeria
Abstract: Automatic detection and assessment of Vocal Folds (VFs)
pathologies using signal processing techniques is an extensively studied
challenge in the voice and speech research community.
This paper presents the application of Recurrence Quantification
Analysis (RQA) to the glottal signal waveform in order
to evaluate the dynamic behaviour of the VFs and to diagnose and classify
voice disorders. The proposed solution starts by extracting
the glottal signal waveform from the voice signal through
an inverse filtering algorithm. In the next step, the parameters of RQA are
determined via the Recurrence Plot (RP) structure of the glottal signal, where
the normal voice is considered as a reference. Finally, these
parameters are used as the input feature set of a hybrid Particle Swarm
Optimization-Support Vector Machine
(PSO-SVM) algorithm to discriminate between normal and pathological voices. For
the test validation, we adopted the Saarbrucken Voice Database (SVD), from
which we selected the long vowel /a:/ of 133 normal samples and 260
pathological samples uttered by four groups of subjects:
persons having suffered from vocal folds paralysis, persons having vocal folds
polyps, persons having
spasmodic dysphonia and normal voices. The
obtained results show the effectiveness of RQA applied to the glottal signal as
a feature extraction technique. Indeed, PSO-SVM as a classification method
proved to be an effective
tool for the assessment and diagnosis of pathological voices, with an accuracy of
97.41%.
Keywords: Glottal Signal, Recurrence Quantification
Analysis, Saarbrucken Voice Database, PSO-SVM, Pathological Voice Detection.
Received
December 2, 2018; accepted March 23, 2020
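The core RP construction behind RQA can be sketched directly. This minimal version works on a scalar signal without phase-space embedding, computes only the recurrence rate, and uses a synthetic periodic signal and an arbitrary threshold as stand-ins for a real glottal waveform and the paper's settings.

```python
import math

def recurrence_plot(x, eps):
    # R[i][j] = 1 when states i and j are within eps of each other
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(rp):
    # fraction of recurrent points in the plot (a basic RQA measure)
    n = len(rp)
    return sum(map(sum, rp)) / (n * n)

# periodic stand-in for a healthy-voice glottal cycle (period of 20 samples)
signal = [math.sin(2 * math.pi * t / 20) for t in range(100)]
rp = recurrence_plot(signal, 0.1)
rr = recurrence_rate(rp)
```

For a periodic signal the RP shows diagonal stripes every period; irregular (pathological) dynamics break these diagonals, which is what measures such as determinism quantify.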
Specification of Synchronous Network
Flooding in Temporal Logic
Ra’ed
Bani Abdelrahman1, Rafat Alshorman2, Walter Hussak3,
and Amitabh Trehan3
1Software Engineering
Department, Ajloun National University, Jordan
2Department
of Computer Science, Yarmouk University, Jordan
3Computer
Science Department, Loughborough University, United Kingdom
Abstract:
In distributed network algorithms, network
flooding is considered one of the simplest and most fundamental algorithms. This
research specifies, in Linear Temporal Logic, the basic synchronous memory-less
network flooding algorithm, in which nodes do not retain memory, for any fixed
size of network. The specification can be customized to any single
network topology or class of topologies. A specification of the termination
problem is formulated and used to compare different topologies for earlier
termination. This paper gives a worked example of one topology resulting in
earlier termination than another, for which we perform a formal verification
using the model checker NuSMV.
Keywords:
Network flooding, linear temporal logic, model
checking.
Received December 17, 2018; accepted June 11,
2019
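The termination comparison between topologies can be sketched with a small simulation of synchronous memory-less flooding (a stand-in for, not a rendering of, the LTL specification): in each round, every node that received the message forwards it to all neighbours it did not hear from in that round.

```python
def flood_rounds(adj, start, limit=50):
    # round 0: the initiator sends to all its neighbours
    current = {(start, v) for v in adj[start]}
    rounds = 0
    while current and rounds < limit:
        rounds += 1
        inbox = {}
        for u, v in current:                 # gather this round's receptions
            inbox.setdefault(v, set()).add(u)
        # each receiver forwards to every neighbour it did NOT hear from
        current = {(v, w) for v, senders in inbox.items()
                   for w in adj[v] if w not in senders}
    return rounds if not current else None   # None: no termination within limit

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # 4-cycle (bipartite)
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # 3-cycle (odd)
```

Here the 4-cycle terminates in 2 rounds while the smaller triangle needs 3, a concrete instance of one topology terminating earlier than another.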
An Investigative Analysis on Finding Patterns in Co-Author and Co-Institution Networks for LIDAR Research
Imran
Ashraf, Soojung Hur, and Yongwan Park
Department of Information and Communication
Engineering, Yeungnam University, South Korea
Abstract: Social Network Analysis (SNA) has proven effective at
capturing the complex relationships between the actors of a group. It has also
emerged as a new paradigm to investigate the structure of
ties and their role in relationships between the actors. This research aims to
investigate the patterns of relationships between authors and institutions
working in LIght Detection And Ranging (LIDAR) research area. LIDAR has been in
the limelight during recent years, especially in autonomous vehicles for map-making
and object detection tasks. Researchers need insight into the current
contributors and research areas to devise policies and set future targets for
this important technology. The current study performs SNA to identify potential
institutions and researchers that can help to achieve those goals. National and
international co-authorship is analysed separately. A total of 4274 papers from
Web of Science (WOS) database are collected from 1998 to September 2017. SNA
measures of degree, closeness, betweenness, and eigenvector centrality along
with descriptive analysis are employed to study the patterns. Analysis reveals
that the United States of America (USA) is the most central and significant
country in terms of international co-authorship. China, Germany, the United
Kingdom (UK) and Canada are ranked 2nd, 3rd, 4th and
5th in this list, respectively. For the co-institution network, the National
Aeronautics and Space Administration (NASA), the University of Idaho and the
California Institute of Technology, USA, occupy the 1st, 2nd, and 5th
positions, respectively, when the top 5 institutions are considered. The Consiglio
Nazionale delle Ricerche of Italy occupies the 3rd position, while the Chinese
Academy of Sciences, China, secures 4th place concerning betweenness
centrality. Descriptive analysis reveals that during the last decade, co-author
collaboration in scientific research has increased. Results show that
research articles with 6 or more authors have higher citations than those with
two to five authors. In addition, journals producing a higher number of papers
and their corresponding citations are also discussed.
Keywords: Social network analysis, co-institution,
co-authorship, LIDAR, degree, closeness, Eigenvector.
Received January 1, 2019; accepted April 8,
2020
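Two of the centrality measures used in the study can be sketched on a toy co-authorship graph (the graph below is invented for illustration, echoing the country ranking, and is not the study's data).

```python
from collections import deque

def degree(adj, v):
    # degree centrality: number of direct ties
    return len(adj[v])

def closeness(adj, v):
    # closeness = (n - 1) / sum of shortest-path distances from v (BFS)
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(adj) - 1) / sum(dist.values())

adj = {"USA": ["China", "Germany", "UK", "Canada"],
       "China": ["USA", "Germany"], "Germany": ["USA", "China"],
       "UK": ["USA"], "Canada": ["USA"]}
```

In this toy star-like graph the USA node is maximally central by both measures, mirroring the role the analysis attributes to it in the real network.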
Wrapper based Feature Selection using Integrative
Teaching Learning Based Optimization Algorithm
Mohan Allam and Nandhini
Malaiyappan
Department of
Computer Science, Pondicherry University, India
Abstract: The performance of
the machine learning models mainly relies on the key features available in the
training dataset. Feature selection is a significant job for pattern
recognition for finding an important group of features to build classification
models with a minimum number of features. Feature selection with optimization
algorithms will improve the prediction rate of the classification models. But,
tuning the controlling parameters of the optimization algorithms is a
challenging task. In this paper, we present a wrapper-based model called Feature
Selection with Integrative Teaching Learning Based Optimization (FS-ITLBO),
which uses multiple teachers to select the optimal set of features from feature
space. The goal of the proposed algorithm is to search the entire solution
space without getting stuck in local optima. Moreover, the proposed
method utilizes only a teacher count parameter, along with the population size
and the number of iterations. Various classification models have been used for
finding the fitness of instances in the population and to estimate the
effectiveness of the proposed model. The robustness of the proposed algorithm
has been assessed on Wisconsin Diagnostic Breast Cancer (WDBC) as well as
Parkinson’s Disease datasets and compared with different wrapper-based feature
selection techniques, including genetic algorithm and Binary Teaching Learning Based
Optimization (BTLBO). The outcomes confirm that the FS-ITLBO model produces the
best accuracy with the optimal subset of features.
Keywords: Feature Selection,
Integrative Teaching Learning based Optimization, Genetic Algorithm, Breast
Cancer.
Received May 15, 2019; accepted April 10, 2020
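The wrapper idea itself, that a learner's accuracy is the fitness of a feature subset, can be sketched without TLBO. In this toy (the dataset is invented and exhaustive search replaces the optimizer), each candidate mask is scored by leave-one-out 1-nearest-neighbour accuracy.

```python
from itertools import product

data = [  # (features f0..f2, label); f2 is pure noise by construction
    ((0.1, 1.0, 0.7), 0), ((0.2, 0.9, 0.1), 0),
    ((0.9, 0.1, 0.8), 1), ((0.8, 0.2, 0.2), 1),
]

def loo_accuracy(mask):
    # leave-one-out 1-NN accuracy using only the masked features
    correct = 0
    for i, (x, y) in enumerate(data):
        nearest = min((j for j in range(len(data)) if j != i),
                      key=lambda j: sum((x[k] - data[j][0][k]) ** 2
                                        for k in range(3) if mask[k]))
        correct += data[nearest][1] == y
    return correct / len(data)

masks = [m for m in product([0, 1], repeat=3) if any(m)]
best_mask = max(masks, key=loo_accuracy)
```

FS-ITLBO replaces the exhaustive loop with teacher/learner phases over a population of such masks, which is what makes the search tractable for real feature spaces.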
Design and Implementation of Crypt Analysis
of Cloud Data Intrusion Management System
Dinesh Elangovan and Ramesh Muthiya
Department of Electronics and
Communication Engineering, Anna University, India
Abstract: Cloud computing is the
practice of using a network of remote servers hosted
on the Internet to store and manage data, rather
than a local server or a personal computer. Storing data in the
cloud sometimes creates security issues, so security must be
provided for the stored cloud data. In order to provide secure cloud data
transactions, our proposed method initially verifies the authentication of the
user, followed by splitting the user's information using a pattern-matching
technique. The Blowfish algorithm is used to encrypt the split data.
After encryption, the optimal position of a data
center is selected by means of the hybrid grey wolf optimization and firefly
technique. Finally, the encrypted data are stored at the optimal location in the
cloud. The data are split column-wise and stored at optimal locations in the
cloud; this method is highly secure since the user cannot retrieve the
file without authentication verification.
Keywords: Cloud computing,
Pattern Matching Technique, Blowfish Algorithm, Hybrid Grey Wolf Optimization
and Firefly Technique.
Received July 11, 2019; accepted May 10, 2020
F0
Modeling for Isarn Speech Synthesis using Deep Neural
Networks and Syllable-level Feature Representation
Pongsathon Janyoi and
Pusadee Seresangtakul
Department of Computer Science, Khon Kaen University, Thailand
Abstract: The generation of the fundamental frequency (F0)
plays an important role in speech synthesis, which directly influences the
naturalness of synthetic speech. In conventional parametric speech
synthesis, F0 is predicted frame-by-frame. This method is
insufficient to represent F0 contours in larger units,
especially tone contours of syllables in tonal languages that deviate as a
result of long-term context dependency. This work proposes a syllable-level F0
model that represents F0 contours within syllables, using
syllable-level F0 parameters that comprise the sampling F0
points and dynamic features. A Deep Neural Network (DNN) was used to represent
the relationships between syllable-level contextual features and syllable-level
F0 parameters. The proposed model was examined using an Isarn
speech synthesis system with both large and small training sets. For all
training sets, the results of objective and subjective tests indicate that the
proposed approach outperforms the baseline systems based on hidden Markov
models and DNNs that predict F0 values at the frame level.
Keywords: Fundamental frequency, speech synthesis, deep
neural networks.
Received
July 14, 2019; accepted May 28, 2020
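The syllable-level parameterisation can be sketched as sampling the F0 contour at a few evenly spaced points and appending simple delta (slope) features. The contour values and the number of sampling points below are illustrative choices, not the paper's configuration.

```python
def sample_f0(contour, n_points):
    # linear interpolation at n_points evenly spaced positions over the syllable
    m = len(contour) - 1
    samples = []
    for k in range(n_points):
        pos = k * m / (n_points - 1)
        i = min(int(pos), m - 1)
        frac = pos - i
        samples.append(contour[i] * (1 - frac) + contour[i + 1] * frac)
    return samples

def syllable_features(contour, n_points=5):
    s = sample_f0(contour, n_points)
    deltas = [b - a for a, b in zip(s, s[1:])]   # dynamic (slope) features
    return s + deltas

contour = [120, 130, 145, 150, 140, 125]  # Hz, a rising-falling tone shape
feats = syllable_features(contour)
```

A DNN in the spirit of the paper would map syllable-level contextual features to such a fixed-length vector, so the whole tone contour is predicted at once rather than frame by frame.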
Enriching Domain Concepts with Qualitative
Attributes: A Text Mining based Approach
Niyati Kumari Behera and Guruvayur
Suryanarayanan Mahalakshmi
Department of Computer Science and Engineering, Anna
University, India
Abstract: Attributes, whether qualitative or non-qualitative, are
the formal description of any real-world entity and are crucial in modern
knowledge representation models like ontologies. Though there is ample evidence
in the literature of research on mining non-qualitative attributes (like the
part-of relation) from text as well as from the Web, limited research can be
found on qualitative attribute (i.e., size, color, taste, etc.) mining. In this
research article, an analytical framework is proposed to retrieve
qualitative attribute values from unstructured domain text. The research
objective covers two aspects of information retrieval: (1) acquiring quality
values from unstructured text, and (2) assigning attributes to them by comparing
the Google-derived meaning or context of attributes and quality values (adjectives).
The goal has been accomplished by using a framework which integrates Vector
Space Modelling (VSM) with a probabilistic Multinomial Naive Bayes (MNB)
classifier. Performance evaluation has been carried out on two data sets: (1)
the HeiPLAS development data set (106 exemplary adjective-noun phrases) and (2) a
text data set in the Medicinal Plant Domain (MPD). The system is found to perform
better with the probabilistic approach than the existing pattern-based
framework in the state of the art.
Keywords: Information retrieval, text mining,
qualitative attribute, adjectives, natural language processing.
Received
July 24, 2019; accepted May 4, 2020
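The MNB step, mapping a quality value (adjective) to an attribute class from its context, can be sketched with a minimal multinomial Naive Bayes over bags of words. The training snippets and attribute classes below are invented for illustration.

```python
import math
from collections import Counter

def train_mnb(docs):
    # docs: list of (class_label, list_of_tokens)
    classes = Counter(c for c, _ in docs)
    words = {c: Counter() for c in classes}
    vocab = set()
    for c, toks in docs:
        words[c].update(toks)
        vocab.update(toks)
    return classes, words, vocab

def classify(model, toks):
    classes, words, vocab = model
    total = sum(classes.values())
    best, best_lp = None, -math.inf
    for c in classes:
        lp = math.log(classes[c] / total)          # log prior
        denom = sum(words[c].values()) + len(vocab)
        for t in toks:
            lp += math.log((words[c][t] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [("color", ["red", "bright", "dark"]),
        ("color", ["green", "pale"]),
        ("size",  ["large", "tiny", "huge"]),
        ("size",  ["small", "large"])]
model = train_mnb(docs)
```

In the full framework, the token contexts would come from VSM representations of the adjective's derived meaning rather than hand-written lists.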
Polynomial Based Fuzzy Vault Technique for
Template Security in Fingerprint Biometrics
Reza
Mehmood and Arvind Selwal
Department of Computer Science and Information Technology, Central
University of Jammu, India
Abstract: In recent years,
security breaches and fraudulent transactions have been increasing day by day, so
there is a need for highly secure authentication technologies. The
security of an authentication system can be strengthened by using a biometric
system rather than traditional methods of authentication, such as Identity Cards
(IDs) and passwords, which can be stolen easily. A biometric system works on
biometric traits, and the fingerprint has the largest market share in providing
biometric authentication, as it is reliable, consistent and easy to capture.
Although biometric systems are used to secure many applications, they are
susceptible to different types of attacks. Among all the modules
of the biometric system which needs security, biometric template protection has
received great consideration in the past years from the research community due
to sensitivity of the biometric data stored in the form of template. A number
of methods have been devised for providing template protection. The fuzzy vault is
one of the cryptosystem-based methods of template security. The aim of the fuzzy
vault technique is to protect critical data with the biometric template
in such a way that only a certified user can access the secret by providing a
valid biometric. In this paper, a modified version of the fuzzy vault is presented to
increase the level of security to the template and the secret key. The
polynomial whose coefficients represent the key is transformed using an
integral operator to hide the key where the key can no longer be derived if the
polynomial is known to the attacker. The proposed fuzzy vault scheme also
prevents the system from stolen key inversion attack. The results are achieved
in terms of False Accept Rate (FAR), False
Reject Rate (FRR) and Genuine Acceptance Rate (GAR),
by varying the degree of the polynomial and the number of biometric samples. It was
found that for 40 users the GAR was 92%, 90% and 85% for polynomial degrees
3, 4 and 5, respectively. It was observed that increasing the
degree of the polynomial decreased the FAR, thus increasing the security.
Keywords: Biometrics,
Fingerprint, Template Security, Crypto-System, Fuzzy vault.
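The classic (unmodified) fuzzy vault that the paper hardens can be sketched over a tiny prime field: the secret key is the coefficient vector of a polynomial, genuine points are its evaluations at the user's biometric feature values, and chaff points off the curve hide them. All numbers below are invented toy values; the paper's integral-transform protection of the key is deliberately not reproduced.

```python
P = 97                                    # tiny prime field, illustration only

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):            # Horner's rule mod P
        acc = (acc * x + c) % P
    return acc

def lock(secret_coeffs, features, chaff):
    vault = [(x, poly_eval(secret_coeffs, x)) for x in features]
    return sorted(vault + chaff)          # genuine and chaff points mixed

def unlock(vault, query_features, degree):
    # keep vault points whose abscissae match the query biometric features
    pts = [(x, y) for x, y in vault if x in query_features][: degree + 1]

    def interp_at(x0):                    # Lagrange interpolation mod P
        total = 0
        for i, (xi, yi) in enumerate(pts):
            num = den = 1
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    num = num * (x0 - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return interp_at

secret = [42, 7, 3]                       # key as polynomial coefficients
vault = lock(secret, [10, 20, 30], chaff=[(15, 5), (25, 88)])
recover = unlock(vault, {10, 20, 30}, degree=2)
```

A genuine query supplies enough on-curve points for interpolation to succeed, while an impostor picking chaff points reconstructs the wrong polynomial, which is the vault's security argument.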
A Deep Learning Approach for the Romanized
Tunisian Dialect Identification
1Université de Tunis, ISGT, Tunisia
2Université de
Tunis, ENSIT, Tunisia
Abstract: Language identification is an important task in natural language processing that consists in determining the language of a given text. It has increasingly piqued the interest of researchers over the past few years, especially for code-switched informal textual content. In this paper, we focus on the identification of the Romanized user-generated Tunisian dialect on the social web. We segment and annotate a corpus extracted from social media and propose a deep learning approach for the identification task. We use a Bidirectional Long Short-Term Memory neural network with Conditional Random Fields decoding (BLSTM-CRF). For word embeddings, we combine a word-character BLSTM vector representation with FastText embeddings, which take character n-gram features into consideration. The overall accuracy obtained is 98.65%.
Keywords: Tunisian dialect, language identification, deep
learning, BLSTM, CRF and natural language processing.
Received August 25, 2019; accepted April 28,
2020
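The FastText-style sub-word features mentioned above can be sketched directly: a word is represented by its character n-grams with boundary markers, so spelling variants of Romanized dialect words share features. The n-gram range 3-4 is chosen here for brevity (FastText's default range is wider), and the token is an invented Arabizi-style example.

```python
def char_ngrams(word, n_min=3, n_max=4):
    w = f"<{word}>"                      # boundary markers, FastText-style
    grams = []
    for n in range(n_min, n_max + 1):
        grams += [w[i:i + n] for i in range(len(w) - n + 1)]
    return grams

grams = char_ngrams("3lech")             # an invented Romanized token
```

Because the digit "3" (standing for the Arabic letter ayn) appears inside shared n-grams, words written with such Arabizi conventions end up close in the embedding space even when unseen in training.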
Computer Vision-based Early Fire Detection Using Enhanced Chromatic Segmentation and
Optical Flow Analysis Technique
Arnisha
Khondaker1, Arman Khandaker1, and Jia Uddin2
1Department of Computer Science and Engineering,
BRAC University, Bangladesh
2Department of Technology Studies, Woosong University, South Korea
Abstract: Recent advances in video processing technologies have
led to a wave of research on computer vision-based fire detection systems. This
paper presents a multi-level framework for fire detection that analyses
patterns in chromatic information, shape transmutation, and optical flow
estimation of fire. First, the decision function of fire pixels based on
chromatic information uses majority voting among state-of-the-art fire color
detection rules to extract the regions of interest. The extracted pixels are
then verified for authenticity by examining the dynamics of shape. Finally, a
measure of turbulence is assessed by an enhanced optical flow analysis
algorithm to confirm the presence of fire. To evaluate the performance of the
proposed model, we utilize videos from the Mivia and Zenodo datasets, which
have a diverse set of scenarios including indoor, outdoor, and forest fires,
along with videos containing no fire. The proposed model exhibits an average
accuracy of 97.2% for our tested dataset. In addition, the experimental results
demonstrate that the proposed model significantly reduces the rate of false
alarms compared to the other existing models.
Keywords: Fire detection, color segmentation, shape
analysis, optical flow analysis, Lucas-Kanade tracker, neural network.
Received
September 26, 2019; accepted March 17, 2020
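The majority-voting chromatic stage can be sketched at the pixel level. The three RGB rules and thresholds below are illustrative simplifications in the style of classic fire-colour rules, not the paper's exact rule set.

```python
def is_fire_pixel(r, g, b, r_thresh=190, s_thresh=60):
    # vote over three simple fire-colour rules (illustrative thresholds)
    rules = [
        r > g > b,                 # fire is red-dominant
        r > r_thresh,              # strong red channel
        (r - b) > s_thresh,        # red clearly above blue (saturation proxy)
    ]
    return sum(rules) >= 2         # majority vote
```

Pixels that pass the vote form the regions of interest that the later shape-dynamics and optical-flow stages then verify.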
A Sentiment Analysis System for the Hindi
Language by Integrating Gated Recurrent
Unit with Genetic Algorithm
Kush
Shrivastava and Shishir Kumar
Department of Computer Science Engineering, Jaypee
University of Engineering and Technology, India
Abstract: The growing availability and popularity
of opinion rich resources such as blogs, shopping websites, review portals, and
social media platforms have attracted several researchers to perform the
sentiment analysis task. In addition to English, Chinese, Spanish, etc., the
availability of Indian languages such as Hindi, Telugu and Tamil over the
web has also increased at a rapid rate. This research work recognizes the
growing popularity of the Hindi language in the web domain and considers it
for the task of sentiment analysis. The research work analyses the hidden
sentiments from the movie reviews collected from the review section of Hindi
language e-newspapers. The reviews are multilingual, which makes sentiment
analysis a challenging task. To overcome the challenges, this research work
proposes a deep learning based approach where a Gated Recurrent Unit network is
combined with the Hindi word embedding model. The strategy enables the network
to efficiently capture the semantic and syntactic relation between Hindi words
and accurately classify them into the sentiment classes. Gated Recurrent Unit
network's performance is profoundly dependent upon the selection of its
hyper-parameters; therefore, this research work also utilizes a Genetic
Algorithm to automatically build a gated recurrent network architecture
enabling it to select the best optimal hyper-parameters. It has been observed
that the proposed Genetic Algorithm-Gated Recurrent Unit (GA-GRU) model is
effective and achieves breakthrough performance results on the Hindi movie
review dataset as compared to other traditional resource-based and machine
learning approaches.
Keywords: Sentiment analysis, Hindi language,
multilingual, deep learning, gated recurrent unit, genetic algorithm.
Received
September 28, 2019; accepted May 9, 2020
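The GA-over-hyper-parameters idea can be sketched with a deliberately toy setup: each genome is a (hidden units, dropout) pair, and the fitness function is an invented stand-in for validation accuracy (peaking at 64 units, dropout 0.3) rather than an actual GRU training run.

```python
import random

rng = random.Random(42)
UNITS = [16, 32, 64, 128, 256]            # hypothetical GRU hidden sizes
DROPS = [0.1, 0.2, 0.3, 0.4, 0.5]         # hypothetical dropout rates

def fitness(g):
    # stand-in for validation accuracy; maximal (0) at (64, 0.3)
    u, d = g
    return -abs(u - 64) / 64 - abs(d - 0.3)

def evolve(pop, gens=30):
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]     # elitist selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])           # crossover: one gene from each
            if rng.random() < 0.2:         # mutation: re-draw both genes
                child = (rng.choice(UNITS), rng.choice(DROPS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

pop = [(rng.choice(UNITS), rng.choice(DROPS)) for _ in range(10)]
best = evolve(pop)
```

In the real GA-GRU pipeline each fitness evaluation trains and validates a GRU network, which is what makes the automatic search worthwhile.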
Traceable Signatures using Lattices
Thakkalapally Preethi and Bharat Amberker
Department of Computer Science and Engineering, National
Institute of Technology Warangal, India
Abstract: Traceable Signatures is an extension of group
signatures that allow tracing of all signatures generated by a particular group
member without violating the privacy of remaining members. It also allows
members to claim the ownership of previously signed messages. To date, all
the existing traceable signatures are based on number-theoretic assumptions,
which are insecure in the presence of quantum computers. This work presents the
first traceable signature scheme over lattices, which remains secure even in
the presence of quantum computers. Our scheme is proved to be secure in the
random oracle model based on the hardness of the Short Integer Solution and
Learning with Errors problems.
Keywords: Traceable Signatures, Lattices, Short
Integer Solution, Learning with Errors.
Received October 7, 2019; accepted May 5, 2020
A
Dynamic Particle Swarm Optimisation and Fuzzy Clustering Means Algorithm for
Segmentation of Multimodal Brain Magnetic Resonance Image Data
Kies Karima and
Benamrane Nacera
Department of Computer Science, Université des Sciences et de la Technologie d’Oran
“Mohamed Boudiaf”, Algeria
Abstract: The Fuzzy Clustering
Means (FCM) algorithm is a widely used clustering method in image segmentation,
but it often falls into local minima and is quite sensitive to initial values,
which are random in most cases. In this work, we consider an extension of FCM
to multimodal data, improved by a Dynamic Particle Swarm Optimization (DPSO) algorithm
which by construction incorporates local and global optimization capabilities. Image
segmentation of three-variate MRI brain data is achieved using FCM-3 and
DPSOFCM-3 where the three modalities T1-weighted, T2-weighted and Proton
Density (PD), are treated at once (the suffix -3 is added to distinguish our
three-variate method from mono-variate methods usually using T1-weighted modality).
FCM-3 and DPSOFCM-3 were evaluated on several Magnetic
Resonance (MR) brain images corrupted by different levels of noise and
intensity non-uniformity. By means of various performance criteria, our results
show that the proposed method substantially improves segmentation results. For
the noisiest and most non-uniform images, the performance improved by as much
as 9% with respect to other methods.
Keywords: Fuzzy c-mean, particle swarm optimization,
brain Magnetic Resonance Images segmentation.
Received December
24, 2019; accepted March 10, 2020
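The FCM iteration that DPSO steers can be sketched on scalar intensities (fuzzifier m = 2; the data and initial centers below are invented). Each pass updates the membership matrix and then the cluster centers.

```python
def fcm_memberships(xs, centers, m=2.0):
    # u[i][j]: membership of sample i in cluster j (each row sums to 1)
    u = []
    for x in xs:
        row = []
        for ci in centers:
            di = abs(x - ci) or 1e-12     # guard exact coincidence
            row.append(1.0 / sum((di / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                                 for cj in centers))
        u.append(row)
    return u

def update_centers(xs, u, m=2.0):
    # centers are membership-weighted means of the samples
    k = len(u[0])
    return [sum((u[i][j] ** m) * xs[i] for i in range(len(xs))) /
            sum(u[i][j] ** m for i in range(len(xs))) for j in range(k)]

xs = [1.0, 1.2, 0.9, 8.0, 8.2, 7.9]      # two intensity populations
centers = [0.0, 10.0]                     # DPSO would supply/steer these
for _ in range(10):
    u = fcm_memberships(xs, centers)
    centers = update_centers(xs, u)
```

The sensitivity the abstract mentions lies in that initial `centers` choice; replacing it with particles evolved by DPSO is what gives the combined method its global search capability.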