A Novel Secure Video Steganography Technique using Temporal Lifted Wavelet Transform and Human Vision Properties
Ahmed Thahab
Department of Electrical and
Electronic Engineering, University of Babylon, Iraq
Abstract: Steganography refers to the process of concealing secret data inside a cover medium, which can be audio, image, or video. A new video steganography scheme in the wavelet domain is presented in this paper. Since the convolutional discrete wavelet transform produces floating-point numbers, a lifted (integer) wavelet transform is used to conceal data. The method embeds secret data in the detail coefficients of each temporal array of the cover video at spatial localizations, using a unique embedding via the YCbCr color space and complementing the secret data before embedding to minimize error in the stego video. Three secret keys are used in the scheme. Performance metrics such as Peak Signal to Noise Ratio (PSNR) and Normalized Cross Correlation (NCC) express good imperceptibility for the stego video: the PSNR lies in the range of 34-40 dB, and the scheme offers high embedding capacity.
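To make the lifting step concrete, here is a minimal Python sketch (not the paper's exact scheme): a one-level integer Haar lifting transform applied along the temporal axis, with one secret bit hidden by LSB substitution in a detail coefficient. The function names, the Haar filter choice, and the single-bit LSB rule are illustrative assumptions.

import numpy as np

def haar_lift_forward(x):
    """One-level integer Haar lifting: x is a 1-D integer array of even length."""
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - even                 # detail (predict step)
    s = even + (d // 2)            # approximation (update step)
    return s, d

def haar_lift_inverse(s, d):
    even = s - (d // 2)
    odd = d + even
    x = np.empty(2 * len(s), dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

# Hide one bit in the LSB of a temporal detail coefficient (illustrative rule).
pixels_over_time = np.array([52, 55, 61, 66, 70, 61, 64, 73])
s, d = haar_lift_forward(pixels_over_time)
secret_bit = 1
d[0] = (d[0] & ~1) | secret_bit    # LSB substitution in the detail band
stego = haar_lift_inverse(s, d)
assert (haar_lift_forward(stego)[1][0] & 1) == secret_bit  # exact recovery

Because the lifting steps are integer-to-integer, the inverse reconstructs the stego pixels exactly, which is why lifting is preferred here over the floating-point convolutional DWT.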
Keywords: Data hiding, integer wavelet transform, color space, peak signal to noise ratio, first complement.
Received January 8, 2017; accepted April 30, 2018
https://doi.org/10.34028/iajit/17/2/1
A Daubechies DWT Based Image Steganography Using Smoothing Operation
Vijay Sharma, Devesh Srivastava, and Pratistha Mathur
Computer Science and Engineering Department, Manipal University Jaipur,
India
Abstract: Steganography is a technique that conceals top-secret information within cover media (e.g., digital images, sound files). This paper presents a secure Discrete Wavelet Transformation (DWT) based technique with higher embedding capacity. Before embedding, the correlation between the cover and the secret image is increased by multiplying the secret image by a scaling factor (i.e., 1/k). In the embedding process, the Daubechies DWT of both the Arnold-transformed secret image and the cover image is taken, followed by an alpha blending operation. The Arnold transform is a scrambling process that increases the confidentiality of the secret image, and alpha blending is a mixing operation of two images in which the alpha value indicates how much of the secret image is embedded into the cover image. The Daubechies Inverse Discrete Wavelet Transformation (IDWT) of the resulting image is performed to obtain the stego image. A smoothing operation inspired by the Genetic Algorithm (GA) is used to improve the quality of the stego image by minimizing the mean square error, and a morphological operation is used to extract the image component from the extracted secret image. Simulation results of the proposed steganography technique are also presented, and the proposed method is evaluated on different measures of image visual quality.
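The embedding pipeline (scaling, Arnold scrambling, DWT, alpha blending, IDWT) can be sketched as follows. This assumes the PyWavelets library, square grayscale images, and illustrative values for alpha and k; it is not the authors' exact implementation.

import numpy as np
import pywt  # PyWavelets

def arnold(img, iterations=1):
    """Arnold cat map scrambling for an N x N image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def embed(cover, secret, alpha=0.05, k=10.0):
    secret = arnold(secret / k, iterations=3)       # scale by 1/k, then scramble
    cA, (cH, cV, cD) = pywt.dwt2(cover, 'db2')      # Daubechies DWT of cover
    sA, (sH, sV, sD) = pywt.dwt2(secret, 'db2')     # ... and of the secret
    blended = (cA + alpha * sA,                     # alpha blending of subbands
               (cH + alpha * sH, cV + alpha * sV, cD + alpha * sD))
    return pywt.idwt2(blended, 'db2')               # IDWT -> stego image

cover = np.random.rand(128, 128)
secret = np.random.rand(128, 128)
stego = embed(cover, secret)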
Keywords: Steganography, Daubechies DWT, Arnold transform,
smoothing operation, genetic algorithm, morphological operation.
Received
March 18, 2017; accepted January 28, 2018
https://doi.org/10.34028/iajit/17/2/2
Toward Building Video Multicast Tree with Congestion Avoidance Capability in Software-Defined Networks
Huifen Huang1, Zhihong Wu2, Jin Ge3, and Lu Wang3
1School of Information Science and Electrical Engineering, ShanDong
JiaoTong University, China
2Qilu Hospital, Shandong
University, China
3Shandong Provincial Key Laboratory
of Computer Networks, SCSC, China
Abstract: Network congestion is an obstacle to a Quality of Service (QoS) guarantee for online video applications, because it leads to a high packet loss rate and long transmission delays. In a Software-Defined Network (SDN), the controller can conveniently obtain the network topology and link bandwidth usage. Based on these advantages, an SDN-based video multicast routing solution, called Congestion Avoidance Video Multicast (CAVM), is proposed in this paper. CAVM obtains the overall network topology, monitors available bandwidth resources, and measures link delays based on OpenFlow, a popular SDN southbound interface. We introduce a novel multicast routing problem, named the Delay-Constrained and Minimum Congestion-Cost Multicast Routing (DCMCCMR) problem, which seeks the multicast tree with the lowest congestion cost subject to a source-destination delay constraint in the SDN environment. The DCMCCMR problem is NP-hard; CAVM therefore uses a polynomial-time heuristic algorithm to solve it. Our experimental results confirm that the proposed algorithm can build multicast trees with good congestion avoidance capability.
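As a rough illustration of delay-constrained multicast routing (not the CAVM algorithm itself), the following sketch greedily routes each destination along the minimum congestion-cost path and falls back to the minimum-delay path when the delay bound is violated. The networkx library and the edge attribute names are assumptions.

import networkx as nx

def build_multicast_tree(g, src, dests, delay_bound):
    """Greedy sketch: per-destination min-congestion-cost path,
    falling back to the min-delay path when the bound is violated."""
    tree_edges = set()
    for d in dests:
        path = nx.shortest_path(g, src, d, weight='congestion_cost')
        delay = sum(g[u][v]['delay'] for u, v in zip(path, path[1:]))
        if delay > delay_bound:                 # constraint violated
            path = nx.shortest_path(g, src, d, weight='delay')
        tree_edges.update(zip(path, path[1:]))
    return tree_edges

g = nx.Graph()
g.add_edge('s', 'a', congestion_cost=1, delay=10)
g.add_edge('a', 't1', congestion_cost=1, delay=10)
g.add_edge('s', 't1', congestion_cost=5, delay=5)
print(build_multicast_tree(g, 's', ['t1'], delay_bound=15))  # falls back to s-t1

Note that the union of per-destination paths is not guaranteed to be a proper tree; a real heuristic for an NP-hard problem like DCMCCMR would merge and prune paths more carefully.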
Keywords: Network congestion, multicast, Software-Defined Network, congestion avoidance.
Received
May 21, 2017; accepted May 7, 2018
https://doi.org/10.34028/iajit/17/2/3
Gammachirp Filter Banks Applied in Robust Speaker Recognition Based on GMM-UBM Classifier
Lei Deng and Yong Gao
College of Electronics and Information Engineering, Sichuan University,
China
Abstract: In this paper, the authors propose an auditory feature extraction algorithm to improve the performance of speaker recognition systems in noisy environments. In this algorithm, a Gammachirp filter bank is adapted to simulate the auditory model of the human cochlea. In addition, the following three techniques are applied: cube-root compression, the Relative Spectral Filtering Technique (RASTA), and Cepstral Mean and Variance Normalization (CMVN). Subsequently, simulation experiments were conducted based on the Gaussian Mixture Model-Universal Background Model (GMM-UBM). The experimental results imply that speaker recognition systems with the new auditory feature have better robustness and recognition performance than Mel-Frequency Cepstral Coefficients (MFCC), Relative Spectral-Perceptual Linear Predictive (RASTA-PLP), Cochlear Filter Cepstral Coefficients (CFCC), and Gammatone Frequency Cepstral Coefficients (GFCC).
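Of the listed techniques, CMVN is the simplest to illustrate. A minimal sketch follows, assuming the features arrive as a frames-by-coefficients matrix; the Gammachirp filtering and RASTA stages are not reproduced.

import numpy as np

def cmvn(features, eps=1e-10):
    """Cepstral Mean and Variance Normalization over the time axis.
    features: (num_frames, num_coeffs) matrix of cepstral coefficients."""
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / (std + eps)

feats = np.random.randn(200, 13) * 3.0 + 5.0   # toy cepstra with channel offset
norm = cmvn(feats)
print(norm.mean(axis=0).round(6), norm.std(axis=0).round(3))  # ~0 and ~1

CMVN removes the per-utterance channel offset and scale, which is exactly what makes cepstral features more robust to stationary noise and channel mismatch.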
Keywords: Feature extraction, gammachirp
filter bank, RASTA, CMVN, GMM-UBM.
Received May 9, 2017; accepted June 19, 2019
https://doi.org/10.34028/iajit/17/2/4
A New Application for Gabor Filters in Face-Based Gender Classification
Ebrahim Al-Wajih and Moataz Ahmed
Information and Computer Science Department, King
Fahd University of Petroleum and Minerals, KSA
Abstract: The human face is one of the most important biometrics, as it contains information such as gender, race, and age. Identifying gender from human face images is a challenging problem that has been extensively studied due to its various relevant applications. Several approaches have addressed this problem by specifying suitable features. In this study, we present an extension of a feature extraction technique based on statistical aggregation and Gabor filters. We extract statistical features from a face image after applying Gabor filters and then use seven classifiers to investigate the performance of the selected features. Experiments show that the accuracy achieved using the proposed features is comparable to accuracies reported in recent studies, and that the k-Nearest Neighbors algorithm (k-NN), the K-Star classifier (K*), and Rotation Forest offer the best accuracies.
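A minimal sketch of statistical aggregation over Gabor responses follows, assuming scikit-image's gabor filter and an illustrative set of frequencies, orientations, and statistics (not necessarily those used in the study).

import numpy as np
from skimage.filters import gabor

def gabor_stat_features(image, frequencies=(0.1, 0.2, 0.3),
                        thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Statistical aggregation (mean, std, energy) of Gabor filter responses."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)          # response magnitude
            feats += [mag.mean(), mag.std(), (mag ** 2).sum()]
    return np.asarray(feats)

face = np.random.rand(64, 64)            # stand-in for a face crop
print(gabor_stat_features(face).shape)   # 3 freqs * 4 thetas * 3 stats = (36,)

The resulting fixed-length vector can be fed to any of the seven classifiers mentioned above.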
Keywords: Gabor
filters, gender recognition, statistical features, PCA.
Received September 25, 2017; accepted May 3,
2018
https://doi.org/10.34028/iajit/17/2/5
Enhancement of the Heuristic Optimization Based on Extended Space Forests using Classifier Ensembles
Zeynep Kilimci1,3 and Sevinç Omurca2
1Department of Computer Engineering, Dogus
University, Turkey
2Department of Computer Engineering, Kocaeli
University, Turkey
3Department of Information
Systems Engineering, Kocaeli University, Turkey
Abstract: Extended space forests are well known to provide improvements on classification problems. They offer a richer feature space and perform better than forests built on the original feature space. Most contemporary studies employ the original features, as well as various combinations of them, as input vectors for the extended space forest approach. In this study, we seek to boost the performance of classifier ensembles by integrating them with heuristic optimization-based features. The contributions of this paper are fivefold. First, a richer feature space is developed by using random combinations of input vectors together with features selected by the Ant Colony Optimization (ACO) method that have high importance and have not been combined before. Second, a widely used classification algorithm is employed as the baseline classifier. Third, three ensemble strategies, namely bagging, random subspace, and random forest, are used to ensure diversity. Fourth, a wide range of comparative experiments is conducted on widely used biomedicine datasets from the University of California Irvine (UCI) machine learning repository. Finally, the extended space forest approach with the proposed technique yields remarkable experimental results compared to the original version and to various extended versions from recent state-of-the-art studies.
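A minimal sketch of the extended-space idea follows, assuming scikit-learn and using random pairwise feature differences in place of the ACO-guided selection, which is omitted here.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

# Extend the space: append random pairwise feature differences (a common
# extended-space-forest construction; ACO-based importance filtering omitted).
pairs = rng.choice(X.shape[1], size=(X.shape[1], 2))
X_ext = np.hstack([X, X[:, pairs[:, 0]] - X[:, pairs[:, 1]]])

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             random_state=0)
print(cross_val_score(ensemble, X, y, cv=5).mean())      # original space
print(cross_val_score(ensemble, X_ext, y, cv=5).mean())  # extended space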
Keywords: Classifier ensembles, extended space forests, ant
colony optimization, decision tree.
Received November 11,
2017; accepted March 11, 2018
https://doi.org/10.34028/iajit/17/2/6
MPKC-based Threshold Proxy Signcryption Scheme
Li Huixian1, Gao Jin1, Wang Lingyun1, and Pang Liaojun2
1School of Computer Science and Engineering, Northwestern Polytechnical University, China
2State Key Laboratory of Integrated Services Networks, Xidian University, China
Abstract: Threshold proxy signcryption can implement signature and encryption simultaneously in one logical step and can be used to realize decentralized protection of the group signature key, so it is an efficient technology for network security. Currently, most existing threshold proxy signcryption schemes are designed on the basis of traditional public key cryptosystems, and their security mainly depends on the difficulty of large integer factorization and the discrete logarithm. However, traditional public key cryptosystems cannot resist quantum computer attacks, which makes the existing threshold proxy signcryption schemes based on them insecure against quantum attacks. Motivated by these concerns, we propose a threshold proxy signcryption scheme based on the Multivariate Public Key Cryptosystem (MPKC), one of the quantum-attack-resistant public key algorithms. Under the premise of satisfying the threshold signcryption requirements of the threshold proxy, our scheme can not only realize flexible participation of the proxy signcrypters but also resist quantum computing attacks. Finally, based on the assumptions of the Multivariate Quadratic (MQ) problem and the Isomorphism of Polynomials (IP) problem, a proof of the confidentiality and unforgeability of the proposed scheme under the random oracle model is given.
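The threshold aspect of such schemes rests on secret sharing. A minimal Shamir (t, n) sketch over a prime field follows; it illustrates only the threshold mechanism, not the MPKC signcryption construction itself.

import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: random degree t-1 polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice

Any subset of fewer than t shares reveals nothing about the secret, which is the property that lets a group of proxy signcrypters participate flexibly.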
Keywords: Multivariate
public key cryptosystem, signcryption, threshold proxy signcryption, quantum attack.
Received December 5, 2017; accepted May 29, 2019
https://doi.org/10.34028/iajit/17/2/7
Referential DNA Data Compression using Hadoop MapReduce Framework
Raju Bhukya and Sumit Deshmuk
Department of Computer Science and
Engineering, National Institute of Technology, India
Abstract: The indispensable knowledge of Deoxyribonucleic Acid (DNA) sequences and the sharply falling cost of DNA sequencing techniques have attracted numerous researchers to the field of genetics. These sequences are becoming available at an exponential rate, leading to the bulging size of molecular biology databases and making large disk arrays and compute clusters inevitable for analysis. In this paper, we propose referential DNA data compression using the Hadoop MapReduce framework to process humongous amounts of genetic data in a distributed environment on high performance compute clusters. Our method achieves a better balance between compression ratio and the time required for DNA data compression compared to other referential DNA data compression methods.
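A minimal single-machine sketch of referential compression follows (the Hadoop MapReduce distribution layer is omitted): the target is encoded as (position, length) matches against a reference, with literals for unmatched bases. The naive O(n*m) matcher and the min_match value are illustrative.

def referential_compress(target, reference, min_match=4):
    """Greedy sketch: emit ('M', ref_pos, length) for matches against the
    reference and ('L', base) literals for unmatched stretches."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            k = 0
            while (i + k < len(target) and j + k < len(reference)
                   and target[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_match:
            ops.append(('M', best_pos, best_len))   # copy from reference
            i += best_len
        else:
            ops.append(('L', target[i]))            # literal base
            i += 1
    return ops

ref = "ACGTACGTGGTTAACC"
tgt = "ACGTGGTTAACCACGT"
print(referential_compress(tgt, ref))   # [('M', 4, 12), ('M', 0, 4)]

In a MapReduce setting, the target genome would be split into chunks, each mapper compressing its chunk against the shared reference.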
Keywords: Compression, MapReduce, DNA sequences.
Received August 12, 2017; accepted April 17, 2018
https://doi.org/10.34028/iajit/17/2/8
Intrusion Detection Model Using Naive Bayes and Deep Learning Technique
Mohammed Tabash, Mohamed Abd Allah, and Bella Tawfik
Faculty of Computers and Informatics, Suez
Canal University, Egypt
Abstract: The increase in security threats and the hacking of computer networks are among the most dangerous issues to be addressed these days. Intrusion Detection Systems (IDSs) are the most appropriate methods to prevent and detect attacks on networks and computer systems. This study presents several techniques to discover network anomalies using data mining tasks, machine learning technology, and artificial intelligence techniques. In this research, a smart hybrid model was developed to discover penetrations inside the network. The model is divided into two basic stages. The first stage uses the Genetic Algorithm (GA) for feature selection, together with discretization and dimensionality reduction through Proportional K-Interval Discretization (PKID) and Fisher Linear Discriminant Analysis (FLDA), respectively. At the end of the first stage, a Naïve Bayes classifier (NB) and a Decision Table (DT) are combined, using the NSL-KDD data set divided into two separate groups for training and testing. The second stage depends entirely on the outputs of the first stage (the predicted class), which are reclassified with multilayer perceptrons using Deeplearning4j (DL) and the Stochastic Gradient Descent (SGD) algorithm, in order to improve classification accuracy, raise the detection rate, and reduce false alarms. A comparison of the proposed model with conventional and previous hybrid models shows the superiority of the proposed model: it achieves a classification accuracy of 99.9325, a detection rate of 99.9738, and a false alarm rate of 0.00093.
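A minimal sketch of the two-stage idea follows, assuming scikit-learn and synthetic data in place of NSL-KDD: stage-1 Naïve Bayes outputs are appended to the features and reclassified by an SGD-trained multilayer perceptron. The GA, PKID, FLDA, and Decision Table steps are omitted.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: Naive Bayes produces class probabilities (the "predicted class").
nb = GaussianNB().fit(X_tr, y_tr)
p_tr, p_te = nb.predict_proba(X_tr), nb.predict_proba(X_te)

# Stage 2: an SGD-trained multilayer perceptron reclassifies, using the
# original features augmented with the stage-1 output.
mlp = MLPClassifier(hidden_layer_sizes=(32,), solver='sgd',
                    learning_rate_init=0.01, max_iter=500, random_state=0)
mlp.fit(np.hstack([X_tr, p_tr]), y_tr)
print(mlp.score(np.hstack([X_te, p_te]), y_te))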
Keywords: Classification, intrusion detection, deep
learning, NSL-KDD, genetic algorithm, naïve bayes.
Received December 30, 2017; accepted April
17, 2018
Fault Tolerance Based Load Balancing Approach for Web Resources in Cloud Environment
Anju Shukla, Shishir Kumar, and Harikesh Singh
Department of Computer
Science and Engineering, Jaypee University of Engineering and Technology, India
Abstract: Cloud computing consists of a group of heterogeneous resources scattered around the world and connected through the network. Since high performance computing is strongly interlinked with geographically distributed services that interact with each other over a wide area network, cloud computing makes the architecture consistent, low-cost, and well-suited to concurrent services. This paper presents a fault-tolerant load balancing technique based on resource load and fault index value. The proposed technique works in two phases: resource selection and task execution. The resource selection phase selects the suitable resource for task execution: the resource with the least resource load and fault index value is chosen. The task execution phase then sets checkpoints at various intervals, based on the resource fault index, to save the task state periodically. When a task is executed on a resource, the fault index value of the selected resource is updated accordingly. This reduces checkpoint overhead by avoiding unnecessary placement of checkpoints. The proposed model is validated on CloudSim and provides improved performance in terms of response time, makespan, throughput, and checkpoint overhead in comparison to other state-of-the-art methods.
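A minimal sketch of the two decisions the technique describes follows, with illustrative weights and an assumed linear relation between fault index and checkpoint interval (the paper's exact formulas are not reproduced).

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    load: float         # current resource load in [0, 1]
    fault_index: float  # grows with observed failures, in [0, 1]

def select_resource(resources, w_load=0.5, w_fault=0.5):
    """Pick the resource minimizing a weighted sum of load and fault index."""
    return min(resources, key=lambda r: w_load * r.load + w_fault * r.fault_index)

def checkpoint_interval(base_interval, fault_index):
    """Faulty resources are checkpointed more often; reliable ones less."""
    return base_interval * (1.0 - fault_index)

pool = [Resource('r1', 0.7, 0.1), Resource('r2', 0.3, 0.6), Resource('r3', 0.4, 0.2)]
chosen = select_resource(pool)
print(chosen.name, checkpoint_interval(60.0, chosen.fault_index))  # r3 48.0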
Keywords: Scheduler,
checkpoint manager, cloud computing, checkpointing, fault index, high
performance computing.
Received June 6, 2018; accepted July 2, 2019
https://doi.org/10.34028/iajit/17/2/10
Empirical Evaluation of Leveraging Named Entities for Arabic Sentiment Analysis
Hala Mulki1, Hatem Haddad2, Mourad Gridach3, and Ismail Babaoğlu1
1Computer Engineering Department, Konya Technical University, Turkey
2Computer Science Department, University of Manouba, Tunisia
3Computational Bioscience Departments, University of Colorado Boulder, USA
Abstract: Social media reflects the attitudes of the public towards specific events. Events are often related to persons, locations, or organizations, the so-called Named Entities (NEs), which can therefore be viewed as sentiment-bearing components. In this paper, we go beyond NE recognition to the exploitation of sentiment-annotated NEs in Arabic sentiment analysis. We develop an algorithm that detects the sentiment of NEs based on the majority of attitudes expressed towards them. This enables tagging NEs with proper sentiment tags and, thus, including them in a sentiment analysis framework with two models: supervised and lexicon-based. Both models were applied to datasets of multi-dialectal content. The results reveal that NEs have no considerable impact on the supervised model, while employing NEs in the lexicon-based model improves the classification performance and outperforms most of the baseline systems.
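The majority-vote tagging of NEs can be sketched as follows; the input format and the sentiment label set are illustrative assumptions.

from collections import Counter, defaultdict

def tag_named_entities(annotated_tweets):
    """Assign each NE the majority sentiment of the tweets mentioning it.
    annotated_tweets: list of (entities, sentiment) pairs, sentiment in {'pos', 'neg'}."""
    votes = defaultdict(Counter)
    for entities, sentiment in annotated_tweets:
        for ne in entities:
            votes[ne][sentiment] += 1
    return {ne: c.most_common(1)[0][0] for ne, c in votes.items()}

tweets = [({'Beirut'}, 'pos'), ({'Beirut'}, 'pos'),
          ({'Beirut', 'Cairo'}, 'neg'), ({'Cairo'}, 'neg')]
print(tag_named_entities(tweets))   # {'Beirut': 'pos', 'Cairo': 'neg'}

The resulting NE-to-sentiment map can then be merged into a sentiment lexicon or exposed as features to a supervised classifier, as the two models above do.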
Keywords: Named entity recognition, Arabic
sentiment analysis, supervised learning method, lexicon-based method.
Received August 2, 2018;
accepted May 21, 2019
https://doi.org/10.34028/iajit/17/2/11
A New Vector Representation of Short Texts for Classification
Abstract: Shortness and sparseness, together with synonyms and homonyms, are the main obstacles to short-text classification. In recent years, research on short-text classification has focused on expanding short texts but has rarely guaranteed the validity of the expanded words. This study proposes a new method to weaken these effects without external knowledge. The proposed method analyses short texts using a topic model based on Latent Dirichlet Allocation (LDA), represents each short text with a vector space model, and presents a new method to adjust the vectors of short texts. In the experiments, two open short-text data sets, composed of Google News items and web search snippets, are utilised to evaluate the classification performance and demonstrate the effectiveness of our method.
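A minimal sketch of representing short texts as LDA topic vectors follows, assuming scikit-learn; the paper's vector adjustment step is not reproduced, and the topic count and toy snippets are illustrative.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

snippets = ["apple unveils new phone", "bank raises interest rates",
            "phone battery life review", "markets react to rate hike"]

counts = CountVectorizer().fit_transform(snippets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topic = lda.transform(counts)   # each short text as a topic-distribution vector
print(doc_topic.round(2))

Because two snippets sharing no words can still share a topic, the topic vector mitigates both sparseness and synonymy.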
Keywords: Text representation, short-text
classification, Latent Dirichlet Allocation, topic model.
Received January 30, 2019; accepted July 2, 2019
https://doi.org/10.34028/iajit/17/2/12
A Hybrid Approach for Providing Improved Link Connectivity in SDN
Muthumanikandan Vanamoorthy1, Valliyammai Chinnaiah1, and Harish Sekar2
1Department of Computer Technology, Madras Institute of Technology, Anna
University, India
2Endurance International Group, India
Abstract: Software-Defined Networking (SDN) is a unique approach to designing and building networks. Network services can be better handled by administrators thanks to the abstraction that SDN provides. This work handles the problem of re-routing packets with minimum overhead in case of link failure. Protection and restoration schemes have been used in the past to handle such issues, giving more priority to minimal response time or to controller overhead depending on the use case. A hybrid scheme is proposed with a per-link bidirectional forwarding mechanism to handle failover. The proposed method ensures that controller overhead does not impact the flow of packets, thereby decreasing the overall response time while still guaranteeing network resiliency. The computation of the next shortest backup path also guarantees that subsequent routing of packets always chooses the shortest path available. The proposed method is compared with traditional approaches and is shown by the results to perform better, with minimal response time.
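A minimal sketch of backup-path computation after a link failure follows, assuming networkx; the OpenFlow rule installation and the per-link failure detection are omitted.

import networkx as nx

def backup_path(g, src, dst, failed_link):
    """Recompute the shortest path after pruning the failed link,
    mimicking a controller installing a precomputed backup route."""
    g2 = g.copy()
    g2.remove_edge(*failed_link)
    return nx.shortest_path(g2, src, dst, weight='weight')

g = nx.Graph()
g.add_weighted_edges_from([('h1', 's1', 1), ('s1', 's2', 1),
                           ('s2', 'h2', 1), ('s1', 's3', 2), ('s3', 's2', 2)])
print(nx.shortest_path(g, 'h1', 'h2', weight='weight'))   # primary: via s1-s2
print(backup_path(g, 'h1', 'h2', ('s1', 's2')))           # backup: via s3

Precomputing such backups per link is what lets the data plane fail over without waiting for the controller, which is the hybrid scheme's point.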
Keywords: OpenFlow, SDN, link failure, protection and restoration.
Received October 12, 2018;
accepted January 21, 2019
https://doi.org/10.34028/iajit/17/2/13
On 2-Colorability Problem for Hypergraphs with
Ruzayn Quaddoura
Faculty of Information Technology, Zarqa University,
Jordan
Abstract: A 2-coloring of a hypergraph is a mapping from its vertex set to a set of two colors such that no edge is monochromatic. The Hypergraph 2-Coloring Problem asks whether a given hypergraph is 2-colorable. It is known that deciding the 2-colorability of hypergraphs is NP-complete even for hypergraphs whose hyperedges have size at most 3. In this paper, we present a polynomial time algorithm for deciding if a hypergraph, whose incidence graph is
Keywords: Hypergraph, Dominating set,
Received December 17, 2018; accepted June 11,
2019
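For tiny instances, 2-colorability can be checked by brute force. The following sketch is exponential and for illustration only (unlike the paper's polynomial algorithm for its restricted class); it also shows the Fano plane as a classic non-2-colorable example.

from itertools import product

def is_two_colorable(vertices, edges):
    """Brute-force check (exponential; fine only for tiny hypergraphs):
    look for a 2-coloring in which no hyperedge is monochromatic."""
    vs = list(vertices)
    for colors in product((0, 1), repeat=len(vs)):
        col = dict(zip(vs, colors))
        if all(len({col[v] for v in e}) == 2 for e in edges):
            return col
    return None

# The Fano plane: the smallest 3-uniform hypergraph that is NOT 2-colorable.
fano = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}, {1, 4, 7}, {2, 5, 7}, {3, 6, 7}, {2, 4, 6}]
print(is_two_colorable(range(1, 8), fano))              # None: no 2-coloring exists
print(is_two_colorable(range(1, 5), [{1, 2}, {2, 3}, {3, 4}]))  # a valid coloring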
https://doi.org/10.34028/iajit/17/2/14
Enhanced Median Flow Tracker for Videos with Illumination Variation Based on Photometric Correction
Asha Narayana and Narasimhadhan Venkata
Department of Electronics and Communication
Engineering, National Institute of Technology Karnataka, India
Abstract: Object tracking is a fundamental task in video surveillance, human-computer interaction, and activity analysis. One of the common challenges in visual object tracking is illumination variation. A large number of tracking methods have been proposed in recent years, and the median flow tracker is one that can handle various challenges. The median flow tracker tracks an object using the Lucas-Kanade optical flow method, which is sensitive to illumination variation and hence fails when sudden illumination changes occur between frames. In this paper, we propose an enhanced median flow tracker that achieves invariance to abruptly varying lighting conditions. In this approach, illumination variation is compensated by modifying the Discrete Cosine Transform (DCT) coefficients of an image in the logarithmic domain. Since illumination variations are mainly reflected in the low-frequency coefficients of an image, a fixed number of DCT coefficients are ignored. Moreover, the DC (zero-frequency) coefficient is kept almost constant throughout the video, based on entropy difference, to minimize the impact of sudden lighting variations. In addition, each video frame is enhanced by a pixel transformation technique that improves the contrast of dull images based on the probability distribution of pixels. The proposed scheme can effectively handle both gradual and abrupt changes in the illumination of the object. Experiments conducted on videos with fast-changing illumination show that the proposed method improves the median flow tracker and outperforms state-of-the-art trackers in accuracy.
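A minimal sketch of log-domain DCT illumination compensation follows, assuming SciPy; the cut-off block size, the DC pinning rule, and the constant test frames are illustrative (the paper drives the DC value by entropy difference, which is not reproduced here).

import numpy as np
from scipy.fft import dctn, idctn

def compensate_illumination(frame, cut=3, target_dc=None):
    """Log-domain DCT sketch: suppress low-frequency (illumination) terms
    and pin the DC coefficient to a target value."""
    logimg = np.log1p(frame.astype(float))
    c = dctn(logimg, norm='ortho')
    dc = c[0, 0]
    c[:cut, :cut] = 0.0                      # drop a block of low-frequency terms
    c[0, 0] = target_dc if target_dc is not None else dc
    return np.expm1(idctn(c, norm='ortho'))

dark = np.full((64, 64), 40.0)
bright = dark * 3.0                          # abrupt global illumination change
a = compensate_illumination(dark, target_dc=200.0)
b = compensate_illumination(bright, target_dc=200.0)
print(np.abs(a - b).max())                   # nearly identical after compensation

In the log domain, a multiplicative illumination change becomes additive and lands mostly in the DC term, which is why pinning the DC coefficient neutralizes abrupt global lighting changes.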
Keywords: Illumination variation, median flow
tracker, entropy, gamma correction.
Received April 6,
2017; accepted April 25, 2018
FAAD: A Self-Optimizing Algorithm for Anomaly Detection
Adeel Hashmi and Tanvir Ahmad
Department of Computer Engineering, Jamia Millia Islamia, India
Abstract: Anomaly/outlier detection is the process of finding abnormal data points in a dataset or data stream. Most anomaly detection algorithms require setting parameters that significantly affect the performance of the algorithm. These parameters are generally set by trial and error, so performance is compromised under default or random values. In this paper, the authors propose a self-optimizing algorithm for anomaly detection based on the firefly meta-heuristic, named the Firefly Algorithm for Anomaly Detection (FAAD). The proposed solution is a non-clustering, unsupervised learning approach to anomaly detection. The algorithm is implemented on Apache Spark for scalability, so the solution can handle big data as well. Experiments were conducted on various datasets, and the results show that the proposed solution is more accurate than standard anomaly detection algorithms.
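A minimal sketch of the self-optimizing idea follows: a simplified firefly search tunes one parameter (here a KDE bandwidth, an illustrative stand-in for FAAD's parameters) so that low-density points can be flagged as anomalies. The Spark parallelization and FAAD's actual objective are omitted.

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)),
               rng.uniform(-6, 6, (10, 2))])         # inliers plus a few outliers

def fitness(bandwidth):
    """Held-out log-likelihood of a KDE; the density model then scores anomalies."""
    kde = KernelDensity(bandwidth=float(bandwidth)).fit(X[:250])
    return kde.score(X[250:])

# Minimal 1-D firefly algorithm: dimmer fireflies move toward brighter ones.
pos = rng.uniform(0.05, 3.0, 15)                     # candidate bandwidths
for _ in range(20):
    bright = np.array([fitness(p) for p in pos])
    for i in range(len(pos)):
        for j in range(len(pos)):
            if bright[j] > bright[i]:
                r2 = (pos[i] - pos[j]) ** 2
                pos[i] += 1.0 * np.exp(-1.0 * r2) * (pos[j] - pos[i]) \
                          + 0.1 * (rng.random() - 0.5)
    pos = np.clip(pos, 0.01, 5.0)

best = pos[np.argmax([fitness(p) for p in pos])]
scores = KernelDensity(bandwidth=float(best)).fit(X).score_samples(X)
print(best, np.argsort(scores)[:10])                 # lowest-density points = anomalies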
Keywords: Anomaly detection, outliers, firefly algorithm, big data, parallel algorithms, Apache Spark.
Received November 19, 2017; accepted April 28, 2019