Tuesday, 26 June 2018 07:19

Design and Development of Suginer Filter for Intrusion Detection Using Real Time Network Data

Revathi Sujendran and Malathi Arunachalam

Department of Computer Science, Government Arts College, India

Abstract: The rapid growth of the Internet and computer networks all over the world has made security a major issue, so the use of intrusion-detection systems has become more important. However, the primary problems of Intrusion-Detection Systems (IDS) are a high false-alarm rate and failure to detect attacks, which leave systems more vulnerable. This paper proposes a new concept, the Suginer filter, for intrusion detection. The Takagi-Sugeno fuzzy model is structured with a neuro-fuzzy method to generate fuzzy rules, and a Wiener filter is used to filter out attacks as a noise signal using the generated fuzzy rules. These two methods are combined to detect intrusive behavior of the system. The proposed Suginer filter (Sugeno + Wiener) uses a completely different structure to identify attacks, and experiments on collected live network data show that the proposed system achieves approximately 98.46% accuracy and reduces the false-alarm rate to 0.08% in detecting different real-time attacks. The results make clear that the proposed system performs better than other existing machine-learning techniques.
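The paper does not publish the Suginer filter's internals, but its Wiener half is the classic local-statistics filter that suppresses components indistinguishable from noise. A minimal 1-D sketch of that component (pure Python; the window size is illustrative, and in the proposed system the fuzzy rules, not a global average, would supply the per-region noise estimate):

```python
def wiener_1d(signal, window=5, noise_var=None):
    """Local-statistics Wiener filter: keep only the part of each sample's
    deviation that exceeds the estimated noise power, smooth the rest."""
    n = len(signal)
    half = window // 2
    means, variances = [], []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        patch = signal[lo:hi]
        m = sum(patch) / len(patch)
        means.append(m)
        variances.append(sum((x - m) ** 2 for x in patch) / len(patch))
    # Unknown noise power: estimate it as the mean of the local variances.
    if noise_var is None:
        noise_var = sum(variances) / n
    out = []
    for i in range(n):
        v = variances[i]
        gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
        out.append(means[i] + gain * (signal[i] - means[i]))
    return out
```

A flat signal passes through unchanged, while isolated spikes (attack-like "noise" in the paper's framing) are pulled toward the local mean.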

Keywords: Intrusion detection, wiener filter, artificial neural network, knowledge discovery dataset, network socket layer, defense advanced research projects agency, support vector machine.

Received October 10, 2014; accepted September 7, 2015

Full text 

 
Tuesday, 26 June 2018 07:17

A Robust Blind Watermarking Scheme for Ownership Assertion of Multi-band Satellite Images

Priyanka Singh

Department GIS Cell, Motilal Nehru National Institute of Technology, India

Abstract: Satellite images serve as very reliable sources of crucial information about inaccessible areas, and their acquisition is costly. Hence, they must reside with their rightful owners, as their mishandling may lead to serious consequences. A robust blind watermarking scheme for multi-band satellite images is proposed in this paper. Ownership information is embedded into the cover image via minimal manipulation of pixel values, such that classification results are not much affected. Homogeneity analysis of the cover image is performed to chalk out homogeneous sites for embedding the copyright information. To further enhance the security of the scheme, random chaotic mapping along with convolutional encoding, Viterbi decoding and multiple (four) secret keys has been employed. The robustness of the scheme has been tested against a comprehensive set of attacks and evaluated using the Normalized Cross Correlation (NCC) and Peak Signal to Noise Ratio (PSNR) metrics. Comparative results with other existing state-of-the-art approaches confirm the efficacy of the proposed scheme.
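The abstract does not specify which chaotic map is used; a common choice in such watermarking schemes is the logistic map, keyed by its initial condition. A sketch of deriving a secret embedding order that way (the parameter values `x0`, `r` and the burn-in length are illustrative, not the paper's):

```python
def chaotic_permutation(n, x0=0.618, r=3.99, burn_in=100):
    """Derive a permutation of n embedding positions from a logistic map.
    x0 acts as the secret key; burn_in discards the transient so nearby
    keys diverge before any positions are emitted."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    samples = []
    for i in range(n):
        x = r * x * (1 - x)
        samples.append((x, i))
    # Rank the chaotic samples: their sort order is the permutation.
    return [i for _, i in sorted(samples)]
```

Anyone holding the same key regenerates the same visiting order; without it, the embedding sites in the homogeneous regions cannot be enumerated.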

Keywords: Robust blind watermarking, ownership information, chaotic mapping, convolutional encoding, viterbi decoding, normalized cross correlation, peak signal to noise ratio.

Received March 13, 2015; accepted April 23, 2015

Full text 

 
Tuesday, 26 June 2018 07:15

Application of Framelet Transform and Singular Value Decomposition to Image Enhancement

Sulochana Subramaniam1, Vidhya Rangasamy1, Vijayasekaran Duraisamy1, and Mohanraj Karuppanan2

1Institute of Remote Sensing, Anna University, India

2Software Engineer, Wipro Technologies, India

Abstract: In this paper, a new satellite image enhancement technique based on the framelet transform and Singular Value Decomposition (SVD) is proposed. The framelet transform is used to decompose the image into one low-frequency subband and eight high-frequency subbands. The enhancement addresses both resolution and contrast. To increase the resolution, the low- and high-frequency subbands are interpolated. In an intermediate stage, estimation of the high-frequency subbands is proposed to achieve sharpness. All the subbands are combined by the inverse framelet transform to obtain the high-resolution image. To increase the contrast, the framelet transform is combined with SVD: the singular values of the low-frequency subband are updated and the inverse transform is performed to obtain the enhanced image. The proposed technique has been tested on satellite images. Quantitative measures such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Universal Quality Index (UQI), Entropy and Quality Score are used, and the visual results show the superiority of the proposed technique over conventional and state-of-the-art image enhancement techniques. The time complexity indicates that the proposed image enhancement is suitable for further image processing applications.

Keywords: Generalised histogram equalization, SVD, discrete wavelet transform, framelet transform, PSNR, SSIM, UQI.

Received August 7, 2014; accepted September 9, 2015

Full text 

 
Tuesday, 26 June 2018 07:14

Bayesian Information Criterion in LTE Downlink Scheduling Algorithm

Khairul Anwar, KuokKwee Wee, WooiPing Cheah, and YitYin Wee

Faculty of Information Science and Technology, Multimedia University, Malaysia

Abstract: Real-time multimedia has become a major part of people's daily lives. With the rise in demand for faster internet connections for multimedia purposes, Long Term Evolution (LTE) has been used as a transmission medium to fulfil these demands. Still, handling multiple simultaneous multimedia transmissions, whether video or voice, is a challenge that LTE faces. Many proportional fairness scheduling algorithms have been implemented in LTE, such as Modified Largest Weighted Delay First (M-LWDF), which can handle up to 90 users in a single cell simultaneously with good bandwidth distribution. Yet there is still room for improvement, as the allocation for simultaneous transmission of video and VoIP is affected by other best-effort flows. A best-effort flow such as internet surfing does not require a large bandwidth allocation, so a sufficient portion of the best-effort bandwidth allocation can be reallocated to video and VoIP flows. Hence, adaptive algorithms named Criterion-Based (C-B), Criterion-Based Proportional Fairness (C-BPF) and Criterion-Based Modified Largest Weighted Delay First (C-BMLWDF), based on the Bayesian Information Criterion (BIC), are proposed by the authors. The simulation results show better performance in throughput, delay, packet loss and fairness index for both video and VoIP transmission, with a respective allocation preserved for best-effort flows.
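The criterion itself is standard: BIC = k·ln(n) − 2·ln(L̂), trading goodness of fit against parameter count, with lower scores preferred. A toy sketch of BIC-based selection (the candidate names, log-likelihoods and parameter counts below are made up purely for illustration; how the paper's schedulers apply BIC to flow allocation is more involved and not reproduced here):

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k parameters, n observations.
    Lower is better; the k*ln(n) term penalizes model complexity."""
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical candidates: (maximized log-likelihood, parameter count).
candidates = {"2-param": (-120.0, 2), "3-param": (-115.0, 3), "5-param": (-114.5, 5)}
n_obs = 90  # e.g., observations from up to 90 users in a single cell
scores = {name: bic(ll, k, n_obs) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)  # "3-param": fits better than 2-param,
                                    # without 5-param's complexity penalty
```

The example shows the characteristic behaviour: a slightly better fit does not win if it costs too many extra parameters.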

Keywords: LTE, criterion-based, bayesian information criterion, downlink scheduling, quality of service.

Received June 13, 2015; accepted September 24, 2017

Full text 

 
Tuesday, 26 June 2018 07:11

A Rule-Based Algorithm for the Detection of Arud Meter in Classical Arabic Poetry

Belal Abuata and Asma Al-Omari

Computer Information Systems Department, Yarmouk University, Jordan

Abstract: Arud is the science of poetic meter used in Arabic, Persian, Urdu and other eastern languages. Determining the Arud meter of classical Arabic poems is a difficult and tiresome task for those who study poetry. In this paper, we focus on the computerized analysis of the Arabic Arud meter. We introduce an algorithm that is able to determine the correct Arud meter for a given Arabic poem and is also able to convert the poem into Arud writing. The algorithm is based on a set of well-defined rules applied only to the first part (sadr صدر) of the poem verse. The algorithm consists of five main steps. The preliminary tests are quite satisfactory, and the algorithm gives high accuracy. The algorithm can be used in systems that handle Arabic poetry, such as information retrieval systems or systems for teaching Arabic poetry to students.

Keywords: Arud meter algorithm, arabic poetry, arabic linguistic, arabic retrieval.

Received June 22, 2014; accepted March 9, 2016

Full text 


Tuesday, 26 June 2018 07:10

Application of Computational Geometry in Coal Mine Roadway 3D Localization

Feng Wang1, Lei Shi1, Weiguo Fan2, and Cong Wang1

1College of Information Engineering, Taiyuan University of Technology, China

2Department of Computer Science, Virginia Polytechnic Institute and State University, USA

Abstract: The Voronoi diagram principle from computational geometry is researched and the relationship between anchor nodes and the Voronoi diagram is analyzed in this paper. A new arrangement method for coal mine roadway nodes is proposed to construct the Voronoi diagram of the roadway; it adds numerous virtual anchor nodes to the roadway space without increasing network cost, thereby increasing the number of anchor nodes communicating with the sensor nodes. Combined with the range-free DV-Hop algorithm, a coal mine roadway localization scheme is proposed to finally achieve localization in the underground roadway. The simulation results show that, compared to the traditional range-free algorithm, the proposed algorithm can more accurately estimate the locations of the nodes under the same network conditions. The improved positioning accuracy makes the algorithm suitable for node localization in underground wireless sensor networks in coal mines.
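The range-free DV-Hop base algorithm is well documented: anchors flood hop counts, each anchor converts its known anchor-to-anchor distances into an average per-hop distance, and unknown nodes scale their hop counts by it. A compact sketch (the final weighted-centroid step is our simplification of the multilateration classic DV-Hop uses; the paper's contribution, the Voronoi-based virtual-anchor arrangement, is not shown):

```python
import math
from collections import deque

def hop_counts(adj, source):
    """BFS hop count from one anchor to every node in the network graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dv_hop_estimate(adj, anchors, node):
    """anchors: {id: (x, y)}. Estimate (x, y) for `node` from
    hop-count-scaled distances to each anchor."""
    hops = {a: hop_counts(adj, a) for a in anchors}
    # Each anchor's average hop distance, from true anchor-to-anchor distances.
    avg = {}
    for a, (ax, ay) in anchors.items():
        d_sum = h_sum = 0.0
        for b, (bx, by) in anchors.items():
            if a != b:
                d_sum += math.hypot(ax - bx, ay - by)
                h_sum += hops[a][b]
        avg[a] = d_sum / h_sum
    # Weight anchors by the inverse of their estimated distance to the node.
    wx = wy = wsum = 0.0
    for a, (ax, ay) in anchors.items():
        d = max(hops[a][node], 1) * avg[a]
        w = 1.0 / d
        wx, wy, wsum = wx + w * ax, wy + w * ay, wsum + w
    return wx / wsum, wy / wsum
```

On a roadway-like chain topology with anchors at both ends, the midpoint node is recovered exactly, which is why adding virtual anchors along a narrow roadway helps.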

Keywords: Wireless sensor network, roadway, voronoi diagram, virtual anchor node.

Received August 25, 2015; accepted February 11, 2016

Full text 

 
Tuesday, 26 June 2018 07:08

A Method for Finding the Appropriate Number of Clusters

Huan Doan and Dinh Nguyen

Department of Information System, University of Information Technology, Vietnam

Abstract: A drawback of almost all partition-based clustering algorithms is that the number of clusters must be specified at the beginning. Identifying the true number of clusters in advance is a difficult problem. Some works have studied this issue, but no method is perfect in every case. This paper proposes a method to find the appropriate number of clusters during the clustering process by constructing an index that indicates the appropriate number of clusters. This index is built from an intra-cluster coefficient, which reflects the intra-cluster distortion, and an inter-cluster coefficient, which reflects the distance among clusters. These coefficients are computed only from the extremely marginal objects of the clusters. The search for the extremely marginal objects and the construction of the index are integrated into a weighted Fuzzy C-Means (FCM) algorithm and computed while the weighted FCM is running. The extended weighted FCM algorithm integrating this index is called Fuzzy C-Means-Extended (FCM-E). Not only does FCM-E seek the clusters, it also finds the appropriate number of clusters. The authors experiment with FCM-E on some data sets from the University of California, Irvine (UCI) repository: Iris, Wine, Breast Cancer Wisconsin and Glass, and compare the results of the proposed method with those of other methods. The results obtained are encouraging.
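FCM-E's marginal-object index is not reproduced here, but the weighted FCM it extends follows the standard alternating update of memberships and centers. A minimal 1-D sketch of plain FCM (the deterministic quantile initialization is our choice, not the paper's):

```python
def fcm(points, c, m=2.0, iters=50):
    """Plain Fuzzy C-Means on 1-D data. Returns (centers, memberships),
    where memberships[i][j] is the degree to which point i belongs to
    cluster j and each row sums to 1."""
    spts = sorted(points)
    # Deterministic initialization: c evenly spaced quantiles.
    centers = [spts[round(i * (len(spts) - 1) / (c - 1))] for i in range(c)]
    U = [[0.0] * c for _ in points]
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(points):
            d = [abs(x - ck) + 1e-12 for ck in centers]
            for j in range(c):
                U[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # Center update: mean weighted by u_ij^m
        for j in range(c):
            den = sum(U[i][j] ** m for i in range(len(points)))
            centers[j] = sum((U[i][j] ** m) * x for i, x in enumerate(points)) / den
    return centers, U
```

FCM-E would run such a loop for several candidate values of c and read off the value at which its intra-/inter-cluster index is best.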

Keywords: Method for finding the number of clusters, appropriate number of clusters, fuzzy c-means, clustering algorithm.

Received December 18, 2014; accepted March 3, 2016

Full text 

 
Tuesday, 26 June 2018 07:07

MR Brain Image Segmentation Using an Improved Kernel Fuzzy Local Information C-Means Based Wavelet, Particle Swarm Optimization (PSO) Initialization and Outlier Rejection with Level Set Methods

Abdenour Mekhmoukh and Karim Mokrani

Laboratoire de Technologie Industrielle et de l’Information, Université de Bejaia, Algeria

Abstract: This paper presents a new image segmentation method based on wavelets, Particle Swarm Optimization (PSO) and rejection of the outliers caused by the membership function of the Kernel Fuzzy Local Information C-Means (KFLICM) algorithm, combined with level sets. The segmentation of Magnetic Resonance (MR) images plays an important role in computer-aided diagnosis and clinical research, but the traditional approach, the Fuzzy C-Means (FCM) clustering algorithm, is sensitive to outliers and does not integrate spatial information into its membership function. Thus the algorithm is very sensitive to noise and to inhomogeneities in the image; moreover, it depends on the initialization of the cluster centers. A novel approach, named IKFLICMOR, is presented to improve outlier rejection and reduce the noise sensitivity of the conventional FCM clustering algorithm. To obtain the first segmentation, traditional FCM is applied to the low-resolution image after wavelet decomposition. In general, the FCM algorithm chooses the initial cluster centers randomly, but the PSO algorithm gives us good values for these centers. Our algorithm is also completed by adding spatial neighborhood information to the standard FCM algorithm. These priors are used in the cost function to be optimized. The resulting fuzzy clustering is used as the initial level set function. The results confirm the effectiveness of IKFLICMOR associated with level sets for MR image segmentation.

Keywords: Image segmentation, outlier rejection, FCM, PSO, spatial fuzzy clustering, wavelet transform, level set methods.

Received May 24, 2015; accepted March 9, 2016

Full text 

 
Tuesday, 26 June 2018 07:04

Using 3D Convolutional Neural Network in Surveillance Videos for Recognizing Human Actions

Sathyashrisharmilha Pushparaj1 and Sakthivel Arumugam2

 1Department of Computer Science and Engineering, Adithya Institute of Technology, India

2Department of Information Technology, Woldia University, Ethiopia

Abstract: Human action recognition is a very important component of visual surveillance systems. Automatic surveillance systems play a crucial role in circumstances where continuous patrolling by human guards is not possible. Analysis in surveillance scenarios often requires the detection of certain specific human actions, and their automated recognition is considered here. The main aim is to develop a novel 3D Convolutional Neural Network (CNN) model for human action recognition in realistic environments. Features are extracted from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the information from all the channels is combined to form the final feature representation. The model automatically recognizes specific human actions that need attention in real-world environments such as pathways or corridors of an organization. The proposed work is well suited to situations where continuous patrolling by humans is not possible, to prevent certain human actions that are not allowed inside organisation premises.
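The core operation is easy to state: a 3D convolution slides a t×h×w kernel over a stack of frames, so each output value mixes spatial and temporal neighbours. A dependency-free sketch of a single channel (valid cross-correlation, which is what CNN layers actually compute; a real model stacks many such kernels with nonlinearities):

```python
def conv3d(volume, kernel):
    """Valid 3-D cross-correlation of a T x H x W frame stack with a
    t x h x w kernel. Mixing the temporal axis with the spatial ones is
    how 3D CNNs capture motion across adjacent frames."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(T - t + 1):
        plane = []
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                s = 0.0
                for dz in range(t):
                    for dy in range(h):
                        for dx in range(w):
                            s += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

With a temporal-difference kernel ([[1]], [[-1]] across two frames), the output responds only where pixels change between frames, i.e., where motion occurs.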

Keywords: Security surveillance, convolutional neural networks, 3D convolution, feature extraction, image analysis and action recognition.

Received April 29, 2014; accepted January 27, 2015

Full text 

 

Tuesday, 26 June 2018 07:03

Off-line Arabic Hand-Writing Recognition Using Artificial Neural Network with Genetic Algorithm

Khalid Nahar

Computer Science Department, Yarmouk University, Jordan

Abstract: Artificial Neural Networks (ANN) have been used in the recognition of printed Arabic text with a high rate of success. In contrast, Arabic hand-writing recognition poses many challenges, some of which have been tackled in recent research. In this paper we use an ANN together with a Genetic Algorithm (GA) to recognize Arabic hand-written characters, where the GA searches for the best ANN structure. We consider Arabic off-line characters represented by a series of (x, y) coordinates. The dataset was gathered from volunteers who used an E-pen to write different Arabic letters. A MATLAB program was implemented to store the written characters and extract their features. Features were determined based on the shape and the number of segments that make up the characters. The recognition results were very promising when using the ANN with the GA in comparison with other relevant approaches. On average, more than 95% accuracy was achieved when the GA was used to adjust the ANN structure to obtain the best recognition rate.

Keywords: ANN, GA, feature vector, character recognition, arabic hand-written text, hidden markov model (HMM).

Received January 30, 2016; accepted March 27, 2016

Full text 

 
Tuesday, 26 June 2018 07:01

Identification of an Efficient Filtering-Segmentation Technique for Automated Counting of Fish Fingerlings

Lilibeth Coronel1, Wilfredo Badoy2, and Consorcio Namoco3

1College of Science and Environment, Mindanao State University at Naawan, Philippines

2Department of Information Systems and Computer Science, Ateneo de Davao University, Philippines

3College of Industrial and Information Technology Mindanao, University of Science and Technology, Philippines

Abstract: The counting of fish fingerlings is an important process in determining the accurate consumption of feeds for a certain density of fingerlings in a pond. Image processing is a modern approach to automating the counting process. It involves six basic steps, namely image acquisition, cropping, scaling, filtering, segmentation, and measurement and analysis. In this study, two filtering and two segmentation algorithms are identified based on the following observations: the non-uniform brightness and contrast of the image; random noise brought about by feeds, waste and spots in the container; and the similarity of the image samples or applications used by the authors of the smoothing and clustering algorithms in their respective experiments. Four combinations of filtering-segmentation algorithms are implemented and tested. Results show that the combination of a local normalization filter and iterative selection thresholding yields very high counting accuracy under measures such as Precision, Recall and F-measure. A Graphical User Interface (GUI) is also presented to visualize the image processing steps and the counting results.
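The winning segmentation half, iterative selection thresholding, is the classic Ridler-Calvard procedure: start from the global mean, then repeatedly move the threshold to the midpoint of the two class means it induces. A sketch on a flat list of grayscale values:

```python
def iterative_selection_threshold(pixels, eps=0.5):
    """Ridler-Calvard iterative selection: converges to a threshold
    halfway between the foreground and background means."""
    t = sum(pixels) / len(pixels)
    while True:
        fg = [p for p in pixels if p > t]
        bg = [p for p in pixels if p <= t]
        if not fg or not bg:
            return t
        new_t = (sum(fg) / len(fg) + sum(bg) / len(bg)) / 2.0
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```

In the counting pipeline, pixels above the threshold form the binary fingerling mask, whose connected blobs are then counted.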

Keywords: Digital image processing, filtering, segmentation, image normalization, threshold.

Received July 9, 2015; accepted February 3, 2016

Full text 

 
Tuesday, 26 June 2018 06:59

An Improved Richardson-Lucy Algorithm Based on Genetic Approach for Satellite Image Restoration

Fouad Aouinti1, M’barek Nasri1, Mimoun Moussaoui1, and Bouchta Bouali2

1Superior School of Technology, Mohammed I University, Morocco

2Faculty of Sciences, Mohammed I University, Morocco

Abstract: In the process of satellite imaging, the observed image is blurred by the optical system and atmospheric effects and corrupted by additive noise. The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem. In the literature, a number of image restoration methods have been proposed to reconstruct an approximation of the original image from a degraded observation. The iterative method known as Richardson-Lucy deconvolution has demonstrated its effectiveness in compensating for these degradations. The efficiency of this method obviously depends on the iteration count, which has a direct impact on the result. Since this decisive parameter is virtually unknown in advance, it is usually set to an approximate value, which may affect the quality of the restored image. In this paper, the idea consists of optimizing the iteration count of the Richardson-Lucy deconvolution by applying a genetic approach in order to obtain a better restoration of the degraded satellite image.
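The Richardson-Lucy iteration being tuned is standard; in 1-D with a known PSF it looks like the sketch below, where `iterations` is precisely the parameter the genetic approach would optimize (too few iterations leave blur, too many amplify noise):

```python
def convolve(x, psf):
    """Simple 1-D convolution with zero padding, output the size of x."""
    half = len(psf) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(x):
                s += p * x[k]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations):
    """Richardson-Lucy deconvolution: multiplicative updates that keep the
    estimate non-negative and (away from borders) conserve total flux."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Deconvolving a blurred point source progressively re-concentrates the energy at its true location as the iteration count grows.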

Keywords: Satellite image, spatially invariant blur, non-blind restoration, richardson-lucy deconvolution, genetic algorithm.

Received December 16, 2015; accepted February 23, 2016

Full text 

 
Tuesday, 26 June 2018 06:57

STF-DM: A Sparsely Tagged Fragmentation with Dynamic Marking – an IP Traceback Approach

Hasmukh Patel1 and Devesh Jinwala2

1Computer Engineering Department, Gujarat Technological University, India

 2Computer Engineering Department, Sardar Vallabhbhai National Institute of Technology, India

Abstract: Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks are serious threats to the Internet, and their frequency is increasing day by day. Automated tools are also available that enable non-technical people to launch such attacks easily. Hence, it is important not only to prevent such attacks but also to trace back the attackers. Tracing back the sources of attacks, known as the IP traceback problem, is hard because of the stateless nature of the Internet and spoofed Internet Protocol (IP) packets. Various approaches have been proposed for IP traceback. The Probabilistic Packet Marking (PPM) approach incurs the minimum network and management overhead, so we focus on it. The Sparsely-Tagged Fragmentation Marking Scheme (S-TFMS), a PPM-based approach, requires low overhead at the victim and achieves zero false positives; however, it requires a large number of packets to recover the IP addresses. In this paper, we propose a sparsely-tagged fragmentation marking approach with dynamic marking probability that requires fewer packets than S-TFMS. Further, to reduce the number of packets required by the victim, we extend our basic approach with a new marking format; the extended approach requires less than one-tenth the number of packets of S-TFMS to recover the IP addresses. Our approaches recover the IP addresses quickly with zero false positives in the presence of multiple attackers. We present mathematical as well as experimental analysis of our approaches.
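The PPM idea underlying S-TFMS and STF-DM can be simulated in a few lines: each router on the attack path overwrites the packet's single mark field with probability p, so marks from routers far from the victim survive rarely and the victim must collect many packets, which is exactly the cost a dynamic (distance-aware) marking probability targets. A basic fixed-p, non-fragmenting sketch:

```python
import random

def transmit(path, p, rng):
    """One packet travels victim-ward along `path`; each router overwrites
    the single mark field with probability p, so only the last router to
    mark is visible at the victim."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def reconstruct(path, p, n_packets, seed=7):
    """Victim side: collect surviving marks until routers are identified."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_packets):
        m = transmit(path, p, rng)
        if m is not None:
            seen.add(m)
    return seen
```

A router d hops from the victim is observed with probability p·(1−p)^(d−1) per packet, so the farthest routers dominate the packet count; varying p with distance flattens this, which is the "dynamic marking" idea.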

Keywords: DDoS attack, IP traceback, probabilistic packet marking, dynamic marking, sparsely tagged marking.

Received September 8, 2014; accepted October 20, 2015

Full text 

 
Tuesday, 26 June 2018 06:55

Social Event Detection–A Systematic Approach using Ontology and Linked Open Data with Significance to Semantic Links

Sheba Selvam, Ramadoss Balakrishnan, and Balasundaram Ramakrishnan

Department of Computer Applications, National Institute of Technology Tiruchirappalli, India

Abstract: With the growing interest in capturing daily activities and sharing them through social media sites, an enormous amount of multimedia content such as photographs, videos, text and audio is made available on the web. Retrieval of this multimedia content has become a non-trivial task. Generally, people share photographs with a well-known closed community through social media sites like Flickr and Facebook. One way to retrieve photographs is by identifying them as events, a task known as Social Event Detection (SED). The SED task is performed on data from the Flickr website, using metadata such as photoID, title, tags, description, date, time and geo-location for each photograph. As the central piece of the SED task, an ontology for the events domain is implemented. The first half of the work is explicit knowledge representation: constructing an ontology for event detection using Protégé. Reasoning is then done through the HermiT reasoner, and SPARQL queries retrieve the media representing each event. The second half of the work involves linking open descriptions of specific events from different web services such as Eventful, Last.fm, Foursquare, Upcoming and GeoNames. SPARQL queries measure the retrieval performance for each event after making semantic links using Linked Open Data (LOD). Finally, an additional feature, weather information for events, is added, which removes false positives in the SED task.

Keywords: Multimedia, social media, social events, photographs, event detection, ontology, linked open data, contextual metadata.

Received August 15, 2015; accepted December 21, 2016

Full text 

 
Tuesday, 26 June 2018 06:54

Selective Image Encryption using Singular Value Decomposition and Arnold Transform

Kshiramani Naik1, Arup Kumar Pal1, and Rohit Agarwal2

1Department of Computer Science and Engineering, Indian School of Mines, India

2Department of Computer Science and Engineering, JSS Academy of Technical Education, India

Abstract: Selective image cryptosystems are popular for enciphering large volumes of digital images due to their low computational overhead. Generally, a selective cryptosystem encrypts the significant part of the data set while the insignificant part goes through the compression process. As a result, such approaches reduce the computational overhead of encryption and make proper use of the limited bandwidth of the communication channel. In this paper the authors propose an image cryptosystem for compressed images. Initially, the original image is compressed using Singular Value Decomposition (SVD), and subsequently the selective parts of the compressed image are enciphered. We follow the confusion-diffusion mechanism to encrypt the compressed image. In the encryption process, the Arnold Cat Map (ACM) is used, and the associated ACM parameters are kept secret. The scheme is tested on a set of standard grayscale images, and satisfactory results have been found in terms of various subjective and objective analyses, such as the visual appearance of the cipher image, the disparity of its histogram from the original, Peak Signal to Noise Ratio (PSNR), Number of Pixel Change Rate (NPCR), correlation coefficient and entropy.
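The confusion stage is concrete enough to sketch: the Arnold Cat Map is an area-preserving shuffle of an N×N image that is periodic, so a secret iteration count (one of the ACM parameters kept hidden) can be undone by completing the period. A pure-Python sketch using a unimodular [[1,1],[1,2]]-style map (the exact matrix convention is our choice; the paper's secret parameters are not given):

```python
def arnold_cat(image, rounds=1):
    """Arnold Cat Map scrambling of an N x N image (list of rows):
    pixel at (row r, col c) moves to ((r + c) mod N, (2r + c) mod N).
    The map is a bijection, so no pixel values are lost, and it is
    periodic, so iterating it completes a cycle back to the original."""
    n = len(image)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[y][x]
        image = out
    return image
```

Because the histogram is untouched (confusion only moves pixels), ACM is paired with a diffusion step in practice, as the abstract's confusion-diffusion mechanism indicates.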

Keywords: Arnold transform, confusion-diffusion mechanism, selective image cryptosystem, singular value decomposition.

Received November 10, 2014; accepted September 22, 2015

Full text 

 
Tuesday, 26 June 2018 06:53

Using Visible and Invisible Watermarking Algorithms for Indexing Medical Images

Jasmine Selvakumari1 and Suganthi Jeyaraj2

1Department of Computer Science and Engineering, Hindusthan College of Engineering and Technology, India

2Department of Computer Science and Engineering, Hindusthan Institute of Technology, India

Abstract: Watermarking of medical images greatly helps to provide authentication for the safe storage and transmission of image databases. Though proper methodologies for indexing medical images would provide faster retrieval performance, the problem has not been widely addressed in the literature. This paper presents a review of image watermarking algorithms for indexing medical images. We have attempted embedding and extraction with both visible and invisible watermarking algorithms over a set of 23 patients' lung images. The results obtained establish the need for watermarking algorithms that show enhanced embedding as well as extraction performance to meet medical image indexation requirements.

Keywords: Lung CT image, visible watermarking, invisible watermarking, watermark embedding, watermark extraction.

Received May 8, 2014; accepted August 12, 2015

Full text 

 
Tuesday, 26 June 2018 06:08

Evaluation of Influence of Arousal-Valence Primitives on Speech Emotion Recognition

Imen Trabelsi1, Dorra Ben Ayed2, and Noureddine Ellouze2

1Sciences and Technologies of Image and Telecommunications, Sfax University, Tunisia

2Ecole Nationale d’Ingénieurs de Tunis, Université Tunis-Manar, Tunisia

Abstract: Speech emotion recognition is a challenging research problem of significant scientific interest, with a lot of recent research and development around it. In this article, we present a study which aims to improve the accuracy of speech emotion recognition using a hierarchical method based on Gaussian Mixture Models and Support Vector Machines for dimensional and continuous prediction of emotions in valence (positive vs. negative emotion) and arousal (degree of emotional intensity) space. According to these dimensions, emotions are categorized into N broad groups, which are further classified into other groups using spectral representations. We verify and compare the functionality of the different proposed multi-level models in order to study the differential effects of emotional valence and arousal on the recognition of a basic emotion. Experimental studies are performed on the Berlin Emotional database and the Surrey Audio-Visual Expressed Emotion corpus, expressing different emotions in the German and English languages.

Keywords: Speech emotion recognition, arousal, valence, hierarchical classification, gaussian mixture model, support vector machine.

Received September 3, 2015; accepted March 30, 2016

Full text 

 
Tuesday, 26 June 2018 05:34

A Reversible Data Hiding Scheme Using Pixel Location

Rajeev Kumar, Satish Chand, and Samayveer Singh

Division of Computer Engineering, Netaji Subhas Institute of Technology, India

Abstract: In this paper, the authors propose a new reversible data hiding scheme that has two passes. In the first pass, the cover image is divided into non-overlapping blocks of 2×2 pixels. The secret data bit stream is converted into 2-bit segments, each representing one of the four values 0, 1, 2, 3, and these digits are embedded into blocks by increasing/decreasing a pixel value of the block by 1: if the pixel is even-valued, it is increased, otherwise it is decreased. In the second pass, the same embedding process is repeated. The second pass helps achieve better stego-image quality and higher data hiding capacity because some of the pixels changed in the first pass are recovered to their original form; the second pass is essentially a complement of the first. The scheme achieves approximately 1 bpp data hiding capacity and more than 55 dB Peak Signal-to-Noise Ratio (PSNR) for all cover images in our experiments. To ensure reversibility, a location map for each pass is constructed and embedded into the image. Though the scheme has some overhead in hiding the secret data, it provides good quality with high capacity, and since it only increases/decreases the values of at most half of the pixels, it is very simple. The experimental results show that it is superior to state-of-the-art schemes.
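The first-pass embedding can be sketched directly from the description; how the extractor locates the modified pixel is not spelled out in the abstract, so this illustrative version simply records (block, pixel, delta) triples as its location map, which also makes the reversal trivial. The pairing of each 2-bit digit with a pixel index inside the block is our assumption, not the paper's exact mapping:

```python
def embed(image_blocks, digits):
    """Sketch of one pass: each 2x2 block (a list of 4 pixel values)
    hides one digit d in {0, 1, 2, 3} by nudging pixel d: even values
    go up by 1, odd values go down by 1 (assumed digit-to-pixel pairing).
    The location map recording every change is what makes the scheme
    exactly reversible."""
    stego = [list(b) for b in image_blocks]
    location_map = []
    for i, d in enumerate(digits):
        delta = 1 if stego[i][d] % 2 == 0 else -1
        stego[i][d] += delta
        location_map.append((i, d, delta))
    return stego, location_map

def recover(stego_blocks, location_map):
    """Reverse pass: read back the hidden digits and restore the cover."""
    cover = [list(b) for b in stego_blocks]
    digits = []
    for i, d, delta in location_map:
        cover[i][d] -= delta
        digits.append(d)
    return cover, digits
```

Every pixel moves by at most 1, which is why the scheme's PSNR stays high; the paper's second pass then reverts some of these changes while embedding more data.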

Keywords: Reversible data hiding, pixel location, location map, non-overlapping blocks.

Received January 30, 2015; accepted July 17, 2015

Full text 


Tuesday, 26 June 2018 05:30

Pseudorandom Noise Sequence of Digital Watermarking Algorithm based on Discrete Wavelet Transform using Medical Image

Ramesh Muthiya1, Gomathy Balasubramanian2, and Sundararajan Paramasivam3

1Department of Electronics and Communication Engineering, St.Martin’s Engineering College, India

2Department of Computer and Science Engineering, Bannari Amman Institute of Technology, India

3Department of Electronics and Communication Engineering, Sri Shakthi Institute of Engineering and Technology, India

Abstract: Owing to the development of the latest technologies in the areas of communication and computer networks, present-day businesses are moving to the digital world for effectiveness, convenience and security. There are a number of applications in the healthcare industry, such as tele-consulting, tele-surgery and tele-diagnosis. Today's healthcare involves security risks, as these technologies provide new ways to store, access and distribute medical data, and watermarking can be seen as an additional security measure. A pseudorandom noise sequence image watermarking algorithm that is blind (it does not require the input image for detection) and robust is analyzed. The watermarking scheme embeds a binary logo at the sub-band level in the Discrete Wavelet Transform (DWT) domain. The simulation results show that the proposed algorithm achieves high security and robustness against various attacks, including Set Partitioning in Hierarchical Trees (SPIHT) and JPEG compression, added Gaussian and salt-and-pepper noise, and Gaussian and average filtering. Peak Signal-to-Noise Ratio (PSNR) and Normalized Correlation (NC) values are reported for Computed Tomography (CT) scan and MRI medical images.
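The scheme's ingredients (PN sequence, DWT sub-band, blind detection) follow a well-known pattern: add ±(keyed PN sequence) to detail coefficients for each logo bit and detect by correlating with the same sequence, with no need for the cover image. A 1-D Haar sketch (the sub-band choice, gain `alpha` and key handling are illustrative, not the paper's exact settings):

```python
import random

def haar_1d(x):
    """One-level 1-D Haar DWT: (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def ihaar_1d(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def pn_sequence(n, key):
    """Secret-keyed +/-1 pseudorandom noise sequence."""
    rng = random.Random(key)
    return [rng.choice([-1.0, 1.0]) for _ in range(n)]

def embed_bit(signal, bit, key, alpha=2.0):
    """Embed one logo bit by adding +/- alpha * PN to the detail sub-band."""
    a, d = haar_1d(signal)
    pn = pn_sequence(len(d), key)
    sign = 1.0 if bit else -1.0
    d2 = [di + sign * alpha * p for di, p in zip(d, pn)]
    return ihaar_1d(a, d2)

def detect_bit(watermarked, key, alpha=2.0):
    """Blind detection: the sign of the correlation between the detail
    sub-band and the keyed PN sequence recovers the bit."""
    _, d = haar_1d(watermarked)
    pn = pn_sequence(len(d), key)
    return sum(di * p for di, p in zip(d, pn)) > 0
```

Detection needs only the watermarked data and the key, which is what makes the scheme blind; the correlation grows with the sub-band length, giving robustness against moderate distortion.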

Keywords: Discrete wavelet transform, watermarking algorithm, pseudorandom noise sequence, peak signal-noise ratio, normalized correlation and medical images.

Received September 22, 2014; accepted December 23, 2014

Full text 

 
Tuesday, 26 June 2018 05:28

A Hybrid Technique for Annotating Book Tables

Asima Latif1, Shah Khusro1, Irfan Ullah1, and Nasir Ahmad2

1Department of Computer Science, University of Peshawar, Pakistan

2Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Pakistan


Abstract: Table extraction is usually complemented by table annotation to find the hidden semantics in a particular document or book. These hidden semantics are determined by identifying a type for each column, finding the relationships between columns, if any, and the entities in each cell. Though used for small documents and web pages, these approaches have not been extended to table extraction and annotation in books. This paper focuses on detecting, locating and annotating entities in book tables. More specifically, it contributes algorithms for identifying and locating the tables in books and for annotating the table entities using the online knowledge source DBpedia Spotlight. The entities missing from DBpedia Spotlight are then annotated using Google snippets. It was found that the combined results give higher accuracy and superior performance than using DBpedia alone. The approach complements existing table annotation approaches, as it enables us to discover and annotate entities that are not present in the catalogue. We have tested our scheme on Computer Science books and obtained promising results in terms of accuracy and performance.

Keywords: DBpedia spotlight, google snippets, table extraction, table annotation, table semantics, knowledge base.

Received February 8, 2015; accepted August 31, 2015

Full text 

 