July 2018, No. 4

An Advancement to QoS on Traffic Prediction over Vehicular Ad Hoc Networks using Cultured Regression Model

Ishtiaque Mahmood1, Ahmad Khalil Khan2, Sharmi Sankar3, Jehad AlKhalaf Bani-Younis4, Adeel Akram5, and Zeshan Iqbal6

Department of Computer Engineering1, University of Engineering and Technology Taxila, Pakistan,

Department of Electrical Engineering2, University of Engineering and Technology Taxila, Pakistan,

Department of IT3, Ibri College of Applied Science, Sultanate of Oman

Dean4, Ibri College of Applied Science, Sultanate of Oman

Department of Telecom Engineering5, University of Engineering and Technology Taxila, Pakistan,

Department of Computer Science6, University of Engineering and Technology Taxila, Pakistan,


Abstract: The research in this paper focuses on the development of a Cultured Regression Model (CRM) that can analyse the traffic on Vehicular Ad hoc Networks (VANETs). Multimedia streaming applications have experienced more Quality of Service (QoS) issues than any other type of networked application; these include variations in latency, delay, packet drops on arrival at destinations, and deterioration of services caused by reduced data rates. To overcome these issues, the communicating entities and network operators have to be informed in advance about the network conditions so that QoS can be improved. This motivated us to develop a CRM that measures the throughput of the network. The input to this model is a trace file that has been used by researchers; the stages of pre-processing, clustering, classification with CRM and visualization are followed to produce the estimated throughput as output. The throughput response produced by the proposed CRM classifier is compared with an Artificial Neural Network (ANN) based classification that had previously been used for analysing the time-series data in the traffic trace file. The comparison between ANN and CRM demonstrates the effectiveness of both models in terms of statistical analysis. Prediction of VANET traffic can help network operators raise QoS to another level, as they are well prepared in advance to overcome the issues. The reduction in QoS over VANETs with respect to multimedia streaming can thus be mitigated by using this fundamental tool for prior management of traffic.
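The abstract does not spell out the CRM's internals; as a hedged, minimal illustration of regression-based throughput prediction (the trace values below are invented for the example, not taken from the paper's trace file), an ordinary least-squares line can be fitted to a throughput time series and extrapolated one step ahead:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_next(throughput):
    """Extrapolate the next throughput sample from a time series."""
    xs = list(range(len(throughput)))
    a, b = fit_line(xs, throughput)
    return a * len(throughput) + b

# Hypothetical per-second throughput samples (kbps) from a VANET trace.
trace = [520.0, 515.0, 498.0, 476.0, 460.0]
print(round(predict_next(trace), 1))   # -> 446.1
```

A real CRM would of course use richer features than the time index alone; this only shows the shape of the prediction step.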


Keywords: QoS, CRM, VANETs, ANN.

Received April 13, 2015; accepted January 13, 2016


Full text 




MR Brain Image Segmentation Using an Improved Kernel Fuzzy Local Information C-Means Based Wavelet, Particle Swarm Optimization (PSO) Initialization and Outlier Rejection with Level Set Methods

Abdenour Mekhmoukh and Karim Mokrani

Faculté de Technologie, Université de Bejaia, Algeria

Abstract: This paper presents a new image segmentation method based on wavelets, Particle Swarm Optimization (PSO) and outlier rejection through the membership function of the Kernel Fuzzy Local Information C-Means (KFLICM) algorithm combined with level set methods. The segmentation of Magnetic Resonance (MR) images plays an important role in computer-aided diagnosis and clinical research, but the traditional approach, the Fuzzy C-Means (FCM) clustering algorithm, is sensitive to outliers and does not integrate spatial information in its membership function. The algorithm is therefore very sensitive to noise and to in-homogeneities in the image; moreover, it depends on the initialization of the cluster centers. A novel approach, named IKFLICMOR, is presented to improve outlier rejection and reduce the noise sensitivity of the conventional FCM clustering algorithm. To get the first image segmentation, traditional FCM is applied to the low-resolution image obtained after wavelet decomposition. In general, the FCM algorithm chooses the initial cluster centers randomly, but the use of the PSO algorithm yields better centers. Our algorithm is also completed by adding spatial neighborhood information to the standard FCM algorithm. These priors are used in the cost function to be optimized. The resulting fuzzy clustering is used as the initial level set function. The results confirm the effectiveness of IKFLICMOR associated with level sets for MR image segmentation.


Keywords: Image segmentation, outlier rejection, FCM, PSO, spatial fuzzy clustering, wavelet transform, level set methods.


Received May 24, 2015; accepted March 9, 2016


Full text 



Identification of an Efficient Filtering-Segmentation Technique for Automated Counting of Fish


Lilibeth Coronel1, Wilfredo Badoy2, and Consorcio Namoco3

1Mindanao State University at Naawan, Philippines

2Ateneo de Davao University, Philippines

3Mindanao University of Science and Technology, Philippines

Abstract: The counting of fish fingerlings is an important process in determining the accurate consumption of feeds for a certain density of fingerlings in a pond. Image processing is a modern approach to automating the counting process. It involves six basic steps, namely image acquisition, cropping, scaling, filtering, segmentation, and measurement and analysis. In this study, two filtering and two segmentation algorithms are identified based on the following observations: the non-uniform brightness and contrast of the image; random noise brought about by feeds, waste, and spots in the container; and the suitability of the image samples or applications used by the different authors of the smoothing and clustering algorithms in their respective experiments. Four combinations of filtering-segmentation algorithms are implemented and tested. Results show that the combination of local normalization filtering and iterative selection thresholding yields very high counting accuracy under measures such as Precision, Recall, and F-measure. A Graphical User Interface (GUI) is also presented to visualize the image processing steps and the counting results.
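The iterative selection threshold mentioned above is the classical Ridler-Calvard scheme: start from the global mean and repeatedly move the threshold to the midpoint of the two class means. A minimal sketch on invented intensities (not the study's fish images):

```python
def iterative_selection_threshold(pixels):
    """Ridler-Calvard iterative selection: repeatedly set the threshold
    to the midpoint of the two class means until it stabilizes."""
    t = sum(pixels) / len(pixels)          # start from the global mean
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < 1e-9:
            return new_t
        t = new_t

# Toy bimodal intensities: dark background vs bright fingerlings.
gray = [10, 12, 11, 13, 200, 205, 198, 202]
t = iterative_selection_threshold(gray)
count = sum(p > t for p in gray)   # pixels classified as fish
print(round(t, 1), count)          # -> 106.4 4
```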


Keywords: Digital image processing, filtering, segmentation, image normalization, threshold.

Received July 9, 2015; accepted March 3, 2016


Full text 





A Rule-Based Algorithm for the Detection of Arud Meter in Classical Arabic Poetry


Belal Abuata, Asma Al-Omari

Computer Information Systems Department, Yarmouk University, Jordan


Abstract: Arud is the science of poetic meter used in Arabic, Persian, Urdu, and other eastern languages. Determining the Arud meter of classical Arabic poems is a difficult and tiresome task for those who study poetry. In this paper, we focus on the computerized analysis of the Arabic Arud meter. We introduce an algorithm that determines the correct Arud meter for a given Arabic poem and also converts the poem into Arud writing. The algorithm is based on a set of well-defined rules applied only to the first part (sadr صدر) of the poem verse. The algorithm consists of five main steps. The preliminary tests are quite satisfactory and the algorithm gives high accuracy. The algorithm can be used in systems that handle Arabic poetry, such as information retrieval systems, or in teaching Arabic poetry to students.

Keywords: Arud meter algorithm, Arabic poetry, Arabic linguistics, Arabic retrieval.

Received June 22, 2014; accepted March 9, 2016


Full text 





Social Event Detection: A Systematic Approach Using Ontology and Linked Open Data with Significance to Semantic Links

Sheba Selvam, Ramadoss Balakrishnan, and Balasundaram Ramakrishnan

Department of Computer Applications, National Institute of Technology Tiruchirappalli, India

Abstract: With the growing interest in capturing daily activities and sharing them through social media sites, an enormous amount of multimedia content such as photographs, videos, text and audio is made available on the web. Retrieval of multimedia content has therefore become a non-trivial task. Generally, people show interest in sharing photographs with a well-known closed community through social media sites like Flickr and Facebook. One solution to retrieving photographs is to identify them as events. This task is known as Social Event Detection (SED). The SED task is performed on the Flickr website using metadata such as photoID, title, tags, description, date, time and geo-location for each photograph. As a central piece of the SED task, an ontology for the events domain is implemented. The first half of the work is an explicit knowledge representation that constructs the ontology for event detection using Protégé. Reasoning is then done through the HermiT reasoner, and SPARQL queries retrieve the media representing each event. The second half of the work involves linking open descriptions of specific events from different web services such as Eventful, Last.fm, Foursquare, Upcoming and GeoNames. SPARQL queries measure the retrieval performance of each event after semantic linking using Linked Open Data (LOD). Finally, an additional feature, weather information for events, is added, which removes false positives in the SED task.


Keywords: Multimedia, Social media, social events, photographs, event detection, ontology, linked open data, contextual metadata.

Received August 15, 2015; accepted December 21, 2015


Full text




Application of Framelet Transform and Singular Value Decomposition to Image Enhancement

Sulochana Subramaniam1, Vidhya Rangasamy2, Vijayasekaran Duraisamy3, and Mohanraj Karuppanan4

1, 2, 3Institute of Remote Sensing, Anna University, India

4Software Engineer, Wipro Technologies, India

Abstract: In this paper, a new satellite image enhancement technique based on the framelet transform and Singular Value Decomposition (SVD) is proposed. The framelet transform is used to decompose the image into one low frequency subband and eight high frequency subbands. The enhancement is done with regard to both resolution and contrast. To increase the resolution, the low and high frequency subbands are interpolated. In an intermediate stage, estimation of the high frequency subbands is performed to achieve sharpness. All the subbands are combined by the inverse framelet transform to get the high resolution image. To increase the contrast, the framelet transform is combined with SVD: the singular values of the low frequency subband are updated and the inverse transform is performed to get the enhanced image. The proposed technique has been tested on satellite images. Quantitative measures such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Universal Quality Index (UQI), entropy and quality score are used, and the visual results show the superiority of the proposed technique over conventional and state-of-the-art image enhancement techniques. The time complexity indicates that the proposed enhancement is suitable for further image processing applications.

Keywords: Generalised Histogram Equalization (GHE), SVD, Discrete Wavelet Transform (DWT), Framelet Transform (FRT), PSNR, SSIM, UQI.


Received August 7, 2014; accepted September 9, 2015


Full text 




STF-DM: A Sparsely Tagged Fragmentation with Dynamic Marking Approach to IP Traceback

1Hasmukh Patel and 2Devesh C. Jinwala

1Gujarat Power Engineering and Research Institute, India

2Sardar Vallabhbhai National Institute of Technology, India

Abstract: Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks are serious threats to the Internet. The frequency of DoS and DDoS attacks is increasing day by day. Automated tools are also available that enable non-technical people to launch such attacks easily. Hence, it is important not only to prevent such attacks, but also to trace back the attackers. Tracing back the sources of the attacks, known as the IP traceback problem, is hard because of the stateless nature of the Internet and spoofed IP packets. Various approaches have been proposed for IP traceback. The Probabilistic Packet Marking (PPM) approach incurs the minimum network and management overhead, so we focus on it. The Sparsely-Tagged Fragmentation Marking Scheme (S-TFMS), a PPM based approach, requires low overhead at the victim and achieves zero false positives. However, it requires a large number of packets to recover the IP addresses. In this paper, we propose a sparsely-tagged fragmentation marking approach with dynamic marking probability. Our approach requires fewer packets than S-TFMS. Further, to reduce the number of packets required by the victim, we extend our basic approach with a new marking format. Our extended approach requires fewer than one-tenth the number of packets needed by the S-TFMS approach to recover the IP addresses. Our approaches recover the IP addresses quickly with zero false positives in the presence of multiple attackers. We present mathematical as well as experimental analysis of our approaches.
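The core PPM idea, each router overwriting the packet's mark with a fixed probability so that marks from routers nearer the victim survive more often, can be sketched as a simulation (the topology and probability are invented for illustration; this is not the STF-DM marking format):

```python
import random

def mark_path(path, p, rng):
    """Simulate PPM for one packet along a router path: each router
    overwrites the mark with probability p, so the victim sees the
    mark of the last router that fired."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def collect(path, p, packets, seed=1):
    """Victim-side collection of marks over many packets."""
    rng = random.Random(seed)
    seen = {}
    for _ in range(packets):
        m = mark_path(path, p, rng)
        if m is not None:
            seen[m] = seen.get(m, 0) + 1
    return seen

# Hypothetical 4-router path from attacker to victim.
path = ["R1", "R2", "R3", "R4"]
counts = collect(path, p=0.25, packets=10000)
# Routers nearer the victim (later on the path) survive overwriting
# more often, so R4 should dominate the samples.
print(max(counts, key=counts.get))   # -> R4
```

The skew toward near routers is exactly why schemes like S-TFMS need many packets to hear from the far end of the path, and why a dynamic marking probability can help.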


Keywords: DDoS attack, IP traceback, probabilistic packet marking, dynamic marking, sparsely tagged marking.

 Received September 8, 2014; accepted October 20, 2015


Full text




Application of Computational Geometry in Coal Mine Roadway 3D Localization

Feng Wang1*, Lei Shi2, Weiguo Fan3, and Cong Wang4

1, 2, 3, 4College of Information Engineering, Taiyuan University of Technology, China

3Department of Computer Science, Virginia Polytechnic Institute and State University, USA

Abstract: The Voronoi diagram principle in computational geometry is researched and the relationship between anchor nodes and the Voronoi diagram is analyzed in this paper. A new arrangement method for coal mine roadway nodes is proposed: the Voronoi diagram of the roadway is constructed on the basis of the new node arrangement, adding numerous virtual anchor nodes to the roadway space without increasing the network cost, and thereby increasing the number of anchor nodes communicating with the sensor nodes. Combined with the range-free DV-Hop algorithm, a coal mine roadway localization scheme is proposed to achieve the localization of the underground roadway. The simulation results show that, compared to the traditional range-free algorithm, the proposed algorithm can more accurately estimate the locations of the nodes under the same network conditions. The increased positioning accuracy makes the algorithm suitable for node localization in underground wireless sensor networks in coal mines.
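The range-free DV-Hop step referenced above estimates distances as hop count times an average hop size computed from anchor-to-anchor geometry. A minimal sketch on an invented chain topology (roadways are roughly linear, which suits the example; the Voronoi-based virtual anchors are not reproduced here):

```python
from collections import deque
import math

def hop_counts(adj, source):
    """BFS hop counts from one anchor to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_hop_size(anchors, hops):
    """DV-Hop correction: average geometric distance per hop
    over every pair of anchors."""
    total_d = total_h = 0.0
    names = list(anchors)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            (x1, y1), (x2, y2) = anchors[a], anchors[b]
            total_d += math.hypot(x1 - x2, y1 - y2)
            total_h += hops[a][b]
    return total_d / total_h

# Toy roadway-like chain topology: A1 - n1 - n2 - A2.
adj = {"A1": ["n1"], "n1": ["A1", "n2"], "n2": ["n1", "A2"], "A2": ["n2"]}
anchors = {"A1": (0.0, 0.0), "A2": (30.0, 0.0)}
hops = {a: hop_counts(adj, a) for a in anchors}
hop_size = avg_hop_size(anchors, hops)   # 30 m over 3 hops -> 10 m/hop
d_est = hop_size * hops["A1"]["n2"]      # estimated A1 -> n2 distance
print(hop_size, d_est)                   # -> 10.0 20.0
```

A full DV-Hop implementation would follow this with multilateration against three or more anchors.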


Keywords: Wireless Sensor Network (WSN), roadway, Voronoi diagram, virtual anchor node.

Received August 25, 2015; accepted February 11, 2016


Full text 


Overview of Automatic Seed Selection Methods for Biomedical Image Segmentation

Ahlem Melouah, Soumia Layachi


Department of Informatics, Badji-Mokhtar Annaba University, Algeria


Abstract: In biomedical image processing, image segmentation is a relevant research area due to its widespread usage and application. Seeded region growing is very attractive for semantic image segmentation because it involves high-level knowledge of image components in the seed point selection procedure. However, the seeded region growing algorithm suffers from the problem of automatic seed point generation. A seed point is the starting point for region growing, and its selection is very important for the success of the segmentation process. This paper presents an extensive survey of work carried out in the area of automatic seed point selection for biomedical image segmentation by the seeded region growing algorithm. The main objective of this study is to provide an overview of the most recent trends in seed point selection for biomedical image segmentation.
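A minimal sketch of the seeded region growing algorithm the survey builds on, over an invented toy image (a fixed-tolerance homogeneity test is used here; real criteria vary from method to method):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from the seed, adding 4-neighbours whose
    intensity is within tol of the seed intensity."""
    rows, cols = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                q.append((nr, nc))
    return region

# Toy "lesion" (bright 2x2 patch) inside darker tissue.
img = [
    [10, 11, 12, 10],
    [11, 90, 92, 12],
    [10, 91, 93, 11],
    [12, 10, 11, 10],
]
print(len(region_grow(img, (1, 1), tol=5)))   # -> 4
```

Everything the survey discusses amounts to choosing `seed` (and, implicitly, `tol`) automatically instead of by hand.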


Keywords: Automatic seed selection, biomedical image, region growing segmentation, region of interest, region extraction, edge extraction, feature extraction.

Received November 6, 2015; accepted February 21, 2016


Full text 





Evaluation of Influence of Arousal-Valence Primitives on Speech Emotion Recognition

Imen Trabelsi and Med Salim Bouhlel

Sciences and Technologies of Image and Telecommunications, Sfax University, Tunisia

Abstract: Speech emotion recognition is a challenging research problem of significant scientific interest, and there has been a lot of research and development in this field in recent times. In this article, we present a study that aims to improve the recognition accuracy of speech emotion recognition using a hierarchical method based on Gaussian Mixture Models (GMM) and Support Vector Machines (SVM) for dimensional and continuous prediction of emotions in the valence (positive vs. negative emotion) and arousal (degree of emotional intensity) space. According to these dimensions, emotions are categorized into N broad groups, which are further classified into subgroups using a spectral representation. We verify and compare the behaviour of the different proposed multi-level models in order to study the differential effects of emotional valence and arousal on the recognition of a basic emotion. Experimental studies are performed on the Berlin Emotional database and the Surrey Audio-Visual Expressed Emotion corpus, expressing different emotions in the German and English languages.

Keywords: Speech emotion recognition, arousal, valence, hierarchical classification, GMM, SVM.


Received September 3, 2015; accepted March 30, 2016


Full text 




An Improved Richardson-Lucy Algorithm Based on Genetic Approach for Satellite Image Restoration

Fouad Aouinti1, M’barek Nasri1, Mimoun Moussaoui1, and Bouchta Bouali2

1Superior School of Technology, Mohammed I University, Morocco

2Faculty of Sciences, Mohammed I University, Morocco

Abstract: In the process of satellite imaging, the observed image is blurred by the optical system and atmospheric effects and corrupted by additive noise. The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem. In the literature, a number of image restoration methods have been proposed to reconstruct an approximated version of the original image from a degraded observation. The iterative method known as Richardson-Lucy deconvolution has demonstrated its effectiveness in compensating for these degradations. The efficiency of this method obviously depends on the iteration count, which has a direct impact on the expected result. This decisive and virtually unknown parameter is usually set to an approximate estimate, which may affect the quality of the restored image. In this paper, we optimize the iteration count of the Richardson-Lucy deconvolution by applying a genetic approach in order to obtain a better restoration of the degraded satellite image.
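For reference, one Richardson-Lucy iteration multiplies the current estimate by a blurred ratio of the observed data to the re-blurred estimate; the iteration count the paper optimizes is the `iters` argument below. A 1D sketch with an invented point source and blur kernel (real images are 2D and the GA search is not reproduced):

```python
def convolve(signal, kernel):
    """'Same'-size 1D convolution with zero padding; kernel is centered."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters):
    """Multiplicative Richardson-Lucy updates from a flat initial guess."""
    psf_flip = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flip)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]                # hypothetical symmetric blur
truth = [0.0, 0.0, 4.0, 0.0, 0.0]      # point source
observed = convolve(truth, psf)        # blurred observation [0, 1, 2, 1, 0]
restored = richardson_lucy(observed, psf, iters=50)
print(round(restored[2], 2))
```

With too few iterations the spike stays smeared; with too many, noise (absent in this clean toy) would be amplified, which is the trade-off the genetic search targets.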

Keywords: Satellite image, spatially invariant blur, non-blind restoration, Richardson-Lucy deconvolution, genetic algorithm.

Received December 16, 2015; accepted February 23, 2016



Off-line Arabic Hand-Writing Recognition Using Artificial Neural Network with Genetic Algorithm

Khalid Nahar

Computer Science Department, Yarmouk University, Jordan

Abstract: Artificial Neural Networks (ANN) have been used in the recognition of printed Arabic text with a high rate of success. In contrast, Arabic hand-writing recognition poses many challenges, some of which have been tackled in recent research. In this paper we use an ANN together with a Genetic Algorithm (GA) to recognize Arabic hand-written characters; the GA searches for the best ANN structure. We consider Arabic off-line characters represented by a series of (x, y) coordinates. The dataset was gathered from volunteers who used an E-pen to write different Arabic letters. A MATLAB program was implemented to store the written characters and extract their features. Features were determined based on the shape and the number of segments that make up the characters. The recognition results were very promising when using the ANN with the GA in comparison with other relevant approaches: on average, more than 95% accuracy was achieved when the GA is used to adjust the ANN structure for the best recognition rate.

Keywords: Artificial Neural Networks (ANNs), GA, feature vector, character recognition, Arabic hand-written text, Hidden Markov Model (HMM).

Received January 30, 2016; accepted March 27, 2016


Full text 



Using Visible and Invisible Watermarking Algorithms for Indexing Medical Images

Jasmine Selvakumari1 and Suganthi Jeyaraj2

1Department of Computer Science and Engineering, Hindusthan College of Engineering and Technology, Coimbatore, India.

2Hindusthan Institute of Technology, India.

Abstract: Watermarking of medical images greatly helps to provide authentication for the safe storage and transmission of image databases. Though proper methodologies for indexing medical images would provide faster retrieval performance, the problem has not been greatly addressed in the literature. This paper presents a review of image watermarking algorithms for indexing medical images. We have attempted embedding and extraction with both visible and invisible watermarking algorithms over a set of lung images from 23 patients. The results obtained establish the need for watermarking algorithms that show enhanced embedding as well as extraction performance to meet the requirements of medical image indexation.


Keywords: Lung CT image, visible watermarking, invisible watermarking, watermark embedding, watermark extraction.


Received May 3, 2014; accepted August 12, 2015


Full text    


 Design and Development of Suginer Filter for Intrusion Detection Using Real Time Network Data

Revathi Sujendran1 and Malathi Arunachalam2

1Department of Computer Science, Government Arts College, India

2Government Arts College, India

Abstract: The rapid use of the Internet and computer networks all over the world makes security a major issue, so intrusion-detection systems (IDS) have become more important. All the same, the primary issues of IDS are that they generate a high false alarm rate and fail to detect attacks, which makes system security more vulnerable. This paper proposes the new concept of using a Suginer filter for intrusion detection. The Takagi-Sugeno fuzzy model is structured with a neuro-fuzzy method to generate fuzzy rules, and a Wiener filter uses the generated rules to filter out attacks as a noise signal. These two methods are combined to detect intrusive behaviour in the system. The proposed Suginer filter (Sugeno + Wiener) uses a completely different research structure to identify attacks; the experiment was evaluated on live network data, and shows that the proposed system achieves approximately 98.46% accuracy and reduces the false alarm rate to 0.08% in detecting different real time attacks. From the obtained results it is clear that the proposed system performs better when compared with other existing machine learning techniques.

Keywords: Intrusion detection, Takagi-Sugeno model, Wiener filter, network data.


Received October 10, 2014; accepted September 7, 2015


Full text  




A Hybrid Technique for Annotating Book Tables

Asima Latif1, Shah Khusro1, Irfan Ullah1, and Nasir Ahmad2

1Department of Computer Science, University of Peshawar, Pakistan

2Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Pakistan

Abstract: Table extraction is usually complemented with table annotation to find the hidden semantics in a particular document or book. These hidden semantics are determined by identifying a type for each column, finding the relationships between the columns, if any, and the entities in each cell. Though used for small documents and web pages, these approaches have not been extended to table extraction and annotation in books. This paper focuses on detecting, locating and annotating entities in book tables. More specifically, it contributes algorithms for identifying and locating the tables in books and annotating the table entities using the online knowledge source DBpedia Spotlight. Entities missing from DBpedia Spotlight are then annotated using Google Snippets. It was found that the combined results give higher accuracy and superior performance over the use of DBpedia alone. The approach complements existing table annotation approaches, as it enables us to discover and annotate entities that are not present in the catalogue. We have tested our scheme on Computer Science books and obtained promising results in terms of accuracy and performance.

Keywords: DBpedia Spotlight, Google Snippets, table extraction, table annotation, table semantics, KB.

Received January 8, 2015; accepted August 31, 2015


Full text 




Using 3D Convolutional Neural Network in Surveillance Videos for Recognizing Human Actions

Sathyashrisharmilha Pushparaj1 and Sakthivel Arumugam2

 1Department of Computer Science and Engineering, Adithya Institute of Technology, India

2Department of Information Technology, Woldia University, Ethiopia

Abstract: Human action recognition is a very important component of visual surveillance systems. The demand for automatic surveillance systems plays a crucial role in circumstances where continuous patrolling by human guards is not possible. The analysis in surveillance scenarios often requires the detection of certain specific human actions, and their automated recognition is considered here. The main aim is to develop a novel 3D Convolutional Neural Network (CNN) model for human action recognition in realistic environments. Features are extracted from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The model generates multiple channels of information from the input frames, and the information from all the channels is combined into the final feature representation. The developed model automatically recognizes specific human actions that need attention in real world environments such as the pathways or corridors of an organization. The proposed work is well suited to situations where continuous patrolling by humans is not possible, to prevent human actions that are not allowed inside the organization premises.
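The 3D convolution described above slides a kernel over time as well as space. A naive valid-mode sketch on an invented 3-frame clip, with a two-frame temporal-difference kernel that responds to motion (a real 3D CNN stacks many such kernels with learned weights):

```python
def conv3d(volume, kernel):
    """Valid-mode 3D convolution: slide the kernel over (time, height,
    width) and sum elementwise products, as in a 3D CNN feature map."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):
        plane = []
        for j in range(H - h + 1):
            row = []
            for k in range(W - w + 1):
                s = 0.0
                for a in range(t):
                    for b in range(h):
                        for c in range(w):
                            s += volume[i + a][j + b][k + c] * kernel[a][b][c]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# Temporal-difference kernel over 2 frames: responds only to motion.
kernel = [[[1.0]], [[-1.0]]]
frames = [                     # 3 frames of a 2x2 clip; a "pixel" moves
    [[0, 0], [0, 1]],
    [[0, 1], [0, 0]],
    [[1, 0], [0, 0]],
]
out = conv3d(frames, kernel)
print(out)
```

Nonzero outputs mark exactly the locations where intensity changed between adjacent frames, which is the motion cue a 3D CNN exploits.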

Keywords: Security Surveillance, Convolutional Neural Networks, 3D Convolution, Feature Extraction, Image Analysis and Action Recognition.

Received April 29, 2014; accepted January 27, 2015


Full text  




A Robust Blind Watermarking Scheme for Ownership Assertion of Multi-band Satellite Images

Priyanka Singh1 and Suneeta Agarwal2

1GIS Cell, Motilal Nehru National Institute of Technology, Allahabad, India

2Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, India  

Abstract: Satellite images serve as very reliable sources of crucial information about inaccessible areas, and their acquisition incurs high costs. Hence, they must reside with their rightful owners, as their mishandling may lead to serious consequences. A robust blind watermarking scheme for multi-band satellite images is proposed in this paper. Ownership information is embedded into the cover image via minimal manipulation of pixel values, such that the classification results are not much affected. Homogeneity analysis of the cover image is performed to chalk out homogeneous sites for embedding the copyright information. To further enhance the security of the scheme, random chaotic mapping along with convolutional encoding, Viterbi decoding and multiple (four) secret keys has been employed. The robustness of the scheme has been tested against a comprehensive set of attacks and evaluated using the Normalized Cross Correlation (NCC) and Peak Signal to Noise Ratio (PSNR) metrics. The comparative results with other existing state of the art approaches confirm the efficacy of the proposed scheme.


Keywords: Robust blind watermarking, ownership information, chaotic mapping, convolutional encoding, Viterbi decoding, normalized cross correlation, peak signal to noise ratio.


Received March 13, 2015; accepted April 23, 2015


Full text 




A Reversible Data Hiding Scheme Using Pixel


Rajeev Kumar, Satish Chand, and Samayveer Singh

Division of Computer Engineering, Netaji Subhas Institute of Technology, India

Abstract: In this paper, the authors propose a new reversible data hiding scheme that has two passes. In the first pass, the cover image is divided into non-overlapping blocks of 2×2 pixels. The secret data bit stream is converted into 2-bit segments, each representing one of the four values 0, 1, 2, 3, and these digits are embedded into the blocks by increasing/decreasing a pixel value of the block by 1: if the pixel is even valued, it is increased, otherwise it is decreased by 1 to embed the secret data. In the second pass, the same embedding process is repeated. The second pass helps in achieving better stego-image quality and high data hiding capacity, because some of the pixels changed in the first pass are recovered to their original form; basically, the second pass is a complement of the first. The scheme achieves approximately 1 bpp data hiding capacity and more than 55 dB PSNR for all cover images in our experiments. For ensuring reversibility, a location map for each pass is constructed and embedded into the image. Though the scheme has some overhead in hiding the secret data, it provides good quality with high capacity. Since it only increases/decreases the values of at most half of the pixels, it is very simple. The experimental results show that it is superior to state of the art schemes.
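The two-pass ±1 scheme itself is not reproduced here; the role a location map plays in making data hiding reversible can be sketched with a simpler LSB-substitution variant (a simplification for illustration, not the authors' method):

```python
def embed(pixels, bits):
    """Replace each pixel's LSB with a secret bit; the location map of
    original LSBs is what makes the hiding reversible."""
    location_map = [p & 1 for p in pixels]
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego, location_map

def extract(stego, location_map):
    """Read the secret bits back and restore the exact cover pixels."""
    bits = [p & 1 for p in stego]
    cover = [(p & ~1) | m for p, m in zip(stego, location_map)]
    return bits, cover

cover = [12, 57, 200, 131]     # invented cover pixels
secret = [1, 0, 1, 1]
stego, lmap = embed(cover, secret)
bits, restored = extract(stego, lmap)
print(bits == secret, restored == cover)   # -> True True
```

As in the paper's scheme, the map is overhead that must itself be carried inside the image, which is the cost the second pass tries to offset with extra capacity and quality.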


Keywords: Reversible data hiding, pixel location, location map, non-overlapping blocks.


Received January 30, 2015; accepted July 17, 2015


Full text   



A Method for Finding the Appropriate Number of Clusters

Huan Doan and Dinh Thuan Nguyen

University of Information Technology, VNU-HCM, Vietnam

Abstract: A drawback of almost all partition-based clustering algorithms is the requirement that the number of clusters be specified at the beginning. Identifying the true number of clusters up front is a difficult problem; some works have studied this issue, but no method is perfect in every case. This paper proposes a method to find the appropriate number of clusters during the clustering process by building an index that indicates the appropriate number of clusters. This index is built from an intra-cluster coefficient, which reflects the intra-cluster distortion, and an inter-cluster coefficient, which reflects the distance among clusters. Those coefficients are computed only from the extreme marginal objects of the clusters. The search for the extreme marginal objects and the building of the index are integrated into a weighted FCM algorithm and computed while the weighted FCM runs. The extended weighted FCM algorithm that integrates this index is called FCM-E. Not only does FCM-E find the clusters, it also finds the appropriate number of clusters. The authors experiment with FCM-E on some UCI data sets: Iris, Wine, Breast Cancer Wisconsin, and Glass, and compare the results of the proposed method with those of other methods. The results obtained are encouraging.
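FCM-E's index is not specified in the abstract; the fuzzy c-means core it extends alternates membership and center updates. A plain (unweighted) 1D sketch with a simple spread initialization and invented data, assuming c >= 2:

```python
def fcm(data, c, m=2.0, iters=50):
    """Fuzzy c-means on 1D data: alternate membership updates
    u_ij = 1 / sum_k (d_ij/d_ik)^(2/(m-1)) and weighted center updates."""
    lo, hi = min(data), max(data)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid div by zero
            row = [1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                             for k in range(c))
                   for j in range(c)]
            u.append(row)
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers, u

# Two obvious groups near 1.0 and 8.0.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers, u = fcm(data, c=2)
print([round(v, 2) for v in sorted(centers)])
```

FCM-E would run such a loop for candidate values of c and score each run with its intra/inter-cluster index; the index itself is the paper's contribution and is not sketched here.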

Keywords: Method for finding the number of clusters, appropriate number of clusters, fuzzy c-means, clustering algorithm.

Received December 18, 2014; accepted March 3, 2016


Full text  





Selective Image Encryption using Singular Value Decomposition and Arnold Transform

Kshiramani Naik1, Arup Pal1, and Rohit Agarwal2

1Department of Computer Science and Engineering, Indian School of Mines, India

2Department of Computer Science and Engineering, JSS Academy of Technical Education, India

Abstract: Selective image cryptosystems are popular due to their low computational overhead for enciphering large volumes of digital images. Generally, a selective cryptosystem encrypts the significant part of the data set while the insignificant part goes through the compression process. As a result, such approaches reduce the computational overhead of the encryption process and properly utilize the limited bandwidth of the communication channel. In this paper, the authors propose an image cryptosystem for a compressed image. Initially, the original image is compressed using Singular Value Decomposition (SVD) and, subsequently, selected parts of the compressed image are enciphered. We follow the confusion-diffusion mechanism to encrypt the compressed image. In the encryption process, the Arnold Cat Map (ACM) is used and the associated parameters of the ACM are kept secret. The scheme is tested on a set of standard grayscale images, and satisfactory results have been found in terms of various subjective and objective analyses, such as the visual appearance of the cipher image, the disparity of its histogram from the original one, Peak Signal to Noise Ratio (PSNR), Number of Pixel Change Rate (NPCR), correlation coefficient and entropy.
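The Arnold Cat Map used for the confusion step permutes the pixel coordinates of an N×N image with a determinant-1 modular map, so it is exactly invertible. A sketch of the basic (parameter-free) map; the paper's scheme keeps the ACM parameters secret, which this toy omits:

```python
def arnold_scramble(img, rounds=1):
    """Rounds of the Arnold cat map on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_unscramble(img, rounds=1):
    """Inverse map: (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = img[x][y]
        img = out
    return img

img = [[1, 2], [3, 4]]
scr = arnold_scramble(img, rounds=3)
print(arnold_unscramble(scr, rounds=3) == img)   # -> True
```

Because the map matrix has determinant 1 mod N, every round is a bijection on pixel positions, which is what makes the confusion step losslessly reversible with the round count (and, in the paper, the secret parameters) as the key.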

Keywords: Arnold transform, confusion-diffusion mechanism, selective image cryptosystem, singular value decomposition.

Received November 10, 2014; accepted September 22, 2015


Full text  



Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.