March 2019, No. 2

Secure Searchable Image Encryption in Cloud Using Hyper Chaos

Shaheen Ayyub and Praveen Kaushik

Department of Computer Science & Engineering, Maulana Azad National Institute of Technology, India

Abstract: In cloud computing, security is a primary concern for cloud providers and researchers alike. The cloud acts as a big black box: nothing inside it is visible to the cloud user, so when data or images are stored in the cloud, users lose control over them. Data left in the provider's hands raises security and privacy issues, so users' private data should be stored in encrypted form and the server should learn nothing about it. Such data may include personal images. In this paper we work on users' personal images, which should be kept secret. The proposed scheme encrypts the images stored in the cloud using hyper chaos applied to masked images. Compared with conventional algorithms, chaos-based methods offer more secure and faster encryption. Flicker images are used to create a mask for the original image, and hyper chaos is then applied to encrypt the masked image. Prior methods in this regard are limited either by possible attacks or by their key transfer mechanism. One advantage of the proposed algorithm is that the key itself is also encrypted: some values of the generated encrypted key, together with the index, are sent to the server, and the remaining values are sent to the user. The encrypted image can be decrypted only after the key is decrypted. Key encryption enhances the security and privacy of the algorithm, and an index is created for the images before they are stored in the cloud.
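
As a rough illustration of the chaos-based keystream idea (a minimal sketch, not the authors' cipher: the flicker-image mask, the hyper chaotic system, and the key-splitting protocol are omitted, and a simple logistic map stands in for hyper chaos), the following Python fragment XORs image pixels with a chaotic keystream; XOR makes encryption and decryption the same operation:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state to a byte."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def chaotic_xor(image, x0=0.3141, r=3.9999):
    """Encrypt/decrypt (XOR is its own inverse) a uint8 image with a chaotic keystream.
    x0 and r together act as the secret key in this toy version."""
    ks = logistic_keystream(x0, r, image.size).reshape(image.shape)
    return image ^ ks

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stand-in image
enc = chaotic_xor(img)
dec = chaotic_xor(enc)
assert np.array_equal(img, dec)
```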

Keywords: Cloud computing, encryption, cloud security, privacy and integrity, hyper chaos, decryption, logistic map.

Received June 27, 2015; accepted June 1, 2016


Collaborative Detection of Cyber Security Threats in Big Data

Jiange Zhang, Yuanbo Guo, and Yue Chen

State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou Information Science and Technology Institute, China

Abstract: In the era of big data, protecting the information security of individuals, institutions, and countries is a problem that must be solved to promote the healthy development of the Internet and Internet+. Hence, this paper constructs a collaborative detection system for cyber security threats in big data. First, it describes the log collection model of Flume, the data cache of Kafka, and the data processing of Esper; it then designs one-to-many log collection, consistent data caching, and Complex Event Processing (CEP) using event queries and event pattern matching; finally, it tests the system on the datasets and analyses the results from six aspects. The results demonstrate that the system is reliable and efficient and produces accurate detection results; moreover, it has the advantages of low cost and flexible operation.
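
The event-pattern step can be pictured with a small stand-in. Esper's EPL runs on the JVM, so the sketch below uses plain Python to express one hypothetical pattern, repeated login failures from one host inside a sliding window; the pattern, field layout, and thresholds are illustrative assumptions, not the paper's rules:

```python
from collections import defaultdict, deque

def detect_bruteforce(events, window=60, threshold=5):
    """Minimal CEP-style pattern: flag a host whose failed logins reach
    `threshold` within a sliding `window` (seconds), akin to an Esper
    event-pattern query over collected logs."""
    recent = defaultdict(deque)            # host -> timestamps of recent failures
    alerts = []
    for ts, host, status in events:        # events assumed sorted by time
        if status != "FAIL":
            continue
        q = recent[host]
        q.append(ts)
        while q and ts - q[0] > window:    # drop events outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, host))
    return alerts

log = [(t, "10.0.0.7", "FAIL") for t in range(0, 50, 10)]
print(detect_bruteforce(log))   # fires at t=40, on the 5th failure
```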

Keywords: Big data, cyber security, threat, collaborative detection.

Received July 20, 2015; accepted February 15, 2017


Scheduling with Setup Time Matrix for Sequence Dependent Family

Senthilvel Nataraj1, UmaMaheshwari Sankareswaran2, Hemamalini Thulasiram3, and Senthiil Varadharajan4

1Department of Computer Science, Anna University, India

2Department of Electronics and Communication Engineering, Coimbatore Institute of Technology, Affiliated to Anna University, India

3Department of Mathematics, Government Arts and Science College, Affiliated to Bharathiar University, India

4Production Engineering Department, St. Peter’s University, India

Abstract: We consider the problem of scheduling n jobs in k families on a single machine, subject to family setup times, so as to minimize the overall penalty. This paper proposes three heuristic approaches based on neighbourhood structures that use the setup time matrix. The three approaches minimize the maximum penalty, which in turn minimizes the total penalty. Inserting idle time initially (the ITF approach) or between families performs efficiently on large instances. The computational results demonstrate the efficiency of the algorithm.
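
To make the setting concrete, here is a minimal sketch, under assumed definitions since the abstract does not fix them, that evaluates the maximum penalty of a given job sequence when a setup is incurred only on a family change; the penalty is taken here to be lateness:

```python
def max_lateness(sequence, proc, due, family, setup, initial_setup):
    """Evaluate a job sequence on a single machine with sequence-dependent
    family setup times; penalty is taken as lateness C_j - d_j
    (a simplifying assumption; the paper's exact penalty may differ)."""
    t, prev, worst = 0.0, None, float("-inf")
    for j in sequence:
        f = family[j]
        if prev is None:
            t += initial_setup[f]          # setup before the first family
        elif f != prev:
            t += setup[prev][f]            # setup only when the family changes
        t += proc[j]
        worst = max(worst, t - due[j])
        prev = f
    return worst

# toy instance: 3 jobs in 2 families
proc = {1: 4, 2: 3, 3: 5}; due = {1: 6, 2: 10, 3: 14}
family = {1: "A", 2: "A", 3: "B"}
setup = {"A": {"B": 2}, "B": {"A": 2}}
print(max_lateness([1, 2, 3], proc, due, family, setup, {"A": 1, "B": 1}))  # -> 1
```

A neighbourhood-based heuristic in the paper's spirit would repeatedly perturb the sequence (e.g., swap or reinsert jobs) and keep moves that lower this maximum penalty.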

Keywords: Scheduling, sequence dependent scheduling, heuristic algorithm, idle time insertion.

Received January 1, 2016; accepted June 26, 2016

 

Optimal Threshold Value Determination for Land Change Detection

 

Sangram Panigrahi1, Kesari Verma1, and Priyanka Tripathi2

1Department of Computer Applications, National Institute of Technology Raipur, India

2Department of Computer Engineering and Applications, National Institute of Technical Teachers Training and Research, Bhopal, India

Abstract: Data mining techniques have recently emerged as an important means of detecting land change, by detecting sudden and/or gradual changes in vegetation index time series datasets. In this technique, an algorithm takes a vegetation index time series dataset as input and outputs a list of change scores, each corresponding to a particular location; if the change score of a location exceeds some threshold value, that location is considered changed. In this paper, we propose a two-step process for threshold determination for change detection algorithms: the first step determines upper and lower boundaries for the threshold, and the second step finds the optimal point between them. Using this process, we determine the threshold value for both the Recursive Merging (RM) algorithm and the Recursive Search Algorithm (RSA) and present a comparative study of these algorithms for detecting changes in time series data. The techniques are evaluated quantitatively on a synthetic dataset that is analogous to vegetation index time series data. The quantitative evaluation shows that RM performs reasonably well, but RSA significantly outperforms it in the presence of cyclic data.
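
The second step can be sketched as follows, with the caveat that the paper's boundary-determination rule is not given in the abstract; here the boundaries are assumed supplied, and the optimal point between them is found by sweeping candidate thresholds and scoring each against labelled synthetic data with F-measure:

```python
import numpy as np

def optimal_threshold(scores, labels, lower, upper, steps=100):
    """Sweep candidate thresholds between the lower and upper boundary and
    return the one maximizing F-measure on labelled (synthetic) data.
    `labels` is a boolean array marking true change locations."""
    best_t, best_f = lower, -1.0
    for t in np.linspace(lower, upper, steps):
        pred = scores > t                       # score above threshold => change
        tp = np.sum(pred & labels)
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(labels.sum(), 1)
        f = 2 * prec * rec / max(prec + rec, 1e-12)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

scores = np.array([0.1, 0.9, 0.4, 0.8])
labels = np.array([False, True, False, True])
print(optimal_threshold(scores, labels, 0.0, 1.0))
```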

Keywords: Data mining, threshold determination, EVI and NDVI time series data, high dimensional data, land change detection, recursive search algorithm, recursive merging algorithm.

Received October 28, 2015; accepted June 26, 2016

 

Rough Set-Based Reduction of Incomplete Medical Datasets by Reducing the Number of Missing Values

Luai Al Shalabi

Faculty of Computer Studies, Arab Open University, Kuwait

Abstract: This paper proposes a model that, first, reduces the dimensionality of noisy medical datasets by minimizing the number of missing values, achieved by splitting the original dataset, and, second, generates a high-quality reduct. The original dataset is split into two subsets: one containing complete records and the other containing imputed records that previously had missing values. The reducts of the two subsets, derived using rough set theory, are merged. The reduct of the merged attributes is then constructed and tested using Rule Based and Decomposition Tree classifiers. The Hepdata dataset, in which 59% of tuples have one or more missing values, is used throughout this article. The proposed algorithm performs effectively and the results are as expected: the dimension of the reduct generated by the proposed model (PM) is 10% smaller than that of the rough set model (RSM). The proposed model was also tested against other incomplete medical datasets; significant and insignificant differences between RSM and PM are shown in Tables 1-5.
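
The first stage of the model, splitting the dataset into complete and incomplete records, is easy to picture; the sketch below shows only that split on an invented toy table (the imputation and rough-set reduct computation are not reproduced):

```python
import pandas as pd

# Toy stand-in for a medical dataset with missing values.
df = pd.DataFrame({"age": [34, None, 51],
                   "bilirubin": [0.7, 1.2, None],
                   "class": ["live", "live", "die"]})

has_missing = df.isnull().any(axis=1)
complete_subset = df[~has_missing]     # records with no missing values
incomplete_subset = df[has_missing]    # records to be imputed, then reduced
print(len(complete_subset), len(incomplete_subset))   # -> 1 2
```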

Keywords: Data mining, rough set theory, missing values, reduct.

Received October 9, 2015; accepted August 24, 2016

 


 

A High Capacity Data Hiding Scheme Using Modified AMBTC Compression Technique

Aruna Malik, Geeta Sikka, and Harsh K Verma

Department of Computer Science and Engineering, Dr B R Ambedkar National Institute of Technology, India

Abstract: In this paper, a data hiding scheme is proposed which modifies the absolute moment block truncation coding (AMBTC) technique to embed a large amount of secret data. The scheme employs a user-defined threshold value to classify the AMBTC-compressed blocks as complex or smooth. In smooth blocks, the bit plane is replaced with the secret data bits, and the quantization levels are then re-calculated so that distortion is minimized. For complex blocks, the bit plane is reconstructed so that every pixel is represented by two bits instead of one, and the secret data is embedded into the first LSB of the bit plane; four new quantization levels are then calculated to preserve the closeness of the resultant block to the original. Thus, the proposed scheme is able to utilize every pixel of the cover image to hide secret data while maintaining image quality, achieving a hiding capacity of 1 bit per pixel for every image. Experimental results show that our scheme is superior to other existing schemes in terms of both hiding capacity and image quality.
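
The AMBTC baseline and the smooth-block embedding step can be sketched directly from the description above; the complex-block two-bit reconstruction is omitted, and the threshold value is an arbitrary placeholder:

```python
import numpy as np

def ambtc_block(block):
    """Standard AMBTC: bit plane = pixels >= block mean, plus two
    quantization levels (means of the low and high pixel groups)."""
    m = block.mean()
    plane = block >= m
    lo = block[~plane].mean() if (~plane).any() else m
    hi = block[plane].mean() if plane.any() else m
    return plane, lo, hi

def embed_smooth(block, bits, threshold=10):
    """Smooth-block embedding as described above: if hi - lo < threshold,
    replace the bit plane with secret bits and recompute the two levels so
    the block stays close to the original. Returns None for complex blocks,
    which the paper handles with a separate two-bit plane."""
    plane, lo, hi = ambtc_block(block)
    if hi - lo >= threshold:
        return None
    new_plane = np.array(bits, dtype=bool).reshape(block.shape)
    new_lo = block[~new_plane].mean() if (~new_plane).any() else block.mean()
    new_hi = block[new_plane].mean() if new_plane.any() else block.mean()
    return new_plane, new_lo, new_hi

blk = np.array([[120, 122], [121, 123]], dtype=float)
print(embed_smooth(blk, [1, 0, 0, 1]))
```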

Keywords: Data hiding, quantization level, secret data, stego-image, absolute moment block truncation coding.

Received September 17, 2015; accepted January 11, 2016

 

Formal Architecture and Verification of a Smart Flood Monitoring System-of-Systems

Nadeem Akhtar1, Saima Khan2, and Ambreen Sarwar3

1Department of Computer Science and IT, The Islamia University of Bahawalpur, Pakistan

2Department of Computer Science, Faculty of Computer Science and Information Technology, Virtual University of Pakistan, Pakistan

3Department of Management Sciences, COMSATS Institute of Information Technology, Pakistan

Abstract: In a flood situation, forecasts of essential information and an effective evacuation plan are vital. The Smart Flood Monitoring System-of-Systems is a flood monitoring and rescue system. It collects information from weather forecasts, flood onlookers, and observers; this information is processed and then made available as alerts to clients. The system also maintains continuous communication with the disaster management authorities, social services, and emergency responders, and it synchronizes the support offered by emergency responders with community needs. This paper presents the architecture specification and formal verification of the proposed Smart Flood Monitoring SoS. The formal model of the SoS is specified to ensure the correctness properties of safety and liveness.
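
As a generic illustration of checking a safety property ("a bad state is never reached") by exhaustive state exploration, here is a toy reachability check; the paper's actual formalism and model are not specified in the abstract, so the transition system below is invented:

```python
from collections import deque

def violates_safety(initial, transitions, bad):
    """Exhaustively explore a finite transition system and report whether a
    'bad' state is reachable (safety holds iff it never is). A toy stand-in
    for the model checking implied by the paper."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if s in bad:
            return True
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# toy flood-alert model: an alert must never be dropped before dispatch
transitions = {"idle": ["alert_raised"],
               "alert_raised": ["dispatched"],
               "dispatched": ["idle"]}
print(violates_safety("idle", transitions, bad={"alert_dropped"}))  # -> False
```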

 

Keywords: Flood monitoring, system-of-systems (SoS), behavioral modeling, formal verification, correctness, safety property.

Received September 15, 2015; accepted June 1, 2016


Tunisian Arabic Chat Alphabet Transliteration Using Probabilistic Finite State Transducers

Nadia Karmani1, Hsan Soussou2, and Adel M. Alimi1

1Research Groups on Intelligent Machines, University of Sfax, Tunisia

2MD Soft, Tunisia

Abstract: The Internet plays an ever-larger role in Tunisians' lives, especially since the revolution of 2011. Tunisian Internet users increasingly write on social networks, blogs, etc., and in doing so they favor the Tunisian Arabic chat alphabet, a Latin-scripted form of the Tunisian Arabic language. However, few tools have been developed for processing Tunisian Arabic in this context. In this paper, we develop a Tunisian Arabic chat alphabet-to-Tunisian Arabic transliteration machine based on weighted finite state transducers and using a Tunisian Arabic lexicon, aebWordNet (aeb is the ISO 639-3 code of Tunisian Arabic), and a Tunisian Arabic morphological analyzer. Weighted finite state transducers allow us to follow Tunisian Internet users' transcription behavior when writing Tunisian Arabic chat alphabet texts: the chat alphabet has no standard format, but it follows a regular relation. The system uses aebWordNet and the morphological analyzer to validate the generated transliterations. Our approach achieves good results compared with existing Arabic chat alphabet-to-Arabic transliteration tools such as EiKtub.
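
A much-simplified stand-in for the weighted-transducer idea: each chat-alphabet symbol carries weighted candidate Arabic letters, and candidates for a word are ranked by total weight. The rule table below is a tiny illustrative assumption, and the aebWordNet/morphological-analyzer validation step is not reproduced:

```python
from itertools import product

# Weighted per-symbol rewrite rules (lower weight = more likely); the real
# system composes genuine WFSTs rather than enumerating combinations.
RULES = {
    "7": [("\u062d", 0.1)],                   # 7 -> ح
    "3": [("\u0639", 0.1)],                   # 3 -> ع
    "b": [("\u0628", 0.2)],                   # b -> ب
    "a": [("\u0627", 0.3), ("\u0629", 0.9)],  # a -> ا or ة
}

def transliterate(word, topk=3):
    """Enumerate candidate transliterations and rank them by total weight.
    Symbols without a rule pass through unchanged at a high weight."""
    options = [RULES.get(c, [(c, 1.0)]) for c in word]
    cands = []
    for combo in product(*options):
        text = "".join(ch for ch, _ in combo)
        cands.append((sum(w for _, w in combo), text))
    return sorted(cands)[:topk]

print(transliterate("mar7ba"))
```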

Keywords: Tunisian Arabic chat alphabet, Tunisian Arabic, transliteration, aebWordNet, Tunisian Arabic morphological analyzer, weighted finite state transducer.

Received August 6, 2015; accepted April 17, 2016

 

A New Approach of Lossy Image Compression Based on Hybrid Image Resizing Techniques

Jau-Ji Shen1, Chun-Hsiu Yeh2,3, and Jinn-Ke Jan2

1Department of Management Information Systems, National Chung Hsing University / Department of Information Communication, Asia University, Taichung, Taiwan

2Department of Computer Science and Engineering, National Chung Hsing University, Taichung, Taiwan

3Department of Information Management Systems, Chung Chou University, Changhua, Taiwan

Abstract: In this study, we coordinated and employed known image resizing techniques to replace the widely applied image compression techniques defined by the Joint Photographic Experts Group (JPEG). The JPEG approach requires additional information from a quantization table to compress and decompress images. Our proposed scheme requires no additional data storage for compression and decompression and, instead of compression code, uses shrunken images that can be read visually. Experimental results indicate that the proposed method can coordinate typical image resizing techniques effectively to yield enlarged (decompressed) images of better quality than JPEG images. Our novel approach to lossy image compression can improve the quality of decompressed images and could replace JPEG compression in current image resizing techniques, enabling compression to be performed directly in the spatial domain without complex conversion in the frequency domain.
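
The core compress/decompress cycle reduces to image resizing, which can be sketched with Pillow; the paper's hybrid coordination of several resizing techniques and its quality refinements are not shown, and the scale factor here is an arbitrary assumption:

```python
from PIL import Image

def compress(path, factor=4):
    """'Compression': shrink the image. The stored small image remains
    visually readable, unlike a JPEG bitstream."""
    img = Image.open(path)
    w, h = img.size
    return img.resize((w // factor, h // factor), Image.BICUBIC), (w, h)

def decompress(small, full_size):
    """'Decompression': enlarge the shrunken image back to full size."""
    return small.resize(full_size, Image.BICUBIC)

# small, size = compress("photo.png")   # hypothetical input file
# restored = decompress(small, size)
```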

Keywords: Differential image, image compression, image rescaling, image resolution improvement.

Received July 14, 2015; accepted October 24, 2016


A Low-Power Self-service Bus Arrival Reminding Algorithm on Smart Phone

 

Xuefeng Liu, Jingjing Fan, Jianhua Mao, and Fenxiao Ye

School of Communication and Information Engineering, Shanghai University, China

Abstract: In this paper, a low-power self-service bus arrival reminding algorithm for smart phones is proposed and implemented. The algorithm first determines the current position of the bus using the smart phone's GPS module and calculates the straight-line distance from the bus to the destination station. It then sets a buffer distance for reminding passengers to get off, estimates the bus's maximum speed, and calculates the minimum time needed to reach the buffer. Based on this time, the frequency of GPS positioning and of the distance calculation between the bus and the destination station is adjusted intelligently. Once the distance to the destination station falls within the buffer distance, the smart phone immediately reminds passengers to get off. Test results show that the algorithm provides timely, personalized arrival reminding, efficiently meets the requirements of different passengers, and greatly reduces the power consumption of the smart phone.
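
The power-saving logic boils down to a distance computation and an adaptive polling delay; a minimal sketch, with an assumed maximum bus speed and buffer distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Straight-line (great-circle) distance in metres between two GPS fixes."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def next_poll_delay(distance_m, buffer_m=500.0, v_max=16.7):
    """Sleep until the bus could at the earliest reach the reminder buffer:
    (distance - buffer) / v_max seconds. v_max ~ 60 km/h and the buffer are
    assumed values; polling GPS no sooner than this is what saves power."""
    return max((distance_m - buffer_m) / v_max, 1.0)

d = haversine_m(31.23, 121.47, 31.25, 121.49)
print(d, next_poll_delay(d))   # fire the arrival reminder once d <= buffer
```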

Keywords: Bus arrival reminding algorithm, power consumption, buffer distance, GPS location.

Received June 21, 2015; accepted July 4, 2016

 

New Algorithm for Speech Compression Based on Discrete Hartley Transform

Noureddine Aloui1, Souha Bousselmi2, and Adnane Cherif2

1Centre for Research on Microelectronics & Nanotechnology, Sousse Technology Park, Tunisia

2Innov’Com Laboratory, Sciences Faculty of Tunis, University of Tunis El-Manar, Tunisia

Abstract: This paper presents an algorithm for speech signal compression based on the discrete Hartley transform. The developed algorithm has the advantages of ensuring a low bit rate and achieving high speech compression efficiency while preserving the quality of the reconstructed signal. The numerical results included in this paper show that the developed algorithm is more effective than the discrete wavelet transform for speech signal compression.
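
For reference, the discrete Hartley transform and a generic keep-the-largest-coefficients compression step look like this in outline; the paper's actual quantization, bit allocation, and bit-rate control are not reproduced:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*k*n/N),
    with cas(t) = cos(t) + sin(t). The DHT is its own inverse up to 1/N."""
    n = np.arange(len(x))
    arg = 2 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(arg) + np.sin(arg)) @ x

def compress(signal, keep=0.2):
    """Zero all but the largest-magnitude fraction of DHT coefficients,
    a generic transform-coding sketch."""
    h = dht(signal)
    k = max(1, int(keep * len(h)))
    idx = np.argsort(np.abs(h))[:-k]   # indices of the smallest coefficients
    h[idx] = 0.0
    return h

sig = np.sin(2 * np.pi * np.arange(64) / 16)
rec = dht(compress(sig)) / 64          # inverse DHT = forward DHT / N
print(np.max(np.abs(rec - sig)))       # near-zero reconstruction error
```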

Keywords: Speech signal compression, discrete Hartley transform, discrete wavelet transform.

Received April 13, 2015; accepted May 2, 2016


Case Retrieval Algorithm Using Similarity Measure and Fractional Brain Storm Optimization for Health Informaticians

Poonam Yadav

Department of Computer Science and Engineering, D.A.V College of Engineering & Technology, Maharshi Dayanand University, India

Abstract: The management and exploitation of health information is a demanding task for health informaticians seeking to provide the highest-quality healthcare delivery. Storage, retrieval, and interpretation of healthcare information are important stages in health informatics, and retrieving cases similar to the current patient's data can help doctors identify similar kinds of patients and their methods of treatment. With this in mind, a hybrid model is developed for the retrieval of similar cases using case-based reasoning. A new measure, the parametric enabled-similarity measure (PESM), is proposed, along with a new optimization algorithm, Fractional Brain Storm Optimization (FBSO), which extends the well-known Brain Storm Optimization (BSO) algorithm with fractional calculus. For experimentation, three different patient datasets from the UCI machine learning repository are used, and performance is compared with an existing method using accuracy and F-measure. The average accuracy and F-measure reached by the proposed method on the three datasets are 89.6% and 88.8%, respectively.
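
Since the exact PESM formula is defined in the paper rather than the abstract, the sketch below uses a hypothetical weighted, parameterized similarity in its place, just to show the retrieval loop; the weights stand in for the quantities an optimizer such as FBSO would tune:

```python
import numpy as np

def pesm_like_similarity(query, case, weights, p=1.0):
    """Hypothetical stand-in for PESM: a weighted, parameterized closeness
    on attributes normalized to [0, 1]. Not the paper's actual formula."""
    diff = np.abs(np.asarray(query) - np.asarray(case))
    return float(np.dot(weights, (1.0 - diff) ** p))

def retrieve(query, case_base, weights, k=3):
    """Case retrieval: rank stored patient cases by similarity to the query."""
    scored = [(pesm_like_similarity(query, c, weights), i)
              for i, c in enumerate(case_base)]
    return sorted(scored, reverse=True)[:k]

cases = [[0.2, 0.8, 0.5], [0.9, 0.1, 0.4], [0.3, 0.7, 0.6]]
print(retrieve([0.25, 0.75, 0.55], cases, weights=[0.5, 0.3, 0.2], k=2))
```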

Keywords: Case-based reasoning, case retrieval, optimization, similarity, fractional calculus.

Received April 1, 2015; accepted September 7, 2015


Integration between Static and Dynamic Impact Analysis for an Algorithmic-based Change Effort Estimation Model

Nazri Kama, Sufyan Basri, Saiful Adli Ismail, Roslina Ibrahim, Haslina Md Sarkan, and Faizura Haneem

Advanced Informatics School, Universiti Teknologi Malaysia, Malaysia

Abstract: Effort estimation happens in both the software maintenance and the software development phases. Researchers have invented many techniques for estimating change effort prior to implementing the actual change, one of which is impact analysis. A challenge in estimating change effort during software development is managing the inconsistent states of software artifacts, i.e., partially completed artifacts and artifacts yet to be developed. This paper presents a novel model for estimating change effort during the software development phase through the integration of static and dynamic impact analysis. Three case studies of software development projects were selected to evaluate the effectiveness of the model using the Mean Magnitude of Relative Error (MMRE) and Percentage of Prediction (PRED) metrics. The results indicate that the model has 22% MMRE on average and that the accuracy of our prediction was more than 75% across all case studies.
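
The two evaluation metrics are standard and easy to state in code; the effort figures below are invented placeholders:

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """PRED(25): fraction of estimates whose relative error is within 25%."""
    hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
    return hits / len(actual)

actual = [40.0, 55.0, 32.0]          # person-hours actually spent (made up)
predicted = [36.0, 60.0, 30.0]       # model's change-effort estimates (made up)
print(mmre(actual, predicted), pred(actual, predicted))
```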

Keywords: Software development, change impact analysis, change effort estimation, impact analysis, effort estimation.

Received February 18, 2015; accepted September 26, 2016


Prediction of Future Vulnerability Discovery in Software Applications using Vulnerability Syntax Tree (PFVD-VST)

Kola Periyasamy1 and Saranya Harirangan2

1Assistant Professor, Department of Information Technology, Madras Institute of Technology, India

2Assistant Professor, Department of Information Technology, Adhiparasakthi Engineering College, India

Abstract: Software applications are the origin from which vulnerabilities spread to systems, networks, and other software applications. A Vulnerability Discovery Model (VDM) helps to uncover the susceptibilities in the problem domain, but protecting software applications from known and unknown vulnerabilities is quite difficult and also requires a large database to store the history of attack information. We propose a vulnerability prediction scheme named Prediction of Future Vulnerability Discovery in Software Applications using Vulnerability Syntax Tree (PFVD-VST), which consists of five steps that address the problem of new vulnerability discovery and prediction. First, classification and clustering are performed based on the software application's name, status, phase, category, and attack types. Second, code quality is analyzed with the help of code quality measures such as cyclomatic complexity, function point analysis, coupling, and cloning between objects. Third, a Genetic-based Binary Code Analyzer (GABCA) is used to convert the source code to binary code and evaluate each bit of it. Fourth, a Vulnerability Syntax Tree (VST) is trained with the help of vulnerabilities collected from the National Vulnerability Database (NVD). Finally, a combined Naive Bayesian and Decision Tree based prediction algorithm is implemented to predict future vulnerabilities in new software applications. Experimental results show that the prediction rate, recall, and precision of the system improve significantly.
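
Only the final prediction step lends itself to a compact sketch: combining Naive Bayes with a decision tree, here via scikit-learn soft voting over invented stand-in features; the paper's actual feature vectors (code-quality metrics, NVD-derived VST features) and its exact combination rule are not reproduced:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

X = np.random.rand(200, 6)                 # stand-in code-quality features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # stand-in "vulnerable" label

clf = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    voting="soft",                          # average predicted probabilities
)
clf.fit(X[:150], y[:150])
print("accuracy:", clf.score(X[150:], y[150:]))
```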

Keywords: Vulnerability discovery, prediction, classification and clustering, binary code analyzer, code quality metrics, vulnerability syntax tree.

Received October 30, 2014; accepted April 21, 2016


An Efficient Algorithm for Extracting Infrequent Itemsets from Weblog

Brijesh Bakariya1, Ghanshyam Thakur2, and Kapil Chaturvedi3

1Department of Computer Science and Engineering, I.K. Gujral Punjab Technical University, India

2Department of Computer Applications, Maulana Azad National Institute of Technology, India

3Department of Computer Applications, Rajiv Gandhi Proudyogiki Vishwavidyalaya, India

Abstract: Weblog data contains unstructured information, which makes extracting frequent patterns from weblog databases a very challenging task. A power set lattice strategy is adopted for handling this kind of problem: the top level of the lattice contains the full set and the bottom level the empty set. Most algorithms follow a bottom-up strategy, i.e., combining smaller sets into larger ones, and efficient lattice traversal techniques exist that quickly identify all the long frequent itemsets and, if required, their subsets. This strategy is suitable for discovering frequent itemsets, but it may not be worthwhile for infrequent itemsets. In this paper, we propose the Infrequent Itemset Mining for Weblog (IIMW) algorithm, a top-down, breadth-first, level-wise algorithm for discovering infrequent itemsets. We compared IIMW to Apriori-Rare and Apriori-Inverse and evaluated the results with different parameters such as candidate itemsets, frequent itemsets, time, transaction database, and support threshold.
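
A plain level-wise enumeration of rare (infrequent but present) itemsets conveys what the mining step targets, though IIMW itself traverses the lattice top-down; the toy weblog sessions below are invented:

```python
from itertools import combinations

def rare_itemsets(transactions, max_sup, max_len=3):
    """Enumerate itemsets whose support is below max_sup but nonzero: a
    brute-force stand-in for IIMW, whose top-down breadth-first lattice
    traversal and weblog preprocessing are not reproduced here."""
    items = sorted({i for t in transactions for i in t})
    rare = {}
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if 0 < sup < max_sup:
                rare[cand] = sup
    return rare

weblog = [{"home", "login"}, {"home", "cart"},
          {"home", "login", "cart"}, {"home", "faq"}]
print(rare_itemsets(weblog, max_sup=2))
```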

Keywords: Infrequent itemsets, lattice, frequent itemsets, weblog, support threshold.

Received September 6, 2014; accepted March 24, 2016

 