January 2019, No.1

Frequent Spatio-Temporal Sequential Pattern Mining Based on Support Index and Event Index

 

A Rama Reddy1, Gurram Sunitha2, and Vankadara Saritha3

1,2Department of CSE, S.V. University, India

3SCOPE, VIT University, India

Abstract: The mobility of objects causes changes in their location with respect to time. Discovering patterns among such changes is a challenging problem and helps in determining frequently followed paths. Sequential pattern mining algorithms designed for traditional databases may lose spatio-temporal correlations because they do not properly account for the properties of time and space. In this paper, an algorithm is proposed for mining frequent spatio-temporal sequential patterns, based on a support index as well as an event index, with the database stored in a different format. In general, if only support is taken into consideration, spurious results may be obtained or interesting results may be missed. The algorithm therefore introduces two new parameters, the support index and the event index, which are used to scrutinize the candidate sequences. The proposed algorithm generates the exact set of frequent sequential patterns. It is compared with Slicing-STS-Miner and MST-ITP and shown to outperform them by a factor of two to three.
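
Illustration (not from the paper): a minimal sketch of how the support of a candidate pattern is counted over a sequence database. The paper's support index and event index are additional screening parameters defined in the full text; the helpers and the toy (region, event) data below are hypothetical.

    # Sketch: counting the support of a candidate sequential pattern over a
    # database of event sequences. The support-index / event-index thresholds
    # the paper adds are not reproduced here; everything below is illustrative.

    def is_subsequence(pattern, sequence):
        """True if `pattern` occurs in `sequence` preserving order."""
        it = iter(sequence)
        return all(event in it for event in pattern)

    def support(pattern, database):
        """Fraction of sequences in the database containing the pattern."""
        return sum(is_subsequence(pattern, seq) for seq in database) / len(database)

    # Toy spatio-temporal sequences: each element is a (region, event) pair.
    db = [
        [("R1", "enter"), ("R2", "enter"), ("R3", "enter")],
        [("R1", "enter"), ("R3", "enter")],
        [("R2", "enter"), ("R3", "enter")],
    ]
    print(support([("R1", "enter"), ("R3", "enter")], db))  # 2/3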

Keywords: Spatio-temporal, support index, event index, frequent, sequential pattern.

Received March 17, 2015; accepted November 29, 2015

Full text  

 

 

Towards Automated Testing of Multi-agent Systems Using Prometheus Design Models


Shafiq Rehman1, Aamer Nadeem2, and Muddassar Sindhu3

1, 2Center for Software Dependability, Capital University of Science and Technology, Pakistan

3Department of Computer Science, Quaid-i-Azam University, Pakistan


Abstract: Multi-agent systems (MAS) are used for a wide range of applications. Goals and plans are the key premise for achieving MAS targets. Correct execution and coverage of plans and achievement of goals build confidence in a MAS, and proper identification of all possible faults in its operation contributes to that confidence. In this paper, we devise a model-based approach that ensures goal and plan coverage. A fault model is defined covering MAS faults related to goal and plan execution and interactions. We create a test model from Prometheus design artifacts, i.e., the goal overview, scenario overview, and agent and capability overview diagrams. New coverage criteria are defined for fault identification, test paths are identified from the test model, and test cases are generated from the test paths. The technique is evaluated on an actual MAS implementation in JACK by executing more than 100 different test cases; the code is instrumented for coverage analysis and faults are injected into the MAS. The approach successfully finds the injected faults by applying test cases for the coverage-criteria paths during MAS execution. The goal-plan coverage criterion proved most effective for fault detection, while the scenario, capability, and agent coverage criteria have relatively less scope for fault identification.
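
As a sketch of the path-identification step (the goal model, node names, and helper below are hypothetical, not the paper's artifacts), enumerating every root-to-leaf path of a goal-plan graph yields candidate test paths:

    # Sketch: enumerating test paths over a goal-plan graph built from
    # Prometheus design artifacts. The graph below is made up; the paper
    # derives its test model from the actual overview diagrams.

    def test_paths(graph, node, path=None):
        """Yield every root-to-leaf path; each path is one test-path candidate."""
        path = (path or []) + [node]
        children = graph.get(node, [])
        if not children:            # leaf: a complete goal-plan path
            yield path
        for child in children:
            yield from test_paths(graph, child, path)

    # Hypothetical goal overview: goal -> sub-goals/plans that achieve it.
    goal_model = {
        "G_manage_order": ["P_validate", "G_fulfil"],
        "G_fulfil": ["P_ship", "P_notify"],
    }
    for p in test_paths(goal_model, "G_manage_order"):
        print(" -> ".join(p))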

 

Keywords: Goal and sub-goal coverage, MAS fault identification, model-based goal-plan coverage.

Received April 18, 2016; accepted July, 2014

Full text 
 

A Novel Approach for Segmentation of Human Metaphase Chromosome Images Using Region Based Active Contours

Tanvi Arora1 and Renu Dhir1

1Department of Computer Science and Engineering, Dr. B.R Ambedkar National Institute of Technology, India

Abstract: Chromosomes are the carriers of genetic information. A healthy human being has 46 chromosomes, and any alteration in either their number or their structure is diagnosed as a genetic defect. To uncover genetic defects, metaphase chromosomes are imaged and analyzed. Metaphase chromosome images often contain intensity inhomogeneity, which makes segmentation difficult. The difficulties caused by intensity inhomogeneity can be resolved by region-based active contour techniques, which use the local intensity values of regions near the objects and approximate the intensity values along both sides of the contour. In this work, a technique based on region-based active contours is proposed to segment the objects present in human metaphase chromosome images. The proposed technique is quite efficient with respect to the number of objects segmented. The method has been tested on the ADIR dataset, and the experimental results show good performance.
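
For illustration, a minimal sketch of the region-fitting idea behind region-based active contours, in the spirit of the classic Chan-Vese model; the curvature regularization and the local-intensity refinement the paper relies on for inhomogeneous images are omitted:

    import numpy as np

    # Minimal sketch of the region-fitting step at the heart of region-based
    # active contours: estimate the mean intensity on each side of the
    # contour and move pixels to the better-fitting region.

    def region_fit(image, mask, iters=50):
        for _ in range(iters):
            c_in = image[mask].mean()          # mean inside the contour
            c_out = image[~mask].mean()        # mean outside the contour
            # reassign each pixel to the region whose mean fits it better
            new_mask = (image - c_in) ** 2 < (image - c_out) ** 2
            if np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return mask

    img = np.random.rand(64, 64)
    img[20:40, 20:40] += 2.0                   # a bright synthetic blob
    seg = region_fit(img, img > img.mean())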

Keywords: Chromosomes, segmentation, active contours, intensity inhomogeneity.

Received December 8, 2015; accepted April 17, 2016

Full text  

 
 

A Reliable Peer-to-Peer Protocol for Multi-Robot Operating in Mobile Ad-Hoc Wireless Networks

 

Tarek Dandashy1, Mayez Al-Mouhamed2, and Irfan Khan2

1Department of Computer Science, Balamand University, Lebanon

2Department of Computer Engineering, KFUPM, KSA

Abstract: Cooperative behaviour in multi-robot systems is based on distributed negotiation mechanisms. A set of autonomous robots playing soccer may cooperate in deciding a suitable game strategy or role assignment. Degradation in broadcast and multicast services is widely observed due to the lack of reliable broadcast in current IEEE 802.11. A reliable, peer-to-peer (P2P), fast auction-based broadcast is proposed for a team of soccer-playing robots interconnected by an ad-hoc wireless mobile network. The auction broadcast includes a sequence order that determines the reply order of all nodes, which helps minimize the potential for MAC conflicts; repeated back-offs are undesirable, especially at low load. Uncoordinated negotiation leads to multiple outstanding auctions originated by distinct nodes, in which case the sequence order becomes useless because auction times are interleaved. An adaptive MAC is therefore proposed to dynamically adjust the reply order. The protocols are implemented as symmetric multi-threaded software on an experimental WLAN embedded system. The evaluation reports the distribution of auction completion times for peer-to-peer operations for both static and mobile nodes, and protocol trade-offs with respect to auction response time, symmetry and fairness, and power consumption are discussed. The proposed protocols are packaged as a library for multi-robot cooperative behaviours (CBs), and the evaluation shows the protocol preferences for behavioural primitives with specific communication patterns.
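
A toy sketch of the sequence-order idea (slot duration, message fields, and node names are made up, not the paper's protocol constants): each bidder delays its reply by its position in the broadcast order, so replies arrive in staggered slots instead of colliding at the MAC layer.

    import time

    SLOT_MS = 5  # hypothetical reply slot, milliseconds

    def reply_delay(auction_order, my_id):
        """Delay (seconds) before this node may transmit its bid."""
        return auction_order.index(my_id) * SLOT_MS / 1000.0

    # Hypothetical auction message carrying the reply sequence order.
    auction = {"auction_id": 7, "task": "defend_goal",
               "order": ["robot2", "robot5", "robot1"]}

    me = "robot5"
    time.sleep(reply_delay(auction["order"], me))  # wait for my slot
    print(f"{me} sends bid for auction {auction['auction_id']}")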

Keywords: Auction communication, cooperative multi-robot, distributed intelligence, peer-to-peer, wireless protocol.

Received December 6, 2015; accepted July, 2014

Full text  

 

 

A Data-Grouping-Aware Multiple Input Files Data Placement Method that Takes into Account Job Execution Frequency for Hadoop

Jia-xuan Wu, Chang-sheng Zhang, Bin Zhang, and Peng Wang

School of Computer Science and Engineering, Northeastern University, China

Abstract: Recent years have seen an increasing number of scientists employ data-parallel computing frameworks such as Hadoop to run data-intensive applications, and research on data-grouping-aware data placement for multiple input files in Hadoop has become increasingly popular. However, we observe that many data-grouping-aware placement schemes for multiple input files do not take the execution frequency of MapReduce jobs into consideration; our study shows that such schemes increase the data transmission between nodes. In this paper, we propose a data-grouping-aware multiple input files data placement method based on job execution frequency (DGAMF). The method first creates an inter-block Join-access correlation model from historical data, then divides the correlated blocks into groups according to this correlation and gives a mathematical model for data placement. The model guides correlated blocks to be placed together while maintaining node load balancing. Finally, correlated blocks of the same group are placed into the same set of nodes using the proposed placement algorithm, thereby effectively reducing the amount of data transmitted between nodes. The algorithm was verified in a Hadoop experimental environment; experimental results showed that the proposed method could effectively process massive amounts of data while significantly improving the execution efficiency of MapReduce.
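
A rough sketch of frequency-weighted grouping of the kind described (the job histories, threshold, and grouping helper are hypothetical; the paper's mathematical placement model and load balancing are not shown):

    from collections import defaultdict
    from itertools import combinations

    # Sketch: weight inter-block Join co-access by job execution frequency,
    # then greedily group strongly correlated blocks so each group can be
    # placed on the same set of nodes.

    jobs = [  # (blocks read by the job's Join, times the job ran) -- made up
        ({"A", "B"}, 10),
        ({"A", "B", "C"}, 3),
        ({"C", "D"}, 8),
    ]

    corr = defaultdict(int)
    for blocks, freq in jobs:
        for u, v in combinations(sorted(blocks), 2):
            corr[(u, v)] += freq              # frequency-weighted correlation

    def group(corr, threshold):
        """Union strongly correlated block pairs into placement groups."""
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                x = parent[x]
            return x
        for (u, v), w in corr.items():
            if w >= threshold:
                parent[find(u)] = find(v)
        groups = defaultdict(set)
        for b in parent:
            groups[find(b)].add(b)
        return list(groups.values())

    print(group(corr, threshold=8))  # e.g., {A, B} and {C, D}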

Keywords: Hadoop, multiple input files data placement, data-grouping-aware, job execution frequency, access correlation.

Received November 24, 2015; accepted July 28, 2016

Full text  

 
 

Enhancement of Human Visual Perception-based Image Quality Analyzer for Assessment of Contrast Enhancement Methods

Soong Chen1, Tiagrajah Janahiraman2, and Azizah Suliman3

1,3College of Information Technology, Universiti Tenaga Nasional, Malaysia

2College of Engineering, Universiti Tenaga Nasional, Malaysia

Abstract: In prior work, a Human Visual Perception (HVP)-based Image Quality Analyzer (IQA) was proposed that correlates with human judgment better than the existing IQAs commonly used for assessing contrast enhancement techniques. This paper highlights the shortcomings of the HVP-based IQA: high computational complexity, the need to tune six threshold parameters, and high sensitivity of performance to changes in those parameters' values. To overcome these problems, this paper proposes several enhancements: replacing local entropy with edge magnitude in sub-image texture analysis, down-sampling the image spatial resolution, removing luminance masking, and incorporating the well-known Weber-Fechner law of human perception. The enhanced HVP-based IQA requires far less computation (over 189 times less) while still showing excellent correlation with human judgment (Pearson Correlation Coefficient, PCC > 0.90; Root Mean Square Error, RMSE < 0.3410). It also requires tuning of only two threshold parameters while maintaining consistent performance across a wide range of parameter values, making it feasible for real-time video processing.
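
The Weber-Fechner law states that perceived magnitude grows logarithmically with the physical stimulus, P = k ln(S/S0). A tiny illustration (the constants are arbitrary, not the paper's):

    import numpy as np

    def perceived(S, k=1.0, S0=1.0):
        # Weber-Fechner: perceived magnitude is logarithmic in the stimulus.
        return k * np.log(S / S0)

    # Equal luminance steps near black are perceived as much larger than the
    # same steps near white -- the effect the enhanced IQA accounts for.
    print(perceived(20) - perceived(10))    # ~0.693
    print(perceived(210) - perceived(200))  # ~0.049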

Keywords: Contrast enhancement, histogram equalization, image quality, noise, Weber-Fechner.

Received October 4, 2015; accepted March 30, 2016

Full text  

 

 

A Model for English to Urdu and Hindi Machine Translation System using Translation Rules and Artificial Neural Network

Shahnawaz Khan and Imran Usman

Saudi Electronic University, Saudi Arabia


Abstract: This paper illustrates the architecture and working of a proposed multilingual machine translation system that translates from English to Urdu and Hindi. The system combines a translation-rules-based approach with an artificial neural network; efficient pattern matching and the ability to learn from examples make neural networks suitable for implementing a rule-based machine translation system. The paper also describes the importance of machine translation systems and the status of these languages in a multilingual country like India. Machine translation evaluation scores for the system's output were calculated using various measures, such as the n-gram BLEU score, F-measure, METEOR, and precision/recall. For around 500 Hindi test sentences the system achieved an n-gram BLEU score of 0.5903, a METEOR score of 0.7956, and an F-score of 0.7916; for Urdu it achieved an n-gram BLEU score of 0.6054, a METEOR score of 0.8083, and an F-score of 0.8250.
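
For reference, a sketch of the modified n-gram precision underlying BLEU (the brevity penalty and the geometric mean over n = 1..4 are omitted; the toy sentences are illustrative, not from the paper's test set):

    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def modified_precision(candidate, reference, n):
        # Clipped n-gram counts: a candidate n-gram is credited at most as
        # many times as it appears in the reference.
        cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        return clipped / max(sum(cand.values()), 1)

    cand = "یہ ایک کتاب ہے".split()      # toy candidate Urdu output
    ref = "یہ ایک اچھی کتاب ہے".split()  # toy reference translation
    print(modified_precision(cand, ref, 1))  # 4/4 unigrams match -> 1.0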

Keywords: Machine translation, artificial neural network, English, Hindi, Urdu.

Received September 19, 2015; accepted June 8, 2016

Full text
 

Assessing Impact of Class Change by Mining Class Associations

Anshu Parashar and Jitender Chhabra

Department of Computer Engineering, National Institute of Technology, India

Abstract: Data mining plays a vital role in data analysis and also holds immense potential for mining software engineering data to manage design and maintenance issues. Change impact assessment is one of the crucial issues in software maintenance. In an object-oriented (OO) software system, classes are the core components and changes to them are inevitable, so the system must support the expected changes. In this paper, to assess the impact of a change to a class, we propose changeability measures obtained by mining associations among the classes. These measures estimate a) change propagation, by identifying its ripple effect; b) the change impact set of the classes; c) the changeability rank of the classes; and d) the class change cost. Further, we have performed an empirical study and evaluation to analyze our results. Our results indicate that by mining associations among classes, the development team can effectively estimate the probable impact of a class change. These measures can be very helpful when making changes to classes while maintaining the software system.
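
A minimal sketch of ripple-effect computation over mined class associations (the class graph and helper names are hypothetical, and the paper's measures are richer than this reachability count):

    from collections import deque

    # Once class associations are mined, the impact set of a change is the
    # set of classes reachable along reversed dependency edges (the ripple
    # effect); a simple changeability rank is the size of that set.

    uses = {  # class -> classes it depends on (hypothetical)
        "OrderView": ["Order"],
        "Invoice": ["Order", "Customer"],
        "Order": ["Customer"],
    }

    def impact_set(changed, uses):
        """Classes that may be affected if `changed` is modified."""
        dependants = {}
        for src, targets in uses.items():
            for t in targets:
                dependants.setdefault(t, []).append(src)
        seen, queue = set(), deque([changed])
        while queue:
            for d in dependants.get(queue.popleft(), []):
                if d not in seen:
                    seen.add(d)
                    queue.append(d)
        return seen

    print(impact_set("Customer", uses))  # {'Invoice', 'Order', 'OrderView'}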

 

Keywords: Mining software engineering data, object oriented system development, change propagation, change impact.

Received September 7, 2015; accepted February 21, 2016

Full text  

 

 

Comprehensive Stemmer for Morphologically Rich Urdu Language

Mubashir Ali1, Shehzad Khalid1, Ghayur Naqvi2, and Muhammad Saleemi1

1Department of Computer Engineering, Bahria University Islamabad, Pakistan

2Department of Electrical Engineering, Universidad de Chile, Chile

Abstract: The Urdu language is used by approximately 200 million people for spoken and written communication, and a large amount of unstructured Urdu textual data is available worldwide. Data mining techniques can be employed to extract useful information from such a large potential information base. Many text processing systems are available, but they are mostly language-specific, with a large proportion applicable only to English text. This is primarily due to language-dependent pre-processing, chiefly the stemming requirement. Stemming is a vital pre-processing step in text mining whose core aim is to reduce the many grammatical forms of a word (e.g., part of speech, gender, tense) to its root form. In this work, we have developed a rule-based comprehensive stemming method for Urdu text. The proposed Urdu stemmer can generate the stem of Urdu words as well as loan words (words borrowed from languages such as Arabic, Persian, and Turkish) by removing prefixes, infixes, and suffixes. The proposed technique introduces six novel Urdu infix word classes and a minimum word length rule. To cope with the challenge of Urdu infix stemming, we have developed infix stripping rules for the introduced infix word classes and generic rules for prefix and suffix stemming. The experimental results show the superiority of our proposed stemming approach compared to an existing technique.
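
A toy sketch of rule-based affix stripping (the affix lists and minimum-length value are illustrative placeholders, not the paper's rules; the six infix classes and their stripping rules are not reproduced here):

    # Sketch of rule-based prefix/suffix stripping of the kind the stemmer
    # applies. Affix lists and the minimum word length rule are made up.

    PREFIXES = ["بد", "با"]          # illustrative Urdu prefixes
    SUFFIXES = ["وں", "یں", "ہا"]    # illustrative Urdu suffixes
    MIN_STEM_LEN = 3                 # hypothetical minimum word length rule

    def stem(word):
        for p in PREFIXES:
            if word.startswith(p) and len(word) - len(p) >= MIN_STEM_LEN:
                word = word[len(p):]
                break
        for s in SUFFIXES:
            if word.endswith(s) and len(word) - len(s) >= MIN_STEM_LEN:
                word = word[:-len(s)]
                break
        return word

    print(stem("کتابوں"))  # -> کتاب ("books" -> "book")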

Keywords: Urdu stemmer, infix classes, infix rules, stemming rules, stemming lists.

Received September 5, 2015; accepted, 2014

Full text  

 

 

Human Facial Image Age Group Classification Based on Third Order Four Pixel Pattern (TOFP) of Wavelet Image

Rajendra Chikkala1, Sreenivasa Edara2, and Prabhakara Bhima3

1Department of CSE, Research Scholar, India

2Department of CSE, Dean ANU College of Engineering and Technology, India

3Department of ECE, Rector, JNTUK, India.

Abstract: This paper proposes a novel scheme for age group classification based on the Third Order Four Pixel Pattern (TOFP). TOFP patterns are identified in two forms of four-pixel diamond pattern in the third-order neighborhood: outer diamond and inner diamond patterns. The paper derives the Grey-Level Co-occurrence Matrix (GLCM) of a wavelet image, generated from the original image, based on the values of the Outer Diamond Corner Pixels (ODCP) and Inner Diamond Corner Pixels (IDCP) of the TOFP, without using the standard method for generating the co-occurrence matrix. Four GLCM features are extracted from the generated matrix, and based on these feature values the age group of the facial image is categorized. Human age is classified into six groups: Child (0-9 years), Adolescent (10-19 years), Young Adult (20-35 years), Middle-Aged Adult (36-45 years), Senior Adult (46-60 years), and Senior Citizen (over 60 years). The proposed method is tested on different databases and comparative results are given.
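
For orientation, a sketch of standard GLCM construction and two Haralick-style features; the paper instead pairs the ODCP/IDCP diamond-corner pixels on a wavelet sub-band, so only the pixel pairing below would differ:

    import numpy as np

    def glcm(img, dx=1, dy=0, levels=8):
        # Count co-occurrences of gray levels at a fixed pixel offset,
        # then normalize to a joint probability matrix.
        m = np.zeros((levels, levels))
        h, w = img.shape
        for y in range(h - dy):
            for x in range(w - dx):
                m[img[y, x], img[y + dy, x + dx]] += 1
        return m / m.sum()

    def features(p):
        i, j = np.indices(p.shape)
        contrast = ((i - j) ** 2 * p).sum()
        energy = (p ** 2).sum()
        homogeneity = (p / (1.0 + np.abs(i - j))).sum()
        return contrast, energy, homogeneity

    img = np.random.randint(0, 8, (32, 32))
    print(features(glcm(img)))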

Keywords: GLCM, pixel pattern, age group classification, four pixel pattern, outer diamond, inner diamond.

Received July 23, 2015; accepted March 24, 2016

Full text  

 

 

A Steganography Scheme Based on JPEG Cover Image with Enhanced Visual Quality and Embedding Capacity

Arup Pal1, Kshiramani Naik1, and Rohit Agarwal2

1Department of Computer Science and Engineering, Indian School of Mines, India

2Department of Computer Science and Engineering, JSS Academy of Technical Education, India

Abstract: Joint Photographic Experts Group (JPEG) is one of the most widely used lossy image compression standards, and JPEG-compressed images are commonly transmitted over public channels like the Internet. In this paper, the authors propose a steganography scheme in which the secret message is embedded into the JPEG version of a cover image. The scheme first applies block-based discrete cosine transformation (DCT), followed by a suitable quantization process, to the cover image to produce the transformed coefficients, which are then considered for embedding the secret message bits. Most earlier works hide one message bit in each selected coefficient, either by modifying the coefficients directly (e.g., the LSB method) or by indirectly modifying their magnitude (e.g., flipping the sign bit). In the proposed scheme, instead of embedding the secret message bits directly into the coefficients, a suitable indirect approach is adopted to hide two bits of the secret message in each selected DCT coefficient. As in the conventional approach, the modified coefficients are further compressed by entropy encoding. The scheme has been tested on several standard grayscale images, and the experimental results show performance comparable to some existing related works.
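
A generic sketch of the pipeline the scheme works in (the flat quantization step and one-bit parity embedding below are simplifications; the paper's indirect two-bit-per-coefficient method differs and avoids directly overwriting coefficient values):

    import numpy as np
    from scipy.fftpack import dct

    Q = 16  # hypothetical flat quantization step

    def dct2(b):
        # 8x8 block DCT, applied along both axes
        return dct(dct(b.T, norm='ortho').T, norm='ortho')

    def embed_bit(block, bit, pos=(2, 1)):
        coeff = np.round(dct2(block) / Q).astype(int)  # quantized coefficients
        if coeff[pos] % 2 != bit:                      # force parity = bit
            coeff[pos] += 1
        return coeff                                   # then entropy-encoded

    def extract_bit(coeff, pos=(2, 1)):
        return coeff[pos] % 2

    block = np.random.rand(8, 8) * 255
    stego = embed_bit(block, 1)
    assert extract_bit(stego) == 1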

Keywords: Chi-square attack, discrete cosine transformation (DCT), histogram, Joint Photographic Experts Group (JPEG), statistical steganalysis, steganography.

 

Received May 27, 2015; accepted October 19, 2015

Full text  

 

 

An Automatic Localization of Optic Disc in Low Resolution Retinal Images by Modified Directional Matched Filter

Murugan Raman1, Reeba Korah2, Kavitha Tamilselvan3

1Research Scholar, Anna University, India.

2 Alliance College of Engineering and Design, Alliance University, India.

3New Prince Shri Bhavani College of Engineering and Technology, India

Abstract: Automatic localization of the optic disc in retinal images is used to screen eye-related diseases such as diabetic retinopathy. Many techniques are available to detect the optic disc in high-resolution retinal images; unfortunately, no efficient methods are available for low-resolution retinal images. The objective of this paper is to develop an automated method for localizing the optic disc in low-resolution retinal images. The paper proposes modified directional matched filter parameters over the retinal blood vessels to localize the center of the optic disc. The proposed method was implemented in MATLAB and evaluated on both normal and abnormal low-resolution retinal images using a subset of the Optic Nerve Head Segmentation Dataset (ONHSD); the success percentage averaged 96.96%, with a processing time of 23 seconds.
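
A sketch of a directional matched filter bank of the textbook kind used to enhance vessels (kernel size, sigma, and orientation count below are common defaults, not the paper's modified low-resolution parameters, and the sketch is in Python rather than the paper's MATLAB):

    import numpy as np
    from scipy.ndimage import rotate, convolve

    def matched_kernel(size=15, sigma=2.0):
        """Gaussian-profile kernel matched to a vertical vessel segment."""
        x = np.arange(size) - size // 2
        profile = -np.exp(-x**2 / (2 * sigma**2))   # vessels are darker
        k = np.tile(profile, (size, 1))
        return k - k.mean()                          # zero-mean filter

    def filter_bank(image, n_angles=12):
        base = matched_kernel()
        responses = [convolve(image, rotate(base, a, reshape=False))
                     for a in np.arange(0, 180, 180 / n_angles)]
        return np.max(responses, axis=0)             # best orientation per pixel

    vessel_map = filter_bank(np.random.rand(64, 64))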

Keywords: Retinal image processing, diabetic retinopathy (DR), optic disc, blood vessels, modified directional matched filter.

Received May 7, 2015; accepted October 7, 2015

Full text  

 

 

Shamir’s Key Based Confidentiality on Cloud Data Storage

Kamalraj Durai1, Balamurugan Balasubramanian1, Jegadeeswari Sathyanarayanan1, and Sugumaran Muthukumarasamy2

1Research Scholar, Bharathiar University, India

2Pondicherry Engineering College, India

Abstract: Cloud computing is a flexible, cost-effective, and proven delivery platform for providing business or consumer services over the Internet. It supports distributed service over the Internet as a service-oriented, multi-user, multi-domain administrative infrastructure, and is therefore more easily affected by security threats and vulnerabilities. Cloud computing acts as a new paradigm providing a dynamic environment for end users and guaranteeing Quality of Service (QoS) for data confidentiality. A Trusted Third Party can ensure the authentication, integrity, and confidentiality of the data and communications involved, but fails to maintain a high confidentiality rate at the horizontal level of privacy-preserving cloud services, and TrustedDB-style cloud privacy preservation fails to secure the query parser's results for generating efficient query plans. To generate efficient privacy-preserving query plans on cloud data, we propose the Shamir's Key Distribution based Confidentiality (SKDC) scheme, which achieves a higher confidentiality rate by storing cloud data using polynomial interpolation. The SKDC scheme creates a polynomial of degree k-1 with the secret as the first coefficient and the remaining coefficients picked at random, improving the level of privacy preservation in the cloud infrastructure. An experimental evaluation of SKDC was carried out on factors such as system execution time, confidentiality rate, and query processing rate, showing improved confidentiality and query processing efficiency when storing and retrieving data in the cloud.
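
A minimal sketch of Shamir's (k, n) threshold sharing over a prime field, the mechanism SKDC builds on (the field size and function names are illustrative, not the paper's implementation): the secret is the constant term of a random degree-(k-1) polynomial, each share is a point on it, and any k shares recover the secret by Lagrange interpolation at x = 0.

    import random

    P = 2**127 - 1  # a Mersenne prime; the field modulus (illustrative)

    def make_shares(secret, k, n):
        # Secret is the first coefficient; the rest are random.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):  # any k shares
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            # pow(den, -1, P) is the modular inverse (Python 3.8+)
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = make_shares(123456789, k=3, n=5)
    assert reconstruct(shares[:3]) == 123456789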

Keywords: Confidentiality, privacy, cloud computing, SKDC, privacy preserving, polynomial interpolation.

Received April 25, 2015; accepted January 28, 2016

Full text  

 

 

GLCM Based Parallel Texture Segmentation Using a Multicore Processor

Shefa Dawwd

Department of Computer Engineering, Mosul University, Iraq

Abstract: This paper investigates the use of the gray-level co-occurrence matrix (GLCM) for supervised texture segmentation. In most texture segmentation methods, the processing algorithm is applied to a sliding window of the original image rather than to the entire image. To attain good segmentation accuracy, especially at boundaries, either an optimal window size is determined or windows of various sizes are used; both options are very time-consuming. Here, a new technique is proposed to build an efficient GLCM-based texture segmentation system. The scheme uses a fixed window with variant apertures, which reduces the computation overhead and resources required to compute the GLCM and improves segmentation accuracy. Image windows are multiplied by a matrix of local operators; the GLCM is then computed, features are extracted and classified, and the segmented image is produced. To reduce segmentation time, two similarity metrics are used to classify the texture pixels: a Euclidean metric measures the distance between the current and previous GLCMs, and only if it exceeds a predefined threshold are the GLCM descriptors recomputed, while a Gaussian metric serves as the distance measure between two GLCM descriptors. Furthermore, a median filter is applied to the segmented image, and the transition and misclassified regions are refined. The proposed system is parallelized and implemented on a multicore processor.
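
A sketch of the Euclidean-distance shortcut described above (the threshold and the stand-in classifier are made up): descriptors are recomputed only when consecutive window GLCMs actually differ.

    import numpy as np

    def euclidean(g1, g2):
        return np.sqrt(((g1 - g2) ** 2).sum())

    def classify_windows(glcms, threshold=0.05):
        labels, prev, prev_label = [], None, None
        for g in glcms:
            if prev is None or euclidean(g, prev) > threshold:
                prev_label = expensive_classify(g)   # descriptors + classifier
            labels.append(prev_label)                # else reuse previous label
            prev = g
        return labels

    def expensive_classify(g):                       # stand-in classifier
        return int(g.trace() > 0.5)

    print(classify_windows([np.eye(4) / 4, np.eye(4) / 4, np.ones((4, 4)) / 16]))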

Keywords: GLCM, Haralick descriptors, median filter, moving window, texture segmentation.

Received November 13, 2014; accepted July 9, 2014

Full text  

 

 

A Real Time Extreme Learning Machine for Software Development Effort Estimation

S. K. Pillai1 and M. K. Jeyakumar2

1 Department of Electrical and Electronics Engineering, K. N. S. K. College of Engineering, India

2 Department of Computer Applications, Noorul Islam University, India

Abstract: Software development effort estimation remains a challenging task for project managers in the software industry, and new techniques continue to be applied to it. Evaluating accuracy is a major activity, as many methods have been proposed in the literature. Here, we develop a new algorithm called Real Time Extreme Learning Machine (RT-ELM), based on an online sequential learning algorithm. The online sequential learning algorithm is modified so that the extreme learning machine learns continuously as new projects are developed in a software development organization. The performance of the real-time extreme learning machine is compared with the conventional train-and-test methodology, and studies were also conducted using radial basis function and additive hidden nodes. The accuracy of the RT-ELM with continuous learning is better than that of the conventional training and testing method, and the results also indicate that the relative performance of radial basis function and additive hidden nodes is data-dependent. The results are validated using data from academic settings and industry.
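
A basic ELM sketch for context (batch form with sigmoid additive nodes; the recursive OS-ELM update that RT-ELM modifies is omitted, and the toy effort data is made up): hidden-layer weights are random and fixed, and only the output weights are solved by least squares.

    import numpy as np

    rng = np.random.default_rng(0)

    def elm_train(X, y, hidden=20):
        W = rng.normal(size=(X.shape[1], hidden))   # random input weights
        b = rng.normal(size=hidden)                 # random biases
        H = 1 / (1 + np.exp(-(X @ W + b)))          # sigmoid additive nodes
        beta = np.linalg.pinv(H) @ y                # least-squares output weights
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return (1 / (1 + np.exp(-(X @ W + b)))) @ beta

    # Toy data: features = (size KLOC, team experience), target = effort.
    X = rng.random((30, 2)); y = 3 * X[:, 0] + X[:, 1]
    model = elm_train(X, y)
    print(elm_predict(model, X[:3]), y[:3])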

Keywords: Software effort estimation, extreme learning machine, real time, radial basis function.

Received October 5, 2014; accepted March 30, 2016

Full text  

 

 

New Fool Proof Examination System through Color Secret Sharing Scheme and Signature Authentication

Mohamed Fathimal1 and Arockia Jansirani2

1Department of Computer Science and Engineering, SRM Institute of Science and Technology, India

2Department of Computer Science and Engineering, Manonmaniam Sundaranar University, India

Abstract: There have been widespread allegations about leakage of question papers for a number of subjects in recently held Secondary School Leaving Certificate examinations. The leakage is due to the practice of using printed question papers, and such incidents and the subsequent cancellation of examinations happen frequently, creating political and social embarrassment and causing loss of money and time. This paper proposes a new foolproof examination system based on tamperproof e-question paper preparation and secure transmission using a secret sharing scheme. The application is perfectly secure because the proposed method automatically embeds the corresponding institute's seal in the form of the key, making it easy to trace the culprit responsible for leaking question papers. The scheme has reduced reconstruction time because reconstruction involves only XOR operations apart from authentication, and it recovers the original secret image without any loss. The existing visual cryptographic scheme recovers a half-toned secret image with an average PSNR of 24 dB; the proposed method with authentication recovers the image at a PSNR of 64.7 dB, which is greater than that of the existing method. In addition, the method does not suffer from pixel expansion.
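
A sketch of lossless XOR-based (n, n) sharing of a grayscale image, the property the lossless-recovery claim rests on (the institute-seal key embedding and signature authentication are not shown; share count and data are illustrative):

    import numpy as np

    def make_shares(secret, n=3):
        # n-1 random shares, plus one share that XORs the secret with all
        # of them; stacking all n shares by XOR restores the image exactly.
        rng = np.random.default_rng(0)
        shares = [rng.integers(0, 256, secret.shape, dtype=np.uint8)
                  for _ in range(n - 1)]
        last = secret.copy()
        for s in shares:
            last ^= s
        return shares + [last]

    def reconstruct(shares):
        out = np.zeros_like(shares[0])
        for s in shares:
            out ^= s
        return out

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    assert np.array_equal(reconstruct(make_shares(img)), img)  # lossless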

Keywords: Visual cryptography, secret sharing scheme, examination system, information security, authentication.

Received August 20, 2014; accepted March 9, 2016

Full text  

 

 

Contactless Palmprint Verification System using 2-D Gabor Filter and Principal Component Analysis

 

Saravanan Chandran and Satya Verma

Computer Centre, National Institute of Technology, India

Abstract: Palmprint verification is gaining popularity in the biometrics research area. The palmprint provides many advantages over other biometric modalities, such as a low-cost acquisition device, high verification accuracy, fast feature extraction, stability, and unique characteristics. In this article a new palmprint verification model is proposed using Sobel edge detection, a 2-D Gabor filter, and Principal Component Analysis (PCA). The proposed model is tested on the IIT Delhi palmprint database and achieves a 99.5% Total Success Rate and a 0.5% Equal Error Rate. These experimental results confirm that the proposed model compares favourably with other existing biometric techniques.
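
For illustration, a standard 2-D Gabor kernel of the kind used to extract palmprint line features before PCA (the parameter values are common defaults, not the ones tuned in the paper):

    import numpy as np

    def gabor_kernel(size=31, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        # Gaussian envelope modulated by a sinusoidal carrier
        return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / lam + psi))

    # A small filter bank over four orientations, as is typical for palmprints.
    bank = [gabor_kernel(theta=t) for t in np.arange(0, np.pi, np.pi / 4)]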

Keywords: Biometric, palmprint, 2-D Gabor filter, PCA.

Received August 19, 2015; accepted January 13, 2016

Full text  

 

Mining Consumer Knowledge from Shopping Experience: TV Shopping Industry

Chih-Hao Wen1, Shu-Hsien Liao2, and Shu-Fang Huang2

1Department of Logistics Management, National Defense University, Taiwan
2Department of Management Sciences and Decision Making, Tamkang University, Taiwan

Abstract: TV shopping has become far more popular in recent years. TV is now almost everywhere, and as people watch, they grow more and more accustomed to buying goods via TV shopping channels; even in recession, the mode is thriving and has become one of the most important modes of consumption. This study uses cluster analysis to identify the profiles of TV shopping consumers, and association analysis to recognize the rules linking TV shopping spokespersons and the commodities consumers buy. By depicting the marketing knowledge map of spokespersons, the best endorsement portfolio is found and recommendations are made. From the analysis of spokespersons, time periods, customer profiles, and products, four business modes of TV shopping are proposed for consumers: new product, knowledge, low price, and luxury product; related recommendations are also provided for industry reference.
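
A toy sketch of the support/confidence rule mining involved (the transactions and thresholds are made up, and rules are reported in one direction only for brevity):

    from itertools import combinations
    from collections import Counter

    transactions = [  # hypothetical purchase records
        {"spokesperson_A", "skincare"},
        {"spokesperson_A", "skincare", "jewelry"},
        {"spokesperson_A", "jewelry"},
        {"spokesperson_B", "kitchenware"},
    ]

    def rules(transactions, min_support=0.25, min_conf=0.6):
        n = len(transactions)
        counts = Counter()
        for t in transactions:
            for r in (1, 2):  # count single items and item pairs
                counts.update(frozenset(c) for c in combinations(sorted(t), r))
        for pair, cnt in counts.items():
            if len(pair) == 2 and cnt / n >= min_support:
                a, b = sorted(pair)
                conf = cnt / counts[frozenset([a])]  # confidence of a -> b
                if conf >= min_conf:
                    print(f"{a} -> {b}  support={cnt/n:.2f} conf={conf:.2f}")

    rules(transactions)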

Keywords: Consumer knowledge, data mining, TV shopping, association rules, clustering.

Received July 23, 2014; accepted June 26, 2016

Full text  


  

 
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 