September 2018, No. 5

Hyperspectral Image Segmentation Based on Enhanced Estimation of Centroid with Fast K-Means

Saravana Veligandan1 and Naganathan Rengasari2

1Center for Bioinformatics, Pondicherry University, India

2Department of CSE, Hindustan University, Chennai

 

Abstract: In this paper, the segmentation process is applied to hyperspectral satellite images. A novel approach, hyperspectral image segmentation based on enhanced estimation of centroid with unsupervised clustering algorithms such as fast k-means, fast k-means (weight), and fast k-means (careful seeding), is addressed. In addition, a cohesive image segmentation approach based on inter-band clustering and intra-band clustering is presented. The inter-band clustering is accomplished by the above clustering algorithms, while the intra-band clustering is carried out using the Particle Swarm Clustering algorithm (PSC) with enhanced estimation of centroid (EEOC). The hyperspectral bands are clustered, and from each cluster the single band with the highest variance is selected. This constitutes the reduced set of bands. Finally, PSC (EEOC) carries out the segmentation process on the reduced bands. We also compare the results produced by these methods through a statistical analysis based on the number of pixels, fitness value, and elapsed time.
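The inter-band clustering step can be illustrated with a minimal sketch: cluster the spectral bands with plain k-means, then keep the highest-variance band of each cluster as the reduced band set. This is an illustrative reading of the band-selection idea, not the authors' implementation; the data cube is synthetic and scikit-learn's KMeans stands in for the fast k-means variants.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))           # H x W x bands (synthetic stand-in)
bands = cube.reshape(-1, cube.shape[2]).T  # one row per spectral band

k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(bands)

selected = []
for c in range(k):
    members = np.where(labels == c)[0]
    variances = bands[members].var(axis=1)          # per-band variance over pixels
    selected.append(members[np.argmax(variances)])  # keep the max-variance band
print(sorted(selected))                             # indices of the reduced band set
```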

 

Keywords: Fast K-Means, Fast K-Means (weight), Fast K-Means (careful seeding), and particle swarm clustering algorithm.

 

Received March 16, 2015; accepted September 7, 2015

 

Full text  

 

 


Multi-Sensor Fusion based on DWT, Fuzzy Histogram Equalization for Video Sequence


Nada Habib1 and Saad Hasson2

1Technical College of Management, Middle Technical University, Baghdad, Iraq

2Department of Computer Science, College of Science, University of Babylon, Iraq

Abstract: Multi-sensor fusion is a process that combines two or more sensor datasets of the same scene into a single output containing all relevant information. The fusion process can work in the spatial domain or the transform domain. Spatial domain fusion methods are easy to implement and have low computational complexity, but they may produce blocking artefacts and loss of focus, meaning the fused image becomes blurred. In this paper, a fusion algorithm based on the Discrete Wavelet Transform, Fuzzy Histogram Equalization, and a de-blurring kernel is proposed to solve this problem. In addition, two fusion techniques, maximum selection and weighted average, were developed based on the mean statistical technique. The performance of the proposed method has been tested on real and synthetic datasets. Experimental results show that the proposed fusion method, with both traditional and developed fusion rules, improves the fused results.
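A minimal sketch of DWT-based fusion with the two rules named in the abstract: averaging for approximation coefficients and maximum selection for detail coefficients. It uses PyWavelets; the two random arrays stand in for co-registered sensor frames, and the fuzzy histogram equalization and de-blurring stages are not reproduced here.

```python
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet="db2"):
    ca_a, details_a = pywt.dwt2(img_a, wavelet)
    ca_b, details_b = pywt.dwt2(img_b, wavelet)
    ca_f = (ca_a + ca_b) / 2.0                      # weighted-average rule
    details_f = tuple(
        np.where(np.abs(da) >= np.abs(db), da, db)  # maximum-selection rule
        for da, db in zip(details_a, details_b)
    )
    return pywt.idwt2((ca_f, details_f), wavelet)

rng = np.random.default_rng(1)
a, b = rng.random((128, 128)), rng.random((128, 128))
fused = fuse_dwt(a, b)
print(fused.shape)  # same size as the inputs
```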

Keywords: Multi-sensor fusion, Discrete Wavelet Transform (DWT), Fuzzy Histogram Equalization, De-blurring Kernels, Principal Component Analysis (PCA).

 

Received September 4, 2015; accepted December 27, 2015

 

Full text

 

 


Transfer-based Arabic to English Noun Sentence Translation Using Shallow Segmentation

Namiq Sultan Abdullah

Department of Electrical and Computer Engineering, University of Duhok, Iraq

Abstract: The quality of machine translation systems decreases considerably when dealing with long sentences. In this paper, a transfer-based system is developed for translating long Arabic noun sentences into English. A simple method is used to divide a long sentence into phrases based on conjunctions, prepositions, and quantifier particles. The phrases of the source sentence are translated individually, and at the end of the translation process the target sentence is constructed by connecting the translated phrases. The system was tested on 100 long thesis titles from the management and economy domain. The results show that the method performs well on most of the tested sentences.
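A toy illustration of the shallow segmentation idea: split a long noun sentence at conjunction and preposition particles, so each phrase can be translated on its own before the target sentence is reassembled. The particle list here is a small sample and the space-separated tokenization is a simplification; neither reflects the paper's actual particle inventory.

```python
PARTICLES = {"و", "في", "من", "على"}  # sample conjunctions/prepositions

def split_phrases(tokens):
    """Start a new phrase whenever a particle is encountered."""
    phrases, current = [], []
    for tok in tokens:
        if tok in PARTICLES and current:
            phrases.append(current)
            current = [tok]
        else:
            current.append(tok)
    if current:
        phrases.append(current)
    return phrases

sentence = "دراسة تحليلية في إدارة الجودة و أثرها على الأداء".split()
for phrase in split_phrases(sentence):
    print(" ".join(phrase))  # each phrase would be translated individually
```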

Keywords: machine translation, transfer-based approach, noun phrases, sentence partitioning

Received May 15, 2015; accepted March 24, 2016

 

Full text 

 

Image Steganography Based on Hamming Code and Edge Detection

Shuliang Sun

School of Electronics and Information Engineering, Fuqing Branch of Fujian Normal University, China

Abstract: In this paper, a novel algorithm based on Hamming coding and 2^k correction is proposed. The new method also utilizes Canny edge detection and a coherent bit length. First, the Canny edge detector is applied to the cover image, and only edge pixels are selected for embedding the payload. To enhance security, the edge pixels are scrambled. Hamming encoding is then applied to the secret data before embedding. The coherent bit length L is calculated from the relevant edge pixels, and L bits of the payload message are embedded at a time. Finally, 2^k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method outperforms other methods in PSNR, capacity, and universal image quality index (Q).
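As an illustration of the encoding stage, the sketch below implements a textbook Hamming(7,4) encoder of the kind the abstract applies to the secret bits before embedding. The generator matrix is the standard one, not necessarily the paper's exact construction.

```python
import numpy as np

G = np.array([[1, 1, 0, 1],   # generator matrix for Hamming(7,4)
              [1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

def hamming_encode(nibble):
    """Encode 4 data bits into a 7-bit codeword (mod-2 arithmetic)."""
    return G.dot(np.asarray(nibble)) % 2

print(hamming_encode([1, 0, 1, 1]))  # 7-bit codeword ready for embedding
```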

Keywords: Hamming code, 2^k correction, coherent bit length, Canny edge detector.

Received October 06, 2015; accepted April 30, 2016

Full text  

 

 

Traceability between Code and Design Documentation in Database Management System: A Case Study

Mohammed Akour, Ahmad Saifan, and Osama Ratha'an

Computer Information Systems Department, Yarmouk University, Jordan

Abstract: Traceability builds strong connections, or links, between requirements and design; its main purpose is to maintain consistency between a high-level conceptual view and a low-level implementation view. The purpose of this paper is to achieve full consistency among all components across all phases in the Oracle Designer tool by allowing traceability to be carried out not only between requirements and design but also between code and design. We propose a new methodology to support traceability and completeness checking between the code and design of Oracle database applications. The new algorithm consists of a set of interrelated steps to initialize the comparison environment. An example of a Student Information System is used to illustrate the work.

Keywords: Traceability, Oracle Designer, Completeness Checking, Design, Source Code, Database, PL/SQL, Testing.

Received February 18, 2016; accepted June 26, 2016

Full text  

 


A Novel Architecture of Medical Image Fusion based on YCbCr-DWT Transform

Behzad Nobariyan1, Nasrin Amini2, Sabalan Daneshvar3, and Ataollah Abbasi4

1Faculty of Electrical Engineering, Sahand University of Technology, Iran

2Faculty of Biomedical Engineering, Islamic Azad University branch of Science and Research, Iran

3Faculty of Electrical and Computer Engineering, University of Tabriz, Iran

4Faculty of Electrical Engineering, Sahand University of Technology, Iran

Abstract: Image fusion is one of the most modern, accurate, and useful diagnostic techniques in medical imaging. Image fusion aims to solve the problem that no single system is able to integrate functional and anatomical information. Multi-modal fusion of brain images is very important for clinical applications. PET images indicate brain function, and SPECT indicates local performance in internal organs such as the heart and brain. Both of these are multi-spectral images with low spatial resolution. The MRI image shows brain tissue anatomy and contains no functional information. A good fusion scheme should preserve the spectral characteristics of the source multispectral image as well as the high spatial resolution characteristics of the source panchromatic image. There are many methods for image fusion, but each has certain limitations. Studies have shown that YCbCr preserves spatial information and DWT preserves spectral information without distortion. The proposed method combines the advantages of both and preserves spatial and spectral information without distortion. Visual and statistical analyses show that our algorithm considerably enhances fusion quality in terms of discrepancy, average gradient, and mutual information, compared with fusion methods including HIS (Hue-Intensity-Saturation), YCbCr, Brovey, Laplacian pyramid, Contourlet, and DWT.

Keywords: YCbCr, DWT, PET, SPECT, image fusion

 

Received April 24, 2015; accepted March 9, 2016

 

Full text 

 

 

Google N-Gram Viewer does not Include Arabic Corpus! Towards N-Gram Viewer for Arabic Corpus

Izzat Alsmadi1 and Mohammad Zarour2

1Computer Information Systems Department, Yarmouk University, Jordan

2Information Systems Department, Prince Sultan University, KSA

Abstract: The Google N-gram viewer is one of Google's newly published services. Google has digitized a large number of books in different languages and populated the corpora from over 5 million books published up to 2008. The service allows users to enter word queries; the tool then charts time-based data showing the frequency of usage of the query words. Although Arabic is one of the most widely spoken languages in the world, it is not included among the corpora indexed by the Google N-gram viewer. This research work discusses the development of a large Arabic corpus and its indexing using N-grams for inclusion in the Google N-gram viewer. A showcase is presented to build a dataset that initiates the process of digitizing Arabic content and preparing it for incorporation in the Google N-gram viewer. One of the major goals of including Arabic content in Google N-gram is to enrich Arabic public content, which has been very limited in comparison with the number of Arabic speakers. We believe that adoption of the Arabic language by the Google N-gram viewer can significantly benefit researchers in different fields related to the Arabic language and social sciences.
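A toy sketch of the indexing step: count word n-grams per year so that usage frequency can be charted over time, in the spirit of the Google N-gram viewer. The two-entry corpus is a placeholder; real input would be digitized books with publication dates.

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield all n-grams of a token list as tuples."""
    return zip(*(tokens[i:] for i in range(n)))

corpus = [  # (year, text) pairs standing in for digitized Arabic books
    (1990, "النص العربي مثال النص"),
    (1991, "النص العربي مثال آخر"),
]

counts = {}
for year, text in corpus:
    tokens = text.split()
    counts.setdefault(year, Counter()).update(ngrams(tokens, 2))

for year, c in sorted(counts.items()):
    print(year, c.most_common(2))  # per-year bigram frequencies
```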

Keywords: Arabic language processing, corpus, Google N-gram viewer.

 

Received May 7, 2015; accepted September 20, 2015

 

 

A New Method for Curvilinear Text line Extraction and Straightening of Arabic Handwritten Text

Ayman Al Dmour1, Ibrahim El rube'2, and Laiali Almazaydeh3

1,3Faculty of Information Technology, Al-Hussein Bin Talal University, Jordan

2Department of Computer Engineering, Taif University, KSA

Abstract: Text line extraction is a critical step in layout analysis, one of the main subtasks of document image analysis. This paper presents a new method for curvilinear text line extraction and straightening in Arabic handwritten documents. The proposed method consists of two distinct steps. First, a text line is extracted based on a morphological dilation operation. Second, the extracted text line is straightened in two sub-steps: coarse tuning of the text line orientation based on the Hough transform, followed by fine tuning based on centroid alignment of the connected components that form the text line. The proposed approach has been extensively tested on samples from the benchmark KHATT and AHDB datasets. Experimental results show that the proposed method is capable of detecting and straightening curvilinear text lines even in challenging Arabic handwritten documents.

Keywords: Document image analysis, Arabic handwriting, Text line extraction, Hough transform

 

Received January 14, 2016; accepted May 11, 2016

 

 

Reverse Engineering of Object Oriented System using Hierarchical Clustering

Aman Jatain1 and Deepti Gaur2

1Amity University, India

2NCU University, India

Abstract: Nowadays, a common problem faced by the software community is understanding legacy code. A decade ago, legacy code referred to code written in languages like COBOL or FORTRAN. Today, software engineers primarily use object-oriented languages like C++ and Java. This implies that tomorrow's legacy code is being written today, because object-oriented programs are even more difficult and complex to understand, which leads to software that is vague and has insufficient design documentation. Object-oriented programming poses many problems for software developers in the maintenance phase, and reverse engineering methodologies can be applied to resolve them. In the literature, various techniques have been proposed to recover the architecture and components of legacy systems, and the use of clustering algorithms for reverse engineering and architecture recovery has recently been widely discussed. Methodology: In this paper, Rational Software Architect (RSA) is used to recover the design from source code during the reverse engineering process, and a feature selection method is then applied to select the features of the software system. After calculating the similarity measure between classes, hierarchical clustering is used to group similar classes into one component. The proposed technique is demonstrated by a case study.
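The clustering step can be sketched as follows: given a class-by-feature matrix extracted from the recovered design, group similar classes into components with agglomerative hierarchical clustering. The feature matrix below is random placeholder data; the linkage method and component count are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
class_features = rng.random((12, 8))  # 12 classes x 8 design features (synthetic)

# Average-linkage hierarchical clustering over pairwise class similarity
Z = linkage(class_features, method="average")
components = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 components
print(components)  # component label assigned to each class
```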

Keywords: Clustering, Feature Selection, Hierarchical, Reverse Engineering, Rational Software Architect.

 

Received April 28, 2015; accepted November 29, 2015

 

Full text  

 

 

 

Capacity Enhancement Based on Dynamically Adapted PF Scheduling Algorithm for LTE Downlink System

Mohamad Elhadad, El-Sayed El-Rabaie, and Mohammed Abd-Elnaby

Department of Electronics and Electrical Communications, Menoufia University, Egypt 

Abstract: Orthogonal Frequency Division Multiplexing (OFDM) with dynamic scheduling and resource allocation is a key component of most emerging broadband wireless access networks such as WiMAX and Long Term Evolution (LTE). Resource allocation mechanisms in LTE are critical, because scheduling algorithms bear the main responsibility for determining how to allocate radio resources to different users. In this paper, a dynamically adapted Proportional Fair (PF) scheduling algorithm for capacity enhancement of the LTE system is proposed. A performance comparison is presented with the conventional PF downlink scheduler, which is characterized by high fairness but low throughput, and with the Best-CQI scheduling algorithm, which is characterized by high throughput but poor fairness. Simulation results show that the proposed algorithm enhances the overall system capacity while also providing fairness in the distribution of resources. The proposed algorithm improves the average cell throughput by more than 31%, with only a slight degradation in fairness compared with the conventional PF scheduling algorithm.
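A minimal sketch of the conventional PF rule the paper builds on: each scheduling interval goes to the user maximizing the ratio of instantaneous achievable rate to smoothed average throughput. Rates are synthetic and the averaging window tc is an illustrative value; the paper's dynamic adaptation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_tti, tc = 4, 100, 10.0
avg_thr = np.full(n_users, 1e-6)  # tiny epsilon avoids division by zero at start

for _ in range(n_tti):
    inst_rate = rng.random(n_users)            # per-TTI achievable rates
    winner = np.argmax(inst_rate / avg_thr)    # PF decision rule
    served = np.zeros(n_users)
    served[winner] = inst_rate[winner]
    avg_thr = (1 - 1 / tc) * avg_thr + served / tc  # EWMA throughput update

print(avg_thr)  # long-run per-user throughput under PF
```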

Keywords: LTE, Packet Scheduling, PF, Fairness, OFDM.

Received April 24, 2015; accepted March 13, 2016

 

Full text  

 

 

 

Phishing Detection using RDF and Random Forests

Vamsee Muppavarapu, Archanaa R, and Shriram Vasudevan

Department of Computer Science and Engineering, Amrita Vishwa Vidyapeetham University, India

Abstract: Phishing is one of the major threats of the internet era. Phishing is a smart process in which a legitimate website is cloned and victims are lured to the fake website to provide their personal and confidential information, which can prove costly. Although most websites give users a disclaimer warning about phishing, users tend to neglect it; this is not a fully adequate measure on the websites' part, yet there is not much more that websites alone can do. Since phishing has persisted for a long time, many approaches have been proposed to detect phishing websites, but few if any of them accurately detect the target websites of these phishing attacks. Our proposed method is novel and extends our previous work: we identify phishing websites using a combined approach that constructs RDF models and uses ensemble learning algorithms for the classification of websites. Our approach uses supervised learning techniques to train the system, and it achieves a promising true positive rate of 98.8%. Because we use a random forest classifier, which can handle missing values in the dataset, we were able to reduce the false positive rate to 1.5%. As our system exploits the strengths of RDF and ensemble learning methods working hand in hand, a highly promising accuracy of 98.68% is achieved.
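The classification stage alone can be sketched with scikit-learn: a random forest over website features, as named in the abstract. Features and labels below are synthetic placeholders; the RDF model construction that produces the real features is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((500, 10))          # placeholder website features (from RDF models)
y = rng.integers(0, 2, size=500)   # 1 = phishing, 0 = legitimate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```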

Keywords: Phishing, ensemble learning, RDF models, phishing target, metadata, vocabulary, random forests.

 

Received April 22, 2015; accepted

 

Full text 

 

 

Medical Image Segmentation Based on Fuzzy Controlled Level Set and Local Statistical Constraints

Mohamed Yaghmorasan Benzian1, 2 and Nacéra Benamrane2

1Computer Science Department, University Abou Bekr Belkaid of Tlemcen, Algeria

2Computer Science Department, University of Science and Technology Oran USTO-MB, Algeria

Abstract: Image segmentation is one of the most important fields in artificial vision due to its complexity and the diversity of its applications to different kinds of images. In this paper, a new approach to ROI segmentation in medical images is proposed, based on modified level sets controlled by fuzzy rules, incorporating local statistical constraints (mean, variance) in the level set evolution function, and using low-resolution image analysis by estimating statistical constraints and curve curvature at a low image scale. The image and the curve at low resolution provide information on the rough variation of image intensity and curvature value, respectively. The weights of the different constraints are controlled and adapted by fuzzy rules, which regularize their influence. The objective of low-resolution image analysis is to avoid stopping the evolution of the level set curve at local maxima or minima of the image. The method is tested on medical images; the results obtained are satisfying and give good precision.

Keywords: Segmentation, level sets, medical images, image resolution, fuzzy rules, ROI.

Received April 8, 2015; accepted December 28, 2015

 

Full text 

 

 

 

Enhanced Hybrid Prediction Models for Time Series Prediction

Purwanto1 and Chikkannan Eswaran2

1Faculty of Computer Science, Dian Nuswantoro University, Indonesia

2Faculty of Computing and Informatics, Multimedia University, Malaysia

Abstract: Statistical techniques have disadvantages in handling non-linear patterns, whereas soft computing (SC) techniques such as artificial neural networks are considered better for predicting data with non-linear patterns. In real life, time-series data comprise complex patterns, and hence it may be difficult to obtain high prediction accuracy using statistical or SC techniques individually. We propose two enhanced hybrid models for time series prediction. The first is an enhanced hybrid model combining statistical and neural network techniques; with it, one can select the best statistical technique as well as the best configuration of the neural network for time series prediction. The second is an enhanced adaptive neuro-fuzzy inference system (ANFIS), which combines a fuzzy inference system and a neural network; the proposed enhanced ANFIS model can determine the optimum input lags for obtaining the best accuracy. The prediction accuracies of the two proposed hybrid models are compared with those of other models on three time series datasets. The results indicate that the proposed hybrid models yield better accuracy than ARIMA, exponential smoothing, moving average, weighted moving average, and neural network models.
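One common way to realize such a statistical-plus-neural hybrid (a sketch under that assumption, not the paper's exact algorithm) is to fit a simple statistical predictor, train a neural network on its residuals using lagged inputs, and add the two forecasts.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(300, dtype=float)
series = 0.05 * t + np.sin(t / 5) + 0.1 * rng.standard_normal(300)

linear_pred = np.poly1d(np.polyfit(t, series, 1))(t)  # statistical (linear) part
resid = series - linear_pred                          # non-linear remainder

lags = 4  # illustrative lag count; the paper selects lags automatically
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                  random_state=0).fit(X, y)

hybrid = linear_pred[lags:] + nn.predict(X)   # combined one-step-ahead forecast
print(np.mean((hybrid - series[lags:]) ** 2)) # in-sample MSE
```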

Keywords: Hybrid Model, Adaptive Neuro-Fuzzy Inference Systems, Soft Computing, Neural Network, Statistical Techniques.

Received March 25, 2015; accepted October 7, 2015

 

Full text 

 

 

 

Paradigma: A Distributed Framework for Parallel Programming

Sofien Gannouni, Ameur Touir, and Hassan Mathkour

College of Computer and Information Sciences, King Saud University, Saudi Arabia

Abstract: Recent advances in high-speed networks and the newfound ubiquity of powerful processors have revolutionized the nature of parallel computing. It is becoming increasingly attractive to perform parallel tasks on distant, autonomous, and heterogeneous networked machines. This paper presents a simple and efficient new distributed framework for parallel programming known as Paradigma. In this framework, parallel program development is simplified using the Gamma formalism, providing sequential programmers with a straightforward mechanism for solving large-scale problems in parallel. The programmer simply specifies the action to be performed on an atomic data element known as a molecule. Workers compete in simultaneously running the specified action on the various molecules extracted from the input until the entire dataset is processed. The proposed framework is dedicated to fine-grained parallel processing and supports both the Single Program Multiple Data and Multiple Program Multiple Data programming models.
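A toy sequential sketch of the Gamma formalism the framework builds on (not Paradigma's API): the programmer supplies only a reaction condition and action over "molecules", and the runtime applies it until no pair of molecules can react. The example dissolves the smaller of any two molecules, leaving the maximum.

```python
from itertools import permutations

def gamma(molecules, condition, action):
    molecules = list(molecules)
    while True:
        pair = next(((i, j) for i, j in permutations(range(len(molecules)), 2)
                     if condition(molecules[i], molecules[j])), None)
        if pair is None:
            return molecules  # stable state: no reaction applies anymore
        i, j = pair
        produced = action(molecules[i], molecules[j])
        molecules = [m for k, m in enumerate(molecules) if k not in (i, j)]
        molecules += produced

# Reaction: whenever a < b, replace the pair with just b
print(gamma([4, 9, 2, 7], lambda a, b: a < b, lambda a, b: [b]))  # -> [9]
```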

Keywords: Distributed Systems, Parallel Programming, Gamma Formalism, Single Program Multiple Data, Multiple Program Multiple Data.

 Received March 5, 2015; accepted March 9, 2016

 

Full text 

 

 


 

Maximum Spanning Tree based Redundancy Elimination for Feature Selection of High Dimensional Data

Bharat Singh and OP Vyas

Department of Information Technology, Indian Institute of Information Technology, India

Abstract: Feature selection is a preprocessing step for high-dimensional data that aims at optimal results with respect to speed and time. It is a technique by which the most prominent features can be selected from a set of features that is prone to contain redundant and irrelevant ones. It also lightens the burden on classification techniques, making them faster and more efficient. We introduce a novel two-tiered feature selection architecture that is able to filter out irrelevant as well as redundant features. Our approach exploits the particular advantage of identifying highly correlated nodes in a tree, and the reduced dataset comprises the selected features. Finally, the reduced dataset is tested with various classification techniques to evaluate their performance. To prove its correctness, we have used several basic classification algorithms to highlight the benefits of our approach, and benchmark datasets to prove its worth.
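A hedged sketch of the redundancy-elimination idea (one plausible reading, not the paper's exact algorithm): build a feature graph weighted by absolute pairwise correlation, take its maximum spanning tree, and drop one endpoint of every tree edge whose correlation exceeds a threshold. The data, threshold, and use of networkx are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
X = rng.random((200, 6))
X[:, 5] = X[:, 0] + 0.01 * rng.standard_normal(200)  # plant a redundant feature

corr = np.abs(np.corrcoef(X, rowvar=False))
G = nx.Graph()
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        G.add_edge(i, j, weight=corr[i, j])

mst = nx.maximum_spanning_tree(G)  # keeps the most-correlated edges
redundant = {max(u, v) for u, v, w in mst.edges(data="weight") if w > 0.9}
kept = [f for f in range(X.shape[1]) if f not in redundant]
print("kept features:", kept)      # feature 5 is dropped as redundant with 0
```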

Keywords: Data Mining, Feature Selection, Tree-based Approaches, Maximum Spanning Tree, High Dimensional Data.

 Received February 15, 2015; accepted December 21, 2015

 

Full text 

 

 


 

Multi-Classifier Model for Software Fault Prediction

Pradeep Singh1 and Shrish Verma2

1Department of Computer Science and Engineering, National Institute of Technology, Raipur

2Department of Electronics and Telecommunication Engineering, National Institute of Technology, Raipur

Abstract: Predicting fault-prone modules prior to testing is an emerging activity for software organizations, enabling them to allocate targeted resources for the development of reliable software. Such software fault prediction depends on the quality of the fault data and related code extracted from previous versions of the software. This paper presents a novel framework that combines multiple expert machine learning systems. The proposed multi-classifier model takes the benefits of the best classifiers in deciding by consensus, prior to testing, which modules of a software system are faulty. An experimental comparison is performed with various leading classifiers in the area of fault prediction. We evaluate our approach on 16 public datasets from the PROMISE repository, which consist of NASA MDP projects and Turkish software projects. The experimental results show that our multi-classifier approach, a combination of SVM, Naive Bayes, and Random Forest, significantly improves the performance of software fault prediction.
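The consensus idea named in the abstract can be sketched with scikit-learn's majority-vote ensemble over SVM, naive Bayes, and random forest. The synthetic module metrics below stand in for the NASA MDP and Turkish project features; the combination rule shown is plain hard voting, which may differ from the paper's exact consensus scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.random((300, 12))          # static code metrics per module (synthetic)
y = rng.integers(0, 2, size=300)   # 1 = faulty module

ensemble = VotingClassifier(
    estimators=[("svm", SVC()), ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard",                 # majority-vote consensus of the three experts
).fit(X, y)
print(ensemble.predict(X[:5]))     # consensus fault predictions
```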

Keywords: Software Metrics, Software Fault prediction, Machine Learning.

Received February 7, 2015; accepted September 7, 2015

 

Full text 

 

 

 

Edge Preserving Image Segmentation using Spatially Constrained EM algorithm

Meena Ramasamy1 and Shantha Ramapackiam2

1Electronics and Communication Engineering Department, P. S. R. Engineering College, India

2ECE Department, Mepco Schlenk Engineering College, India

Abstract: In this paper, a new method for edge-preserving image segmentation based on the Gaussian Mixture Model (GMM) is presented. The standard GMM considers each pixel as independent and does not incorporate the spatial relationships among neighboring pixels; hence, segmentation is highly sensitive to noise. Traditional smoothing filters average out the noise but fail to preserve edges. In the proposed method, first, a bilateral filter, which employs two filters (a domain filter and a range filter), is applied to the image for edge-preserving smoothing. Second, in the Expectation Maximization algorithm used to estimate the parameters of the GMM, the posterior probability is weighted with a Gaussian kernel to incorporate the spatial relationships among neighboring pixels. Third, as an outcome of the proposed method, edge detection is also performed on noisy images. Experimental results obtained by applying the proposed method to synthetic images and simulated brain images demonstrate its improved robustness and effectiveness.
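A minimal sketch of the spatial constraint described above: after the E-step of GMM estimation, the per-pixel posteriors are smoothed with a Gaussian kernel so neighboring pixels influence each other. A two-component model on a synthetic image, for illustration; the bilateral pre-filtering stage is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

rng = np.random.default_rng(8)
img = np.where(rng.random((64, 64)) < 0.5, 0.3, 0.7)  # two-class synthetic image
img = img + 0.05 * rng.standard_normal(img.shape)     # additive noise

mu, sigma = np.array([0.2, 0.8]), np.array([0.1, 0.1])
for _ in range(20):
    # E-step: component likelihoods, then spatially smoothed posteriors
    lik = np.stack([norm.pdf(img, m, s) for m, s in zip(mu, sigma)])
    post = lik / lik.sum(axis=0)
    post = np.stack([gaussian_filter(p, sigma=1.0) for p in post])
    post = post / post.sum(axis=0)
    # M-step: re-estimate component means and standard deviations
    w = post.reshape(2, -1)
    mu = (w * img.ravel()).sum(axis=1) / w.sum(axis=1)
    sigma = np.sqrt((w * (img.ravel() - mu[:, None]) ** 2).sum(axis=1)
                    / w.sum(axis=1))

print(mu)  # recovered class means, close to 0.3 and 0.7
```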

Keywords: Gaussian Mixture Model, Expectation Maximization, bilateral filter, image segmentation.

Received December 23, 2014; accepted December 21, 2015

 

Full text 

 

 

 

Auto-Poietic Algorithm for Multiple Sequence Alignment

Amouda Venkatesan and Buvaneswari Shanmugham

Centre for Bioinformatics, Pondicherry University, India

 

Abstract: The concept of self-organization is applied to the operators and parameters of the genetic algorithm to develop a novel auto-poietic algorithm for solving a biological problem, Multiple Sequence Alignment (MSA). The self-organizing crossover operator of the developed algorithm undergoes a swap-and-shuffle process to alter the genes of chromosomes in order to produce better combinations. Unlike Standard Genetic Algorithms (SGA), the mutation rate of the auto-poietic algorithm is not fixed: it varies cyclically based on the improvement of the fitness value, which in turn determines the termination point of the algorithm. Automated assignment of the various parameter values removes the need for user intervention and avoids inappropriate parameter settings made without prior knowledge of the input. As an advantage, the proposed algorithm also circumvents the major issues of the standard genetic algorithm: premature convergence and the time required to optimize the parameters. The efficiency of the auto-poietic algorithm is analyzed using BAliBASE reference multiple sequence alignments. It is evident that the auto-poietic algorithm performs better than SGA and produces better alignments than other MSA tools.
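A loose sketch of the self-adjusting mutation-rate idea: the rate is lowered while fitness keeps improving and raised again when it stalls, rather than being fixed as in an SGA. The toy bit-counting objective, population size, and adaptation factors are all illustrative placeholders, not the paper's MSA encoding.

```python
import random

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
fitness = lambda ind: sum(ind)  # toy objective: number of ones

rate, best = 0.10, 0
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    # Elitism plus mutated copies of the elites (bit-flip with current rate)
    pop = pop[:15] + [[g ^ (random.random() < rate) for g in ind]
                      for ind in pop[:15]]
    new_best = fitness(pop[0])
    # Adapt: shrink the rate while improving, grow it when progress stalls
    rate = max(0.01, rate * 0.9) if new_best > best else min(0.5, rate * 1.1)
    best = max(best, new_best)

print(best, round(rate, 3))
```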

Keywords: Auto-poietic, crossover, genetic algorithm, mutation, multiple sequence alignment, selection.

Received October 27, 2014; accepted November 29, 2015

 


 

Temporal Tracking on Videos with Direction Detection

Shajeena Johnson1 and Ramar Kadarkarai2

1Department of Computer Science and Engineering, James College of Engineering and Technology, India

2Einstein College of Engineering, India

Abstract: Tracking is essentially a matching problem. This paper proposes a tracking scheme for video objects in the compressed domain. The method focuses on locating the object region and predicting the direction of movement, which improves tracking precision. Motion Vectors (MVs) are used for block matching. At each frame, the decision of whether a particular block belongs to the tracked object is made with the help of histogram matching. While matching and evolving the direction of movement, the similarities of the target region are compared to ensure that there is no overlapping and that tracking proceeds correctly. Experiments applying the proposed tracker to videos demonstrate that the method can reliably and effectively locate the object of interest.

Keywords: Motion vector, distance measure, histogram, block matching, DCT, tracking.

 

Received August 19, 2014; accepted April 2, 2015

 


 

A Network Performance Aware QoS Based Workflow Scheduling for Grid Services

 Shinu John and Maluk Mohamed

 Department of CSE, MAM College of Engineering, India

Abstract: Grids enable the sharing, selection, and aggregation of geographically distributed resources among various organizations. They are now emerging as a promising computing paradigm for resource- and compute-intensive scientific workflow applications modeled as Directed Acyclic Graphs (DAGs) with intricate inter-task dependencies. Job scheduling is an important and challenging issue in a grid environment. Various scheduling algorithms have been proposed for grid environments to distribute the load among processors and maximize resource utilization while reducing task execution time. However, execution time is not the only parameter to be improved; various QoS parameters must also be considered in grid job scheduling. In this research, we study existing QoS-based task and workflow scheduling, formulate the problem, and develop possible solutions to the problems identified in existing algorithms. Scheduling dependent tasks (workflows) is more challenging than scheduling independent tasks, and scheduling both while satisfying users' QoS requirements is a very challenging issue in grid computing. This paper proposes a novel network-aware QoS workflow scheduling method for grid services. The proposed scheduling algorithm considers network and QoS constraints; its goal is to construct a workflow schedule that reduces execution time and resource cost while still meeting the deadline imposed by the user. The experimental results show that the proposed algorithm improves the success ratio of tasks and the throughput of resources while reducing makespan and workflow execution cost.

Keywords: Grid Scheduling, QoS, DAG, Execution time, Deadline, Trust Rate.

Received June 25, 2014; accepted September 7, 2016



 
 
Copyright 2006-2009 Zarqa Private University. All rights reserved.
Print ISSN: 1683-3198.
 
 