Rules for Transforming Order Dependent Transactions into Order Independent Transactions
Hamidah Ibrahim
Department of Computer Science, Universiti Putra Malaysia, Malaysia
Abstract: A transaction is a collection of operations that performs a single logical function in a database application. Each transaction is a unit of both atomicity and consistency; thus, transactions are required not to violate any database consistency constraints. In most cases, the update operations in a transaction are executed sequentially, and the effect of a single operation may be changed by another operation in the same transaction. This implies that sequential execution sometimes does redundant work. A transaction with a set of update operations is order dependent if and only if executing the transaction in the serializability order given in the transaction produces an output different from the output produced by interchanging the operations in the transaction; otherwise, the transaction is order independent [8]. In this paper, we present rules that can be applied to generate an order independent transaction from a given order dependent transaction. An order independent transaction has the important advantage that its update statements can be executed in parallel without considering their relative execution order, so its single updates can be considered in an arbitrary order. Furthermore, executing a transaction in parallel can reduce the execution time.
Keywords: Transaction, parallel processing, transaction decomposition, subtransaction, update operations.
Received March 2, 2004; accepted June 29, 2004
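To make the notion of order (in)dependence concrete, the following minimal Python sketch (ours, not the paper's) tests a set of update operations by executing every permutation and comparing the final database states; the function name and the toy updates are illustrative assumptions.

from itertools import permutations

def is_order_independent(initial_state, updates):
    # Execute every ordering of the updates and compare the final states;
    # each update maps a state (a dict of attribute values) to a new state.
    results = set()
    for ordering in permutations(updates):
        state = dict(initial_state)
        for update in ordering:
            state = update(state)
        results.add(tuple(sorted(state.items())))
    return len(results) == 1

add_ten = lambda s: {**s, "a": s["a"] + 10}
add_five = lambda s: {**s, "a": s["a"] + 5}
set_five = lambda s: {**s, "a": 5}
print(is_order_independent({"a": 0}, [add_ten, add_five]))  # True: additive updates commute
print(is_order_independent({"a": 0}, [add_ten, set_five]))  # False: order dependent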
Integrating GIS and MCDM Using COM Technology
Khalid Eldrandaly1, Neil Eldin2, Daniel Sui1, Mohamed Shouman3, and Gamal Nawara4
1Geography Department, Texas A&M University, USA
2Construction Science Department, Texas A&M University, USA
3College of Computers and Informatics, Zagazig University, Egypt
4Industrial and Systems Engineering Department, Zagazig University, Egypt
Abstract: Problems involving the processing of spatial data, such as industrial site selection and land use allocation, are multi-faceted challenges. Not only do they often involve numerous technical requirements, but they may also contain economic, social, environmental, and political dimensions with conflicting values. Solutions for these problems involve highly complex spatial data analysis processes and frequently require advanced means to address physical suitability conditions while considering multiple socio-economic variables. Geographic Information Systems (GIS) and Multi-Criteria Decision-Making (MCDM) techniques are two common tools employed to solve these problems. However, each suffers from serious shortcomings. GIS, which deals mainly with physical suitability analysis, has very limited capability of incorporating the decision maker's preferences into the problem solving process. MCDM, which deals mainly with analyzing decision problems and evaluating alternatives based on a decision maker's values and preferences, lacks the capability of handling spatial data (e.g., buffering and overlay) that are crucial to spatial analysis. The need to combine the strengths of these two techniques has prompted researchers to seek the integration of GIS and MCDM. Current integration strategies (loose coupling and tight coupling) have their own limitations. Such limitations were successfully eliminated by using Component Object Model (COM) technology to integrate GIS and MCDM. An illustrative example is included to validate the capabilities of the presented integration strategy.
Keywords: GIS, MCDM, AHP, integration strategies, software interoperability.
Received March 1, 2004; accepted May 29, 2004
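For readers unfamiliar with COM-based integration, the following Python sketch shows the general shape of driving two COM components in one process via pywin32; the ProgIDs and methods ("SampleGIS.Application", Buffer, "SampleMCDM.AHPSolver", Rank) are hypothetical placeholders, not the components used in the paper.

import win32com.client  # pywin32

# Instantiate hypothetical GIS and MCDM COM servers in-process.
gis = win32com.client.Dispatch("SampleGIS.Application")
mcdm = win32com.client.Dispatch("SampleMCDM.AHPSolver")

# The GIS component screens candidate sites on physical suitability...
candidates = gis.Buffer("roads.shp", 500)          # hypothetical method
# ...and the MCDM component ranks the survivors against the decision
# maker's preference weights (e.g., AHP-derived priorities).
ranked = mcdm.Rank(candidates, [0.5, 0.3, 0.2])    # hypothetical method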
Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm
Mohamed El Bachir Menai1 and Mohamed Batouche2
1Artificial Intelligence Laboratory, University of Paris 8, France
2Computer Science Department, University Mentouri of Constantine, Algeria
Abstract: The MAXimum propositional SATisfiability problem (MAXSAT) is a well-known NP-hard optimization problem with many theoretical and practical applications in artificial intelligence and mathematical logic. Heuristic local search algorithms are widely recognized as the most effective approaches for solving it. However, their performance depends on both their complexity and their tuning parameters, which are controlled experimentally, and tuning them remains a difficult task. Extremal Optimization (EO) is one of the simplest heuristic methods, with only one free parameter, and has proved competitive with more elaborate general-purpose methods on graph partitioning and coloring. It is inspired by the dynamics of physical systems with emergent complexity and their ability to self-organize to reach an optimally adapted state. In this paper, we propose an extremal optimization procedure for MAXSAT and assess its effectiveness through computational experiments on a benchmark of random instances. Comparative tests show that this procedure significantly improves on previous results obtained on the same benchmark with other modern local search methods such as WSAT, simulated annealing, and Tabu Search (TS).
Keywords: Constraint satisfaction, MAXSAT, heuristic local search, extremal optimization.
Received February 29, 2004; accepted June 30, 2004
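As a point of reference for the single-parameter τ-EO scheme, here is a compact Python sketch of extremal optimization applied to MAXSAT; the fitness definition and flip rule follow the generic EO recipe of ranking variables by fitness and flipping a rank chosen with probability proportional to k^-τ, and may differ in detail from the paper's procedure.

import random

def eo_maxsat(clauses, n_vars, tau=1.4, max_steps=50000, seed=0):
    # clauses: list of clauses, each a list of signed ints (DIMACS style).
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    occ = [[] for _ in range(n_vars)]            # clauses containing each variable
    for c in clauses:
        for lit in c:
            occ[abs(lit) - 1].append(c)

    def sat(c):                                  # literal l holds iff sign matches value
        return any(assign[abs(l) - 1] == (l > 0) for l in c)

    def fitness(v):                              # fraction of v's clauses satisfied
        return sum(sat(c) for c in occ[v]) / max(len(occ[v]), 1)

    weights = [k ** -tau for k in range(1, n_vars + 1)]  # P(rank k) ~ k^-tau
    best = sum(sat(c) for c in clauses)
    best_assign = assign[:]
    for _ in range(max_steps):
        ranked = sorted(range(n_vars), key=fitness)      # worst variables first
        k = rng.choices(range(n_vars), weights=weights)[0]
        assign[ranked[k]] = not assign[ranked[k]]        # unconditional flip
        score = sum(sat(c) for c in clauses)
        if score > best:
            best, best_assign = score, assign[:]
    return best, best_assign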
A Connectionist Expert Approach for Speech Recognition
Halima Bahi and Mokhtar Sellami
Department of Computer Science, University of Annaba, Algeria
Abstract: Artificial Neural Networks (ANNs) are widely and successfully used in speech recognition, but many limitations are still inherent in their topologies and learning styles. In an attempt to overcome these limitations, we combine, in a hybrid speech recognition system, the pattern processing of ANNs with the logical inferencing of symbolic approaches. In particular, we are interested in the Connectionist Expert System (CES) introduced by Gallant [10], which consists of an expert system implemented as a Multi Layer Perceptron (MLP). In such a network, each neuron has a symbolic significance. This overcomes one of the difficulties encountered when building an MLP, namely finding the appropriate network configuration, and provides the network with explanation capabilities. In this paper, we present a CES dedicated to Arabic speech recognition. We implemented a neural network whose input neurons represent the acoustic level and are defined using vector quantization techniques. The hidden layer represents the phonetic level; in keeping with the particularities of Arabic, the phonetic unit used is the syllable. Finally, the output neurons stand for the lexical level, as they represent the vocabulary words.
Keywords: Artificial intelligence, speech recognition, hybrid system, neuro-symbolic integration, expert system, neural networks.
Received February 23, 2004; accepted July 8, 2004
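To illustrate the layer-to-symbol mapping described above (VQ codewords to syllables to words), here is a toy numpy sketch of such a three-level network; the layer sizes, random weights, and forward pass are illustrative assumptions, not the trained CES of the paper.

import numpy as np

# Each neuron has a symbolic meaning: codewords -> syllables -> words.
codebook_size, syllables, words = 64, 12, 5   # acoustic / phonetic / lexical

rng = np.random.default_rng(0)
W1 = rng.normal(size=(syllables, codebook_size))  # acoustic -> syllable weights
W2 = rng.normal(size=(words, syllables))          # syllable -> word weights

def forward(vq_histogram):
    # vq_histogram: frequency of each VQ codeword in the utterance.
    syllable_act = np.tanh(W1 @ vq_histogram)     # phonetic layer activations
    word_scores = W2 @ syllable_act               # lexical layer scores
    return int(np.argmax(word_scores))            # index of the recognized word

utterance = rng.random(codebook_size)
print("recognized word index:", forward(utterance))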
Data Integration in a PLM Perspective for Mechanical Products
Sihem Mostefai1, Abdelaziz Bouras2, and Mohamed Batouche1
1Department of Computing Science, University Mentouri of Constantine, Algeria
2CERRAL, IUT Lumière, University of Lyon II, France
Abstract: One of today's hottest topics in information technology is integration. In this paper, we deal with the problem of data integration from a Product Lifecycle Management (PLM) perspective, that is, how to integrate product data throughout the entire product lifecycle, ranging from conception, through design, to manufacture, operation, and destruction. This paper presents three approaches, studied in the context of mechanical products, to show how the problem of integration can be dealt with. The study is mainly based on examples of activities (or phases) taken from the product development lifecycle; covering the entire set of activities is beyond the scope of this paper. Nevertheless, the three proposed approaches (meta-data, features, and ontologies) show enough flexibility and potential to be generalized quite easily to other phases.
Keywords: Multiple view feature modelling, collaborative design, data exchange, PLM, data integration.
Received February 21, 2004; accepted July 6, 2004
A Simulation Study of a Reliable Dynamic Multicast Environment
Abdelaziz Araar, Hakim Khali, and Riyadh Mahdi
Ajman University of Science and Technology Network, UAE
Abstract: In a dynamic multicast environment, static multicast retransmission modes may lead to congestion and packet loss due to propagation errors in the wireless network. This paper logically divides the dynamic multicast network into fixed and mobile parts, and focuses on the dynamic wireless environment, where mobile members may enter non-covered areas. The group is divided into many subgroups of mobile members, and each subgroup has one Designated Receiver (DR) responsible for multicast retransmission. Simulation studies have been conducted to determine the benefits of integrating improved Forward Error Correction (FEC) codes into the reliable multicast protocol P_Mul in a dynamic environment, where members can leave and join a subgroup according to given distributions. The DR can support two FEC modes, proactive and reactive. Simulations using OPNET show that reactive FEC performs better under a high leave rate and a low join rate, while the opposite holds for proactive FEC. The results also show that the number of designated receivers is parabolic with respect to the number of retransmissions. This paper thus investigates the benefits of an improved FEC mechanism for reliable dynamic wireless networks.
Keywords: OPNET, proactive/reactive FEC, dynamic multicast retransmission, designated receiver.
Received February 11, 2004; accepted July 3, 2004
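The trade-off between the two DR repair modes can be pictured with a toy erasure-channel simulation in Python: proactive FEC sends parity packets up front, while reactive FEC sends repair packets only after losses occur; the block size, parity count, loss rate, and the ideal any-k-of-n decoding assumption are ours, not P_Mul's.

import random

def transmissions_needed(k, loss, proactive_parity=0):
    # How many packets the DR sends so that a receiver can decode a block
    # of k data packets, assuming any k of the k+r FEC-encoded packets
    # suffice (ideal erasure code).
    sent = k + proactive_parity                      # initial transmission
    received = sum(random.random() > loss for _ in range(sent))
    while received < k:                              # reactive repair rounds
        repair = k - received                        # parity packets on demand
        sent += repair
        received += sum(random.random() > loss for _ in range(repair))
    return sent

random.seed(1)
k, loss = 20, 0.2
print("reactive only :", transmissions_needed(k, loss))
print("proactive r=5 :", transmissions_needed(k, loss, proactive_parity=5))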
Partitioning State Spaces of Concurrent Transition Systems
Mustapha Bourahla
Computer Science Department, University of Biskra, Algeria
Abstract: As model checking becomes increasingly used in industry as an analysis aid, there is a strong need for efficient new methods to deal with the large, real-world sizes of concurrent transition systems. We propose a new algorithm for partitioning the large state spaces that model industrial designs as concurrent transition systems with hundreds of millions of states and transitions. The produced partitions are used by distributed processes for parallel system analysis. The state space is assumed to be represented by a weighted Kripke structure (an extension of the Kripke structure in which weights are associated with the states and the transitions). The algorithm partitions the weighted Kripke structure by performing a combination of abstraction, partitioning, and refinement on this structure, and is designed in a way that reduces the communication overhead between the processes. Experimental results on large real designs show that this method improves partition quality, reduces communication overhead, and thus improves the overall performance of the system analysis.
Keywords: Concurrent transition systems, distributed and parallel analysis, abstraction, partitioning, refinement.
Received February 6, 2004; accepted July 4, 2004
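As a minimal illustration of the data structure involved, the sketch below defines a weighted Kripke structure in Python and performs a naive weight-balanced placement of states; the paper's actual algorithm additionally applies abstraction and refinement and minimizes the weight of cut transitions to reduce communication overhead.

from dataclasses import dataclass, field

@dataclass
class WeightedKripke:
    states: dict          # state id -> weight
    transitions: dict     # (src, dst) -> weight
    labels: dict = field(default_factory=dict)  # state id -> atomic propositions

def greedy_partition(wk, num_parts):
    # Assign heaviest states first, always to the currently lightest part.
    parts = [[] for _ in range(num_parts)]
    loads = [0] * num_parts
    for s, w in sorted(wk.states.items(), key=lambda x: -x[1]):
        i = loads.index(min(loads))
        parts[i].append(s)
        loads[i] += w
    return parts

wk = WeightedKripke(states={0: 5, 1: 3, 2: 3, 3: 2, 4: 2},
                    transitions={(0, 1): 1, (1, 2): 2, (2, 3): 1, (3, 4): 1})
print(greedy_partition(wk, 2))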
Wavelet Coding Design for Image Data Compression
Othman Khalifa
Electrical and Computer Department, International Islamic University, Malaysia
Abstract: In this paper, image compression algorithms using scalar and vector quantisation are proposed. An analysis of wavelet coefficient encoding is explained, and the energy compaction capability of wavelets is shown. Wavelet vector quantisation and multiresolution codebook generation are also discussed. A general description of the proposed image compression algorithm and its features is presented, along with simulation results and comparisons with other coders.
Keywords: Image compression, wavelet, scalar quantisation, vector quantisation.
Received January 21, 2004; accepted May 25, 2004
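For readers who want to experiment, the following Python sketch (using the PyWavelets package) shows the basic transform-plus-scalar-quantisation pipeline that such coders build on; the wavelet, decomposition depth, and uniform step size are illustrative choices, not the coder proposed in the paper.

import numpy as np
import pywt  # PyWavelets

def compress(image, wavelet="db2", level=3, step=10.0):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Uniform scalar quantisation of every subband: most detail
    # coefficients fall near zero (energy compaction) and quantise to 0.
    quantised = [np.round(coeffs[0] / step)]
    quantised += [tuple(np.round(band / step) for band in detail)
                  for detail in coeffs[1:]]
    return quantised

def decompress(quantised, wavelet="db2", step=10.0):
    coeffs = [quantised[0] * step]
    coeffs += [tuple(band * step for band in detail) for detail in quantised[1:]]
    return pywt.waverec2(coeffs, wavelet)

image = np.random.rand(64, 64) * 255
restored = decompress(compress(image))
print("RMSE:", np.sqrt(np.mean((image - restored[:64, :64]) ** 2)))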
Negotiation Protocol Inter Agents in an Electronic Market
Samir Kechid and Habiba Drias
Department of Computer Science, Algeria University, Algeria
Abstract: In this paper, we present a new negotiation environment within a multi-agent system for the electronic market. Our environment provides two negotiation protocols, namely the auction protocol and the contract net protocol. The choice of which protocol to use is made according to the type of product negotiated: for certain products we use the auction, and for others we use the contract net protocol.
Keywords: Negotiation protocol, auction, contract net protocol, multi-agent systems, electronic business.
Received January 5, 2004; accepted April 13, 2004
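The product-driven protocol choice can be pictured with the following toy Python dispatcher; the product categories, offer fields, and winner rules are illustrative placeholders rather than the paper's actual market rules.

AUCTION, CONTRACT_NET = "auction", "contract net"

# Hypothetical mapping from product type to negotiation protocol.
PROTOCOL_BY_PRODUCT = {"artwork": AUCTION, "office_supplies": CONTRACT_NET}

def negotiate(product_type, offers):
    protocol = PROTOCOL_BY_PRODUCT.get(product_type, CONTRACT_NET)
    if protocol == AUCTION:
        # Auction: buyers bid and the highest bid wins.
        return max(offers, key=lambda o: o["bid"])
    # Contract net: the manager announces the task, agents send proposals,
    # and the contract is awarded to the cheapest proposal.
    return min(offers, key=lambda o: o["cost"])

print(negotiate("artwork", [{"agent": "b1", "bid": 90}, {"agent": "b2", "bid": 120}]))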
Designing Large-Scale ASTN-Based Optical Mesh Networks
Faouzi Kamoun1 and Mohamed El-Torky2
1Dubai University College, College of Information Technology, UAE
2Telecommunications Consultants, Province of Quebec, Canada
Abstract: The Automatically Switched Transport Network (ASTN) has many capabilities, such as dynamic connection set-up and routing, that make it attractive for traffic engineering and optimization of next-generation large-scale optical mesh backbones. With increasing traffic demand spanning large geographic areas, optical mesh networks need to grow rapidly in terms of degree of meshing, bandwidth, and number of nodes. This translates, among other things, into: (1) increasing broadcast traffic and message load at each node in the ASTN control plane, especially during link or node failures, when dynamic route computation is required; maintaining the stability of the routing protocol and preserving service quality (restoration, network-wide delay, etc.) as the mesh network grows larger becomes a key requirement, (2) significant memory, bandwidth, and processing requirements to maintain and update network topology databases, and (3) additional operational considerations for connection availability, network latency, fault isolation, link maintenance, and correlation of failures. This paper addresses the unique operational requirements of this type of large meshed network environment and provides network designers with practical solutions for addressing scalability when building large ASTN-based mesh networks.
Keywords: Next-generation networks, intelligent optical networking, automatically switched transport network, network scalability and survivability, optical mesh networks.
Received November 29, 2003; accepted March 6, 2004
Fuzzy Estimation of Yeast Fermentation Parameters
Mahmoud Taibi1, Chabbi Charef1, and Nicole Vincent2
1Electronics Department, University of Annaba, Algeria
2Laboratory CRIP5-SIP, University René Descartes Paris 5, France
Abstract: The dynamics of fermentation processes are very complex and not completely known. Some state variables are non-measurable, and the process parameters are strongly time dependent. Recently, control methods such as fuzzy learning and neural networks have shown promise in dealing with the non-linearity, complexity, and uncertainty of these processes. These methods are suitable for modelling systems that are difficult to describe mathematically; fuzzy learning methods in particular are less demanding on the mathematical model and on a priori knowledge about the process. Different techniques for estimating the non-measurable state variables in the fermentation process have been investigated. A Non-linear Auto-Regressive with eXogenous input (NARX) model was developed using process data from a pilot bioreactor. The fermentation process is split into three phases, each treated separately. Generally, fuzzy models are capable of dividing an input space into several subspaces (fuzzy clustering), where each subspace is supposed to give a local linear model. In our work, we used global learning, where the local models are less interpretable but the accuracy of the global model is satisfactory, and the fuzzy partition matrix is obtained by applying the Gustafson-Kessel algorithm. The fermentation parameters are estimated for a batch and a fed-batch culture. Our fuzzy model has three inputs in a first simulation; a second simulation uses four inputs in order to detect correlations among the inputs. The results show that the estimated parameters are close to the measured (or calculated) ones. The parameters used in the computation are identified using batch experiments.
Keywords: Fermentation, batch, fed-batch, Takagi-Sugeno model, process.
Received November 18, 2003; accepted July 8, 2004
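Since the fuzzy partition matrix is obtained with the Gustafson-Kessel algorithm, a compact Python sketch of that algorithm is given below; it implements the standard GK iteration (fuzzy c-means with a cluster-adaptive Mahalanobis norm of unit determinant), with the cluster count, fuzziness exponent, and stopping rule chosen for illustration.

import numpy as np

def gustafson_kessel(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    # X: (n, d) data matrix; returns the fuzzy partition matrix U (c, n)
    # and the cluster prototypes V (c, d).
    n, d = X.shape
    rng = np.random.default_rng(seed)
    U = rng.random((c, n)); U /= U.sum(axis=0)        # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # weighted prototypes
        D = np.empty((c, n))
        for i in range(c):
            diff = X - V[i]
            F = (Um[i, :, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(0)
            F /= Um[i].sum()                          # fuzzy cluster covariance
            A = np.linalg.det(F) ** (1.0 / d) * np.linalg.inv(F)  # unit-det norm matrix
            D[i] = np.einsum("nd,de,ne->n", diff, A, diff)        # squared GK distance
        D = np.fmax(D, 1e-12)
        U_new = 1.0 / (D ** (1 / (m - 1)) * (1.0 / D ** (1 / (m - 1))).sum(axis=0))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V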