STRUCTURAL SAFETY CONCEPTS

The nature of uncertainties and the manner of dealing with them have been a topic of discussion among statisticians, scientists, engineers and other professionals for a long time (Der-Kiureghian and Ditlevsen, 2009; Benjamin and Cornell, 1970; Lindley, 2000; Pate-Cornell, 1996; Vrouwenvelder, 2003; Rao et al., 2007). A structure under study by a designer is never under a precisely known set of loadings, and since the resulting approximations and uncertainties cannot simply be ignored, safety factors are generally introduced to assure reliability. A safety factor gives the designer a measure of assurance. However, even with a safety factor in place, variations in the load (or in the structural resistance), in the distribution factor, or in the load probability density will produce different failure probabilities, which indicates that safety factors do not explicitly account for the uncertainties in the loads or in the material properties. In some cases, a safety factor smaller than unity may even be sufficient to cover the uncertainties reflected in the failure probability while also serving economy. The uncertainties in structural parameters such as material properties, external loads and geometry have therefore drawn serious attention to reliability in structural design and analysis (Ghasemi and Yousefi, 2011). Reliability theory, as a branch of the theory of probability, provides a firm framework from which a proper factor of safety can be derived when required (Reynolds, 2008; ISSC, 2006; Faber and Sorensen, 2002; Smith, 1976). Any system that satisfies the required reliability index may therefore be referred to as safe.

Early studies in the field of structural reliability were carried out by Freudenthal (1945), who brought the methods of probability and statistics to bear on characterizing the nature of the factor of safety. An illuminating account of the historical development of the subject is available in the book by Madsen et al. (1986), which also provides a rigorous account of reliability index based approaches. Some of the other earlier books relevant to this field are those by Pugsley (1962), Bolotin (1969), Tichy and Vorlicek (1972), Benjamin and Cornell (1970), Ang and Tang (1975, 1984), Leporati (1979), Ditlevsen (1981), Elishakoff (1983), Augusti et al. (1984), Yao (1985), Thoft-Christensen and Murotsu (1986) and Madsen (1988). Works of more recent vintage include those by Wen (1990), Soong and Grigoriu (1993), Dai and Wang (1992), Nigam and Narayanan (1994), Melchers (1999) and Ditlevsen and Madsen (2005). Several specialised review articles have also been published. Shinozuka (1983) presents a discussion of the definition and calculation of reliability indices. Methods for calculating the probability of failure are critically reviewed by Schuëller and Stix (1987). Bjerager (1990) presents a state-of-the-art review of structural models and random field reliability models. Developments in the area of the stochastic finite element method (SFEM) have been discussed by Nakagiri (1987), Benaroya and Rehak (1988), Shinozuka and Yamazaki (1988), Brenner (1991), Ghanem and Spanos (1991), Shinozuka (1991), Der Kiureghian et al. (1991), Kleiber and Hien (1992) and Liu et al. (1992). Melchers (1999) has outlined developments in the reliability analysis of existing structures and discussed the needs for further research. As in other branches of structural mechanics, the emergence of computational power in recent years has strongly influenced developments in this subject.
Other factors that have contributed to the growth of the subject include the increased availability of data.

Theoretical Framework on Structural Reliability

The study of structural reliability is concerned with the calculation and prediction of the probability of limit state violation for engineered structures (Melchers, 1999). Ditlevsen and Madsen (2005) considered structural reliability as a method that attempts to treat, rationally, the various sources of uncertainty. According to Tsompanakis and Papadrakakis (2000), structural reliability analysis is a tool that assists the design engineer to take into account all possible uncertainties during the design and construction phases and the lifetime of a structure in order to calculate its probability of failure. To Stanley et al. (1978), reliability-based design considers the probability that the structure will last a given length of time against the agents that can cause it to fail. Reliability-based design is a probabilistic design process in which the loads and the strengths of materials and sections are represented by their known or postulated distributions, defined in terms of distribution type, mean and standard deviation. The probability of failure (Pf) for a specific design case is calculated as the probability that the maximum total load effect exceeds the resistance to failure. This is opposed to the limit state design (LSD) method, a semi-statistical design process in which the probabilistic aspects are treated at the code development stage (Kadry and Smaili, 2007) in order to define characteristic values and partial safety factors for load and resistance that are used to ensure, on average, an acceptably low probability of failure across the full spectrum of design cases. In the current design codes in Europe, LSD is used as a practical method of incorporating reliability methods into the normal design process. This approach does not fully address the statistical nature of the basic design variables; issues of uncertainty are lumped conservatively within the partial safety factors. A limit state code is calibrated to ensure that selected appropriate reliability levels, called target values, are attained in design (Zimmerman et al., 1998).

In the reliability-based concept, the performance function of a structural system with respect to a specified mission is given by:

M = performance criterion − given criterion limit = g(X1, X2, …, Xn)    (1)

in which the Xi (i = 1, …, n) are the n basic random variables (input parameters), and g(·) is the functional relationship between the random variables and the failure of the system. The performance function can be defined such that the limit state, or failure surface, is given by M = 0. The failure event is then the region where M ≤ 0, and the probability of failure can be evaluated by the integral

Pf = ∫ … ∫ fX(x1, …, xn) dx1 … dxn    (2)

where fX is the joint density function of X1, X2, …, Xn and the integration is performed over the region where M ≤ 0. Because each of the basic random variables has a unique distribution and they interact, the integral cannot generally be evaluated in closed form, and approximate methods are used instead. Traditionally, the concern of researchers has been the evaluation of the structural reliability of steel and concrete structures and/or components (Rosowsky et al., 2002; Jinquan and Baidurya, 2007; Afolayan and Abubakar, 2003; Afolayan and Opeyemi, 2008; Holicky and Retief, 2005; Hassan, 2006; Chinwedu, 2002).
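For the common case of a linear performance function M = R − S with independent resistance and load-effect variables, the multi-dimensional integral in equation (2) collapses to a one-dimensional one. The sketch below evaluates it numerically; the distributions and all parameter values are assumptions made purely for illustration.

```python
# Minimal sketch of equation (2) for M = R - S with independent normal R and S.
# All distributions and numerical values are assumed for illustration only.
import numpy as np
from scipy import integrate
from scipy.stats import norm

mu_R, sigma_R = 300.0, 30.0   # assumed resistance statistics
mu_S, sigma_S = 200.0, 40.0   # assumed load-effect statistics

def integrand(s):
    # P(R <= s) weighted by the load-effect density at s:
    # this integrates the joint density over the failure region R - S <= 0
    return norm.cdf(s, mu_R, sigma_R) * norm.pdf(s, mu_S, sigma_S)

Pf, _ = integrate.quad(integrand, -np.inf, np.inf)
print(f"Pf = {Pf:.3e}")
```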
Due to the performance of timber structures during recent extreme events, such as the Northridge earthquake and Hurricane Andrew (Mohammed, 2006), much attention is now being directed at the reliability of wood frame components and assemblies under normal and accidental loading (Rosowsky and Ellingwood, 2002; Rosowsky, 2002; Filiatrault and Folz, 2001, 2002; Afolayan, 2005; van de Lindt, 2005; Foliente et al., 2001; Afolayan, 2004; Afolayan and Abdulkareem, 2005; Mohammed, 2006; van de Lindt and Mathew, 2003). The adequacy of the current design codes, which are based on the limit state design philosophy, is being assessed, and many attempts have been made to calibrate these new-generation codes (Holicky and Gulvanessian, 2002; Mike and David, 2002; Nowak, 1995; Ranta-Maunus et al., 2001; Ranta-Maunus and Tomi, 2002; Ranta-Maunus, 2004; Trezos and Thomas, 2002; Giorgio and Silvia, 2002). This research work is an additional effort that utilizes the concepts of artificial intelligence, genetic algorithms in particular.

Monte Carlo Simulation Method

One of the approximate methods of evaluating equation (2) is the Monte Carlo method. The method is used to build the probability density function (pdf) of the structural system response, to assess the reliability of components or structures, and to evaluate parameter sensitivity. Monte Carlo simulation consists of drawing samples of the basic variables according to their probabilistic characteristics and then feeding them into the performance function (Rubinstein, 1981; Kadry and Smaili, 2007). The major advantage of the Monte Carlo method is that it is valid not only for static but also for dynamic and probabilistic models, with continuous or discrete variables. Its main drawback is that it often requires a large number of calculations and can become prohibitive when each calculation involves long computer time (Kadry and Smaili, 2007). To avoid the problem of long computing times in the Monte Carlo method, it can be attractive to build an approximate mathematical model in the form of a polynomial function, known as the response surface method (Rajashekhar, 1993). Once a response surface has been determined, the system reliability can easily be assessed with Monte Carlo simulation. The practical problems encountered with the response surface method arise in the analysis of strongly non-linear phenomena, where it is not obvious how to find a family of adequate functions, and in the analysis of discontinuous phenomena. Likewise, this approach may also be time consuming when there is a large number of random variables (Cardoso et al., 2008).

First and Second Order Reliability Method

The First and Second Order Reliability Methods (FORM/SORM) are other approximation methods. The safety level of a structure is measured by a reliability index, and there are different definitions of the reliability index β. The first was introduced by Cornell in the late 1960s (Cornell, 1969). Given the performance function

M = R − S = g(X1, X2, …, Xn)    (3)

the mean and standard deviation of the safety margin can be defined as

µM = µR − µS    (4)

σM = √(σR² + σS²)    (5)

and the safety index is given by:

βC = µM / σM    (6)

The reliability index can also be defined as the shortest distance between the failure surface defined by g(x) = 0 and the origin of the standardized space (Fig. 1).

Fig. 1: Reliability index β and the limit state function in the standardized space

The standardized variables are expressed as:

X′ = (X − µX) / σX    (7)

The point x* on the failure surface is of considerable importance.
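As a complement to the Cornell index above, the sketch below uses crude Monte Carlo simulation to estimate Pf for the same illustrative R − S example and compares it with Φ(−βC); the distributions, parameter values and sample size are assumptions made purely for illustration.

```python
# Minimal sketch: crude Monte Carlo estimate of Pf for M = R - S, compared
# with the Cornell-index result Phi(-beta_C). Values assumed for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

R = rng.normal(300.0, 30.0, n)   # sampled resistance
S = rng.normal(200.0, 40.0, n)   # sampled load effect

M = R - S                        # performance function, equation (3)
Pf_mc = np.mean(M <= 0.0)        # fraction of samples in the failure region

beta_C = (300.0 - 200.0) / np.hypot(30.0, 40.0)   # equations (4)-(6)
print(f"Monte Carlo Pf = {Pf_mc:.3e}, Phi(-beta_C) = {norm.cdf(-beta_C):.3e}")
```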
It is the design point: the point on the failure surface with the highest probability density, so that a design carried out using the values at this point corresponds to the most probable failure condition, hence the name. If the limit state function is linear and the variables are uncorrelated, the shortest distance, βHL, can be found directly using basic algebra. However, this is seldom the case. When the limit state function is non-linear, approximate methods must be used, for example a geometrical optimization of the problem that can be solved numerically or analytically. The analytical methods are usually based on a first order Taylor approximation of the limit state function, which is why the method is called the First Order Reliability Method (FORM) (Thoft-Christensen and Baker, 1982). An iterative method that can be used to find βHL, the design point x*i and an estimate of the probability of failure is the Rackwitz algorithm (Ang and Tang, 1984). The design point can be expressed in scalar form as

x′i* = −αi βHL    (8)

where the αi are the direction cosines (the components of the unit vector) in the direction of the standardized variable X′i:

αi = Ai / B    (9)

where

Ai = ∂g/∂X′i evaluated at the design point    (10)

and

B = √[ Σ (∂g/∂X′i)² ]    (11)

The First Order Reliability Method (FORM) and the Second Order Reliability Method (SORM) are based on a first (respectively second) order approximation of the limit state at the design point (Cheng, 2007; Melchers, 1999; Ditlevsen and Madsen, 2005; Rackwitz, 2000). FORM/SORM require the calculation of the performance function and/or its partial derivatives with respect to the basic random variables at each iteration step, so a large number of evaluations of the performance function and/or its partial derivatives is needed (Deng et al., 2005). Such calculations can be performed efficiently when the performance function g(x) can be expressed in an explicit or analytical form in terms of the basic variables X. When the performance function is implicit, however, such calculations require additional effort and can be time consuming (Deng et al., 2005). In many cases of practical importance, particularly for complicated structures, the limit state functions are implicit in terms of the random variables, and their derivatives are therefore not readily available. Another issue of concern is the capability of the conventional structural reliability methods to detect the global minimum during the search for the reliability index (Rackwitz, 2000).

Probabilistic Transformation Method (PTM)

The Probabilistic Transformation Method (PTM) developed by Kadry et al. (2006) and Kadry and Chateauneuf (2006) is a combination of the finite element method and the probabilistic transformation method. It evaluates the probability density function (pdf) of a function by multiplying the input pdf by the Jacobian of the inverse function. The idea of the PTM is based on the following formula (Hogg and Craig, 1978):

fU(u) = fP(p) · |Jp,u| = fP(p) · |∂φ⁻¹(u)/∂u|    (12)

where p is the input parameter, u is the response (solution), and φ⁻¹(u) is the inverse transformation, which is determined either analytically or numerically. One advantage of the PTM-FEM technique in the context of reliability analysis is that it evaluates the pdf of the response in closed form, as opposed to other numerical methods, which give only the first and second moments of the response under certain conditions (Seifedine and Khaled, 2007).

Structural Reliability Using Genetic Algorithms

Many human inventions were inspired by nature (Dyer, 2003), notably the use of artificial intelligence in the civil engineering domain.
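The sketch below illustrates an HL-RF-type iteration of the kind used in FORM, again for the illustrative R − S example; the limit-state function, starting point and parameter values are assumptions, and the gradient is written analytically rather than obtained from a structural model.

```python
# Minimal sketch of an HL-RF-type FORM iteration in the standardized space.
# Limit-state function and all numerical values are assumed for illustration.
import numpy as np

mu = np.array([300.0, 200.0])      # assumed means of R and S
sigma = np.array([30.0, 40.0])     # assumed standard deviations

def g(x):                          # limit state in physical space: g = R - S
    return x[0] - x[1]

def grad_g(x):                     # its gradient (analytical in this example)
    return np.array([1.0, -1.0])

u = np.zeros(2)                    # start at the standardized origin
for _ in range(50):
    x = mu + sigma * u             # map back to physical space (inverse of Eq. 7)
    grad_u = grad_g(x) * sigma     # chain rule: dg/du_i = (dg/dx_i) * sigma_i
    B = np.linalg.norm(grad_u)     # Eq. (11)
    alpha = grad_u / B             # direction cosines, Eqs. (9)-(10)
    u_new = alpha * (alpha @ u - g(x) / B)   # HL-RF update; at convergence u* = -alpha*beta (Eq. 8)
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta_HL = np.linalg.norm(u)        # shortest distance to the failure surface
x_star = mu + sigma * u            # design point in physical space
print(f"beta_HL = {beta_HL:.3f}, design point x* = {x_star}")
```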
Advancement in artificial intelligence and its applications has led to the development of comprehensive search algorithms that lend themselves to immediate use in the reliability evaluation of structural systems and in the identification of dominant failure modes. Areas that have attracted considerable research output are genetic algorithms and artificial neural networks (Lagaros and Papadopoulos, 2006). According to Rackwitz (2001), genetic algorithms (GA) are known to be capable of detecting global minima during the search for the reliability index. The genetic algorithm is an adaptive heuristic search method based on population genetics. Genetic algorithms were introduced by John Holland in the early 1970s (Manoj et al., 2010; Black, 1996). The GA is an iterative procedure operating on a constant-size population of individuals, each one represented by a finite string of symbols, known as a genome, encoding a possible solution in a given problem space. This space, referred to as the search space, comprises all possible solutions to the problem at hand.

The GA-based structural reliability method was first developed by Shao and Murotsu (1999). It consists of generating search directions in the space formed by the random variables affecting the reliability of a structural system; by following a search path until failure is reached, the reliability index β corresponding to the search direction is obtained. The genetic search algorithm simulates the biological process of evolution in an attempt to identify the parameters, that is, the genes, that define an optimum system. The standard genetic algorithm proceeds as follows (Black, 1996): an initial population of individuals is generated at random or heuristically; at every evolutionary step, known as a generation, the individuals in the current population are decoded and evaluated according to some predefined quality criterion, referred to as fitness. The fundamental principle of building an artificial system that reproduces and mimics the workings of evolution dates back to the biologist Barricelli, who wrote an article about artificial methods of realizing evolutionary processes (Barricelli, 1957). The issue of the efficiency of GAs, which is of concern in all types of applications, is the subject of continuing research (Goldberg and Samtani, 1986; Goldberg, 1989; Grigoriu and Turkstra, 1979).

Deng et al. (2005) proposed a method for improving the efficiency of the genetic algorithm during the search for the minimum reliability index of a structural system. It is based on a splitting technique that shreds a chromosome into many small fragments, which can be easily identified. A dominant structural failure mode normally occurs when a set of structural members fails; the failure of subsets of those members corresponds to 'partial failures'. If such partial failures can be identified, some of them are very likely to contribute to the dominant failure mode. Conversely, a possible solution that does not contain any important partial failure is unlikely to be the solution that leads to a dominant failure mode. A linkage-shredding genetic algorithm for the reliability of structural systems was proposed by Wang and Ghosn (2005).
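As an illustration of how a GA can search for a reliability index, the sketch below minimizes the distance from the origin of the standardized space to the failure region using a simple penalty, selection, crossover and mutation loop. It is not any of the cited authors' algorithms; the limit-state function, GA settings and parameter values are all assumptions made for illustration.

```python
# Minimal GA sketch: estimate the reliability index as the minimum distance
# from the origin to the failure region in standardized space.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = np.array([300.0, 200.0]), np.array([30.0, 40.0])

def g(u):                                   # limit state in standardized space
    x = mu + sigma * u
    return x[0] - x[1]                      # failure when g <= 0

def fitness(u):                             # distance plus penalty if not in failure region
    return np.linalg.norm(u) + 10.0 * max(g(u), 0.0)

pop = rng.normal(0.0, 3.0, size=(60, 2))    # initial population of genomes
for generation in range(200):
    scores = np.array([fitness(u) for u in pop])
    # tournament selection: keep the better of random pairs
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # arithmetic crossover between randomly paired parents
    partners = rng.permutation(parents)
    w = rng.random((len(pop), 1))
    children = w * parents + (1.0 - w) * partners
    # Gaussian mutation
    children += rng.normal(0.0, 0.1, size=children.shape)
    pop = children

best = min(pop, key=fitness)
print(f"GA estimate of beta = {np.linalg.norm(best):.3f} (analytical value 2.0 for these assumed parameters)")
```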
The genetic algorithm (GA), which differs from classical optimization methods in four ways (Goldberg, 1989), is an evolutionary computational technique and a probabilistic, global search method. Because of these advantages, the GA has been adopted in a wide range of optimization problems. To apply the genetic algorithm, a population of solutions within a search space is initialized, in contrast to traditional optimization methods, which start from a single-point solution. The population can be viewed as points in the search space of all solutions to the optimization problem. Each individual in the population has a fitness value defined by a fitness function. The artificial evolution processes, called the genetic loop, which mimic natural evolution, are then applied to produce new candidate solutions. At the end of each cycle the newly created generation replaces the previous generation, and the evolution is repeated until a satisfactory solution meeting the design criteria is obtained or a maximum number of generations is reached.

Reliability Analysis Using Artificial Neural Networks (ANN)

Another search technique inspired by nature is the artificial neural network (ANN). To address the issue of implicit performance functions that is often encountered in structural systems, Deng et al. (2005) presented three ANN-based reliability analysis methods: an ANN-based Monte Carlo simulation method, an ANN-based first order reliability method and an ANN-based second order reliability method. These methods employ the multi-layer feed-forward ANN technique to approximate the implicit performance function. El-Hewy and Mesbahi (2006) proposed an ANN-based response surface method, in which the ANN establishes the relationship between the random variables (input) and the structural responses, and the first order reliability method (FORM), the second order reliability method (SORM) or Monte Carlo simulation (MCS) is then used to predict the failure probability. In another development, Cardoso et al. (2008) and Joao et al. (2008) proposed methodologies for computing the probability of failure by combining neural networks (NN) and Monte Carlo simulation (MCS). The proposed methodology exploits the capability of an ANN to approximate a function, so that the performance measure can be computed at lower cost. Cheng et al. (2008) proposed a method for the reliability analysis of structures using an ANN-based response surface method; it proved to be much more economical in achieving reasonable accuracy for problems where closed-form failure functions are not available or where the estimated failure probability is extremely small.
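To make the general ANN-plus-simulation idea concrete, the sketch below trains a small feed-forward network as a surrogate of the performance function and then runs Monte Carlo simulation on the cheap surrogate. It illustrates the concept only, not any of the cited authors' algorithms; the use of scikit-learn, the network size and all numerical values are assumptions.

```python
# Illustrative sketch: train a small neural-network surrogate of the
# performance function, then estimate Pf by Monte Carlo on the surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
mu, sigma = np.array([300.0, 200.0]), np.array([30.0, 40.0])

def g_exact(X):                       # stand-in for an "expensive" performance function
    return X[:, 0] - X[:, 1]          # here simply R - S for illustration

# 1. Small set of exact evaluations (design of experiments, spread over the tails)
X_train = mu + sigma * rng.normal(size=(200, 2)) * 3.0
y_train = g_exact(X_train)

# 2. Feed-forward ANN approximation of the (implicit) performance function
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
ann.fit((X_train - mu) / sigma, y_train)

# 3. Cheap Monte Carlo simulation on the surrogate
X_mc = mu + sigma * rng.normal(size=(500_000, 2))
Pf_ann = np.mean(ann.predict((X_mc - mu) / sigma) <= 0.0)
Pf_ref = np.mean(g_exact(X_mc) <= 0.0)   # reference, feasible only because g is cheap here

print(f"Pf (ANN surrogate) = {Pf_ann:.3e}, Pf (direct MC) = {Pf_ref:.3e}")
```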