
Engineering Design Optimization with Genetic Algorithms

Richard H. Dinger, Principal Engineer, The Boeing Company
P. O. Box 3707 MIC 67-FJ, Seattle, WA 98124, USA
(425) 234-7545, richard.h.dinger@boeing.com

Engineers design systems by searching through the large number of possible solutions to discover the best specific solution. The search process is often time consuming and expensive. But by exploiting the natural processes that biological systems use to evolve and adapt, design engineers can often quickly solve otherwise difficult design problems that resist solution by traditional optimization methods. This paper explains the basic technique of the genetic algorithm and shows how design engineers can use a genetic algorithm to solve real design engineering problems.

This paper focuses on explaining how genetic algorithms work. A brief example at the end demonstrates how the practicing engineer can use this powerful technique to solve real world problems in engineering design. The example structural design problem uses a genetic algorithm to minimize the weight of a pin jointed frame, but the genetic algorithm can be applied to almost any type of design problem. Some more basic theoretical references are provided at the end for those interested in a more rigorous explanation of the details of genetic algorithms.

0-7803-5075-8/98/$10.00 © 1998 IEEE.

Basic Optimization Terminology

I am providing some informal definitions for a few basic optimization terms in order to ensure that everyone has a common understanding of the discussion. These are not rigorous definitions, but should be adequate for the working design engineer. Only basic terms are presented here to get the discussion moving; other more specific terms relating to genetic algorithms are introduced as needed throughout the remainder of this paper.

Optimization refers to a goal directed search for the "best" solution to a problem. The idea of what is best must be defined by the engineer and is always problem specific. Seeking the lightest structure to minimize weight is an example of an optimization problem. Optimization problems seek either the minimum or the maximum of some problem specific property or set of properties, and multiple goals or objectives are not uncommon.

The objective function is the design engineer's numerical representation of the goal in the optimization problem. When the objective is to minimize, the term cost function is sometimes used. Genetic algorithms, however, use the term fitness function instead of objective function, and that is the term used in this paper.

Decision variables are the independent variables in the optimization problem that the design engineer manipulates while searching for the optimum solution. For example, the design engineer might select the cross sectional area of a structural member as a decision variable.

A constraint refers to a restriction the design engineer places on either the design problem's decision variables or resulting solutions. The design engineer could restrict stress or deflections, for example. A feasible solution is any solution that does not violate any of the constraints the design engineer has placed on the problem being optimized. The feasible region consists of all the feasible solutions taken as a whole. The more restrictive the constraints placed on the problem by the design engineer, the smaller the feasible region. Highly constrained problems may result in an empty feasible region, which obviously has no feasible solutions.

The search space encompasses the region that will be searched during the optimization process. The search space includes all possible values that the decision variables can assume. But since most problems have constraints on problem solutions, the search space will include some solutions that are not feasible. The optimization search process will throw out the solutions that are outside the feasible region.

Types of Optimization Problems

Optimization problem types may be categorized according to many different strategies. For the purpose of this discussion, I arrange problems in a continuum from numerically well behaved to completely random. Let's take a look at what is meant by these fundamental problem characteristics and the traditional solution approaches that have been used.

The fitness function of a numerically well behaved problem is continuous and differentiable everywhere within the search space. This fitness function is also

mono-modal, having a single minimum or maximum point that represents the optimum solution. Such a problem is easy to solve with any of the classical methods of optimization that are referred to as hill-climbing methods. Hill-climbing methods are designed for maximum exploitation of the shape of the fitness landscape. There are many variations of the hill-climbing method, but they generally consist of first finding a single feasible solution and then evaluating the derivative in the direction of each decision variable. Next a search is conducted along the steepest slope to find the best relative solution in that direction. The process is repeated at that new starting point until no further improvement is obtained because the solution point is at the top of the hill. This approach literally climbs the hill by always taking the steepest path. The approach similarly takes the steepest descent when seeking a minimum. If your design problem can be solved with the hill-climbing method, you are in luck, because nothing is more efficient at exploiting the shape of the fitness function, or fitness landscape as it is generally called.

Many real world problems, however, do not fit well into this type of solution. When the fitness function is multi-modal, the hill-climbing method will get stuck at the top of the first hill encountered. To solve that problem, design engineers typically start the solution process from several different locations. The best solution obtained in that series of trials is then presumed to be the global optimum for the solution space. If the design engineer has a good knowledge of the problem domain, this may still be the best approach to optimizing the design problem.

At the other extreme of the problem continuum is the objective function that appears to be completely random. This appearance of randomness may be caused by incomplete knowledge of the parameters that control the fitness function, or the problem could truly be random. In either case the fitness function appears to have no consistent relationship with the decision variables. The random problem is solved by a complete exploration of the solution space. The method is referred to as complete enumeration. The complete enumeration method, while effective, is seldom practical. Even with the low cost of modern computers, the computational cost for this exhaustive search is very high for all but the simplest problems. Most real world problems have many decision variables. Since the search space is the Cartesian product of all the decision variables, the search space becomes enormous for any nontrivial exhaustive search.

And if the problem of dimensionality weren't bad enough, most real world problems are not integer valued problems. To exhaustively search for the best real number means the design engineer must decide how fine a granularity to use in the search. For example, to search between 20 and 25, should you increment trials by 0.5, or 0.1, or 0.01, or are even finer gradations required? Making the search step size too large may cause you to miss a better solution, while making the step size too small may waste computing resources by needlessly prolonging the search. The design engineer must make all these decisions before the search is undertaken.

Between these two extremes of the well behaved and the random fitness function lies the real world problem. The real optimization problem may be discontinuous at several points and is often discontinuous at the global optimum because of a constraint. The real optimization problem may not be differentiable, may have many relative optimums, and may be subject to awkward boundary conditions. But the real problem is seldom completely random; the genetic algorithm searches such problems effectively by exploiting the same processes that biological systems use to evolve and adapt through natural selection.

The genetic algorithm differs in several fundamental ways from the more traditional optimization methods. First, the genetic algorithm encodes the values of the decision variables in a string analogous to a biological chromosome. The encoding means that each decision variable can be thought of as a gene in the chromosome. Second, the genetic algorithm uses a population of individuals to search. Each individual represents a possible solution to the problem. Each individual's chromosome encodes a set of decision variables and so results in a single point in the solution space. Third, evaluation of search points is based on fitness alone, which keeps the search focused on the actual objective the design engineer is seeking. Derivatives are not used, so no derivative calculation is required, and the search is not affected by discontinuities.

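The encoding idea can be sketched in a few lines of Python; the 8-bit gene width and the 0 to 50 variable range are illustrative assumptions, not values from the paper:

```python
BITS = 8                 # bits per gene (per decision variable); an assumed width
LOW, HIGH = 0.0, 50.0    # assumed allowed range of each decision variable

def decode_gene(bits):
    """Map one gene (a list of 0/1 bits) onto the real range [LOW, HIGH]."""
    n = int("".join(map(str, bits)), 2)
    return LOW + (HIGH - LOW) * n / (2 ** BITS - 1)

def decode(chromosome):
    """A chromosome is a flat bit list; slice it into genes and decode each."""
    genes = [chromosome[i:i + BITS] for i in range(0, len(chromosome), BITS)]
    return [decode_gene(g) for g in genes]

# Two decision variables -> a 16-bit chromosome -> one point in the solution space.
chromosome = [0] * 8 + [1] * 8
print(decode(chromosome))   # [0.0, 50.0]
```

Decoding a chromosome gene by gene turns a search over bit strings into a search over real valued design variables such as member cross sectional areas.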
Fourth, the genetic algorithm is not a random search method, but it does use random processes to transition from one search state to another. The random processes give genetic algorithms a good balance between wide exploration of the search space and exploitation of fitness landscape features. Recall that enumeration techniques are very good at exploration, but do not exploit local features. Hill-climbing methods, however, are very good at exploitation, but do not explore the entire search space.

The genetic algorithm uses the selection, recombination, and mutation operators on the population of individuals to perform the search. The population is randomly created at the start of the search. Fitness is used to select individuals from the current generation to advance into the next generation. These individuals are recombined and possibly mutated to form the next generation. This process is continued until there is no change in the best individual in the population.

Selection begins by determining the relative fitness of each individual by calculating the individual's fitness divided by the total fitness of the entire population. Then a cumulative fitness is calculated for each individual as the sum of the relative fitness for all members up to the one being calculated. The cumulative fitness is thereby normalized over the entire population to a maximum of 1.0 for the last individual. The population can be thought of as forming a roulette wheel with slots proportional to each individual's fitness relative to the rest of the population. A random number between 0 and 1.0 is generated next, and the individual whose cumulative fitness bounds the random value is selected. This selection process continues until a new population is formed. In general, those individuals with higher fitness values are more likely to be selected, but there is an element of random choice also. Similarly, multiple individuals that have the same chromosome and hence the same fitness will also have a better chance of being selected.

Once the new population of individuals is selected, recombination begins. The genetic algorithm moves through the population by pairs and randomly determines if each individual pair will be recombined. If that is the case, a random point along the pair of chromosomes is selected and the remainder of each chromosome to the right of the selection point is swapped between the two chromosomes. Two new individuals are formed, which are a recombination of the genes in the original two chromosomes. Finally, each gene of each chromosome of each individual may be randomly mutated in order to introduce additional diversity in the population. The probability of a mutation is generally low, but the design engineer can control this and all other probabilities to fine tune the search process.

[Figure: initial and final configurations of the example truss structure.]

Since most real world problems have constraints, the genetic algorithm needs a mechanism for applying problem constraints. A penalty function is an easy way to constrain the behavior of the fitness function to the feasible region by applying a penalty for violating a problem constraint. A penalty function reduces the value of the fitness function when a constraint is violated. A good penalty function drops the value of the fitness sharply at the constraint boundary, forming a cliff in the fitness landscape. Recall that discontinuities do not bother the genetic algorithm, so the sharper the edge of the cliff the better. Good results can be obtained by reducing the unconstrained fitness value with a penalty that increases exponentially as the constraint violation increases. The violation squared can be subtracted from the value of the fitness function, or the fitness can be multiplied by a factor that falls off exponentially with the squared violation.

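One plausible multiplicative penalty consistent with that description is sketched below; the exponential-in-squared-violation form and the severity constant k are assumptions, not the paper's exact formula:

```python
import math

def penalized_fitness(raw_fitness, violation, k=1.0):
    """Reduce fitness steeply once a constraint is violated.
    `violation` is zero inside the feasible region and grows with the
    amount by which the constraint is exceeded; k sets the severity."""
    if violation <= 0.0:
        return raw_fitness   # feasible solution: no penalty applied
    return raw_fitness * math.exp(-k * violation ** 2)

print(penalized_fitness(100.0, 0.0))   # 100.0: feasible, fitness untouched
print(penalized_fitness(100.0, 2.0))   # sharply reduced
```

The factor falls off steeply as the violation grows, approximating the cliff in the fitness landscape described above while leaving feasible solutions untouched.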
Since the genetic algorithm uses random processes to transition from one generation to the next, the genetic algorithm is not deterministic. That is to say, it is unlikely the same answer will be obtained in any two attempts with the same problem. The answer will be the best that the genetic algorithm can find. The genetic algorithm does explore broadly, however, and exploits the fitness landscape to find a very good solution.

A Simple Structures Example

The genetic algorithm was tested by minimizing the weight of several simple cantilever trusses, such as the seven member truss shown in the figure. The genetic algorithm obtained similar results for each case. The results of the seven member example are explained here in detail.

A simple finite element analysis program was driven by the genetic algorithm to analyze the example structures. The same material properties were used for all members, but a compression member's allowable load was limited to half that of a tension member's. The cross sectional areas of the members were selected as the design problem decision variables. The structure was assumed to be pin jointed, so bending was not considered in the test problems. Allowable member loads were the constraints on the test problem. When any member's internal load exceeded either the tension or compression allowable value, the fitness was reduced by an exponential penalty function.

A population of 50 individuals was used. The probability of recombination was 0.80 and the probability of mutation was 0.06. Member cross sections were encoded to range high enough that at least one feasible solution existed. The initial solution indicated that member 4 in the figure was not needed and it
could be removed from the structure. This was indicated by the genetic algorithm setting the cross section area to nearly zero. The member was then removed from the structural model and the problem run again. The subsequent analysis indicated that member 2 could be removed for the same reason that member 4 was removed earlier. Removing member 2 would, however, double the unrestrained length of the lower chord and could cause buckling problems. The lower chord of the truss was not analyzed for column buckling in this model.

If members 2 and 4 are removed, an exact optimum can be found analytically, since no redundant members exist. The initial solution with all seven members in the truss was within ten percent of the true optimum. These same results were repeated with more complex structures. Although the genetic algorithm does not guarantee an optimum solution, it certainly does a good job of getting close to the optimum.

Conclusions

The genetic algorithm has some powerful advantages over both the classical hill-climbing method and the complete enumeration method. First, the genetic algorithm provides a good balance of both exploitation and exploration of the search space. That means solutions are efficient, yet full exploration of the entire space is provided, so the solver is less likely to hang up on a local relative optimum. Second, the genetic algorithm has no fear of a discontinuity in the solution space. The problem can be mathematically messy, the hallmark of many real world problems. The design engineer does not need to create elaborate mathematical fictions, however, to fool the solver into thinking the actual problem is well behaved. Finally, the genetic algorithm seeks the very good solution, rather than the very best solution. This is actually a strength that prevents the genetic algorithm from myopically falling into holes in the mathematics or getting stuck on top of a local hot spot.

The next time you are confronted with an elaborate optimization problem in design engineering, give the genetic algorithm a try. The more ill behaved the problem, the better the genetic algorithm seems to like it.
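As a closing illustration, the operators described in this paper assemble into a short working sketch. The population size of 50, recombination probability of 0.80, and mutation probability of 0.06 mirror the example's settings, but the one-max bit counting fitness and everything else here are illustrative stand-ins for the truss problem:

```python
import random

def evolve(fitness, n_bits=16, pop_size=50, p_cross=0.8, p_mut=0.06,
           generations=60):
    """Minimal generational genetic algorithm: roulette wheel selection,
    single point crossover, and per-gene mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits)

        def select():
            # Spin the roulette wheel: cumulative fitness bounds the spin.
            spin, acc = random.random() * total, 0.0
            for ind, fit in zip(pop, fits):
                acc += fit
                if spin < acc:
                    return ind
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            mom, dad = select()[:], select()[:]
            if random.random() < p_cross:
                # Single-point recombination: swap tails right of the cut.
                cut = random.randrange(1, n_bits)
                mom, dad = mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]
            # Mutate each gene with low probability to maintain diversity.
            nxt.append([1 - g if random.random() < p_mut else g for g in mom])
            nxt.append([1 - g if random.random() < p_mut else g for g in dad])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

random.seed(0)
# Toy fitness: number of 1 bits plus one (kept positive for the roulette wheel).
best = evolve(lambda ind: sum(ind) + 1)
print(sum(best))   # count of 1 bits in the best individual found
```

With a fixed random seed the run is repeatable; in normal use, as noted above, different runs will generally return different near-optimal answers.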

References

1. Goldberg, David E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
2. Michalewicz, Zbigniew, Genetic Algorithms + Data Structures = Evolution Programs, Third Edition, Springer, 1996.
