Monday, April 27, 2009

Pan TC, Kao JJ (2009) GA-QP Model to Optimize Sewer System Design, JOURNAL OF ENVIRONMENTAL ENGINEERING, 135(1) 17-24

In this article, the authors introduce the problem of designing sanitary sewer systems and review the methods other researchers have used to attack such design problems. They then describe the motivation for a combined GA-QP model. A plain GA may be inappropriate because its efficiency degrades as the number of variables grows, and "many of the randomly produced alternatives are unreasonable or inappropriate." They describe how an LP can help improve a GA, but for their purposes they don't want to reduce their nonlinear program to an LP. Instead, they transform their functions into quadratic forms and solve them with quadratic programming (QP).


The paper defines the decision variables of the GA genes to be pipe diameters and pumping station locations, and describes the constraints in detail. The fitness function is the inverse of the cost, so the higher the fitness, the better. The two decision variables for the QP are pipe slope and buried depth.
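To make the fitness idea concrete, here is a toy Python sketch of the GA outer loop. Everything in it is invented for illustration: the candidate diameters, the cost function, and the GA settings are not from the paper, and the QP inner step that would set slopes and depths is replaced by a made-up cost.

```python
import random

# Toy GA sketch: a chromosome holds candidate pipe diameters, and fitness
# is the inverse of cost, as in the paper. The cost function below is a
# hypothetical stand-in for the paper's pipe + excavation cost.

DIAMETERS = [0.3, 0.4, 0.5, 0.6, 0.8]  # candidate pipe diameters (m), invented

def cost(chromosome):
    # Made-up cost: material cost grows with d^2, hydraulic penalty shrinks with d.
    return sum(100 * d ** 2 + 20 / d for d in chromosome)

def fitness(chromosome):
    # As in the paper: fitness is the inverse of cost, so lower cost -> higher fitness.
    return 1.0 / cost(chromosome)

def evolve(n_pipes=4, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(DIAMETERS) for _ in range(n_pipes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # one-point crossover of two parents
            cut = rng.randrange(1, n_pipes)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # occasional mutation of one gene
                child[rng.randrange(n_pipes)] = rng.choice(DIAMETERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

With these made-up numbers, the GA settles on the diameter that balances the quadratic material cost against the 20/d hydraulic penalty.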


It was a very interesting point that the best design found in the optimization model doesn't even begin to consider many important factors, such as construction, geology, traffic impact, public preferences and land availability. It may be an extremely iterative process to find a solution, determine the real feasibility (based on the above listed factors), have to throw out that solution, and repeat. As powerful as computers are in implementing optimization models, in the end any public sector planning problem requires a large human component in the process.

Brill ED (1979) Use of Optimization Models in Public-Sector Planning, Management Science, 25

This article written by Brill in 1979 addresses the trend towards using optimization models in solving public policy problems.

Brill argues that optimization models used to help find solutions to public policy problems "have serious shortcomings," including the failure to consider equity or the distribution of income, as well as difficulties in estimating the benefits and costs of public programs. Brill says public programs' benefits and costs are better kept in their original units rather than translated into common units in order to judge among completely different types of objectives.

Brill notes that multi-objective programming was increasingly common to try to address these issues, but he makes the point that all objectives need to be clearly identified in order to truly distinguish between inferior and noninferior solutions, as illustrated well in Figure 2.

Brill proposes that a good way to involve optimization models in public sector planning is to: (1) use them to stimulate creativity, and (2) create a man-machine interactive process. This man-machine interactive process is something he only briefly describes in this paper, but it sounds like he researched the idea further and published on it soon afterward. I found Brill's point interesting that optimization models should generate alternatives that meet minimal requirements and are different.
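Brill's suggestion, generating alternatives that meet minimal requirements but are as different as possible, can be sketched on a toy project-selection problem. The costs, benefits, and 25% cost slack below are all invented for illustration:

```python
from itertools import product

# Toy illustration of Brill's idea (later formalized as "modeling to
# generate alternatives"): first find the least-cost plan, then look for a
# feasible plan within a relaxed cost target that is as *different* from
# the optimum as possible.

COSTS    = [4, 3, 6, 2, 5]   # cost of each candidate project (invented)
BENEFITS = [5, 4, 7, 3, 6]   # benefit of each project (invented)
REQUIRED = 12                # minimum total benefit a plan must deliver

def feasible_plans():
    for plan in product([0, 1], repeat=len(COSTS)):
        if sum(b for b, x in zip(BENEFITS, plan) if x) >= REQUIRED:
            yield plan

def plan_cost(plan):
    return sum(c for c, x in zip(COSTS, plan) if x)

def best_plan():
    return min(feasible_plans(), key=plan_cost)

def most_different_alternative(slack=0.25):
    # Among plans costing at most (1 + slack) times the optimum, maximize
    # the Hamming distance from the optimal plan.
    opt = best_plan()
    budget = (1 + slack) * plan_cost(opt)
    candidates = [p for p in feasible_plans() if plan_cost(p) <= budget]
    return max(candidates, key=lambda p: sum(a != b for a, b in zip(p, opt)))
```

On this toy data, the "most different" alternative shares no projects with the least-cost plan, yet still meets the benefit requirement at a modest cost premium; that is exactly the kind of structurally different option Brill wanted planners to see.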

I'd really like to see a case study of these ideas, and how optimization models can assist public planners in generating creative solutions. It seems to me that a human would easily be just as creative as a computer, so I'd like to see an example showing otherwise.

Tuesday, April 14, 2009

Shiau and Wu (2006)

Shiau JT, Wu FC (2006) Compromise programming methodology for determining instream flow under multiobjective water allocation criteria, JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, 42(5), pp. 1179-1191

Increasing demand for water for human consumption decreases the water available for instream flows that sustain the natural ecology of a stream or river. This creates a multiple-objective problem. The authors use the Indicators of Hydrologic Alteration (IHA) method as well as the Range of Variability Approach (RVA) to quantify the effects of a diversion weir on a stream in Taiwan.

The objectives of this problem are to meet human demands while not harming the environment, as measured by IHA and RVA. The method the authors use to solve this multi-objective program is called "compromise programming," which identifies the optimal solution as the one with the shortest distance to a theoretical ideal point.
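A minimal sketch of the compromise-programming idea, with invented numbers; the normalization and weights below are my assumptions, not the paper's exact formulation:

```python
# Compromise programming in miniature: each alternative is scored by its
# weighted distance to an "ideal point" where every objective takes its
# best value, and the compromise solution minimizes that distance.

def compromise_distance(values, ideal, worst, weights, p=2):
    # Normalize each objective's shortfall from the ideal to [0, 1], then
    # take the weighted L_p norm.
    total = 0.0
    for v, i, w, wt in zip(values, ideal, worst, weights):
        total += (wt * abs(i - v) / abs(i - w)) ** p
    return total ** (1.0 / p)

def best_compromise(alternatives, ideal, worst, weights, p=2):
    return min(alternatives,
               key=lambda a: compromise_distance(a, ideal, worst, weights, p))
```

With equal weights on a shortage objective and an ecological objective (the weighting the authors used, which I liked), a balanced alternative beats either extreme, because the L2 distance penalizes a large shortfall on one objective more than two moderate shortfalls.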

Their study really seemed to be two-part: first, they demonstrate that the current instream flow of 9.5 m³/s is not sufficient; second, they find that the optimal flow of 26 m³/s better protects the environment while still limiting water shortages.

I had a difficult time understanding this paper since there were so many different variables being thrown around, and I didn't have the patience to memorize or look up each variable as I came across it. I don't know if they could've communicated their work in an easier way, but it definitely hindered getting their point across.

It was nice to see an example of compromise programming being used, and I personally liked that the ecological needs were weighted the same as the human needs (since human needs will increase when ecological needs are ignored).

Tuesday, March 31, 2009

Neelakantan and Pundarikanthan (2000)

Neelakantan TR, Pundarikanthan NV (2000) “Neural network-based simulation-optimization model for reservoir operation,” JOURNAL OF WATER RESOURCES PLANNING AND MANAGEMENT, 126(2) pp. 57-64

The objective of this study was to optimize the operating policy of a reservoir system in Chennai, India. The study considered using a conventional simulation model and an artificial neural network, and determined that the ANN saved many days of computing time. The conventional simulation model was used to generate scenarios to train the ANN. The study utilized the Hooke and Jeeves optimization algorithm: when different inputs generated by Hooke and Jeeves were fed to the ANN, the ANN returned the objective function value as its only output, which was then fed back into the Hooke and Jeeves search.
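The loop described above can be sketched roughly as follows. The quadratic "surrogate" stands in for the trained ANN, and the Hooke and Jeeves implementation is a bare-bones version; none of the functions or parameters come from the paper:

```python
# Sketch of the simulation-optimization loop: a fast surrogate (standing in
# for the trained ANN) maps policy parameters to an objective value, and a
# Hooke-Jeeves pattern search feeds it trial points.

def surrogate(x):
    # Hypothetical stand-in for the ANN: returns the objective value
    # (e.g., a shortage index) for a policy parameter vector x.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        # Exploratory moves: perturb each coordinate up and down in turn.
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink  # no improvement anywhere: refine the step size
    return x, fx
```

The point of the substitution is exactly what the paper exploits: the search calls the objective function thousands of times, so replacing a slow simulation with a cheap trained approximation pays off quickly.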

The discussion of the trial scenarios and their results made very little sense to me. I thought the standard operation policy (SOP) was what the reservoir managers were already doing, yet the SOP generates a better objective function value than the suggested operating policies. I don't see how this is an improvement or a success in the study.

I had a difficult time reading the article because I missed critical information at the beginning, which left me confused for the rest of it. The authors wrote only a single sentence mentioning that a conventional simulation model was created in order to train the ANN, and that the ANN was utilized for its quicker solve times. Having missed that sentence, I went through the whole article wondering why the authors were neglecting local catchment inflows, transmission losses and percolation losses, when those are irrelevant (I think) in an ANN. The full discussion of why a conventional model was necessary and why an ANN was chosen was saved until the very end; once I read it, I could go back through and understand much more of what was going on earlier in the article. In conclusion, the only fault of the article that stuck out to me was the poor ordering and clarity of the presentation.

Wednesday, March 18, 2009

Perez-Pedini et al 2005

Perez-Pedini C, Limbrunner JF, Vogel RM (2005) “Optimal location of infiltration-based best management practices for storm water management,” JOURNAL OF WATER RESOURCES PLANNING AND MANAGEMENT, 131(6) pp. 441-448

This article examined a methodology for locating the optimal placement of infiltration-based stormwater BMPs. The goal was to reduce the peak runoff flow for budgets of 25 to 400 BMPs (the results form a Pareto frontier with the number of BMPs implemented on one axis and the percent reduction of peak flow on the other).

The researchers modeled a developed watershed as a grid of 4,533 hydrologic response units (HRUs), each of which could be modeled with or without a BMP. If a BMP was implemented in an HRU, the curve number (CN) of that HRU was decreased by 5 units. The researchers planned to run a genetic algorithm to decide where to place the BMPs. However, 4,533 candidate HRUs was too large a decision space for the GA to handle, so the researchers reduced it by limiting candidate BMPs to HRUs with low permeability (high CN) and HRUs in close proximity to a river. The first limitation makes sense because the most effective BMP would replace an impervious surface; the second makes sense because an HRU near a river likely has a larger volume of water running over it than an HRU at a high point in the watershed.
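A toy sketch of the screening step. The paper uses a GA for the placement itself; to keep this short, a greedy stand-in picks sites one at a time after the same kind of candidate filtering. The HRUs, curve numbers, and peak-flow proxy are all invented:

```python
# Each HRU: (curve_number, distance_to_river_in_cells). All values invented.
HRUS = [(95, 1), (92, 1), (90, 3), (88, 2), (85, 1), (75, 4), (70, 1), (98, 5)]

def candidates(hrus, min_cn=85, max_dist=2):
    # The paper's decision-space reduction: only high-CN HRUs near the river
    # are eligible for a BMP.
    return [i for i, (cn, dist) in enumerate(hrus) if cn >= min_cn and dist <= max_dist]

def peak_flow(hrus, bmps):
    # Crude nonlinear proxy for peak runoff: contribution grows with the
    # square of (CN - 60), and a BMP drops an HRU's CN by 5, as in the paper.
    return sum(((cn - 5 if i in bmps else cn) - 60) ** 2
               for i, (cn, _) in enumerate(hrus))

def greedy_placement(hrus, budget):
    chosen = set()
    for _ in range(budget):
        pool = [i for i in candidates(hrus) if i not in chosen]
        if not pool:
            break
        # Pick the candidate whose BMP reduces peak flow the most.
        chosen.add(min(pool, key=lambda i: peak_flow(hrus, chosen | {i})))
    return chosen
```

Because the proxy is convex in CN, the highest-CN candidates give the biggest per-BMP reduction and get picked first, which mirrors the diminishing returns the paper reports as the BMP budget grows.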

The study finds that the first few BMPs implemented yield a high reduction in peak flow per BMP, and as the number of BMPs increases, the reduction per BMP decreases. This makes sense, as the GA should target the very best locations first, with the less effective HRU locations selected later.

I was surprised that the article mentioned the GA didn't find the optimal solution, as the authors discovered when they experimented with the results a little. If they say their results are acceptable because they are near-optimal, I'll believe them, but I would've liked to see this demonstrated a little more in the paper. Perhaps that is for another paper. Another component I would've liked to see expanded was their study of possible commonalities between selected HRUs. They concluded that there were no dominating characteristics which could identify good HRUs without use of the GA, and I would be interested to read an article examining this in detail. I suppose since they found no dominating characteristics, the results might not be interesting enough to warrant a separate article.

Sunday, March 1, 2009

Behera P, Papa F, Adams B (1999) “Optimization of Regional Storm-Water Management Systems,” Journal of Water Resources Planning and Management, 125

Detention ponds are useful engineering tools to help mitigate the impacts of urban development, but land developers see ponds as a loss of developable land and an added cost. So when designing a pond, both economic and environmental goals need to be considered.



The paper demonstrates a methodology to design a detention pond with the decision variables being the storage volume, pond release rate, and pond depth. The constraints of the model include quality constraints for pollution control and runoff control. The objective function is to minimize costs which includes the value of the land used and the construction cost involved.



There was a discussion on isoquants in the paper which I didn't understand at all.



In this paper, the authors have designed a methodology of optimizing for one pond, and they apply this methodology to a larger scale optimization problem of designing a multiple catchment system. For multiple catchments, they use dynamic programming to solve.



It is interesting to see that by not constraining each catchment to certain parameters but simply restricting the end result of the system, they were able to achieve a 13% reduction in cost.

Monday, February 23, 2009

Berry et al (2005)

Berry, J., Fleisher, L, Hart, W. Phillips, C. and Watson, J.-P. (2005) “Sensor Placement in Municipal Water Networks”, Journal of Water Resources Planning and Management, 131(3) pp. 237-243

This article shows a methodology for determining sensor placement in municipal water networks using mixed-integer programming. The sensors are placed to detect the presence of contaminants, whether accidentally or intentionally occurring.

The article states that this type of problem can be attacked with a variety of modeling approaches utilizing the latest computing power, but those approaches can't offer the solution-quality guarantees of mixed-integer optimization, even though the mixed-integer formulation requires many simplifying assumptions. This was an interesting point, since in the last article review I did, of Lee and Deininger (1992), I commented (see below) that it might not be necessary to make so many assumptions for the sake of quick solutions, considering today's technology and advancements in optimization techniques. Berry et al answered my question by describing why simplifying assumptions are still worthwhile, even today.

After making these simplifying assumptions, the authors describe the equations they used to model these problems and the three data sets they used to demonstrate their methodology. The sensitivity of their results was interesting: it was low when few sensors were available to place in the network, high with a medium number of sensors, and low again with many sensors. I can see how there would be low sensitivity with many sensors, since the distances from a sensor to any point in the network are small, but I would expect high sensitivity with very few sensors, since moving a sensor would make big differences in the distances to certain nodes, especially nodes with higher populations and demands. I suppose that's why we have computers assisting us with these problems, since they are too complex for a human to have an intuitive sense about. Or it's just me. :)
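A toy version of the placement problem, solved by brute force rather than a MIP, helps build that intuition: the impact of a contamination scenario is the demand exposed before any sensor sees the plume. The network, demands, and scenarios below are all invented:

```python
from itertools import combinations

# Brute-force stand-in for the paper's mixed-integer formulation: choose
# the k sensor nodes minimizing total exposed demand across scenarios.

DEMAND = {"A": 5, "B": 2, "C": 8, "D": 3, "E": 6}  # node demands, invented

# Each scenario: the order in which nodes are contaminated after an injection.
SCENARIOS = [
    ["A", "B", "C", "D"],
    ["C", "D", "E"],
    ["E", "D", "B", "A"],
]

def impact(scenario, sensors):
    exposed = 0
    for node in scenario:
        if node in sensors:
            return exposed        # detected here: exposure stops
        exposed += DEMAND[node]
    return exposed                # plume never reached a sensor

def best_placement(k):
    nodes = sorted(DEMAND)
    return min(combinations(nodes, k),
               key=lambda s: sum(impact(sc, set(s)) for sc in SCENARIOS))
```

Even on this five-node toy, the best single sensor is not obvious by inspection, which is some comfort for my lack of intuition about the sensitivity behavior above.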

For future work this could be applied to techniques for cleaning up contamination. It would be interesting to see how a slightly different application in the same setting might change the results.

Sunday, February 15, 2009

Lee and Deininger 1992

Lee, B. H. and Deininger, R. A. (1992) “Optimal Locations of Monitoring Stations in Water Distribution Systems”, Journal of Environmental Engineering, 118(1) pp. 4-16

The EPA requires all water distribution authorities to test the water quality in their systems. Lee and Deininger have done important work in contributing to the efficiency and effectiveness of this testing by providing and demonstrating an integer programming approach to placing monitoring stations. This assists the entire nation, as the improved efficiency saves taxpayers' and water users' money.

To set up the integer program, testing nodes need to be defined and assumptions need to be made. When a water sample is taken at a single tap, the quality is thought to represent the quality at the closest node. d_i is the demand at that node, and D is the total demand. With the single sample, the quality of the fraction d_i/D is known. Also, the nodes nearby may be inferred depending on the hydraulics. If all the water at the sampled node came from an upstream node, then the water quality of this upstream node would also be known. Since water distribution systems are complex and the water in one node likely came from a combination of other nodes, the authors set up a method for deciding when the water quality of an upstream node can be assumed to be represented by a downstream node.

I'm not entirely convinced that these assumptions can be made. The weakness of the article is that the authors make assumptions without attaching any argument for why they are acceptable. But perhaps these assumptions are intuitive to most readers, and I'm not familiar enough with the water quality testing industry to know. For example, the authors assume that if at least 50% of the sampled water has passed through a node, the sample is considered representative of that node (p. 8). I'd be interested in hearing others' opinions on whether they think this is an intuitively accurate assumption.

I'm sure that since 1992 many more complex models have been presented for how to efficiently and effectively monitor water quality across a large system, but by 1992 standards integer programming was an excellent and quick solution (17-27 s to solve!). I would guess that this kind of aggressive simplification just to save computing time is less necessary nowadays, and that integer programming is chosen when it really is the most natural model of a system. I'm trying to think of real-life applications where it would be used today, but I can't think of any. I'd be interested in hearing if anybody else can think of some applications.

Sunday, February 8, 2009

Hardin 1968

Garrett Hardin, "The Tragedy of the Commons," Science, 162(1968):1243-1248

The main motivation the author has for writing this paper is to convince his readers of the need for societal restrictions on family size. He discusses the strains of a growing population on the earth's resources. He also rebuts the idea that people will naturally regulate their own family size for the betterment of the community, and disagrees that subliminally coercing people into restricting family size is an adequate solution.

Hardin uses the example of the tragedy of the commons to explain how a rational player will make decisions that will give him the advantage and any losses will be spread out across society. This is fine in a rural community where the environment can absorb the losses. However, in an urban society the sum of the losses caused by each person's gain directly affects many people in a major way.

Also, Hardin argues that at the time of the article a lot of emphasis was placed on creating an environment of shame for people who did not act in the best interests of the community. For example, he says a common phrase of the day was "responsible parenthood," which implies that a large family is irresponsible. This may be an effective way to motivate families to restrict their size, but Hardin believes that guilt and anxiety are never healthy for a society and are therefore a poor method to achieve an end goal. Also, he argues that those individuals who don't listen to the messages of guilt and conscience will survive better since they are thinking of themselves and not the community; natural selection will then take place and those with a conscience will become extinct. He says this method of population control works but won't be effective in the long run.

As an alternative, Hardin proposes a second method of population control as the best one: a set of laws restricting family size for the good of all humanity. However, Hardin was missing an important third method of population control, which I discuss next.

Early on in the article, Hardin mentions that "there is no prosperous population in the world today that has, and has had for some time, a growth rate of zero." Perhaps that was true in 1968, but it is definitely not true today. For example, Japan currently has zero native population growth, and Ukraine is at the top of the list for negative population growth. Overall, 20 countries in the world have zero or negative population growth. (http://geography.about.com/od/populationgeography/a/zero.htm) If Hardin were an advocate of population control today, he would see that there's a 3rd option that works in limiting family size and is currently at work in society. I'm not going to be so confident as to define that 3rd option, but I think it has something to do with individuals wanting to advance themselves to the point of not having time to create a family. In geography class I learned that societies tend to have a population explosion as they industrialize, and once they are fully industrialized the population growth levels off to around zero. However, if you look at the list of countries with negative or zero population growth, it doesn't seem that the reasons I've given would play a part in the phenomenon, since most of the countries are poorer Eastern European countries. I'd be interested in defining the factors that cause zero population growth and the methods necessary to replicate this as a research project (I know, it's not engineering based, but it sure is interesting).

Hardin, if he is still alive, should reevaluate his arguments in light of these new trends in countries with decreasing populations. However, he might reach the conclusion that the most intelligent people are the ones limiting their family size, which would cause him to argue against this method since the least intelligent would be the ones reproducing in society. I can assume this would be his opinion since Hardin was distinctly Darwinian in his thinking and he argued for controlled breeding of the "genetically defective" in his 1966 biology textbook.

I'm not sure that Hardin would go so far as to advocate family size laws that would discriminate against the unintelligent and favor those with good genes. The definition of good genes is undefinable by society; some would say intelligence is the most important gene to pass on in this world, while others would say artistic genes are just as important or more important. This makes the problem of population control and selective breeding a "wicked" problem in the same sense as other societal problems which we've discussed earlier in this course.

Monday, February 2, 2009

Atwood and Gorelick (1985)

Atwood, D and S. Gorelick (1985) “Hydraulic-Gradient Control for Groundwater Contaminant Removal”, Journal of Hydrology, 76(1-2), pp. 85-106

The authors propose to use linear programming to optimize the containment of a contamination plume in groundwater. The Rocky Mountain Arsenal was selected as the location for the case study due to its frequent groundwater contamination problems. The paper discusses some specific characteristics of the model area, including flow boundaries and transmissivity.
The modelers were required to define the boundary of the contamination, in terms of contaminant concentration, and to define governing equations for how groundwater and contaminants move in the aquifer. These governing equations make the model nonlinear, so the modelers initially assumed the velocity gradient to linearize it. They then took two different solution paths: a global optimization that solved over the entire simulation period, and a sequential solution that used the head distribution from the previous 6 months as the initial conditions for the next 6 months.

The objective function in the linear program was to minimize the sum of pumping and recharge rates. The paper then gives the various equations used to model the defining characteristics of the pumping scenarios.

The results show that the global solution has a constant sum of pumping and recharge rates, while the sequential solution's sum of pumping and recharge rates increases over time.

This paper is interesting because it demonstrates that hydraulic gradient control makes a difference in containing a contamination plume. Also, by optimizing the procedure, the cleanup time can be reduced by up to two years, which can translate into large savings in pumping energy costs. This study was performed in 1985, and much more could be done with the model using more recent technology. The next step in research would be to make the model more complex, including the nonlinearities, and to use newer optimization tools to solve it.

Monday, January 26, 2009

Randall et al 1997

Randall, Dean, Leasa Cleland, Catharine S. Kuehne, George W. Link, and Daniel P. Sheer. “Water Supply Planning Simulation Model Using Mixed-Integer Linear Programming ‘Engine’.” Journal of Water Resources Planning and Management. March/April (1997): 116-124.

Summary:
This paper describes the development of a water supply planning simulation model built for the Alameda County Water District (southeast of the San Francisco Bay area). The goals of the system were to be user-friendly, adaptable so that little coding would be needed for major changes, and able to integrate with other models. The developers chose to model the system with a linear program rather than a network formulation because some constraints can be handled directly by an LP, while a network formulation would satisfy them iteratively, taking more time to solve.

The paper describes the different components of the system, including aqueducts, a creek, reservoirs, percolation ponds, water treatment plants, and an aquifer. There are multiple objectives in the system but the modelers chose to assign weights to each of the objectives so that the system could be modeled with a single objective for simplicity in solving. The paper gives many details of how specific operations were modeled and the challenges in doing so.
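The weighted single-objective trick can be shown in miniature: several objective values collapse into one score via fixed weights, so a single-objective solver (an LP in the paper's case) can be applied. The objectives, weights, and alternatives here are all invented:

```python
# Weighted-sum scalarization of a multi-objective problem. Weights reflect
# the modelers' priorities; lower is better for every objective in this toy.

WEIGHTS = {"shortage": 0.5, "pumping_cost": 0.3, "spill": 0.2}

def scalarize(objectives, weights=WEIGHTS):
    # Collapse a dict of objective values into one weighted score.
    return sum(weights[name] * value for name, value in objectives.items())

ALTERNATIVES = [
    {"shortage": 10.0, "pumping_cost": 4.0, "spill": 2.0},
    {"shortage": 6.0,  "pumping_cost": 9.0, "spill": 1.0},
    {"shortage": 8.0,  "pumping_cost": 5.0, "spill": 5.0},
]

def best(alternatives):
    return min(alternatives, key=scalarize)
```

The obvious caveat, which the paper acknowledges by calling this a simplification, is that the "best" answer depends entirely on the chosen weights; a different weighting can flip the ranking.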

The model was applied to help the water district plan for its future needs. A range of future demand scenarios were run to determine the potential magnitude and frequency of shortages. The modelers added various new supply-side and demand-side components to the system to see which components would best alleviate their water shortages. In conclusion, the authors say that a LP or network formulation would’ve both been good approaches, but because of a few details, such as a binary constraint and specific operating details in the diversion dam, the LP was the preferred choice.

Discussion:
The paper is an interesting case study in applying linear programming to help find water resources solutions. However, the paper didn’t seem to be using any new methodologies, nor did it give more than a few sentences on why a linear program was better than a network formulation. I’m a little surprised this paper was published in such a major journal since the work seems insignificant, at least to me. I did almost the exact same work for my undergraduate research and I didn't think it was unique enough to be published. However, this is a slightly older publication and perhaps this kind of application of linear programming was a new methodology at that time.

The model was described as being highly simplified in order to save on computing time. If I were to continue research on this topic today, I would make the model more complex for accuracy. Computing time would not be as much of an issue in today’s powerful computing world. Complexities that I would be interested in adding in would include a hydrologic model to assess the impacts of rainwater harvesting.

Sunday, January 25, 2009

Liebman 1976

Liebman, Jon (1976) “Some Simple-Minded Observations on the Role of Optimization in Public Systems Decision-Making,” Interfaces 6(4) pp. 103-108.

Summary:
Public-sector decision-making requires a different approach than private-sector decision-making. One example of successful problem-solving is with the improvement of urban firefighting organizations. A second area of applied modeling is with river basin quality management. The first example was successful and helpful to the public system, while the second did not reach useful enough conclusions to make improvements in the system.

In most initial applications of operations research, it was easy to find solutions since the problems had clear-cut objectives. Nowadays, as techniques in operations research become more advanced, more complex problems can be addressed and things are no longer simple. Especially in public-sector decision-making, the goals, desired results, and intended and unintended side effects are difficult to determine. Goals are usually set by Congress regardless of the many differences in public opinion, but it is uncertain whether the goals set by Congress are always applicable and best to model. Another thing to consider is that models do not have the complexities of a system built into them the way humans do, for instance the complexities of morality and human values. Private-sector decision-making can avoid some of this difficulty because goals in the private sector are often more clearly defined and shared by all members of the organization. This isn't so in the public sector. In the case of the firefighting operations optimization, the goals are clear-cut with little argument. In the case of river basin quality management, the goals are far from clear and the case is complicated.

With complicated problems and complicated goals, these new optimization tools are best used when they identify a group of alternatives for the decision maker to select from rather than choosing a single alternative as best. There is a certain amount of understanding gained by building a model and playing around with its parameters. This understanding might be communicated through a sensitivity analysis, but in order to gain the full understanding of the system it is best for a decision-maker to get in and work with the model him or herself.

Since a model has unclear goals, different modelers might build different models for the same problem based on their own assumptions and biases. It is better to supply a decision maker with results from a number of models instead of just one.

When a modeler builds a complex model, a lot of knowledge can be gained through that development, but the complexities do not benefit the decision maker much. It is better that the developer takes what he or she learns from the process of building a complex model and uses that to build a more simple model to give to the decision maker. The process of building a model helps the modeler understand the situation much better than if the modeler were to try to pick up on a model that was built by another person. Since this is the case, it is not a bad idea for a new analyst to rebuild a model from scratch rather than build upon a model previously built.

In conclusion, the article urges modelers to be aware of when and how modeling tools can appropriately be applied to public-sector decision-making.

Discussion:
This paper does the important work of informing modelers to be wary of trying to model complex public-sector problems with the expectation that they can find a single optimum solution. This paper was fairly short in length and did not cover in depth actual cases where public-sector modeling struggled to find answers. The article would’ve been strengthened if the author had delved into the details of the river basin quality management issues that modelers have faced in the past.

If this were my research, it would be interesting to experiment with giving a complicated model to different modelers and see the differences in the models they produce. Also, as further research I may outline in a paper the methodology behind providing a range of alternatives rather than selecting an optimum, since this is in line with my interests in systems research.

Wednesday, January 21, 2009

Assignment #0

My name is Michelle Hollingsworth.


I am enrolled in CVEN665 because I enjoy systems thinking in a Water Resources context and I'd like to improve my ability to take real-world systems and put them in programming terms. Most people hold mental models of systems, but it requires a developed skill to recognize a system and translate it into mathematical terms. I'm looking forward to the class project as an opportunity to practice model formulation and evaluation.


Critical thinking is when we question the model we have of a system and analyze why the system behaves and interacts in the manner that it does. Through that, we should expand our thinking to all possibilities of behavior and interaction, evaluate the quality of those possibilities, and then redefine the system with a new, more reliable model.