In this article, the authors introduce the problem of designing sanitary sewer systems and review the methods other researchers have used to attack such design problems. They then describe the motivation for using a combined GA-QP model. A regular GA might be inappropriate since its efficiency drops as the number of variables grows, and "many of the randomly produced alternatives are unreasonable or inappropriate." The authors describe how an LP can help improve a GA, but for their purposes they don't want to reduce their nonlinear program to an LP. Instead, they transform their functions into quadratic forms and solve them using quadratic programming (QP).
The paper defines the GA genes (the decision variables) to be pipe diameters and pumping station locations, and describes the constraints in detail. The fitness function is the inverse of the cost, so the higher the fitness, the better. The two decision variables for the QP are pipe slope and buried depth.
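To make the encoding more concrete, here is a minimal Python sketch of the outer GA loop as I understand it. The gene layout (one commercial diameter per pipe plus a 0/1 flag per candidate pump station) follows the description above, but evaluate_cost, the diameter list, and all problem sizes are made-up placeholders; in the paper the cost of each candidate would come from solving the inner QP for pipe slopes and buried depths.

import random

DIAMETERS = [0.2, 0.25, 0.3, 0.4, 0.5, 0.6]   # assumed commercial pipe sizes (m)
N_PIPES, N_PUMP_SITES, POP_SIZE, GENERATIONS = 10, 3, 50, 200

def random_individual():
    return ([random.choice(DIAMETERS) for _ in range(N_PIPES)],
            [random.randint(0, 1) for _ in range(N_PUMP_SITES)])

def evaluate_cost(diameters, pumps):
    # Placeholder cost model; in the paper this is where the QP over pipe
    # slopes and buried depths would be solved for the fixed layout.
    return sum(1000 * d for d in diameters) + 50000 * sum(pumps)

def fitness(individual):
    diameters, pumps = individual
    return 1.0 / evaluate_cost(diameters, pumps)   # higher fitness = cheaper design

def crossover(a, b):
    cut = random.randint(1, N_PIPES - 1)
    return (a[0][:cut] + b[0][cut:], random.choice([a[1], b[1]]))

def mutate(individual, rate=0.05):
    diameters, pumps = individual
    diameters = [random.choice(DIAMETERS) if random.random() < rate else d for d in diameters]
    pumps = [1 - p if random.random() < rate else p for p in pumps]
    return (diameters, pumps)

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)        # rank by fitness (1/cost)
    parents = population[:POP_SIZE // 2]              # keep the cheaper half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best cost found:", evaluate_cost(*best))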
It was a very interesting point that the best design found by the optimization model doesn't even begin to consider many important factors, such as construction, geology, traffic impact, public preferences, and land availability. It may be an extremely iterative process to find a solution, determine its real feasibility (based on the factors listed above), throw that solution out, and repeat. As powerful as computers are at implementing optimization models, in the end any public-sector planning problem requires a large human component in the process.
Monday, April 27, 2009
Brill ED (1979) "Use of Optimization Models in Public-Sector Planning," MANAGEMENT SCIENCE, 25
This article written by Brill in 1979 addresses the trend towards using optimization models in solving public policy problems.
Brill argues that optimization models used to help find solutions to public policy problems "have serious shortcomings," which include failure to consider equity or the distribution of income, as well as difficulty in estimating the benefits and costs of public programs. Brill says public programs' benefits and costs are better kept in their original units rather than translated into common units in an attempt to judge among completely different types of objectives.
Brill notes that multi-objective programming was increasingly being used to address these issues, but he makes the point that all objectives need to be clearly identified in order to truly distinguish between inferior and noninferior solutions, as Figure 2 illustrates well.
Brill proposes that a good way to involve optimization models in public-sector planning is to (1) use them to stimulate creativity and (2) create a man-machine interactive process. He describes this man-machine interactive process only briefly here, but it sounds like he researched the idea further and published on it soon afterward. I found interesting Brill's point that optimization models should generate alternatives that meet minimal requirements and are different from one another.
I'd really like to see a case study of these ideas, and how optimization models can assist public planners in generating creative solutions. It seems to me that a human would easily be just as creative as a computer, so I'd like to see an example showing otherwise.
Tuesday, April 14, 2009
Shiau and Wu (2006)
Shiau JT, Wu FC (2006) Compromise programming methodology for determining instream flow under multiobjective water allocation criteria, JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, 42(5), pp. 1179-1191
Increasing demand for water for human consumption reduces the water available as instream flow to sustain the natural ecology of a stream or river, which creates a multiple-objective problem. The authors use the Indicators of Hydrologic Alteration (IHA) method and the Range of Variability Approach (RVA) to quantify the effects of a diversion weir on a stream in Taiwan.
The objectives of this problem are to meet human water demands and to avoid harming the environment as measured by the IHA and RVA. The method the authors use to solve this multi-objective program is called "compromise programming," which identifies the optimal solution as the one with the shortest distance to a theoretical ideal point.
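As a small illustration of the distance idea, here is a Python sketch that scores candidate instream flows by their weighted, normalized distance from an ideal point and picks the closest one. The alternative flows, their objective values, and the exponent p are all invented for illustration and are not the paper's numbers; only the equal weighting of the two objectives reflects the paper.

def compromise_distance(objectives, ideal, worst, weights, p=2):
    # Weighted L_p distance from the ideal point, with each objective
    # normalized by its ideal-to-worst range.
    total = 0.0
    for f, f_star, f_worst, w in zip(objectives, ideal, worst, weights):
        total += (w * abs(f_star - f) / abs(f_star - f_worst)) ** p
    return total ** (1.0 / p)

# Hypothetical alternatives: flow (m3/s) -> (water-shortage index, hydrologic alteration)
alternatives = {9.5: (0.10, 0.60), 26.0: (0.18, 0.35), 40.0: (0.35, 0.20)}
ideal, worst, weights = (0.0, 0.0), (1.0, 1.0), (0.5, 0.5)   # equal weights for both objectives

best_flow = min(alternatives,
                key=lambda q: compromise_distance(alternatives[q], ideal, worst, weights))
print("Preferred instream flow (m3/s):", best_flow)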
Their study really seemed to be two-part: first, they demonstrate that the current instream flow of 9.5 m3/s is not sufficient; then they find that the optimal flow of 26 m3/s better protects the environment while still limiting water shortages.
I had a difficult time understanding this paper since there were so many different variables being thrown around, and I didn't have the patience to memorize or look up each variable as I came across it. I don't know if they could have communicated their work in an easier way, but it definitely hindered getting their point across.
It was nice to see an example of compromise programming being used, and I personally liked that the ecological needs were weighted the same as the human needs (since human needs will increase when ecological needs are ignored).
Tuesday, March 31, 2009
Neelakantan and Pundarikanthan (2000)
Neelakantan TR, Pundarikanthan NV (2000) “Neural network-based simulation-optimization model for reservoir operation,” JOURNAL OF WATER RESOURCES PLANNING AND MANAGEMENT, 126(2) pp. 57-64
The objective of this study was to optimize the operating policy of a reservoir system in Chennai, India. The study considered using a conventional simulation model and an artificial neural network (ANN), and determined that the ANN saved many days of computing time. The conventional simulation model was used to generate scenarios to train the ANN. The study used a Hooke and Jeeves optimization algorithm: when the Hooke and Jeeves search generates a candidate operating policy and feeds it to the ANN, the ANN returns the objective function value as its only output, which is then fed back into the Hooke and Jeeves search.
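To picture how the pieces fit together, here is a simplified Python sketch of a Hooke-and-Jeeves-style pattern search (exploratory moves with step shrinking) that repeatedly queries a surrogate objective. ann_objective is a made-up smooth stand-in for the trained ANN, and the policy parameters, starting point, and step sizes are illustrative only.

def ann_objective(policy):
    # Stand-in for the trained ANN: returns the objective-function value
    # for a candidate operating policy (here just a toy quadratic).
    return (policy[0] - 0.6) ** 2 + (policy[1] - 0.3) ** 2

def pattern_search(f, x0, step=0.25, shrink=0.5, tol=1e-4):
    base, fbase = list(x0), f(x0)
    while step > tol:
        trial, ftrial = list(base), fbase
        for i in range(len(trial)):                 # exploratory moves, one variable at a time
            for delta in (+step, -step):
                candidate = list(trial)
                candidate[i] += delta
                fc = f(candidate)
                if fc < ftrial:
                    trial, ftrial = candidate, fc
                    break
        if ftrial < fbase:                          # improvement found: move the base point
            base, fbase = trial, ftrial
        else:                                       # no improvement: shrink the step size
            step *= shrink
    return base, fbase

policy, value = pattern_search(ann_objective, [0.0, 0.0])
print("Best operating-policy parameters:", policy, "objective:", value)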
The discussion of the trial scenarios and their results made very little sense to me. I thought the standard operation policy (SOP) was what the reservoir managers were already doing, yet the SOP generates a better objective function value than the suggested operating policies. I don't see how this is an improvement or a success in the study.
I had a difficult time reading the article because I missed some critical information at the beginning, which left me confused for the rest of the article. The authors only wrote a single sentence mentioning that a conventional simulation model was created in order to train the ANN, and that the ANN was used for its quicker solve times. I missed that sentence and then went through the whole article wondering why the authors were neglecting local catchment inflows, transmission losses, and percolation losses, when those are (I think) irrelevant in an ANN. The full discussion of why a conventional model was necessary and why an ANN was chosen was saved until the very end; once I read it, I could go back and understand much more of what was going on earlier in the article. In conclusion, the only fault of the article that stuck out to me was the poor ordering and clarity of the presentation.
Wednesday, March 18, 2009
Perez-Pedini et al 2005
Perez-Pedini C, Limbrunner JF, Vogel RM (2005) “Optimal location of infiltration-based best management practices for storm water management,” JOURNAL OF WATER RESOURCES PLANNING AND MANAGEMENT, 131(6) pp. 441-448
This article examined a methodology for finding the optimal placement of infiltration-based stormwater BMPs. The goal was to reduce the peak runoff flow for budgets of 25 to 400 BMPs (the results form a Pareto frontier with the number of BMPs implemented on one axis and the percent reduction in peak flow on the other).
The researchers modeled a developed watershed as a grid of 4,533 hydrologic response units (HRUs), each of which could be modeled as having a BMP implemented or not. If a BMP was implemented in an HRU, the curve number (CN) of that HRU was decreased by 5 units. The researchers planned to run a genetic algorithm to decide where to place the BMPs. However, testing all 4,533 HRUs for possible BMP implementation was too large a decision space for the GA to handle, so the researchers had to reduce it. They limited candidate BMP sites to HRUs with low permeability (high CN) and HRUs in close proximity to a river. The first limitation makes sense because the most effective BMP would be one that treats an impervious surface, and the second makes sense because an HRU close to a river will likely have a larger volume of water running over it than an HRU at a high point in the watershed.
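Here is a small Python sketch of that decision-space reduction and of the binary chromosome the GA would then operate on. The HRU records, the CN and distance thresholds, and the example chromosome are all made up; only the 5-unit CN reduction per BMP comes from the paper.

# Filter the full HRU grid down to candidate BMP sites: low permeability
# (high curve number) and close to the stream network.
hrus = [
    {"id": 1, "curve_number": 92, "distance_to_river_m": 120},
    {"id": 2, "curve_number": 70, "distance_to_river_m": 60},
    {"id": 3, "curve_number": 88, "distance_to_river_m": 900},
    {"id": 4, "curve_number": 95, "distance_to_river_m": 40},
]
CN_THRESHOLD, RIVER_DISTANCE_M, CN_REDUCTION = 85, 300, 5

candidates = [h for h in hrus
              if h["curve_number"] >= CN_THRESHOLD
              and h["distance_to_river_m"] <= RIVER_DISTANCE_M]

# The GA chromosome is one 0/1 gene per candidate HRU; a 1 means "place a BMP
# here," which the hydrologic model represents by lowering that HRU's CN by 5.
chromosome = [1, 0]   # example: BMP at the first candidate only
effective_cn = {h["id"]: h["curve_number"] - CN_REDUCTION * gene
                for h, gene in zip(candidates, chromosome)}
print("Candidate HRU ids:", [h["id"] for h in candidates])
print("Effective curve numbers:", effective_cn)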
The results of the study show that the first few BMPs implemented give a large reduction in peak flow per BMP, and that as the number of BMPs increases, the reduction in peak flow per BMP decreases. This makes sense, since the GA should target the very best locations first, with the less effective HRU locations selected only as the BMP budget grows.
I was surprised that the article mentioned the GA didn't find the optimal solution, as the authors discovered when they played around with the results a little. If they say their results are acceptable because they are near-optimal, I'll believe them, but I would have liked to see this demonstrated a little more in the paper. Perhaps that is for another paper. Another component I would have liked to see expanded was their study of possible commonalities among the selected HRUs. They concluded that there were no dominating characteristics that could identify good HRUs without use of the GA, and I would be interested to read an article that examined this in detail. I suppose that since they found no dominating characteristics, the results might not be interesting enough to warrant a paper of their own.
Sunday, March 1, 2009
Behera P, Papa F, Adams B (1999) "Optimization of Regional Storm-Water Management Systems," Journal of Water Resources Planning and Management, 125
Detention ponds are useful engineering tools for mitigating the impacts of urban development, but land developers see ponds as a loss of developable land and an added cost. So when designing a pond, both economic and environmental goals need to be considered.
The paper demonstrates a methodology for designing a detention pond with the decision variables being the storage volume, pond release rate, and pond depth. The constraints of the model include quality constraints for pollution control and runoff control. The objective is to minimize cost, which includes the value of the land used and the construction cost.
There was a discussion on isoquants in the paper which I didn't understand at all.
In this paper, the authors design a methodology for optimizing a single pond and then apply it to the larger-scale problem of designing a multiple-catchment system. For the multiple-catchment case, they solve with dynamic programming.
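Below is a toy Python sketch of the dynamic-programming idea for the multi-catchment case: each catchment is a stage, each pond size is a discrete decision, and only a system-wide requirement constrains the total. The stage costs and the requirement are invented for illustration; in the paper each stage cost would come from the single-pond design model.

from functools import lru_cache

# cost[c][level]: cost of building catchment c's pond to a given control level (0-4)
cost = [
    [0, 10, 25, 45, 70],
    [0,  8, 20, 50, 90],
    [0, 12, 22, 40, 65],
]
REQUIRED_TOTAL = 6   # system-wide requirement on the sum of control levels

@lru_cache(maxsize=None)
def min_cost(stage, remaining):
    # Minimum cost of stages stage..end while still needing `remaining` units of control.
    if stage == len(cost):
        return 0 if remaining <= 0 else float("inf")
    return min(c + min_cost(stage + 1, max(0, remaining - level))
               for level, c in enumerate(cost[stage]))

print("Minimum system cost:", min_cost(0, REQUIRED_TOTAL))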
It is interesting to see that by not constraining each catchment to particular parameters, and instead restricting only the end result for the system as a whole, they were able to achieve a 13% reduction in cost.
Monday, February 23, 2009
Berry et al (2005)
Berry J, Fleisher L, Hart W, Phillips C, Watson J-P (2005) "Sensor Placement in Municipal Water Networks," Journal of Water Resources Planning and Management, 131(3), pp. 237-243
This article presents a methodology for determining sensor placement in municipal water networks using mixed-integer programming. The sensors are placed to detect the presence of contaminants, whether occurring accidentally or introduced intentionally.
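To make the kind of formulation concrete, here is a small p-median-style sensor-placement model written with the PuLP library. This is my own sketch in the spirit of the paper, not the authors' formulation: the scenario list, the impact matrix (exposure if a given contamination event is first detected at a given node), and the sensor budget are all invented illustration data.

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

scenarios = ["a1", "a2", "a3"]          # hypothetical contamination events
nodes = ["n1", "n2", "n3", "n4"]        # candidate sensor locations
impact = {                              # impact[a][j]: exposure if event a is detected at node j
    "a1": {"n1": 5, "n2": 20, "n3": 40, "n4": 60},
    "a2": {"n1": 30, "n2": 10, "n3": 15, "n4": 50},
    "a3": {"n1": 45, "n2": 35, "n3": 8, "n4": 12},
}
P_SENSORS = 2                           # sensor budget

prob = LpProblem("sensor_placement", LpMinimize)
s = LpVariable.dicts("place_sensor", nodes, cat=LpBinary)
x = LpVariable.dicts("detects", [(a, j) for a in scenarios for j in nodes], cat=LpBinary)

# Objective: minimize total impact across scenarios (expected impact if equally likely)
prob += lpSum(impact[a][j] * x[(a, j)] for a in scenarios for j in nodes)

for a in scenarios:
    prob += lpSum(x[(a, j)] for j in nodes) == 1      # each event is detected at exactly one node...
    for j in nodes:
        prob += x[(a, j)] <= s[j]                     # ...and only at a node that has a sensor
prob += lpSum(s[j] for j in nodes) == P_SENSORS       # place exactly P_SENSORS sensors

prob.solve()
print("Sensors at:", [j for j in nodes if s[j].value() == 1])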
The article states that this type of problem can be solved with a variety of modeling approaches that take advantage of the latest computing power, but that the certainty of the results just won't be as good as that of mixed-integer optimization techniques applied to a model with many simplifying assumptions. This was an interesting point, since in my last article review, of Lee et al. (1992), I commented (see below) that it might not be necessary to make so many assumptions for the sake of quick solutions, given today's technology and advancements in optimization techniques. Berry et al. answered my question by describing why simplifying assumptions are still worthwhile, even today.
After making these simplifying assumptions, the authors describe the equations they used to model the problem and the three data sets they used to demonstrate their methodology. The sensitivity of their results was interesting: it was low when few sensors were available to place in the network, high when a medium number of sensors was available, and low again when many sensors were available. I can see how sensitivity would be low with many sensors, since the distance from a sensor to any point in the network is small, but I would expect sensitivity to be large with very few sensors, since changing the location of a sensor would make big differences in the distances to certain nodes, especially the nodes with higher populations and demands. I suppose that's why we have computers assisting us with these problems, since they are too complex for a human to have an intuitive sense about. Or it's just me. :)
For future work this could be applied to techniques for cleaning up contamination. It would be interesting to see how a slightly different application in the same setting might change the results.