Monday, February 23, 2009

Berry et al (2005)

Berry, J., Fleischer, L., Hart, W., Phillips, C. and Watson, J.-P. (2005) “Sensor Placement in Municipal Water Networks”, Journal of Water Resources Planning and Management, 131(3), pp. 237-243

This article presents a methodology for determining sensor placement in municipal water networks using mixed-integer programming. The sensors are placed to detect contaminants, whether introduced accidentally or intentionally.

The article states that this type of problem can be solved with a variety of modeling approaches that take advantage of the latest computing power, but the certainty of the results will not be as good as that of mixed-integer optimization techniques, even though those techniques rely on many simplifying assumptions. This was an interesting point, since in my last article review, of Lee and Deininger (1992), I commented (see below) that it might not be necessary to make so many assumptions for the sake of quick solutions given today's technology and advancements in optimization techniques. Berry et al. answered my question by describing why simplifying assumptions are still worthwhile, even today.

After making these simplifying assumptions, the authors describe the equations they used to model the problem and the three data sets they used to demonstrate their methodology. The sensitivity of their results was interesting: it was low when few sensors were available to place in the network, high when a medium number of sensors was available, and low again when many sensors were available. I can see how there would be low sensitivity with many sensors, since the distance from a sensor to any point in the network is small, but I would expect the sensitivity to be large with very few sensors, since changing the location of a sensor would make big differences in distances to certain nodes, especially the nodes with higher populations and demands. I suppose that's why we have computers assisting us with these problems, since they are too complex for a human to have an intuitive sense about them. Or it's just me. :)
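To make the general structure of such a formulation concrete, here is a minimal sketch of a placement MIP in Python using the PuLP library. The paper doesn't publish code, so the node names, contamination scenarios, impact values, and sensor budget below are all invented for illustration; only the general shape (binary placement variables, detection variables, and an impact-minimizing objective) follows the kind of model the article describes.

```python
# Hypothetical sketch only: a p-median-style sensor placement MIP in the
# spirit of Berry et al. (2005). All data below are invented for illustration.
import pulp

nodes = ["A", "B", "C", "D"]          # candidate sensor locations
scenarios = ["s1", "s2"]              # contamination injection scenarios
# impact[s][n]: assumed damage if scenario s is first detected at node n
impact = {
    "s1": {"A": 10, "B": 4, "C": 7, "D": 12},
    "s2": {"A": 6, "B": 9, "C": 3, "D": 8},
}
budget = 2                            # maximum number of sensors

prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", nodes, cat="Binary")
detect = pulp.LpVariable.dicts(
    "detect", [(s, n) for s in scenarios for n in nodes], cat="Binary"
)

# Objective: minimize total impact over the scenarios
prob += pulp.lpSum(impact[s][n] * detect[(s, n)] for s in scenarios for n in nodes)

for s in scenarios:
    # each scenario is credited to exactly one detecting node
    # (a real model would add a dummy node to represent non-detection)
    prob += pulp.lpSum(detect[(s, n)] for n in nodes) == 1
    for n in nodes:
        # a scenario can only be detected at a node that holds a sensor
        prob += detect[(s, n)] <= place[n]

# Sensor budget constraint
prob += pulp.lpSum(place[n] for n in nodes) <= budget

prob.solve()
print("Sensors at:", [n for n in nodes if place[n].value() == 1])
```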

For future work, this methodology could be applied to techniques for cleaning up contamination. It would be interesting to see how a slightly different application in the same setting might change the results.

Sunday, February 15, 2009

Lee and Deininger 1992

Lee, B. H. and Deininger, R. A. (1992) “Optimal Locations of Monitoring Stations in Water Distribution Systems”, Journal of Environmental Engineering, 118(1), pp. 4-16

The EPA requires all water distribution authorities to test the water quality in their systems. Lee and Deininger have made an important contribution to the efficiency and effectiveness of this testing by providing and demonstrating an integer programming approach to it. This benefits the entire nation, as the improved efficiency saves taxpayers' and water users' money.

To set up the integer program, testing nodes need to be defined and assumptions need to be made. When a water sample is taken at a single tap, its quality is taken to represent the quality at the closest node. If d_i is the demand at that node and D is the total demand, then a single sample establishes the quality of the fraction d_i/D of the total demand. The quality at nearby nodes may also be inferred, depending on the hydraulics: if all the water at the sampled node came from an upstream node, then the water quality of that upstream node would also be known. Since water distribution systems are complex and the water at one node likely came from a combination of other nodes, the authors set up a method for deciding when the water quality of an upstream node can be assumed to be represented by a downstream node.
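Here is a minimal sketch of what a coverage formulation of this kind could look like in Python with PuLP. The demands, coverage sets, and number of stations are all invented for illustration; the coverage sets stand in for the authors' representativeness criterion (their 50% rule, discussed below), and only the overall shape of the model follows the paper.

```python
# Hypothetical sketch only: a demand-coverage integer program in the spirit
# of Lee and Deininger (1992). Demands and coverage sets are invented.
import pulp

nodes = [1, 2, 3, 4, 5]
demand = {1: 100, 2: 50, 3: 80, 4: 30, 5: 60}   # d_i for each node
# covers[j]: nodes whose quality a sample at j is assumed to represent
# (in the paper, node i would be included when at least 50% of the water
# sampled at j has passed through i)
covers = {1: {1}, 2: {1, 2}, 3: {1, 2, 3}, 4: {4}, 5: {4, 5}}
num_stations = 2                                 # sampling stations allowed

prob = pulp.LpProblem("monitoring_stations", pulp.LpMaximize)
station = pulp.LpVariable.dicts("station", nodes, cat="Binary")  # sample at j?
covered = pulp.LpVariable.dicts("covered", nodes, cat="Binary")  # i represented?

# Objective: maximize demand-weighted coverage, i.e. the fraction of the
# total demand D whose quality is known (dividing by D changes nothing)
prob += pulp.lpSum(demand[i] * covered[i] for i in nodes)

for i in nodes:
    # node i counts as covered only if some chosen station represents it
    prob += covered[i] <= pulp.lpSum(station[j] for j in nodes if i in covers[j])

prob += pulp.lpSum(station[j] for j in nodes) == num_stations

prob.solve()
print("Sample at nodes:", [j for j in nodes if station[j].value() == 1])
```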

I'm not entirely convinced that these assumptions can be made. The weakness of the article is that the authors make assumptions without any supporting argument for why they are acceptable. But perhaps these assumptions are intuitive to most readers, and I'm not familiar enough with the water quality testing industry to know. For example, the authors assume that if at least 50% of the sampled water has passed through a node, the sample is considered representative of that node (p. 8). I'd be interested in hearing others' opinions on whether they think this is an intuitively accurate assumption.

I'm sure that since 1992 many more complex models have been presented for how to efficiently and effectively monitor water quality across a large system, but by 1992 standards integer programming was an excellent and quick solution (17-27 s to solve!). My impression is that this kind of simplification is rarer nowadays, because engineers no longer need to simplify to that level just to save computing time; integer programming would only be used when it really is the most accurate model of a system. I'm trying to think of real-life applications where it would be used today, but I can't think of any. I'd be interested in hearing if anybody else can think of some applications.

Sunday, February 8, 2009

Hardin 1968

Hardin, G. (1968) “The Tragedy of the Commons”, Science, 162, pp. 1243-1248

The main motivation the author has for writing this paper is to convince his readers of the need for societal restrictions on family size. He discusses the strains of a growing population on the earth's resources. He also rebuts the idea that people will naturally regulate their own family size for the betterment of the community, and disagrees that subliminally coercing people into restricting family size is an adequate solution.

Hardin uses the example of the tragedy of the commons to explain how a rational player will make decisions that give him the advantage while the losses are spread out across society. This works in a rural community where the environment can absorb the losses. However, in an urban society the sum of the losses caused by each person's gain directly affects many people in a major way.

Hardin also argues that, at the time of the article, a lot of emphasis was placed on creating an environment of shame for people who did not act in the best interests of the community. For example, he says a common phrase of the day was "responsible parenthood," which implies that a large family is irresponsible. This may be an effective way to motivate families to restrict their size, but Hardin believes that guilt and anxiety are never healthy for a society and are therefore a poor means to an end. He also argues that the individuals who ignore the messages of guilt and conscience will survive better, since they are thinking of themselves and not the community; natural selection will then take place, and those with a conscience will become extinct. He says this method of population control may work in the short term but won't be effective in the long run.

As an alternative, Hardin proposes a second method of population control as the best approach: a set of laws restricting family size for the good of all humanity. However, Hardin was missing an important third method of population control, which I discuss next.

Early in the article, Hardin mentions that "there is no prosperous population in the world today that has, and has had for some time, a growth rate of zero." Perhaps that was true in 1968, but it is definitely not true today. For example, Japan currently has zero native population growth, and Ukraine is at the top of the list for negative population growth. Overall, 20 countries in the world have zero or negative population growth. (http://geography.about.com/od/populationgeography/a/zero.htm)

If Hardin were an advocate of population control today, he would see that there is a third option that works in limiting family size and is currently at work in society. I'm not going to be so confident as to define that third option precisely, but I think it has something to do with individuals wanting to advance themselves to the point of not having time to create a family. In geography class I learned that societies tend to have a population explosion as they industrialize, and once they are fully industrialized the population growth levels off to around zero. However, if you look at the list of countries with negative or zero population growth, it doesn't seem that the reasons I've given would explain the phenomenon, since most of the countries are poorer Eastern European countries. I'd be interested in defining the factors that cause zero population growth and the methods necessary to replicate it as a research project (I know, it's not engineering based, but it sure is interesting).

Hardin, had he lived to see them (he died in 2003), would have had to reevaluate his arguments in light of these new trends in countries with decreasing populations. However, he might reach the conclusion that the most intelligent people are the ones limiting their family size, which would cause him to argue against this method, since the least intelligent would then be the ones reproducing in society. I assume this would be his opinion since Hardin was distinctly Darwinian in his thinking and argued for controlled breeding of the "genetically defective" in his 1966 biology textbook.

I'm not sure that Hardin would go so far as to advocate family size laws that discriminate against the unintelligent and favor those with good genes. Society cannot settle on a definition of good genes: some would say intelligence is the most important trait to pass on, while others would say artistic ability is just as important or more so. This makes the problem of population control and selective breeding a "wicked" problem in the same sense as the other societal problems we've discussed earlier in this course.

Monday, February 2, 2009

Atwood and Gorelick (1985)

Atwood, D. and Gorelick, S. (1985) “Hydraulic-Gradient Control for Groundwater Contaminant Removal”, Journal of Hydrology, 76(1-2), pp. 85-106

The authors propose to use linear programming to optimize the containment of a contamination plume in groundwater. The Rocky Mountain Arsenal was selected as the location for the case study due to its long history of groundwater contamination problems. The paper discusses some specific characteristics of the model area, including flow boundaries and transmissivity.

The modelers had to define the boundary of the contamination plume, expressed in terms of contaminant concentration, as well as the governing equations for how groundwater and contaminants move in the aquifer. These governing equations make the model nonlinear, so the modelers initially assumed the velocity gradient in order to linearize it. They then took two different paths to solving the model: either a global optimization that solved for the entire simulation period, or a sequential solution using the head distribution from the previous six months as the initial conditions for the next six months.

The objective function in the linear program was to minimize the sum of pumping and recharge rates. The paper then goes on to give the various equations used to model the defining characteristics of the pumping scenarios.
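To illustrate the shape of such a linearized formulation, here is a minimal sketch in Python using scipy.optimize.linprog. Everything numerical here is invented: the response matrix, the unstressed head differences, and the gradient targets are placeholder values, and the real model's constraints (well capacities, water balance, multiple stress periods) are omitted. The sketch only shows the core idea of minimizing total pumping and recharge subject to linearized head-difference (gradient) constraints.

```python
# Hypothetical sketch only: a linearized gradient-control LP in the spirit
# of Atwood and Gorelick (1985), solved with SciPy. The response matrix,
# unstressed head differences, and gradient targets are invented numbers.
import numpy as np
from scipy.optimize import linprog

# Decision variables: nonnegative pumping/recharge rates q at three wells.
# Objective: minimize the sum of the pumping and recharge rates.
c = np.ones(3)

# Linearized hydraulics: the head difference across boundary segment k
# responds linearly to the well rates, dh_k = sum_j R[k, j] * q_j + dh0_k.
R = np.array([[0.8, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.4, 0.7]])
dh0 = np.array([-0.5, -0.2, -0.4])  # head differences with no pumping
g = np.zeros(3)                     # require inward gradients: dh_k >= g_k

# Gradient control: R q + dh0 >= g.  linprog wants A_ub @ x <= b_ub,
# so flip the sign: -R q <= dh0 - g.
res = linprog(c, A_ub=-R, b_ub=dh0 - g, bounds=[(0, None)] * 3)
print("Rates:", res.x, "Total rate:", res.fun)
```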
The results of the study show that the global solution has a constant sum of pumping and recharge rates, while the sequential solution increases that sum over time.

This paper is interesting because it demonstrates that hydraulic gradient control makes a difference in containing a contamination plume. Also, by optimizing the procedure, the cleanup time can be reduced by up to two years, which can represent a huge savings in pumping energy costs. This study was performed in 1985, and much more could be done with the model using more recent technology. The next step in this research would be to make the model more complex by including the nonlinearities and to use newer optimization tools to solve it.