Archive for the ‘Article Highlights’ Category

Clinch River Featured Collection

Wednesday, August 20th, 2014

Jennifer Krstolic writes us, “The Featured Collection (August 2014 issue – KJL) looks fabulous! I didn’t send you, Ken, or Greg Cope a photograph of our ‘Collaboration Cake’. I was extremely pleased at how well this group of authors worked together and the quality of the manuscripts that resulted. When we met for the Clinch Powell Clean Rivers Initiative meeting, I ordered a cake so we could celebrate the release of the papers. I think you’ll like how it looks. :) –Jen”

And thanks to Jen and her colleagues for putting this together!

Quality of “The Lake”

Friday, December 13th, 2013

Early View article: “Geospatial and Temporal Analysis of a 20-Year Record of Landsat-Based Water Clarity in Minnesota’s 10,000 Lakes,” by Leif G. Olmanson, Patrick L. Brezonik, and Marvin E. Bauer.

This one is close to my heart. For just about every summer of his long life, my late father-in-law, from St. Paul, would gather his buddies for a fishing trip up to “The Lake.” (Doesn’t matter which one; they’re all called “The Lake.”) I have his records, so I know a lot has changed. This article quantifies some of the changes. (I also learned that Minnesota, “Land of 10,000 Lakes,” actually has ~12,000 lakes of 4 ha or larger.)

[Abstract] “A large 20-year database on water clarity for all Minnesota lakes ≥8 ha was analyzed statistically for spatial distributions, temporal trends, and relationships with in-lake and watershed factors that potentially affect lake clarity. The database includes Landsat-based water clarity estimates expressed in terms of Secchi depth (SDLandsat), an integrative measure of water quality, for more than 10,500 lakes for time periods centered around 1985, 1990, 1995, 2000, and 2005. Minnesota lake clarity is lower (more turbid) in the south and southwest and clearer in the north and northeast; this pattern is evident at the levels of individual lakes and ecoregions. Temporal trends in clarity were detected in ~11% of the lakes: 4.6% had improving clarity and 6.2% had decreasing clarity. Ecoregions in southern and western Minnesota, where agriculture is the predominant land use, had higher percentages of lakes with decreasing clarity than the rest of the state, and small and shallow lakes had higher percentages of decreasing clarity trends than large and deep lakes. The mean SDLandsat statewide remained stable from 1985 to 2005 but decreased in ecoregions dominated by agricultural land use. Deep lakes had higher clarity than shallow lakes statewide and for lakes grouped by land cover. SDLandsat decreased as the percentage of agriculture and/or urban area increased at county and catchment levels and it increased with increasing forested land.”
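
[For the curious: Landsat does not measure Secchi depth directly; it is estimated from band values calibrated against field readings. Below is a minimal sketch, in Python, of the band-ratio regression form this group has used in its Minnesota lake-clarity work. The coefficients are left as required inputs because the real values are fitted per image date; treat this as an illustration, not the paper’s code:]

```python
import numpy as np

def secchi_from_landsat(b1, b3, a, b, c):
    """Estimate Secchi depth (m) from Landsat TM band values.

    General band-ratio form, ln(SD) = a*(B1/B3) + b*B1 + c, commonly
    used in Landsat lake-clarity studies. The coefficients a, b, c
    are fitted against field Secchi readings for each image date;
    no universal defaults are given here.
    """
    return np.exp(a * (b1 / b3) + b * b1 + c)
```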

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]

PET models

Wednesday, December 11th, 2013

Early View article: “Do Energy-Based PET Models Require More Input Data than Temperature-Based Models? — An Evaluation at Four Humid FluxNet Sites,” by Josephine A. Archibald and M. Todd Walter.

[Abstract] It is well established that wet environment potential evapotranspiration (PET) can be reliably estimated using the energy budget at the canopy or land surface. However, in most cases the necessary radiation measurements are not available and, thus, empirical temperature-based PET models are still widely used, especially in watershed models. Here we question the presumption that empirical PET models require fewer input data than more physically based models. Specifically, we test whether the energy-budget-based Priestley-Taylor (P-T) model can reliably predict daily PET using primarily air temperature to estimate the radiation fluxes and associated parameters. This method of calculating PET requires only daily minimum and maximum temperature, day of the year, and latitude. We compared PET estimates using directly measured radiation fluxes to PET calculated from temperature-based radiation estimates at four humid AmeriFlux sites. We found good agreement between P-T PET calculated from measured radiation fluxes and P-T PET determined via air temperature. In addition, in three of the four sites, the temperature-based radiation approximations had a stronger correlation with measured evapotranspiration (ET) during periods of maximal ET than fully empirical Hargreaves, Hamon and Oudin methods. Of the three fully empirical models, the Hargreaves performed the best. Overall, the results suggest that daily PET estimates can be made using a physically based approach even when radiation measurements are unavailable.
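
[For the technically inclined, here is a rough Python sketch of the temperature-driven calculation the authors evaluate, assuming FAO-56-style solar geometry and a Hargreaves-Samani radiation estimate. Note that the net-radiation step below ignores longwave losses for brevity; the paper’s formulation is fuller:]

```python
import numpy as np

def priestley_taylor_pet(tmin, tmax, doy, lat_deg, alpha=1.26, k_rs=0.17):
    """Daily PET (mm/day) from Tmin/Tmax, day of year, and latitude."""
    phi = np.radians(lat_deg)
    # Solar geometry (FAO-56): earth-sun distance, declination, sunset angle
    dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)
    delta = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)
    ws = np.arccos(-np.tan(phi) * np.tan(delta))
    ra = (24 * 60 / np.pi) * 0.0820 * dr * (
        ws * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(ws))   # MJ m-2 day-1

    # Hargreaves-Samani shortwave estimate from the diurnal temperature range
    rs = k_rs * np.sqrt(tmax - tmin) * ra
    rn = (1 - 0.23) * rs   # crude net radiation; longwave losses ignored here

    # Priestley-Taylor: alpha * Delta/(Delta+gamma) * Rn / lambda
    tmean = (tmin + tmax) / 2
    es = 0.6108 * np.exp(17.27 * tmean / (tmean + 237.3))   # kPa
    slope = 4098 * es / (tmean + 237.3) ** 2                # kPa/degC
    gamma, lam = 0.066, 2.45                                # kPa/degC, MJ/kg
    return alpha * slope / (slope + gamma) * rn / lam       # mm/day

# Example: a mid-July day at 42 degrees N with a 12-28 degC range
print(round(priestley_taylor_pet(12.0, 28.0, 196, 42.0), 2))
```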

High-resolution radar rainfall

Tuesday, December 10th, 2013

Early View article: “Long-Term High-Resolution Radar Rainfall Fields for Urban Hydrology,” by Daniel B. Wright, James A. Smith, Gabriele Villarini, and Mary Lynn Baeck.

How good is NEXRAD at representing the rainfall that actually hits the ground? This question always comes up. Rain gages sit right on the ground but give point data at a limited number of locations. The radar measures reflectivity aloft — sometimes from fast-moving storms — at least a mile above the ground. This article sheds some light on the issues.

[Abstract] “Accurate records of high-resolution rainfall fields are essential in urban hydrology, and are lacking in many areas. We develop a high-resolution (15 min, 1 km2) radar rainfall data set for Charlotte, North Carolina during the 2001-2010 period using the Hydro-NEXRAD system with radar reflectivity from the National Weather Service Weather Surveillance Radar 1988 Doppler weather radar located in Greer, South Carolina. A dense network of 71 rain gages is used for estimating and correcting radar rainfall biases. Radar rainfall estimates with daily mean field bias (MFB) correction accurately capture the spatial and temporal structure of extreme rainfall, but bias correction at finer timescales can improve cold-season and tropical cyclone rainfall estimates. Approximately 25 rain gages are sufficient to estimate daily MFB over an area of at least 2,500 km2, suggesting that robust bias correction is feasible in many urban areas. Conditional (rain-rate dependent) bias can be removed, but at the expense of other performance criteria such as mean square error. Hydro-NEXRAD radar rainfall estimates are also compared with the coarser resolution (hourly, 16 km2) Stage IV operational rainfall product. Stage IV is adequate for flood water balance studies but is insufficient for applications such as urban flood modeling, in which the temporal and spatial scales of relevant hydrologic processes are short. We recommend the increased use of high-resolution radar rainfall fields in urban hydrology.”
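
[Mean field bias correction is simple at heart: scale each day’s radar field so the network-average radar accumulation matches the gages. A minimal sketch with made-up numbers, purely to illustrate the idea:]

```python
import numpy as np

def mean_field_bias(gage_daily, radar_daily):
    """Daily mean field bias: the sum of gage accumulations across the
    network divided by the sum of collocated radar accumulations.
    Multiplying the radar field by this factor removes the mean bias
    while preserving the radar's spatial pattern."""
    g, r = np.asarray(gage_daily), np.asarray(radar_daily)
    ok = (g >= 0) & (r > 0)   # skip missing gages and zero-radar pixels
    return g[ok].sum() / r[ok].sum()

gages = [10.2, 8.7, 12.1, 9.9]   # mm, daily totals at gage sites (made up)
radar = [8.9, 7.5, 11.0, 9.1]    # mm, radar pixels over those gages (made up)
bias = mean_field_bias(gages, radar)
print(round(bias, 3))   # ~1.12: radar underestimated by ~12% this day
# Corrected 15-min field for that day = bias * raw radar field
```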

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]

Estimating loads

Monday, December 9th, 2013

Early View article: “Load Estimation Method Using Distributions with Covariates: A Comparison with Commonly Used Estimation Methods,” by Sébastien Raymond, Alain Mailhot, Guillaume Talbot, Patrick Gagnon, Alain N. Rousseau, and Florentina Moatar.

[Abstract] Load estimates obtained using an approach based on statistical distributions with parameters expressed as a function of covariates (e.g., streamflow) (distribution with covariates hereafter called DC method) were compared to four load estimation methods: (1) flow-weighted mean concentration; (2) integral regression; (3) segmented regression (the last two with Ferguson’s correction factor); and (4) hydrograph separation methods. A total of 25 datasets (from 19 stations) of daily concentrations of total dissolved solids, nutrients, or suspended particulate matter were used. The selected stations represented a wide range of hydrological conditions. Annual flux errors were determined by randomly generating 50 monthly sample series from daily series. Annual and interannual biases and dispersions were evaluated and compared. The impact of sampling frequency was investigated through the generation of bimonthly and weekly surveys. Interannual uncertainty analysis showed that the performance of the DC method was comparable with those of the other methods, except for stations showing high hydrological variability. In this case, the DC method performed better, with annual biases lower than those characterizing the other methods. Results show that the DC method generated the smallest pollutant load errors when considering a monthly sampling frequency for rivers showing high variability in hydrological conditions and contaminant concentrations.
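
[For reference, method (1) above, the flow-weighted mean concentration approach, is easy to sketch. This is my own illustration of the general idea, not the authors’ code:]

```python
import numpy as np

def fwmc_annual_load(c_sample, q_sample, q_daily):
    """Annual load via the flow-weighted mean concentration method:
    the discharge-weighted mean of the sampled concentrations is
    scaled by the volume from the full daily-flow record.

    c_sample : concentrations at sampling dates (mg/L)
    q_sample : streamflow at sampling dates (m3/s)
    q_daily  : streamflow for all days of the year (m3/s)
    Returns the load in tonnes/year.
    """
    c, q = np.asarray(c_sample), np.asarray(q_sample)
    fwmc = (c * q).sum() / q.sum()                # mg/L, i.e., g/m3
    volume = np.asarray(q_daily).sum() * 86400    # m3 over the year
    return fwmc * volume * 1e-6                   # g -> tonnes
```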

Hydrologic landscape classification

Friday, December 6th, 2013

Early View article: “Use of Hydrologic Landscape Classification to Diagnose Streamflow Predictability in Oregon,” by Sopan D. Patil, Parker J. Wigington Jr., Scott G. Leibowitz, and Randy L. Comeleo.

Models in the earth sciences, by definition, provide a simplified representation of real-world processes and phenomena. For models in hydrology, the water balance concept is the fundamental principle through which the various fluxes of water within a catchment are connected and organized. Research has also shown that there are limits to the physio-climatic conditions across which hydrologic models can provide good streamflow predictions. For daily streamflow prediction over long periods, studies have shown that catchments in certain regions (e.g., arid climates, or areas with strong groundwater influence) are typically harder to predict. The goal of this article is to demonstrate that a hydrologically based landscape classification system can be used to characterize the conditions under which a hydrologic model is likely to perform well, and to understand why it performs poorly in certain environments.

[How the authors did this is best explained in their abstract:] “We implement a spatially lumped hydrologic model to predict daily streamflow at 88 catchments within the state of Oregon and analyze its performance using the Oregon Hydrologic Landscape (OHL) classification. OHL is used to identify the physio-climatic conditions that favor high (or low) streamflow predictability. High prediction catchments (Nash-Sutcliffe efficiency (NS) > 0.75) are mainly classified as rain dominated with very wet climate, low aquifer permeability, and low to medium soil permeability. Most of them are located west of the Cascade Mountain Range. Conversely, most low prediction catchments (NS < 0.6) are classified as snow-dominated with high aquifer permeability and medium to high soil permeability. They are mainly located in the volcano-influenced High Cascades region. Using a subset of 36 catchments, we further test if class-specific model parameters can be developed to predict at ungauged catchments. In most catchments, OHL class-specific parameters provide predictions that are on par with individually calibrated parameters (NS decline < 10%). However, large NS declines are observed in OHL classes where predictability is not high enough. Results suggest higher uncertainty in rain-to-snow transition of precipitation phase and external gains/losses of deep groundwater are major factors for low prediction in Oregon. Moreover, regionalized estimation of model parameters is more useful in regions where conditions favor good streamflow predictability.”
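
[The NS thresholds above refer to the Nash-Sutcliffe efficiency, the standard goodness-of-fit score for streamflow models. A quick reference implementation:]

```python
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    """Nash-Sutcliffe efficiency behind the NS > 0.75 / NS < 0.6
    predictability classes: 1 minus the ratio of model error variance
    to the variance of the observations. NS = 1 is a perfect fit;
    NS <= 0 means the model does no better than the observed mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1 - ((q_obs - q_sim) ** 2).sum() / ((q_obs - q_obs.mean()) ** 2).sum()
```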

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]

Ogallala depletion

Wednesday, December 4th, 2013

Early View article: “Agronomic Water Mass Balance vs. Well Measurement for Assessing Ogallala Aquifer Depletion in the Texas Panhandle,” by Constant Z. Ouapo, B.A. Stewart, and Robert E. DeOtte Jr.

[Abstract] The Ogallala Aquifer is depleting faster than it is being replenished. Interpretation of well data suggests that the water table in some counties is not declining, or not as much as might be expected in view of the amount of land being irrigated. As the Ogallala Aquifer in the Texas Panhandle receives almost no recharge, a possible explanation is that the current method of using well data for estimating the quantity of water remaining in the aquifer is underestimating water in storage. This study used an agronomic water mass balance approach to estimate how much water has been used for irrigation compared to amounts estimated by well data. The major finding was that in counties where irrigation well capacities have declined significantly but irrigation is continuing, there is likely more water in storage than presently estimated, but the amounts of water being used for irrigation in those counties are greater than estimated changes of water in storage. The proposed hypothesis for this difference is that there are mounds of water between wells that are not being accounted for; data are presented and discussed to support this conjecture.
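
[A back-of-envelope sketch of the two bookkeeping approaches being compared; the function, parameter names, and numbers below are entirely my own construction for illustration, not the authors’ worked example:]

```python
def agronomic_vs_well(irrigated_ha, crop_et_mm, eff_precip_mm,
                      app_efficiency, county_ha, dwt_m, specific_yield):
    """Compare irrigation water use inferred agronomically with the
    storage change inferred from well data. Returns both in million m3."""
    # Agronomic estimate: net crop demand scaled by application efficiency
    pumped = irrigated_ha * 1e4 * (crop_et_mm - eff_precip_mm) / 1000 / app_efficiency
    # Well-based estimate: water-table decline times specific yield
    storage_change = county_ha * 1e4 * dwt_m * specific_yield
    return pumped / 1e6, storage_change / 1e6

# Illustrative numbers only: 40,000 irrigated ha, 600 mm crop ET, 250 mm
# effective rain, 85% efficiency; 230,000 ha county, 0.5 m decline, Sy = 0.15
print(agronomic_vs_well(40_000, 600, 250, 0.85, 230_000, 0.5, 0.15))
```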

Scaling water use estimates

Tuesday, November 26th, 2013

Early View article: “A Multi-Scale Analysis of Single-Family Residential Water Use in the Phoenix Metropolitan Area,” by Yun Ouyang, Elizabeth A. Wentz, Benjamin L. Ruddell, and Sharon L. Harlan.

Single-family residential water use research is often limited to using aggregated water data due to, for example, confidentiality restrictions on household-scale data. Water use data that are accessible to researchers may be aggregated to areal units, such as census blocks, census block groups, census tracts, and cities. To match the water use data, similarly aggregated data for the factors considered to influence water use should also be used. This may lead to an ecological fallacy problem, which can occur when the statistical analysis and conclusions based on aggregated data are not applicable at the individual scale. There is little, if any, empirical analysis that assesses whether spatial scale may cause an ecological fallacy problem in residential water use research. The goal of this study is to address this issue by using the Phoenix area, Arizona as a case study.

Three panel datasets at different spatial scales are used in this study: household, census tract, and city/town. The authors found that the census-tract-scale data produce results similar to the household-scale data when econometric models are used to study the relationship between single-family residential water use and its determinants in Phoenix, Arizona. No significant ecological fallacy problem was identified by this comparative statistical analysis, which is based on the signs, magnitudes, and confidence intervals of the parameter estimates.
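
[A minimal sketch of the scale-comparison logic: fit the same model on individual records and on their tract aggregates, then compare the coefficients. The variable names are illustrative, not the study’s, and synthetic data stand in for the confidential household records:]

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "tract_id": rng.integers(0, 100, n),
    "income": rng.normal(60, 15, n),      # made-up household covariates
    "lot_size": rng.normal(700, 150, n),
})
df["water_use"] = 50 + 0.8 * df["income"] + 0.1 * df["lot_size"] + rng.normal(0, 20, n)

def fit(d):
    X = sm.add_constant(d[["income", "lot_size"]])
    return sm.OLS(d["water_use"], X).fit()

hh = fit(df)                                                  # household scale
tract = fit(df.groupby("tract_id").mean(numeric_only=True))   # aggregated scale

# An ecological-fallacy warning sign would be coefficients that flip sign
# or fall outside each other's confidence intervals across scales.
print(pd.concat({"household": hh.params, "tract": tract.params}, axis=1))
```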

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]

Stream restoration tested

Thursday, November 21st, 2013

Early View article: “Creating False Images: Stream Restoration in an Urban Setting,” by Kristan Cockerill and William P. Anderson Jr.

This article takes a critical before-and-after look at stream restoration in a case study of Boone Creek in North Carolina. The authors examine how well stream conditions, publicly stated project goals, and project implementation align. There are some good lessons to be learned from this study.

The experience on Boone Creek echoes much of the existing knowledge about stream restoration. The case study reveals little coordination among restoration plans and projects, even though all of them were on one small creek. These projects also showed disconnects among what pre-restoration monitoring data suggested the problems on Boone Creek were, what the stated restoration goals have been, and what has actually been implemented.

What these projects have accomplished is to protect the built environment and promote positive public perception. The authors argue that these disconnects among publicized goals for restoration, the implemented features, and actual stream conditions may create a false image of what an ecologically stable stream looks like and therefore perpetuate a false sense of optimism about the feasibility of restoring urban streams.

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]

Arc StormSurge

Monday, October 28th, 2013

Early View article: “Arc StormSurge: Integrating Hurricane Storm Surge Modeling and GIS,” by Celso M. Ferreira, Francisco Olivera, and Jennifer L. Irish.

Here’s a timely article on using a new piece of technology.

Historical records of storms are too short and too sparse to support reliable statistical predictions of hurricane surge levels; thus, numerical analysis is used for simulating and predicting flooding in coastal areas. In recent years, improvements in the understanding of the physics of storm surge processes have led to the development of computationally intense hydrodynamic models capable of estimating hurricane flood elevations. However, developing the input to these models requires a significant amount of data and conversion to a model-specific format, and, usually, the model output is not in a format ready for interpretation and for conveying the findings to the public and decision makers. In this context, geographic information systems (GIS) can play an important role in pre- and post-processing spatial information and supporting input/output visualization.

This article discusses the development of an ArcGIS™ data model and a set of tools for the coupled ADvanced CIRCulation and unstructured-grid version of Simulating WAves Nearshore (SWAN+ADCIRC) hydrodynamic and wave models, respectively. An automated file conversion between ArcGIS™ and the model formats eases the preparation of the input files. Visualization of the results is accomplished through maps generated automatically with ArcGIS™. As part of this working framework, the authors propose the use of a geodatabase specifically designed to store the spatial information needed for modeling storm surges with SWAN+ADCIRC. An example application of their framework to the simulation of the storm surge of 1999’s Hurricane Bret in Corpus Christi, Texas, is also included to demonstrate the utility of Arc StormSurge.
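
[To give a flavor of the file conversion involved: ADCIRC meshes live in plain-text fort.14 files, which open with a title line and an element/node count line, followed by one “id x y depth” line per node. A minimal reader in Python, not the Arc StormSurge toolset itself:]

```python
# Sketch of the kind of conversion the framework automates: pulling
# node coordinates and depths out of an ADCIRC fort.14 mesh so they
# can be loaded into a GIS point layer. Assumes the standard fort.14
# layout described above.
def read_fort14_nodes(path):
    with open(path) as f:
        f.readline()                                       # mesh title line
        n_elem, n_node = map(int, f.readline().split()[:2])
        nodes = []
        for _ in range(n_node):
            parts = f.readline().split()
            node_id = int(parts[0])
            x, y, depth = map(float, parts[1:4])
            nodes.append((node_id, x, y, depth))
    return nodes   # ready to write out as GIS points
```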

[Please note: I have quoted and paraphrased freely from the article, but the interpretation is my own.]