Attachment 13
Responses to Comments
RESPONSES TO COMMENT SET 1: Utility Water Act Group
Thank you for the detailed comments.
EPA extended the comment period by 30 days in order to accommodate review of supporting materials.
See Section 2a of Part A of this ICR for a discussion of the purpose of the study.
Again, see Section 2a of Part A of this ICR for a detailed discussion of the purpose of the study.
EPA recognizes that hypothetical bias is a potential concern in stated preference (SP) surveys and takes this concern seriously. In general, SP methods have “been tested and validated through years of research and are widely accepted by federal, state, and local government agencies and the U.S. courts as reliable techniques for estimating nonmarket values” (Bergstrom and Ready 2009, p. 26). A recent meta-analysis of the stated preference literature also concludes that hypothetical bias may not always be a significant concern (Murphy et al. 2005).
To reduce the potential for hypothetical bias in this survey, EPA has consulted with experts and drawn from the peer-reviewed literature to address it in the survey design. For example, the survey explicitly incorporates elements that mitigate hypothetical bias, such as reminders about budget constraints (akin to the cheap talk language in Cummings and Taylor 1999; List 2001). These design features have been shown to minimize hypothetical bias in experimental settings. The text used in this survey has undergone thorough testing with participants in focus groups and one-on-one interviews. EPA believes that the steps taken during survey development and testing have largely mitigated the potential for hypothetical bias. See Section 2d of Part B of this ICR for more information on how we address hypothetical bias.
Similarly, EPA recognizes the potential for households to exhibit yea-saying and to overstate or understate their true WTP in order to influence decisions informed by survey data. Survey and study design choices can mitigate yea-saying. The use of a mail survey rather than a face-to-face interview has been shown to decrease the social pressure that may lead a respondent to provide an answer deemed desirable (Dillman 2000). This survey also employs a conjoint choice framework, in which respondents must consider the trade-offs between a status quo and two policy options. Respondents are asked to make a discrete choice among three unranked options rather than a simple yes or no. These options vary in the levels of five environmental attributes (plus cost). In this choice experiment framework, yea-saying and strategic responses have been shown to be less prominent (Blamey and Bennett 2001; Collins and Vossler 2009).
EPA also recognizes the potential for non-response bias and the impacts it could have on the data analysis. First, EPA is taking steps to obtain the highest possible response rate, thereby mitigating non-response bias. Specifically, EPA is using focus group-tested design choices to encourage participation. EPA is also following the Dillman tailored design method (Dillman 2008) for mail surveys, which includes an introduction letter preceding the survey, a reminder postcard, a second mailing of the survey, and a reminder letter following the second survey.
EPA will also administer a non-response follow-up survey (Attachment 11) in both the pre-test and the full survey in order to examine whether respondents are systematically different from non-respondents (see OMB 2006). In the non-response follow-up survey, households that do not return the survey will be randomly sampled to receive a short questionnaire by priority mail. The questionnaire will elicit basic demographic information and include a few short questions about awareness of the issues and the reasons the household did not complete the survey. Responses to these questions will be used to examine whether respondents are systematically different from non-respondents. See Section 2c of Part B of this Information Collection Request for a description of the non-response follow-up survey.
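For illustration only, one standard way to carry out such a comparison is a difference-in-means (or difference-in-proportions) test on each characteristic collected in the follow-up questionnaire; the notation below is a sketch and not the exact procedure described in Section 2c of Part B:

\[ t = \frac{\bar{x}_R - \bar{x}_N}{\sqrt{s_R^2/n_R + s_N^2/n_N}} \]

where \(\bar{x}_R\) and \(\bar{x}_N\) are the means of a given characteristic for respondents and non-respondents, \(s_R^2\) and \(s_N^2\) are the corresponding sample variances, and \(n_R\) and \(n_N\) are the sample sizes. Statistically significant differences would signal the need to adjust the analysis (for example, through weighting) for non-response.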
In addition, EPA includes debriefing questions at the end of the survey to identify respondents who might believe that protecting the environment is important no matter the cost. Sensitivity analysis will be used to examine if and how responses to these debriefing questions influence the results. Section 5b of Part B of this ICR provides a detailed discussion.
It is impossible to know the magnitude of nonuse values prior to conducting this study. While information is available in Bockstael, McConnell, and Strand (1989) on the potential value of water quality improvements in the Watershed, that study is based on a small sample of Bay-area residents and provides limited information on the broader set of benefits attributable to water quality improvements.
Standard survey development protocols have been used to develop the survey. See Section 3c of Part A for a discussion of background information.
In response to peer review comments from academic experts in stated preference methods, EPA is now modeling willingness to pay only for improvements in Bay water clarity, striped bass, blue crab, and oyster populations, and the quality of lakes in the watershed. This was previously referred to as the “endpoint” version of the survey. These attributes were chosen based on extensive focus groups and interviews as the environmental features that are most salient to the general public. Furthermore, EPA and NOAA models predict that these features will be affected by the TMDL. The stated preference survey outlined in the ICR does not estimate the benefits of the TMDL directly; rather, this survey is designed to value generic status quo and policy options that result in changes in the environmental attributes. As part of the experimental design, respondents are presented with hypothetical changes in these attributes and cost. In other words, the hypothetical levels associated with each of the attributes and costs in the survey vary across respondents (see Section 2d of Part B). This allows us to identify the parameters and estimate a range of values associated with different scenarios. The variation in costs across programs is not intended to reflect the costs of the TMDL, but rather the likely range of values respondents hold for the options, as found in extensive focus groups and interviews. The parameters estimated from respondents’ choices among these hypothetical scenarios will then be used to estimate the benefits of the TMDL incremental to the baseline.
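For illustration only, and not as the exact specification to be used (see Section 5b of Part B of this ICR for the econometric approach), choice data of this kind are commonly analyzed with a random utility model such as the conditional logit:

\[ U_{ij} = \beta' x_{ij} + \beta_c c_{ij} + \varepsilon_{ij}, \qquad \Pr(i \text{ chooses } j) = \frac{\exp(\beta' x_{ij} + \beta_c c_{ij})}{\sum_{k} \exp(\beta' x_{ik} + \beta_c c_{ik})} \]

where \(x_{ij}\) is the vector of environmental attribute levels shown to respondent \(i\) in option \(j\), \(c_{ij}\) is the household cost, and \(\varepsilon_{ij}\) is a random error term. Because the attribute and cost levels vary across respondents by design, the parameters \(\beta\) and \(\beta_c\) are identified, and a marginal willingness to pay for attribute \(m\) can be constructed as \(-\beta_m/\beta_c\).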
The survey does remind respondents to consider other things they may spend their money on, such as food and clothing, so that they fully consider their budget constraint before making choices. However, respondents are also reminded several times that all other factors (including employment) are held constant across options. In other words, the survey only assesses the value people hold for the attributes specified in the choice experiments. EPA believes that focusing on this subset of factors will lead to a conservative but more reliable estimate of total benefits. EPA proposes to administer three versions of the survey - an increasing baseline, a decreasing baseline, and a constant baseline - in order to estimate benefits of environmental improvements relative to a range of baseline scenarios.
EPA conducted 10 focus groups and 72 cognitive interviews with individuals within and outside the Watershed in order to test their level of understanding of the materials included in the survey (OMB Control Number 2090-0028). We used this standard survey design protocol to identify the most salient environmental endpoints that will be affected by the TMDL.
See Sections 2b and 5b of Part B of the ICR for the survey implementation and econometric analysis approach to be used in the survey project.
Again, EPA disputes the idea that the stated preference method cannot collect information with “quality, objectivity, utility, and integrity,” on the grounds that these methods are widely accepted as a valuable tool among those seeking to understand the benefits of changes to nonmarket goods. The use and nonuse willingness-to-pay estimates generated from this research will provide a more well-rounded evaluation of future pollution reduction programs in the Chesapeake Bay, contributing to the quality, objectivity, and integrity of information EPA will disseminate.
EPA appreciates UWAG’s attention to these details and can assure commenters that any errors within the experimental design have been corrected.
EPA believes this study will allow public values and opinions to be included in the decision-making process for the Chesapeake Bay. Using current econometric methods, this study will provide unique, policy relevant information about what, if any, further actions are called for in the Chesapeake Bay.
RESPONSES TO COMMENT SET 2: Coalition of 18 Interest Groups (C18)
2-1 A complementary study of the costs of the TMDL is being conducted by EPA’s Chesapeake Bay Program Office and will be issued by EPA after peer review is complete.
2-2 No response required.
2-3 No response required.
2-4 EPA recognizes that hypothetical bias is a potential concern in stated preference (SP) surveys and takes this concern seriously. In general, SP methods have “been tested and validated through years of research and are widely accepted by federal, state, and local government agencies and the U.S. courts as reliable techniques for estimating nonmarket values” (Bergstrom and Ready 2009, p. 26). A recent meta-analysis of the stated preference literature also concludes that hypothetical bias may not always be a significant concern (Murphy et al. 2005).
To reduce the potential for hypothetical bias in this survey, EPA has consulted with experts and drawn from the peer-reviewed literature to address it in the survey design. For example, the survey explicitly incorporates elements that mitigate hypothetical bias, such as reminders about budget constraints (akin to the cheap talk language in Cummings and Taylor 1999; List 2001). These design features have been shown to minimize hypothetical bias in experimental settings. The text used in this survey has undergone thorough testing with participants in focus groups and one-on-one interviews. EPA believes that the steps taken during survey development and testing have largely mitigated the potential for hypothetical bias. See Section 3(b) of Part A of this ICR for more information on how we address hypothetical bias.
EPA recognizes the potential for non-response bias and the impacts it could have on the data analysis. First, EPA is taking steps to obtain the highest possible response rate, thereby mitigating non-response bias. Specifically, EPA is using focus group-tested design choices to encourage participation. EPA is also following the Dillman tailored design method (Dillman 2008) for mail surveys, which includes an introduction letter preceding the survey, a reminder postcard, a second mailing of the survey, and a reminder letter following the second survey.
EPA will also administer a non-response follow-up survey (Attachment 11) in both the pre-test and full survey in order to examine whether or not respondents are systematically different from non-respondents (see OMB 2006). In the non-response follow-up survey, households that do not return the survey will be randomly sampled to receive a short questionnaire by priority mail. The questionnaire will elicit basic demographic information as well as a few short questions regarding awareness and the reasons they did not complete the survey. Responses to these questions will be used to examine whether respondents are systematically different from non-respondents. See Section 2c of Part B of this Information Collection Request for a description of the non-response follow-up survey.
EPA agrees that it is challenging to measure complex environmental commodities. Standard survey design protocols were followed in developing the survey. As such, EPA conducted 10 focus groups and 72 cognitive interviews with individuals within and outside the Chesapeake Bay Watershed in order to test their level of understanding of the materials included in the survey (OMB Control Number 2090-0028). We used this standard protocol to identify the most salient environmental commodities that will be affected by the TMDL. Limiting the survey to those policy outcomes (i.e., water clarity, striped bass, oysters, blue crabs, and lake water quality) is conservative, but it means we can be confident in the benefits the survey does capture.
2-5 EPA believes the survey has practical utility, as required by the Paperwork Reduction Act. The results of the study will be made available to state and local governments, which may use them to better understand the preferences of households in their jurisdictions and the benefits they can expect as a result of meeting the TMDL. In addition, stakeholders and the general public will be able to use this information to understand the social benefits of improving water quality in the Chesapeake Bay Watershed, to accompany the cost information also being developed by EPA. EPA also believes that the survey meets OMB’s information quality guidelines. We agree that a number based on a poor quality survey is inferior to no number at all. Therefore, EPA is using standard survey design protocols in the design and implementation of the survey, including extensive focus group and interview testing, a pre-test, and a non-response follow-up analysis.
2-6 The attributes on the survey (i.e., water clarity, striped bass, oysters, blue crabs, and watershed lake conditions) were chosen because water quality and ecological modeling show that they will be affected by the nutrient and sediment reduction targets in the TMDL. EPA’s National Center for Environmental Economics has been working closely with water quality modelers in the EPA Chesapeake Bay Program Office and the Office of Research and Development to quantify the impact of the TMDL on the chosen attributes.
EPA has also been working closely with ecosystem modelers in NOAA’s Chesapeake Bay Office and the National Marine Fisheries Service’s Office of Habitat Conservation. Specifically, NOAA’s modelers have provided assistance with the ecosystem-based fishery models "Ecopath with Ecosim" and "Atlantis." These consultations have been instrumental in using the ecosystem-based fishery models to examine the ecological impacts of reducing nutrient and sediment loads to the Bay, and they will allow EPA to more accurately translate the values people place on the various attributes of the Chesapeake Bay highlighted in the survey into benefit estimates associated with the TMDL.
2-7 The survey is indeed framed in a way to elicit “willingness to pay for generic improvements in water quality.” This allows EPA to estimate the parameters for a range of policy outcomes, which will then be used to estimate a “benefits curve.” To allow for a range in outcomes, EPA describes conditions in 2025 with the current programs in place and has developed three survey versions with different hypothetical future baseline conditions (i.e., with no additional programs), in which environmental quality is increasing, decreasing, or constant, as described in Section 5b of Part B of this ICR. The benefits curve will be used to estimate the incremental benefits of the TMDL relative to the most accurate baseline as predicted by the water quality and ecological models developed by EPA and NOAA. Sensitivity analyses will be conducted on the results of the survey to examine the effect of uncertainty in future levels of the environmental conditions, under both the baseline (i.e., without the TMDL) and TMDL scenarios.
Flexibility in the baseline and policy outcomes is important in this case because the Chesapeake Bay TMDL allows for adaptive management and additional offsets if the required nutrient reductions are not being met. So as population in the watershed grows and land use patterns change, these survey data will still be useful in estimating the benefits of nutrient and sediment reductions in the Chesapeake Bay.
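As an illustrative sketch only (the welfare calculations actually used will follow Section 5b of Part B of this ICR), under a linear random utility specification commonly used for choice experiment data, the household-level value of moving from baseline attribute levels \(x^0\) to policy attribute levels \(x^1\) can be expressed as a compensating surplus

\[ CS = \frac{\beta'(x^1 - x^0)}{-\beta_c}, \]

where \(\beta\) are the estimated attribute parameters and \(\beta_c\) is the estimated cost parameter. The “benefits curve” then traces out this value over the range of baseline and policy outcomes predicted by the EPA and NOAA models, rather than being tied to a single fixed scenario.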
2-8 EPA recognizes that there are other programs and activities that will affect water quality in the Watershed. For this reason we have included an increasing baseline version of the survey to reflect the possibility that, absent new programs, conditions in the Watershed will improve under these existing programs.
2-9 Again, the improving baseline version of the survey captures this scenario.
2-10 See 2-9.
2-11 EPA agrees that improvements to lakes that are not in the Watershed should not be included in the survey. We have made several modifications to the survey instrument to make it clear that only lakes in the Watershed should be considered. First, we have enhanced the map at the beginning of the survey to identify major cities within and outside the Watershed and added the Finger Lakes to the map (clearly marked as being outside the Watershed). This helps orient respondents who are considering whether or not they “use” the Watershed (i.e., engage in recreation activities there). Second, we clearly describe the Watershed as including lakes and state that water bodies outside of the Watershed will not be affected by the programs. Finally, we include a follow-up question designed to test respondents’ understanding that conditions in lakes outside the Watershed will not be affected by the programs described in the survey.
2-12 In addition to providing an enhanced map of the Watershed we identify which sampled households are in the Watershed and which are not. Respondents will be told in the cover letter of the survey if their home address is inside or outside the watershed. See Attachments 5 and 6 for examples of the cover letters.
2-13 The survey scenarios were designed based on the goal of illustrating hypothetical but realistic policy scenarios that “span the range over which we expect respondents to have preferences, and/or are practically achievable” (Bateman et al. 2002, p. 259). In the survey these scenarios are framed as generic policies in order to estimate the range of benefits for water quality improvements. These benefit estimates will then be used to estimate the incremental benefits of the TMDL relative to the baseline (see response 2-7).
The survey provides examples of sources of nutrients, including fertilizers, livestock manure, and household wastewater. The list is not intended to be comprehensive. As stated above, different versions of the survey have different baseline assumptions, which will be used in the statistical analysis to reflect the fact that future conditions in the Bay, absent new programs, are uncertain. EPA agrees that this baseline uncertainty stems, at least partially, from the fact that the TMDL does not affect other sources of nutrients and sediments, including air deposition originating outside the watershed, sediments, and hurricanes and ocean currents.
2-14 While the sequence of implementation is unknown, the experimental design allows EPA to estimate benefits for a range of outcomes.
2-15 We added information to the survey to inform respondents that programs will be implemented over time, with full implementation occurring in 2025.
2-16 A separate analysis of the costs of implementing the TMDLs is being developed by EPA’s Chesapeake Bay Program Office and will be available upon the completion of peer review.
2-17 EPA agrees and a version of the survey with an increasing baseline is now included in the Information Collection Request.
2-18 EPA agrees and does not intend to combine the total monetized benefit results from this study with results from other studies, such as those that use revealed preference methods. The results from this study can be used to isolate nonuse values or used alone as a measure of total monetized benefits.
2-19 EPA carefully reviewed the survey instrument and has corrected typos.
2-20 Please see Section 2b of Part B of the ICR for the sampling methodology.
2-21 EPA is using state-of-the-science methods to assess the benefits of the TMDL for the Chesapeake Bay. As such EPA believes that the results will provide useful information to the public and decision makers on how society values improvements in environmental conditions in the Chesapeake Bay.
RESPONSES TO COMMENT SET 3: Food and Water Watch (FWW)
3-1 Thank you very much for the detailed comments. Stated preference surveys (i.e., surveys to measure WTP) have been used by a variety of federal agencies to assess the benefits of regulations and federal activities (see, for example, NOAA 2002; U.S. EPA 2008, 2009; U.S. Bureau of Reclamation 2012). The use of stated preference studies (i.e., WTP studies) is consistent with EPA’s peer-reviewed Guidelines for Preparing Economic Analyses (U.S. EPA 2010) and OMB’s guidance in Circular A-4 (OMB 2003). The use of a choice experiment design is consistent with standard practice in the peer-reviewed literature for valuing environmental resources (see Freeman 2003; Bennett and Blamey 2001; Louviere et al. 2000). The individual choices reflected in each household survey response are aggregated with other household responses to estimate a total value for the resource. The stated preference survey is not part of a water quality trading plan, nor will the results of the survey be used to develop a trading plan. The survey is designed to estimate the welfare impacts of water quality improvements and will have no bearing on how those improvements are achieved.
3-2 No response required.
3-3 We agree that the Bay is a complex resource and that estimating a total value is challenging. EPA conducted 10 focus groups and 72 cognitive interviews with individuals within and outside the Watershed. These standard protocols allowed for testing of individuals’ understanding of the materials included in the survey instrument. This approach was used to identify the most salient environmental resources that will be affected by the TMDL. Limiting the survey to those outcomes (i.e., water clarity, striped bass, oysters, blue crabs, and water quality of lakes in the watershed) is conservative, but it means that we are more confident in the benefits we do capture from the survey.
3-4 The study that is referenced (i.e., a citation in Diamond and Hausman 1994 to Desvousges 1993) is almost 20 years old and uses methods that are no longer considered standard (e.g., use of convenience samples). It is standard to include debriefing questions to capture various biases that may appear in survey responses, such as “warm glow.” As such, we have included questions designed to identify respondents who may be answering in such a way.
3-5 The study that is referenced (i.e., Loomis and White 1996) is a meta-analysis based on older studies, many of which were unpublished or not peer-reviewed. While examples of implausible survey results exist, the inclusion of appropriate debriefing questions, the use of focus groups, and pre-testing reduce such occurrences. This project is based on current survey design methods reflecting careful design choices. In addition, the survey instrument will be pre-tested with a small sample to determine whether or not responses are plausible and consistent with economic theory.
3-6 Stated preference surveys capture individual preferences for public goods, that is, environmental resources that are shared by all. The choices individuals make in the experimental setting reflect the trade-offs, or preferences, of that individual between environmental improvements and costs. By examining and aggregating individual preferences or choices using the analytical methods described in Section 5 of Part B of this Information Collection Request, the researcher (i.e., EPA) is able to discern a value from the sample of individual choices for the various environmental improvements (also called “attributes”) in the survey. The survey clearly states that many households are being asked about their preferences and choices, and therefore does not imply that any one person would be solely responsible for the program choices.
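For illustration, the aggregation step can be summarized as

\[ B \approx N \times \overline{WTP}, \]

where \(\overline{WTP}\) is the mean household willingness to pay implied by the estimated preference parameters and \(N\) is the number of households in the relevant population. In practice, the aggregation may distinguish households inside and outside the Watershed and account for non-respondents, consistent with the analytical methods in Section 5 of Part B.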
3-7 and 3-8
The stated preference survey is not part of a water quality trading plan, nor will the results of the survey be used to develop a trading plan. The survey is designed to estimate the welfare impacts of water quality improvements and will have no bearing on how those improvements are achieved.
3-9 Stated preference surveys are routinely used in federal agencies to estimate the value of non-market goods (see, for example, U.S. EPA 2008, 2009; U.S. Bureau of Reclamation 2012). It is not a method to determine a “price” for a good to be sold, but rather a method to reflect society’s value of the resource. There are no plans to “sell” the Chesapeake Bay.
3-10 Enforcement remains an important and relevant goal of the EPA.
REFERENCES
Bateman, I.J., R.T. Carson, B. Day, M. Hanemann, N. Hanley, T. Hett, M. Jones-Lee, G. Loomes, S. Mourato, E. Ozdemiroglu, D.W. Pearce, R. Sugden, and J. Swanson. (2002). Economic Valuation with Stated Preference Surveys: A Manual. Northampton, MA: Edward Elgar.
Blamey, R. and J. Bennett. 2001. Yea-saying and validation of a choice model of green product choice. In J. Bennett and R. Blamey (Eds.), The Choice Modelling Approach to Economic Valuation. Northampton, MA: Edward Elgar, pp. 178-181.
Bockstael, N.E.; K.E. McConnell; and L.E. Strand. 1989. Measuring the Benefits of Improvements in Water Quality: The Chesapeake Bay. Marine Resource Economics 6: 1-18.
Cummings, R. and L. Taylor. 1999. Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method. American Economic Review 89(3): 649-665.
Diamond, Peter A. and Jerry A. Hausman. 1994. Contingent Valuation: Is Some Number Better than No Number? Journal of Economic Perspectives 8(4): 45-64.
Desvousges, W.H., F.R. Johnson, R.W. Dunford, K.J. Boyle, S.P. Hudson, and K.N. Wilson. 1993. “Measuring Natural Resource Damages With Contingent Valuation: Tests of Validity and Reliability.” In Contingent Valuation, A Critical Assessment, J.A. Hausman, ed., pp. 91–164. Amsterdam: Elsevier.
Freeman, A. Myrick. 2003. The Measurement of Environmental and Resource Values: Theory and Methods. Washington, DC: RFF Press.
Loomis, John B. and Douglas S. White. 1996. Economic Benefits of Rare and Endangered Species: Summary and Meta-Analysis. Ecological Economics 18: 197-206.
Louviere, Jordan J.; Deborah Street; Leonie Burgess; Nada Wasi; Towhidul Islam; and Anthony A.J. Marley. 2000. Modeling the Choices of Individual Decision-Makers by Combining Efficient Choice Experiment Designs With Extra Preference Information. Journal of Choice Modeling 1(1): 128-163.
NOAA. 2002. Stated Preference Methods for Environmental Management: Recreational Summer Flounder Angling in the Northeastern United States. https://www.st.nmfs.noaa.gov/st5/RecEcon/Publications/NE_2000_Final_Report.pdf. (Accessed November 7, 2012.)
OMB. 2003. Circular A-4, Regulatory Analysis, September 17, 2003. Available at: http://www.whitehouse.gov/omb/circulars_a004_a-4/. (Accessed February 22, 2011.)
U.S. Bureau of Reclamation. 2012. Klamath River Basin Restoration Nonuse Value Survey. Final Report. Prepared by RTI International. RTI Project Number 0212485.001.010.
U.S. EPA. 2008. Final Ozone NAAQS Regulatory Impact Analysis. EPA-452/R-08-003. (Accessed November 7, 2012.)
U.S. EPA. 2009. Environmental Impact and Benefits Assessment for the Final Effluent Guidelines and Standards for the Construction and Development Category. EPA-821-R-09-012. (Accessed November 7, 2012.)
U.S. EPA. 2010. Guidelines for Preparing Economic Analyses. EPA 240-R-10-001.