SUPPORTING STATEMENT
Preliminary Case Study Assessing Economic Benefits of Marine Debris Reduction
OMB CONTROL NO. 0648-xxxx
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.
The potential respondent universe consists of visitors to the coastal study area who are 18 years old or older. Onsite intercept surveys at local beaches will be used to find respondents for the primary mail survey. The onsite intercept survey includes demographic questions and questions about participation in single or multi-day trips. The last question asks respondents if they would be willing to participate in a future mail survey. For those who decline to participate in the onsite interview entirely, we will record their gender and the reason they did not participate. For those who participate in the onsite intercept survey and agree to participate in the mail survey, we will record their mailing and email addresses. For those who complete the intercept survey but do not agree to participate in the mail survey, we will record their ZIP code in lieu of their mailing and email addresses.
We anticipate that 33% of those approached will agree to participate in both the onsite survey and the mail survey. We will thus need to approach 1,733 potential respondents to obtain the desired 572 addresses for administering the mail survey. Assuming a 35% response rate for the mail survey, we expect to receive 200 completed surveys and 372 nonresponses (Table 4).
Table 4. Expected number of intercept and mail surveys for each study area
Sample area | Number of onsite intercept surveys | Expected number of survey mailings* | Expected number of completed surveys
Orange County | 1,733 | 572 | 200
Total | 1,733 | 572 | 200
* Surveys will be mailed to those who completed the onsite intercept survey and agreed to participate in the mail survey.
Precision of survey estimates is a direct function of sample size. The sample size needed to secure a specific level of precision can be calculated using the following standard formula, in which N represents the universe size, e is the margin of error, z is the percentile of the standard normal distribution, and p is the assumed population incidence:

n0 = z^2 * p * (1 - p) / e^2
n = n0 / (1 + (n0 - 1) / N)

Here n0 is the required sample size ignoring the size of the universe, and n applies the finite population correction.
As such, at the 50 percent population incidence level and a 95% level of confidence, the expected margin of error will be no larger than approximately ±4.1% for the 572 mailed surveys and approximately ±6.9% for the expected 200 completed surveys.
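As a check on these figures, the margin of error for an estimated proportion can be computed directly. This is a minimal sketch using the same quantities defined above (p is the assumed incidence, z = 1.96 is the 95% normal percentile); the finite population correction is omitted because the universe of beach visitors is large relative to the sample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error for the mailed sample and the expected completes.
for n in (572, 200):
    print(n, round(margin_of_error(n), 3))  # 572 -> 0.041, 200 -> 0.069
```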
2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.
Statistical methodology for stratification and sample selection
As described in Part A, we will begin sampling procedures by selecting multiple beaches in Orange County, CA. The beaches will be selected to represent various types of beach experiences available, including more- and less-developed beaches. Because this approach relies on judgment to achieve representativeness rather than probability-based sampling, there is some uncertainty involved. The degree of accuracy needed will be discussed below.
When sampling at the selected beaches, we wish to ensure that approximately 50% of respondents take multiple-day (or overnight) trips. The reason for this target is that a sufficient number of respondents taking multiple-day trips is important for characterizing switching by beachgoers between regions, which is a critical aspect of the national model. It is also important for estimating the number of overnight hotel stays, which is necessary for the regional economic analysis. Conversely, targeting a sufficient number of local beachgoers, who are likely to take more trips and visit more beaches in their region, is important for characterizing marine debris at as many beaches in each region as possible. If, during this initial onsite sampling, it appears that the proportion of those taking overnight trips is significantly less than or greater than 50%, we will adjust the sampling rates accordingly. When evaluating whether an adjustment is needed, we will assume that anyone engaged in a day trip at the time of the onsite survey takes only day trips to the area being sampled. When evaluating the adjustment we will also account for the effects of choice-based sampling, described below. To illustrate the type of adjustment that could be made, consider the case where only 25% of respondents take overnight trips during the initial sampling of a given region. This would mean there are three times as many people taking day trips as there are taking overnight trips. To reach the target of an even split, we would adjust the sampling fraction to one-third for those taking day trips.
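The adjustment described above reduces to a simple calculation: if a share p of intercepted beachgoers report overnight trips (with p below the 50% target), sampling day-trippers at a fraction p / (1 - p) equalizes the two groups. A sketch, using the 25% example from the text:

```python
def day_trip_sampling_fraction(overnight_share):
    """Fraction of day-trippers to sample so that sampled day trips and
    overnight trips are balanced, given the observed overnight share."""
    return overnight_share / (1.0 - overnight_share)

# With 25% of respondents taking overnight trips, day-trippers outnumber
# them three to one, so day trips are sampled at one-third.
print(day_trip_sampling_fraction(0.25))
```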
Onsite sampling is a form of choice-based sampling, where the choices of selected respondents affect their probability of entering the sample. In this study there are three components of choice-based sampling. The first is the length of time a respondent spends at the beach on a given day. The second is the number of days the respondent spends at the beach during a given trip, which is one for day trips but could be more for overnight trips. The third is the number of trips the respondent takes to the given sampling area during the year. Data on the length of time at the beach and the number of days spent at the beach for a given trip will be collected during the onsite interviews. These questions will be asked with respect to the trip taking place at the time of the interview, and the responses will be viewed as a random draw from all trips the respondent takes. Data on the number of trips a respondent takes to the relevant sampling area during the course of the year will be collected in the mail survey.
Estimation procedure
As discussed in Part A, the primary research goal is to quantify the relationship between marine debris and the number of trips to beaches in Orange County, CA. Any change in beach visits caused by potential changes in marine debris, expressed as a percentage, will be used as an input to economic models. The models will estimate the impacts of marine debris on the value of recreation and the regional economy. A secondary goal is to compile response statistics on questions in the survey that do not involve a change in trips, such as what types of debris respondents typically see on the beach, respondents’ demographic characteristics, and other questions.
For all these survey results, the statistical estimation procedure will use a weighted average of a respondent’s answers, where weights account for the sample-selection factors described above. For example, a respondent who spent three days at the beach on her overnight trip, took two trips during the year, and spent four hours at the beach on the day she was intercepted, will have a sampling weight that is the product of one-third, one-half, and one-fourth. If the sampling rate for those taking day or overnight trips is adjusted, the weights used in the estimation will also include a factor that is the inverse of the sampling rate at the time of the onsite interview. The sampling weights will be used to compile respondent statistics at the study areas only. The MDP does not intend to extrapolate its study results to the national level.
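The weighting in the example above can be sketched as follows: each factor that raised the respondent's probability of entering the onsite sample (days on the trip, trips per year, hours on site) contributes its inverse to the weight, and an optional factor for any adjusted onsite sampling rate can be included. Values are illustrative only.

```python
def sampling_weight(days_on_trip, trips_per_year, hours_on_site,
                    sampling_rate=1.0):
    """Choice-based sampling weight: the product of the inverses of the
    selection factors, divided by the onsite sampling rate (if adjusted)."""
    return (1.0 / days_on_trip) * (1.0 / trips_per_year) \
        * (1.0 / hours_on_site) / sampling_rate

# Respondent from the example: a 3-day trip, 2 trips during the year,
# and 4 hours at the beach on the interception day.
w = sampling_weight(3, 2, 4)
print(w)  # 1/3 * 1/2 * 1/4 = 1/24
```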
The model estimation procedure relies on a nationwide travel cost model of coastal recreation. The model was developed by experts for NOAA and other federal and state trustees in the Deepwater Horizon oil spill assessment. A travel cost model involves a system of demand functions, where price is the cost to individuals of traveling to a given site and quantity is the number of trips individuals take to the site. In the Deepwater Horizon model, travel cost is calculated using an average of airfares and driving costs, depending on the distance traveled and the proportion of people traveling by air or car for any given distance. Travel cost also includes the value of time spent traveling. The structure of the model is nested logit, with coastal beaches grouped into 76 model sites covering all coastal areas of the continental United States, including the Great Lakes. In response to an environmental change, the model accounts for the change in the value of trips to a given site, switching of trips between a given site and alternative sites, and changes in the total number of trips to all sites. The recreation data used in the model come from a sample of 41,716 respondents living throughout the continental United States. Additional details about the Deepwater Horizon model and data can be found in English and McConnell (2015), Herriges (2015), and Leggett (2015). While the final Deepwater Horizon model focused on sites in the southeast United States, data were collected for trips to all beaches throughout the country, and these more comprehensive data will be used for the marine debris model.
Since the Deepwater Horizon model already estimates the total number of trips to each site, we will not rely on the marine debris survey to estimate the total number of trips in the selected coastal locations. Instead, we have defined the coastal locations so that they match sites in the Deepwater Horizon model. A percentage change in trips due to changes in marine debris, estimated using the contingent behavior questions from the marine debris survey, will be applied to total trips at the relevant model sites. The resulting change in total trips is the information the model requires to estimate the change in value.
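The final step described above is straightforward arithmetic: the percentage change in trips estimated from the contingent behavior questions is applied to each matched model site's total trips. The site names, trip totals, and percentage change below are hypothetical placeholders, not study results.

```python
# Hypothetical trip totals for matched model sites (illustrative only).
site_trips = {"model_site_A": 1_000_000, "model_site_B": 250_000}

# Hypothetical survey-estimated percentage change in trips,
# e.g., a 5% decline under higher debris levels.
pct_change = -0.05

# Change in total trips at each site: the input the travel cost model
# needs to estimate the change in recreational value.
change_in_trips = {site: total * pct_change
                   for site, total in site_trips.items()}
print(change_in_trips)
```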
Degree of accuracy needed
Two factors relate to the degree of accuracy needed. First, the survey will be used to calculate average statistics rather than totals. For example, the average percent change in the number of trips is the key result for the analysis of economic value. Statistics for the total change in the number of trips are not required because the models that will be used to calculate value already include estimates of the total number of trips. A percent change from the marine debris survey will be applied to the previously estimated totals, which simplifies the weighting procedures. Specifically, we will not have information on the proportion of total beach trips in the region that are represented by trips to the multiple beaches sampled. We also will not calculate weights that expand from the times when onsite sampling is conducted to all times when beach recreation occurs. These weights, which would be required to estimate statistics reflecting totals, are not required for the average statistics to be used in this study.
Second, the marine debris study is a Pre-test. The results will be used to evaluate the potential for further research in selected communities, regions, or nationwide. The largest source of potential inaccuracy is the sampling of a small number of beaches at one point in time during the recreation season. To achieve more accurate representativeness, probability-based sampling would require a random selection of a large number of beaches and a large number of sampling times throughout the recreation season. The cost of such an effort was not considered warranted given the exploratory nature of this study and the need for preliminary information on the economic effects of marine debris on coastal communities.
Specialized sampling procedures
We will not employ any specialized sampling procedures, other than the onsite sampling methods described above.
Periodic data collection
The data collection effort will gather information for a full year of recreation activity in one survey effort. There will be no periodic data collection.
3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.
Our mixed-mode data collection is designed to maximize response rates. The research team believes that a mixed-mode survey (an onsite intercept followed by a mail survey) offers the best opportunity for obtaining a high response rate among the target population (i.e., beachgoers who visit the study areas) at a reasonable cost, while allowing for the use of a visual aid (i.e., a map of local beaches). Recruiting the sample via intercept survey provides face-to-face contact that can obtain an initial commitment from a recruited respondent to complete the forthcoming mail survey. The combination of intercept and mail modes will maximize response rates for the mail survey (Ditton and Hunt 2001).
A number of measures will be implemented to maximize the response rate, including:
- A short beach intercept survey (~2 minutes) will identify individuals willing to participate in a mail survey.
- Onsite interviewers will recruit potential participants from multiple beaches within Orange County. Interviewers will have background information about the study to provide potential participants with context and credibility for the research.
- The intercept survey will be administered via computerized tablet to minimize respondent burden and transmit data in real time.
- The initial survey packet will contain an introductory letter informing respondents about the survey and encouraging their participation by a specific date. All letters will include the NOAA logo and will be signed by the Chief Scientist of the NOAA Marine Debris Program.
- The survey will be sent via first-class mail and will include a self-addressed, stamped envelope to facilitate response.
- One week after sending the initial survey, a thank you/reminder postcard will be mailed to all sampled households, thanking them for responding and encouraging them to complete the survey if they have not already done so.
- One week after the first postcard, a second thank you/reminder postcard will be sent to all sampled households that have not yet responded, encouraging survey completion and providing information on how to request another copy of the survey if it has been lost or misplaced.
- Three weeks after sending the initial survey, a replacement survey will be mailed to all sampled households that have not yet responded. The replacement survey will include a self-addressed, stamped envelope to facilitate response.
Once 100 completed surveys have been received during the Orange County Pre-test, a preliminary nonrespondent assessment, consisting of a census-based ZIP code demographic comparison for the onsite intercepts, will be performed. If OMB approves the extension of the Pre-test into other regions, NOAA must submit information on its nonresponse bias follow-up study, including which key attitudinal questions are used to reweight the results in addition to the usual demographic variables. If NOAA concludes that a nonresponse bias follow-up study is not warranted due to the preliminary nature of the Regional Pilot Study, NOAA must provide justification for this decision, including what information the Regional Pilot Study is likely to produce to inform future agency products and decisions.
All survey materials were carefully crafted to provide a pleasing appearance that encourages response. Questions are kept short and the total number of questions was minimized, given the research needs. An attractive, color map of local beaches is included with each survey instrument.
Potential alternatives to mixed-mode data collection include a web-based survey and an in-person survey. However, existing probability-based web panels (e.g., GfK Knowledge Networks) would have inadequate sample sizes at the county level, and the cost of completing an in-person survey at the study location would be much higher than that of a mixed-mode survey. While it would be possible to provide a Web URL that allows mail survey respondents to complete the survey over the internet, recent research has found that providing an internet option in a mail survey does not improve response rates relative to a mail-only approach (Messer and Dillman 2011; Medway and Fulton 2012; Dillman et al. 2014).

The potential for nonresponse bias will be assessed by comparing the demographic characteristics of three groups: those who did not agree to take the mail survey during onsite intercepts, those who agreed but did not return the survey, and those who completed the survey. Specifically, comparisons will be conducted for ZIP code, age, and gender. If substantial differences are observed, sampling weights will be developed through sequential post-stratification (e.g., raking), so that the weighted demographic totals for the survey data align with corresponding totals for the surveyed region (Battaglia et al. 2004).
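If raking is needed, the core computation is iterative proportional fitting: weights are repeatedly rescaled so that each variable's weighted margins match the corresponding population totals. The sketch below is illustrative only; the respondents, category codes, and population totals are hypothetical, and production work would typically use an established survey-weighting package rather than hand-rolled code.

```python
import numpy as np

def rake(weights, categories, targets, iterations=50, tol=1e-8):
    """Iterative proportional fitting over categorical variables.

    weights    : initial weight per respondent (length-n array)
    categories : list of length-n arrays of category codes, one per variable
    targets    : list of dicts mapping category code -> population total
    """
    w = weights.astype(float).copy()
    for _ in range(iterations):
        max_shift = 0.0
        for cats, target in zip(categories, targets):
            for code, pop_total in target.items():
                mask = cats == code
                current = w[mask].sum()
                if current > 0:
                    factor = pop_total / current
                    max_shift = max(max_shift, abs(factor - 1.0))
                    w[mask] *= factor
        if max_shift < tol:  # all margins matched within tolerance
            break
    return w

# Five hypothetical respondents: age group (0 = under 40, 1 = 40+)
# and gender (0/1), with equal initial weights.
age = np.array([0, 0, 1, 1, 1])
gender = np.array([0, 1, 0, 1, 1])
w0 = np.ones(5)
# Illustrative population totals: even 50/50 splits on both variables.
w = rake(w0, [age, gender], [{0: 2.5, 1: 2.5}, {0: 2.5, 1: 2.5}])
print(w.round(3))
```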
4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.
Comments on the survey materials were solicited from the following persons outside the agency:
1. Dr. George Parsons, Professor, Department of Economics, University of Delaware.
2. Dr. Eric English, Bear Peak Economics, Boulder, Colorado.
3. Dr. Jason H. Murray, Economist, I. M. Systems Group, Inc.
Additionally, one-on-one discussions were held with beachgoers in the Boston and Los Angeles areas in December 2016 and January 2017. The tests involved seven respondents, in addition to internal testing with employees of Abt. The participants filled out a draft version of the survey instrument and discussed the survey and their responses with Dr. Eric English, a member of the research team. The interviews were designed to evaluate the clarity of the survey questions and the ability of respondents to answer them accurately.
The following summarizes the issues, revisions, and conclusions of the one-on-one discussions:
- The map of the study region was important in helping respondents remember the area of interest when answering questions throughout the survey.
- Respondents were best able to understand the concept of debris density when it was described as the respondent picking up all the debris in a specified area and seeing what they find.
- Most respondents said that they were aware of debris levels at the beaches, recalled which beaches had more debris and which had less, and were able to make a reasonable estimate of how much debris was present at the beaches.
- For the survey page that explains marine debris but does not itself include any questions, it was important to include text directing the respondent to the next page for the questions about marine debris.
- When estimating changes in the number of trips due to changes in debris, most respondents described their thought process in ways indicating that they understood the questions and gave them careful consideration. Examples include respondents thinking about their children playing in the sand; respondents indicating that beaches were already clean enough that reductions in debris would not matter to them; respondents saying they would choose closer beaches they had previously avoided if there were less debris; and respondents who would adjust their behavior consistently, taking more trips if there were less debris and fewer trips if there were more.
- The survey took less than 10 minutes for most respondents.
Two additional methodological tests involve comparing survey results to external measures. First, the survey elicits respondents’ estimates of how their recreation choices would change in response to hypothetical changes in marine debris levels. This method is called “stated preference.” It is common in the economics literature to compare stated-preference results to what are called “revealed preference” results. Revealed preference involves inferring changes in behavior from actual choices people have made in the past. We will compare the stated-preference results of the marine debris survey to the revealed-preference results of the marine debris study conducted previously in Orange County, California.
Second, the marine debris survey elicits respondents’ estimates of the amount of debris at beaches in Orange County, CA. These estimates are useful in characterizing the baseline level of debris to which changes are compared. For some beaches, information about the level of marine debris has already been collected onsite. The estimates by respondents to the marine debris survey will be compared with the onsite measurements for validation or potential adjustments.
5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
The following individuals were consulted on the statistical aspects of the design:
Dr. George Parsons, Professor, Department of Economics, University of Delaware (phone: 302-831-6891).
Dr. Eric English, Bear Peak Economics, Boulder, Colorado (phone: 202-699-6334).
Dr. Adam Domanski, National Oceanic and Atmospheric Administration (phone: 240‑533-0433).
Abt will collect and analyze the information for the Program.
References
Ballance, A., P.G. Ryan, and J.K. Turpie. 2000. How much is a clean beach worth? The impact of litter on beach users in the Cape Peninsula, South Africa. South African Journal of Science 96(5):210–230.
Battaglia, M.P., D. Izrael, D. Hoaglin, and M.R. Frankel. 2004. Tips and tricks for raking survey data (aka sample balancing). American Association of Public Opinion Research.
BLS. 2015. U.S. Department of Labor, Occupational Employment Statistics. Bureau of Labor Statistics. Available: www.bls.gov/oes/.
Dillman, D., J.D. Smyth, and L.M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th Edition.
Ditton, R.B. and K.M. Hunt. 2001. Combining creel intercept and mail survey methods to understand the human dimensions of local freshwater fisheries. Fisheries Management and Ecology 8(4-5):295–301.
English, E., and K. McConnell. 2015. Overview of Recreation Assessment. DWH Lost Recreational Use NRDA Technical Working Group Report. Available at https://www.fws.gov/doiddata/dwh-ar-documents/940/DWH-AR0021412.pdf.
Hanemann, M., L. Pendleton, C. Mohn, J. Hilger, K. Kurisawa, D. Layton, C. Bush, and F. Vasquez. 2004. Using Revealed Preference Models to Estimate the Effect of Coastal Water Quality on Beach Choice in Southern California. A Report from the Southern California Beach Valuation Project to the National Oceanic and Atmospheric Administration.
Herriges, J. 2015. Model Structure. DWH Lost Recreational Use NRDA Technical Working Group Report. Available at https://www.fws.gov/doiddata/dwh-ar-documents/940/DWH-AR0045972.pdf.
IEc. 2014. Assessing the Economic Benefits of Reductions in Marine Debris: A Pilot Study of Beach Recreation in Orange County, California. Final Report. June 15, 2014. Industrial Economics, Incorporated. Available: https://marinedebris.noaa.gov/file/2574/download?token=zIPamF9O. Accessed 11/3/2016.
Landry, C.E., T. Allen, T. Cherry, and J.C. Whitehead. 2012. Wind turbines and coastal recreation demand. Resource and Energy Economics 34(1):93–111.
Leggett, C. 2015. Travel Cost Computation. DWH Lost Recreational Use NRDA Technical Working Group Report. Available at https://www.fws.gov/doiddata/dwh-ar-documents/940/DWH-AR0056724.pdf.
Lew, D.K. and D.M. Larson. 2005. Valuing recreation and amenities at San Diego County beaches. Coastal Management 33(1):71–86.
Lynn, P. 2013. Alternative sequential mixed-mode designs: Effects on attrition rates, attrition bias, and costs. Journal of Survey Statistics and Methodology 1(2):183–205.
Medway, R.L. and J. Fulton. 2012. When more gets you less: a meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly 76(4):733–746.
Messer, B.L. and D.A. Dillman. 2011. Surveying the general public over the Internet using addressed-based sampling and mail contact procedures. Public Opinion Quarterly 75(3):429–457.
Millar, M.M. and D.A. Dillman. 2011. Improving response to web and mixed-mode surveys. Public Opinion Quarterly 75(2):249–269.
Parsons, G. and D. Massey. 2003. A Random Utility Model of beach recreation. In The New Economics of Outdoor Recreation, N. Hanley, W.D. Shaw, and R.E. Wright (eds.). Edward Elgar.
Parsons, G., A. Kang, C. Leggett, and K. Boyle. 2009. Valuing beach closures on the Padre Island National Seashore. Marine Resource Economics 24(3):213–235.
Pendleton, L., P. King, C. Mohn, D.G. Webster, R. Vaughn, and P.N. Adams. 2011. Estimating the potential economic impacts of climate change on Southern California beaches. Climatic Change 109(1):277–298.
Schuhmann, P.W. 2012. Tourist perceptions of beach cleanliness in Barbados: Implications for return visitation. Études Caribéennes (19).
Smith, V.K., X. Zhang, and R.B. Palmquist. 1997. Marine debris, beach quality, and non-market values. Environmental and Resource Economics 10(3):223–247.