Economic Value of Puerto Rico's Coral Reef Ecosystems for Recreation/Tourism Uses

OMB: 0648-0713




B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


There are two populations that will be surveyed: permanent residents of Puerto Rico who used the coral reefs of Puerto Rico for recreation over the past 12 months, and visitors to Puerto Rico who used the coral reefs on their current (most recent) trip. No one currently knows the size of either population of coral reef users. We describe below how we estimate those populations. For visitors, we start with the number of enplanements, which is the number of people leaving Puerto Rico by air; an enplanement is also referred to as a person-trip, i.e., one person making one trip to Puerto Rico. In 2013, there were more than 4.6 million enplanements (Table B.5). It is estimated that about 80 to 85 percent of these enplanements are made by visitors who participate in at least one recreation activity on their visit to Puerto Rico (Puerto Rico Tourism Company). For residents, we start with the number of households in coastal municipalities, which was estimated to be more than 858,000 in 2013 (Table B.6). We then estimate what percent of those households have a household member who has used Puerto Rico's coral reefs for recreation.


The unit of analysis for visitors is a person-trip. We estimate the number of days and the expenditures of coral reef activity per person-trip and can then extrapolate from sample to population based on estimates of total person-trips involving coral reef use. For non-market economic value, the unit of analysis is visitor household annual willingness to pay. For residents, the unit of analysis is the household. Annual activity, spending and non-market economic value are obtained and extrapolated from sample to population based on the number of households estimated to use the coral reefs. Estimates of annual activity and spending will be obtained as follows: annual activity using the in-home survey form and expenditures using the expenditure mailback questionnaire. For visitors, all information is obtained about the interview trip. See Part A for the details of what information is obtained from each component of the survey for both residents and visitors.


Visitor Survey


Full Survey


Survey Forms. The visitor survey has four basic components: the Airport Survey, the Internet Panel, the Expenditure Mailback, and the Satisfaction Mailback. The Expenditure and Satisfaction Mailbacks are given to those visitors who do not want to join the Internet Panel but accept the mailbacks. As in past work in the Florida Keys, visitors are given both mailback forms and are told that if they fill out and return both, they will increase their chance of winning the sweepstakes/lottery gifts.


Table B1 summarizes, for each survey component, the number of participants (completes) and the expected net response rate. We expect a 90 percent net response rate among eligible visitors (coral reef users) for the Airport Survey. Because the Airport Survey is limited to 5 minutes on average, a follow-up Internet Panel is recruited to meet the more detailed data needs. We require 500 Internet Panel completes for each season (winter and summer), for a total of 1,000 completes. The Internet Panel survey firm (GfK, Inc.) has advised us that we should recruit 1,500 visitors per season from the Airport Survey to get 500 completes of the Internet Survey, for a total of 3,000 participants completing the Airport Survey short form and 1,000 completes of the Internet Panel survey. We think GfK, Inc. is being very conservative in this planning assumption, but because this is the first time anyone has done this, we must plan conservatively to get the number of responses necessary to ensure statistically reliable estimates.


There are three steps in calculating the expected net response rate in our survey of visitors using the Internet Panel. We will calculate the expected response rate at each step and the cumulative response rate across all three steps using AAPOR Response Rate 1, which is the most conservative (minimum) expected net response rate. We present two scenarios below based on different ranges of assumptions. The pre-test will help us refine these assumptions.


Scenario 1. AAPOR Response Rate 1 – Summer Season Visitor Internet Panel Survey

Response Rate 1 = I / [(I + P) + (R + NC + O) + (UH + UO)]


Where

I = Complete interview

P = Partial interview

R = Refusal

NC = Non-contact

O = Other

UH = Unknown if household

UO = Unknown, other


Step 1: On-site Interview at the airports using the Tally Sheet to obtain some of the parameters of the AAPOR response rates.


1,500 / [(1,500 + 10) + (75 + 0 + 0) + (0 + 0)] = 94.64%


We assume 10 partial interviews (P) per 1,500 completes based on past experience at airports, where people get nervous and we have to cut off the interview because they are focused on boarding announcements and cannot complete the survey. Protocol is to end surveys once boarding announcements begin.


We assume 75 refusals (R) per 1,500 completed interviews based on past experience at airports.


NC, O, UH and UO are either irrelevant or assumed zero in our application.


Step 2: On-site (airport) Recruitment into Internet Panel.


We assume 85.0% will choose to join the Internet Panel and will provide their e-mail and telephone number that will be given to GfK to complete the recruitment into the Internet Panel.


Recruitment stage 1 = 1,275/1,500 = 85.0%


Step 3: Internet Panel Completes.


GfK advised us that we need almost three recruits to get one complete through Steps 2 and 3. GfK reports that it gets between 85 and 90% response rates once panel recruitment is completed. We use the 85% figure, with the remaining 15% divided between Partial Interviews (P) and Refusals (R) who did not complete any of the Internet Survey.


Internet Panel completes = 500 / [(500 + 29) + (59 + 0 + 687) + (0 + 0)] = 39.21%


I = 500 completes

P = 5% of eligible (those who completed recruitment into panel) = 29

R = 10% of eligible (those who completed recruitment into panel) = 59

O = 687 (those who did not complete recruitment into Internet Panel)


Net Response Rate = 94.64% * 85.0% * 39.21% = 31.5%



Scenario 2. AAPOR Response Rate 1 – Summer Season Visitor Internet Panel Survey


Step 1: On-site Interview at the airports using the Tally Sheet to obtain some of the parameters of the AAPOR response rates.


In this scenario, we assume that GfK was too conservative and we only need to recruit 1,000 to get 500 completes, a two to one ratio instead of three to one.


1,000 / [(1,000 + 10) + (50 + 0 + 0) + (0 + 0)] = 94.34%


We assume 10 partial interviews (P) per 1,000 completes based on past experience at airports, where people get nervous and we have to cut off the interview because they are focused on boarding announcements and cannot complete the survey. Protocol is to end surveys once boarding announcements begin.


We assume 50 refusals (R) per 1,000 completed interviews based on past experience at airports.


NC, O, UH and UO are either irrelevant or assumed zero in our application.



Step 2: On-site (airport) Recruitment into Internet Panel.


We assume 85.0% will choose to join the Internet Panel and will provide their e-mail and telephone number that will be given to GfK to complete the recruitment into the Internet Panel.


Recruitment stage 1 = 850/1,000 = 85.0%


Step 3: Internet Panel Completes.


GfK advised us that we need almost three recruits to get one complete through Steps 2 and 3. In this scenario we assume it only takes two recruits to get one complete through Steps 2 and 3. GfK reports that it gets between 85 and 90% response rates once panel recruitment is completed. We use the 85% figure, with the remaining 15% divided between Partial Interviews (P) and Refusals (R) who did not complete any of the Internet Survey.


Internet Panel completes = 500 / [(500 + 29) + (59 + 0 + 262) + (0 + 0)] = 58.82%


I = 500 completes

P = 5% of eligible (those who completed recruitment into panel) = 29

R = 10% of eligible (those who completed recruitment into panel) = 59

O = 262 (those who did not complete recruitment into Internet Panel)


Net Response Rate = 94.34% * 85.0% * 58.82% = 47.2%
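
For clarity, the following short Python sketch reproduces the AAPOR Response Rate 1 arithmetic for the two scenarios above; all counts are the planning assumptions stated in the text, not survey results.

def aapor_rr1(I, P=0, R=0, NC=0, O=0, UH=0, UO=0):
    # AAPOR Response Rate 1: complete interviews divided by all eligible and unknown cases
    return I / ((I + P) + (R + NC + O) + (UH + UO))

# Scenario 1 (three on-site recruits per Internet Panel complete)
s1_step1 = aapor_rr1(I=1500, P=10, R=75)        # on-site interview, approx. 94.6%
s1_step2 = 1275 / 1500                          # panel recruitment, 85.0%
s1_step3 = aapor_rr1(I=500, P=29, R=59, O=687)  # panel completes, approx. 39.2%
print("Scenario 1 net response rate: {:.1%}".format(s1_step1 * s1_step2 * s1_step3))

# Scenario 2 (two on-site recruits per Internet Panel complete)
s2_step1 = aapor_rr1(I=1000, P=10, R=50)        # on-site interview, approx. 94.3%
s2_step2 = 850 / 1000                           # panel recruitment, 85.0%
s2_step3 = aapor_rr1(I=500, P=29, R=59, O=262)  # panel completes, approx. 58.8%
print("Scenario 2 net response rate: {:.1%}".format(s2_step1 * s2_step2 * s2_step3))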




For the mailbacks, past experience has yielded response rates of 40 to 45% for the Expenditure Mailback and 50 to 60% for the Satisfaction Mailback when visitors are given both. We are using the lower estimates to be conservative. To calculate expected net response rates, we multiply these estimates by 0.9 to account for the 10% expected refusal rate on the Airport Survey.


Data Elements: Since different data elements are obtained from different forms, the number of participants (completes) and the expected net response rates are also calculated by data element and summarized in Table B1. Activity Participation and Demographics are obtained in the Airport Survey short form; 3,000 completes are expected with an expected net response rate of 90%. Number of Days and Dives by activity and region (intensity of use) is obtained only via the Internet Panel.


Since the mailbacks are from the 3,000 who completed the on-site short form (Airport Survey), and the information from these mailbacks is also obtained in the Internet Panel, we add the expected number of completes for the two sub-samples (Internet Panel + mailbacks) to calculate the total number of participants (completes) and the expected net response rates.



Resident Survey


Full Survey


Survey Forms. The resident survey has two components: the in-house, on-site survey and the mailbacks. Each household that completes the in-house, on-site survey form is asked to complete both the expenditure and satisfaction mailback forms. Residents are told that each survey component they complete will increase their chance of winning a free vacation to the Island of Culebra (i.e., if they complete the in-house, on-site form and the two mailbacks, they will be entered three times into the sweepstakes/lottery for the free vacation).


Table B2 summarizes the number of participants (completes) and expected net response rates for each survey component. We are targeting 1,000 completes of resident households for the in-house, on-site form. We expect a 10% refusal rate for eligible households (those in which someone did recreational activities on the coral reefs in Puerto Rico), so the expected net response rate is 90%. For the mailbacks, we expect that 40% will fill out and return the expenditure mailback, for a total of 400 participants (completes) or an expected net response rate of 36% (40% * 0.9), and that 50% will fill out and return the satisfaction mailback, for a total of 500 participants (completes) and an expected net response rate of 45% (50% * 0.9).


Data Elements. With only two survey components, the resident survey is less complicated and follows the results of the survey forms. The in-house, on-site survey includes Activity Participation and Intensity of use (Days and Dives by activity and region); Non-market economic valuation; and Demographics. For each of these data elements, we have targeted 1,000 participants (completes) with an expected net response rate of 90%. Expenditures, Importance-satisfaction ratings and Special issues come from the mailbacks and follow the number of participants (completes) and expected net response rates for those mailbacks. Table B2 summarizes the results.










Visitor Survey Pre-test


The purpose of the pre-test is primarily to assist in the design of the dollar bid amounts for the non-market economic valuation choice questions in the Internet Panel. Also, net response rates for the visitor surveys are only estimates, since no one has ever recruited an Internet Panel in the way we are doing for visitors. The conservative GfK assumption about how many on-site recruits are required to get 500 completed Internet interviews will be tested (we will recruit 600 to get 200 completes). We will also test the time it takes to complete the Resident in-home survey.


Our greatest uncertainty in this study, which affects our sampling plan, is how many visitors and residents use the coral reefs in Puerto Rico. No one has ever done such a study before. The only other studies done did not use probability-based sampling and were not able to extrapolate results from sample to population. We have a probability-based sample design for both residents and visitors, and we will be able to extrapolate from sample to population for both populations of coral reef users. No one knows right now what percent of either of those populations uses Puerto Rico's coral reefs for recreation-tourism; we will determine this for the first time. This will allow other researchers to design follow-up studies to obtain more in-depth information about these uses/users by providing a basis for weighting their samples. All of this could change our expected burden estimates. If we find high proportions of visitors and residents doing recreation-tourist activities on the coral reefs, we can lower the number of surveys we have to complete. In addition, if the assumption GfK is using to ensure delivery of 500 completes of the Visitor Internet Panel survey turns out to be too conservative, we can reduce the number of airport surveys we need to do. This could save costs as well as increase our net response rates. All other elements of the survey have been tested many times over many years and do not require pre-testing (e.g., the satisfaction and expenditure mailback questionnaires).


A sample size of 200 is thought to be adequate for this purpose. All the same forms as in the full survey will be used in the pre-test, except the mailbacks and the non-market economic use value choice questions. This will also allow us to test some of our assumptions in calculating our expected net response rates, while the choice questions are designed simply to help with the design of the dollar bid amounts for the non-market economic valuation. Table B3 summarizes the number of participants (completes) and expected net response rates. The data elements listed here are restricted to the non-market economic value questions for the dollar bid amounts.



Resident Survey Pre-test


As with the visitor survey pre-test, the primary purpose is to assist in the design of the dollar bid amounts for the non-market economic valuation choice questions in the in-house, on-site survey. Again, a sample size of 200 is thought to be adequate for this purpose. Most of the same forms used in the full survey will be used in the pre-test, except the “Satisfaction Mailback” and the “Expenditure Mailback”. Instead of the Satisfaction mailback, we will use a one-page form in-house to rate the importance of reef attributes used in the non-market economic valuation. The design of the full survey requires that we collapse the number of attributes for efficient design, so we need to determine relative importance.


We do not need to test the "Expenditure Mailback". This expenditure questionnaire has been used by the U.S. Forest Service, NOAA, the Department of the Interior's National Park Service and Bureau of Land Management, and the U.S. Army Corps of Engineers on many federal, state and local sites throughout the nation since 1985. The questionnaire has evolved over time based on much learning about how people respond to the various expenditure categories.


The pre-test will also allow us to test some of our assumptions in calculating our expected net response rates. Table B4 summarizes the number of participants (completes) and expected net response rates. The data elements listed here are restricted to the non-market economic value questions for the dollar bid amounts.





2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Visitor Survey


Airport Survey


The visitor surveys will be conducted at the airports in Puerto Rico that have flights leaving the island. The Puerto Rico airports keep data on the number of passengers on flights leaving the island (enplanements), and these are summarized by the Puerto Rico Tourism Company. Data are summarized by airport and month. Since past surveys have found that visitors differ by season, surveys will be stratified by season with separate samples for each season. There are two seasons: winter (November through April) and summer (May through October). Previous-year enplanement data at each of the airports that have flights leaving the island are used to stratify sampling effort across the airports within each season. For each season, 42 days of sampling are planned. Sampling days will be stratified by type of day (weekday and weekend/holiday). Table B5 shows the stratification of sampling days by season. The overwhelming majority of flights and passengers leaving the island go through the San Juan airport (SJU), with 91% of enplanements each season. The distribution across airports is not significantly different by season.


The sample is a stratified random sample of all people getting on planes leaving the island of Puerto Rico. Stratification is by airport (five airports) and season (two seasons: summer and winter). The Puerto Rico Airport Authority maintains monthly counts of air enplanements (number of people getting on planes leaving the island of Puerto Rico).

We obtained the enplanement data for all airports in Puerto Rico that have flights that leave the island. We don’t include inter-island flights. We pre-stratify our samples across airports and seasons by the number of enplanements the year prior to our survey. See Table B5 in Part B (page 11) of the supporting statement.

We deploy interviewers in teams of two, with potentially two teams per session at the San Juan Airport and one team at the smaller airports. Each day we receive a list of flights leaving each airport from the Puerto Rico Airport Authority. We make sure we choose flights that properly represent the relative number of passengers by destination across flights. Our interviewers receive security clearances and are issued security badges. They interview in the gate lounge areas for flights leaving Puerto Rico.

Respondents are randomly selected from people in the lounge/waiting area for the selected flight. Interviewers arrive at the gate/lounge area one hour before each flight. Depending on the layout of the gate/lounge area, each interviewer randomly selects a starting row of seats; for the first row of seats selected, the first person in that row is selected, then every third or fourth person in the row depending on the size of the lounge and the number of people in it. For the second row of seats, interviewers select the second person sitting in the row and then every third or fourth person after that. At each additional row, the starting point increases by one seat. Each interviewer conducts the screening and the complete interview. The Tally Sheet is used to screen passengers for meeting our criteria of being a visitor to Puerto Rico (we screen out permanent residents of Puerto Rico) who did at least one coral reef activity while on their visit to Puerto Rico. We are therefore able to use the Tally Sheet to estimate the proportion of all air enplanements that are visitors and coral reef users. We can then tie these proportions back to the population via the air enplanement data from the Airport Authority. Thus, all air enplanements on flights leaving Puerto Rico have an equal, non-zero probability of being selected. Those who are eligible and agree to the survey are then interviewed using the Airport Short Form. See Attachment D for the Tally Sheet and the Airport On-site Short Form.




We don't know the probability that a visitor to Puerto Rico is a coral reef user, since this is the first study to address the issue. Therefore, we have no idea how many contacts at the airport will be required to identify a coral reef user, and it is not possible to determine the number of Tally Sheet screenings needed to achieve 1,500 completes of the on-site airport survey per season (which in turn ensures 500 completes of the Internet Survey). For the same reason, we cannot calculate standard errors of the percent of visitors that are coral reef users at this time.

There is no design effect in the visitor survey: it is a simple stratified random sample and does not use cluster sampling. There may be an effect from pre-stratification. For initial sample weighting (the stage where we adjust the pre-stratification based on prior-year distributions in Table B5 to post-sample stratification using the actual enplanement data for the months in each season and at each airport), our weights will equilibrate the sample distributions with the actual distributions of enplanements by airport and season. That is all we need for weighting the data at this stage of estimating total person-trips for coral reef using visitors each season.

Sample weights for each case are individual weights for an observation in a stratum. In the short form we obtain the party size, reef use activity, and demographics to establish second-stage weights that would adjust for any non-response bias (see the answer to the question about non-response bias and weighting). We will also be able to develop household weights using the number of household members in the traveling party for application to activity participation and use and to non-market valuation. Since expenditures will be estimated on a per person-trip basis, they will be estimated using the individual weights. These weighted per person-trip expenditures are then multiplied by the aggregate number of person-trips estimated for visitors who did coral reef activities to get total expenditures.

Additional weights may have to be established if there is non-response bias. Different weights may have to be developed by type of information (e.g., activity participation, expenditures, importance-satisfaction ratings, non-market economic values). See the answer on non-response bias analysis for the models to be estimated. If non-response bias is detected, then a combination of multivariate and multiplicative weights will be used. This usually requires some iteration, since full multiplicative weights are generally not possible with the sample sizes we will be obtaining.
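
As an illustration of the first-stage weighting described above, the following Python sketch computes a stratum weight as the stratum's share of actual enplanements divided by its share of completed interviews. The airport labels other than SJU and all counts are hypothetical placeholders, not data from this study.

# Illustrative first-stage weighting: actual enplanement share / sample share.
# Airport labels other than SJU and all counts below are hypothetical.

sample_completes = {"SJU": 1365, "Airport_B": 60, "Airport_C": 45,
                    "Airport_D": 15, "Airport_E": 15}
actual_enplanements = {"SJU": 2_100_000, "Airport_B": 95_000, "Airport_C": 60_000,
                       "Airport_D": 25_000, "Airport_E": 20_000}

n_total = sum(sample_completes.values())
e_total = sum(actual_enplanements.values())

weights = {a: (actual_enplanements[a] / e_total) / (sample_completes[a] / n_total)
           for a in sample_completes}

for airport, w in sorted(weights.items()):
    print(f"{airport}: stratum weight = {w:.3f}")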



Internet Panel


The airport survey is limited to an average time of 5 minutes based on past experience of conducting surveys at airports in Florida. So for more detailed information, visitors are recruited into an Internet Panel. Unlike many past studies using Internet Panels, we are recruiting our panel members via a stratified random sample of visitors to Puerto Rico that are coral reef users.


The University of Puerto Rico (UPR) recruits visitors into the Internet Panel when doing the Airport Survey. If a respondent agrees to join, the interviewers obtain their telephone number and e-mail address. UPR forwards this information to GfK, which follows up with information about the Internet Panel. UPR also sends GfK the respondent's activity participation information. GfK programs that information into the Internet survey so that the follow-up effort to obtain intensity-of-use information (person-days of use and number of dives for SCUBA and snorkeling) is more efficient (respondents are asked only about the activities and regions in which they participated). GfK will do three follow-ups by e-mail and phone to get people who agreed to join the panel to complete the survey. GfK is responsible for implementing the survey, not the recruitment. The panel is implemented by GfK and is used only for the UPR-NOAA study; it will not be used by GfK for any other surveys.


The firm (GfK) that will conduct the survey is highly experienced in implementing Internet Panels. Internet Panel members will be asked for information on intensity of use (days and dives) for general recreation-tourist activities, since they have already been asked about participation by activity and region in the airport survey. Panel members will also be asked about participation and number of days and dives by reef activity and region of activity. Importance-satisfaction ratings and expenditures will also be asked of Panel members. The most important information for this survey is the non-market economic value and how that value changes with changes in the condition of reef attributes.


Mailbacks


For those who complete the airport survey and do not wish to join the Internet Panel, we provide the option of filling out two mailback questionnaires. One addresses their expenditures and the other their importance-satisfaction ratings and special issues.




Nonresponse Bias Analyses.  

Visitor Survey. As in Leeworthy (1996), we will use multiple regressions for the satisfaction and expenditure mailbacks and the Internet Panel. We only have one mode of travel (air), so we won't be modeling mode of access.

Step 1: First, we will run Kolmogorov-Smirnov two-sample tests for differences in factors between respondents and non-respondents. Second, we will run probit and logit equations on respondents versus non-respondents (1 = respondent and 0 = non-respondent). Explanatory variables: place of residence, length of stay, age, gender, ethnicity, race, income, household size, second home ownership, and activity participation. This will determine what factors might be related to non-response.

Step 2: Check to see if any of the variables related to non-response are related to various variables for estimation.

For the satisfaction mailback, we will run regressions on each importance and satisfaction rating as the dependent variable. Explanatory variables come from the Airport On-site form, including: place of residence, length of stay, age, gender, ethnicity, race, income, household size, second home ownership, and activity participation.

For the expenditure mailback, we will run regressions on selected aggregate expenditure categories (e.g., lodging, food, transportation, boating, fishing, diving, sightseeing, services and total). Explanatory variables come from the Airport On-site form, including: place of residence, length of stay, age, gender, ethnicity, race, income, household size, second home ownership, and activity participation.

For the Internet Panel, we would do the same for the importance and satisfaction ratings, the expenditure categories, and intensity of reef use (person-days of use). Explanatory variables come from the Airport On-site form, including: place of residence, length of stay, age, gender, ethnicity, race, income, household size, second home ownership, and activity participation.

Step 1 only reveals whether there is potential for non-response bias; it is a necessary but not a sufficient condition for establishing the existence of non-response bias. Step 2 determines whether any of the factors related to non-response are significant factors in explaining measurements obtained in the survey. If so, then sample weighting will be required. It is possible, but not certain, that multivariate weighting may be required. We won't know that until after we complete the survey and do the analyses.
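
A minimal Python sketch of this two-step screening is shown below, assuming the Airport Survey records are in a pandas DataFrame with a 0/1 respondent indicator for the follow-up component being tested; the column names are hypothetical.

# Minimal sketch of the two-step non-response screening (column names are hypothetical).

import pandas as pd
import statsmodels.api as sm
from scipy.stats import ks_2samp

covariates = ["length_of_stay", "age", "income", "household_size", "n_activities"]

def step1_screen(df: pd.DataFrame):
    # Kolmogorov-Smirnov two-sample tests: respondents vs. non-respondents
    for var in covariates:
        stat, p = ks_2samp(df.loc[df["respondent"] == 1, var],
                           df.loc[df["respondent"] == 0, var])
        print(f"KS test for {var}: D = {stat:.3f}, p = {p:.3f}")
    # Logit of response status on the covariates (a probit can be run the same way)
    X = sm.add_constant(df[covariates])
    return sm.Logit(df["respondent"], X).fit(disp=0)

def step2_check(df: pd.DataFrame, outcome: str, flagged: list):
    # Regress a survey measurement (e.g., an expenditure category) on the
    # covariates that Step 1 flagged as related to non-response.
    respondents = df[df["respondent"] == 1]
    X = sm.add_constant(respondents[flagged])
    return sm.OLS(respondents[outcome], X).fit()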

We believe we have more than adequate information in the Airport on-site survey to test for non-response bias and do not think we need to add questions. The Airport Survey is time sensitive and we need to keep it to an average time of 4 to 5 minutes; it has been used many times, so we are very certain of our estimate of the time to do the survey as it currently exists. Adding questions would add burden and possibly lead to greater on-site non-response via incompletes.





Resident Survey


The survey of residents will be a household survey. The sampling frame will be limited to coastal municipalities. This is based on past research, which found that Puerto Ricans living in interior island municipalities have very little connection with coastal areas. The probability of contacting an interior household where at least one household member age 16 or older uses the coral reefs for recreation is therefore extremely small, and sampling those municipalities would be cost prohibitive.


The sample will be stratified by the number of households in each coastal municipality (see Table B6). Within each municipality, households will be selected using a two-stage stratified random sample.


Because no one has ever done a study of reef use in all of Puerto Rico, we have no idea what percent of households contain a reef user age 16 or older. We will therefore have to make an initial guess (assumption) as to the percent of households in coastal municipalities that contain a reef user age 16 or older to determine the sample size to draw from the Census data.


We have determined, for the various estimates we will be trying to make in the study, that the in-home portion of the survey requires completed surveys from at least 1,000 households that contain at least one reef user. We use a stratified random sample with two stages.


Stage 1: Stratify the 1,000 completed in-home surveys across coastal municipalities according to the proportion of occupied housing units in each coastal municipality (Table B6). Using a guesstimate that 10 percent of coastal households contain a reef user age 16 or older, and that 80% of those households will complete the survey, we calculate the number of occupied households that need to be randomly selected from each coastal municipality using the following formula:


N = [ n + (n * (1-b))] * (1/a)


Where,


N = Required number of occupied households to select in each coastal municipality


n = Required number of households that complete the in-home survey in each coastal community (Table B6).


a = Estimated percent of coastal community households that contain a reef user age 16 or older (10% or 0.1)


b = Percent of coral reef using households that complete the in-home survey (80% or 0.8)


Results of the above calculations are summarized in Table B7.
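
The Table B7 arithmetic can be verified with the short Python sketch below, applying the formula above with a = 0.10 and b = 0.80 to the required completes shown in Table B7.

# Check of the Table B7 arithmetic: N = [n + n*(1-b)] * (1/a), with a = 0.10, b = 0.80.

def households_to_sample(n, a=0.10, b=0.80):
    return round((n + n * (1 - b)) / a)

required_completes = {"Arecibo": 43, "Hatillo": 18, "Camuy": 15, "Quebradillas": 11,
                      "Isabela": 20, "Culebra": 1, "Vieques": 4}

for municipality, n in required_completes.items():
    print(f"{municipality}: select {households_to_sample(n)} occupied households")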


Stage 2: Randomly select housing units (addresses) within each coastal municipality according to the distribution of occupied housing units across Census Blocks. This takes the sample sizes from Table B7 for each coastal municipality and distributes them across Census Blocks within each coastal municipality. The Census Bureau 2010 Blocks TIGER/Line Shapefiles will be downloaded from the Census Bureau FTP site (www2.census.gov) and converted to Google Earth .kml files using the free shp2kml version 2 software.


The Census 2010 data will be downloaded and imported into MySQL (an open-source relational database management system). These data, in combination with the Blocks TIGER/Line files, will be used to estimate the number of occupied housing units for every Census Block inside each of the communities in each geographical area.


Addresses of units within Census Blocks will be selected randomly. First, a list of streets in each Census Block will be developed. Streets will be sorted by the number of housing units. The proportional number of housing units to select on each street will then be determined, and addresses will be selected from the range of addresses on the street. The list of addresses in each municipality will then be sent to the U.S. Post Office to verify that they are deliverable addresses.
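
The following Python sketch illustrates the Stage 2 allocation: a municipality's sample from Table B7 is distributed across Census Blocks in proportion to occupied housing units, and addresses are then drawn at random within each block. The block identifiers, unit counts, and address lists are hypothetical placeholders for the TIGER/Line and Census data described above.

# Illustrative Stage 2 allocation; block IDs, unit counts, and addresses are hypothetical.

import random

municipality_sample = 48   # e.g., Vieques from Table B7
occupied_units_by_block = {"block_001": 120, "block_002": 85, "block_003": 240}
total_units = sum(occupied_units_by_block.values())

for block, units in occupied_units_by_block.items():
    n_block = round(municipality_sample * units / total_units)
    # In practice the address list comes from the street-level frame described above
    # and is verified as deliverable by the U.S. Post Office.
    address_frame = [f"{block}_address_{i}" for i in range(units)]
    selected = random.sample(address_frame, min(n_block, units))
    print(f"{block}: {len(selected)} addresses selected")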


The result of the above two-stage sampling is a stratified random sample that is probability based, with each household having an equal probability of selection. There is no design effect from clustering since cluster sampling is not used; variances and standard errors are calculated using standard formulas for stratified random samples (Kish 1995). There will be design effects from stratification, and post-sample weighting may have to be conducted to adjust for differences between the pre-sample stratification and post-sample results. Weights may have to be developed for different demographic factors available in the Census data (e.g., age, race, ethnicity).


Implementation


Households selected will be sent a pre-notification letter stating the purpose of the survey and providing the date(s) the survey team from the University of Puerto Rico-Mayaguez will be in their community. Contact information for the University of Puerto Rico-Mayaguez will be provided, with the opportunity to respond as to whether they qualify for the survey and whether they would like to participate. They will be told about the sweepstakes/lottery and the chance to win a free vacation to the Island of Culebra and other gifts. Households will also be provided a self-addressed, postage-paid postcard on which they can indicate that no one in their household uses Puerto Rico's coral reefs for recreation, or that someone in their household does but they do not want to participate in the survey.


In the field, interviewers will use the Tally sheet to identify if there is anyone in the household age 16 or older that uses Puerto Rico’s coral reefs for recreation activities. This Tally sheet and supporting materials are described in Part A of the supporting statement. The Tally sheet will provide the basis of estimating the percent of households in the coastal municipalities that contain a coral reef user age 16 or older.


Survey Follow-ups, Refusals and Re-interviews: For those who are not at home when the interviewers arrive, two follow-up efforts will be made to convert them to completes. For those who refuse, no follow-up efforts will be conducted, and there will be no re-interviews for quality control; these additional efforts are beyond our budget.


Pre-test. The pre-test can be used to test the assumptions for the percent of households that contain a reef user age 16 or older and the assumption that 80 percent of these households will complete the survey.


If the assumptions do not hold, additional sample beyond that specified in Table B7 will have to be drawn, with the objective of achieving the sample sizes specified in Table B6 by coastal municipality.


Sample Weighting. The above sample design is self-weighting since it is a straightforward stratified random sample. However, if there are different response rates by municipality, post-stratification weighting may be required, including post-stratification by key demographic characteristics in the Census data. Sample weighting may also be required to adjust for non-response if analysis determines there is non-response bias (see the section on analysis of non-response bias).








Table B7 (Continued)

______________________________________________________________________________________
Coastal Municipality | Households to Complete In-home Survey (Full Survey) | Occupied Households to be Sampled (Full Survey) | Occupied Households to be Sampled (Pre-test) | Households to Complete In-home Survey (Pre-test)
______________________________________________________________________________________
Arecibo        |    43 |    516 |   103 |   9
Hatillo        |    18 |    216 |    43 |   4
Camuy          |    15 |    180 |    36 |   3
Quebradillas   |    11 |    132 |    26 |   2
Isabela        |    20 |    240 |    48 |   4
Culebra        |     1 |     12 |     2 |   0
Vieques        |     4 |     48 |    10 |   1
______________________________________________________________________________________
Total Coastal  | 1,002 | 12,024 | 2,405 | 200



Degree of Accuracy


Estimation of Sample means and Standard errors

Sample weights will be used in estimating sample means and standard errors of the means using the statistical software SAS, with formulas adjusted for the sample design issues of stratification and weighting following the guidelines in Kish (1995). To extrapolate from sample to population, for the visitor samples we would use our estimates of total person-trips (visits) involving coral reef use and the weighted sample means. For residents, the weighted sample means would be extrapolated to population estimates using the number of households that used Puerto Rico's coral reefs.
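
As an illustration of the estimator (the production estimates will be computed in SAS), the Python sketch below applies the standard stratified formulas for a weighted mean and its standard error; the strata, population shares, and values are hypothetical.

# Illustrative stratified estimator; strata, population shares (W_h), and values are hypothetical.

import math

strata = {
    "SJU_winter":     {"W": 0.46, "y": [120.0, 80.0, 95.0, 150.0]},
    "SJU_summer":     {"W": 0.45, "y": [70.0, 60.0, 110.0]},
    "other_airports": {"W": 0.09, "y": [90.0, 100.0]},
}

# Weighted (stratified) mean: sum over strata of W_h * ybar_h
mean = sum(s["W"] * sum(s["y"]) / len(s["y"]) for s in strata.values())

# Variance of the stratified mean: sum over strata of W_h^2 * s_h^2 / n_h
var = 0.0
for s in strata.values():
    n = len(s["y"])
    ybar = sum(s["y"]) / n
    s2 = sum((v - ybar) ** 2 for v in s["y"]) / (n - 1)
    var += s["W"] ** 2 * s2 / n

print(f"Weighted mean = {mean:.2f}, standard error = {math.sqrt(var):.2f}")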

The general sampling methodology and estimation approach for the airport survey and follow-up mailback surveys has been tested several times in the Florida Keys (1995-96 and 2007-08). Sample sizes were selected for application in Puerto Rico to ensure statistical accuracy at the 95% confidence level with a margin of error of plus or minus 5 percent at a minimum; many data elements are expected to be estimated with less potential error since sample sizes exceed those necessary to achieve this level. The same is true for the survey of residents.


For both visitors and residents, a new element not included in previous surveys is the non-market economic value of coral reef use and how that use value changes with changes in conditions of coral reef attributes. The goal is to be able to estimate the marginal value of changes in reef attributes, which will be used in a decision-support tool for assessing restoration management strategies for the Guanica Bay Watershed Restoration Management Plan being led by the U.S. Environmental Protection Agency. These values could also be used in other reef restoration or damage assessments for all of Puerto Rico.


The method chosen is commonly referred to as stated-preference conjoint analysis (Louviere, Hensher and Swait, 2009). For economic valuation of attributes, the method is also referred to as multi-attribute utility theory (Adamowicz, Louviere, and Swait, 1998). The method that will be used for the full survey is a fractional factorial design. This approach is needed because of the number of attributes for which marginal values will be estimated. With 12 coral reef attributes, three levels (low, medium and high condition) for 10 of the attributes and two levels for two of the attributes, the number of possible combinations of attributes forming options (bundles of attributes) is 3 to the 10th power times 2 squared, or 236,196. In most of the literature, price, or the dollar bid amount for each bundle of attributes, is also treated as another attribute when selecting a random sample of all possible combinations. We have chosen to use six levels for the dollar bid amounts, resulting in 1,417,176 possible combinations of all attributes. Since this is impossible to implement, we use a fractional factorial design (Louviere, Hensher, and Swait, 2009).


We will first use the procedures found in Johnson et al. (2007). Their SAS program code is used to generate an optimal design and test the efficiency of the design. The researcher must choose the number of bundles of attributes (options) that the survey will accommodate. This involves issues of survey fatigue and how many choices you can ask people to make. The literature doesn't provide any guidance here, but given our survey's number of questions, we have decided to limit the number of choices any one respondent has to make to four, with each choice including the Status Quo option (A) plus two other options (B and C). In each choice set, the Status Quo (A) is always included and costs the household $0, but results in all attributes being in their low condition. The other options are mixes of low, medium and high conditions. The Status Quo option is often referred to in the literature as the "opt-out" option and provides the basis on which other options are evaluated.


The other choice the researcher has to make is the number of different versions of the survey, with versions including different bundles of choices (options or alternatives). The number of versions is limited by sample sizes.


Initial runs of the programs indicated that achieving an optimal design that is orthogonal (attributes uncorrelated) and balanced (equal number of levels of each attribute across all choices) would require at least 36 choice sets. An orthogonal and balanced design ensures we can estimate the marginal effects, or marginal values, of each reef attribute for the main effects. We decided our design would use four (4) choice questions per respondent, blocked into 9 versions. Each choice contains the Status Quo option plus a B and a C option with different bundles of attributes at different levels. We ran the SAS program several times with different numbers of attributes and found that we could not get an efficient design that met the criteria of orthogonality and balance with more than 10 reef attributes (8 with 3 levels and 2 with 2 levels) plus price with six (6) levels. Our design with 10 reef attributes (8 with 3 levels and 2 with 2 levels) and price with 6 levels resulted in 157,464 possible combinations.
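
The combination counts cited above can be verified directly:

# Check of the combination counts cited above.
full_12_attributes = 3**10 * 2**2               # 10 attributes at 3 levels, 2 at 2 levels = 236,196
with_price = full_12_attributes * 6             # adding price at 6 levels = 1,417,176
reduced_10_attributes = 3**8 * 2**2 * 6         # 8 at 3 levels, 2 at 2 levels, price at 6 levels = 157,464
print(full_12_attributes, with_price, reduced_10_attributes)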


Optimization results indicated we could get an efficient design with these choices. However, our focus groups indicated that 12 reef attributes were important to their reef use activities and would influence their values, so we still include all 12 reef attributes in the design, but in the statistical models we will form a composite variable containing two of the attributes (Depth of the reefs and Crowding Conditions). This will avoid omitted variable bias, but will not allow us to estimate the marginal values of each of these two attributes.


Another concern with the randomization in a fractional factorial design is the match-ups in the choices (the B and C options). One has to review the match-ups of the B and C options to ensure they make sense, i.e., that an option with higher levels of attributes has a higher price than an option with lower levels of attributes. All of the choices in our design meet this criterion.


And finally, the choice sets have to be checked for dominant options/alternatives. These are options that all respondents would either choose or not choose. Such options provide no information in comparative choices (Louviere, 2000). Our design does not include any dominant options.


We checked the 36 choice sets randomly selected in the fractional factorial statistical design that achieved an orthogonal (uncorrelated attributes) and balanced design and found no dominated or infeasible choices. We also checked for price (cost) match-ups within choice sets where the B and C alternatives might have prices (costs) that were not consistent with what one would expect, i.e., an alternative with generally higher conditions across most attributes costing less than an alternative with relatively lower conditions across most attributes. We suspect that this clean result is because we have many attributes, leading to a large number of possible combinations and a relatively low probability that a dominated or infeasible combination would be selected. Most of the literature uses a relatively low number of attributes and levels of attributes. The literature we have reviewed that used four or fewer attributes with few levels for each attribute usually did have dominated or infeasible combinations and had to arbitrarily delete those combinations. We had no such problem in our application.
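
A dominance check of this kind can be automated; the Python sketch below shows the logic under the simplifying assumption that attribute conditions are coded numerically (higher = better) and that cost is coded separately. The levels shown are hypothetical, not choice sets from our design.

# Illustrative dominance check; attribute coding and levels are hypothetical.

def dominates(alt_a, alt_b):
    # A dominates B if A is at least as good on every attribute, strictly better on
    # at least one, and costs no more.
    levels_a, cost_a = alt_a
    levels_b, cost_b = alt_b
    at_least_as_good = all(x >= y for x, y in zip(levels_a, levels_b))
    strictly_better = any(x > y for x, y in zip(levels_a, levels_b))
    return at_least_as_good and strictly_better and cost_a <= cost_b

# One hypothetical choice set: levels for 10 reef attributes plus a cost level (1-6)
option_b = ([2, 1, 3, 2, 1, 2, 3, 1, 2, 1], 4)
option_c = ([1, 2, 2, 3, 2, 1, 2, 2, 1, 2], 3)

print("B dominates C:", dominates(option_b, option_c))
print("C dominates B:", dominates(option_c, option_b))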



The choice questions for the full survey are included in Appendix D. They are the same for residents and visitors. Prices are assigned based on the optimal design and currently include the level of the price (1 to 6). The pre-test will help design the dollar amounts corresponding to the six levels of price (dollar bid amounts).


Determination of the Minimum Sample Size. In Orme (1998), the following formula is found for determining the minimum sample size for a given design:


N = 500 * NLEV/(NALT*NREP)


where,


N = minimum sample size required

NLEV = the largest number of levels in any attribute (here 6 for number of prices)

NALT = number of alternatives (options) per choice set (not including the Status Quo), here 2.

NREP = number of choice sets per respondent (here 4).


So in our design, the minimum sample size required for statistical efficiency is 375. Our planned sample sizes for both the resident and visitor surveys are 1,000 each, so our sample sizes not only meet the minimum requirement but also provide an added margin of safety.


In addition to the above, as a general rule, six observations are needed for each attribute in a bundle of attributes to identify statistically significant effects (Bunch and Batsell, 1989; Louviere et al., 2000). Since we have 10 reef attributes plus price, we have 11 attributes, so we need 66 observations per version. Our design includes 9 versions, and for the visitor and resident surveys we plan for 1,000 completes in each sample, so we will have about 111 observations per version in each sample, which again exceeds the requirement for statistical efficiency.
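
Both rules of thumb are easy to verify:

# Orme (1998) minimum sample size and the observations-per-version rule of thumb.
NLEV, NALT, NREP = 6, 2, 4
print("Minimum sample size:", 500 * NLEV / (NALT * NREP))          # 375

completes, versions, attributes = 1000, 9, 11
print("Observations per version:", completes / versions)           # about 111
print("Required per version (6 per attribute):", 6 * attributes)   # 66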


Analysis of Choice Questions. Analysis of the choice questions for estimating the non-market economic use values, and how those values change with changes in reef attribute conditions and socioeconomic factors, will start with a standard multinomial logit model based on random utility theory, as described by Ben-Akiva and Lerman (1985). To summarize their exposition, let U = utility (well-being) of the household. Consider U to be a function of a vector z_in of attributes for alternative i, as perceived by household respondent n. The variation of preferences between individuals is partially explained by a vector S_n of socio-demographic characteristics for person n.


U_in = V(z_in, S_n) + ε(z_in, S_n) = V_in + ε_in


The "V" term is known as indirect utility and "ε" is an error term treated as a random variable (McFadden 1974), making utility itself a random variable. An individual is assumed to choose the option that maximizes their utility. The choice probability of any particular option (Status Quo Option A, Option B, or Option C) is the probability that the utility of that option is greatest across the choice set C_n:


P(i | C_n) = Pr[V_in + ε_in ≥ V_jn + ε_jn, for all j ∈ C_n, j ≠ i]


If error terms are assumed to be independently and identically distributed, and if this distribution can be assumed to be Gumbel, the above can be expressed in terms of the logistic distribution:

P_n(i) = exp(μV_in) / Σ_{j ∈ C_n} exp(μV_jn)

The summation occurs over all options j in the choice set C_n. The assumption of independent and identically distributed error terms implies independence of irrelevant alternatives, meaning the ratio of choice probabilities for any two alternatives is unchanged by the addition or removal of other unchosen alternatives (Blamey et al., 2000). The "μ" term is a scale parameter, a convenient value for which may be chosen without affecting valuation results if the marginal utility of income is assumed to be linear. The analyst must specify the deterministic portion of the utility equation "V," with sub-vectors z and S. The vector z comes from the choice experiment attributes, and the vector S comes from attitudinal, recreational, and socio-demographic questions in the survey. Econometric software will be used to estimate the regression coefficients for z and S, with a linear-in-parameters model specification. These coefficients are used in estimating the average household value of a change from one level of a particular attribute to another for welfare estimation. The welfare measure of a change is given by (Holmes & Adamowicz, 2003):


$Welfare = (1/β_c)[V_0 - V_1]


where β_c is the coefficient on cost, V_0 is the indirect utility of the initial scenario, and V_1 is the indirect utility of the change scenario.
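
To illustrate how the welfare formula will be applied, the Python sketch below computes household willingness to pay for a single attribute improvement using hypothetical placeholder coefficients, not estimates from this study.

# Illustrative welfare calculation; the coefficients are hypothetical placeholders.

beta = {"stony_corals": 0.40, "cost": -0.02}

V0 = 0.0                        # baseline: attribute at its low condition (coded 0)
V1 = beta["stony_corals"] * 1   # change: attribute improves from low (0) to medium (1)

welfare = (1 / beta["cost"]) * (V0 - V1)   # $Welfare = (1/beta_c)[V0 - V1]
print(f"Household willingness to pay for the change: ${welfare:.2f}")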


The standard multinomial logit model treats the multiple observations (choice experiment replications) from each household as independent. An alternative is to model these as correlated with a random parameters (mixed) logit model. Thus a random parameters logit model will also be tested using techniques described by Greene (2007).


Econometric Specification


A main effects utility function is hypothesized, and following common practice a linear-in-parameters model will be sought. A generic format of the indirect utility function to be modeled is:


V = βo + β1(Stony Corals change) + β2(Soft Corals and Sponges change) + β3(Consumptive fish change) + β4(tropical fish change) + β5(macroinvertebrates change) + β6(Opportunity to see large wildlife change) + β7(Opportunity to see or catch trophy fish change) + β8(Water Clarity/Visibility change) + β9(Water Cleanliness change) + β10(Composite variable of Depth of Reefs and Crowdedness change) + β11(Cost)


The composite variable of Depth of Reefs and Crowdedness is used because the optimal design that meets the criteria of orthogonality and balance for statistical efficiency, which allows us to estimate the marginal values of attributes, cannot accommodate more than 10 reef attributes plus price. So we form a composite variable for which we cannot identify the separate effects, but which controls for omitted variable bias.


NOAA does not maintain that low water quality does not affect fish and wildlife; it depends on the type of water quality problem and the uses of the coral reefs. For SCUBA divers, snorkelers, glass-bottom boat riders, and paddle boarders viewing things on the reefs, water clarity is important for seeing fish and wildlife. If water quality is low due to high nutrient concentrations, the water may not affect the health of the fish and wildlife, but it will lower water clarity, and thus the value to those who want to see fish and wildlife. Fishermen who are not sight-fishing won't care about water clarity, and if low water quality is due to high nutrients, their uses will be unaffected. So in our modeling we plan to interact activity participation with reef attributes.


Our focus group work convinced us that users do understand the relationships between water quality in its different dimensions and fish and wildlife as they relate to their reef activities. In the focus groups, participants were asked to say which attributes were important for which activities. Follow-up discussions then focused on whether and to what extent different attributes at different levels of condition were important to them. The findings were consistent with what is described above: it is activity dependent.



3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


Ridge to Reefs, a non-profit organization, has agreed to run a sweepstakes/lottery with chances to win a free vacation or other prizes for participating in the survey. Gifts are offered by local businesses as their contribution to the study.


For both the visitor and resident surveys, no one has ever estimated the number or proportion of these populations that use Puerto Rico’s coral reefs for recreation-tourist activities, so we don’t know the population of coral reef users. This study will be the first to estimate the number of users in the visitor and resident populations.


For the visitor survey, we first screen visitors to identify those who have used Puerto Rico's coral reefs. This will allow us to determine the proportion of all visitors to Puerto Rico who are coral reef users. The airport survey (short form) obtains information on activity participation by region, party size and composition, number of visits to Puerto Rico per year, length of visits, and demographic information. We expect a net response rate of 90% for this portion of the survey, thus minimizing the probability of non-response bias.


The follow-up surveys for more detailed information involve lower expected net response rates and thus the potential for non-response bias. The main follow-up is the Internet Panel. We will be able to test for differences between those who joined the Internet Panel and completed it and those who completed the airport survey. To further minimize non-response bias, we provide visitors who choose not to join the Internet Panel, the option of filling out mailback surveys. Again, we will be able to test for differences between the mailback survey respondents and respondents to the airport survey. Further, for expenditures and importance-satisfaction ratings we will be able to test for differences between the combined sub-samples of the Internet Panel and the mailbacks and the airport survey for potential non-response bias.


If significant differences exist, indicating potential non-response bias, then sample weighting will be conducted to correct for the potential biases.


For the resident survey, we expect a net response rate of 90% for the in-house, on-site survey and thus minimal potential for non-response bias. For the mailback components covering expenditures, importance-satisfaction ratings and special issue questions, we expect response rates of 40% for the expenditure and 50% for the satisfaction questionnaires, yielding expected net response rates of 36% and 45%, respectively. For these questions, there is potential for non-response bias. The in-house, on-site survey will contain extensive information on activity participation and use (number of days and number of dives) by activity, place of residence, and demographics to test for differences between those who completed the in-house, on-site survey and those who completed the mailback questionnaires.


If significant differences exist, indicating potential non-response bias, then sample weighting will be conducted to correct for the potential biases.


NOAA will also report item non-response for the household income variable in both the resident and visitor surveys and expenditure item non-response in the visitor Internet panel for the pre-test and final surveys.




4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


We first conducted focus groups with both residents and visitors with the objectives of determining the coral reef attributes people thought were important to support their coral reef recreational uses; the levels of attribute conditions that would change their non-market economic values (willingness to pay); and their maximum willingness to pay for moving all attributes from the low condition to the medium condition and from the medium condition to the high condition. In addition, we used illustrations along with scientific facts about reef conditions and tested whether focus group members thought the scientific bullets used in describing the different conditions of the attributes communicated the same information. This was done under OMB Control No. 0648-0660.


The next step is a pre-test (this application). We need a pre-test to help design the final dollar bid amounts for each bundle of attributes. The focus groups gave us a starting point in designing the bids that we can now test with larger sample sizes. We need to make sure that we do not have the statistical problem of "fat tails", i.e., everyone choosing the highest price for a given option (bundle of attributes) or everyone choosing the lowest price for a given option. We also want to ensure our bids are designed such that a higher price for a given option is not preferred over a lower price for the same option (it does not make economic sense to pay a higher price if you can get the same good or service at a lower price). The range of bids used is critical for estimating the non-market economic use value and how that value changes with changes in reef attribute conditions (the marginal values of attributes).


The pre-test will also give us the opportunity to test some of our assumptions used in calculating expected net response rates since this is the first time anyone has done a study of coral reef use for all of Puerto Rico by residents or visitors.


5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Dr. Vernon R. (Bob) Leeworthy, Project Leader (Survey Questionnaire and Sample Design, Economic Valuation Methods, Analyses and Reports)

Chief Economist

NOAA/NOS/ Office of National Marine Sanctuaries

1305 East West Highway, SSMC4, 11th floor

Silver Spring, MD 20910

Telephone: (301) 713-7261

Fax: (301) 713-0404

E-mail: Bob.Leeworthy@noaa.gov

Cell: (240) 751-5148






Miguel H. Del Pozo, PhD (Co-Project Leader, UPR-Mayagüez; focus groups, survey implementation, analyses and reports)

Social Anthropologist

Assistant Professor

Department of Social Sciences

UPR- Mayagüez

miguel.delpozo@upr.edu

787-941-3559


Ruperto Chaparro (Project Co-leader, survey implementation)

PR Sea Grant

Extension Leader

University of Puerto Rico

PO Box 5000

Mayaguez, PR 00681

787-832-8045

Ruperto.chaparro@upr.edu


Matt Weber, PhD (Focus Groups/Qualitative Methods, peer review)
Economist
Western Ecology Division
US Environmental Protection Agency

200 S.W. 35th Street
Corvallis, OR 97333-4902

(541) 754-4315 
weber.matthew@epa.gov


Marisa Mazzotta, Ph.D. (Valuation Methods, peer review)
Environmental & Resource Economist 

Atlantic Ecology Division
US Environmental Protection Agency

27 Tarzwell Drive
Narragansett, RI 02882 

401-782-3026 

Mazzotta.Marisa@epamail.gov


Deborah L. Santavy Ph.D. (Maps, videos, photos, reef attribute conditions)
Ecologist, Gulf Ecology Division
US Environmental Protection Agency
1 Sabine Island Dr.
Gulf Breeze, FL 32561
850-934-9358

Fax: 850-934-2402
santavy.debbie@epa.gov





Dr. Alejandro Torres

NOAA/CRCP, contractor

Socioeconomic work in northeast Puerto Rico (Fajardo, Luquillo, Ceiba) and Culebra

787-222-4545

atorresabreu@gmail.com


Estudios Técnicos, Inc. (Previous coral valuation work in Northeast Puerto Rico)

Wanda I. Crespo Acevedo, PPL

Director

Environmental, Urban and Regional Planning Division

Estudios Técnicos, Inc.

Domenech 113

Hato Rey, PR

wcrespo@estudios-tecnicos.com

tel. 787-751-1675

fax. 787-767-2117

www.estudiostecnicos.com

Rafael Silvestrini (Visitor surveys, questionnaires, airport enplanement data)

Puerto Rico Tourism Company

San Juan, Puerto Rico 00920-3960

(787) 721-2400 ext. 2065

Rafael.Silvestrini@tourism.pr.gov


Juan Jimenez (Regions for estimating use)
Planner
Land Use Program
Puerto Rico Planning Board
P.O. Box 41119
De Diego Ave., Stop 22
San Juan, PR 00940-1119
Phone: (787) 723-6200, ext. 16675
E-mail: jimenez_jr@jp.pr.gov



