Greetings,
Thank you for the opportunity to review and comment on your proposed ICR and survey. Please see below for my comments and suggestions. Please send along any questions that come up.
Best Regards,
Christian Crowley
__________________________________________
U.S. Department of the Interior
Office of the Secretary,
Office of Policy Analysis
1849 C Street, NW, Mail Stop 3530
Washington, DC 20240
Tel. 202.208.3799
Christian.Crowley@ios.doi.gov
__________________________________________
Hello Dr. Crowley,
Thank you for your thoughtful comments. We greatly appreciate the time and effort you dedicated to the Landsat survey. Your comments improved the survey, and you can see those changes incorporated throughout.
In addition to addressing your comments (see below), we changed our survey software platform from SurveyMonkey to Qualtrics. This change will help with some of our complex questions and advanced skip logic. You may notice some formatting changes in the current draft survey to adjust for this substitution.
In addition, the draft you reviewed was submitted in July 2017. Since then, alongside the edits you provided, we have continued to improve the survey content. These improvements also address some of your comments, such as removing or reordering specific questions and adding instructions and questions to help clarify the survey content for respondents.
-Crista
Crista L. Straub, Ph.D.
USGS - Social & Economic Analysis Branch
Fort Collins Science Center
2150 Centre Avenue, Building C
Fort Collins, CO 80526
(o): 970.226.9143
(f): 970.226.9230
(c): 207.951.3383
Email: cstraub@usgs.gov
URL: http://www.fort.usgs.gov/
PPA Comments on ICR 1028-NEW
Supporting Statement A
Page 2, Para. 1: Has there been a previous longitudinal study of Landsat users? To perform a longitudinal study, will you need to link responses from the new survey to respondents from the previous survey? Is there any confidentiality issue with tracking the respondents in this way? On Page 7, Question 10 we state that respondent email addresses are used only to track completions and are not associated with individual responses. If that is the case, how do you plan to perform the longitudinal study?
Thanks for this comment. We changed the text “longitudinal study” to “trend longitudinal study”. A trend longitudinal study examines changes within a population over time, but with different participants at each point. This differs from a panel longitudinal study, which examines the same set of people at several points in time. Therefore, we will not need to link responses from the new survey to respondents from the previous study. There are no confidentiality issues in tracking the respondents because email addresses are not associated with individual responses.
Page 2, Para. 1: Which groups (among both US and international users) do you plan to study in the longitudinal analysis? How does your recruitment plan account for the required sample sizes of the various groups of interest? What assumptions are you using for the numbers of users in each group that you'll be able to match across the previous survey and the new survey?
We are not conducting a panel longitudinal study, but a trend longitudinal study. Therefore, we will not directly compare participants, and we are not completing any statistical analysis between the current and previous studies. As a result, we do not need a recruitment plan that accounts for matched sample sizes, nor assumptions about matching users across the two surveys.
Page 2, Para. 2: The list of "the following laws" includes only the 1992 LRS Policy Act. If there are no additional laws to include, the bullet point could be removed, and the final sentence reworded along the lines of "...information required by the LRS Policy Act..."
We removed the bullet point and modified the sentence to the following: “Specifically, this surveying effort will provide information required by the Land Remote Sensing Policy Act of 1992 (15 USC 5601).”
Page 6, Q.8: Please describe any focus group activity that was used to develop the survey. Please describe your plan for testing the survey, describing what type of users you will recruit for pretesting, and how you will recruit them.
The following content was already provided: A pretest of the survey will be conducted with Federal Landsat users in order to ensure there are no technical issues with the online administration of the survey, the intentions of all questions and responses are clear, and all language is easily understood.
We expanded the above content with the following text: We did not complete a focus group. We used several resources to develop the survey. The following list comprises previous Landsat studies.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Researchers at the Land Remote Sensing Program (LRS) also implemented interviews and a survey. Their results helped guide the design of the current survey.
A pretest will be completed with federal Landsat users. Approximately 800 federal Landsat users will be recruited with an expected response rate of 30% (240 participants). We will recruit the federal Landsat users from the population of registered EROS users.
Page 8, Para. 1: Why do you want the size of the international users sample to match the size of the US users sample? If recruiting international users is costly, it would be better to determine the sample size required for a robust statistical analysis, and apply your estimate of response rate and undeliverable (e-mail) rate to determine the appropriate level of effort for recruiting international respondents.
Recruiting international users is not costly; we are implementing an online survey with a low cost per participant. The national sample is a census sampling design. The response rate is expected to be approximately 30%, requiring a larger sample size (based on previous Landsat surveys). We are also using a large sample size for the equal-probability-of-selection method. We have participants in a variety of sectors, and we want all sectors to be represented in the survey, including sectors with fewer participants.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Page 8, Para. 1: What analysis do you plan to perform at the level of the international users’ country or region, and how does this affect your target size and makeup of the international users sample? If you are not planning to examine country- or region-level differences in the international group, please discuss why not (e.g., such analysis is not required under the LRS Policy Act; differences are assumed to be irrelevant).
We will complete chi-square analyses by sector and by a few other important factors such as observables/environmental parameters, applications, etc. The sample size we are using is acceptable for chi-square analysis. We are using a large sample size for the equal-probability-of-selection method to account for the factors that are important to the Land Remote Sensing Program (LRS), such as sector. We are not planning to examine country- or region-level differences because they are assumed to be irrelevant. Previous studies support the expected irrelevance at the country/region level.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Page 8, Para. 3: Is there any source (e.g., reports from previous projects) you can cite for the "undeliverable rate" being the same for international users and US users?
Good point. We cited the following source to indicate the similar “undeliverable rates” between national and international users:
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Page 8, Para. 3: Similar to the previous comment about the international users sample, why are you planning to recruit a US users sample that is "far greater than needed to provide sufficient statistical power"? If recruiting US users is costly, it would be better to determine the sample size required for a robust statistical analysis, and apply your estimate of response rate and "undeliverable e-mail rate" to determine the appropriate level of effort for recruiting US respondents.
Recruiting national users is not costly; we are implementing an online survey with a low cost per participant. The national sample is a census sampling design. The response rate is expected to be approximately 30%, requiring a large sample size. We are also using a large sample size for the equal-probability-of-selection method. We have participants in a variety of sectors, and we want all sectors to be represented in the survey. The sample size by sector will decrease even more when we add additional factors such as environmental parameters/observables by sector, etc.
Page 9, Table 3: Please discuss how you estimated the completion times for the full survey and the non-response survey. Are these the average completion times across the various potential branches of the survey (e.g., less than one minute for users who answer Q.1 with “No”; up to X minutes for users who fully answer all 45 questions)? If so, how did you estimate the number of users following each branch of the survey?
The completion times across the various potential branches of the survey were averaged. We used the previous Landsat studies to estimate the number of users following each branch of the survey. The survey was also pilot tested with approximately 20 participants. The pilot test included timing different branching options, which were incorporated into the completion time estimate.
Page 10, Q. 14: What is meant by “field data collection”? Is this the same as administering the e-mail survey and collecting responses?
Thanks for catching this confusing phrase. We changed “field data collection” to “survey implementation and data collection”.
Page 11, Q. 16: What analysis and reporting is required under the LRS Policy Act? Which groups and user-locations (among both US and international users) do you plan to study and report on?
We will use chi-square analysis to study respondents by factors that are important to the Land Remote Sensing Program (LRS). Some of these factors include sector, environmental parameters/observables, user applications, etc.
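For illustration, a minimal sketch in Python of the kind of chi-square test described above. The sector labels and counts below are hypothetical placeholders, not survey data:

```python
# Minimal sketch of a chi-square test of independence between sector and
# an application category, using scipy. All counts are hypothetical
# placeholders, not actual Landsat survey data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sectors (e.g., federal, state/local, private, academic)
# Columns: whether respondents reported a given application (yes, no)
observed = np.array([
    [120, 30],   # federal (hypothetical)
    [80, 40],    # state/local (hypothetical)
    [60, 60],    # private (hypothetical)
    [150, 50],   # academic (hypothetical)
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```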
Supporting Statement B
Page 3, Para. 1: Will you collect and analyze any information on users who do not respond to the initial survey or the non-response survey?
We will implement a non-response survey and complete item and unit non-response analyses. However, we will not implement any further data collection beyond the non-response survey.
Page 3, Para. 1: Will you collect and analyze any information on users who asked to be removed from the list? Will you ask these users for any follow-up information, e.g., by a question on a removal-verification webpage such as “Please share with us why you wish to be removed from this list.” with possible responses such as “I no longer use Landsat data; This survey was sent to me by mistake; I don’t want to spend time on a survey.”
We will collect and analyze information from users who ask to be removed from the list. We will use the following question/response options:
“Please share why you wish to be removed from this survey.”
I no longer use Landsat
Bad timing, otherwise engaged
Not interested
Do not know subject, too difficult
Waste of time
Never do surveys
Other (please specify) ______________________________
Page 3, Para. 1, penultimate sentence: Does the term “either or both samples” refer to the US and international users samples? Please clarify this.
We clarified the term to include both national and international user samples.
Full Survey
Will the survey include some kind of progress indicator? SurveyMonkey (2016) found that response rates can be increased by including at the bottom of each page a progress-completion bar (a visual indicator only, with no indication of percent or pages completed/remaining). Yan et al. (2010) found that progress bars do not affect completion rates for “long” surveys (8 pages), and can increase completion rates for “short” surveys (4 pages). Sarraf and Tukibayeva (2014) found that reformatting a long survey as a shorter survey with fewer pages increased response rate, in spite of requiring the user to scroll more on each page.
No, we will not use a progress indicator. We did consider one, and you have provided good support for it. However, we think recent research supports omitting it, given the specific characteristics of the Landsat survey.
From Dillman et al. (2014):
Numerous studies examining the effectiveness of progress indicators show that they rarely have the desired effect of decreasing break-offs (Couper et al., 2001; Crawford et al., 2001; Heerwegh & Loosveldt, 2006). They tend to be effective only in very short surveys; in long surveys, such as the current draft survey, they may be more discouraging than encouraging. In addition, most progress indicators reflect the number of questions answered out of the total number possible, making them quite inaccurate for surveys in which respondents are skipped past questions or where some items require far more responses than others. The current draft survey has logic patterns and items with far more responses than others. Dillman et al. (2014) recommend eliminating progress bars except for the shortest of surveys.
We did add encouraging statements throughout the survey, such as “You only have two more sections remaining.”, to help with survey completion.
Couper, M.P., Traugott, M.W., & Lamias, M.J. (2001). Web survey design and administration. Public Opinion Quarterly, 65(2), 230-253.
Crawford, S.D., Couper, M.P., & Lamias, M.J. (2001). Web surveys: perceptions of burden. Social Science Computer Review, 19, 146-162.
Dillman, D.A., Smyth, J.D., & Christian, L.M. Internet, Phone, Mail, & Mixed-Mode Surveys: The Tailored Design Method. Hoboken, New Jersey: John Wiley & Sons, Inc., 2014. Print.
Heerwegh, D. (2003). Explaining response latencies and changing answers using client-side paradata from a web survey. Social Science Computer Review, 21, 360-373.
Will any questions have a response required? Will any questions (apart from Q. 45) be optional? How will you treat surveys submitted with optional questions left blank? Will the respondent have the option (e.g., at the bottom of each page) to submit an incomplete survey? How will you treat surveys submitted before the user reached the final page?
Yes, some questions require an answer. They include the following: Question #1, any application question, any observables/environmental parameter question, the question asking respondents to define their spectral band use, and any additional question that includes logic patterns.
All remaining questions are optional. However, we are sampling a very specific population, and participants in this population are likely to complete the survey once they start it. Previous Landsat surveys indicate low item and unit non-response.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
If surveys are submitted with optional questions left blank, we will complete item non-response analysis. We will conduct nonresponse bias analyses when item response rates or other factors suggest the potential for bias. The threshold we will likely use to trigger a nonresponse bias analysis is an item response rate of less than 70 percent.
The respondent will not have an option to submit an incomplete survey at the end of each page. The respondent can submit when they choose, but they must skip to the last question to submit.
If surveys are submitted before the user reaches the final page, we will complete item and unit non-response analysis. We will conduct nonresponse bias analyses when unit or item response rates or other factors suggest the potential for bias. The thresholds we will likely use to trigger a nonresponse bias analysis are a unit response rate of less than 80 percent or an item response rate of less than 70 percent.
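As a concrete illustration of how these thresholds might be checked, a minimal Python sketch; all counts and item labels are hypothetical placeholders:

```python
# Minimal sketch of checking unit and item response rates against the
# thresholds stated above (unit < 80% or item < 70% triggers a
# nonresponse bias analysis). All counts are hypothetical placeholders.
invited = 9454          # surveys sent (hypothetical)
completed = 7100        # surveys returned (hypothetical)

unit_rate = completed / invited
if unit_rate < 0.80:
    print(f"Unit response rate {unit_rate:.0%} -> conduct bias analysis")

# Item-level answer counts among returned surveys (hypothetical)
item_answers = {"Q2": 6900, "Q7": 6500, "Q10": 4800}
for item, n in item_answers.items():
    item_rate = n / completed
    if item_rate < 0.70:
        print(f"{item}: item response rate {item_rate:.0%} -> conduct bias analysis")
```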
I see that the respondent will have the option to save the survey to complete later by simply closing their web browser. I recommend also explicitly including this option e.g., via a button at the bottom of each page.
Thanks for catching this missing feature. We added a “SAVE” button at the bottom of each page.
How will you treat surveys that are only partially completed when the survey is closed?
We will send one remaining email reminder. If the survey is still not completed, we will complete item and unit non-response analysis. We will conduct nonresponse bias analyses when unit or item response rates or other factors suggest the potential for bias. The thresholds we will likely use to trigger a nonresponse bias analysis are a unit response rate of less than 80 percent or an item response rate of less than 70 percent.
How do you plan to analyze the responses to the 5 open-ended questions over the estimated 9,454 responses?
Analysis of the open-ended questions is potentially complicated. Therefore, we will use Qualtrics to collect survey responses. We can then import the completed responses directly into an NVivo project. The imported data becomes a dataset source that we can sort, filter, or auto code.
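As a rough illustration of the kind of sorting/filtering/auto-coding step described above (outside NVivo), a minimal Python sketch; the file name, column name, and theme keywords are hypothetical placeholders:

```python
# Minimal sketch of keyword-based auto-coding of open-ended responses
# exported from the survey platform. The file name, column name, and
# theme keywords are hypothetical placeholders.
import csv
from collections import Counter

themes = {
    "cost": ["price", "fee", "budget", "cost"],
    "resolution": ["resolution", "spatial", "pixel"],
    "frequency": ["revisit", "frequency", "temporal"],
}

counts = Counter()
with open("open_ended_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = (row.get("response") or "").lower()
        for theme, keywords in themes.items():
            if any(k in text for k in keywords):
                counts[theme] += 1

print(counts.most_common())
```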
Do you plan to assess a potential respondent’s understanding of written English? What information do you plan to gather from potential respondents who do not understand written English?
We will not assess the potential respondent’s understanding of written English. However, we are sampling a very specific population. We have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect them to have some level of comprehension of written English related to the data sites. A previous Landsat survey supports this expectation.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Users who ask to be removed from the survey participant list might provide information about respondents’ understanding of written English: they have the opportunity to cite difficulty with written English in the “Other” option.
I assume that some of the material in the draft survey (e.g., italicized branching instructions, answer point-values) is intended for the survey programmer, and will not appear to the respondent. It is worth doing a special check of the on-line version for extraneous instructions to be removed.
We will complete numerous checks of the online version for extraneous instructions to be removed.
Instructions section: how was the estimate of the time required to complete the survey developed? How does this differ from the estimated completion time reported in Supporting Statement A, Table 3? If this estimate is an average taken across the various possible survey branches, that is appropriate for the burden hours estimate. For the instructions however, it would be more informative to report a range based on the longest branch of the survey, along the lines of “We estimate that this survey may require up to X minutes to complete.”
Thanks. Based on your recommendation, we changed the instructions language to the following: “We estimate that this survey may require up to 20 minutes to complete.”
This estimated time does not differ from the estimated completion time reported in Supporting Statement A, Table 3 (indicated as 20 minutes for the full survey and 5 minutes for the non-response survey). As you mentioned, the completion times across the various potential branches of the survey were averaged, which resulted in the burden hours estimate. We used previous Landsat studies to estimate the number of users following each branch of the survey. The survey was also pilot tested with approximately 20 participants. The pilot test included timing different branching options, which were incorporated into the completion time estimate.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
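For concreteness, the averaging across branches amounts to a weighted mean of branch completion times; a minimal sketch with hypothetical branch shares and timings (not the actual pilot-test values):

```python
# Minimal sketch of the branch-weighted completion-time estimate.
# Branch shares and timings are hypothetical placeholders, not the
# actual pilot-test values.
branches = [
    # (share of respondents expected on this branch, minutes in pilot)
    (0.05, 1.0),   # answered "No" to Q1 and exited
    (0.60, 20.0),  # full path through all sections
    (0.35, 14.0),  # shorter paths via skip logic
]

estimate = sum(share * minutes for share, minutes in branches)
print(f"Estimated average completion time: {estimate:.1f} minutes")
```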
A 20-minute survey seems fairly burdensome. A completion time of less than 5 minutes will generate the best response rate, while surveys requiring longer than 11 minutes will generate lower response rates.
The length of the current survey is essential to receive relevant information from our survey respondents, whose responses inform the Land Remote Sensing Program (LRS). We are implementing the survey with a large sample size, and we are sampling a very specific population. We have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect them to be invested in the current survey. Previous Landsat surveys indicate that the respondents are likely to start and complete the survey.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
What is the purpose in asking for willingness-to-pay (WTP) questions? In particular, is the program interested in demonstrating public values for this information? Or is the program interested in generating revenue from users? Program goals can affect valuation scenarios presented in the survey, and response rates are improved when respondents understand how their answers will be used.
Thanks for these comments. Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
As you know, trade-offs must be made in the design of Landsat 10 (i.e., not all desirable features, and the highest resolution of each feature, can be included on Landsat 10). Therefore, the choice-experiment WTP question provides USGS/NASA satellite researchers with information on the relative incremental/marginal dollar values of different features that might be included on Landsat 10. By asking WTP, and indicating that users would need to pay for these features out of their current budgets, the question forces the potential user to recognize that there is a cost to improving the features of Landsat 10. These relative values can then be compared to the relative cost of adding features and improving the quality of the features that are included. The contingent valuation method (CVM) WTP question assesses the overall public value of continuing the Landsat imagery. This CVM WTP question is a replication of a previously asked question. The importance of asking it again is that applications of Landsat satellite imagery may have changed due to changes in end-user technologies and the capability of those technologies to utilize satellite imagery. Thus, the value of the Landsat imagery may have changed in the last 5 years. When making large investment decisions, it is desirable for agencies and Congress to have relatively current benefit estimates.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
I recommend offering a copy of the results to respondents who complete the survey.
Good idea. We added a statement at the end of the survey: “Please contact Crista Straub (cstraub@usgs.gov) with any questions, or to receive a link to the final report (when published).”
I recommend an initial branching question designed to separate respondents between those who use the data primarily for work, and those who use the data primarily for personal applications. The questions in each branch would then need to be tailored for the relevant user type. These two broad classes of users have very different budget constraints, and likely differ in other important ways. In particular, questions about willingness-to-pay (WTP) would be answered differently by personal users (based on individual preferences and budget constraints) and professional users (considering client preferences and organizational budgets). Questions 28 through 41 should all be reconsidered in light of these two broad groups of potential users.
We did not add an initial branching question to separate respondents who use the data primarily for work from those who use it primarily for personal applications. The survey already includes significant logic patterns that are required to receive the relevant data that will inform the Land Remote Sensing Program (LRS). We want to reduce logic patterns when possible, and we think including clear instructions about “work” versus “personal” use allows us to minimize the branching options. In addition, we do not expect to have respondents who use the data for personal applications; the number of such respondents would be small. The Section 1 introduction states “We would like to know about how you use Landsat in your work,” and the subsequent questions include the phrase “in your work.” However, this is an important distinction, and we want our respondents to understand it. We therefore expanded the Section 1 instructions to include “The questions in this survey are only about Landsat use related to your work, not personal Landsat use,” and added an introductory sentence to explain the approach to respondents.
Likewise, the respondent group is not likely to use Landsat for personal use, and it is not likely that they registered with the USGS Earth Resources Observation and Science (EROS) Center for personal use of Landsat. The potential respondent universe, or population, consists of all users of Landsat imagery who have downloaded imagery from the EROS Center in the last 12 months. All users are required to enter an email address when they initially register with EROS, so contact information is available for all users.
How do you plan to address the potential for strategic behavior by respondents who want to see Landsat data remain cost-free? It is worth considering this for the scenario(s) you develop for the WTP questions. For example, if the program has a goal of merely demonstrating value for the program, then strategic responses may be minimized by carefully explaining this in the instructions for the relevant questions. On the other hand, if the program has a goal of generating revenue, then (solely for the purposes of the hypothetical scenario) you might describe a system of monthly or per-image fees designed to cover costs of the program only. You might also describe a system that applies fees only to new users, while existing users may continue to access data free of charge.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
As you note, there is the potential for strategic behavior in responses to WTP questions. The goal of the CVM WTP question is certainly to have users indicate their value for the Landsat program and the particular features of the Landsat 10 satellite. Roberts et al. (1985) are correct that if you do not want users to strategically understate their value because they think that USGS might charge, then you might err on the side of telling users that you just want their value and that there is no plan to charge. However, research in the last 10 years suggests that telling the respondent there is no plan to charge could result in a different type of strategic behavior: potentially overstating what they would really pay, which may lead to hypothetical bias (Vossler & Watson 2013, Johnston et al. 2017). In the face of the potential for these two types of strategic behavior, as well as the recent literature, we have leaned toward being more conservative. We have worded the WTP questions in the prior CVM WTP question (and hence this proposed replication of it) and in the choice experiment to imply that users might have to purchase private substitutes for Landsat images, in order to infer users’ values for Landsat imagery.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Vossler, Christian A., and Sharon B. Watson. 2013. Understanding the consequences of consequentiality: Testing the validity of stated preferences in the field. Journal of Economic Behavior and Organization 86:137–47
Johnston et al. 2017. Contemporary Guidance for Stated Preference Studies. Journal of the Association of Environmental and Resource Economists 4(2).
Q. 1 (and throughout, as needed): please define “past year”, which could be interpreted as referring to the current or previous calendar year, fiscal year, 365-day period, etc.
This is Question #1 in the current draft survey.
Throughout the survey (as needed), we changed “past year” to “during the past 12 months”.
Q. 2: this question could be answered in several possible ways, e.g., % of work hours accessing and analyzing images; % of hours using images in any way; % of projects for which images were used. Please consider what type of information would be most helpful for your goals, and clarify the question accordingly.
This is Question #2 in the current draft survey.
We clarified the question by changing the language to “In the past 12 months, what percentage of your work hours used Landsat imagery in any way?”
It is important to note that the actual percentage is not critical. We are trying to measure dependency rather than an exact calculation of hours spent; the respondent’s perception is the important factor.
Q. 4: same comment as for Q. 2
This is Question #6 in the current draft survey.
We clarified the question by changing the language to “In the past 12 months, what percentage of your work hours was operational in any way?”
Q. 5 (and throughout, as needed): please define “unique Landsat scene”
This is Question #7 in the current draft survey.
We feel the instructions for this question (and throughout) are sufficiently detailed. They include the following: “Please enter a whole number in the box below or check “Don’t know”. If you used the same scene more than once, only count that scene one time. If you are unsure how many scenes you used, please provide your best estimate.”
The phrase “unique Landsat scene” is common among Landsat users, and our respondents will be familiar with its meaning. We have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect them to understand the phrase “unique Landsat scene” given the instructions provided.
Q. 5: For eliciting information about past behavior, I recommend replacing numeric open-ended questions with closed-ended questions covering the range of interest, at the required level of precision. For example:
0 scenes
1-5 scenes
6-10 scenes
11-50 scenes
51-100 scenes
101-500 scenes
etc.
The range of the choices should be informed using Landsat user statistics, and the range within the choices must be adapted to convey the required range of precision. The instructions can then be reduced to “If you used the same scene more than once, only count that scene once.”
Closed-ended questions take less time to answer, and are less costly to analyze. Respondents may find it difficult to answer open-ended recall questions that appear to require a high level of precision. Closed-ended questions remove the respondent’s concern for recall precision. The response rate is higher with surveys that use closed-ended questions than with those that use open-ended questions.
This is Question #7 in the current draft survey.
We did not replace the numeric open-ended questions with closed-ended questions covering the range of interest at the required level of precision. First, we have no good estimate of meaningful ranges; for example, some users might enter “1” and some might enter “15,000”. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting “5” and “7” is an important difference. Third, this question was used in the previous Landsat survey, and it is important to keep it consistent between surveys. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect that they will not find this recall question challenging and will be able to provide a good estimate of their use.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Q. 6: strike “to other users” (if necessary, this could be replaced with “to others”), or define “users”. Please specify whether images included in distributed work products are considered “distributed”, and what activities count as distribution, e.g., submitting a report to superiors or others in the organization; submitting to clients; etc.
This is Question #8 in the current draft survey.
Thanks. Based on your recommendation, we changed the question to the following language: “Beyond using Landsat in your own work, did you distribute Landsat imagery or products to others in the past 12 months?”
We did not define “distributed”. We do not want to provide a narrow definition for the respondents. The precise technical definition is not important for analysis, and we want the respondent to answer the question based on their own perception of distribution.
Q. 6: Do you plan to analyze commercial use of imagery, such as packaging and reselling images or image-related products?
This is Question #8 in the current draft survey.
No, commercial use of imagery is not something that we can analyze from the current survey.
Q. 8: same comment as for Q. 5
This is Question #10 in the current draft survey.
We did not replace the numeric open-ended question with a closed-ended question covering the number of users to whom the Landsat imagery was distributed. First, we have no good estimate of meaningful ranges. This is a new question, and we do not want to limit the ranges; we have no focus group information on this question that would help define them, so we do not feel confident in estimating the options. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting distribution to “5” versus “7” users is an important distinction that the LRS would like to have. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect that they will not find this recall question challenging and will be able to provide a good estimate of their distribution to other users.
We changed “in the past year” to “in the past 12 months”.
Q. 8: replace “users” with “people”, or define “users”
This is Question #10 in the current draft survey.
Based on your recommendation, we modified the question to the following language: “In the past 12 months, approximately how many people did you distribute Landsat imagery or products to?”
Q. 9: please add options for “General public”; and “Don’t know”
This is Question #11 in the current draft survey.
We changed the language from “In which sectors did these users work?” to “In which sectors did these people work?”
We did not add an option for “General public”. Respondents who would select “General public” are expected to be a small group, and that response can be provided in the “Other” option. Also, the general public is not “a sector where users work” as indicated in the question; since it is not a working sector, we think offering it as a choice might confuse respondents.
We did not add “Don’t know”, which can be registered in the “Other” option. From Dillman et al. (2014): some research reports that providing a “don’t know” option gives those who cannot put themselves into one of the offered categories a way to register an honest response (Converse & Presser, 1986). Without a nonsubstantive option, these respondents would have to select an untrue answer or skip the question, neither of which is a desirable outcome. Others argue that providing these response options makes it easier for respondents to satisfice; that is, respondents will select the nonsubstantive option rather than doing the mental work necessary to report their true response (Krosnick, 2002). It is recommended that when we expect respondents to know about, or have an opinion on, the topic, it may be better to withhold these options. We feel that our respondents are knowledgeable about the answer to this question, and we provide an “Other (please specify)” option where respondents could indicate “Don’t know”.
Converse, J.M., & Presser, S. Survey questions: handcrafting the standardized questionnaire. Beverly Hills, CA: Sage, 1986. Print.
Dillman, D.A., Smyth, J.D., & Christian, L.M. Internet, Phone, Mail, & Mixed-Mode Surveys The Tailored Design Method. Hoboken, New Jersey: John Wiley & Sons, Inc., 2014. Print.
Krosnick, J.A. (2002). The causes of no-opinion responses to attitude measures in surveys: they are rarely what they appear to be. In R.M. Groves, D.A. Dillman, J.L. Eltinge, & R.J.A. Little (Eds.), Survey nonresponse (pp. 87-100). New York, NY: Wiley.
Q. 10: same comment as for Q. 5
This is Question #12 in the current draft survey.
We changed the language from “In the past year, approximately how many Landsat scenes (processed into a product or not) did you distribute to these other users?” to “In the past 12 months, approximately how many Landsat scenes (processed into a product or not) did you distribute to these other users?”
We did not replace the numeric open-ended question with a closed-ended question covering the number of Landsat scenes distributed. First, we have no good estimate of meaningful ranges. This is a new question, and we do not want to limit the ranges; we have no focus group information on this question that would help define them, so we do not feel confident in estimating the options. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting distribution of “10” versus “12” scenes is an important distinction that the LRS would like to have. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect that they will not find this recall question challenging and will be able to provide a good estimate of their Landsat scene distribution to other users.
Q. 11: it may be worth separating “Energy” into a separate option, and combining “Metals/minerals” into an option like “Mining”. I recommend rewording “Energy” along the lines of “Fossil fuels exploration and production”, adding an option for “Energy transmission and distribution”, and rewording “Utilities” along the lines of “Utilities (non-energy)”. It may also be worth adding options along the lines of
Oceans
Invasive species (type and extent)
Development of Landsat capabilities
This is Question #13 in the current draft survey.
We did not make changes to the options provided. We had several resources that were used to develop the list of options. It is essential to keep the list consistent due to the previous Landsat studies and our collaborators’ studies. The following list comprises previous Landsat studies.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Researchers at the Land Remote Sensing Program (LRS) also implemented interviews and a survey. Their results helped guide the design of the current survey.
Q. 12: Please define “Land skin”. It may be worth adding options along the lines of
Ocean temperature (at surface, and at other depths if available)
Sea level (if available)
Coastal inundation and coastal/inland flooding (or clarifying that these are covered by “Surface water extent”)
This is Question #14 in the current draft survey.
Based on your recommendation, we changed “users” to “people”.
The USGS Land Remote Sensing (LRS) Program Requirements, Capabilities & Analysis for Earth Observations (RCA-EO) provided the list of options for this question. RCA-EO is a cohesive set of analytical functions and information under development within the USGS LRS Program. The list of options was provided to us, and it is essential to keep that list consistent with RCA-EO guidelines.
Q. 13: same comment as for Q. 11
This is Question #15 in the current draft survey.
We did not make changes to the options provided. We had several resources that were used to develop the list of options. It is essential to keep the list consistent due to the previous Landsat studies and our collaborators’ studies. The following list comprises previous Landsat studies.
U.S. Department of the Interior. U.S. Geological Survey. The Users, Uses, and Value of Landsat and Other Moderate-Resolution Satellite Imagery in the United States – Executive Report, by Miller, H.M.; Sexton, N.R.; Koontz, L.; Loomis, J.; Koontz, S.R.; Hermans, C. Open-File Report 2011-1031, U.S. Geological Survey. Fort Collins, Colorado, 2011.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
Researchers at the Land Remote Sensing Program (LRS) also implemented interviews and a survey. Their results helped guide the design of the current survey.
Q. 14: same comment as for Q. 12
This is Question #16 in the current draft survey.
The USGS Land Remote Sensing (LRS) Program Requirements, Capabilities & Analysis for Earth Observations (RCA-EO) provided the list of options for this question. RCA-EO is a cohesive set of analytical functions and information under development within the USGS LRS Program. The list of options was provided to us, and it is essential to keep that list consistent with RCA-EO guidelines.
Q. 17: same comment as for Q. 5
This is Question #30 in the current draft survey.
We did not replace the numeric open-ended question with a closed-ended question covering the number of usable Landsat scenes. First, we have no good estimate of meaningful ranges. This is a new question, and we do not want to limit the ranges; we have no focus group information on this question that would help define them, so we do not feel confident in estimating the options. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting “35” days versus “36” days is an important distinction that the LRS would like to have. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Due to the highly technical nature of the respondents, we expect that they will not find this recall question challenging and will have a good estimate of the number of days, which is likely to be important to them and something they remember in detail.
We changed “in the past year” to “in the past 12 months”.
Q. 21: please reverse the order of “No improvement” and the free response field.
This is Question #19 in the current draft survey.
This order is hard to view within the Word document. We will place the free response field first within the Qualtrics survey software platform.
Q. 23: please simplify the instructions. I recommend striking “in the real world”, or replacing the first three sentences with something along the lines of “We are interested in knowing how Landsat users would value various potential improvements. Please rank the four options below.” Please clarify “including the zero dollar price tag” along the lines of “offered free of charge (the current situation).”
This is Question #39 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
We agree with your suggestions and have reworded the question. We have also moved to a simpler choice matrix format comparing Landsat 8 versus just two Landsat 10 options so that respondents can choose their most preferred and least preferred out of the three—which provides an implied ranking. We propose rewording the question as follows:
“We are interested in knowing how Landsat users would value various potential improvements in satellite imagery. We want you to consider the three options below. The first option contains the features of Landsat 8 imagery, which is offered at no charge, as it is now. The other two options represent potential imagery products that you could purchase in the private market. These options have various improvements over Landsat 8 data, such as better spatial resolution or higher frequency of acquisition, but they also cost money. While considering your current or most recent project/organizational budget and the observables you need to derive for your primary application, please select which of the three options is your most preferred and which of the three options is your least preferred.”
Q. 23: there are several potential issues to consider with this question.
This is Question #39 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
The respondent may not have a “current” project underway, which could be addressed by asking instead about the “most recent project”
Good point; we have added “most recent project”.
The budgets for previous projects were presumably developed taking as given the attributes of the Landsat 8 data, including the zero cost. A better scenario would ask about future projects, which may be similar to the respondent’s previous projects.
This is an interesting point. You are correct that the current/recent budget would not have provided funds for the purchase of Landsat imagery. However, asking the respondent to think about “future projects” adds an additional element of conjecture to the valuation scenario that we would prefer to avoid. Therefore, we have “grounded” the survey in their current or, as you suggest, most recent project.
If this analysis is intended to provide a valuation for various attributes, it must be recognized that the budget for an organization or a project is different from an individual’s budget constraint. Answers based on a willingness or ability to spend someone else’s money may not be meaningful. Furthermore, there are likely wide variations in agency budgets and the marginal benefits of Landsat data to various users.
These are certainly valid concerns, and ones we worried about in our initial CVM WTP question design from the beginning of the original valuation effort in 2011. Since Landsat imagery is an input into the agency’s or organization’s production process, and not a consumer good enjoyed for its own consumption, it made sense to ask about the agency’s or organization’s budget, not the individual’s budget. We pre-tested the wording you saw, and it worked in the following sense: the higher the dollar amount users were asked to pay, the lower the probability they would pay; i.e., the coefficient on the dollar-cost variable was negative and statistically significant. And you are correct that different agencies/organizations have different budgets, which may explain why we obtained statistically significant differences in WTP across different levels of government and organizations. Both of these results may be due (a) to the difficulty of simply passing on higher costs of satellite imagery, since agency budgets have been limited or have even been declining in discretionary purchasing power; or (b), for private organizations or companies, to grants that often have binding caps limiting the total funding available, or to contracts that are often competed. In all of these situations, higher satellite imagery costs have an opportunity cost of forcing reductions in other budget items such as salaries or travel.
U.S. Department of the Interior. U.S. Geological Survey. Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users, by Miller, H.M.; Richardson, L.; Koontz, S.R.; Loomis, J.; Koontz, L. Open-File Report 2013-1269, U.S. Geological Survey. Fort Collins, Colorado, 2013.
It may be better to explore how the budget for a project would change if Landsat data were not available. To the extent that substitute products or data are available for purchase, the likely answer would simply reflect the market price of those substitutes. To the extent that Landsat is a factor of production, a change in the price of Landsat would shift the demand curve for substitutes and complements, which would require a more nuanced analysis than simply increasing a project budget to account for the cost of data.
As you read in our previous response, we agree with you that Landsat images are an input into a production function. Your comment here provides good insights on the potential input-production interactions if Landsat 8 were unavailable. As you saw in Q23 (the choice experiment), Landsat 8 is still an option. The structure of the Q23 choice matrix is whether the respondent would incur the incremental cost to purchase images that are better than Landsat 8. Some respondents may in fact think through their comparison of options as you suggest. But we are interested in the outcome of their “optimizing” choice in terms of their ranking (now the selection of most preferred and least preferred options), not in the particular way they reached their decision, as long as they are cognizant of the potential budget limitations they encounter in their particular agency or organization.
A related approach would ask the respondent to consider how future budgets would be reallocated if Landsat were no longer available free of charge. It may be worth exploring the percent of a total budget allocated for these data (or substitutes).
This is an interesting suggestion for the CVM WTP question since it is posed in the context of Q29 on Landsat no longer being available. Asking those that indicated they would pay the added dollar amount what budget items they would trim to pay for the added cost is a good idea. We have adopted this. Thanks. We propose adding the question as follows:
“Since in your response to the previous question you indicated you would pay $XXX for imagery, please indicate how you would pay this added cost, in terms of categories from your existing budget that you would reduce, or your ability to pass this cost on to your clients, whether inside or outside your organization/agency. Answer options: (a) reduce money spent on travel; (b) reduce money spent on other computer software or hardware; (c) reduce the amount spent on hiring of personnel or salaries; (d) attempt to pass the cost on to clients; (e) other (please explain): _____________”
This question was added after Question #46 in the current draft survey (after Question #29 in the previous draft survey). This question is #47 in the current draft.
Another potential approach is to explore the price at which the respondent would be unable to complete a given project or task.
This is a clever idea. In one sense it asks about the “choke price,” i.e., the price at the vertical intercept of the input demand function. We have a few concerns with asking this question. First, it is like an open-ended WTP question; the CVM literature has moved away from that format because it is very difficult for respondents to answer and, as such, has high item nonresponse. Second, it allows for significant strategic behavior if a respondent wants to convey that Landsat has near-“infinite” value to them by stating an extremely high (but still plausible) amount. This is one of the reasons CVM practitioners have moved to closed-ended CVM WTP questions or choice experiments. The other concern is that while there may be some tasks that would be impossible to complete without the most current Landsat imagery, as you noted there are substitutes for Landsat available, including older satellite imagery, so we think respondents indicating they cannot complete their task or project would be rare enough that the question may not be very informative. Nevertheless, we propose adding the question as follows:
“At what price per scene would it not be possible to complete a typical task or project you work on? $___”
This question was added after Question #47 in the current draft survey (after Question #29 in the previous draft survey, plus the additional new question described above). This question is #48 in the current draft.
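For reference, a toy numerical illustration of the choke-price idea, assuming a hypothetical linear input demand curve (all values invented):

    # Linear input demand: scenes demanded per year, q(p) = a - b * p
    a, b = 120.0, 0.4
    choke_price = a / b   # the price at which quantity demanded falls to zero
    print(choke_price)    # 300.0: above this per-scene price, no scenes are bought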
Q. 23: please clarify
This is Question #39 in the current draft survey.
if “Frequency of revisit” depends on the presence of clouds during overflights;
No, it does not. The following statement in the table helps clarify this concern: “Frequency of revisit (every [X] days – does not ensure a usable image at every revisit).”
which and how many spectral bands would be available under each option
The spectral bands include the following options:
Landsat 8 bands plus red edge (680-730 nm)
Landsat 8 bands plus additional SWIR bands (1.5-2.5 µm)
Landsat 8 bands plus additional TIRS bands (8-14 µm)
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
what information appears in the cost cells
There will be 9 levels of dollar costs. The usual guidance is four-fold. First, include dollar amounts low enough that nearly everyone asked to pay them would say yes; this helps statistically estimate the lower bound of the WTP function. Second, include dollar amounts high enough that nearly every respondent would not pay; this helps statistically estimate the upper bound of the WTP function. Third, place the majority of the dollar amounts in the middle of the distribution. Finally, use any past WTP information available from the literature to determine which dollar amounts to place in these three areas of the expected WTP distribution. With these four factors in mind, we selected these 9 cost levels based on the percentage of yes responses to our prior (2012) Landsat CVM WTP question.
Miller, H. M.; Richardson, L.; Koontz, S. R.; Loomis, J.; Koontz, L. (2013). Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users. U.S. Geological Survey Open-File Report 2013-1269. Fort Collins, CO: U.S. Geological Survey.
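To illustrate the four-fold guidance, a hypothetical Python sketch of screening candidate bid levels against prior yes rates (the rates and bids shown are invented, not the actual 2012 results):

    # Hypothetical share of "yes" responses at each candidate bid from a prior survey
    prior_yes = {10: 0.97, 25: 0.90, 50: 0.78, 100: 0.62, 250: 0.45,
                 500: 0.30, 1000: 0.15, 2500: 0.06, 5000: 0.02}

    bids = sorted(prior_yes)
    lower_anchor = bids[0]    # nearly everyone says yes: pins down the lower tail
    upper_anchor = bids[-1]   # nearly everyone says no: pins down the upper tail
    # Most bids should sit where yes rates fall in the middle of the distribution
    middle_bids = [b for b in bids if 0.20 <= prior_yes[b] <= 0.80]
    print(lower_anchor, upper_anchor, middle_bids)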
Q. 24: The analyst could likely determine these rankings based on the information from Q. 23. If this is intended to be a consistency check of the Q. 23 response, consider placing this question before Q. 23. Otherwise, I recommend striking this question.
Good point. We agree with your suggestion, and removed this question from the survey.
Q. 25: replace “need” with “want”, or strike “Ideally”
This is Question #40 in the current draft survey.
Great suggestion. We removed the text “Ideally”.
Q. 27: I assume that the values in the table will be replaced by check-boxes. I recommend removing the central choice-column “neither likely nor unlikely”, as this is not a possible outcome. I recommend adding a choice-column on the far right for “don’t know”.
This is Question #42 in the current draft survey.
Yes, the values in the table will be replaced by check-boxes. Thanks for the feedback. We removed the central choice-column “neither likely nor unlikely” and added a “don’t know” choice-column on the far right.
Q. 28: please replace “critical” with “important”, as “critical” cannot have a degree like “very”. I recommend relabeling the choices as
Not at all important
Not very important
Somewhat important
Very important
Critical; cannot work without it
As there may be little practical difference between “very important” and “critical”, I recommend removing “Critical”. To accommodate respondents having no experience with some products, add a new choice-column to the far right for “don’t know/no experience with this product”
This is Question #44 in the current survey.
Good point. We changed the “critical” Likert scale to a frequency scale, asking “How often do you use the following Landsat data products for your work?”
Q. 28: please separate existing and potential future features into two separate questions; users have a different type of experience with potential features than with current features…
This is Question #44 in the current survey.
Good point. We created a new division in the table (keeping two sub-tables in the same question) and provided a new sub-question for the future features. We changed the future-feature sub-question language to “How often would you use the following Landsat data products for your work?”
Q. 29: same comment as for Q. 3
We could not find a comment for Question #3 in your document. Please let us know if there is a comment for Question #3 that you would like us to change in this draft.
Q. 30: for the issues raised for Q. 23, this question may not be meaningful: the respondent may not have control over the budget, or any knowledge about the likelihood of various expenses being approved.
This is Question #49 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
Asking respondents how certain they are of their response is intended to convey information on any number of sources of uncertainty they may have in their answer. As you note, in our survey one of these sources of uncertainty may relate to the ability to adjust their budget. So it is valuable to get some insight into the degree of uncertainty (if any) they may have. Traditionally, sources of uncertainty include whether the respondent has thought about the particular, currently “unpriced” good in dollar terms. This question format has been used successfully by many others since it was first introduced in 1997 (Champ et al., Journal of Environmental Economics and Management 33(2): 151-162; the paper has over 500 citations).
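One common use of such a certainty follow-up in the literature following Champ et al. (1997) is to recode uncertain “yes” responses as “no” before estimating WTP. A minimal Python sketch, with an illustrative cutoff and hypothetical data (not our stated analysis plan):

    import pandas as pd

    # Hypothetical yes/no responses and 1-10 certainty ratings
    df = pd.DataFrame({"said_yes":  [1, 1, 0, 1, 0, 1],
                       "certainty": [9, 5, 8, 10, 3, 6]})

    CUTOFF = 7  # illustrative; the calibration threshold varies across studies
    df["calibrated_yes"] = ((df["said_yes"] == 1)
                            & (df["certainty"] >= CUTOFF)).astype(int)
    print(df)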
Q. 31: same comment as for Q. 5
This is Question #50 in the current draft survey.
We did not replace the numeric open-ended questions with close-ended questions covering the range of interest at the required level of precision. First, we have no good estimate of meaningful ranges; for example, some users might enter “1” and some might enter “15,000”. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting “5” and “7” is an important difference. Third, this question was used in the previous Landsat survey, and it is important to keep it consistent between surveys. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Given the highly technical nature of the respondents, we expect they will not find this recall question challenging and will be able to estimate their use with good precision.
Miller, H. M.; Richardson, L.; Koontz, S. R.; Loomis, J.; Koontz, L. (2013). Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users. U.S. Geological Survey Open-File Report 2013-1269. Fort Collins, CO: U.S. Geological Survey.
Q. 31: if this scenario is the same as for Q. 29, remind the respondent of the scenario and their budget constraint
This is Question #50 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
Good point. We now include in this question the budget reminder that was used in Q29, using the language below:
“Assume that you are restricted to your current project or organization budget level and that the money to pay any cost for replacement imagery and additional software or training would have to come out of your existing budget.”
Q. 31: rather than asking about how many scenes the respondent would buy per year, it may be more meaningful to ask about how many fewer (or more) scenes they would buy, or the approximate percentage change.
This is Question #50 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
We agree that it might be more meaningful (and probably easier) for the respondent to answer how many fewer scenes (if any) they might buy at $XXX, so we have adopted your suggestion.
Q. 32: please correct the reference “bid amount in Q28” to “bid amount in Q29”.
This is Question #51 in the current draft survey.
We changed the bid-amount reference to Question #46.
Q. 33: same comment as for Q. 5
This is Question #52 in the current draft survey.
We did not replace the numeric open-ended questions with close-ended questions covering the range of interest at the required level of precision. First, we have no good estimate of meaningful ranges; for example, some users might enter “1” and some might enter “15,000”. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting “5” and “7” is an important difference. Third, this question was used in the previous Landsat survey, and it is important to keep it consistent between surveys. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Given the highly technical nature of the respondents, we expect they will not find this recall question challenging and will be able to estimate their use with good precision.
Miller, H. M.; Richardson, L.; Koontz, S. R.; Loomis, J.; Koontz, L. (2013). Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users. U.S. Geological Survey Open-File Report 2013-1269. Fort Collins, CO: U.S. Geological Survey.
Q. 33: same comment as for Q. 31
This is Question #52 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
We agree that it might be more meaningful (and probably easier) for the respondent to answer how many fewer scenes (if any) they might buy at $XXX, so we have adopted your suggestion.
Q. 35: same comment as for Q. 32
This is Question #54 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
Thank you for catching that. We have corrected it.
Q. 36: same comment as for Q. 31
This is Question #55 in the current draft survey.
Our collaborator on this project, John B. Loomis, Ph.D., provided the response to the comments we received for this question. Dr. Loomis is a Professor at Colorado State University.
We agree that it might be more meaningful (and probably easier) for the respondent to answer how many fewer scenes (if any) they might buy at $XXX, so we have adopted your suggestion.
Q. 38: same comment as for Q. 5
This is Question #57 in the current draft survey.
We did not replace the numeric open-ended questions with close-ended questions covering the range of interest at the required level of precision. First, we have no good estimate of meaningful ranges; for example, some users might enter “1” and some might enter “15,000”. Second, the individual numeric responses are important to the Land Remote Sensing Program (LRS); the difference between respondents reporting “5” and “7” is an important difference. Third, this question was used in the previous Landsat survey, and it is important to keep it consistent between surveys. Finally, we have a list of confirmed users of Landsat imagery from EROS, so we consider this to be a very attentive audience. Given the highly technical nature of the respondents, we expect they will not find this recall question challenging and will be able to estimate their use with good precision.
Miller, H. M.; Richardson, L.; Koontz, S. R.; Loomis, J.; Koontz, L. (2013). Users, Uses, and Value of Landsat Satellite Imagery – Results from the 2012 Survey of Users. U.S. Geological Survey Open-File Report 2013-1269. Fort Collins, CO: U.S. Geological Survey.
Q. 39: Option 2 is likely only relevant for professional users; Options 1, 3, and 4 are likely only relevant for personal users. Please see above comment on including an initial branching question.
This is Question #58 in the current draft survey.
We did not add an initial branching question to separate respondents who use the data primarily for work from those who use it primarily for personal applications. The survey already includes significant logic patterns required to collect the data that will inform the Land Remote Sensing Program (LRS); we want to reduce logic patterns where possible and think that clear instructions about “work” versus “personal” use allow us to minimize the branching options. In addition, we do not expect many respondents who use the data for personal applications. First, the number of such respondents would be minor. Second, the Section 1 introduction states, “We would like to know about how you use Landsat in your work,” and the subsequent questions include the phrase “in your work.” However, this is an important distinction to make, and we want our respondents to understand it. We expanded the Section 1 instructions to include “The questions in this survey are only about Landsat use related to your work, not personal Landsat use,” and added an introductory sentence to explain this approach to respondents.
Likewise, the respondent group is not likely to use Landsat for personal purposes, and it is not likely that they registered with the USGS Earth Resources Observation and Science (EROS) Center for personal use of Landsat. The potential respondent universe, or population, consists of all users of Landsat imagery who have downloaded imagery from the EROS Center in the last 12 months. All users are required to enter an email address when they initially register with EROS, so contact information is available for all users.
Q. 40: consider asking instead about the budget for future projects…
Good idea; we had similar concerns. Therefore, we removed this question from the survey. We decided that not enough is known about analysis-ready data (ARD) in this form to ask this question.
Q. 42: please clarify the choices along the lines of the following:
This is Question #59 in the current draft survey.
Thanks for your feedback. We changed to the following options:
National/Federal government (of any country)
State/Provincial/Departmental government (in any country)
Local government (in any country)
Q. 43: it may be worth adding an option for “Administrative” or clarifying if that is included under staff.
This is Question #60 in the current draft survey.
Great suggestion. We modified the response choice to: “Faculty or staff (e.g., administrator, professor, researcher, postdoctoral researcher)”
1 www.surveymonkey.com/curiosity/progress-bars-good-bad-survey-survey-says, accessed November 19, 2017.
2 Yan, T., Conrad, F. G., Tourangeau, R., & Couper, M. P. (2010). Should I stay or should I go: The effects of progress feedback, promised time duration, and length of questionnaire on completing Web surveys. International Journal of Public Opinion Research, 23(1), 74-97.
3 Sarraf, S., & Tukibayeva, M. (2014). Survey page length and progress indicators: What are their relationships to item nonresponse? New Directions for Institutional Research, 2014(161), 83-97.
4 Roberts, J. K., Thompson, M. E., & Pawlyk, P. W. (1985). Contingent valuation of recreational diving at petroleum rigs, Gulf of Mexico. Transactions of the American Fisheries Society, 114, 214-219. The authors surveyed SCUBA divers about their willingness to pay for an annual pass to access an area that is currently cost-free. To avoid strategic responses, the survey assured respondents that there was no plan to actually charge such a fee, and that the question was merely intended to develop a valuation for the resource.