Supplementary and Part B questions

NOAA NWS_OMB Submittal for Public Survey_Hazard Simplification Project_10-31-17.docx

NOAA Customer Surveys


OMB: 0648-0342


Public Survey for NOAA’s National Weather Service Hazard Simplification Project



  A. Supplemental Questions for DOC/NOAA Customer Survey Clearance (OMB Control Number 0648-0342)



  1. Explain who will be conducting this survey. What program office will be conducting the survey? What services does this program provide? Who are the customers? How are these services provided to the customer?


This request is for a set of four surveys to be conducted by NOAA’s National Weather Service (NWS) to assess how best to communicate hazardous weather warning information. The surveys are similar and differ only by the type of weather hazard that is being assessed.


The NWS forecasts hazardous weather situations and issues warnings, watches, advisories (WWA) and other information products to convey the threats posed by these events. These products are intended to help communities prepare for and respond to hazardous weather to protect people’s lives and property. The products are communicated to the public through websites, smartphones, television programs, radio broadcasts, and NOAA Weather Radio. NWS customers include weather professionals, transportation and aviation officials, emergency management personnel, public works departments, broadcast meteorologists and other media, and the public.


The NWS has embarked on an effort to simplify and enhance its WWA products, since both prior social science research and NWS service assessments have demonstrated that many members of the public, and even some NWS partners, do not understand the distinctions among the terms used in the different WWA products or their intent.


This set of surveys builds on and furthers social science research conducted in the summer of 2014 that involved focus groups with emergency managers, broadcast meteorologists, NWS Weather Forecast Office staff, and the public. The focus groups explored the current understanding and utility of the WWA system and possible enhancements to a new or modified system (ICR Reference Number 201103-0690-001, 3/14/14). This work indicated that there is a spectrum of understanding of the current WWA system and a difference of opinion on how much change is needed or desired to enhance the present system. It also showed considerable support for enhancing the current WWA system with simple explanatory language that could convey threats, impacts, and/or desired actions, as well as the use of a color scale to convey threat levels. The surveys also reflect the findings of additional research conducted in 2015 from (1) more than 700 case studies [1] from respondents internal to the NWS and external to the agency documenting perceived strengths and weaknesses of the current system and (2) a stakeholder workshop held with media representatives, emergency managers, and social scientists to brainstorm alternative language to the current WWA system.


Based on this prior research, the NWS has created four “prototypes” for new ways of communicating WWA information using different words and/or colors. These prototypes have evolved with input from NWS partners (such as emergency managers, the media, and forecasters), but they have not been tested with the public. The goal of these surveys is to determine whether the public prefers the current system or one of the four new prototypes.


Because the NWS conveys warning information for more than 120 hazards/weather events, [2] testing new prototypes for all of these warning products is not feasible. The hazards covered in these surveys represent three of the seven major NWS warning product categories: [3] (1) winter weather, (2) severe weather, and (3) flooding. NWS is refining messages for the other four product categories under separate projects. The NWS is requesting approval on four distinct hazard-specific surveys:


  • Thunderstorms

  • Winter storms (mild and cold climates)

  • Tornadoes

  • Flooding


The current system consists of four levels of conveying risk: watch, advisory, warning, and emergency. The NWS developed corresponding levels for the new prototypes to allow for comparison to the current system. The five prototypes (current system and four alternatives) and their corresponding levels appear in Table 1. Each hazard-specific survey will test these five prototypes.


Table 1: Prototypes and Their Associated Levels

Level | Current System | Prototype 1 | Prototype 2 | Prototype 3 | Prototype 4
Watch level | Watch | Outlook | Notice | Possible | Possible/Notice
Advisory level | Advisory | Warning | Alert | Moderate | Orange
Warning level | Warning | Warning | Warning | Severe | Red
Emergency level | Emergency | Warning | Emergency | Extreme/Catastrophic | Purple/Dark Purple


The prototypes can be described as follows:

  • Current system – This prototype is the current WWA system.

  • Prototype 1 – This prototype tests two tiers (rather than the current three-tier structure); it maintains the term “warning,” which people understood in the prior research described above, and eliminates the term “advisory,” since the prior research showed this level and its terminology to be the least understood by NWS partners and the public.

  • Prototype 2 – This prototype maintains the tiers of the current system and the “warning” and “emergency” terminology, but changes the wording for the “watch” and “advisory” levels.

  • Prototype 3 – This prototype makes changes to all levels of the current system. It emphasizes impacts and introduces a more hierarchical scale that uses adjectives to describe escalating risk.

  • Prototype 4 – This prototype uses a color scheme (except at the watch level) instead of risk-based wording.

A key aspect of the surveys will be to test the prototypes using changes in severity over time (upgrades and downgrades). For example, as a winter storm approaches an area, the NWS currently may issue a “watch,” which may later be upgraded to a “warning” as the forecasted conditions worsen. Warnings can also be downgraded over time to watches as forecasted conditions improve. Upgrades and/or downgrades are a key aspect of how the NWS communicates information; measuring customer preferences and understanding over time as the NWS upgrades or downgrades its message is important to determining the best approach to conveying this type of information going forward.
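To make Table 1 and the upgrade/downgrade presentation concrete, the Python sketch below shows one way the prototype terminology could be represented. The dictionary, function, and example are purely illustrative assumptions; they are not part of the survey instrument or ERG’s implementation.

```python
# Illustrative only: the Table 1 terms for each prototype at each of the four
# risk levels, and a lookup for what a respondent would see when a message is
# upgraded (or downgraded) from one level to another.
PROTOTYPE_TERMS = {
    "Current System": {"watch": "Watch", "advisory": "Advisory",
                       "warning": "Warning", "emergency": "Emergency"},
    "Prototype 1":    {"watch": "Outlook", "advisory": "Warning",
                       "warning": "Warning", "emergency": "Warning"},
    "Prototype 2":    {"watch": "Notice", "advisory": "Alert",
                       "warning": "Warning", "emergency": "Emergency"},
    "Prototype 3":    {"watch": "Possible", "advisory": "Moderate",
                       "warning": "Severe", "emergency": "Extreme/Catastrophic"},
    "Prototype 4":    {"watch": "Possible/Notice", "advisory": "Orange",
                       "warning": "Red", "emergency": "Purple/Dark Purple"},
}

def level_change_terms(prototype: str, from_level: str, to_level: str) -> tuple:
    """Return the pair of terms a respondent would see for an upgrade or
    downgrade under a given prototype."""
    terms = PROTOTYPE_TERMS[prototype]
    return terms[from_level], terms[to_level]

# Example: a "warning with an upgrade" scenario under Prototype 3.
print(level_change_terms("Prototype 3", "warning", "emergency"))
# ('Severe', 'Extreme/Catastrophic')
```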


  2. Explain how this survey was developed. With whom did you consult regarding content during the development of this survey? Statistics? What suggestions did you get about improving the survey?


NWS contracted with Eastern Research Group, Inc. (ERG) to develop the surveys. ERG has significant experience assessing technical assistance provided by federal agencies through detailed interviews, focus groups, and surveys that focus on customer satisfaction and outcome attainment.


As noted, the surveys focus on five different WWA prototypes (the current system and four alternatives). The prototypes evolved from input gathered from emergency managers, broadcast meteorologists, NWS Weather Forecast Office staff, and the public through focus groups conducted in the summer of 2014 (ICR Reference Number 201103-0690-001, 3/14/14), as well as from case studies documenting perceived strengths and weaknesses of the current system (ICR Reference Number 201504-0648-015, 5/28/15) and a stakeholder workshop held in 2015 with media representatives, emergency managers, and social scientists. These surveys are designed to help the NWS understand what kinds of enhancements or changes to the current WWA system would be most beneficial.


To develop the survey, ERG worked with Dr. Kim Klockow, a Post-Doctoral Fellow at the University Corporation for Atmospheric Research (UCAR). She received her PhD in Geography from the University of Oklahoma. ERG also worked closely with NOAA and NWS leadership on the project, including Elliott Jacks, Chief, Fire and Public Weather Services Branch, and all chiefs in the branch. ERG also coordinated with a graduate student at the University of Georgia-Athens, Castle Williams, who is developing surveys for two additional hazards (winds and excessive heat) under a separate effort with the National Science Foundation (NSF). Suggestions for improving the survey included focusing on impact-based language, simplifying the questions being asked, and keeping the survey to 20 minutes, since it is for a public audience. Branch chiefs also emphasized the need for consistent language, where possible, across different hazard types.


ERG used a statistical power analysis to verify that the sample size would allow for meaningful comparisons given the survey design. As discussed in Part B below, the sample for this set of surveys was set at approximately 7,200 total respondents across the four hazards, based on the contract between NWS and ERG. Based on ERG’s analysis (described in Part B), the number of respondents for each hazard will allow for detecting reasonable differences between reactions to the prototypes at sufficient statistical power for NWS to make informed decisions on how to proceed with modifying future messages. ERG’s statistical power calculations are detailed in Part B.


  3. Explain how the survey will be conducted. How will the customers be sampled (if fewer than all customers will be surveyed)? What percentage of customers asked to take the survey will respond? What actions are planned to increase the response rate? (Web-based surveys are not an acceptable method of sampling a broad population. Web-based surveys must be limited to services provided by Web.)

How the survey will be conducted

NWS will conduct an online survey to collect these data, since the NWS distributes much of its information through online, web-based sources. NWS’s contractor, ERG, will work with Qualtrics, Inc., a leading provider of online survey services. ERG will instruct Qualtrics to select random samples from a pre-determined set of geographic areas for each hazard type (described in Part B below).

Response rate

ERG expects that 70% of those invited will respond to the survey. This rate is based on ERG’s prior experience implementing similar surveys for NWS.

Maximizing Response

To ensure a maximum response rate:

  • ERG has developed a survey that minimizes the burden on respondents by using good survey design. This includes developing well-written questions and limiting the number of questions to the minimum necessary.

  • ERG will require Qualtrics, Inc. to select respondents from individuals who indicated they are willing to take surveys. Many email survey lists are constructed from individuals who passively opt in to taking surveys (e.g., by agreeing to a terms of service agreement on a web site). ERG will require Qualtrics to use survey lists where those on the list consciously indicated they would be willing to take surveys (i.e., passive opt-ins will be excluded).

  • ERG will use multiple prompts to generate responses. ERG will use a pre-notification email to respondents, an email that asks the respondent to take the survey, and then two reminder emails.

  • A graduate student researcher at the University of Georgia (Castle Williams) has developed a survey that assesses NWS’ hazard messages for extreme heat. As part of his research, Mr. Williams has performed a series of cognitive tests with potential respondents to improve his survey design. He also implemented a pre-test of his survey instrument. Mr. Williams has agreed to share his results with NWS and ERG, which will allow ERG to incorporate his results into the design of its survey.


  4. Describe how the results of this survey will be analyzed and used. If the customer population is sampled, what statistical techniques will be used to generalize the results to the entire customer population? Is this survey intended to measure a GPRA performance measure? (If so, please include an excerpt from the appropriate document.)


The survey data will be analyzed by comparing respondent preferences for the different prototypes. As noted, the purpose of the data collection is to assess how NWS customers react to different warning levels and terminology. ERG will use the survey data to determine which prototypes customers prefer by comparing the percentages of respondents who prefer each prototype. These results will assist NWS leadership in considering possible changes or modifications to the WWA system.


ERG will use sample weights to generalize the survey results to the sampled populations.
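As a minimal sketch only, preference results could be tabulated with population-based weights along the following lines. The column names, the simple population-over-respondents weighting, and the example data are our assumptions for illustration, not ERG’s specified weighting or analysis procedure.

```python
# Hypothetical sketch: weight each respondent by (area population / number of
# respondents in that area), then compute the weighted share preferring each
# prototype. Data, column names, and weighting scheme are illustrative only.
import pandas as pd

responses = pd.DataFrame({
    "area":      ["cold", "cold", "mild", "mild", "mild"],
    "preferred": ["Prototype 3", "Current System", "Prototype 3",
                  "Prototype 3", "Prototype 1"],
})
population = {"cold": 58_594_488, "mild": 50_265_438}  # adult totals from Table 2

counts = responses["area"].value_counts()
responses["weight"] = responses["area"].map(lambda a: population[a] / counts[a])

weighted_pct = (responses.groupby("preferred")["weight"].sum()
                / responses["weight"].sum() * 100).round(1)
print(weighted_pct)  # weighted percent of respondents preferring each prototype
```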


The data do not directly contribute to a GPRA measure.


  B. Collections of Information Employing Statistical Methods


  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


Respondent Universe


As noted above, this set of surveys covers four hazards. Some areas of the United States are more prone to certain hazards than others. The areas the NWS selected for each hazard can be described as follows:


  • Thunderstorms – All U.S. states are subject to thunderstorms, so NWS will select a sample from all U.S. states.

  • Winter storms – Winter storms primarily affect colder weather states; however, some states with warmer climates also see winter storms. The NWS’s messaging for winter storms differs between cold weather areas and warmer areas. The NWS therefore used climate normals data on snowfall totals for 1981-2010 from the National Centers for Environmental Information (NCEI), formerly the National Climatic Data Center (NCDC). [4] The NWS assigned states that receive the most snow to a “cold” category and states that receive less snow to a “mild” category. [5] The states assigned to each group appear in Table 2.

  • Tornadoes – Only a small subset of states is at elevated risk for tornadoes. The NWS reviewed data on the number of tornado events from 2012-2016 compiled by NCDC in the storm events database [6] and identified several states with elevated risk for tornadoes. The NWS selected 20 full states and part of another (northern Texas) as the geographic area for the tornado survey.

  • Flooding – Although most areas of the United States are subject to flooding, a few states are less prone to flooding. The NWS excluded eight states with the least amount of flooding from the sample (see Table 2).

Table 2 summarizes the geographic focus areas described above, the adult population (age 20 and older) for each selected area, the sample that will be selected, the anticipated response rate, and the targeted sample size.



Table 2: Geographic Focus, Populations, and Sample Information for Hazard Surveys

Hazard | Geographic focus | Population Over the Age of 20 [a] | Sample [b] | Response Rate | Targeted Sample [c]
Thunderstorms | All U.S. states | 241,022,443 | 2,860 | 70% | 2,000
Winter storms [d] | Cold: ME, NH, VT, MA, RI, CT, NY, PA, MI, WI, MN, CO, WY, MT, ID | 58,594,488 | 1,430 | 70% | 1,000
Winter storms [d] | Mild: VA, NC, KY, TN, SC, GA, AL, MS, AR, MO, NE, OK | 50,265,438 | 1,430 | 70% | 1,000
Tornadoes | AL, AR, CO, GA, IA, IL, IN, FL, KS, KY, LA, MO, MN, MS, MT, NC, NE, OK, SC, TN, TX (north only) [e] | ~120,000,000 [f] | 1,000 | 70% | 700
Flooding | All U.S. states, except AK, AZ, HI, ND, NM, NV, UT, and WY [g] | 227,520,099 | 3,860 | 70% | 2,700

[a] U.S. Census Bureau estimates for 2016. The age of 20 was used to proxy total adult population since Census Bureau reports on age 20 and above.

[b] Calculated by dividing the target sample size by the response rate and rounding to the nearest ten.

[c] Sample size calculations are provided below in Section B, Question 2.

[d] The survey for winter storms will be implemented in both “cold” regions and “mild” regions. The reason for splitting the survey is that the NWS uses slightly different criteria for storm warnings in colder regions compared to milder ones. The NWS assigned states to “cold” and “mild” regions based on historical snowfall totals. The NWS also assigned half of the winter storm sample to cold areas and the other half to mild areas.

[e] These states have a higher frequency of tornadoes.

[f] The sample will limit the number of responses from urban areas in each state to no more than 10 percent of the total sample to ensure that areas such as Chicago and Atlanta do not dominate the responses. The population figure is a rough estimate based on Census data.

[g] These states were excluded based on a reduced frequency of floods.
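As a small check of the arithmetic in footnote [b], the “Sample” column can be reproduced from the targeted sample sizes and the 70% expected response rate shown in Table 2. The short Python sketch below is illustrative only.

```python
# Reproduce the "Sample" column of Table 2: targeted sample size divided by the
# expected 70% response rate, rounded to the nearest ten (per footnote [b]).
targets = {
    "Thunderstorms": 2000,
    "Winter storms (cold)": 1000,
    "Winter storms (mild)": 1000,
    "Tornadoes": 700,
    "Flooding": 2700,
}
response_rate = 0.70

for hazard, target in targets.items():
    sample = round(target / response_rate / 10) * 10
    print(f"{hazard}: invite {sample} to reach {target} completes")
# Yields 2,860; 1,430; 1,430; 1,000; and 3,860 -- matching Table 2.
```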



Selection method


The NWS will use random selection for the sampling. ERG, the NWS’s contractor, will instruct Qualtrics (the company that will maintain the sampling list) to randomly select the number of individuals shown in the “Sample” column of Table 2 above. ERG expects that 70 percent of those selected will respond to the survey. In the event that fewer respond or some initial selections are out of scope (e.g., not an adult), ERG will instruct Qualtrics to select replacements in order to reach the targeted sample size listed in Table 2.



  2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Stratification, sample size, and precision and accuracy


The sample size for these surveys was primarily determined by the available budget. In winning the contract to perform the work, ERG proposed a sample of approximately 7,200 respondents; thus, the NWS must use a sample size of approximately 7,200 for these surveys. The original specifications that the NWS provided to bidders, and that ERG used in proposing a sample of 7,200, have changed as the NWS’s needs have shifted since ERG was awarded the work. Specifically, the NWS has asked ERG to test a total of five prototypes (the current set of messages along with four new approaches). Given the fixed sample size, ERG has calculated the precision at which differences in responses can be detected between respondents who see different prototypes and the associated statistical power of the tests used to compare responses.


The survey will involve comparing responses between groups that see different prototypes; thus, the number of respondents who see each prototype will be a key factor in determining precision and statistical power. The presentation of the prototypes to respondents is further complicated by the fact that NWS messages can be upgraded and downgraded over time. For example, a winter storm watch may be upgraded to a winter storm warning over time. A key to understanding the usefulness of new prototypes will be to determine how customers understand those upgrades and downgrades over time. The NWS has determined that three upgrade/downgrade scenarios are relevant for each prototype: [7]


  • A warning with an upgrade

  • An advisory with an upgrade

  • A warning with a downgrade


Flooding adds a fourth scenario (an emergency with an upgrade). The tornado survey, on the other hand, includes only one scenario, “warning with an upgrade,” since tornadoes do not involve advisories and tornado warnings are never downgraded. The total set of respondents must first be allocated across hazards, then across the five prototypes, and then across upgrade/downgrade scenarios.


As noted, the budget for these surveys includes approximately 7,200 respondents. The goal of this survey is to determine preferences and understandability of the five prototypes. To make relevant comparisons, it is necessary to compare the same upgrade/downgrade scenario between prototypes within each hazard (e.g., warning upgrade for prototype 3 compared to a warning upgrade for the current system for winter storms). Thus, the key in assessing precision is to determine the number of respondents for each upgrade/downgrade scenario for each prototype for each hazard.


If all hazards had three upgrade/downgrade scenarios, then 1,800 respondents would be allocated to each hazard (7,200 ÷ 4) and each prototype would be allocated 360 respondents (1,800 ÷ 5). The 360 respondents for each prototype would then be divided among the three upgrade/downgrade scenarios, giving 120 respondents for each scenario. Thus, each sub-sample would have 120 respondents for each comparison cell. As noted, however, the tornado survey includes only one upgrade scenario and flooding includes four; using equal allocation across hazards would imply that the tornado survey has 360 respondents for each comparison cell (1,800 ÷ 5) while flooding would have only 90 (1,800 ÷ 20). Thus, the NWS re-allocated some of the respondents from the tornado survey to the other hazards to increase the sample sizes for the comparison cells. To re-allocate responses, the NWS first calculated an equal distribution of the 7,200 respondents across the total number of comparison cells (55 = 5 prototypes × 11 hazard-specific upgrade/downgrade scenarios); this resulted in 131 respondents per comparison cell. The NWS then used 131 as the minimum for each comparison cell; the total minimum sample for each hazard was then calculated as 131 multiplied by the number of prototypes (5 for all hazards) and by the number of upgrade/downgrade scenarios for each hazard (1 for tornadoes, 3 each for thunderstorms and winter weather, and 4 for flooding), and then rounded up to the nearest hundred. For example, for flooding, the minimum sample size was calculated as 2,620 (= 131 × 5 prototypes × 4 upgrade/downgrade scenarios) and then rounded up to a targeted sample size of 2,700.
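The allocation described above can be reproduced with a short calculation. The Python sketch below covers the arithmetic only; the scenario counts per hazard are taken from the text.

```python
# Reproduce the per-cell minimum (131) and the hazard-level sample targets
# described above. Scenario counts per hazard are taken from the text.
import math

TOTAL_RESPONDENTS = 7200
N_PROTOTYPES = 5
SCENARIOS = {"Thunderstorms": 3, "Winter storms": 3, "Tornadoes": 1, "Flooding": 4}

total_cells = N_PROTOTYPES * sum(SCENARIOS.values())    # 5 x 11 = 55 comparison cells
per_cell_min = round(TOTAL_RESPONDENTS / total_cells)   # ~131 respondents per cell
print(f"{total_cells} comparison cells, minimum {per_cell_min} respondents per cell")

for hazard, n_scenarios in SCENARIOS.items():
    minimum = per_cell_min * N_PROTOTYPES * n_scenarios
    target = math.ceil(minimum / 100) * 100             # round up to nearest hundred
    print(f"{hazard}: minimum {minimum}, targeted sample {target}")
# Flooding: minimum 2,620 -> targeted 2,700, as in the example above; the
# targets for the other hazards likewise match Table 2.
```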


The NWS calculated the statistical power to detect differences between the comparison cells assuming that a 10-point scale question would be used for the comparison. [8] Statistical power reflects the probability of detecting a difference between two samples assuming that the difference exists in the populations. The NWS calculated the statistical power of detecting a difference (precision) of 0.5 points, 1.0 point, and 1.5 points in the means of a 10-point scaled question between two sub-samples. The NWS also assumed three scenarios for the standard deviation of the responses: a worst-case scenario corresponding to a standard deviation of 4.5 [9] and two more realistic scenarios with standard deviations of 3.0 and 2.0. The statistical power calculations for the three levels of precision and the three assumed standard deviations, using the minimum per-cell size of 131, appear in Table 3. Tests with “strong” statistical power tend to be those with power of roughly 75 percent or higher. The sample sizes for this survey will provide strong power for detecting 1.5-point differences at all assumed standard deviations and 1.0-point differences at assumed standard deviations of 3.0 and lower. The NWS has assumed that the worst-case scenario for the standard deviation is unrealistic and that 1.0-point changes in a mean value would be relevant; thus, the NWS is confident that the samples will allow for strong tests to compare respondent preferences and understanding between the prototypes.


Table 3: Statistical Power for Different Levels of Precision and Assumed Standard Deviations

Precision (difference in means) | S.d. = 4.5 | S.d. = 3 | S.d. = 2
0.5 | 23% | 38% | 65%
1.0 | 56% | 85% | 99%
1.5 | 85% | 99% | ~100%

Note: “S.d.” refers to standard deviation.
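The values in Table 3 can be reproduced, approximately, with a standard two-sample comparison of means. The Python sketch below assumes a one-sided test at the 5 percent significance level, 131 respondents per comparison cell, and a normal approximation; these are our assumptions for reproducing the table, not a statement of ERG’s exact method.

```python
# Approximate reproduction of Table 3: power to detect a given difference in
# means between two cells of n = 131 respondents each, assuming a one-sided
# z-test at alpha = 0.05 (the test type and alpha are assumptions).
from math import sqrt
from statistics import NormalDist

def power(diff, sd, n=131, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    z = diff / (sd * sqrt(2 / n))   # standardized difference between two means
    return NormalDist().cdf(z - z_alpha)

for diff in (0.5, 1.0, 1.5):
    row = ", ".join(f"{power(diff, sd):.0%}" for sd in (4.5, 3.0, 2.0))
    print(f"precision {diff}: {row}")
# precision 0.5: 23%, 38%, 65%
# precision 1.0: 56%, 85%, 99%
# precision 1.5: 85%, 99%, 100%
```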



Unusual Problems Requiring Specialized Sampling Procedures


None are required.


Periodic Data Collection Cycles


This request is for a one-time data collection.


  3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


Maximizing response rates


NWS’s contractor, ERG, will employ the following measures to maximize response rates from the sample.

  • ERG has developed a survey that minimizes the burden on respondents by using good survey design. This includes developing well-written questions and limiting the number of questions to the minimum necessary.

  • ERG will require Qualtrics, Inc. to select respondents from individuals who indicated they are willing to take surveys. Many email survey lists are constructed from individuals who passively opt in to taking surveys (e.g., by agreeing to a terms of service agreement on a website). ERG will require Qualtrics to use survey lists where those on the list consciously indicated they would be willing to take surveys (i.e., passive opt-ins will be excluded).

  • ERG will use multiple prompts to generate responses. ERG will send a pre-notification email to respondents about the survey, an email that asks the respondent to take the survey, and then two reminder emails.

  • ERG will implement the survey for each hazard in geographic areas where the hazard is relevant to ensure respondents feel some need to respond to the survey. That is, the survey topic should have relevancy for the respondent.

Adequacy for intended uses

The purpose of these surveys is to get feedback from the public on possible new approaches to presenting hazard warning risk information. The key information that the NWS needs is the public’s understanding of the new prototypes and the public’s satisfaction with them. The information from these surveys will be used in conjunction with other information being considered by the NWS in deciding how to potentially change hazard-specific warnings. No decisions will be made solely from the data collected through these surveys. As such, the NWS expects the attainable precision (described above) to be more than adequate given the purpose of these data. This expectation is based on several considerations:

  • The surveys involve collecting data from a large number of respondents for each hazard in areas where those hazards are prevalent. Thus, the NWS will have a large number of data points to use in assessing the prototypes.

  • The surveys are being implemented in a way that mimics how the NWS warns the public of severe weather hazards and risk; that is, the NWS issues different types of products corresponding to hazardous weather severity and imminence. It may also upgrade or downgrade these products based on changes in the risk. Understanding how people respond to those upgrades and downgrades is important. The use of upgrades and downgrades, combined with using five prototypes, complicates the survey design by dividing the prototype-specific sample into three groups. Thus, NWS is trading off statistical power/precision (proxied by sample size here) with implementation realism. In short, the NWS could have higher statistical power/more precise estimates if upgrades and downgrades were not used, but the NWS would be missing data on how people respond to changes in the prototypes over time.

  • The prototypes are “directional” in nature rather than specific. The NWS has developed the prototypes based on input from focus groups and other prior research. The NWS also had meetings with internal staff who are responsible for implementing the current hazard warnings. One outcome from the discussions with internal NWS staff is that before messages are implemented, some customization must be done. This survey will provide information on the types of messages that will work better with the public compared to others.

  • Identifying large differences in public preferences for prototypes is acceptable. As noted above, we can expect to find 1.5-point differences in mean values on a 10-point scale between sub-groups in our sample with high statistical power. A 1.5-point difference would indicate a very large preference for one prototype over another; however, based on the calculations summarized in Table 3, and assuming a reasonable amount of variation in the data, the statistical power of finding mean differences of 1.0 point (or even 0.5 point) is high. A 1.0-point difference in mean preference on a 10-point scale between two prototypes is acceptable to the NWS.


  4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


The NWS’s contractor, ERG, has or will perform the following procedures and tests to ensure an effective data collection process:


  • ERG has performed significant prior research on this subject, as described earlier. In 2014, ERG conducted focus groups with emergency managers, broadcast meteorologists, NWS Weather Forecast Office staff, and the public to explore the current understanding and utility of the WWA system and possible enhancements to a new or modified system (ICR Reference Number 201103-0690-001, 3/14/14). ERG also supported the NWS in conducting a 2015 workshop with media representatives, emergency managers, and social scientists to explore possible new WWA language. Finally, also in 2015, ERG collected more than 700 case studies from respondents internal to the NWS and external to the agency documenting perceived strengths and weaknesses of the current system (ICR Reference Number 201504-0648-015, 5/28/15).

  • ERG conducted an early internal validation of the survey with ERG staff to gauge their comprehension of the questions and the potential for survey fatigue. This was done as part of ERG’s contract work, and ERG staff who participated were paid as ERG employees (i.e., as part of ERG’s contractual duties). ERG also tested the survey with fewer than ten non-ERG employees. Based on this test, ERG condensed and simplified the questions and scenarios presented.

  • A graduate student researcher at the University of Georgia (Castle Williams) has developed surveys that assess warning messages for two additional hazards, excessive heat and winds, as part of his own thesis research. As part of his research, Mr. Williams has performed a series of cognitive tests with potential respondents to improve his survey design. He also implemented a pre-test of his survey instrument. Mr. Williams has agreed to share his results with the NWS and ERG which will allow ERG to incorporate his results into the design of its survey.

  • ERG will perform a pre-test of the survey prior to full implementation of the instrument. The pre-test sample is included as part of this request. ERG will make changes as needed based on the pre-test.


  5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The NWS has contracted with Eastern Research Group, Inc. (ERG) of Lexington, MA, to design the survey instrument, develop the sampling approach, implement the survey, and analyze the resulting data collected. The survey design team included the following individuals:


Dr. Lou Nadeau (781) 674-7316; lou.nadeau@erg.com


[1] The case studies consisted of open-ended responses to a series of questions related to the uses of the current system (ICR Reference Number 201504-0648-015, 5/28/15).

[2] http://www.nws.noaa.gov/wwamap-prd/wwacolortab.php?x=1

[5] States receiving little or no snow were excluded from the sample.

[7] Table 1 includes the terms used for the warning and advisory levels for each prototype. Although the upgrade and downgrade scenarios being tested refer only to warning-level and advisory-level information, respondents will be presented with a series of prompts within each scenario that reflect the other relevant levels within the prototypes.

[8] The 10-point scale question could cover respondent preferences for a prototype or their understanding of the information.

[9] The worst-case standard deviation for a 10-point scaled question corresponds to one half of the respondents selecting the lowest level (1) and one half selecting the highest level (10).



