
SUPPORTING STATEMENT

U.S. Department of Commerce

National Oceanic & Atmospheric Administration

Bay Watershed Education and Training Program National Evaluation System

OMB Control No. 0648-0658


B. Collections of Information Employing Statistical Methods


  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The respondent universe is composed of B-WET grant recipients and teachers who participate in teacher professional development programs offered by grant recipients. Over the past three years (FY19-21), B-WET averaged 165 awardees and 3,374 teachers participating in professional development programs offered by B-WET grant recipients annually. Therefore, approximately 3,539 respondents (grantees plus teachers) may receive B-WET evaluation surveys in a typical year. Email addresses for grant recipients are obtained during the grant application process and are used to send grant recipients the “grantee” survey. Grant recipients are also sent a separate request for the email addresses of the teachers who participated in the teacher professional development they conducted. The email addresses of teacher participants are then used to send teachers the “teacher professional development” surveys and the follow-up “teacher MWEE” surveys.


Because we are reaching the whole population of our participants, this data collection is a census; we are not sampling the population, and therefore no sampling procedure is employed.


Historical and anticipated response rates, and the resulting expected numbers of respondents, are shown in Tables 1 and 2 below.


Limited data have been collected during the current clearance period because data collection was paused in January 2021; the instruments are not designed to assess project implementation during the ongoing impacts of the pandemic. Therefore, the expected response rates reported in Table 1 are informed by the response rates obtained between April 2016 and April 2018, as submitted in our previous clearance. Response rates are analyzed in aggregate for this time period in order to include as much response data as possible in the calculation. This represents the most recent response rate data we have analyzed.



Table 1: Past and Expected Response Rates

Questionnaire | Time Period | N (emails sent successfully)a | n (number who responded) | R (response rate) | Future Expected Rateb
Grantee | June 2016 - April 2018 | 201 | 142 | 71% | 75%
Grantee Nonresponse | New in 2019 renewal | N/A | N/A | N/A | 50%
Teacher Post-PD | April 2016 - March 2018 | 1,390 | 545 | 39% | 40%
Teacher Post-PD Nonresponse | May 2016 - April 2018 | 846 | 182 | 22% | 25%
Teacher Post-MWEE | May 2016 - January 2018 | 1,322 | 335 | 25% | 30%
Teacher Post-MWEE Nonresponse | June 2016 - February 2018 | 987 | 148 | 15% | 20%

a Bounced emails have been subtracted from the number sent.
b Future response rates are expected to be higher due to improvements made to the survey system in 2019 (e.g., improved communication about the evaluation system).


We are not proposing changes at this time and will update the response rate calculations with a planned revision to this information collection. Therefore, when calculating the estimated hour burden in Part A, Question 12, we used the same calculations of the expected number of respondents (n) from our previously approved data collection, as described in Table 2 below.


Table 2: Expected Number of Respondents

Informant | Number of possible respondents annuallya (a) | Expected response rate (b) | Expected number of responses (c) = (a) × (b)
Grantees | 115 | 75% | 86
Grantees nonresponse | 29b | 50% | 15c
Post-PD teachers | 2,507 | 40% | 1,003
Post-PD teachers nonresponse | 1,504d | 25% | 376
Post-MWEE teacherse | 2,507 | 30% | 752
Post-MWEE teachers nonresponse | 1,755f | 20% | 351
TOTAL | | | 2,583

a Possible respondents are based on average actual participation during fiscal years FY15-17, as submitted in our previous clearance. We will update respondent calculations with a planned revision to this information collection.
b Estimated number of nonrespondents (i.e., 25% of 115 grantees).
c Predicts a 50% response rate among grantee nonrespondents, i.e., 29 × 0.5 = 15 (new questionnaire in 2019).
d Estimated number of nonrespondents (i.e., 60% of 2,507 Post-PD teachers).
e The same teachers are surveyed after their PD (Post-PD teachers) and again at the end of the following school year (Post-MWEE teachers).
f Estimated number of nonrespondents (i.e., 70% of 2,507 Post-MWEE teachers).


  2. Describe the procedures for the collection of information including:

    • Statistical methodology for stratification and sample selection,

The sample frame consists of the list of B-WET grant recipients and the lists of teachers each grant recipient identifies as having participated in its grant-funded programs. Because the collection is a census, no sampling procedure is employed.

    • Estimation procedure,

The omega coefficient will be computed to measure construct reliability. This coefficient indicates the degree to which the survey items measure the same construct. Cronbach’s alpha is a commonly used procedure for assessing the internal consistency of scale items such as those on surveys; however, recent psychometric studies demonstrate that Cronbach’s alpha often overestimates reliability. Omega provides an estimate of reliability with much less bias and is fast becoming the preferred method.
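
For reference, a minimal formulation of the omega coefficient for a unidimensional scale, assuming a single-factor model with standardized item loadings $\lambda_i$ and item error variances $\theta_{ii}$ (a general illustration only, not necessarily the exact estimator that will be applied), is:

$\omega = \dfrac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \theta_{ii}}$

where $k$ is the number of items on the scale. Values near 1 indicate that the items share most of their variance with the common construct.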

    • Degree of accuracy needed for the purpose described in the justification,

Omega values range from 0 to 1, with 1 representing high reliability and internal consistency and 0 representing a lack of reliability. An omega value of 0.80 is considered an acceptable measure of reliability and will be used as the lower threshold. For tests of statistical significance (t-tests, chi-square tests, and, when appropriate, HLM), 95% confidence intervals will be used.

    • Unusual problems requiring specialized sampling procedures, and

We do not anticipate any unusual problems that would rise to the level of shifting to specialized sampling procedures.

    • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

Data collection cycles are determined by the timing of the grant cycle and the timing of the professional development programs. No other periodicity will be utilized.

No changes to the procedures or statistical methodology have been made since the last approval.


  3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.

Methods to Maximize Response Rates


The following are considered to be best practices for maximizing response rates, compiled from several sources (CDC, 2010; Millar & Dillman, 2011; Patton, 2008; Scantron Corp, 2014; Umbach, 2016):

  • Keep the format and content easy to understand and navigate

  • Keep the questionnaire as lean as possible so it takes the least amount of time possible to complete

  • Ensure that the questions are relevant to the respondents; allow for selecting NA as appropriate

  • Make participation voluntary, anonymous, and confidential

  • Provide advance notice that the survey is coming

  • Contact the respondent four times: a prenotification, an invitation, and two reminders

  • Include a copy of the questionnaire with the invitation, along with an estimate of how much time it will take to complete

  • Make the invitation as personal as possible while maintaining confidentiality

  • Include a deadline for completing the questionnaire

  • Publish results online for participants

While NOAA employs the above best practices to the extent practicable, it is not able to use the following best practices for this evaluation system:

  • Provide an incentive, especially a monetary one (however, the grantee is able to provide an incentive at their discretion)

  • Use mixed modes, if possible (e.g., email followed by snail mail) (only email addresses are available for contacting teachers)

  • Allow smartphone or tablet formatting, if possible (the questions are not appropriate for these formats)

Nonresponse Questionnaires

Currently, no reminders are sent after the invitation to complete the nonresponse questionnaire. Given that recipients have already received a pre-notice, an invitation, and two reminders for the initial questionnaire, only one reminder, sent two weeks after the nonresponse surveys are distributed, will be added. A deadline will be included in the invitations and reminders.


The evaluation system was designed to include post-PD and post-MWEE nonresponse questionnaires to ensure that comparisons can be made between teachers who respond to the initial questionnaires and those who do not when response rates fall below 80%. B-WET has engaged an evaluator to conduct analyses of these results.

All nonrespondents have received a one-time email invitation with a Web link to an abbreviated version of the questionnaire. Results from these questionnaires have been compared with those from respondents to the initial questionnaire to determine whether there are statistically and substantively (meaningfully) significant differences.

If the respondent and nonrespondent populations are determined not to be significantly and substantively different, no further analysis will occur. If the nonrespondent population is determined to be significantly and substantively different from the respondent population, analysis with weighted adjustments for nonresponse, using a method such as those described in Part IV of Survey Nonresponse (Groves et al., 2002), will be conducted for purposes of formally reporting/publishing results. In other instances, there will be an acknowledgement of how results from the respondent sample may differ from nonrespondents.
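
As a minimal illustration of one such adjustment, assuming a simple weighting-class approach in the spirit of the methods in Groves et al. (2002) (the specific estimator applied may differ), the base weight $w_i$ of each respondent $i$ in weighting class $c$ would be inflated by the inverse of the observed response rate in that class:

$w_i^{adj} = w_i \times \dfrac{n_c}{r_c}$

where $n_c$ is the number of invited cases and $r_c$ is the number of respondents in class $c$. Estimates computed with the adjusted weights can then be compared with unweighted estimates to gauge the sensitivity of results to nonresponse.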

  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


The majority of measures and procedures used as part of the B-WET evaluation system have been tested and successfully implemented in previous studies (e.g., “Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program,” Kraemer et al., 2007; Zint et al., 2014). In addition, an exploratory study of the benefits of MWEEs found that the scales used as part of the proposed B-WET evaluation system (examined using exploratory factor analysis in SPSS and Mplus) are reliable and valid (Zint, 2012). Reliabilities, for example, ranged from good to excellent (i.e., Cronbach’s alpha range: .70 to .90), and the amount of variance explained by the factors was substantial (i.e., range: 40% to 90%). The measures used as part of the evaluation system have also been examined for face and content validity by stakeholders consisting of the nine members of NOAA’s internal B-WET Advisory group, three evaluation experts with knowledge of B-WET, three B-WET grantees, and two watershed scientists.


  5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Individuals Consulted on Statistical Design:

Dr. Michaela Zint, Professor, School of Natural Resources & Environment, School of Education, and College of Literature, Science & the Arts at the University of Michigan, developed the statistical design for the proposed evaluation system. She, in turn, consulted with:

  • Dr. Heeringa & Statistical Design Group members, Institute for Social Research, University of Michigan

  • Dr. Lee & Dr. Rowan, School of Education, University of Michigan

  • Dr. Rutherford & Dr. West, Center for Statistical Consultation and Research, University of Michigan


Individual Who Will Conduct Data Collection and Analysis:

The evaluation system is designed to collect data through Qualtrics or a similar online survey platform that automatically generates descriptive statistics. Data may also be downloaded from the survey platform for more sophisticated analysis by Tim Zimmerman, education evaluation associate for the B-WET program.


Bronwen Rice, B-WET National Coordinator, NOAA Office of Education (Bronwen.Rice@noaa.gov, 202.482.6797) will be responsible for managing the data collection process and for ensuring the functioning and maintenance of the evaluation system.


LITERATURE CITED


CDC Department of Health and Human Services. July 2010. Evaluation briefs, No. 21. https://www.cdc.gov/healthyyouth/evaluation/pdf/brief21.pdf

Groves, R. M., Dillman, D. A., Eltinge, J. L., and Little, R. J. A. 2002. Survey Nonresponse. John Wiley & Sons, Inc.: New York.

Kraemer, A., Zint, M., and Kirwan, J. 2007. An Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program Meaningful Watershed Educational Experiences. Unpublished. http://chesapeakebay.noaa.gov/images/stories/pdf/Full_Report_NOAA_Chesapeake_B-WET_Evaluation.pdf

Millar, M. M. and Dillman, D. A. Summer 2011. Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly, Vol. 75, No. 2, pp. 249-269.

Patton, M.Q. 2008. Utilization-focused evaluation. 4th ed. Sage: Los Angeles.

Scantron Corporation. 2014. Web page. http://scantron.com/articles/improve-response-rate

Umbach, P. D. July 26, 2016. Increasing Web Survey Response Rates: What Works? Percontor, LLC. Webinar.

Zint, M. 2012. An exploratory assessment of the benefits of MWEEs: A report prepared for NOAA. University of Michigan: Ann Arbor, MI.

Zint, M., Kraemer, A. & Kolenic, G. E. 2014. Evaluating Meaningful Watershed Educational Experiences: An exploration into the effects on participating students’ environmental stewardship characteristics and the relationships between these predictors of environmentally responsible behavior. Studies in Educational Evaluation: Special Issue on Research in Environmental Education Evaluation, Vol. 41, 4-17.

