SUPPORTING STATEMENT - PART B for
OMB Control Number 0584-NEW
Study of Nutrition and Activity in Child Care Settings II (SNACS-II)
Constance Newman
Project Officer
USDA Food and Nutrition Service
1320 Braddock Place
Alexandria, VA 22314
Table of Contents
B.1 Respondent Universe and Sampling Methods
B.2 Procedures for the Collection of Information
B.3 Methods to Maximize the Response Rates and to Deal with Nonresponse
B.4 Test of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects & Individuals Collecting and/or Analyzing Data
B.1 Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
The respondent universe for SNACS-II includes (a) the 48 contiguous States and the District of Columbia (DC), (b) Child and Adult Care Food Program (CACFP) programs (child care centers, Head Start centers, family day care homes (FDCHs), at-risk after-school centers (called at-risk centers henceforth), and outside school hours care centers (OSHCCs)) from the sampled States and their sponsoring organizations if applicable, and (c) children and parents enrolled in the sampled programs. Through about 20,000 sponsoring organizations, the CACFP serves over 3.7 million children daily at over 66,000 child care centers.1 Table B.1.1 summarizes the universe, sample, and expected response rates for each respondent type (sampling stage or substrata) and overall. The efforts designed to ensure a high response rate are described in the response to Question B.3.
Sampling Overview
The overall objective of the sampling plan is to provide nationally representative samples of CACFP programs; children, teens, and infants served by CACFP programs; and CACFP meals and snacks for program year (PY) 2022–2023. The sampling methods for SNACS-II will mirror those of SNACS-I (OMB Control Number 0584-0615, expired 10/31/2019), and the respondent universe will be similar, to ensure comparability of estimates across the two studies and provide required levels of statistical precision, while minimizing data collection costs and respondent burden. In addition, the SNACS-II design addresses specific challenges faced in SNACS-I.
We will use a multi-stage stratified cluster sampling design. In the first stage of selection, the primary sampling unit (PSU) will be the State; we will select a nationally representative probability sample of 25 States. In the second stage, we will select a sample of core-based statistical areas (CBSAs)2 and clusters of non-CBSA counties from the selected States as secondary sampling units (SSUs). In the third stage, we will sample CACFP programs3 within sampled SSUs. In the fourth and final stage, we will sample children, teens, and infants who are served by the sampled programs. We describe the sampling plan in more detail in Appendix O.
Table B.1.1. SNACS-II respondent universe, samples, and expected response rate by respondent category.
Respondent category | Universe* | Initial sample** | Children sampled from eligible consents | Expected overall eligibility and response rate*** | Target number of respondents | Overall eligibility and response rate from SNACS-I***
CACFP programs: | 158,999 i | 2,126 | -- | 63% | 1,340 | 41%
Sponsored non-Head Start child care centers | 18,723 ii | 231 | -- | 64% | 148 | 48%
Sponsored Head Start child care centers | 11,218 i | 484 | -- | 64% | 310 | 54%
Independent child care centers | 20,046 iii | 253 | -- | 64% | 162 | 48%
Family day care homes | 87,398 i | 500 | -- | 64% | 320 | 23%
At-risk centers | 19,209 iv | 345 | -- | 58%**** | 200 | 44%
Outside-school-hours care centers (OSHCCs) | 2,405 i | 313 | -- | 64% | 200 | 36%
State agency | 49 | 25 | -- | 100% | 25 | 100%
Children/parents enrolled in provider programs | 5,169,705 i | 5,880 | 2,880 | 75% | 2,160 | 68%
Sponsored non-Head Start child care centers | 1,357,528 v | 602 | 344 | 75% | 258 | 65%
Sponsored Head Start child care centers | 416,944 i | 1,260 | 720 | 75% | 540 | 83%
Independent child care centers | 1,453,515 vi | 658 | 376 | 75% | 282 | 65%
Family day care homes | 640,061 i | 1,680 | 480 | 75% | 360 | --
At-risk centers | 1,215,183 vii | 840 | 480 | 75% | 360 | 53%
OSHCCs | 86,474 i | 840 | 480 | 75% | 360 | 54%
Parents (of Youth age 10 and older) | 5,169,705 viii | 960 | 549 | 75% | 411 | 68%
Parents (of Infants) | | 695 | 400 | 75% | 300 | 68%
Total | 10,498,458 | 9,686 | 3,829 | -- | 4,236 | --
*See the endnotes for more details on the calculations of the universes of the respondent categories.
**For children/parents, the initial sample is the initial number we will attempt to consent.
***For programs, the overall eligibility and response rate is a combination of the eligibility rate, the recruitment response rate, and the completion rate. The completion rate reflects the percentage of recruited respondents who complete the data collection activities. For example, we estimate an 80 percent recruitment rate for programs and an 80 percent completion rate among the programs we recruit, resulting in an overall response rate of 64 percent (.80*.80 = .64). For children and parents, since we attempt to consent all children/parents before selecting the sample of children/parents, the rates we present in this column reflect only the response rate among eligible, consented children/parents. For children/parents enrolled in provider programs and the parents of youth age 10 and older, we expect about 71 percent of the initial sample to consent and be eligible. For parents of infants, we expect about 86 percent of the initial sample to consent and be eligible. For children/parents enrolled in provider programs, we expect to sample 69 percent on average of the eligible consents. For the parents of youth age 10 and older, we expect to sample 80 percent on average of the eligible consents. For parents of infants, we expect to sample about 67 percent on average of the eligible consents. The SNACS-I response rates are based on available SNACS-I documentation and may not reflect the response rate formula that will be used for SNACS-II. However, based on the study team’s experiences in prior similar studies and our understanding of SNACS-I response rates and sampling issues, we are confident in the rates. The use of systematic sampling should reduce response burden, and increased contact planned during recruitment using a team of experienced and knowledgeable recruiters should increase response. These strategies, in addition to other features of the sampling and recruitment plans, should help to meet the target response rates.
**** The overall rate for at-risk centers incorporates an expected 10 percent ineligibility rate. For other types of programs, the ineligibility rate is expected to be very small.
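To make the arithmetic behind these rates concrete, the following sketch (illustrative only; the function and variable names are hypothetical, and the component rates are those described in the table notes above) shows how the component rates combine into the overall figures in Table B.1.1.

# Illustrative sketch only: combining the component rates from the Table B.1.1 notes.
def overall_program_rate(recruitment_rate, completion_rate, eligibility_rate=1.0):
    """Overall eligibility and response rate for programs."""
    return eligibility_rate * recruitment_rate * completion_rate

def expected_child_completes(initial_consents, consent_eligible_rate, sampling_rate, response_rate):
    """Expected completed child-level cases from an initial consent attempt."""
    eligible_consents = initial_consents * consent_eligible_rate
    sampled_children = eligible_consents * sampling_rate
    return sampled_children * response_rate

print(overall_program_rate(0.80, 0.80))          # programs: 0.80 x 0.80 = 0.64
print(overall_program_rate(0.80, 0.80, 0.90))    # at-risk centers: ~0.58 with 10% ineligibility
# Children/parents: 5,880 consent attempts, ~71% consent and are eligible,
# ~69% of eligible consents sampled, 75% of sampled children complete: ~2,160 completes.
print(round(expected_child_completes(5880, 0.71, 0.69, 0.75)))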
B.2 Procedures for the Collection of Information

Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
Detailed descriptions of the SNACS-II data collection activities can be found in Section A.2 of Supporting Statement A and Appendix B (including the data collection mode and changes to instruments since SNACS-I). Before data collection begins in sampled States, we will recruit the programs, their sponsors, and parents of children, infants, and teens to participate in the study. A team of experienced recruiters will use mail, email, telephone, and a study website to recruit participants. Appendix P summarizes the recruitment procedures.
The complete sampling plan is in Appendix O. The following section summarizes the SNACS-II sampling plan.
Stage 1: Selecting States: In the first sampling stage, we will select a national probability sample of 25 States. In comparison with the 20 States sampled in SNACS-I, this larger number of PSUs is expected to improve the precision of estimates by reducing the design effect due to intra-State correlation. We will select States using a stratified systematic probability proportionate to size (PPS) design, which supports an overall self-weighting sample of programs within domains of interest to minimize the design effect. We will oversample States with a higher proportion of rural counties or areas to ensure adequate sample sizes of rural programs. In addition, we will sample at least one State in each of the seven Food and Nutrition Service regions.
Stage 2: Selecting the secondary sampling units (SSUs). We will select a stratified PPS systematic sample of geographically defined SSUs within the sampled States. The first set of SSUs to define will be CBSAs, including metropolitan areas and micropolitan areas.4 For non-CBSA counties, we will require a minimum of 12 CACFP programs for each SSU; we will combine contiguous counties as needed until we reach this minimum. In SNACS-I, SSUs were defined to have approximately 30 listed providers.5 However, we may adjust the minimum number based on the distribution of programs across the SSUs after we collect the lists of programs from the sampled States.
The selection of SSUs will use a measure of size (MOS) based directly on program average daily attendance (ADA) instead of basing it on Census information about children in poverty (as was done in SNACS-I). This approach is expected to yield more precise estimates. The MOS used in sampling SSUs will be the aggregated ADA within the SSU.
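Both the State and SSU selections use stratified PPS systematic sampling. As a rough sketch of the general technique (not the production sampling code; the frame, MOS values, and function below are hypothetical), a PPS systematic selection with an ADA-based MOS can be implemented as follows. Units whose MOS exceeds the sampling interval would normally be set aside as certainty selections before the systematic pass.

import random

def pps_systematic_sample(units, mos, n):
    """Select n units with probability proportionate to size (PPS) systematic sampling.
    units: unit identifiers (e.g., SSUs); mos: measures of size (e.g., aggregated ADA)."""
    total = float(sum(mos))
    interval = total / n                     # sampling interval on the cumulative MOS scale
    start = random.uniform(0, interval)      # random start
    targets = [start + k * interval for k in range(n)]
    picks, cum, t = [], 0.0, 0
    for unit, size in zip(units, mos):       # frame assumed sorted for implicit stratification
        cum += size
        while t < len(targets) and targets[t] <= cum:
            picks.append(unit)
            t += 1
    return picks

ssus = ["SSU-A", "SSU-B", "SSU-C", "SSU-D", "SSU-E", "SSU-F", "SSU-G", "SSU-H"]
ada = [1200, 450, 300, 900, 800, 650, 700, 400]   # hypothetical aggregated ADA per SSU
print(pps_systematic_sample(ssus, ada, 4))

Under this scheme each unit's selection probability is proportional to its MOS (n times its MOS divided by the total MOS), which is what the weighting described later inverts.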
Stage 3: Selecting programs. In sampling programs of providers, SNACS-II is designed to: (1) minimize the chances of sampling more than one provider from a single sponsor to minimize intra-class correlation and reduce sponsors’ response burden; and (2) minimize the sampling of more than one type of program from the same provider to minimize the response burden on providers.
In each sampled SSU, we will stratify the list of CACFP programs into seven mutually exclusive groups by program type: (1) sponsored child care centers; (2) independent child care centers; (3) Head Start centers; (4) FDCHs; (5) at-risk centers; (6) OSHCCs; and (7) programs associated with providers that have multiple programs of different types. Within each of the seven strata, providers may appear multiple times in the list—once for each program they operate. Programs will be sorted by the ADA of their sponsor (where applicable) and, within that, by their provider’s ADA, and then by their own program ADA. We will then draw a systematic sample of programs, with equal selection probabilities within each stratum. Most of the sampled programs will be selected from the first six strata because most programs are within CACFP providers that operate only one type of program.6
The sampling approach will help ensure that providers in different geographical areas with different program sizes are represented, and will minimize the possibility of sampling multiple programs of the same type from the same provider or sponsor. The sorting by the sponsor’s ADA means providers of a given type who are associated with the same sponsor will be grouped together in the sorted list, and when a systematic sample of programs of providers from that sorted list is selected later, the likelihood of sampling multiple programs of the same type from the same sponsor is minimized.
We will clearly link programs sampled in the seventh stratum to the providers they operate under, and this distinction of which of their program(s) were sampled will be communicated to the provider when they are recruited into the study. For example, providers who operate both a child care center and an at-risk center will know we are asking them to provide data (and, where applicable, cooperate with onsite data collection) for the child care center or the at-risk center—whichever program was sampled—not both. If, during recruitment or data collection, a program sampled from Strata 1 through 6 is found to operate under a provider that operates more than one type of program, we will select one of the programs randomly (we will apply a weighting adjustment for this subsampling).
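The within-stratum selection described above (sort by sponsor ADA, then provider ADA, then program ADA, and take an equal-probability systematic sample) might look like the following sketch; the record layout and field names are hypothetical, and the handling of the seventh stratum is omitted.

import random

def sorted_systematic_sample(programs, n):
    """Equal-probability systematic sample from a sorted frame for one program-type stratum."""
    # Implicit stratification: sort by sponsor ADA, then provider ADA, then program ADA.
    frame = sorted(programs, key=lambda p: (p["sponsor_ada"], p["provider_ada"], p["program_ada"]))
    interval = len(frame) / n
    start = random.uniform(0, interval)
    return [frame[int(start + k * interval)] for k in range(n)]

# Hypothetical stratum of sponsored child care centers.
stratum = [{"id": f"prog-{i}",
            "sponsor_ada": random.randint(50, 2000),
            "provider_ada": random.randint(20, 300),
            "program_ada": random.randint(10, 150)} for i in range(200)]
print([p["id"] for p in sorted_systematic_sample(stratum, 12)])

Because programs under the same sponsor sort adjacently, the fixed skip interval makes it unlikely that two programs of the same type are drawn from one sponsor, which is the property the sampling plan relies on.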
We will select a supplementary sample of center-based programs to achieve desired levels of precision for the meal cost estimates. This supplementary sample is needed because FDCHs are excluded from the meal cost estimates. We will select a supplementary sample of 144 center-based programs to contribute data for Objective 6 (60 child care centers, 60 Head Start centers, 12 at-risk centers, and 12 OSHCCs). We will do this using the same approach used to select programs for the main sample, but we will restrict the sampling frame to programs that are not selected into the main sample. The weights for analyses of meal costs will take into account the probability of selection at each phase: selection into the main sample, and selection into the supplementary sample. Combined with the sample of 300 center-based programs sampled for onsite data collection under Objectives 3a and 3b, this will yield a total sample of 444 center-based programs for the meal cost data collection (150 child care centers, 150 Head Start centers, 72 at-risk centers, and 72 OSHCCs).
Stage 4: Selecting children: We will select a subsample of 420 of the 1,340 programs in the sample for onsite data collection. Specifically, when we select the sample of programs, we will designate random subsamples of programs of each type as part of the onsite subsample. In child care centers, Head Start centers, and FDCHs, we will focus the child sample on the primary age groups served by these programs—ages 1 to 5. Similarly, in at-risk centers and OSHCCs, we will focus the child sample on the primary age group served by these programs—ages 6 to 12. This focused selection of children will avoid a problem encountered in SNACS-I, where children outside these age ranges were allowed into the sample but, because so few of them ended up in the sample, ultimately contributed little to analyses of child-level outcomes. The inclusion of children (ages 1 to 5) from FDCHs to assess their dietary intake on days they are in child care and on a day when they are not in child care differs from the SNACS-I sampling approach. Also different from the SNACS-I sampling approach, the respondent universe in SNACS-II includes teenage participants (ages 10–18) who attend at-risk afterschool centers or OSHCCs and will be asked to participate in the Food and Physical Activity Experiences Survey (Appendices F21/F22).
In addition to the sampling of children and infants within classrooms, SNACS-II will collect data from 720 teens—defined as ages 10 to 18—in at-risk centers and OSHCCs. We will include all teens from the 60 at-risk centers and 60 OSHCCs participating in child-level data collection. Additionally, we will select one classroom with children ages 6 to 12 in each of these at-risk centers and OSHCCs to complete these data collection activities. We will also include some of the children sampled for Objectives 3a and 3b—specifically, children ages 10 to 12—in the teen study. To reach the 720 completes needed for the teen study, we will sample one additional classroom in each at-risk center and OSHCC, attempt to obtain consent for all teens in the sampled classrooms, and sample additional teens per classroom (among the consented teens). The number of additional teens needed to reach the targeted number of completes will depend on the number of youth ages 10 to 12 who are sampled for Objectives 3a and 3b. Based on the actual observed distribution of teens across at-risk centers and OSHCCs, we will revise the sampling approach as needed. For example, we may revise the selection of classrooms or the number of teens selected per classroom. We will increase or reduce the number of teens sampled exclusively for the teen study, factoring in the expected response rate, to ensure we achieve the target number of completes.
We will compute analysis weights at the program and child levels for each instrument or combination of instruments, consistent with proposed analysis plans and completion rates. We will design the weights to bring the weighted distribution of the sample back in line with the population distribution and to significantly reduce, if not eliminate, the potential for bias resulting from nonresponse. The various analysis weights comprise base weights that account for selection probabilities and adjustments to those weights for nonresponse.
The base weight for each stage of selection also accounts for the sampling probabilities of prior selection stages and any nonparticipation in those prior stages. For example, the base weight for a program is the inverse of the probability of selection for the program, and will be the product of the PSU adjusted sampling weight, SSU adjustment weight, and the program sampling weight. We will then adjust these cumulative base weights for program nonresponse. We will compute the nonresponse adjustment factors within subsets of programs referred to as “weighting cells.” These cells will likely be based on variables or the propensity scores resulting from logistic regression models that predict the likelihood of responding (an alternative approach is chi-square automatic interaction detection [CHAID]). Possible covariates in these models may include variables such as geography, level of urbanicity, type and size of program, and other program characteristics. For child-level weights, an examination of whether factors such as gender and age (if made available to us by the programs) are correlated with both child-level response propensity and the child-level outcomes will inform whether we should include those factors in child-level nonresponse modeling and associated cell creation. To compute the child-level weight, we will start with the program weight, make a child-level adjustment for inability to obtain consent and then compute and apply the factor for sampling children among the consented children in the program. When the response rate for a particular program or provider is high, we may consider the use of a within-program adjustment to take advantage of the correlations among those children without introducing large weighting effects.
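A highly simplified sketch of these weighting steps is shown below (the stage probabilities, weighting cells, and field names are hypothetical, and the production adjustments may use propensity models or CHAID rather than simple cells).

import pandas as pd

# Hypothetical program file: selection probability at each stage, weighting cell, response flag.
programs = pd.DataFrame({
    "state_prob":   [0.50, 0.50, 0.25, 0.25, 0.25, 0.40],
    "ssu_prob":     [0.20, 0.20, 0.10, 0.10, 0.30, 0.25],
    "program_prob": [0.05, 0.05, 0.02, 0.02, 0.04, 0.05],
    "cell":         ["urban", "urban", "rural", "rural", "rural", "urban"],
    "responded":    [1, 0, 1, 1, 0, 1],
})

# Base weight: inverse of the cumulative selection probability across the three stages.
programs["base_wt"] = 1.0 / (programs["state_prob"] * programs["ssu_prob"] * programs["program_prob"])

# Nonresponse adjustment within cells: responding programs absorb the weight of nonrespondents.
total_by_cell = programs.groupby("cell")["base_wt"].sum()
resp_by_cell = programs[programs["responded"] == 1].groupby("cell")["base_wt"].sum()
programs["nr_adj"] = programs["cell"].map(total_by_cell / resp_by_cell)
programs["analysis_wt"] = programs["base_wt"] * programs["nr_adj"] * programs["responded"]
print(programs[["cell", "responded", "base_wt", "analysis_wt"]])

The same pattern extends to the child level: start from the program's analysis weight, adjust for consent, and multiply by the inverse of the within-program child sampling rate.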
In addition to these “full sample” analysis weights, we will attach a series of jackknife replicate weights to each data record for variance estimation. In addition to the replicate weights, we will also provide stratum and unit codes in the data files to permit calculation of standard errors using the full sample weights with Taylor Series approximations.
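As an illustration of how analysts could use such replicate weights, the following is a minimal paired-jackknife sketch with hypothetical data; the production replication scheme and number of replicates may differ.

import numpy as np

# Hypothetical respondent file: variance stratum, variance unit (1 or 2), full-sample weight, outcome.
stratum = np.array([1, 1, 1, 1, 2, 2, 2, 2])
unit    = np.array([1, 1, 2, 2, 1, 1, 2, 2])
weight  = np.array([10.0, 12.0, 9.0, 11.0, 8.0, 10.0, 12.0, 9.0])
y       = np.array([0, 1, 1, 0, 1, 1, 0, 1], dtype=float)

def weighted_mean(values, w):
    return np.sum(w * values) / np.sum(w)

# Paired jackknife: for each variance stratum, zero out one unit and double the other.
replicate_estimates = []
for h in np.unique(stratum):
    rep_wt = weight.copy()
    rep_wt[(stratum == h) & (unit == 1)] = 0.0
    rep_wt[(stratum == h) & (unit == 2)] *= 2.0
    replicate_estimates.append(weighted_mean(y, rep_wt))

full_estimate = weighted_mean(y, weight)
variance = sum((est - full_estimate) ** 2 for est in replicate_estimates)
print(full_estimate, np.sqrt(variance))   # point estimate and jackknife standard error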
We will select some States with certainty if their MOS is sufficiently large relative to other States in the same stratum.7 For the other (noncertainty) States, the variance strata will be the same as the sampling strata, and the variance PSUs will be the selected States. We will pair the non-CBSA SSUs across certainty States according to their geographic locations to form variance strata. We will arrange the CBSA SSUs within the certainty States into pairs according to their geographic locations to form variance strata. We will provide information that permits analysts to account for the finite population correction (due to the high sampling rates of States in the first stage of selection) when computing variance estimates. We expect the resulting variance estimates to be overestimates because of the cross-State pairing of the non-CBSA SSUs in certainty States, given that we will select only a small number of non-CBSA SSUs. Also, the systematic selection of CBSA SSUs in the certainty States reduces the true variance in a way that cannot be captured, also resulting in a conservative variance estimate.
Finally, we will conduct a nonresponse bias analysis for any data collection component with a unit response rate below 80 percent or, where applicable, a cumulative response rate below 80 percent. The goals of nonresponse bias analysis are to assess the extent to which (1) nonresponse has introduced an appreciable risk of bias, and (2) the weighting process has corrected for any such risk. Procedures to achieve these goals include (1) identifying factors that are available for both respondents and nonrespondents and that are associated with nonresponse, (2) determining which of these factors are also associated with key study variables, (3) using this information to run response propensity models and form cells based on the resulting covariates or propensity scores, and (4) checking to see if the weighting process corrected imbalance on characteristics associated with both nonresponse and key study variables.
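The core of such an analysis can be sketched as follows (the frame variables are hypothetical; the actual models, covariates, and cell definitions will depend on what is available for both respondents and nonrespondents).

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical frame with characteristics known for respondents and nonrespondents.
frame = pd.DataFrame({
    "responded": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "urban":     [1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "ada":       [45, 120, 30, 60, 200, 80, 25, 150, 40, 95],
})

# Steps 1-2: compare frame characteristics of respondents and nonrespondents.
print(frame.groupby("responded")[["urban", "ada"]].mean())

# Step 3: response propensity model; propensity scores can then define weighting cells.
X, resp = frame[["urban", "ada"]], frame["responded"]
frame["propensity"] = LogisticRegression(max_iter=1000).fit(X, resp).predict_proba(X)[:, 1]
frame["cell"] = pd.qcut(frame["propensity"], q=2, labels=["low", "high"])
print(frame.groupby("cell", observed=True)["responded"].mean())

Step 4 would then repeat the respondent/nonrespondent comparisons using the nonresponse-adjusted weights to check whether the imbalance on characteristics associated with key study variables has been removed.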
For the full program sample, the goal is a total of 1,340 participating programs. Assuming a conservative intra-PSU+SSU correlation of 10 percent and a design effect from weighting (DW) of 1.5,8 this will result in an overall design effect of about 1.7 to 1.9 for each key program subgroup.9 The half-width of a 95 percent confidence interval (CI) is 7.7 to 10.3 percentage points for each key program subgroup, and 3.7 percentage points for all program types combined (Table B.2.2). All of the key subgroups and most of the other subgroups have half-widths of 10 percentage points (rounded) or less. Subgroups within the sponsored center subgroup have larger half-widths because some of these center types are rare (that is, they represent small percentages of the universe and the sample of CACFP programs).10
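For reference, the half-widths in Table B.2.2 follow the standard approximation for a proportion near 50 percent (a sketch of the calculation, not necessarily the exact production formula): half-width ≈ 1.96 × sqrt(DEFF × p(1 − p)/n) × 100. For FDCHs, for example, DEFF = 1.5 × [1 + (4.0 − 1) × 0.10] ≈ 1.95 and n = 320, giving 1.96 × sqrt(1.95 × 0.25/320) × 100 ≈ 7.7 percentage points, which matches the table.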
Table B.2.2. Precision levels for program subgroups: Objectives 1 and 2
CACFP program type | Program completes | Programs per SSU | Overall design effect | 95 percent CI half-width (percentage points)
Total | 1,340 | | | 3.7
Key subgroups
FDCHs | 320 | 4.0 | 1.9 | 7.7
Head Start centers | 310 | 3.9 | 1.9 | 7.7
Child care centers | 310 | 3.9 | 1.9 | 7.7
Independent centers | 162 | 2.0 | 1.7 | 9.9
Sponsored centers | 148 | 1.9 | 1.6 | 10.3
At-risk centers | 200 | 2.5 | 1.7 | 9.1
OSHCCs | 200 | 2.5 | 1.7 | 9.1
Other subgroups
Urbanicity of child care centers, Head Start centers and FDCHs
Rural | 235 | 2.9 | 1.8 | 8.6
Urban | 705 | 8.8 | 2.7 | 6.0
Sponsorship of sponsored centers
Sponsored-affiliated | 92 | 1.1 | 1.5 | 12.6
Sponsored-unaffiliated | 56 | 0.7 | 1.5 | 16.0
Cooperate/chain | 64 | 0.8 | 1.5 | 15.0
Other sponsored | 84 | 1.1 | 1.5 | 13.2
Size of center, for child care centers and Head Start centers
Small centers | 207 | 2.6 | 1.7 | 9.0
Medium centers | 207 | 2.6 | 1.7 | 9.0
Large centers | 207 | 2.6 | 1.7 | 9.0
Tier of FDCH
FDCH Tier I | 160 | 2.0 | 1.7 | 10.0
FDCH Tier II | 160 | 2.0 | 1.7 | 10.0
Note: Assumes 25 PSUs and 80 SSUs; weighting design effect =1.5; intra-PSU+SSU correlation = .10; and variable population percent = 50. Details may not sum to totals due to rounding.
CACFP = Child and Adult Care Food Program; CI = confidence interval; FDCH = family day care home; OSHCC = outside school hours care center; SSU = secondary sampling unit.
At the child level (for Objectives 3a and 3b), we assume an intra-program correlation of 30 percent.11 This is a relatively conservative assumption that should cover most of the survey variables. For the full sample of children, the half-width is 4.1 percentage points, and for children within the key program subgroups, all half-widths are 11.8 percentage points or less (Table B.2.3). Half-widths for children in some of the other program subgroups are larger; however, half-widths for children in the non-key subgroups—urban and rural center programs and small, medium, and large centers—are 10 percentage points or less.
The precision of the estimates generated for the teen subgroups (for Objective 3c; not shown) will be the same as the precision of estimates for the younger children in the at-risk centers and OSHCCs (10 percentage points). For plate waste estimates for Objective 4 (not shown), the precision of estimates will be better than the precision of estimates for children in Objectives 3a and 3b because the analysis will use additional meal observations that will be collected from children but not used for Objective 3a. For the sample of infants (Objective 5; not shown), we compute a half-width of a 95 percent CI of 8.0 percentage points.
Table B.2.3. Precision levels for child-level estimates by program subgroup: Objectives 3a and 3b
CACFP program type | Total responding parents/caregivers | Overall design effect | 95 percent CI half-width (percentage points)
Total | 2,160 | | 4.1
Key subgroups
FDCHs | 360 | 2.8 | 8.6
Head Start centers | 540 | 4.0 | 8.5
Child care centers | 540 | 4.0 | 8.5
Independent centers | 282 | 3.8 | 11.3
Sponsored centers | 258 | 3.8 | 11.8
At-risk centers | 360 | 3.8 | 10.0
OSHCCs | 360 | 3.8 | 10.0
Other subgroups
Urbanicity of child care centers, Head Start centers and FDCHs
Rural | 360 | 3.2 | 9.3
Urban | 1,080 | 6.0 | 7.3
Sponsorship of sponsored centers
Sponsored-affiliated | 162 | 3.8 | 14.9
Sponsored-unaffiliated | 96 | 3.8 | 19.4
Cooperate/chain | 114 | 3.8 | 17.9
Other sponsored | 144 | 3.8 | 15.7
Size of center, for child care centers and Head Start centers
Small centers | 360 | 3.8 | 10.0
Medium centers | 360 | 3.8 | 10.0
Large centers | 360 | 3.8 | 10.0
Tier of FDCH
FDCH Tier I | 180 | 2.4 | 11.3
FDCH Tier II | 180 | 2.4 | 11.3
Note: Assumes 25 PSUs and 80 SSUs; weighting design effect =1.5; intra-PSU+SSU correlation = .10; intra-program correlation = .30; and variable population percent = 50. Details may not sum to totals due to rounding.
CACFP = Child and Adult Care Food Program; CI = confidence interval; FDCH = family day care home; OSHCC = outside school hours care center.
For estimating meal costs (Objective 6), we assume an intra-PSU+SSU correlation of 10 percent and a DW of 1.5, which results in a design effect of 1.6 for child care and Head Start centers and 1.5 for at-risk centers and OSHCCs.12 We also assume a population standard deviation of 35 percent of the mean for meal costs. This is a very conservative benchmark; the range of costs across programs should generally be tighter. The half-width of a 95 percent CI is 4.1 percentage points for the full sample of programs, 7.2 percentage points for child care centers and Head Start centers, and 9.9 percentage points for at-risk centers and OSHCCs (Table B.2.4).
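For reference, with the standard deviation expressed as a percentage of the mean, the half-width can be approximated as half-width ≈ 1.96 × sqrt(DEFF/n) × 35 (a sketch of the calculation, not necessarily the exact production formula). For at-risk centers and OSHCCs, for example, DEFF = 1.5 and n = 72, giving 1.96 × sqrt(1.5/72) × 35 ≈ 9.9, which matches Table B.2.4.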
Table B.2.4. Precision levels for estimates of meal costs by program subgroup: Objective 6
CACFP program type | Meal cost program subsample size | Programs per SSU | Overall design effect | 95 percent CI half-width (percentage points)
Total | 444 | | | 4.1
Key subgroups
Child care centers | 150 | 1.9 | 1.6 | 7.2
Independent centers | 77 | 1.0 | 1.5 | 9.5
Sponsored centers | 73 | 0.9 | 1.5 | 10.0
Head Start centers | 150 | 1.9 | 1.6 | 7.2
At-risk centers | 72 | 0.9 | 1.5 | 9.9
OSHCCs | 72 | 0.9 | 1.5 | 9.9
Other subgroups
Urbanicity of child care centers, Head Start centers and FDCHs
Rural | 75 | 0.9 | 1.5 | 9.7
Urban | 225 | 2.8 | 1.8 | 6.1
Sponsorship of sponsored centers
Sponsored-affiliated | 45 | 0.6 | 1.5 | 12.6
Sponsored-unaffiliated | 27 | 0.3 | 1.5 | 16.1
Cooperate/chain | 31 | 0.4 | 1.5 | 15.1
Other sponsored | 41 | 0.5 | 1.5 | 13.2
Size of center, for child care centers and Head Start centers
Small centers | 100 | 1.3 | 1.5 | 8.5
Medium centers | 100 | 1.3 | 1.5 | 8.5
Large centers | 100 | 1.3 | 1.5 | 8.5
Notes: Assumes 25 PSUs and 80 SSUs; weighting design effect =1.5; intra-PSU+SSU correlation = .10; and variable population percent = 35 percent. Details may not sum to totals due to rounding.
CACFP = Child and Adult Care Food Program; CI = confidence interval; OSHCC = outside school hours care center; SSU = secondary sampling unit.
There are no unusual problems that require specialized sampling procedures.
We will conduct the data collection effort only once, during PY 2022–2023, so the use of periodic data collection cycles to reduce burden is not applicable.
B.3 Methods to Maximize the Response Rates and to Deal with Nonresponse

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
Our plans for maximizing response rates begin during the recruiting phase. All communications with respondents will be courteous, supportive, and informative, and designed to put respondents at ease, address any questions or concerns they might have, and gain their cooperation. Communications will emphasize the important role each respondent plays in the study and acknowledge their time and effort. For applicable activities, the communications will describe incentives (see Section A.9 of Supporting Statement A). We will conduct communication through multiple modes, including mail, email, phone, text messages, and the study website, and materials for FDCHs, parents/guardians, and teens will be available in English and Spanish.
Another key aspect of achieving high response rates is training data collectors well and carefully monitoring their work. Trainings will emphasize strategies for gaining cooperation, effectively providing assistance or administering interviews, and working with children. Data collection supervisors will communicate with their teams regularly to identify and resolve problems quickly. In addition, because FDCH operators may be afraid to let strangers into their homes,13 we will provide interviewers who visit FDCHs with colorful name tags as well as tote bags, caps, or aprons branded with the study name and logo. We will instruct interviewers to make sure the study name and logo are visible if providers look outside to see who is knocking at the door.
The study team will administer instruments in the mode that is least expensive and expected to be most convenient for respondents. Bilingual interviewers and Spanish versions of instruments will be available. When on site, interviewers will accommodate the scheduling needs of the program to minimize disruptions.
Throughout data collection, the study team will monitor response rates carefully to identify any subgroups with lagging rates and promptly follow up on incomplete instruments to avoid potential nonresponse bias. For example, if FDCHs that are not part of the onsite sample have lower response rates to the Provider Survey or Menu Survey, the study team will send additional email reminders, conduct additional phone follow-up to encourage participation, and offer to help them complete the Provider Survey over the phone. The study team will also text and call parents regularly to remind them of the dietary intake interviews. The study team will share information with interviewers about instruments being completed offsite so they can follow up as needed to answer questions or connect sample members with the appropriate toll-free number, email address, or web login information.
B.4 Test of Procedures or Methods to be Undertaken

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
Most of the instruments and procedures prepared for SNACS-II were used in SNACS-I, and the study team has firsthand experience fielding very similar instruments. The study team therefore conducted a streamlined pretest, focused on instruments with new content or procedures, in November and December 2020.
The study team conducted the pretest with four providers (one FDCH, one sponsored center, one independent center, and one at-risk center), two parents/guardians, and two youth. The providers pretested the Provider Survey and Sponsor/Center Cost Interview, and also provided feedback on the meal observation procedures, Infant Intake Form incentive plans, and the study recruiting materials. The parents/guardians pretested the Parent Interview, and the youth pretested the Food and Physical Activity Experiences Survey. Appendix Q contains the memorandum summarizing the pretest procedures, findings, and resulting instrument changes. All changes resulting from the pretest are reflected in the current study materials.
B.5 Individuals Consulted on Statistical Aspects & Individuals Collecting and/or Analyzing Data

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
Mathematica and Westat will collect and analyze the information, in coordination with FNS. Table B.5.1 lists the individuals consulted on the data collection instruments, procedures, or statistical aspects of the design.
Table B.5.1. Individuals consulted on data collection or analysis
Name | Title | Affiliation | Telephone number
Barbara Carlson, M.A. | Director of Survey Statistics | Mathematica | 617-674-8372
Jill DeMatteis, Ph.D. | Vice President and Associate Director of Statistical Staff | Westat | 301-517-4046
Sarah Forrestal, Ph.D. | Senior Researcher | Mathematica | 312-994-1017
Mary Kay Fox, M.Ed. | Senior Fellow | Mathematica | 617-301-8993
Elizabeth Gearan, M.S. | Senior Researcher | Mathematica | 617-301-8978
Geri Henchy | Director of Nutrition Policy and Early Childhood Programs | Food Research and Action Center | 202-986-2200
Jianzhu (Jane) Li, Ph.D. | Senior Statistician | Westat | 240-314-2566
Roline Milfort, Ph.D., P.M.P. | Senior Study Director | Westat | 301-251-8229
Catherine Stafford | Child Health and Nutrition Manager | CocoKids | 925-899-5885
Susan Hooker | Executive Director | Concern for Youth | 607-324-0808
Mingshan Zheng | Mathematical Statistician | National Agricultural Statistics Service | 202-720-0830
Constance Newman, PhD | Senior Research Analyst | FNS | 703-305-2576
Alice Ann Gola, PhD | Social Science Research Analyst | Formerly at FNS |
Endnotes
1 Source: FNS National Data Bank (NDB), last accessed on December 12, 2019, at https://fns-prod.azureedge.net/sites/default/files/data-files/Keydata-August-2019.pdf.
2 Metropolitan and micropolitan statistical areas are collectively referred to as core-based statistical areas. Metropolitan statistical areas have at least one urbanized area with a population of 50,000 or more. Micropolitan statistical areas have at least one urban cluster with a population of at least 10,000 but less than 50,000.
3 Although we will not be formally sampling sponsors, we will coordinate with sponsors during recruitment. For sponsored providers, we will contact sponsors to gain their cooperation before contacting any providers. We will also collect cost-related data from sponsors as needed.
4 CBSAs that split across States will include only the part of the CBSA within the sampled State.
5 In SNACS-I, the number of providers in each area was not available prior to SSU selection and therefore was estimated using data on low-income children from the ACS and Childcare Aware of America (2012) with further adjustments for whether the SSU is urban or rural.
6 In SNACS-I, which did not take the issue of multiple providers with multiple types of programs into account in sampling providers, only 51 of more than 3,000 sampled providers operated more than one program. Although this is a small proportion overall, it is important to plan for this situation. In SNACS-I, only 20 of these 51 providers completed any data collection activities, and none provided data for both of the programs they were sampled for. (Source: SNACS-I summary memorandum on recruitment; April 2, 2018.)
7 State MOS is the total number of children ages 5 to 18 living in low-income households, defined as households with annual incomes less than 100 percent of the Federal poverty level, based on the most recently available year of American Community Survey (ACS) 1-year State-level estimates. This is used as a proxy for State aggregated CACFP average daily attendance because not all States will have this data available.
8 This weighting design effect used throughout (1.5) is meant to be very conservative, accounting for both differential sampling rates and nonresponse adjustments.
9 The overall design effect is DEFF = DW × [1 + (b − 1)ρ], where DW is 1.50, b (the average cluster size, or programs per SSU) = 4, 3.875, or 2.5, and ρ (the intra-PSU+SSU correlation) = 10 percent.
10 Because we are using conservative assumptions with respect to the expected design effects, it is quite possible that some of these subgroups will meet the target of 10 percentage points when we are able to incorporate more information from SNACS-I.
11 The overall design effect is computed as the product of DW (1.50) and additional clustering factors (1.50 or 1.13, depending on the program type), using the approximating formula from Skinner et al. (1989).
12 The overall design effect is DEFF = DW × [1 + (b − 1)ρ], where DW is 1.50, b (programs per SSU) = 1.88, and ρ = 10 percent for the child care and Head Start centers; for at-risk centers and OSHCCs, the design effect is simply DW = 1.5 because b is less than 1.
13 Ward, D.S., Vaughn, A.E., Burney, R.V., and Østbye, T. (2016). Recruitment of family child care homes for an obesity prevention intervention study. Contemporary Clinical Trials Communications, 3, 131-138.
Table B.1.1 endnotes:
i These numbers were obtained from Tables 11 and 12 of the Food and Nutrition Service (FNS) National Data Bank April 2020 key data report, retrieved from https://www.fns.usda.gov/sites/default/files/data-files/Keydata-April-2020b.pdf.
ii This is an estimate based on the ratio of two proportions: the proportion of all child care centers that are sponsored non-Head Start child care centers (0.099) over the proportion of all child care centers that are independent and sponsored non-Head Start child care centers (0.205). This ratio is then multiplied by an estimate of the number of child care centers that are independent and sponsored non-Head Start child care centers (38,769). This estimate is based on the difference between the total number of child care centers in FY 2020 (71,601) and the number of Head Start centers, OSHCCs, and at-risk centers listed in the table (32,832). The proportions of all child care centers that are independent and sponsored non-Head Start child care centers can be found on page 5-1 of the Volume 1 report of the CACFP characteristics study (https://www.fns.usda.gov/sites/default/files/ops/CACFPSponsor-Provider-Characteristics-Vol1.pdf). The total number of child care centers in FY 2020 is available in Table 11 of the FNS National Data Bank April 2020 (see endnote i for reference).
iii This is an estimate based on the difference between the estimated number of independent and sponsored non-Head Start child care centers (38,769) and the estimated number of sponsored non-Head Start child care centers (18,723) in the table. See endnote ii for more details on these estimates.
iv This is an estimate based on the number of centers that participated in the at-risk component of the CACFP in FY 2015 (16,685) multiplied by the rate of growth in total child care centers between FY 2015 (62,194) and FY 2020 (71,601). The number of centers that participated in the at-risk component of the CACFP in FY 2015 can be found on page 8-1 of the Volume 1 report of the CACFP characteristics study (see endnote ii for the study reference). The total number of child care centers in FY 2015 can be found in Table 11 of the FNS National Data Bank August 2016 key data report (https://www.fns.usda.gov/sites/default/files/datastatistics/keydata-august-2016.xls). The total number of child care centers in FY 2020 is available in Table 11 of the FNS National Data Bank April 2020 (see endnote i for reference).
v This is an estimate based on the ratio of two proportions: the proportion of all child care centers that are sponsored non-Head Start child care centers (0.099) over the proportion of all child care centers that are independent and sponsored non-Head Start child care centers (0.205). This ratio is then multiplied by an estimate of the total average daily attendance of independent and sponsored non-Head Start child care centers (2,811,043). This estimate is based on the difference between the total average daily attendance of child care centers in FY 2020 (4,529,644) and the total average daily attendance of Head Start centers, OSHCCs, and at-risk centers listed in the table (1,718,601). The proportions of all child care centers that are independent and sponsored non-Head Start child care centers can be found on page 5-1 of the Volume 1 report of the CACFP characteristics study (see endnote ii for the study reference). The total average daily attendance of child care centers in FY 2020 is available in Table 11 of the FNS National Data Bank April 2020 (see endnote i for reference).
vi This is an estimate based on the difference between the estimated total average daily attendance of independent and sponsored non-Head Start child care centers (2,811,043) and the estimated total average daily attendance of sponsored non-Head Start child care centers (1,357,528) in the table. See endnote v for more details on these estimates.
vii This is an estimate based on the proportion of the estimated number of at-risk centers (19,209) over the total number of child care centers in FY 2020 (71,601). This ratio is then multiplied by the average daily attendance of all child care centers in FY 2020 (4,529,644). The total number of child care centers and the total average daily attendance of child care centers in FY 2020 are available in Table 11 of the FNS National Data Bank April 2020 (see endnote i for reference).
viii This estimate assumes a one-to-one ratio of children enrolled in provider programs to parents of those children.