Measurement Development: Quality of Caregiver-Child Interactions for Infants and Toddlers
Supporting Statement, Part B
For OMB Approval
February 28, 2011
This section provides supporting statements for each of the five points outlined in Part B of the Office of Management and Budget (OMB) guidelines for the collection of information under the Measurement Development: Quality of Caregiver-Child Interactions for Infants and Toddlers (Q-CCIIT) project. The submission requests clearance for recruiting and sampling procedures, data collection instruments and procedures, plans for data analysis, and reporting of findings from the Q-CCIIT focus groups and pilot and psychometric field tests. As noted in Part A, the Q-CCIIT project is a three-year project that started in 2010–2011, with a two-year data collection and reporting phase (FY 2011–2013).
Sampling and estimation procedures
The Q-CCIIT project data collection will include semistructured discussions with focus groups representing key stakeholders on the use and properties of the new Q-CCIIT observation measure to assess the quality of infant/toddler child care. We will use purposeful identification to select respondents for focus groups, and will begin by inviting parents and caregivers from child care settings we visited during pretesting efforts (see sections A.1 and B.4). So that discussions represent a broad range of opinions and perspectives, we will invite parents and caregivers from diverse socioeconomic and cultural backgrounds, and might include child care settings beyond those from the pretest. For a focus group with training and technical assistance (T/TA) providers, we will circulate invitations to anticipated attendees of a regional or national T/TA networking meeting.
The Q-CCIIT project will also consist of a pilot test and a psychometric field test. Each phase has multiple sampling stages. The pilot test will consist of 120 classrooms from 4 geographic locations; the psychometric field test will include 400 classrooms from 10 geographic locations. Each phase will consist of three purposive sampling stages: geographic location, child care setting, and classroom.
One of the major goals of this data collection is to capture the greatest cultural and economic diversity possible, as well as a representative range of quality. To obtain the information needed to select locations and centers purposively for diversity, we will draw on several sources. First, we will receive information about Early Head Start program characteristics from the Early Head Start Family and Child Experiences Survey (Baby FACES) study. Second, in selecting locations, we will investigate state guidelines for ratio and group size (as a proxy for quality) as well as information from national studies and Quality Rating and Improvement Systems, if available. Third, we will consider the locations of universities in relation to the centers in an attempt to sample classrooms with high levels of maternal education and exemplary child care programs, which would add to the diversity we seek. Fourth, we will consult the Common Core of Data, which maintains demographic information on local elementary schools. Fifth, we can use U.S. Census data to find additional demographic information at the local level. All these sources will inform our purposive selection of geographic locations for sampling programs/settings and classrooms. See Section A.2 for more on the recruitment of child care settings into the Q-CCIIT project.
Pilot test. For the pilot phase, we will purposively select 4 locations. In each, we will select 5 to 10 center-based programs depending on how many are needed to yield 10 infant and 10 toddler classrooms.3 Some of these may be mixed-age classrooms. We will select 10 family child care (FCC) settings within each of the locations. To select settings, we will consult lists from Child Care Resource and Referral agencies (CCR&Rs) in the same geographic areas as the centers and select them purposively to obtain diversity as described above. This will give us a total of 40 FCC settings in the pilot. From the center-based programs, we will have a total of 80 classrooms selected for cultural, linguistic, and economic diversity to the extent possible, with a minimum of 30 infant and 30 toddler classrooms total across all sites (the remaining classrooms may be mixed-age). The pilot data collection phase includes the Q-CCIIT observation, the caregiver background questionnaire, and parent-report child competence questionnaires.
Psychometric field test. For the psychometric field test, we will begin by selecting 10 Early Head Start programs from the 89 currently participating in the Baby FACES study. We will target programs in large urban or suburban geographic areas that include a university, so that we can access a population that is diverse in maternal education, income, culture, and language. To ensure representation of the child care population, we will also include Early Head Start programs in rural areas, where community child care providers have differing access to supports. We will also choose these programs based on proximity to other Early Head Start programs.
For each of the 10 Early Head Start programs selected, we will attempt to obtain information about the geographic boundaries of its service area. Using the Head Start Program Information Report, we will obtain a list of all other Early Head Start programs adjacent to this program to define the aggregate Early Head Start service area. We will also use CCR&R listings for the appropriate zip codes to obtain a list of all FCC settings and community-based centers in each aggregate Early Head Start service area that serve both infants and toddlers up to 30 months old.
In each aggregate Early Head Start service area, we will purposively select about 5 Early Head Start centers, 2 community-based centers, and 10 FCC settings. We will assign random numbers to facilitate the selection of the appropriate number of settings within each service area.
We will create a second area, not served by Early Head Start, adjacent to each of the 10 Early Head Start service areas. The second areas will be about the same size as the aggregate Early Head Start service areas and will not include child care settings that serve Early Head Start children through a partnership with participating Early Head Start programs. To create a second area, we will start with a radius of 10 miles beyond the aggregate Early Head Start service area. We will then expand this radius by increments of 5 miles until we have recruited the number of classrooms needed. The second area is for data collection purposes and will be used to ensure the representativeness of our findings across different centers and FCC settings.
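The expanding-radius rule can be summarized as a simple loop. The sketch below is illustrative only: the recruited_classrooms_within helper and the 60-mile cap are hypothetical placeholders, not project specifications.

    def comparison_area_radius(service_area, classrooms_needed,
                               recruited_classrooms_within,
                               start_miles=10, step_miles=5, max_miles=60):
        """Grow the comparison (non-Early Head Start) area until enough classrooms are recruited.

        recruited_classrooms_within(service_area, radius) is a hypothetical helper that
        returns the number of eligible classrooms recruited within `radius` miles beyond
        the aggregate Early Head Start service area boundary.
        """
        radius = start_miles
        while recruited_classrooms_within(service_area, radius) < classrooms_needed:
            if radius + step_miles > max_miles:   # illustrative stopping rule only
                break
            radius += step_miles
        return radius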
In the area not served by Early Head Start, we will select about 3 community-based centers and 10 FCC settings.4 After ensuring that a wide variety of community-based programs and FCC settings are available, we will use random numbers to select the number we need.
In summary, we will have selected 10 geographic locations based on the locations of our original 10 Early Head Start programs. In each location, we will have two areas of about the same size, one that is served by Early Head Start grantees and one that is not (Figure B.1). There will be about 5 Early Head Start centers, 5 community-based centers, and 20 FCC settings for each of the 10 geographic locations. This gives approximate totals of 50 Early Head Start centers, 50 community-based centers, and 200 FCC settings.
After selecting the centers, we will select classrooms. We will treat each FCC setting as a single classroom for sampling and, at each center-based program, we will select a minimum of two classrooms, yielding 100 classrooms from Early Head Start centers, 100 from community-based centers, and 200 from FCC settings, for a total of 400 observations. We will select infant and toddler classrooms before mixed-age classrooms when possible. If more center-based classrooms meet our specifications than we need, we will assign them random numbers to aid in selecting the appropriate number. Few of the 200 FCC settings are likely to serve only infants or only toddlers; most will include children of mixed ages.
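Under these selection rules, the classroom totals follow directly from the setting counts. The bookkeeping sketch below treats the approximate per-location targets as exact values purely to show the arithmetic.

    # Approximate targets per geographic location; 10 locations in the field test
    locations = 10
    settings_per_location = {"Early Head Start center": 5,
                             "community-based center": 5,
                             "family child care (FCC)": 20}
    classrooms_per_setting = {"Early Head Start center": 2,   # at least 2 selected per center
                              "community-based center": 2,
                              "family child care (FCC)": 1}   # each FCC counts as 1 classroom

    classrooms = {kind: locations * n * classrooms_per_setting[kind]
                  for kind, n in settings_per_location.items()}
    print(classrooms)                # {'Early Head Start center': 100, 'community-based center': 100, 'family child care (FCC)': 200}
    print(sum(classrooms.values()))  # 400 classroom observations in total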
Depending upon the number of infants and toddlers under 30 months old in each classroom, we might have to include additional observation classrooms from additional centers to reach our desired sample size for parent-report child competence questionnaires (N = 1,600 at time one). In each classroom selected, we will include all infants and toddlers younger than 30 months in our population of children eligible to participate in the parent-report data collection activities.
Figure B.1
Sampling for the Psychometric Field Test
[Figure omitted: Each of the 10 geographic locations contains one area served by Early Head Start (about 5 Early Head Start centers, 2 center-based programs, and 10 FCCs) and one adjacent area not served by Early Head Start (about 3 center-based programs and 10 FCCs). Each center contributes at least 2 classrooms (about 1 infant and 1 toddler); each FCC is observed as 1 mixed-age classroom.]
Note: Results in 400 classrooms across 10 geographic locations. We anticipate approximately 200 FCC classrooms, 100 toddler classrooms (50 are Early Head Start, 50 are community based), and 100 infant classrooms (50 are Early Head Start, 50 are community based).
The psychometric field test will include test-retest and validation measure observation components that require further selection of classrooms within the sample. We will assign random numbers to the classrooms to assist with selecting the 60 test-retest classrooms. Validation classrooms will be selected based on the schedule of Q-CCIIT observations to maximize cost-efficiency. A classroom could be selected for both test-retest and validation observations.
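A minimal sketch of the random-number mechanic for drawing the 60 test-retest classrooms follows; the classroom identifiers and seed are illustrative, not project values.

    import random

    def select_test_retest(classroom_ids, n_retest=60, seed=None):
        """Assign each classroom a random number and keep the 60 smallest draws."""
        rng = random.Random(seed)
        draws = {cid: rng.random() for cid in classroom_ids}
        return sorted(draws, key=draws.get)[:n_retest]

    # Illustrative use with 400 hypothetical classroom identifiers
    retest_sample = select_test_retest([f"classroom_{i:03d}" for i in range(400)], seed=2011)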
Estimation procedures and degree of accuracy
Estimation procedure. Because the sampling for this project will be done almost entirely in a purposive, as opposed to probabilistic, manner, no weights will be constructed to account for the probability of selection. However, we do plan to create weighting adjustments to account for nonresponse at the various stages of data collection.
Degree of accuracy needed for the purpose described in the justification. As described in A.16, analyses with the psychometric field test data will involve calculation of reliability and validity estimates for all classrooms as well as for subgroups of classrooms based on the ages served. In addition, the field test will include validation of the Q-CCIIT measure in predicting child competence expected to be associated with high-quality caregiver-child interactions. Thus, our analyses will include child-level analyses in addition to classroom-level analyses. Given certain assumptions about the sample design, the child sample size should be large enough to detect meaningful associations (correlations) between the classroom observation measures and the child outcomes. Table B.1 shows the minimum correlations detectable with 80 percent power at alpha equal to 0.05, given our expected sample sizes for infants and toddlers. We assume an intraclass correlation coefficient (ICC) of 0.10 and a measurement reliability factor of 0.65. We also assume no design effect due to unequal weighting and that the standard deviations of the dependent and independent variables are 1. These calculations do not take into account any precision that would be gained by introducing child-level covariates into the models.
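These figures can be reproduced with a short calculation. The sketch below is a plausible reconstruction rather than the exact program used for Table B.1: it assumes the design effect follows the standard clustering formula 1 + (m - 1) x ICC, where m is the average number of children per classroom; that the effective sample size discounts the nominal child sample by the 0.65 reliability factor; and that the detectable correlation is approximated by the sum of the alpha and power z-values divided by the square root of the effective sample size.

    from statistics import NormalDist

    def min_detectable_correlation(n_children, n_classrooms, icc=0.10,
                                   reliability=0.65, alpha=0.05, power=0.80):
        """Approximate minimum detectable correlation for a clustered child sample."""
        m = n_children / n_classrooms                  # average children per classroom
        design_effect = 1 + (m - 1) * icc              # clustering penalty
        n_effective = n_children * reliability / design_effect
        z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
        return z / n_effective ** 0.5                  # normal approximation

    # Time-one rows of Table B.1 (total, infant, toddler)
    for label, n_kids, n_rooms in [("Total", 1600, 400),
                                   ("Infant", 680, 300),
                                   ("Toddler", 920, 300)]:
        print(label, round(min_detectable_correlation(n_kids, n_rooms), 3))
    # Prints roughly 0.099, 0.141, and 0.126, matching Table B.1.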
Table B.1
Minimum Detectable Correlations Between Child Care Characteristics and Child Outcomes for the Q-CCIIT Psychometric Field Test Sampling Design at Time One

          |  Number of Classrooms   |   Number of Children     | Design | Effective Child | Minimum Detectable
          | Center |  FCC  | Total  | Center |  FCC  |  Total  | Effect |   Sample Size   |    Correlation
Total     |   200  |  200  |  400   |  1,200 |  400  |  1,600  |  1.3   |      800.0      |      0.099
Infant    |   100  |  200  |  300   |    480 |  200  |    680  |  1.1   |      392.3      |      0.141
Toddler   |   100  |  200  |  300   |    720 |  200  |    920  |  1.2   |      495.6      |      0.126
Note: In calculating minimum detectable correlations, we assumed the following: (1) ICC = 0.10; (2) both the new Q-CCIIT measure and the child competence outcomes have been standardized (that is, the variances of both independent and dependent variables are 1); (3) the outcome measures have a reliability of at least 0.65; and (4) statistical power is 80 percent. The test is of the null hypothesis that there is no correlation between the Q-CCIIT measure (at the child level) and the child competence outcome.
FCC = family child care.
Based on these calculations we can see that, for example, the minimum detectable correlation between the measure and the child outcome at time one for infants and toddlers combined is 0.099. This means that, if the true underlying correlation is 0.099 or higher, then given this sample size we will reject the null hypothesis that the correlation is equal to zero (that is, that there is no association) at least 80 percent of the time.
When we return six months after collecting the time one measure, we expect about 15 percent attrition in the number of parents who respond. Because of this attrition and nonresponse, our minimum detectable correlations will be higher, as shown in Table B.2. At time two, under the same assumptions as at time one, the minimum detectable correlation between the Q-CCIIT measure and the child outcome for infants and toddlers combined is 0.106.
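Applying the 15 percent attrition to the time-one counts and rerunning the same calculation reproduces the time-two figures. The snippet below is a sketch under the same assumptions stated for Table B.1 (a clustering design effect of 1 + (m - 1) x ICC and a 0.65 reliability discount), not the project's own program.

    from statistics import NormalDist

    Z = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.80)  # alpha = 0.05, power = 0.80

    def mdc(n_children, n_classrooms, icc=0.10, reliability=0.65):
        m = n_children / n_classrooms
        n_effective = n_children * reliability / (1 + (m - 1) * icc)
        return Z / n_effective ** 0.5

    # Time two: 15 percent parent attrition; FCC classrooms drop from 200 to 170
    print(round(mdc(1360, 370), 3))  # Total   -> about 0.106
    print(round(mdc(578, 270), 3))   # Infant  -> about 0.153
    print(round(mdc(782, 270), 3))   # Toddler -> about 0.136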
Table B.2
Minimum Detectable Correlations Between Child Care Characteristics and Child Outcomes for the Q-CCIIT Psychometric Field Test Sampling Design at Time Two

          |  Number of Classrooms   |   Number of Children     | Design | Effective Child | Minimum Detectable
          | Center |  FCC  | Total  | Center |  FCC  |  Total  | Effect |   Sample Size   |    Correlation
Total     |   200  |  170  |  370   |  1,020 |  340  |  1,360  |  1.3   |      697.4      |      0.106
Infant    |   100  |  170  |  270   |    408 |  170  |    578  |  1.1   |      337.2      |      0.153
Toddler   |   100  |  170  |  270   |    612 |  170  |    782  |  1.2   |      427.3      |      0.136
Note: In calculating minimum detectable correlations, we assumed the following: (1) ICC = 0.10; (2) both the new Q-CCIIT measure and the child competence outcomes have been standardized (that is, the variances of both independent and dependent variables are 1); (3) the outcome measures have a reliability of at least 0.65; and (4) statistical power is 80 percent. The test is of the null hypothesis that there is no correlation between the Q-CCIIT measure (at the child level) and the child competence outcome.
FCC = family child care.
Data collection procedures
As noted previously, we propose to collect information using multiple methods, including classroom observations; self-administered questionnaires (SAQs) with parents and caregivers; and focus groups with parents, caregivers, and T/TA providers. Below is a brief description of each Q-CCIIT project instrument.
Child care setting recruitment form. To start recruitment, we will send an advance letter to selected settings, and a Mathematica site coordinator will then call the setting to review the basic topics discussed in the letter, including data collection activities and tokens of appreciation, and to respond to questions. As part of this initial call, the Mathematica site coordinator will collect information on the number of eligible classrooms and identify a setting point person (SPP) who will assist with recruiting families into the project and will later provide more specific information on eligible classrooms affiliated with the setting. For settings that are eligible and agree to participate, Mathematica site coordinators will call SPPs to obtain classroom child rosters5 and updated information to assist in completing the child care setting recruitment form. Site coordinators will collect the names and birthdates of the children from birth to 30 months of age in each of the eligible classrooms, along with each parent’s primary language. We will use this information to determine which age-specific parent-report child competence questionnaire to ship for each child and whether to send the materials in English or Spanish. Gathering this information will take about 30 minutes per setting.
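The roster information drives a simple shipping decision for each child. The sketch below is hypothetical: it borrows the 15-month infant/toddler cut-point from footnote 3 as a stand-in for the questionnaire's actual age bands, which are not specified in this section.

    from datetime import date

    def questionnaire_to_ship(birthdate, parent_language, observation_date):
        """Pick the age-specific parent-report form and language for one child.

        The 15-month cut-point is borrowed from the classroom definitions in
        footnote 3 and is only a placeholder for the real form age bands.
        """
        age_months = ((observation_date.year - birthdate.year) * 12
                      + (observation_date.month - birthdate.month))
        if age_months >= 30:
            return None                                 # child not eligible
        form = "infant" if age_months < 15 else "toddler"
        language = "Spanish" if parent_language.lower() == "spanish" else "English"
        return form, language

    # Illustrative use
    print(questionnaire_to_ship(date(2010, 6, 1), "Spanish", date(2011, 4, 15)))  # ('infant', 'Spanish')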
Q-CCIIT observation measure. In both the pilot and the field test, we will conduct classroom observations for 3 hours in the morning. In general, the observations will not require anything from participants in the project and thus will not impose a time burden. The Q-CCIIT measure will include a request to observe a short group activity (such as shared book time for caregivers who already do such activities) taking less than 10 minutes of the 3 hours. It will conclude with brief follow-up questions (less than 5 minutes) on how typical the day was and about specific events if they were not observed (for example, departure interactions with children and families). We expect that 100 percent of eligible classrooms within participating settings will participate.
Caregiver background questionnaire. After the observation, caregivers who spend more than four hours a day in the classroom will be asked to complete a questionnaire on characteristics that could account for variation in observed interactions. The Q-CCIIT observer will provide a paper-and-pencil instrument to the caregiver, as well as an envelope for returning it to the observer when completed. The questionnaire will take 15 minutes, and we expect that 100 percent of caregivers will complete it.
Parent-report child competence questionnaire. We will ask parents to complete an SAQ about their child’s cognitive, language, and social-emotional development, as well as demographic characteristics about their family, to examine the validity of the Q-CCIIT measure by investigating its association with child competence. The questionnaire will take about 45 minutes, and we will provide parents with an addressed postage-paid envelope for returning it. For the pilot test, the questionnaire will be collected at one time point (when the observation is conducted) for initial analysis of concurrent validity. We expect to receive completed questionnaires from 60 percent of parents (N = 560). For the psychometric field test, the questionnaire will be collected at two time points—at the beginning of data collection, and again six months later6—to examine the ability of the Q-CCIIT measure to predict child competence. We expect to receive completed questionnaires from 60 percent of parents at time one (N = 1,600), with an attrition rate of 15 percent at time two (N = 1,360).
Focus group interview guide. Before the pilot test, Mathematica will conduct focus groups with parents, caregivers, and T/TA providers on the constructs and behaviors assessed in the Q-CCIIT measure. These discussions will allow us to broaden the scope of feedback received from stakeholders and will provide evidence for investigating the face validity of the Q-CCIIT measure. Parents and caregivers will be recruited with the help of the SPPs at the pretest sites. We will distribute fliers inviting volunteers. For T/TA providers, we will circulate invitations to anticipated attendees of a regional or national T/TA networking meeting. We expect that each semi-structured discussion will last about 1 hour and 55 minutes and that 100 percent of parents, caregivers, and T/TA providers volunteering will participate.
Parent focus group demographic questionnaire. At the end of the session, we will ask participants to complete a brief SAQ on demographics (such as the ages at which they used infant/toddler child care, the type of child care used, and their race/ethnicity). The SAQ will help us understand the diversity of the group of people sharing their views. We anticipate that the questionnaire will take 5 minutes and that all focus group participants will complete it.
Caregiver focus group demographic questionnaire. After the session, we will ask participants to complete a brief SAQ on demographics (such as how long they have worked with infants and toddlers, the type of child care they work in, and their own race/ethnicity), so that we can understand the diversity of the group of people sharing their views. We anticipate that the questionnaire will take 5 minutes and that all focus group participants will complete it.
T/TA provider focus group demographic questionnaire. After the session, we will ask participants to complete a brief SAQ on demographics (such as their race/ethnicity, training, and years of experience). The SAQ will help us understand the diversity of the group of people sharing their views. We anticipate that the questionnaire will take 5 minutes to complete and that all focus group participants will complete it.
The Q-CCIIT project expects to obtain a very high response rate for focus groups and caregiver questionnaires, but achieving a high rate from parents will require additional effort. Strategies for maximizing response follow.
Achieving a high response rate starts with obtaining a high level of cooperation. Our attractive and easy-to-read materials, as well as our relationships with the setting’s staff, will enable us to explain the project to parents and caregivers, supporting their participation. We will distribute advance letters and fact sheets to settings, caregivers, and parents; gift cards as tokens of appreciation are another strategy for boosting participation. For his/her assistance in organizing data collection, we will give each SPP a $25 gift card to purchase materials to use with the children. As a token of appreciation for participation in the focus groups, we will give each participant a $25 gift card. Each participating caregiver will receive a $50 gift card for allowing us to conduct the observation and for completing the caregiver background questionnaire. Finally, parents will receive a $25 gift card for completing questionnaires at time one and again at time two.
Despite encouraging participation through clear and attractive materials and gift cards as tokens of appreciation, we do anticipate some nonresponse. Our electronic data receipting system will enable us to track real-time response rates by instrument and respondent. Thus, we will monitor whether parents and caregivers complete instruments in a timely manner. Timing will be especially important for the parent-report child competence questionnaire because it contains age-specific infant/toddler measures. We intend to follow up with nonresponders while our observers are on site and to enlist the assistance of the SPP in obtaining any remaining parent-report questionnaires. We will also conduct follow-up telephone calls with parents who have not returned their completed SAQs and, when necessary, send additional SAQs directly to parents (if we have addresses) or to caregivers to distribute (if we do not).
Gathering caregiver background questionnaires on site will ensure higher response rates, but for a caregiver who could not complete the form at the time of the observation and was left with materials for shipping the questionnaire, we will take follow-up measures similar to those for the parent questionnaires. For example, we will instruct Q-CCIIT observers, before they leave the geographic location, to check in with caregivers from whom they have not collected forms; we will also enlist the assistance of the SPPs in pursuing SAQs, conduct follow-up telephone calls with caregivers who have not returned SAQs, and ship additional SAQs when necessary.
In summary, although we will make our best efforts to avoid nonresponse, we will also have procedures in place to convert nonresponse and maximize completion rates.
We will calculate marginal and cumulative response rates at each stage of sampling and data collection. As reflected in the American Association for Public Opinion Research industry standard for calculating response rates, the numerator of each response rate will be the number of eligible completed cases, and the denominator will be the number of eligible cases.
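A minimal sketch of that bookkeeping is below; the stage names and counts are hypothetical, and the simple ratio omits AAPOR's handling of cases with unknown eligibility.

    def response_rate(eligible_completes, eligible_cases):
        """Marginal response rate for one stage: eligible completes over eligible cases."""
        return eligible_completes / eligible_cases

    # Hypothetical two-stage example: setting recruitment, then parent questionnaires
    marginal_rates = [response_rate(80, 100),      # settings recruited
                      response_rate(960, 1600)]    # parent SAQs returned

    cumulative_rate = 1.0
    for rate in marginal_rates:
        cumulative_rate *= rate                    # cumulative rate across stages
    print(marginal_rates, round(cumulative_rate, 2))  # [0.8, 0.6] 0.48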
We will develop the Q-CCIIT observational measure iteratively during a pretest phase in the late spring of 2011. Throughout this phase, we will refine observation items and procedures and develop new items as needed. The pretest will involve only classroom observations (no questionnaires will be administered), as the Q-CCIIT measure will still be evolving.
3 For sampling purposes, an infant classroom will be defined as one where children are younger than 15 months. A toddler classroom will be defined as one with children between 15 and 36 months. This cut-point matches professional organizations’ use of age ranges to define recommended group size and child-to-staff ratios.
4 The 20 total FCC settings might not be equally spread between the Early Head Start service area and the area not served by Early Head Start.
5 Because some settings could have concerns about providing personal identifying information, Mathematica site coordinators will be prepared to work with SPPs to use alternate means of creating identifiers, such as recording children’s initials rather than their names on rosters.
6 If a child leaves the classroom before 6 months, we will collect the follow-up parent questionnaire at that time. Mathematica will verify rosters biweekly with SPPs to confirm classroom enrollment.