Alternative Supporting Statement for Information Collections Designed for
Research, Public Health Surveillance, and Program Evaluation Purposes
Mother and Infant Home Visiting
Program Evaluation:
Kindergarten Follow-Up (MIHOPE-K)
Pre-testing of Evaluation Data Collection Activities
0970 - 0355
Supporting Statement
Part B
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officers:
Nancy Geyelin Margie
Laura Nerenberg
Part B
B1. Objectives
Study Objectives
The Mother and Infant Home Visiting Program Evaluation kindergarten follow-up (MIHOPE-K, approved under OMB #0970-0402) data collection began in 2018. Its in-home assessment elements (direct assessments of children, direct assessments of caregivers, and videotaped caregiver-child interactions) need to be adapted for remote administration, in the form of a virtual visit to participants’ homes, so that data collection can safely continue during the COVID-19 pandemic. The objective of this generic information collection (GenIC) request is to develop and test the instruments and procedures for remote administration of direct assessments of children, direct assessments of caregivers, and videotaped caregiver-child interactions (the MIHOPE-K elements that had been conducted in families’ homes and will now compose the “virtual visit”). The pretest conducted under this clearance will result in improved instruments and procedures for the virtual visit, thus reducing MIHOPE-K respondent burden and improving the quality of the data gathered through MIHOPE-K.
Generalizability of Results
This study is intended to present an internally valid description of the performance of the remote administration protocol for the direct assessments of children, direct assessments of caregivers, and videotaped caregiver-child interactions that have previously been administered in families’ homes as part of MIHOPE-K data collection, not to promote statistical generalization to other sites or populations.
Appropriateness of Study Design and Methods for Planned Uses
The purpose of this collection is to develop and test virtual administration of data collection instruments and procedures. This study design enables families, assessors, and study staff to experience the protocol for remote assessments, provide feedback about any elements that were challenging, and give the researchers the opportunity to adjust the protocol before fielding it with the MIHOPE-K sample. Pretesting remote administration of these instruments and protocols is a critical step in successfully continuing MIHOPE-K data collection amid the COVID-19 pandemic, while in-person data collection is infeasible for an indefinite amount of time.
The findings from this pretest are intended to present descriptive data of the performance of a subset of MIHOPE-K instruments and should not be generalized to other sites or populations. While the study team will attempt to recruit a respondent sample that closely matches the characteristics of the MIHOPE-K sample, the sample size for this pretest will not necessarily be able to include participants who represent all the experiences of the families in the MIHOPE-K sample. For example, the geographical concentration of participants in the pretest sample will not be representative of the geographical spread of MIHOPE-K participants throughout the United States. As noted in GenIC Supporting Statement A.2. Purpose and Use, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.
B2. Methods and Design
Target Population
For each of the fifty-five families selected to participate in remote administration of direct child assessments, direct caregiver assessments, and videotaped caregiver-child interactions, we will collect information from the implementation of the virtual assessments, the caregiver debrief, the assessor debrief, and the field staff debrief. The pretest sample will not include any members of the MIHOPE-K sample. To best determine the usability of the remote administration protocol, the pretest sample will resemble the MIHOPE-K sample as closely as possible within a smaller, geographically concentrated group: it will include both English and Spanish speakers, children whose ages and characteristics are comparable to those of the focal children in the MIHOPE sample, and caregivers and children with a variety of levels of tech savviness and home distractions. Because participants will be purposively selected, they will not be representative of the population of families that home visiting programs serve. Instead, we aim to obtain variation in families’ experiences to understand engagement and usability of the instruments within the range of families in the MIHOPE-K sample.
Sampling
The study team aims to recruit respondents who represent the diversity of the MIHOPE-K sample (including linguistic, ethnic, and racial diversity) and are concentrated near Mathematica Policy Research in central New Jersey. The recruitment will be concentrated in this geographic area despite the use of virtual data collection methods due to the need to drop off equipment to participating families’ homes prior to the assessment and the need to pick up that equipment after the assessment has occurred. This drop-off and pick-up will also occur with data collection for MIHOPE families participating in virtual MIHOPE-K visits. The study team will recruit respondents by reaching out (directly and through colleagues) to networks and organizations (e.g., Head Start centers) we work with to identify a participant sample with similar characteristics to the MIHOPE-K sample members. Recruited respondents will include mothers of children who will be entering kindergarten or first grade in the fall to match the ages of the children in the MIHOPE-K sample, both English and Spanish speakers, children with characteristics comparable to the children in the MIHOPE-K sample, and caregivers and children with a variety of levels of tech savviness and home distractions. See Instrument 1 for the recruitment script for the proposed pretest.
B3. Design of Data Collection Instruments
Development of Data Collection Instruments
Thus far, we have fielded the MIHOPE-K data collection with 2,711 families. The child and caregiver assessments included in the virtual visit are all standardized assessments. Due to the COVID-19 pandemic, MIHOPE-K in-person instruments for direct child assessments, direct caregiver assessments, and videotaped caregiver-child interactions were adapted for remote administration. The study team consulted with the Data Recognition Corporation (DRC) and Riverside Insights to determine which modifications to their respective proprietary measures were allowable and recommended.
After the initial round of implementation of the remote protocol, including the gathering of feedback from caregivers, assessors, and field staff, the instruments and procedures will be revised based on that feedback. We will provide updated instruments and procedures to OMB and will use the updated instruments and procedures to implement the remote protocol with additional families, gathering additional feedback, and revising instruments and procedures again, as needed.
B4. Collection of Data and Quality Control
This section describes the collection of pretest data for MIHOPE-K. Best practices will be followed for conducting the data collection, including training and certifying staff on data collection procedures and monitoring data collection to ensure that high quality data are collected and the target response rate is achieved. Mathematica Policy Research is the subcontracted survey firm for the proposed pretest. Our data collection method builds on the methods used in previous phases of MIHOPE to the greatest extent possible. In particular:
Design and text of respondent contact materials will be informed by principles of behavioral science.
Varied methods will be used to reach out to respondents (i.e., email, text messages, phone calls).
Tokens of appreciation will be provided to increase families’ willingness to respond to the pilot data collection.
The pretest recruitment plan will include the following (all caregiver-specific contact materials are included in Appendix A):
Pilot overview shared with recruitment network.
Pilot overview provided to potential families.
Invitation letter for recruited families outlining what to expect and confirming scheduled appointment.
Email notifications.
Text messages.
The components of the virtual visit will include:
Direct assessments of children: Direct child assessments such as Hearts & Flowers and the Woodcock Johnson IV Picture Vocabulary and Oral Comprehension subtests will be administered to assess the child’s receptive language skills, early numeracy, working memory, inhibitory control, and cognitive flexibility. Assessors will also observe and rate parental warmth and the child’s emotions, attention, and behavior.
Direct assessments of caregivers: Direct caregiver assessments (i.e., the Digit Span) will be administered to assess maternal self-regulation.
Videotaped caregiver-child interaction: Observations of caregiver-child interactions will be conducted using a videotaped interaction. Children’s behaviors towards the caregiver will be gathered in the context of caregiver-child interaction, including engagement with the caregiver and negativity toward the caregiver. Caregivers’ parenting behaviors, including supportiveness and respect for child’s autonomy, will also be assessed, as well as features of the caregiver-child dyad (e.g., affective mutuality).
Direct child assessments, direct caregiver assessments, and videotaped caregiver-child interactions will occur virtually through WebEx, a videoconferencing platform, and the assessments will be facilitated by a remote assessor (the virtual visit protocol is included in Instrument 2). The remote assessor will meet the family through the videoconferencing platform to guide them through the entire virtual visit. All families will be provided with technology with which to participate in the virtual visit. This technology will be delivered by study team staff to each participating family. The remote assessor can share their screen through the videoconferencing platform and take remote control of the family’s device to move them through the assessments. The caregiver-child interaction will be videotaped using the technology provided to families and the remote assessor will assist the parent in framing the video. The virtual visit will be followed by a caregiver, assessor, and field staff debrief (the debrief questionnaire for caregivers is included at the end of the virtual visit protocol in Instrument 2).
Direct assessments of children
A direct assessment of the child’s language development will be conducted using the Picture Vocabulary and Oral Comprehension subtests, which are from the Woodcock Johnson IV: Tests of Oral Language (WJOL). The Picture Vocabulary subtest assesses receptive language by having the children point to pictures of objects or actions on an easel panel that are named by the assessor. The Oral Comprehension subtest assesses the children’s ability to understand a short passage by having them provide a missing word based on cues from the sentence (for example, “water looks blue and grass looks ______”). It takes about 5 minutes to administer each subtest. A Spanish version of the Woodcock Johnson subtests for bilingual Spanish-English speakers is available.
The Woodcock Johnson III Applied Problems subtest will be used to measure children’s early numeracy and math skills. This is a subtest from the Woodcock Johnson III: Test of Achievement and measures children’s ability to solve oral math problems (for example, “how many dogs are there in this picture?”). It takes approximately 5 minutes to administer this task. A Spanish version of the subtest is also available. Before we conduct the Woodcock Johnson subtests, we will also administer a preLAS language screener for children who may be bilingual in order to determine which versions they should be administered.
Children’s executive functioning, including their working memory, inhibitory control, and cognitive flexibility, will be assessed using a combination of the Digit Span, Hearts & Flowers, and Attention Sustained task:
Digit Span, which is a measure of working memory, assesses the child’s ability to repeat an increasingly complex set of numbers first forward, and then backward. It takes about 2 to 3 minutes to administer.
Hearts & Flowers is designed to capture inhibitory control and cognitive flexibility and is administered through an application on a tablet. The task includes three sets of trials: (1) 12 congruent “heart” trials, (2) 12 incongruent “flower” trials, and (3) 33 mixed “heart and flower” trials. Children are presented with an image of a red heart or flower on one side of the screen. For the congruent heart trials, the children are instructed to press the button on the same side as the presented heart. For incongruent flower trials, children are instructed to press the button on the opposite side of the presented flower. Accuracy scores are drawn from the incongruent block and mixed block. It takes approximately 5 minutes to administer this task.
Children will be motivated throughout the assessment by a simple virtual game in which they gradually build a colorful and endearing monster over the course of the assessment. This game will be more engaging than a typical progress bar and will help children stay motivated and engaged throughout the entire assessment.
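The Hearts & Flowers scoring rule described above (same side for hearts, opposite side for flowers, with accuracy drawn from the incongruent and mixed blocks) can be sketched as follows. This is an illustrative sketch only; the trial-log structure and field names are hypothetical and do not reflect the actual tablet application’s data format.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    block: str     # "congruent", "incongruent", or "mixed"
    stimulus: str  # "heart" or "flower"
    side: str      # "left" or "right" (where the image appeared)
    response: str  # "left" or "right" (button the child pressed)

def is_correct(t: Trial) -> bool:
    # Hearts: press the button on the same side as the image.
    # Flowers: press the button on the opposite side.
    if t.stimulus == "heart":
        return t.response == t.side
    opposite = "left" if t.side == "right" else "right"
    return t.response == opposite

def block_accuracy(trials: list[Trial], block: str) -> float:
    # Accuracy scores are drawn from the incongruent and mixed blocks.
    scored = [t for t in trials if t.block == block]
    return sum(is_correct(t) for t in scored) / len(scored)
```

A per-block accuracy score like this supports the kind of cross-mode comparison (virtual vs. in-person) planned for the pretest analysis.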
Direct assessments of caregivers
The Digit Span will be used to assess maternal cognitive control (specifically, working memory, an aspect of cognitive control). This assessment protocol is included in Instrument 2.
Videotaped caregiver-child interactions
A caregiver-child interaction task will be administered in order to assess the behavior of the mother and of the child during a semi-structured play situation. The interaction task will be videotaped and viewed at a later date by trained coders, who will rate caregiver and child behavior to assess qualities of parenting (such as parental supportiveness, parental stimulation of cognitive development, parental intrusiveness, parental negative regard, and parental detachment) and the child’s behavior (such as child engagement of parent, child’s quality of play, and child’s negativity toward parent). These outcomes require independent assessments (as opposed to self-reports, which may be more likely to be influenced by home visiting programs through raising parents’ awareness of preferred or desired responses regarding various types of parenting behaviors).
The caregiver-child interaction task will involve semi-structured play and will consist of tasks that were used in previous longitudinal studies that measured child development outcomes (i.e., the NICHD Study of Early Child Care and Development, a longitudinal study that examined the relationship between child care experiences and characteristics and child development outcomes, and the Early Head Start Research and Evaluation Project, a longitudinal impact evaluation of the Early Head Start program). The activities involve having the parent and child play with toys such as an Etch-A-Sketch, wooden blocks, animal puppets, and/or Play Doh. With each toy, the pair is instructed to either complete a specific task or to play with them in whichever way they would like. The activities are fun and interesting for children to complete with their caregivers. In the proposed pretest, families will be randomly assigned to participate in the caregiver-child interaction task either sitting on the floor (the standard administration of the task) or sitting at a table, although the location of the task will be flexible between these two conditions and ultimately depend on what the home environment of the family can support given table space, floor space, and lighting. The play time lasts approximately 15 minutes. The protocol is included in Instrument 2.
Quality Assurance
Quality assurance reviewers will review the assessments of remote assessors to ensure completeness, correct scoring of responses, and that clear guidance was given to families during the assessment. We added checkpoints after each task in the virtual visit protocol to collect additional information that will surface any issues that could affect the validity of the data and can be used to inform adjustments between phases of the pilot. Following their review, quality assurance reviewers will provide feedback to assessors and answer any questions the assessors might have. This quality assurance protocol will be the same as the MIHOPE-K quality assurance protocol for in-home assessors. As part of the processing of the caregiver-child video interaction data, the study team will also conduct reviews of all videos to check for audio and picture integrity. Transfer of all video files will be carefully logged in a transmission log to minimize any errors in the transfer of data.
B5. Response Rates and Potential Nonresponse Bias
Response Rates
The pilot test is not designed to produce statistically generalizable findings, and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported.
Nonresponse
As participants will not be randomly sampled and findings are not intended to be representative, nonresponse bias will not be calculated. Respondent demographics will be documented and reported in written materials associated with the data collection.
B6. Production of Estimates and Projections
The data will not be used to generate population estimates, either for internal use or dissemination.
B7. Data Handling and Analysis
Data Handling
To mitigate errors, the computerized instrument is programmed to guide the assessor through the instrument with the participant. The instrument is programmed to allow only certain responses to each item, confirm caregiver and child information, hide response options that are not applicable, pipe in the child’s name and appropriate text for randomization, and remind the remote assessor of the view to display for the child and caregiver. All data processing will be carefully programmed by a team member with knowledge of the protocol and instruments and will be checked by another staff member to minimize errors.
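The kinds of item-level safeguards described above (restricting responses to programmed options and piping in the child’s name) can be sketched as follows. The item names, allowed values, and placeholder convention are hypothetical, shown only to illustrate the checks a computerized instrument can enforce.

```python
# Hypothetical allowed-response table for a computerized instrument.
ALLOWED = {
    "wj_pv_item_1": {"0", "1"},           # item scored incorrect/correct
    "language": {"english", "spanish"},   # language of administration
}

def validate_response(item: str, value: str) -> bool:
    """Allow only the responses programmed for each item."""
    return value in ALLOWED.get(item, set())

def pipe_text(template: str, child_name: str) -> str:
    """Insert the child's name into scripted assessor text."""
    return template.replace("[CHILD]", child_name)
```

Rejecting out-of-range responses at entry time, rather than cleaning them afterward, is what allows the independent programming check described above to focus on logic rather than data repair.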
Data Analysis
The study team will compile the feedback gathered during pretesting, identify themes and areas in the protocol in need of adjustment, and modify the places in the instruments and procedures that presented challenges. The study team will compare some aspects of the data collected during the assessments (e.g., rate of missingness) to data from prior rounds of MIHOPE-K data collection, all of which was conducted in person. The analysis of rate of missingness, for example, will help the study team determine if there are any places in the protocol or procedures that make virtual participants less able to complete the assessment than in-person participants.
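The missingness comparison described above reduces to a simple per-assessment statistic. A minimal sketch, assuming item responses are stored as a mapping with None for items left uncompleted (an assumed representation, not the study’s actual data format):

```python
def missingness_rate(responses: dict) -> float:
    """Share of items in an assessment with no recorded response."""
    return sum(v is None for v in responses.values()) / len(responses)
```

Comparing this rate between virtual-visit pretest cases and prior in-person MIHOPE-K cases would flag protocol steps where remote participants are less able to complete the assessment.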
Data Use
None of the data from this information collection will be released to the public. Pretest data will be used by the study team to improve instruments and procedures for virtual administration of the materials approved under OMB #0970-0402.
B8. Contact Person(s)
MDRC
Kristen Faucetta (kristen.faucetta@mdrc.org)
Charles Michalopoulos (charles.michalopoulos@mdrc.org)
Mathematica
Eileen Bandel (ebandel@mathematica-mpr.com)
Nancy Geyelin Margie (OPRE/ACF)
Laura Nerenberg (OPRE/ACF)
Attachments
Instrument 1_MIHOPE-K_Pilot_Recruitment Script
Instrument 2_MIHOPE-K_Pilot_Virtual Visit Specifications
Appendix A_MIHOPE-K_Pilot_Caregiver Notification Materials