
Evaluation and Learning for IMLS's Applying Promising Practices for Small and Rural Libraries (APP) Program

PART B. DESCRIPTION OF STATISTICAL METHODOLOGY


The analysis plan for this evaluation project is designed to produce an understanding of the capacity-building efforts of mentor organizations for two cohorts of IMLS grantees (2019 and 2020) across three practice areas (Digital Inclusion, Community Memory, and Transforming School Library Practice) in the Accelerating Promising Practices for Small and Rural Libraries (APP) program. Appendix H1-3 presents a cross-walk of research questions to the specific evaluation methods and data sources.

To answer the evaluation questions in Appendix H1, we propose a mixed-methods evaluation design that spans two cohorts of grantees and three mentor organizations across three Communities of Practice (COPs). Additional secondary data will be collected from intra-grantee conversations on the message boards of each Community of Practice's digital portal and from available mentor organization records. The evaluation team will extract conversation data from the digital portals and will attend and observe 24 digital webinars (two per year per cohort for each COP) and one in-person convening per cohort and COP per year, following the protocols outlined in Appendix F. Participant libraries in each cohort and COP will complete three surveys capturing baseline, mid-point, and end-of-program capacities as well as experiential data, complemented by observation of the technical assistance (TA) provided by the mentor organization within each COP. We will also survey libraries that applied to participate in Cohort 1, were declined, and then chose not to apply to Cohort 2, in order to understand their experience and their reasons for not re-applying.

B.1. Respondent Universe

The universe for this evaluation includes four sets of respondents: (1) staff from the forty-five small and rural library grantees (all libraries participating in the two cohorts across the three COPs); (2) all seventy-two unsuccessful applicants to the APP program; (3) all three mentor organizations leading capacity-building efforts for the COPs; and (4) six IMLS staff members involved in APP program recruitment.

Set 1: The first respondent set includes all library grantees that are involved in each cohort of funded projects and participate in one of the Communities of Practice. This respondent set will respond to baseline, mid-point, and end-point surveys assessing their capacity to conduct their funded projects, as well as an end-point interview capturing their reflections on the overall change. Program leadership staff from each library will be asked to participate, but responses will be aggregated for each library, making the grantee libraries the unit of analysis for comparative purposes, as explained later in this document.

Set 2: The second respondent set includes applicants to the APP program who were not selected to participate in Cohort 1 and did not reapply to be part of Cohort 2 – consisting of 72 libraries. This respondent set will participate in a survey to assess why they chose not to participate in the Cohort 2 application process. The individual who submitted the application on behalf of their library will be asked to participate in the survey.

Set 3: The third respondent set includes the three mentor organizations that are providing technical assistance (TA) support to each of the three COPs. This respondent set will participate in a mid-point and end-point interview as well as quarterly check-ins to assess their experiences and take-aways from provision of the TA support. Key staff from each mentor organization will be asked to participate, and responses will be aggregated for each mentor organization, making the mentor organizations the unit of analysis.

Set 4: The fourth respondent set includes key IMLS staff involved in the project: the APP program officers, the IMLS Deputy Director for Library Services, and the communications staff involved in recruitment of the first and second cohorts of the APP program. This respondent set will participate in interviews to obtain information about the recruitment processes and how they may have differed between the two cohorts and/or contributed to different characteristics of the participants in the first and second cohorts.

B.2. Potential Respondent Selection Methods

Our proposed selection methods for each respondent set can be found below. Because we are drawing from all key informants for each respondent set, we do not require any sampling methodology.

Set 1: Community of Practice Members/Grantees

A census of all library program leadership staff for both Cohorts 1 and 2 (n=88) will be completed.

Set 2: Cohort 1 applicants that chose not to reapply to Cohort 2

A census of all unsuccessful non-returning applicants (n=72) will be completed.

Set 3: Mentor organization representatives

We plan to include up to three program staff from each of the three mentor organizations (i.e., up to nine participants) in two hour-long group interviews, one at the mid-point of the project and one at the end of the project, as well as in quarterly check-ins. While two of the mentor organizations have more than three project team members (approximate FTE staff, volunteers, and consultants: Mentor Organization #1 = 5, Mentor Organization #2 = 5, Mentor Organization #3 = 3), we are limiting the group interviews to a maximum of three participants per team to focus on the core members delivering the mentoring support. Interviewees will be purposefully selected based on (1) knowledge of the technical assistance provided, (2) role within the mentor organization's efforts, and (3) availability. Each mentor organization interview will include at least the project leaders. Because of changes in programming and/or staff, participants may change over time, with the goal of engaging the individuals most knowledgeable about the program. While we recognize that this purposeful sampling methodology could introduce some bias, we believe that purposefully selecting staff based on their level of involvement in the project will be more informative than identifying a staff member through random selection, which could result in a participant who is only peripherally involved in the work.

Set 4: IMLS staff

We expect to interview six IMLS staff members: three APP program officers (one of whom is also a deputy director), the IMLS Director, and two communications staff who participated in the recruitment processes for APP Program Cohorts 1 and/or 2.

B.3. Response Rates and Non-Responses

Table B.3 shows the anticipated response rates for each of the data collection tools included in the burden estimates.

Table B.3: Anticipated Response Rates for Data Collection

| Data Collection | Respondents | Timing of Data Collection | Universe | Anticipated Response Rate |
| --- | --- | --- | --- | --- |
| Cohort 1 (C1) Grantee APP Capacity Baseline Survey | All C1 participating program staff for each of the three Communities of Practice | Jan 2020 | N = 37 | 92% |
| Cohort 1 (C1) Grantee APP Capacity Mid-Point Survey | All C1 participating program staff for each of the three Communities of Practice | Oct 2020 | N = 37 | 100% |
| Cohort 1 (C1) Mentor Mid-Point/Quarterly Interviews | All liaisons and/or coaches from each of the three mentor organizations | Mid-point: Oct-Nov 2020; quarterly: Oct 2020, Jan 2021, Apr 2021 | N = 9 | 100% |
| Cohort 2 (C2) Grantee APP Capacity Baseline Survey | All C2 participating program staff for each of the three Communities of Practice | Oct 2020 | N = 17+ | 100% |
| Cohort 1 (C1) Grantee APP Capacity End-Point Survey | All C1 participating program staff for each of the three Communities of Practice | Jul-Aug 2021 | N = 37 | 100% |
| Cohort 1 (C1) Grantee End-Point Interviews | All C1 participating program staff for each of the three Communities of Practice | Jul-Aug 2021 | N = 37 | 100% |
| Cohort 1 (C1) Mentor End-Point Interviews | All liaisons and/or coaches from each of the three mentor organizations | Jul-Aug 2021 | N = 9 | 100% |
| Cohort 2 (C2) Mentor Mid-Point/Quarterly Interviews | All liaisons and/or coaches from each of the three mentor organizations | Mid-point: Jul-Aug 2021; quarterly: Oct 2021, Jan 2022, Apr 2022 | N = 9 | 100% |
| Cohort 2 (C2) Grantee APP Capacity Mid-Point Survey | All C2 participating program staff for each of the three Communities of Practice | Aug 2021 | N = 17+ | 100% |
| Cohort 2 (C2) Grantee APP Capacity End-Point Survey | All C2 participating program staff for each of the three Communities of Practice | Jul-Aug 2022 | N = 17+ | 100% |
| Cohort 2 (C2) Grantee End-Point Interviews | All C2 participating program staff for each of the three Communities of Practice | Jul-Aug 2022 | N = 17+ | 100% |
| Cohort 2 (C2) Mentor End-Point Interviews | All liaisons and/or coaches from each of the three mentor organizations | Jul-Aug 2021 | N = 9 | 100% |
| IMLS Staff Interviews | All IMLS program officers and staff involved in recruitment for the APP program | Jul-Aug 2020 | N = 6 | 100% |
| Non-Returning Cohort 1 Applicant Survey | Applicants from the Cohort 1 application process that did not apply to be part of Cohort 2, for each of the three Communities of Practice | Sep 2020 | N = 72 | 70% |



Survey Response Rates, Strategies, and Addressing Non-Responses

As shown in Table B.3, we expect nearly a 100% response rate for all program staff surveys, including the baseline, mid-point, and end-point capacity surveys. This places the minimum expected response rate at the level recommended by OMB to minimize bias. Our previous experience indicates that participants in capacity-building programs are engaged enough to want to provide information to support and improve their programming1. Further, because we will aggregate responses by library, only one person needs to respond to represent each library to achieve the required data collection. If only one person responds to the survey, we will assess that person's role in the project and follow up with the library if necessary to ensure that the respondent was selected to represent the library's view and position. If this is not the case, we will work with the library to have the complete leadership team respond to the survey.

Our confidence in a high response rate is based on our experience leveraging relationships built with grantees similar to those in Cohorts 1 and 2. First, we will collaborate with the mentor organizations to inform the potential respondents about the surveys. Second, we will leverage the capacity of the survey tool to send automated reminders to potential respondents who have not yet completed the survey. Copies of a sample invitation email and reminder can be found in Appendix A. Finally, where possible and with support from IMLS and the mentor organizations, we will share our findings with the APP grantees, helping them understand the value of the evaluative data and providing data about their own changes in capacity as well as changes across their cohort within their program. Letting them know about this share-back ahead of time engages respondents as active participants not only in providing data for the evaluation but also in using it.

To avoid survey fatigue, and recognizing that participants will be asked to complete three surveys over the course of 24 months, we are keeping each survey to less than 20 minutes in length. The surveys will also be shared with the mentor organizations prior to deployment to ensure that the items are appropriate and clear. Our experience is consistent with Saleh and Bista2, who found similar response rates when shorter surveys of similar structure and perceived importance were presented to a population of individuals with some interest in the topic.

In the case of the Set 2 group, comprising libraries that applied for funding in Cohort 1 but not Cohort 2, we anticipate a 70% response rate. Compared to members of Cohorts 1 and 2, their connection with IMLS is weaker, and they may therefore be less likely to participate. We will use the same methods identified above to improve the probability of participation for this group, including a short survey; an explanation of the confidentiality, importance, and use of the data for prospective participants; automated reminders; and, if needed, follow-up calls by PPG staff.

If survey response rates fall below 80%, we will extend the data collection period by two weeks and conduct a comparative analysis of participant responses collected during the normal data collection period against those captured during the additional two weeks. Demographic information, along with survey responses, will be assessed to determine whether non-response bias is likely, with the late respondents treated as a proxy for the population of participants who may be reluctant to complete the survey. Depending on the results, we will determine whether non-response needs to be addressed using statistical procedures such as weighting. Application of these methods will be subject to peer review by IMLS staff with statistical methodological expertise.
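
To make the non-response check concrete, the sketch below shows one way the early/late comparison and a simple post-stratification weight might be implemented. It is illustrative only; names such as response_wave, capacity_score, and library_size_band are hypothetical placeholders, not fields from the actual survey export.

```python
# Sketch: compare respondents from the normal collection window ("early")
# with those captured during the two-week extension ("late") to flag
# possible non-response bias. Column and file names are hypothetical.
import pandas as pd
from scipy import stats

responses = pd.read_csv("capacity_survey.csv")  # hypothetical export

early = responses.loc[responses["response_wave"] == "early", "capacity_score"]
late = responses.loc[responses["response_wave"] == "late", "capacity_score"]

# Mann-Whitney U avoids normality assumptions for small groups.
u_stat, p_value = stats.mannwhitneyu(early, late, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

# If late respondents differ meaningfully, a simple post-stratification
# weight by a demographic cell (e.g., library size band) could be derived.
cell_counts = responses.groupby("library_size_band").size()
population = pd.Series({"small": 20, "medium": 15, "large": 10})  # hypothetical frame
weights = population / cell_counts
responses["weight"] = responses["library_size_band"].map(weights)
```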

Interview Response Rates, Strategies, and Addressing Non-Responses

We expect a near 100% response rate for most of our interviews. Both IMLS staff and mentor program staff have indicated a strong interest in learning from the evaluation and are expected to participate. This aligns with our previous experience conducting similar interviews in other evaluation efforts3. To increase the probability of participation, PPG staff will reach out directly to the individuals selected for the interviews, initially through email and then through follow-up calls. Our experience has shown that this process elicits enough goodwill and interest that potential participants are more likely to agree to participate.

Web-portal Conversation Scraping

At quarterly intervals, PPG staff will download all conversation data from each of the portals for subsequent coding. The information collected includes the text of each statement within each conversation, who posted the statement, and its relationship to preceding statements. In addition to the eventual thematic coding of the data, the data will be coded for type of statement (Appendix F). The data will be used to assess how the cohorts are building relationships and using their Communities of Practice.
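
As an illustration of the collation step, the sketch below assumes the portal conversations can be exported as JSON with id, author, text, and parent_id fields; the actual portal schema may differ, and this sketch is not part of the approved protocol.

```python
# Sketch: collate a quarterly web-portal export into a flat table of
# statements for later thematic and statement-type coding.
# The JSON fields shown ("id", "author", "text", "parent_id") are assumed.
import json
import pandas as pd

def load_conversations(path: str) -> pd.DataFrame:
    with open(path, encoding="utf-8") as handle:
        posts = json.load(handle)
    rows = [
        {
            "statement_id": post["id"],
            "author": post["author"],
            "text": post["text"],
            "replies_to": post.get("parent_id"),  # None for thread starters
        }
        for post in posts
    ]
    return pd.DataFrame(rows)

statements = load_conversations("cop_digital_inclusion_q1.json")  # hypothetical file
# Thread starters vs. replies, as a first cut at interaction structure.
print(statements["replies_to"].isna().value_counts())
```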

Observational Data Collection

As mentioned above, PPG staff will observe a selection of webinars (two per cohort per year) as well as the in-person convenings (one per cohort per year). The webinars will be selected based on timing within the grant year: one near the beginning of the year and one halfway through the year. Using the observation rubric (Appendix F), interactions among the grantees and the mentor organizations will be scored.
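
The sketch below shows one way an observation record could be captured for later aggregation; the rubric dimensions named here are hypothetical stand-ins for the Appendix F rubric, not its actual items.

```python
# Sketch: record rubric scores from a webinar or convening observation.
# Dimension names and the 1-5 scale are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ObservationRecord:
    event: str                                   # e.g., "C1 Community Memory convening"
    scores: dict = field(default_factory=dict)   # rubric dimension -> 1-5 score

    def overall(self) -> float:
        return mean(self.scores.values())

obs = ObservationRecord(
    event="C1 Community Memory convening",
    scores={"grantee_participation": 4, "mentor_responsiveness": 5, "peer_exchange": 3},
)
print(f"{obs.event}: mean rubric score = {obs.overall():.2f}")
```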

B.4. Tests of Procedures and Methods

This section explains the different analytic methods we plan to use to address the evaluative questions shared in Appendix H1. Our design consists of mixed qualitative and quantitative methods, leveraging administrative, survey, and interview data across the key stakeholder groups for the program. Our analytic methods focus on deriving the best information from the most effective sources. As mentioned above, the sources include:

  • APP grantee/cohort administrative data for both cohorts

  • IMLS administrative records, including review of grant applications

  • APP grantee/cohort member program staff – baseline, mid-point, and end-point surveys; end-point interviews

  • APP mentor program staff – mid-point and end-point interviews

  • IMLS program officers – interviews

  • APP applicants to Cohort 1 that did not apply to Cohort 2 – one survey

  • Cohort document review – grantee performance reports, mentor records of engagement with cohort members, mentor organization originated surveys, other documents identified in the course of the project

  • Web-portal conversations – scraping of conversations from the mentor organizations' web portals

  • PPG observations – one in-person convening per program per cohort per year (up to six), portal conversations and materials (once per quarter), and two webinars per program per cohort per year (up to twenty-four)

Appendix H shares the research questions, data sources, and the information expected to be gleaned from the collected data.

Analysis of survey and administrative data. Analysis of survey and administrative data will have five distinct purposes:

  1. Identification of pre-APP programmatic capacities of the grantee organizations

  2. Identification of descriptive variables that could affect the ability of the libraries to improve their programmatic capacities

  3. Assessment of change in programmatic capacities at the mid-point of programming

  4. Assessment of the overall change in programmatic capacities at the end-point of programming with interest in rate of change over the course of the program

  5. Analysis of grantee organizations’ experience in participating in a capacity-building cohort through a mentor organization model


Data preparation. Survey responses will be collected using the online tool SoGoSurvey. The data will be cleaned; responses with fewer than 10% of questions answered will be removed from the dataset. Administrative data from the document review will be collated to provide descriptive characteristics of the libraries, both as possible covariates for the change-in-capacity analyses and to provide a picture of the grantee population. Finally, notes will be generated for each interview, with each interviewee's comments captured individually within the interview protocol form (one form for each interviewee).
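
A minimal sketch of the stated cleaning rule, assuming a SoGoSurvey export in CSV form with item columns prefixed q_; both the file name and the prefix are hypothetical.

```python
# Sketch: drop responses answering fewer than 10% of survey items.
import pandas as pd

raw = pd.read_csv("sogosurvey_export.csv")  # hypothetical export file
item_cols = [c for c in raw.columns if c.startswith("q_")]

# Share of items answered per response, then keep those at or above 10%.
answered_share = raw[item_cols].notna().mean(axis=1)
clean = raw.loc[answered_share >= 0.10].copy()
print(f"Kept {len(clean)} of {len(raw)} responses")
```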

Descriptive and categorical analyses. We will conduct initial descriptive analyses of the survey, document review, and some observation-derived data (in-person cohort meetings as well as web-based communications and webinars) to understand response patterns across items for the whole sample, by cohort, and by the three Communities of Practice. These descriptive analyses will examine means and the distribution of scores on Likert-type scales for each item. We will also conduct a series of cross-tabulations to examine patterns for different categories of respondents, such as by programmatic area and cohort. We will present these results in a series of data tables and graphical representations.
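
The descriptive and cross-tabulation step could look roughly like the following; the column names (cop, cohort, q_partnership_capacity) are hypothetical and the sketch assumes numerically coded Likert-type items.

```python
# Sketch: item-level descriptives and a cross-tab by Community of
# Practice and cohort. Column names are hypothetical placeholders.
import pandas as pd

clean = pd.read_csv("clean_capacity_survey.csv")  # hypothetical cleaned file
item_cols = [c for c in clean.columns if c.startswith("q_")]

# Means and score spread per Likert-type item.
print(clean[item_cols].describe().T[["mean", "std", "min", "max"]])

# Response patterns by practice area and cohort for one example item.
print(pd.crosstab(index=[clean["cop"], clean["cohort"]],
                  columns=clean["q_partnership_capacity"]))
```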

Comparative analyses. While the data collected represent the complete universe of libraries involved in the program, statistical analyses require specific conditions to be met in order to be effective. We propose to conduct a statistical power analysis prior to engaging in any comparative analyses to ensure that the requirements for valid comparisons are present. We will also use these analyses to identify which statistical methods should be applied. For example, nonparametric analyses (e.g., the Kruskal-Wallis rank sum test and chi-square) and the Student t-test4 are more resistant to the effects of non-normal distributions than the F-test used in a standard ANOVA. Should no comparative analyses be possible, the descriptive and categorical analyses will be presented across the program, showing shifts in scores from baseline to mid-point and on to end-point.
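
As a sketch of this decision logic, the example below pairs an a priori power check with a nonparametric fallback; the effect size, alpha, and example score values are illustrative assumptions, not projections for this study.

```python
# Sketch: check achievable power for a three-group comparison before
# committing to parametric tests, then fall back to Kruskal-Wallis if
# distributional checks fail. All numeric inputs are illustrative.
from statsmodels.stats.power import FTestAnovaPower
from scipy import stats

power = FTestAnovaPower().power(effect_size=0.4, nobs=45, alpha=0.05, k_groups=3)
print(f"Power for a medium-large effect across 3 COPs: {power:.2f}")

# Nonparametric fallback comparing item scores across the three COPs.
# cop_a, cop_b, cop_c stand in for arrays of scores per Community of Practice.
cop_a, cop_b, cop_c = [3, 4, 4, 5], [2, 3, 3, 4], [4, 4, 5, 5]
h_stat, p_value = stats.kruskal(cop_a, cop_b, cop_c)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```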

Our assessment of statistical power for the dataset will include tests for skewness (clumping of scores at the low or high end of scales) and kurtosis (tightness of variance). If the variance among the program and cohort grantees' scores is small enough to allow it, we will conduct comparative analyses of changes in scores within items to understand how the programmatic cohorts' capacities have changed, both at the mid-point and at the end-point, relative to the baseline and to each other.
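
The distributional checks could be run along these lines; the column name is a hypothetical placeholder, and the formal tests are only meaningful at the sample sizes actually available.

```python
# Sketch: skewness and kurtosis checks feeding the choice between
# parametric and nonparametric comparisons. Column name is hypothetical.
import pandas as pd
from scipy import stats

scores = pd.read_csv("clean_capacity_survey.csv")["q_partnership_capacity"].dropna()

print(f"skewness = {stats.skew(scores):.2f}")
print(f"excess kurtosis = {stats.kurtosis(scores):.2f}")

# Formal tests; these require a reasonable number of observations.
print(stats.skewtest(scores))
print(stats.kurtosistest(scores))
```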

Qualitative data will be assessed using the thematic analysis outlined below, with comparisons conducted for the survey-derived data across baseline, mid-point, and the end of the program.

Analyses we intend to conduct include the following:

  • Within cohort and within Community of Practice. Where possible, analyses of variance and covariance will be conducted to assess change in indicators of the libraries' capacity to implement their programs. The intent is to understand how the mentee organizations' capacity evolved over the course of the capacity building as they progressed within their cohort group in their program.


  • Between cohorts within Community of Practice. Similarly, analyses of variance and covariance will be applied to assess change in capacity indicators across baseline, mid-point, and end-point scores. With these analyses, we look to understand differences between Cohorts 1 and 2 for each program. This will also enable some understanding of the impact of different factors, such as changes in the manner in which the mentor organizations engaged their grantee groups, as well as external factors such as the COVID-19 crisis and post-COVID-19 effects.


  • Within cohort and between Communities of Practice. An analysis of variance with appropriate post-hoc analyses will be conducted. Prior to analysis, the mentee datasets will be assessed for skewness, with appropriate corrections applied for the analysis of variance among the three Communities of Practice. Our focus will be on understanding the effects of any differences in the mentor organizations' approaches on the grantees. (An illustrative model sketch follows this list.)
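
As a minimal sketch of one model from the list above, the example below fits an analysis of covariance predicting end-point capacity from Community of Practice while adjusting for baseline capacity; the file and column names are hypothetical, and the actual model specifications will follow the power and distributional checks described earlier.

```python
# Sketch: one comparative model - end-point capacity as a function of
# Community of Practice, adjusting for baseline capacity (ANCOVA).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("capacity_long.csv")  # hypothetical file, one row per library
model = smf.ols("endpoint_score ~ baseline_score + C(cop)", data=df).fit()
print(anova_lm(model, typ=2))
# Post-hoc pairwise contrasts among COPs would follow only if the
# omnibus test and the power check support them.
```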



Qualitative thematic analysis of interview and web-portal data. Some of the open-ended survey questions, the interview data, and the observation of web-portal conversations are designed to elicit emergent themes. Examples of such emergent themes include challenges faced by the mentor or grantee organizations, environmental factors that affected the capacities of the grantees, and unintended effects of the capacity-building efforts. Thematic analyses will also be conducted on conversations held on the mentor organizations' web-portals. Here we look to the data to identify themes in capacity-building issues and solutions, as well as how the grantees collaboratively explore other issues that, while not the purpose of the funded technical assistance, either provide additional capacity-building support for their funded programs or address other issues affecting the libraries and their communities. We will use an inductive coding method to develop the themes, enabling those responding to our surveys and participating in our interviews to tell the story of their experience and allowing theory to emerge from their observations. We will develop codebooks based on these themes and apply them to the second cohort, supporting the testing of emergent capacity-building theory. Should additional themes emerge, we will augment the codebook and subsequent codes accordingly.
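
Purely as an illustration of how a codebook might be represented and used to flag candidate codes for analyst review (the coding itself will be done by analysts, not by keyword matching), a sketch with hypothetical codes and terms follows.

```python
# Sketch: represent an evolving codebook and suggest candidate codes for
# a statement. Codes and keywords are hypothetical illustrations only.
codebook = {
    "staff_capacity": ["training", "staff time", "volunteer"],
    "technology_barriers": ["broadband", "wifi", "devices"],
    "community_partnerships": ["partner", "school district", "historical society"],
}

def suggest_codes(statement: str) -> list[str]:
    text = statement.lower()
    return [code for code, terms in codebook.items() if any(t in text for t in terms)]

print(suggest_codes("We lack broadband and staff time to digitize the collection."))
# -> ['staff_capacity', 'technology_barriers']
```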

Observational analysis of web-portal, webinar, and in-person convening data. In addition to quantifiable measures such as the number of participants in a meeting or the number of communications on the mentor organizations' web-portals, we will assess the depth of the communications, coding each initial outreach or query on the web-portals for a possible theme as well as the depth and number of responses from other grantees. The intent is to better understand how the grantees interact with each other, within the same cohort as well as across cohorts within the same thematic area, and to capture capacity-building issues and solutions that evolve from the interactions.
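
The kind of interaction metrics described here could be derived roughly as sketched below, building on the hypothetical statement schema from the parsing sketch above; column and file names remain placeholders.

```python
# Sketch: simple interaction metrics from the parsed portal statements -
# replies per thread and the most active responders.
import pandas as pd

statements = pd.read_csv("statements_q1.csv")  # hypothetical collated file
threads = statements[statements["replies_to"].isna()]
replies = statements[statements["replies_to"].notna()]

replies_per_thread = replies.groupby("replies_to").size()
print(f"Threads started: {len(threads)}")
print(f"Median replies per thread: {replies_per_thread.median():.1f}")
print(replies["author"].value_counts().head())  # most active responders
```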

B.5. Contact Information for Statistical or Design Consultants

PPG

Lisa Frantzen, MBA, lfrantzen@partnersforpublicgood.org

Charles Gasper, MS(R), MS, cgasper@partnersforpublicgood.org

IMLS

Marvin Carr, Ph.D., mcarr@imls.gov



1 Akey, T., & Beyers, J. (2020). Community Catalyst Initiative Evaluation: Interim Findings. Unpublished manuscript, ORS Impact, Seattle, WA.

2 Saleh, A., & Bista, K. (2017). Examining factors impacting online survey response rates in educational research: Perceptions of graduate students. Journal of MultiDisciplinary Evaluation, 13(29). ISSN 1556-8180.

3 Akey, T., & Beyers, J. (2020). Community Catalyst Initiative Evaluation: Interim Findings. Unpublished manuscript, ORS Impact, Seattle, WA.

4 Student t-tests are considered highly robust even when non-normal data are present; however, the method is suspect for skewed data, in which case chi-square analysis will be used if needed. For reference, see Havlicek, L. L., & Peterson, N. L. (1974). Robustness of the t test: A guide for researchers on effect of violations of assumptions. Psychological Reports, 34(3c); Simkovic, M., & Trauble, B. (2019). Robustness of statistical methods when measure is affected by ceiling and/or floor effect. PLOS ONE (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0220889); and Posten, H. O. (1984). Robustness of the two-sample t-test. In D. Rasch & M. L. Tiku (Eds.), Robustness of Statistical Methods and Nonparametric Statistics. Theory and Decision Library (Series B: Mathematical and Statistical Methods), vol. 1. Springer, Dordrecht (https://doi.org/10.1007/978-94-009-6528-7_23).


