Museum Assessment Program Evaluation

OMB Control Number: 3137-0106

Section B. Collections of Information Employing Statistical Methods


B.1. Respondent Universe and Response Rate Estimation


B.1.1. Universe

The target universe for the Museum Assessment Program Evaluation is the 841 physical institutions in the United States that completed the MAP process between 2006 and 2014 (2006-2009 = 376; 2010-2014 = 465). These institutions include museums of all types: history, art, natural history, science centers, children's museums, botanical gardens, and others. The study excludes institutions that started the MAP process but did not, for whatever reason, complete it; only institutions that completed a MAP qualify to participate. It also excludes institutions that completed their MAP in 2015, since those sites are unlikely to have had enough time to act on their MAP results and achieve the specific changes this study aims to investigate. For the rationale, refer to Part A.2.


B.1.2 Estimated Response Rate

Our goal is to obtain an overall response rate of approximately 40% for the online survey (roughly 336 completed surveys from the universe of 841 institutions) and 90% for the telephone interviews. These goals are consistent with the 2009 MAP Impact Study, whose methods and approach were similarly structured [online survey response rate = 40%; telephone interview response rate = 90%].


B.1.3 Respondent Selection

AAM has compiled an exhaustive list of MAP participants from the past decade. This list is merged with the AAM membership database to ensure that contact information and staff rosters are as up to date as possible. This list serves as the basis for the MAP participant universe.


Prior to this study’s data collection, AAM will conduct a verification sweep to ensure that recent changes in institutional personnel and contact information are accounted for. While some institutions may not have updated their AAM membership in 2015, we believe the likelihood of missing or invalid data in the universe is low. IMLS and AAM are confident that there are no other lists or databases that could be viably used for this study.


B.1.4 Prior Data Collection

In 2009, the American Alliance of Museums (AAM) conducted an in-depth MAP Impact Study that aimed to understand the effect of the MAP program on participating institutions and the field by using feedback from MAP institutions, peer reviewers, and museum leaders. That study included responses from 194 MAP museum sites, 222 MAP peer reviewers, and 14 museum association leaders. The proposed study focuses only on museums, given its purpose as described in Section A.2.



B.2. Procedures for the Collection of Information


B.2.1 Design

This study employs a mixed-methods design that allows researchers to combine quantitative data on participant attitudes, values, and actions with qualitative data that explores the variety and nuances of their MAP experience.

In the first phase of the study, all previous MAP participants will be invited to complete an online survey; a paper version of this survey will be available upon request [Appendix A: Online Survey; Appendix B: Paper Survey]. The online survey can be accessed at: https://www.surveymonkey.com/r/ZX3TYLP


Spotlight Impact will host the online survey using SurveyMonkey® (www.surveymonkey.com), using its design engine to develop an instrument that will include open-ended, multiple-choice, scale, and Likert-type (rating) questions. The survey will also incorporate headings, sections, and conditional logic branching to optimize the user experience. SurveyMonkey surveys are compatible with Macs and PCs and are accessible through JavaScript-enabled browsers (Internet Explorer 8.0 or later, Firefox 13.0 or later, Safari 5.0 or later, and Google Chrome 16 or later). SurveyMonkey products are also accessible from mobile devices such as smartphones and tablets. At the completion of this phase of the study, SurveyMonkey data will be downloaded in Microsoft Excel (.xls) format, with analysis files stored in IBM SPSS format (.sav). Descriptive and inferential analyses (e.g., ANOVA, chi-square) of the closed-ended questions will be conducted on the quantitative data.
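To illustrate this analysis step, the following is a minimal sketch in Python (pandas/SciPy) of the kind of descriptive and chi-square analysis described above. The file name and column names (museum_type, q6_implemented_changes) are hypothetical placeholders; the production analysis will be conducted in SPSS as noted.

    # Sketch only: file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import chi2_contingency

    # Load the survey export downloaded from SurveyMonkey.
    responses = pd.read_excel("map_survey_export.xls")

    # Descriptive analysis: frequency counts for a closed-ended question.
    print(responses["museum_type"].value_counts())

    # Inferential analysis: chi-square test of independence between
    # museum type and whether changes were implemented (Q6).
    table = pd.crosstab(responses["museum_type"], responses["q6_implemented_changes"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")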


In the second phase of the study, approximately 8-12 survey respondents will be identified as high performers and invited to participate in a follow-up telephone interview. [Appendix C: Telephone Interview] The goal of these interviews is to further explore the nuances of their survey responses and achieve a greater understanding of their MAP experience, the resulting changes at their institution, and any relationship between those two factors. These interviews will result in approximately five case studies that best exemplify a variety of successful organizational practices informed or influenced by the participants' MAP experiences.


Spotlight Impact and AAM will analyze the survey data and identify a subset of participants that best represents a diverse cross-section of survey respondents and, by extension, the museum field. Online survey responses will be filtered using the following protocol to select interviewees; a sketch of this filter appears after the list.

  • Because case studies focus on best practices resulting from MAP, it is important to first identify high performers who indicate that their organization has implemented changes in institutional practices since participating in the MAP process (Q6). Of these, viable candidates are those who also indicate that these institutional changes can be significantly attributed to their participation in MAP (Q9).

  • Case studies will reflect the three different types of museum assessments offered by AAM and account for any variation in changes to the MAP program. Therefore, candidates will be selected based on the type(s) of assessment program the organization completed (Q12/Q18/Q24/Q30) and the time since they completed their MAP (Q13/Q19/Q25/Q31).

  • One goal of the case studies is to understand the longitudinal results of MAP participation. Therefore, case studies will also reflect the timeframes in which the changes occurred (Q13/Q19/Q25/Q31). We will select institutions that completed a variety of MAP-related activities in different timeframes (short-term = within a year of completing MAP; mid-term = 1-3 years after completing MAP; long-term = more than 3 years after completing MAP).

  • Because case studies are intended to reflect best practices from the variety of organizations represented in the professional museum community, variation in museum types is important. Institutional profile data (Q40-Q42) will be reviewed to ensure that interview candidates are not all drawn from the same organizational segments.
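The following is a minimal sketch, in Python (pandas), of how this selection protocol might be applied to the exported survey data. All column names, the file name, and the attribution-rating threshold are hypothetical placeholders, not the instrument's actual field names.

    # Sketch only: column names, file name, and threshold are hypothetical.
    import pandas as pd

    responses = pd.read_excel("map_survey_export.xls")

    # Step 1: high performers -- implemented changes (Q6) attributable to MAP (Q9).
    candidates = responses[
        (responses["q6_implemented_changes"] == "Yes")
        & (responses["q9_map_attribution"] >= 4)  # e.g., top ratings on a 5-point scale
    ]

    # Steps 2-3: bucket candidates by time since completing MAP (Q13/Q19/Q25/Q31).
    def timeframe(years):
        if years < 1:
            return "short-term"
        elif years <= 3:
            return "mid-term"
        return "long-term"

    candidates = candidates.assign(timeframe=candidates["years_since_map"].map(timeframe))

    # Step 4: check variation across assessment types and museum types (Q40-Q42).
    print(candidates.groupby(["assessment_type", "timeframe", "museum_type"]).size())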



An interview guide designed to encourage open-ended dialogue will be used to ensure data are collected and captured consistently. Interviewers will use a Microsoft Word template to record participant responses during the telephone interview. Interviews will be conducted by the two individuals at Spotlight Impact who have been closely involved in the survey design and response analysis, and who are thus familiar with the MAP program and the analysis objectives. Interviews will be conducted approximately 3-4 weeks after the close of the online survey.


There is no intention to conduct comparative qualitative analysis across case studies. Each case study is an individual examination of a best-case scenario that will help us better understand the experiences of high-performing institutions and their engagement with MAP. The purpose of these interviews is to explore a museum's internal and external conditions and identify the best practices and organizational successes that emerged. This phase of the proposed study is intended to capture the nuances of a single organization's experience with MAP. Data collected from interviews will be analyzed holistically, as a case, to understand potential relationships between MAP participation and changes in institutional capacity. This analysis will be grounded in the framework of the online survey questions and the goals that IMLS and AAM intend to achieve with the program.

The assessment types are described at: http://www.aam-us.org/resources/assessment-programs/MAP/assessment-types


B.2.2 Communications and Access

Upon OMB approval, Spotlight Impact, LLC will begin field operations. AAM will contact all potential study participants via email, explaining the importance of the study, asking for participation, and providing instructions on how to access the online survey. A maximum of two follow-up emails will be sent as reminders to non-respondents and to respondents who have only partially completed the survey. A paper questionnaire will be made available if a potential participant specifically requests this method; AAM will send out the paper survey with instructions for its completion and return. As part of the survey, participants will be invited to take part in a follow-up telephone interview approximately 3-4 weeks after the close of the online survey. Those who agree will enter their contact information directly into the survey template, and their data will be added to a Microsoft® Excel database. Their survey responses will be reviewed against the selection criteria described earlier.


The processes for validating contact information, emailing participants, and tracking responses are described below.




B.3. Methods to Secure Cooperation, Maximize Response Rates, and Deal with Non-Response


B.3.1 Sample and Contact Validation, Emailing and Tracking

Prior to the implementation of this study, AAM's list of all MAP participants will be updated using its latest membership database. As a secondary measure, AAM will email all individuals on that list to inform them of the upcoming MAP study and encourage participation. This verification measure serves three key purposes: 1) to validate the appropriate contact information within each MAP institution; 2) to confirm the Internet capability of the institution (i.e., that it has online access); and 3) to identify sites that may require a paper survey. Any corrections or changes emerging from this verification process will be recorded by AAM and merged back into its membership database. AAM will also compile any requests for paper surveys, recording the requester's contact information and mailing address.


A final, AAM-vetted contact list (of unique email addresses) will be imported into SurveyMonkey's survey campaign engine, allowing Spotlight Impact to track individual response rates and schedule reminders for non-respondents and partial respondents. A maximum of two reminders will be sent prior to the closure of the survey. Upon closure of the survey, Spotlight Impact will export an aggregated list of respondents and non-respondents. This information will be provided to AAM so that it has a general record of evaluation participants.
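As an illustration of the list preparation and tracking described above, the sketch below (Python/pandas) deduplicates email addresses for import and summarizes response status after export. File and column names are hypothetical; the actual tracking is handled inside SurveyMonkey's campaign engine.

    # Sketch only: file and column names are hypothetical.
    import pandas as pd

    # Build the import file of unique email addresses from AAM's vetted list.
    contacts = pd.read_csv("aam_vetted_contacts.csv")
    contacts["email"] = contacts["email"].str.strip().str.lower()
    unique_contacts = contacts.drop_duplicates(subset="email")
    unique_contacts.to_csv("surveymonkey_import.csv", index=False)

    # After the survey closes, summarize respondents vs. non-respondents
    # from the exported recipient status.
    status = pd.read_csv("campaign_status_export.csv")
    print(status["response_status"].value_counts())  # e.g., completed / partial / none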


We expect the overall data collection period to last approximately 4-6 weeks over the course of January and February 2016.


B.3.2. Gaining Cooperation

As described in the previous section, AAM will email the MAP participant universe in advance of the study. This email will not only serve to validate contact information but will also allow AAM to build anticipation for the study and encourage MAP institutions to participate. It will be sent from AAM's Office of Museum Standards and Excellence. [Appendix D: Validation Email]


Spotlight Impact will draft an email inviting MAP participants to complete the online survey. This email will include consent language, survey instructions, and appropriate contact information. [Appendix E: Survey Invitation] The email will be from AAM President & CEO Laura Lott and will be sent to individuals through SurveyMonkey's survey campaign engine. In conjunction with this direct effort, AAM will promote the MAP evaluation study broadly through its newsletter and website. This will build greater awareness of the initiative in the field and serve as a reminder for those directly contacted to engage in the process.


Spotlight Impact will employ a subtle approach to recruiting participants for telephone interviews. The online survey includes the following language: “In an effort to learn more about MAP participants and their experiences, we are conducting brief telephone interviews in the coming months. This conversation will expand upon the responses you have provided here, and will allow AAM to gain a better understanding of how MAP has contributed to your institution, and how the process can be improved. If you would be willing to be contacted, please provide your information below.” This allows respondents to self-select whether to participate in a follow-up conversation. Spotlight Impact and AAM will send candidates an email inviting them to schedule an interview time. This email will be sent from Spotlight Impact, LLC. [Appendix F: Interview Invitation]


No monetary incentives or personal benefits are offered in exchange for participation in either the online survey or the telephone interview. As such, we expect respondents to participate out of a sense of community and goodwill.


B.3.3. Technical Methods to Maximize Response Rates

For the MAP Evaluation study, we anticipate that an overall response rate of at least 40% can be achieved. In addition to the methods described above, we will employ a number of techniques to enhance the ease of use of the online survey and thus maximize the rate of completed surveys:


Field-level Data Validation. SurveyMonkey provides field-level validation for data collected through its survey engine. This includes data-type validation for numeric, date, four-digit year, currency, percent, email, and URL fields. User selections are also validated where limitations apply (e.g., check no more than 3 options, select top 2 items). Individual fields can be declared as required, and the required flag can respond to customized skip logic based on answers provided in other fields. The survey mechanism provides immediate respondent-side feedback on any validation errors when a response fails an edit, and performs server-side validation checks to ensure data integrity. Customized pop-up help can be provided for any individual field or section.
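The sketch below illustrates, in Python, the kinds of field-level checks described above (email format, four-digit year, selection limits). It is not SurveyMonkey's internal validation code; the function name and field kinds are hypothetical illustrations.

    # Sketch only: illustrative checks, not SurveyMonkey's implementation.
    import re

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def validate_field(value, kind, max_selections=None):
        """Illustrative field-level checks of the kinds described above."""
        if kind == "email":
            return bool(EMAIL_RE.match(value))
        if kind == "four_digit_year":
            return value.isdigit() and len(value) == 4
        if kind == "multi_select" and max_selections is not None:
            # e.g., "check no more than 3 options"
            return len(value) <= max_selections
        return True

    assert validate_field("director@museum.org", "email")
    assert validate_field("2014", "four_digit_year")
    assert validate_field(["A", "B", "C"], "multi_select", max_selections=3)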


Survey Support. In addition to the field-level online help provided by the SurveyMonkey website, content and technical support will be provided by email or phone by Spotlight Impact.


Skip Routines. Data quality will be enhanced through a set of user feedback mechanisms such as logical relationship validation checks and skip logic. For example, a user who responds "No" to the question "Since participating in MAP, have you applied for any outside funding?" should not be asked the follow-up question "Did participating in MAP help you in any way with securing that funding?" Spotlight Impact will determine the pattern of skip logic required during instrument development. By employing skip routines, participants will only be asked to respond to questions that are relevant to their situation.
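A minimal sketch of the funding-question skip rule described above; the answer keys are hypothetical, and the production skip logic will be configured directly in SurveyMonkey.

    # Sketch only: answer keys are hypothetical placeholders.
    def needs_funding_followup(answers):
        """Only ask the funding follow-up when the respondent applied for funding."""
        return answers.get("q_applied_for_funding") == "Yes"

    answers = {"q_applied_for_funding": "No"}
    if needs_funding_followup(answers):
        print("Show: 'Did participating in MAP help you secure that funding?'")
    else:
        print("Skip the follow-up question.")  # this branch runs for the example above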


Progress Bar. The survey will be presented in section-level and/or sub-section level pages, with data validation and response storage occurring on each individual page. Survey respondents will be able to save their responses at any time and return to complete the survey over multiple visits. A visual feedback mechanism will indicate the progress a user has made through the survey. This visual cue will be accompanied by a “% Complete” notification. These mechanisms set expectations throughout the survey and help participants anticipate completion of their task.


Confidentiality and Data Security. All data transferred to and from the SurveyMonkey website is protected through encryption protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS) technology, and is validated by Norton and TRUSTe. Data collected will be stored on SurveyMonkey servers located at their SAS70 Type II certified facility in the United States. This service is compliant with data encryption protocols and fully discloses its security policy at: https://www.surveymonkey.com/mp/policy/security/.


At the completion of this phase of the study, all data downloaded from SurveyMonkey will be in Microsoft Excel (.xls) or IBM SPSS (.sav) formats. These files will be secured on Spotlight Impact computers and backed up using Dropbox. Dropbox is designed with multiple layers of protection, including secure data transfer, encryption, network configuration, and application- and user-level controls distributed across a scalable, secure infrastructure. Dropbox files at rest are encrypted using 256-bit Advanced Encryption Standard (AES), and Dropbox uses Secure Sockets Layer (SSL)/Transport Layer Security (TLS) to protect data in transit between Dropbox apps and its servers, creating a secure tunnel protected by 128-bit or higher AES encryption. Dropbox applications and infrastructure are regularly tested for security vulnerabilities and hardened to protect against attacks. Dropbox files are viewable only by people who have a link to the file(s), and only members of the evaluation team will have access to these data. In addition to raw data, the evaluators will make available copies of data collection and analysis protocols and instruments. No personally identifiable information will be included in raw data that is disseminated; however, for sites that have actively chosen to provide their contact information for follow-up telephone interviews, that information will be made available to AAM.


Response Rate Monitoring and Reminders. Spotlight Impact will monitor response rates on a weekly basis and provide status reports to AAM. Over the collection period, reminder emails generated through SurveyMonkey will be sent to non-respondents and partial respondents. In the event response rates are lower than expected, AAM may decide to extend the data collection period by two weeks.
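A simple worked example of the weekly monitoring calculation, using the 841-institution universe and 40% target from earlier sections; the weekly completion count is a hypothetical figure.

    # Sketch only: 'completed' is a hypothetical weekly tally from SurveyMonkey.
    universe = 841
    completed = 250
    rate = completed / universe
    print(f"Response rate: {rate:.1%}")  # 29.7% in this example

    if rate < 0.40:
        print("Below the 40% target -- consider another reminder or a "
              "two-week extension of the collection period.")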


Paper Survey. If requested, MAP participants may receive a paper version of the survey. This document will contain the same questions as those found online, with instructions and skip-logic directions included. AAM will disseminate these paper surveys to MAP sites and provide a postage-paid envelope for their return. At the end of the survey period, AAM will provide the completed paper surveys to Spotlight Impact, which will manually enter the information and merge the data with that collected online.
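A minimal sketch of the final merge of manually keyed paper responses with the online export; the file names and the mode column are hypothetical placeholders.

    # Sketch only: file names and the 'mode' column are hypothetical.
    import pandas as pd

    online = pd.read_excel("map_survey_export.xls")
    paper = pd.read_csv("paper_surveys_entered.csv")  # manually keyed responses

    # Tag the collection mode and combine into one dataset for analysis.
    online["mode"] = "online"
    paper["mode"] = "paper"
    combined = pd.concat([online, paper], ignore_index=True)
    combined.to_csv("map_survey_combined.csv", index=False)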



B.4. Tests to Minimize Burden


To avoid duplicating questions posed to MAP participants, Spotlight Impact reviewed the 2009 evaluation study conducted by AAM, focusing purposefully on question types, items, and survey organization. While the vast majority of questions in this 2015 study are unique and specifically designed to address the project goal of assessing the contribution of MAP participation to institutional change and professional capacity building, five questions from earlier studies were incorporated into this survey in order to adhere to existing AAM member segmentation and provide data continuity. These exceptions are as follows:

  • In order to consistently measure the size and demographics of the institutions represented in this study, 4 institutional profile questions from the 2009 MAP study were duplicated directly in this online survey, in question wording, structure, and response item options. Questions 40-43 on the online survey cover region, museum type, museum size by annual budget, and museum size by number of full-time/part-time staff. These are standard questions employed by AAM in all of its surveys and provide a systematic way to describe its membership.

  • To measure motivations and influences for participating in MAP, Question 2 of this online survey adapted a similar question from the 2009 MAP study. While the question wording and structure are the same between the two studies, the response items (likely influencing factors) were updated to include five (5) additional factors aligned with AAM's intended goals for MAP participation: “Desire to leverage institutional change”, “Desire to get our board engaged”, “Desire to create a foundation for strategic planning”, “Desire to increase our community engagement/visibility”, and “Desire to enhance fundraising efforts”.


Additionally, the online survey mechanism (on SurveyMonkey) will be tested to assess the survey validation routines, skip patterns, and overall time it takes to complete the survey, as well as specific timing for sections of the questionnaire. Spotlight Impact and AAM staff will conduct this testing, essentially test-driving the survey mechanism with dummy data. Spotlight Impact will make adjustments to the web-based survey mechanism prior to data collection. Reviewers may also view and test the survey at the following practice link: https://www.surveymonkey.com/r/ZX3TYLP



B.5. Individuals Responsible for Study Design and Performance


The following individuals are responsible for the study design and for the collection and analysis of the data for the Museum Assessment Program Evaluation.


Personnel Involved with the MAP Evaluation

Institute of Museum and Library Services

Christopher J. Reich
Senior Advisor, Office of Museum Services
1800 M Street NW, 9th Floor, Washington, DC 20036-5802
creich@imls.gov | 202-653-4685

Matthew Birnbaum, Ph.D.
Senior Evaluation Officer
1800 M Street NW, 9th Floor, Washington, DC 20036-5802
dswan@imls.gov | 202-653-4759

American Alliance of Museums

Julie Hart
Senior Director, Museum Standards & Excellence
1575 Eye Street NW, Suite 400, Washington, DC 20005
jhart@aam-us.org | 202-218-7712

Spotlight Impact, LLC

Angelina Ong
Principal / Project Manager
3635 Woodland Park Ave N, Unit 219, Seattle, WA 98103
angie@spotlightimpact.com | 206-484-1953
