CBAMS II: Questionnaire Comments
Comment E1: Why do you refer to it as the “Census of the United States”?
This is language from CBAMS I, so practically speaking, it will ideally stay the same. However, it is not intended to be a proper name for the Decennial Census. It is intended to describe the nature of this particular census, since some people may know that “census” has a broader meaning.
This still seems like strange wording, but it needs to be consistent between CBAMS I and II, so there is no need to change it.
Comment E2: Is this enough description to really jog a person’s memory? Why not mention that it was a piece of paper that was mailed to your home, that it was blue, and that you would have received it in March last year? I think this would jog our brains, but not necessarily the general public’s.
Again, this language is from CBAMS I. A change here, especially adding details to the aided awareness question, would reduce our ability to align segments from CBAMS I and II. Further, the question is not asking for recall of the Decennial Census. It is asking whether the respondent has ever heard that we had a census. For this reason, cueing with details of their recent Decennial experience is likely to cue the wrong kind of awareness. That is, I do not think we want to identify people who remember getting mail from the Census Bureau; we want to identify people who remember reading, seeing, or hearing something about the fact that the government counts the people. Note that responses to these questions do not drive skips later in the survey.
Similar to E1, I’m not sure respondents would grasp that difference, but it is important to remain consistent, so there is no need to change it.
Comment E3: Is the only use of this information for the linkage mentioned in section A2? It seems out of place here. It should be either at the beginning or with the demographic questions.
The zip code will be used to assign a census block cluster for at least some records, for comparison to the attitudinal mindsets. This is part of profiling the mindsets. We considered its placement carefully. Our experts wanted to have this measure for all respondents, including breakoffs. However, putting this right at the beginning of the survey could interfere with the establishment of rapport. We put it here because this is as close to the beginning as we could get it and still have it come after a coherent group of survey questions.
Okay. No change necessary.
Comment E4: I would change this to “Do you believe completing your census form…” By including two verbs, you now don’t know which part they benefit or don’t benefit from, the answering or the sending. You really only care about the sending so just ask about the sending.
This question is here because it has been shown to be a driver on other surveys. We agree that the wording, especially of the response scale, is not ideal. However, to change the wording would perhaps change the question’s utility in segmentation—particularly important because we are attempting to identify a small set of questions that profiles the segments. However, the use of two verbs is unlikely to impair the utility of the question substantially for two reasons:
1. It seems unlikely that people think that answering would have a different outcome than sending.
2. From the government’s point of view, those two verbs refer to a single action; their meaning in the sentence is the same as “responding to” would be.
Furthermore, we have used this question on multiple other surveys, including the tracking surveys conducted during the 2010 Census, and we would like to keep the wording consistent for comparisons.
Okay. No change necessary.
Comment E5: In general, ranking is a cognitively difficult task for respondents. While I think chunking the list in this manner is a good idea, it still may pose a challenge to people who just don’t have an affect toward these items. It is also unclear to me how these questions inform Goals 1 and 2 or how the introduction relates to the ranking task – is this some sort of Sophie’s choice of who gets funding and who doesn’t?
I’m not exactly sure what OMB’s specific question is, but I can provide some detail on this task. This is a MaxDiff exercise, a forced choice approach widely used in market research. OMB is absolutely right that ranking, especially on the telephone, is difficult if not impossible for respondents. MaxDiff was developed as an alternative to a full-list ranking. In visual-mode surveys, this approach can be used to obtain rankings for a large set of attributes. In a phone context, it can be used to create a bite-sized task that yields a ranking of attributes. Our only alternatives to MaxDiff on phone are to do a full ranking or to have respondents rate each item.
In CBAMS I, the research team asked respondents to rate how important each of these sources of funding was. The observed variance in these responses was extremely low, and their potential to aid the segmentation or profiling went unrealized. In CBAMS II, we must increase the variance across respondents, especially the differences between respondents attributable to beliefs rather than affect. Otherwise, we will not be able to achieve our goal of differentiating between people in the negative mindsets. A simple rating of the importance of funding several things does not achieve this; only a forced-choice or ranking task will.
We did consider a full ranking which is achieved in phone surveys by asking for the most important, reading the whole list again and asking for the next most important, and so on. With this many attributes, that task is cognitively onerous and would take quite a long time.
The MaxDiff approach is cognitively defensible. There are just three items in each task, few enough that respondents should be able to hold all three in memory during question administration. This is in contrast to the requirements of a full ranking task. In cognitive testing, respondents understood each item and understood the task, although when we tried with four items, they found the task much more difficult. They also noted that choosing the “highest” and “lowest” was too hard, but choosing the “highest” and “next highest” (the approach we use in telephone ranking tasks) was feasible.
Ultimately, MaxDiff is analyzed to yield a ranking of items per respondent. These rank orders will be used in the segmentation analysis.
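To make concrete how a per-respondent ranking falls out of the choice data, here is a minimal illustrative sketch that scores items by a simple best-minus-worst count. The item names and the counting rule are assumptions for illustration only; the study’s actual analysis may fit a more sophisticated choice model.

```python
def maxdiff_rank(tasks):
    """Derive one respondent's ranking from MaxDiff tasks.

    Each task is (items_shown, best_pick, worst_pick). Scoring here
    is a simple best-minus-worst count, used only to illustrate the
    idea; real analyses often fit a choice model instead.
    """
    scores = {}
    for shown, best, worst in tasks:
        for item in shown:
            scores.setdefault(item, 0)  # every shown item gets a score
        scores[best] += 1
        scores[worst] -= 1
    # Highest net score = most important for this respondent
    return sorted(scores, key=lambda item: -scores[item])

# One respondent's answers to two three-item tasks (hypothetical items)
tasks = [
    (("schools", "roads", "hospitals"), "schools", "roads"),
    (("schools", "parks", "roads"), "schools", "parks"),
]
print(maxdiff_rank(tasks))  # -> ['schools', 'hospitals', 'roads', 'parks']
```

Each respondent’s rank order, derived in roughly this fashion, is what feeds the segmentation analysis.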
Comment E6: I would ask these questions before the ranking questions since they are concrete and easier to answer. This also puts all the attitude/ belief questions together.
We can implement this change, which would only impact questionnaire programming.
As discussed in the phone call, please add information explaining how the MaxDiff works, include any applicable research citations, and add a statement of how the results from the MaxDiff inform CBAMS. On the phone, it was described as a way to find appropriate messages for a particular mindset; please add that to the supporting statement.
Comment E7: Shouldn’t you only get this question if CE2=2? If not, what happens when CE2=1 (yes, the form was completed) and CE6=1, since technically this situation shouldn’t happen?
Since recall of mailing back the census form is known to be inaccurate, we elected to ask this question of everyone. Additionally, if we received the mailed-back form after the NRFU cutoff date, the respondent would still have received an interviewer visit.
Okay, no change necessary.
Comment E8: So far you’ve been focusing on the Census Bureau, and now you’re switching respondents’ minds to the government in general. I would include a transition sentence that says so and organize the survey so all the Census-specific questions come first, followed by questions about the government in general.
How do you feel about prefacing this section with the following statement:
For this set of questions, think of the government in general and not just the Census Bureau.
Okay, please add that sentence to the instrument.
Comment E9: How do these questions support Goals 1 and 2?
These questions will be used in segmentation. They support our goals of segmenting mindsets because it is important to understand the characteristics of, and the similarities and differences between, the mindsets when it comes to these behaviors. Again, the mindsets will ultimately inform communications campaigns, and knowing this information helps us to determine how to speak to each mindset.
Okay, no change necessary.
Comment E10: Consider including a question about other agencies besides Census doing the record linkage. Maybe I’m okay with Census but not the IRS. The same may be true of other Census surveys besides the Decennial.
The specific purpose of asking these questions is to determine whether the respondents are comfortable with Census’ use of the data. The results will only impact communications efforts regarding the Census.
Okay, no change necessary however this is an important construct that we need to research and understand over the course of future research.
Comment E11: What about people who don’t think this really saves money, since Census will just take the “extra” from Decennial and spend it on something else? In other words, I didn’t get a cut in my taxes because Census saved some money. Based on this, you may want to change “save money” to “spend less”.
These scenarios are intended to test specific messages Census might use about the choice to use administrative records. The message content would most likely be “save money”, since this seemed more compelling. To test “spend less” would mean not testing another message type. It is my understanding (ICF Macro) that Dr. Groves requested that we test these messages. Dr. Groves and Peter Miller along with other survey methodologists including Frauke Kreuter agreed to this wording. Given this reasoning, we are still open to changing if OMB prefers.
Okay, no change necessary however this is an important construct that we need to research and understand over the course of future research.
Comment E12: But you aren’t spending more money – you already send an interviewer to my house if I don’t return my form, so this is the same money. I think I see where this is going, but this isn’t quite it. I think AMCost5 gets at this construct, and I would delete this item.
Using Administrative Records is substantially cheaper than sending an interviewer to nonrespondents. I can obtain approximate budgeting on this if you would like to see it.
Change the language in the question so that it is clear that there is a comparison between administrative records and interviewer visits. This makes it clear to the respondent what you’re asking without relying on their short-term memory to bring that concept forward from the previous questions.
Comment E13: What about partial SSN?
It is my understanding that Census plans to use full SSN, so that is probably the best thing to test. The government might use partial or complete SSN. Respondents’ ability to distinguish between those concepts is likely limited, although we have not tested that, and adding the words lengthens the question.
Okay, no change necessary however this is an important construct that we need to research and understand over the course of future research.
Comment E14: Something that isn’t tapped by this question is why a respondent doesn’t want the Census Bureau using the records. For example, a respondent might say no to credit records not because they feel the information is sensitive but because the data is wrong, and if you use a credit report to complete my census form, who knows what you’ll get.
The intent of the commented question is, as you note, to obtain information about which things people would approve and does not obtain reasons. To a large extent, this is because that kind of drill-down is outside the scope of the present test and would add substantially to survey length.
Okay, no change necessary however this is an important construct that we need to research and understand over the course of future research.
Comment E15: Again, you have no way of separating privacy reasons from data quality reasons, and even if you don’t ferret that out here, be careful not to make that assumption in the report.
The intent is not to make any statements in the report about reasons why people would or would not support administrative records use. The intent is to directly compare the impacts of several ways of talking about administrative records.
Okay, no change necessary.
Comment E16
We are still trying to capture the same respondent’s feelings regarding an interviewer visit. Interviewer visits are a point of contention, and it seems logical to understand what types of people believe what: are the people who are comfortable with admin records usage the same people who wouldn’t mind an interviewer, ok with admin records but not an interviewer, ok with an interviewer but not admin records, etc.?
Okay, no change necessary.
Comment E17
Phone calls are not an option being explored by the Census Bureau at this time, while admin records usage is; therefore, we have no reason to ask about phone calls. If OMB would prefer we still ask about phone calls, we will.
I understand that it is not a current option for Census, but as a respondent it seems like a logical choice that is missing. No change is necessary, but it might come up in the interviews.
Comment E18: I’m not sure I understand why you ask the same set of questions three times instead of just asking each question once and asking them why? Some of the questions come off as a bit awkward because you are trying to force them into this framework. I really think this section needs cognitive testing before fielding
Each respondent will be asked this set of questions exactly once. The language will be different each time; this is an experiment. Unfortunately, stated reasons for behavior and beliefs, especially stated reasons for hypothetical or new beliefs, can be extremely inaccurate (for a classic example, see Nisbett & Wilson, 1977). Directly testing the language will be much more effective in determining which frame is most effective.
Comment E19: Regardless of the above comment, how is this measuring control?
I believe there is some confusion about the structure of this task. This is the “control” frame in an experimental sense—it does not have a frame like the other groups of similar questions.
Comment E20: By the third time you ask me this I would be annoyed. I just told you twice I’m not giving you my SSN.
See the above response. The skip language refers to several “frames”; these are assigned in sampling, and there is one per record.
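To make the design concrete, the following is a rough sketch of what one-frame-per-record assignment at sampling time could look like. The frame labels and the balancing approach are assumptions for illustration; they are not the study’s actual labels or sampling procedure.

```python
import random

# Hypothetical frame labels for illustration; the study's actual
# frame names and count may differ
FRAMES = ["control", "frame_A", "frame_B", "frame_C"]

def assign_frames(record_ids, frames=FRAMES, seed=2011):
    """Assign exactly one experimental frame to each sampled record.

    Shuffling and then assigning round-robin gives each frame a
    near-equal share of the sample, and every respondent sees only
    the question set for their single assigned frame.
    """
    rng = random.Random(seed)
    ids = list(record_ids)
    rng.shuffle(ids)
    return {rid: frames[i % len(frames)] for i, rid in enumerate(ids)}

assignment = assign_frames(range(100))
```

Because assignment happens once per record before fielding, no respondent is ever asked the same question set under more than one frame.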
Comment E21: An alternative way would be to ask about the item and then ask why they don’t want to do something. For example:
How willing or unwilling are you to give your SSN to Census?
(if unwilling)
What if it saved you money?
What if you didn’t have to complete a census form?
What if it meant we didn’t have to send an interviewer to your house?
If still unwilling:
Why are you so unwilling?
I don’t trust the govt
I have control issues
I’ve been told never to give my SSN to anyone
This should get you to the same constructs and perhaps some additional ones, and you would only have to ask a respondent about their SSN once.
See notes above about the measurement of reasons; the ultimate goal here is to compare the frames.
As discussed on the phone, Census will add more information about the frames experiment to the supporting statement that clearly explains what the questions are measuring and how the rotation works. It is not clear in the Supporting Statement that respondents only get one of the frames. Census will also add a power analysis to the package.
Comment E22: Goal 5 is how Census can reach this mindset but just because I use the internet or my phone has internet capacity doesn’t mean it’s a good way for the Census Bureau to contact me. In other words, I might be okay with a text message from the Gap but not from the Census bureau and that isn’t really addressed here.
These questions were constructed in the same manner as the “how to reach” questions from CBAMS I. Ideally, we need to know whom we can reach and how. A given channel might not be the best way to reach each person, but for the mindset it is a good start. For example, perhaps the unacquainted mindset scores very low on this but the leading edge scores highly; then we would know not to even explore this communication channel for the unacquainted but would know it’s an option for the leading edge.
Okay, no change necessary however this is an important construct that we need to research and understand over the course of future research.
Comment E23: These questions aren’t demographics and should either be at the beginning or the end of the section. Also, they aren’t mentioned in the supporting statement and they should be.
These questions are not demographics, but they depend on the answer to a demographic for their skip pattern. Additionally, these questions were added to help with the Census in Schools/Partnership project as they are not conducting a survey. However, that project has substantially changed in scope, and I am currently waiting to hear from that team on whether or not they still need us to obtain this information. If not, we will drop. If yes, we will add information about these questions to the justification statement.
The Census in Schools questions will be dropped from the instrument.
Nisbett, R., & Wilson, T. (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review, 84(3), 231-259.