
Federal Statistical System Public Opinion Survey

Cognitive Testing Report

OMB: 0607-0969


Cognitive Interview Evaluation of the

Federal Statistical System Trust Monitoring Survey:

Results of interviews conducted in October 2011


Stephanie Willson

Questionnaire Design Research Laboratory

National Center for Health Statistics

Centers for Disease Control and Prevention


  1. Introduction


This report documents cognitive testing results for the Federal Statistical System (FSS) Trust Survey. The survey represents an effort to understand public opinion and knowledge of the FSS, specifically, trust in the FSS, the credibility of federal statistics, and attitudes toward and knowledge of the statistical uses of administrative records. To this end, a working group was formed to design and implement a cross-agency survey of public attitudes about federal statistics and statistical agencies. The working group consists of members from the Office of Management and Budget (OMB), the Census Bureau, the National Agricultural Statistics Service (NASS), the Internal Revenue Service (IRS), and the National Center for Health Statistics (NCHS).


The next section briefly describes the qualitative methodology of cognitive interviewing, including the procedure for sampling interview respondents, the data collection method, and the analysis plan. The third section of the report presents an overview of general findings, followed by a more detailed question-by-question analysis.


  2. Methodology


Sampling and Respondent Demographics


The Census Bureau, NCHS, and IRS participated in cognitive interviewing. We completed a total of 42 interviews. Two versions of questions 1, 5, 6, and 7 were tested, with 21 interviews conducted using each version. Respondents were selected using a purposive sample. The goal of a purposive sample is not to obtain a statistically representative sample. Instead, emphasis is on coverage of the survey questions and topics, not the survey population. As a result, respondents were selected according to whether they reported having an interest in or following statistics on health, unemployment, and/or population counts. In addition, we sampled people with various levels of trust in the Federal government.1 Demographic diversity among respondents was also important, especially with regard to education. We aimed for a sample that included those with high and low levels of educational attainment (college graduates and those with a high school diploma or less). Some interviews (18) took place in the lab at NCHS. However, the Census Bureau and the Internal Revenue Service conducted the remainder of the interviews off-site in an effort to create an environment conducive to respondents expressing distrust of government and government institutions. Interviews were designed to last 60 minutes and a $40 token of appreciation was given to respondents at the conclusion of the interview.

The demographic breakdown of respondents by gender and age appears in Table 1. Information on educational attainment was not consistently obtained and is, therefore, not reported here.


Table 1: Demographic summary of respondents by gender and age (N = 42)



                     Total   Percent
Gender
  Female                22       52%
  Male                  20       48%
Age
  Under 40 years        14       33%
  40 years and over     24       57%
  Missing                4       10%



Data Collection


Cognitive interviewing, as a qualitative methodology, offers the ability to understand the interpretive process behind answers to survey questions. Different types of cognitive interviewing techniques exist. Respondent narrative and intensive follow-up verbal probing were the primary methods used for data collection in this project. With these techniques, interviewers administer the survey question, obtain an answer, and then probe the respondent for information relevant to his or her responses. Follow-up probes are initiated when contradictory information is given by the respondent, as this may indicate points of confusion and misinterpretation. Probes are also useful for exploring pre-identified areas of concern in the instrument. On the other hand, respondent narrative allows for the exploration of unanticipated problems by producing rich and detailed information on how respondents answered the question, what they were thinking when answering, and how they interpreted the meaning of the question.


Shedding light on the question-response process, data from narratives allow the analyst to determine which stage in the process of answering a survey question – comprehension, retrieval, judgment, or response – the respondent had difficulty with, if any. The appropriateness of response categories can be evaluated with this procedure, as can the ability of participants to draw upon their own experiences and knowledge to answer the questions effectively. Because the intensive interviewing method provides extensive detail on the question-response process, not only does it allow the interviewer to identify which questions and/or response categories are problematic, it also shows why and how questions are problematic, leading to informed strategies for improving question design in terms of maximizing construct validity.


All data were entered into and are currently stored in Q-Notes, a software application maintained by NCHS. This software was designed specifically for cognitive interview methods and helps facilitate a common approach among the different agencies participating in a single project – agencies that might otherwise use very different techniques.


Method of Analysis


Data analysis proceeded according to the grounded theory approach, which does not aim to test existing hypotheses but instead generates explanations of response error and various interpretive patterns that are closely tied to the empirical data. The process of analysis is a constant comparison of data in three distinct steps. The first step occurs within the interview as the interviewer attempts to understand how one respondent has come to understand, process, and then answer a survey question. Basic response errors can be identified by comparing respondents’ answers to the survey questions to the narrative they provide during the interview. When logical contradictions are evident between the narrative and the survey answer, the interviewer explores why these contradictions occurred.


The second step in analysis occurs once the interview is over, and is a systematic comparison across all interviews. This level of comparative analysis reveals patterns in the way people answer survey questions. At this level, it’s possible to identify the construct that’s being captured by the survey question and illustrate the substantive meaning behind the survey statistic.


The final phase of analysis is a comparison of patterns across sub-groups, identifying whether particular groups of respondents interpret or process a question differently from other groups. At this level of analysis, that is, identifying patterned differences among subgroups, we begin to understand where potential for bias would occur in survey estimates.


  3. Results


Overview of Findings


The Challenge: Respondents’ Knowledge of the Topic (or Lack Thereof)


The driving factor shaping the question-response process in these questions was respondents’ lack of understanding and knowledge of the Federal Statistical System in particular and statistical information in general. This is consistent with findings from an international effort by the OECD (Organisation for Economic Co-operation and Development) to measure the general population’s trust in statistics produced by national governments. The United States portion of that cognitive interviewing, conducted by the National Center for Health Statistics, found that most respondents completely misunderstood the questions because they had no knowledge of the Federal Statistical System. The respondents who did have some notion of the FSS were those who had made use of federal statistics (usually for work or educational purposes). Not surprisingly, respondents with first-hand experience with federal statistics had less difficulty making sense of the questions and had interpretations that were more consistent with question intent than those with no experience with government statistics.


The current study produced results consistent with the OECD project. We found that respondents have very little knowledge of federal statistics. As a result, the survey questions fall victim to a phenomenon common among many attitude questions; that is, there is no static underlying evaluation to capture. Instead, responses are created on the spot and are often inconsistent across questions aiming to measure similar concepts. To answer a survey question about a topic they have not previously considered, respondents take into account a variety of considerations about the issues and then make judgments about which considerations to use when answering the question. These considerations often consist of a random assortment of feelings, beliefs, impressions, and general values. The more heterogeneous these considerations, the more instability there is among responses. In other words, at any given time, or among different questions designed to measure the same construct, respondents may sample and apply different considerations when formulating their answer. This process, referred to as the Belief Sampling Model (Tourangeau, Rips, & Rasinski, 2000), explains the response instability evident in many attitude questions. Furthermore, it is important to note that response instability is concerning to the extent that it indicates that a single concept is not being measured by a question. This diminishes our faith in the item’s construct validity.
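
To make this intuition concrete, the following is a minimal formal sketch of the belief-sampling idea (the notation is ours, not Tourangeau et al.'s): suppose that on a given occasion a respondent retrieves a sample of $n$ considerations whose evaluative scale values $v_1, \ldots, v_n$ are drawn from a personal pool of considerations with mean $\mu$ and variance $\sigma^2$, and reports approximately their average:

\[
\hat{A} \;=\; \frac{1}{n}\sum_{i=1}^{n} v_i,
\qquad
\operatorname{Var}\big(\hat{A}\big) \;\approx\; \frac{\sigma^{2}}{n}.
\]

On this account, two administrations of the “same” question can yield different answers simply because different considerations happen to be sampled each time; the more heterogeneous the pool (larger $\sigma^2$) and the fewer considerations retrieved, the greater the response instability.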


We found four forms of evidence supporting the notion that respondents do not have predefined opinions about federal statistics and that their answers to the survey questions lack stability. First, interpretations shifted among questions attempting to measure similar concepts. We also found that attitudes themselves shifted as respondents weighed different considerations on the topic. Second, in order to answer a question, respondents sometimes were not thinking about statistics at all, but rather drawing on other (often irrelevant) considerations and topics. Third, sometimes there was general confusion over what a question was asking, to the point that respondents could not answer the question. Finally, respondents often limited their understanding of a question to the examples given, rather than thinking broadly about federal statistics. All of these factors suggest that respondents do not have specifically formulated underlying evaluations which are knowable and measurable. The following discusses these four themes in more detail.


1. Shifting interpretations: One example of a lack of stability in the survey responses is the shifting interpretation apparent among respondents. There were many instances of respondents giving contradictory answers to similar survey questions or providing a narrative during probing that did not match the way they answered the survey question.


For example, one respondent disagreed with the statement “federal statistical agencies can get information collected by any one of them” because she believed the opposite to be true. She said, “I don’t think they share enough information. Unfortunately. You would think in this technology age they would, but they don’t.” However, in questions related to record linkage this same respondent said she was very much against her personal information being shared. This question is about sharing undefined, generic information. But when she thinks a question is about sharing her personal information, her opinion is different. This lack of specificity in the question results in shifting interpretations as people sample different values and beliefs.


A similar phenomenon occurred for one respondent answering question 5a, version 2 (“How do you feel about federal agencies collecting information directly?”). At first she said she supports “anything that protects my privacy.” However, in talking about accuracy, she said that if a record already exists, it would be more accurate than a survey. She discussed linkage to health records and income data, suggesting that “Their memory would be better than mine.” Again, this shifting attitude is clearly prompted by having the respondent think about different considerations surrounding a topic. Another respondent had the same issue in question 5b, version 1 (“How do you feel about federal agencies collecting information directly?”). He said, “I’d rather give my information directly – don’t go behind my back.” However, in a previous question (10) he said he didn’t mind if the government gets information about him from various agencies. It seems clear that this question caused him to think about privacy differently from the way he thought about it in question 10.


There were many examples of this pattern in the data. It’s apparent that people’s answers easily fluctuate, depending on what they are prompted to think about and what considerations they sample before answering. This phenomenon is partly explained by the fact that broad topics are highly subject to context effects. This suggests that more stable estimates will be obtained by asking specific questions that cue all respondents to sample similar considerations. This idea will be discussed further in the section on proposed solutions.


2. Not thinking about statistics: Many people do not have predefined opinions about federal statistical data because they have little awareness of this type of information to begin with. Even those with some level of awareness do not possess sophisticated knowledge of statistical information and have not thought extensively about the topic, especially in relation to the Federal government. In order to answer the survey questions, respondents who fall into this category may sample from a set of considerations that are irrelevant to statistics. In essence, they do not understand (or answer) the question as intended because they don’t have the required knowledge base to do so. Additionally, this problem was exacerbated by the fact that some questions were worse than others at conveying the idea that they were about statistics. Questions 4j and 4l are good examples of this.2


For example, one respondent agreed with question 4f (“Statistics provided by federal agencies are often biased.”). However, when asked why he agreed, he said that the lobby industry is influential: lobbyists have the ability to persuade members of Congress to vote certain ways and for certain policies. Similar misinterpretations were evident in question 4h (“There is political interference in the work of federal statistical agencies.”). To explain why she agreed with the statement, one respondent said, “I’ll just say one word: lobbyists!” She mentioned that politicians are influenced by lobbyists to vote in ways that sometimes contradict the platform on which they were voted in. These kinds of explanations illustrate how some respondents (those with no knowledge of federal statistics) do not interpret the questions as intended. As a result, the statistics produced by these items will not reflect the desired construct.


3. Confusion over what the question was asking: A related point is that respondents with very little knowledge of federal statistics sometimes had difficulty understanding what a question was asking altogether. If this confusion was great enough, they could not determine what beliefs to sample and, therefore, could not answer the question at all. For example, in question 4h, “There is political interference in the work of federal statistical agencies”, one respondent couldn’t answer because it did not make sense to him. He asked, “Who would be doing the interfering? Because I thought the government would be one big government, so who would be playing interference?”


Most times, however, confused respondents (i.e., those who were unaware of federal statistics and, therefore, could not understand the questions as intended) did give an answer. This created a variety of scenarios. Sometimes respondents could not stay on topic and had a difficult time understanding the nature of the questions. In these cases, survey administration was quite burdensome for both the interviewer and the respondent. Other times respondents would draw upon their own experience in relation to the topic (not statistics on the topic). Several respondents answered the questions based on their own experiences with being laid off from work and trying to find a job. Finally, but less commonly, respondents would provide completely unrelated examples. For example, one respondent said, “If the Fed says they’re gonna do something, they’re gonna do it. If they say you’re going to be in jail for 6 years, you will be there for 6 years, not a day more or a day less.”


Question 4l is a good example of how respondents think about all kinds of information. It states, “Information collected to create federal statistics is sometimes used by the police and the FBI to keep track of people who break the law.” Many respondents were not thinking of statistics as much as they were thinking about the government accessing people’s personal files or police records. They cited examples such as terrorists lists, sexual predator lists, lists of traffic ticket recipients, travel records (such as airline tickets), and personal files (one person talked about how they kept a large file on John Lennon).


4. Interpretations limited to examples given: A final indication that respondents cannot think generally about federal statistics is that many respondents limited their interpretation of the questions to the examples given in the first question, especially if they were already familiar with the agency. (The Census Bureau was, by far, the most recognized agency, but others were mentioned as well.) Even when subsequent questions asked about federal statistics more generally, some respondents thought specifically of the agency they knew, such as Census. Question 3 and question 4d in the question-by-question section are good examples of how respondents thought only of the examples in the question.


This is not necessarily an unwelcome pattern and, in fact, suggests a potential solution to the challenges of these questions. Respondents who lack a clear understanding of the FSS need something on which to base their answers. Rather than leave this up to the haphazard nature of heterogeneous belief sampling, it is preferable to at least have respondents think about a federal statistical agency when formulating their answers. The next section covers this and other possible improvements to the questions.




Proposed Solutions: Define the Context, Be Specific, Keep it Simple


Because most people in the general population have little or no knowledge of the Federal Statistical System and have given little thought to statistical information in general, these attitude questions, like many attitude questions, are likely to produce unstable estimates. The challenge for question design, then, is to craft questions that elicit consistent interpretations among respondents. It is important that respondents consider similar factors when piecing together an answer. For example, if, when asked about information sharing within the government, one group of respondents thinks about nondescript information being shared while another group thinks about personal identifiers, then the question is not measuring the same concept across groups.


We suggest three question design strategies for improving construct validity. First, it’s imperative to define concepts early and often. Respondents do not have shared understandings (or much understanding at all) of the concept of federal statistics; therefore, the questions must adequately define the context. Second, questions should be specific rather than general. Third, the question should be as simple as possible, not only in grammar and wording, but also conceptually. Multiple concepts should not exist in the same question. These strategies are discussed next.


1. Define concepts up front: Because many people do not have a good deal of knowledge about federal statistics, it’s important that questions convey this topic right away and consistently. Question 1 does a good job of setting the context for the rest of the questions to the extent that it uses specific federal statistics. This turned out to be the most important function of question 1, and the question could be strengthened in this regard by eliminating non-federal statistics. Rather than serving as a test of knowledge (which we already know is limited), it should serve as a “primer” that defines what we mean by federal statistics and federal agencies.


2. Be specific: Many respondents in our sample were thinking of specific federal agencies when answering the questions. The most common agency cited was the Census Bureau, but others were mentioned too, such as the Bureau of Labor Statistics. Similarly, respondents tended to think specifically about unemployment statistics or the population count even when the question was asking them to think generally.


The downside to thinking of specific examples is that the question is measuring opinions on those examples only. However, we contend that this is preferable to respondents not thinking about federal agencies or statistics at all – which tended to happen when they were presented with broad topics such as “federal statistics”.


3. Keep it simple: Each question should be as straightforward as possible and avoid complex concepts that require higher-level thought and analysis on the part of respondents. Many respondents are not aware of federal statistics or have not given them much thought. Even those with higher awareness levels do not possess sophisticated thinking on the matter. Therefore, it’s important to keep each question simple and to the point. Questions should not mix concepts or present complicated scenarios. Nor should they contain broad concepts that invite multiple understandings.


These are general guidelines for improving survey questions measuring trust in the FSS. The next section reviews findings for the questions tested, and applies these suggestions to the design of each.3


Question-By-Question Analysis


The bulk of cognitive testing took place on questions relating directly to trust in federal statistics. Because the instrument was lengthy and cognitive interviews are limited to one hour, there was little time to do a thorough evaluation of the questions focusing on administrative records and attitudes toward record linkage. However, some information was obtained and this, combined with extrapolating general findings from other questions, allows us to draw some conclusions about questions on administrative records. This discussion appears at the beginning of the administrative records section below.


  1. [VERSION 1] Please tell me if you have ever heard or read about the following numbers on the radio, TV, newspapers, the internet, or anywhere else.

    1a. Have you ever heard or read about the unemployment rate?

Yes No [GO TO Q1b]

      1ai. If yes, Do you happen to know what organization measures the unemployment rate? [blind coded]

        1. Dept. of Labor

        2. Bureau of Labor Statistics

        3. Census

        4. Federal Government

        5. Media

        6. Other, specify _________________

        7. Don’t know


Findings: For the first part of this question (1a), the intent is potentially unclear. Some respondents weren’t sure if the question was asking whether they knew the figure or simply whether they had a general awareness of the statistic. For example, one respondent answered “no” because “I couldn’t give you a figure.” But he also stated, “I know that it’s high. It’s been in the news.” He said that it’s important to Obama’s economic package and that the U.S. is currently in a recession. During probing he said that he thought the question was asking whether he was familiar with a number or percentage. This could be considered a false negative because he has clearly heard and read about the unemployment rate, yet still answered “no”. A similar pattern of interpretation emerged in version 2 of this question. One respondent said he thought the question was asking what the population count is for the U.S. and another said he thought the question was asking “do you know the actual figure”.


A more common interpretation was for respondents to understand this as a general awareness question and answer “yes”. Levels of awareness ranged from fairly high (demonstrated by knowledge of the actual number) to very low (knowing little more than that such a figure exists).


As a result, this question tends to capture respondents with very different levels of knowledge of and familiarity with the statistic, ranging from people who referred to this statistic for their work (a social worker and a college professor) to those who were simply casual news readers. If the intent of the question is to capture a full range of respondents, the only improvement would be to indicate that knowledge of the actual figure is not required. This would minimize false negative responses.


For the second part of the question (1ai), most people were unsure who measures the unemployment rate. In these cases, one of two strategies was used: respondents either guessed or answered “don’t know”. The split was about half and half (12 and 10, respectively). This has implications for question design only if it matters that respondents who guess do not employ similar strategies for answering. If it does not matter – because analysts would not know from the data who was guessing and who was not – then the question could remain as-is. If it does matter – if we need to know which answers are guesses – then the question should state explicitly whether guessing is acceptable.


The patterns demonstrated in this question were replicated in version 2 as well, for both 1a and 1ai. Additionally, both versions of the question did a reasonably good job defining the topic of the questions as federal statistics. This is a critically important function to the extent that many people have little-to-no awareness of federal statistics. Even those with some level of awareness have generally given very little thought to the topic and, therefore, have no pre-existing opinions. Some version of this question should be retained if for no other reason than to define the context for respondents.


    1b. Have you ever heard or read about the total number of people in the United States or the population count?

Yes No [GO TO Q1c]

      1bi. If yes, Do you happen to know what organization conducts the population count? [blind coded]


        1. Dept. of Commerce

        2. Census Bureau

        3. Federal Government

        4. Media

        5. Other, specify _________________

        6. Don’t know


Findings: See discussion of question 1a. The patterns of interpretation for this question were similar to those for that one. However, it’s worth noting that fewer respondents were unsure in 1bi than in 1ai. Census was by far the most well-known agency. Still, some people were guessing when they answered “Census”. It’s also worth noting that no one was aware that Census does surveys in the traditional sense; all were thinking of the population count. While most people do not have extensive knowledge of Census and its activities, they are aware of the agency and have some sense of its mission. This understanding gave respondents a mental foothold in making sense of the attitude questions that followed.


    1c. Have you ever heard or read about obesity statistics?

Yes No [GO TO Q1d]

      1ci. If yes, Do you happen to know what organization measures obesity? [blind coded]

        1. Dept. of Health and Human Services

        2. Centers for Disease Control

        3. National Institute of Health

        4. National Center for Health Statistics

        5. Hospitals

        6. Federal Government

        7. Media

        8. Other, specify _________________

        9. Don’t know


Findings: See discussion of question 1a. The patterns of interpretation for this question were similar to those for that one. However, this topic was less likely to be perceived as a statistical one. Many people cited the Obama campaign against childhood obesity as the reason they were aware of obesity. Additionally, a couple of respondents were not sure what the word obesity meant.


If Census was the most well-known agency associated with the offered statistics, NCHS was the least well known. In fact, not one respondent answered NCHS in question 1ci. This question elicited many “don’t know” responses and those who did answer were much more likely to be guessing than in the previous two questions (unemployment and population count).


    1d. Have you ever heard or read about the Nielsen TV ratings?

Yes No [GO TO Q2]


      1di. If yes, Do you happen to know who calculates the Nielsen TV Ratings? [blind coded]

        1. Nielsen

        2. Federal Government

        3. Media

        4. Other, specify _________________

        5. Don’t know


Findings: See discussion of question 1a. The patterns of interpretation for this question were similar to those for that one (the question was interpreted as either a general awareness question or as asking whether the respondent had specific knowledge of the TV ratings). Several respondents answered “don’t know” but many guessed. Those who guessed often got it correct, simply because they went with the obvious answer of Nielsen.


  1. [VERSION 2] I will read you some numbers that you may have heard of or read about on the radio, TV, newspapers, the Internet or somewhere else. Please tell me if you have heard of them:

    a. The unemployment rate? Yes No

    b. The total number of people in the United States, or the population count? Yes No

    c. Obesity statistics? Yes No

    d. The Consumer Price Index? Yes No

    e. The Gross Domestic Product or GDP? Yes No

    f. The Consumer Confidence Index? Yes No

    g. The Dow Jones Industrial Average? Yes No

    h. The Nielsen TV Ratings? Yes No


Findings: See discussion of question 1a, version 1. The patterns of interpretation for this question were similar to those for that one. Topics d, e, f, and g captured very general and very vague notions of these topics. Few respondents could define them, and those who did came up with very general definitions. For example, when asked what the CPI was (after answering “yes” to the question), one respondent said he didn’t know for sure but it might be “about people who set prices, like on gas or groceries.” Another respondent said, “This is about stores, prices and stuff, right?” Someone else said it’s a measure of how much money people are spending. Other people answered yes when they couldn’t describe at all what the statistic was about. When asked what the GDP was, one respondent answered “yes” simply because she had heard the term before – she had no idea what it actually was. Another respondent demonstrated the same pattern by saying, “It’s...uh…the um…I kinda lost my memory there for a minute. I don’t remember now, but I know I’ve heard or read about it.” Clearly, items d through g are not topics that the general population has much awareness of, and they do little to help set the context for this series of questions as being about federal statistics. If anything, they – along with the Nielsen ratings – serve to muddle the context and could be dropped from the question.


      1ai. If yes, You mentioned that you have heard of the unemployment rate. Do you happen to know who measures the unemployment rate? (blind coded)

        1. Dept. of Labor

        2. Bureau of Labor Statistics

        3. Census

        4. Federal Government

        5. Media

        6. Other, specify _________________

        7. Don’t know


Findings: See discussion of question 1ai. The patterns of interpretation for this question were similar to those for that one. People had little knowledge of who produces these numbers and many who answered were simply guessing.


      1bi. If yes, You mentioned that you have heard of the population count. Do you happen to know who conducts the population count? (blind coded)

        1. Dept. of Commerce

        2. Census Bureau

        3. Federal Government

        4. Media

        5. Other, specify _________________

        6. Don’t know [GO TO Q2]


Findings: See discussion of question 1ai. The patterns of interpretation for this question were similar to those for that one. People had little knowledge of who produces these numbers and many who answered were simply guessing. Of all the topics, however, people were most likely to know that Census conducts the population count.


      1ci. If yes, You mentioned that you had heard of obesity statistics. Do you happen to know who measures obesity statistics? (blind coded)

        1. Dept. of Health and Human Services

        2. Centers for Disease Control

        3. National Institute of Health

        4. National Center for Health Statistics

        5. Hospitals

        6. Federal Government

        7. Media

        8. Other, specify _________________

        9. Don’t know


Findings: See discussion of question 1ai. The patterns of interpretation for this question were similar to those for that one. People had little knowledge of who produces these numbers and many who answered were simply guessing. In fact, this was the least known topic. No one answered NCHS in this question. (One person did mention CDC, but then changed her mind, reasoning that obesity is not a disease, so the CDC probably doesn’t cover that.)


      1di. If yes, You mentioned that you had heard of the Nielsen TV Ratings. Do you happen to know who calculates the Nielsen TV ratings?

        1. Nielsen Company

        2. Federal Government

        3. Media

        4. Other, specify _________________

        5. Don’t know


Findings: See discussion of question 1ai. The patterns of interpretation for this question were similar to that one. People had little knowledge of who produces these numbers and many who answered were simply guessing.


  2. When important decisions need to be made based on statistics, which of the following sources is more believable to you: [mark all that apply]



  a. A University

  b. An agency of the Federal government

  c. A private company

  d. A political party

  e. The media

  f. Don’t care

  g. Don’t know


Findings: This question did not pose any special difficulty for people and they generally seemed to understand what it was asking. Only two people missed the intent and were not thinking about statistics when answering. (One person talked about political parties and another talked about politicians who lie.)


Interpretations of “believable” fell into one of three categories: bias, competency, and trustworthiness. Twenty respondents were thinking that an organization is believable if it is free of any bias. Respondents talked about agencies being neutral, objective, and impartial. An organization is NOT believable if it leans to different sides, has a “bent” or a “slant” one way or the other, distorts things, has an agenda, puts a spin on information, manipulates the numbers, or in general “steers you in the wrong direction”.


Eleven respondents interpreted believable as competent. Respondents said an agency is believable if it has the resources, know-how, and expertise to do its job.


Seven respondents defined believable in this context as being trustworthy. They talked about agencies manipulating numbers, withholding information or, conversely, reporting the facts and being transparent about their practices.


Though these interpretations take slightly different angles, they are reasonably close in meaning. This suggests that the underlying construct is tightly defined and that no modifications to the question are necessary.


  3. Numbers like the unemployment rate, the population count and obesity statistics are statistics produced by agencies of the federal government. We call them “federal statistics.” Have you ever used or talked about federal statistics like the unemployment rate, the population count or obesity statistics for study, work, or any other purpose?

Yes No [GO TO Q4] Don’t know [GO TO Q4]


    3a. (If yes): Have you used federal statistics frequently, occasionally, or only once or twice?


Findings: The intent of this question could be clarified. The words “used or talked about” in the third sentence prompted different interpretations among respondents. Some had a strict interpretation of the question because they focused on the word “used”. For example, some thought this question was asking whether they had used statistics for school or work. Others had a more liberal interpretation and said yes if they had simply ever talked about the topic. In order to capture respondents more consistently, the intent should be clarified. If the goal is to capture those who have actually used statistics, then drop the phrase “or talked about”, which confounds the measurement.


Another pattern in this question was respondents’ tendency to think only of the examples given in the question (unemployment, population count, obesity). This has the potential for respondents to give false negative replies. For example, one respondent had used statistics from the Department of Transportation, but did not include that in his answer. He answered “no” to this question. However, although the examples may limit the types of statistics respondents think about, this is preferable to the alternative. If examples are deleted, this leaves the question open to broader interpretation. We have seen many instances where respondents cannot make sense of broadly worded questions on federal statistics. The tendency of most people is to not think of statistics at all when answering the questions because they have little or no familiarity with the topic. On the other hand, when respondents are given specific, concrete examples to think about, we can feel more confident about the consistency of what we are measuring.


In the follow-up question, 3a, a couple of respondents had trouble providing an answer, or did not provide one, because they needed a time frame. These were instances where they had used federal statistics at some point in their lives but no longer did, or had used them to different degrees at different times and still did. This prompted questions such as “In what period?” and comments like “It depends on what period of my life.” For the latter respondent, for example, the answer would have been “frequently” when she was in school but only “occasionally” in her current job.



  4. To what extent do you agree or disagree with the following statements about federal statistics?

    4a. Federal statistics on unemployment, population, and health are important for understanding our society. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: It’s unclear what this question is actually measuring. All but two respondents agreed with it without giving an extensive rationale for why. Many people gave an explanation that was almost tautological. For example, one person said he agreed because it’s important to know these things to help us understand what problems there are. Others agreed, but gave “off the wall” explanations such as “People need to work. It keeps the country going in a positive direction. It’s important for people to work and keep the economy going.” One person who disagreed did so because she doesn’t care about statistics, and the other gave an explanation that was not even related to statistics, saying that people who make decisions are “out of touch with citizens.” Without rethinking the intended construct, it might be worthwhile simply to drop this question.


    4b. Policy makers need federal statistics to make good decisions about things like federal funding. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This is another question that all but two respondents agreed with. It’s a difficult question to disagree with, especially when respondents do not think in sophisticated ways about statistics. As a result, when respondents agreed, many gave simplistic reasons why. One person said, “You have to have facts before you can make a decision. You can’t go by just a feeling.” Others expressed this same sentiment, saying that statistics give people “concrete facts” or “put things in black and white”. On the other hand, other respondents only somewhat agreed because statistics tell only part of the story. One person said they “give a snapshot of the situation.” She said statistics shouldn’t be the only part, but they are a vital part. The two people who disagreed did so saying that politicians can put a spin on the numbers in order to support the argument they are putting forth. Almost everyone had at least a rudimentary understanding of what the question was asking and was able to provide an answer. Only a couple of people were confused enough not to supply an explanation for their answer.


    4c. State and local government officials need federal statistics to make good decisions about things like where to locate hospitals and schools. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: Responses were slightly more divided on this question – five respondents disagreed while the rest agreed. Many respondents were generally thinking about the distinction in this question between federal and local (or state) level information. Four agreed that it’s important for local level officials to have information at the national level (thinking primarily of Census population counts). As one person said, “Local government would already know where these things are in their jurisdiction. On the other hand, it is nice to have a national list if they need to make an argument for more.” At the same time, three respondents disagreed, reasoning that local officials probably already know the details of their situation. As one respondent said, “I’m assuming there would be state statistics on the same issue.” Federal level statistics would not tell them anything new. However, at least three other respondents did not see or make the distinction between local and federal level information. They were answering only on the basis of whether it is important to have information for decision making – which makes this question very similar to the first two.


    4d. Statistics provided by the federal agencies are generally accurate. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: Like question 2, this one was generally well understood, and interpretations were very close in meaning. Respondents understood accurate in many ways, the most common of which was that it meant a statistic is correct (eight respondents gave this meaning). The next most common interpretation of accurate was “representativeness” (six respondents). This came up in cases where respondents were thinking specifically of the Census and whether they thought the population count was representative of the country. Other words used to describe accurate were truth, precise, scientific, trustworthy, unbiased, and reliable.


Respondents were split between agreeing and disagreeing with the statement. People who agreed generally said they trusted the government (“I believe in the government. The government is not intentionally trying to deceive us.”) or had no reason NOT to trust the government (“They don’t really have any reason to fudge the numbers.”). Those who disagreed were, again, often thinking of the Census and giving examples of how some people don’t get counted for a variety of reasons (e.g., mistakes by Census workers, people lying on the forms). But overall, this question was simple and to the point, so interpretations were fairly homogeneous.


    4e. Federal statistics give a good picture of life in the United States. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This question was misinterpreted by many respondents. Twelve people said they thought the question was asking whether statistics show that life is good in the U.S. This clearly is not the intent of the question. The construct should be communicated more directly.


    4f. Statistics provided by federal agencies are often biased. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: Like questions 2 and 4d, this question was understood in common ways among respondents. Descriptions of bias in this context included not correct, leaning in one direction, favoring one group, being self-interested, having an agenda, showing a certain conclusion, not being truthful in the numbers, and being prejudiced.


Only three cases were troublesome. Two respondents did not know what bias meant and one person was not thinking about statistics when she first answered (she was talking about how the rich and the poor need to work together). Otherwise, the majority of respondents understood the question in similar ways.


    4g. Statistics produced by federal agencies, like the Census Bureau and the Bureau of Labor Statistics, do not favor one political party or another. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This question was not exactly simple or straightforward for respondents. In fact, a couple missed the intent completely (one person thought it was asking whether statistics are about one political party or another), a couple had vocabulary issues (they were not sure what “political party” meant), and a couple answered “it depends” because they could see the issue two different ways. As one person said, “Statistics can’t favor anybody. Facts are facts, they can’t be politically influenced.” But then she went on to say that there are ways to group statistics together so that the results look different than they otherwise would – you can present them in ways that support your argument. Similarly, another person said, “They can report it differently, but the numbers are the numbers.” This demonstrates that the question can be about two different things: whether statistics themselves are accurate, or whether statistics are accurately portrayed. This suggests that the construct should probably be clarified.


    4h. There is political interference in the work of federal statistical agencies. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This question does not communicate a clear concept and, as a result, was met with varying degrees of confusion. This was a question where people lost sight of the fact that it was asking about federal statistics. Instead, several respondents understood it as a question about politics in general. One person agreed, saying, “I would think so. Politics is very dishonest and is everywhere, especially when there is money to be divided up.” In a similar vein, another person said, “Yeah, it’s all about political…everybody lies, it makes no difference. People are messed up before you even come into office.” Another respondent was asked why she agreed. She explained, “I guess like with passing bills, you have one party that may agree strongly so they want to get that bill pushed. But then you have another party that wants something different.” Another respondent had a similar perspective and said, “That’s where lobbyists come in.” They affect how senators will vote on issues. Another person couldn’t answer because he couldn’t distinguish federal agencies from other parts of government. He said, “Who would be doing the interfering? Because I thought the government would be one big government. So who would be playing interference?” In sum, at least eight respondents missed the intent of this question to one degree or another.


The rest of the respondents generally seemed to understand the question as intended. For example, one respondent thought specifically of BLS. To him the question was asking whether politicians tell BLS to publish or not publish their findings. Another person thought about Census and talked about how politicians want to “shift the numbers” so that they can redistrict in a way that benefits them. Another person said the question means that each party wants to “put its own stamp on it” and someone else said it “means that something is getting in the way of a statistical reading”. These interpretations seem more in line with question intent. Nevertheless, there was enough variation in interpretations to warrant clarifying the question.


    4i. People can trust federal statistical agencies to keep information about them confidential. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: The intent of this question seemed well understood by respondents, as demonstrated by interpretations that were all very similar. Only one respondent missed the intent by not thinking about federal agencies. He disagreed, saying, “Ain’t nothing confidential, ‘cause the bible said there ain’t no secrets.” It seems confidentiality prompted him to think about “keeping secrets” – which isn’t an unreasonable definition of confidentiality – but he clearly lost sight of the fact that the question was asking him to think about confidentiality in the context of statistical information.


All other respondents understood the question as intended. They talked about information being “public” vs. “private”, whether information is sold, information being “leaked” or spread intentionally (because of a “few bad apples” or other agencies seeking information about people) or unintentionally (hackers or mistakes just happen), and legal obligations to keep information private. No improvements to this question are necessary.


    4j. Federal statistical agencies share too much information with each other. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: The intent of this question is unclear. The focal point became information, but the type of information people thought about was inconsistent among respondents and (often) with the intent of the question. Additionally, respondents sometimes lost sight of the fact that the question was about statistics. For example, some people disagreed, saying that agencies don’t share enough information with each other. However, they were thinking of examples unrelated to statistics. One person thought of how the federal government botched the response to Hurricane Katrina because agencies were not communicating effectively and sharing information with each other. Another person thought of law enforcement and apprehending criminals. He thought of the FBI and CIA, saying, “I don’t think they share enough information.” Similarly, another person cited 9/11, suggesting that it could have been prevented if the FBI, CIA and “those people” had shared more information with each other.


Some people were thinking of confidential information about people, and some people were not. This led to inconsistent interpretations. Several other people answered “don’t know” saying they have no idea what or how much is being shared among government agencies. The question was too vague and should be clarified, particularly with regard to the concept of information.


    4k. All federal statistical agencies can get information collected by any one of them. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: The intent of this question is unclear and not well conveyed. This caused respondents to interpret it in different ways. Most respondents understood this question to be about the difference between whether agencies can or should share data. Some interpreted it to be asking “can the government share data” (as in, do they have the technical capability to share) while others thought it was asking “should they” or “are they supposed to” share data with each other (in regard to rules of confidentiality and/or public use). Those are two different questions, and the construct should be clarified.


    4l. Information collected to create federal statistics is sometimes used by the police and the FBI to keep track of people who break the law. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: The intent of this question was not well communicated to respondents. The result was that most were not thinking about federal statistics when answering the question. Instead, many were thinking of “personal files” found in the likes of police records, terrorist lists, sexual predator lists, welfare files, and airline ticketing lists (none of which are surveys). As one person remarked, “I think it’s general knowledge that the FBI and CIA have access to terrorists information that the rest of us are not privy to.” Someone else thought of the police tracking individual people through their cell phones or satellite images. This question should be modified to clarify the construct.


    4m. Federal statistical agencies give personal information about people to marketing firms. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: Generally, this question did a good job conveying intent and did not confuse most people. Only two people were not thinking of statistical agencies when answering the question (one thought of what credit card companies do, the other thought of risks of using the Internet). All other respondents understood the question more consistently and more in line with question intent. For example, a couple people disagreed because to share information with marketing firms “would put them in a bad spot, so it’s not to their advantage.” Another person expressed the same idea saying, “There’s no incentive for them to do it. I can’t see how they would benefit.” Another says, “I just assume that’s not something they’re in the business of doing.”


A couple of people were a little unsure about the difference between what they would want to be true and what actually happens. As one person put it, “I’m not sure about marketing firms. I would HOPE they don’t do that. That’s an invasion of privacy, isn’t it?” But because he wasn’t sure about actual practice, he answered “don’t know” to the question. Another person who wasn’t sure decided to answer “disagree.” She said, “I don’t think they would do that” because marketing firms can’t be trusted. With the same logic, another person also disagreed, saying “I don’t want to believe that’s true” but he didn’t know for sure.


    4n. If I needed to, I could easily find out exactly how federal statistics are produced. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: Respondents generally understood this question, but it did prompt some confusion. For example, at least four people understood it to be asking about the skills of the inquisitor (or of the respondent in particular). This interpretation may be created by the first-person language in the question, which is unlike any of the other questions. One respondent said it depends on “what you know”, another said they would agree for “someone who is computer savvy”, and another said that statistics are too complicated to be understood by “regular people.” A fourth person put it best when he said, “I wouldn’t know how to do that. I wouldn’t have a clue.” For this group of respondents, the question is not about the transparency of federal statistical agencies, so the construct needs to be clearer. Taking the question out of the first person may help.


    4o. Federal statistical agencies are honest and professional. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This question had a smattering of issues. For three respondents the question was double-barreled – honest and professional were two different things. Two people could not answer. One said, “I don’t know why you guys put those two together – I can’t answer that. I would have to separate those.” (For professional she would say agree, but disagree for honest.) The other person expressed the same idea: “They’re not honest; they may be professional.” A third person answered on the basis of professional but not honest. He said, “I’d say they are professional. I’ll agree with that. But the honest part…it’s two different parts.” Other respondents tended to focus on one of those words, mostly the word “honest”. When they answered they were thinking of whether they had ever heard media reports of an agency not being honest. Respondents cited examples such as “fudging the numbers”, outright lying, or “mishandling information”.


Four respondents said they didn’t know. However, one provided an answer anyway. He chose “agree” but said, “I guess I wouldn’t know – I don’t think they have reviews. I would have to read up and find something out to find out that they weren’t being honest or professional.”


The question seems unclear enough to warrant a redesign that simplifies the construct.


  1. Private companies could produce more accurate statistics than Federal statistical agencies. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree?


Findings: This question did not seem to pose any special difficulties for respondents. The question was generally interpreted in one of three ways. Respondents were deciding who would do a better job based on 1) who would be less biased, 2) who has more resources (more money; more expertise), or 3) who would be more efficient or faster.


Bias or special interest was the most common interpretation. For example, one person said private companies “will produce what they’ve been paid to produce, whoever their client is.” Others expressed the same idea. Of private companies one respondent said, “that information would not be as trustworthy. There is motivation for private gain because of the drive for profit among companies.” Similarly, another person said private companies “just do it for the money – they don’t care if the numbers are right or wrong.” Conversely, another respondent said the government would be biased because “They would want to bias them [the numbers] to look good. They don’t care about the people.”


Having resources was another common interpretation. Several people made judgments based on access to financial resources (some thought the government had endless supplies of money while others thought business did) and others thought about who had more expertise.


Finally, a few people thought businesses would do a better job because they are smaller and/or deal with less bureaucracy. All of these interpretations seemed in line with the intent of the question.



Questions on Administrative Records:


Questions related to attitudes on administrative record linkage were included in the instrument and are listed below. Unfortunately, the time limit of the hour-long cognitive interview precluded gathering extensive, detailed data on these questions.4 A question-by-question analysis is not presented here for that reason; however, some general conclusions are discussed, based on the data we did gather combined with lessons from the previous questions.


Evidence from question 5a, versions 1 and 2, suggests that questions on administrative records are vulnerable to context effects and shifting interpretations. Prior to taking this survey, respondents had given little thought to the topic. As a result, their opinions were swayed by the question wording and the topics presented therein. For example, we observed that respondents were more likely to agree than disagree with each version of 5a, even though each one asked them to consider something different. Version 1 asks if respondents support record linkage over survey administration in order to save time and money. A majority of respondents said they favor record linkage. Conversely, version 2 asks if they support the government obtaining their information directly in a survey instead of record linkage, and a majority favored that as well. It is also worth mentioning that when respondents did NOT favor record linkage (asked in version 1), most cited privacy concerns as the reason why. This is notable, as it was an issue arising from the respondents themselves and not suggested to them by the question. Privacy issues are related to the next point as well.


Questions on record linkage had some elements that were open to interpretation, which caused a degree of confusion. One example of wording that created confusion was the term “information.” This was mentioned by several respondents when they discussed the idea of federal agencies obtaining information about them through record linkage versus through a survey. Respondents expressed concern over personal information being shared, which was predominantly defined as name, address, social security number, and the like. A couple of respondents mentioned information such as medical history or income as personal. The common theme here is privacy concerns. Vague terms like “collecting information directly” (what does directly mean?) also added to the confusion. For example, a couple of respondents were confused by question 5a (version 2). Because the question was too long for them to process in its entirety, they essentially focused on the last sentence: “How do you feel about federal agencies collecting information directly?” As a result, they did not understand what “directly” meant. This sentence, when considered outside the context of the rest of the question, is not well defined. One person asked, “Collected directly compared to what?” Another person said, “As opposed to indirectly?”, not understanding what it would mean to collect data indirectly.


We also have evidence suggesting that the length of the questions on administrative records was another source of confusion. Some context is certainly needed, but when a question contained too much information, respondents lost track of what it was asking or focused on only one part of the question rather than the question in its entirety (thereby missing the intent and altering the desired construct).


Because administrative record linkage is not a topic to which most people have given much thought, questions should communicate intent as simply as possible and be specific about the information respondents are asked to consider. Pending further testing, we would suggest using questions 10, 11, and 12 below. Question 10, for example, specifies the type of information being shared so that respondents do not have to speculate. We would also add “ask people for it directly in a survey or…” in order to specify what “directly” actually means. Similarly, question 11 addresses privacy concerns while also being very specific about the type of information being collected.



  5. [VERSION 1] Sometimes federal statistical agencies need to get information such as employment history or retirement benefits. They can do it by getting the information from other government agencies or by asking people for it directly in a survey. How do you yourself feel about federal agencies trying to save government money and save people’s time by sharing information with each other? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


    a. Some people think people’s privacy would be better protected if each agency collected the information directly through surveys. How do you feel about federal agencies collecting information directly? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


  6. [VERSION 1] When you fill out a form for a government agency about your own employment, do you think they keep a record of that information?

Yes No [GO TO Q7]

[If Yes:] i. Which of the following best describes what you think happens to that record:

        • The government agency does not share your information and uses it only for the purpose it was collected for.

        • The government agency shares it with any other government agency that may need it.

        • The government agency only shares your information with your consent.

      1. What do you think they should do with that record:

        • Keep it only for themselves?

        • Share it with other government agencies as needed?

        • Ask you first, then share it if you say it is ok?


  7. [VERSION 1] How about health or medical records – when you fill out a form for a government agency about your own health, do you think they keep a record of that information?

Yes No [GO TO Q8]

[If Yes:] ii. Which of the following best describes what you think happens to that record:

        • The government agency does not share your information and uses it only for the purpose it was collected for.

        • The government agency shares it with any other government agency that may need it.

        • The government agency only shares your information with your consent.

      1. What do you think they should do with that record:

        • Keep it only for themselves?

        • Share it with other government agencies as needed?

        • Ask you first, then share it if you say it is ok?


5. [VERSION 2] Sometimes federal statistical agencies need to get information such as employment history or retirement benefits. They can do it by getting the information from other government agencies or by asking people for it directly in a survey. Some people think people’s privacy would be better protected if each agency collected the information directly through surveys. How do you feel about federal agencies collecting information directly? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


    a. How do you yourself feel about federal agencies trying to save government money and save people’s time by sharing information with each other? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


  6. [VERSION 2] When you fill out a form for a government agency about your own income, do you think they keep a record of that information?

Yes No [GO TO Q7]

[If Yes:] i. Which of the following best describes what you think happens to that record:

        • The government agency does not share your information and uses it only for the purpose it was collected for.

        • The government agency shares it with any other government agency that may need it.

        • The government agency only shares your information with your consent.

      1. What do you think they should do with that record:

        • Keep it only for themselves?

        • Share it with other government agencies as needed?

        • Ask you first, then share it if you say it is ok?


  7. [VERSION 2] How about government programs, like food stamps or temporary aid to needy families – when you fill out a form to apply for a government program, do you think they keep a record of that information?

Yes No [GO TO Q9]

[If Yes:] ii. Which of the following best describes what you think happens to that record:

        • The government agency does not share your information and uses it only for the purpose it was collected for.

        • The government agency shares it with any other government agency that may need it.

        • The government agency only shares your information with your consent.

      1. What do you think they should do with that record:

        • Keep it only for themselves?

        • Share it with other government agencies as needed?

        • Ask you first, then share it if you say it is ok?


8. Earlier we talked about the unemployment rate that the Bureau of Labor Statistics produces. Currently the unemployment rate is measured by asking people about their work experience directly in a survey.

    1. Do you think the unemployment rate would be more accurate or less accurate if it were calculated from information already available to other government agencies, like your state unemployment office, or would it not make a difference?

More accurate

Less accurate

Would not make a difference


9. Earlier, we also talked about the census that is conducted every ten years by the U.S. Census Bureau. Currently, the census asks for the number of people who live in each household, their ages, genders, race, ethnicity, and relationships among household members.

    1. Imagine that the census was conducted, in part, with information already available to other government agencies, like the Social Security Administration. Do you think it would be more accurate, less accurate, or would it not make a difference?

    2. What if the census were conducted, in part, with commercial data from a private company? Do you think it would be more accurate or less accurate than asking people directly, or would it not make a difference?


10. Federal statistical agencies often need detailed, specific information to create statistics about people. They can either ask people for it directly or get it from another available source. Which do you think would be more accurate:

    1. To get your earnings history information from you or from the Social Security Administration?

    2. To get your current income information from you or from the IRS?

    3. To get your employment information from you or from a state agency, like the employment or workforce office?

    4. To get your health services experiences from you or from Medicare?

    5. To get your health services experiences from you or from your doctor?

    6. To get your home value from you or from a private company?

    7. To get information on your purchases from you or from a credit card company you use?


11. If you knew your specific name and information would never be singled out and would only be used for statistics, would you be in favor of, or against, a federal statistical agency:

    1. Getting your earnings history information from the Social Security Administration? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    2. Getting your current income information from the IRS? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    3. Getting employment information about you from a state agency, like the employment or workforce office?

Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    4. Getting your health services experiences from Medicare?

Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    5. Getting your health services experiences from your doctor?

Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    6. Getting your home value from a private company?

Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?

    7. Getting information on your purchases from a credit card company you use?

Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


12. In order to do a survey, Federal statistical agencies need contact information, like a phone number or an address, to locate people.

    1. How do you feel about a statistical agency getting your contact information from another government agency, like the post office? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


    2. How do you feel about a statistical agency getting your contact information from a state office, like a state Department of Motor Vehicles? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


    3. How do you feel about a statistical agency getting your contact information from a private company, like a commercial mailing list company? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against?


13. Now I’m going to read you a list of organizations in American society. Please tell me how much confidence you, yourself, have in each one – a great deal, quite a lot, some, or very little?

  1. The mass media, such as newspapers, radio, and television.

  2. Bloggers on the Internet.

  3. The Federal government.

  4. Federal statistics, such as the unemployment rate, the population count, or obesity statistics.

  5. Political polls.

  6. Your state government.

  7. Banks.

  8. Large corporations.

  9. The U.S. Supreme Court.


14. Now just a few questions about some other topics:

  1. In general, how worried are you about an invasion of your personal privacy: very worried, somewhat worried, not very worried, or not worried at all?

  2. Have you personally ever been the victim of what you felt was an invasion of privacy? (yes/no)

  3. People like me don’t have any say about what the government does. (Do you agree strongly, agree somewhat, disagree somewhat, or disagree strongly?)

  4. The government knows more about me than it needs to. (Do you agree strongly, agree somewhat, disagree somewhat, or disagree strongly?)


I have just a couple of alternative questions I’d like to ask you:

1: Which one of the following statements do you believe is true?
(1) The government has a single central database of the name, address and date of birth of all US residents.
(2) There is no single government central database of the name, address and date of birth of all US residents, but there are separate databases with this information maintained by individual departments such as the Census, the Social Security Administration, and the IRS.
(3) Neither the government as a whole nor individual departments keep records of the name, address and date of birth of all US residents.
(4) Don’t know


2: And which, if any, of the following statements would you prefer to be true?
(1) The government has a single central database of the name, address and date of birth of all US residents.
(2) There is no single government central database of the name, address and date of birth of all US residents, but there are separate databases with this information maintained by individual departments such as the Census and the IRS.
(3) Neither the government as a whole nor individual departments keep records of the name, address and date of birth of all US residents.
(4) Don’t know


3: And which, if any, of the following statements would you prefer to be true?
(1) The government has a single central database of the name, address and date of birth of all US residents which is used only to help produce statistics on society and the economy.
(2) The government has a single central database of the name, address and date of birth of all US residents which is used only for administrative purposes such as paying benefits or for tax returns.
(3) The government does not have a single central database of the name, address and date of birth of all US residents.
(4) Don't know



1 During screening, respondents reported whether they had a high level of trust in the Federal government, a low level of trust, or a level of trust somewhere in between.

2 See the question-by-question analysis for details on specific questions.

3 Because of the one-hour time limit of the cognitive interview, not all questions could be adequately tested.

4 It is our hope that a future round of cognitive testing will be planned in order to more thoroughly evaluate questions on attitudes about administrative record linkage.


