How to observe without being totally awkward

Dog looks worried or confused in office

Happy fall to all you data nerds out there! We appreciate you being here with us. Last time we discussed how to get permission from your participants when you want to do an observation. You might be wondering how you can actually do the observation without it being completely awkward and perhaps even cringey. Today we are going to discuss just that!

First let’s review our goal for this project: We want to evaluate whether caregivers are learning skills during storytime and using those skills with their children outside of storytime.

Based on this goal, we decided to do observations of caregivers and children in the library while they are not participating in storytime. Ideally from a research perspective, we would observe them at home, but that would not be practical or comfortable for anyone involved. Even in the library context, we are going to need to be careful to make sure that our participants feel as comfortable as possible. 

Being a participant observer

There are a variety of ways you can behave as an observer. For most library situations, I recommend a version of what researchers call “participant observation.” You’re observing while still interacting with the people you’re observing to a limited extent. This setup feels more comfortable while still giving you, as the observer, some distance from what you are observing. What would this look like for our example project? When the family you are observing tells the children’s desk that they are in the library, you would first introduce yourself to the family. Then during the observation you would talk with them only if it’s really important or necessary.

When is it really necessary to jump out of observer mode? A classic example I lived through with a team of librarian-observers was a child in the group we were observing getting a serious nosebleed. At the time there was only one library staff member who was teaching the group, but three of us were observing. One of us stopped observing and took the child to get medical attention. The instructor who actually knew the content that needed to be covered continued running the group. My best advice for when to break out of observing “mode” is to try to avoid it, but trust yourself when it feels like an appropriate time to spring into action. You are probably right!

Making people feel comfortable

When observing, we’re trying to balance getting quality data with making the participants feel comfortable. Every population, and every individual, has different needs to feel comfortable. It can help to start by thinking back to times you were in potentially awkward situations and someone made you feel more comfortable. What did they do? Remember in this case we want to go a step beyond that and treat people how they want to be treated, not just how we would want to be treated.

In a situation like this with caregivers, we should definitely reassure them that the library staff is not there to judge them. Parents feel judged a lot! It’s helpful to emphasize that you are evaluating storytime and not them. Nonetheless, don’t tell participants “We’re looking to see what early literacy skills you use outside of storytime.” Then they will inevitably show you every early literacy skill they have ever heard of! Instead, you might explain the project like this: “We want to make storytimes better. To do that, we need to understand how caregivers and children are interacting outside of storytime. We are watching so we can learn and make storytime as helpful and fun as possible. We are not evaluating you as a parent. Do you have any questions? Is there anything else I can do that would make you feel more comfortable?”

Working with children 

Children are going to want to interact with you while you’re observing them, especially if they know you. You should explain to them what you’re doing and why you are acting differently. For example: “Today, my job is to be very quiet and pay attention really carefully to the fun time you are having with your caregiver. You can look at me and I’m going to smile at you, but I’m not going to talk with you like I usually would. It doesn’t mean I’m mad at you. I’m just really focused on watching and listening today. I’ll tell you when we’re done and we can talk more then. Do you have any questions?”

Conclusion

Having these kinds of conversations with participants before you start will help the observation go well. We once observed teens for a project, and teens are perhaps the most self-conscious creatures on the face of the earth. The staff observers introduced themselves to the teens at the beginning of their time together even though we already had informed consent. Although I can’t know for sure, I think we were able to collect valuable data on that project partly because the observers introduced themselves right before the observations and were very friendly and open. Remember, you set the tone for the whole interaction at the beginning.

Up next 

Next time we’ll talk about how to focus your observations and collect data that will be valuable to you. We look forward to seeing you then!

LRS’s Between a Graph and a Hard Place blog series provides instruction on how to evaluate in a library context. Each post covers an aspect of evaluating. To receive posts via email, please complete this form.

How to observe: Ask first!

A kitten peeks out from between books

Welcome back! We left off talking about why you would use observations to collect data. Observation can be a great data collection tool when you want to see how different people interact with each other, a space, or a passive program. Observation is also helpful when it is difficult for someone to answer a question accurately, like when you ask them to remember something they did or—particularly with children—if you ask them to give critical or written feedback, both of which can be developmentally inappropriate.

To review, our big research question is: “Does attending storytime help caregivers use new literacy skills at home?” Our small questions within that big question are:

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
  • Are caregivers learning new literacy skills during storytime? 
  • Do caregivers use new literacy skills from storytime at home?

After thinking through these in our last post, we decided that it would be both helpful and realistic to observe caregivers and their children in the library. Observing in the library won’t tell us exactly what caregivers are doing at home, since following them around their homes isn’t an option, but it still helps us see if they’re learning skills during storytime that they’re using outside of storytime.

Ok, so how would we actually do this? Here are the key elements of any observation:

  • Get permission from your participants
  • Decide how you will approach the observation
  • Focus in on some specific things you’re looking for
  • Take notes or videos of what you’re observing
  • Code the notes or video to identify patterns
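To give a loose sense of the last two steps, here is a minimal sketch of tallying coded observation notes to surface patterns. The notes and the code labels (e.g., `print_awareness`) are entirely hypothetical, invented for illustration; your own codebook would come out of your evaluation questions.

```python
from collections import Counter

# Hypothetical coded observation notes: each entry is a note excerpt
# tagged with one or more early literacy codes the observer assigned.
coded_notes = [
    {"note": "Caregiver points to words while reading aloud",
     "codes": ["print_awareness"]},
    {"note": "Caregiver asks child to predict what happens next",
     "codes": ["dialogic_reading"]},
    {"note": "Caregiver sings rhyming song with child",
     "codes": ["phonological_awareness"]},
    {"note": "Caregiver points to letters on a sign",
     "codes": ["print_awareness"]},
]

# Tally how often each code appears across all notes.
tally = Counter(code for entry in coded_notes for code in entry["codes"])

# Print codes from most to least frequent.
for code, count in tally.most_common():
    print(f"{code}: {count}")
```

Even a simple frequency count like this can show which skills families are actually using in the library, which is the pattern we are looking for.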

For the rest of this blog post, we will focus on the first point. It’s important to get started on the right foot. 

Why is getting permission important? 

We’ve discussed informed consent before in this blog series, and it is still important with observations. If you came to your library and a staff member followed you around the whole time without an explanation, it would feel weird and even invasive. Now, you may be thinking, “Wait a minute, if people know we’re watching, they’re going to act differently!” You’re exactly right. But the reality is, people are going to know you’re observing them no matter what. Even if you think you could soundlessly skulk around the stacks, most people are going to sense something weird is going on. 

Remember–we don’t want to be creepy when we’re doing research! Watching people without asking their permission is 1) pretty creepy, 2) bad for building and maintaining trust with our users, and 3) a violation of privacy, one of our core library values. Ethically, we have to ask people if it’s ok to observe them while they’re in the library, even if it does change their behavior.

How do you ask permission to observe?

In this example, you could explain at storytime that you’re doing a research project to improve storytime. You are looking for some volunteers who spend time at the library outside of storytime to let you observe them interact with their child. If they are interested, you have a form they can sign, and the next time they’re hanging out at the library, you’d like them to come say “hi” at the desk in the children’s area. That way your staff know when they are in the library and the staff member observing them can introduce themselves too, which makes the whole thing less awkward.

As part of the informed consent process, remember that you need to address both that the participants can stop participating at any time, for any reason, and how you will protect their privacy. These elements are particularly important during an observation of caregivers and children. What identifying information do you absolutely need? The more anonymous the data can be, the better. Make sure you also establish a clear and easy way for the caregiver to end the observation while it is happening. 

In the next blog post in this series, we will explore the different approaches to observation. After you’ve gotten permission, how do you actually sit, watch, and collect meaningful data? Join us next time to find out!

Nothing About Us, Without Us: Equitable evaluation through community engagement


This is a “guest post” from the Colorado Virtual Library Equity, Diversity, and Inclusion blog.

When you wake up, one of the first things you might do is open your weather app to see what the temperature is and if it’s supposed to rain that day. You then use that information—or data—to make important decisions, like what to wear and whether you should bring an umbrella when you go out. The fact is, we are all collecting data every day—and we use that data to inform what we do next.

It’s no different in libraries. We collect data about circulation, program attendance, the demographics of our community, and so on. When we collect the data in a formalized way and use it to make decisions, we call this evaluation. Simply put, “evaluation determines the merit, worth, or value of things,” according to evaluation expert Michael Scriven.

Equitable Evaluation

So what does this have to do with equity, diversity, and inclusion? Well…everything. If evaluation does in fact determine the merit, worth, or value of programs and services, what happens when your library’s evaluation excludes or overlooks certain groups from the data? Let’s take a look:

You are trying to evaluate patron satisfaction at your library, so you print off a stack of surveys and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (in other words, everyone is being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others’. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, you might conclude that “X% of patrons felt this way about x, y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons.

Inequities are perpetuated by evaluation when we fail to ensure our methods are inclusive and representative of everyone in our target group. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative, serving to maintain current power structures.

Evaluation With the Community, not On the Community

That’s a heavy burden to take on as an evaluator and a library professional, especially when taking part in people’s marginalization is the last thing you would want to do. Luckily, the research community has long been working on some answers to this problem. Community-based participatory research (CBPR) is contingent on the participation of those you are evaluating (your target population) and emphasizes democratization of the process. CBPR is defined as:

“focusing on social, structural, and physical environmental inequities through active involvement of community members, organizational representatives, and researchers in all aspects of the research process. Partners contribute their expertise to enhance understanding of a given phenomenon and integrate the knowledge gained with action to benefit the community involved.”

CBPR centers around seven key principles:

  1. Recognizes community as a unit of identity
  2. Builds on strengths and resources
  3. Facilitates collaborative partnerships in all phases of the research
  4. Integrates knowledge and action for mutual benefit of all partners
  5. Promotes a co-learning and empowering process that attends to social inequalities
  6. Involves a cyclical and iterative process
  7. Disseminates findings and knowledge gained to all partners

As one librarian put it, CBPR “dismantles the idea that the researcher is the expert and centers the knowledge of the community members.” When those that you are evaluating (whether it be patrons, non-users, people with a disability, non-English speakers, etc.) are involved in the entire process, your data will invariably become more equitable. As a result, your evaluation outcome will more effectively address real problems for your community. It’s a win-win for everyone.

However, if diving into a full community-based participatory evaluation feels impossible given your time and resources, that’s okay. Think of CBPR as your ideal and then adjust to a level that is feasible for your library. The continuum of community engagement below outlines what some of those different levels might look like.

The continuum of community engagement ranges from total CBPR at one end to community engagement at the other. Total CBPR is full involvement in all parts of the study’s development and conduct. CBPR light is partial involvement in some or all parts of the study’s development and conduct. Community-based research is research conducted in collaboration with members of the community. And community engagement is working with community members and agencies to reach community members.

The Big Takeaway

Evaluating your practices, policies, and programs in a library can lead to better outcomes for your library community. However, even the best of intentions can create harm for historically underrepresented groups when they are excluded from the very data used to make decisions that impact them. When undertaking an evaluation of any kind, think about the principles of CBPR and how you can incorporate them into your plan.

Why Observe? Watch and Learn

When I was a kid, one of my favorite summer activities was staring at hummingbirds. I would sit for hours, moving as little as possible, while I took notes about everything I saw. (Yes, I was a pretty weird eight year old.) I wanted to ask the hummingbirds so many questions, but I don’t speak hummingbird! Observing them was my only option for trying to understand their behavior. 

While it is literally impossible to ask a hummingbird to take a survey, there are many times with humans when a survey won’t work to collect the data you need either. Observation can be a great data collection tool when you want to see how different people interact with each other, a space, or a passive program. Observation is also helpful when it would be difficult for someone to answer a question accurately, like when you ask them to remember what they did or, particularly with children, if you ask them to give critical feedback or written feedback, both of which are sometimes developmentally inappropriate. 

In this post, I’m going to talk about why you might choose observation as a data collection method. Next time, I’ll talk about the logistics of observations and how you can use observational data. To better understand why you would collect data with observations, let’s use our example evaluation question from throughout this blog series: “Does attending storytime help caregivers use new literacy skills at home?” 

When we first outlined number data and story data, we talked about when to use each. We also outlined how to break your research question down into smaller questions. You really need to do that work to get to this point, so let’s go back and review what we did.  Here are some of the sub-questions we identified within our larger evaluation question:

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
  • Are caregivers learning new literacy skills during storytime? 
  • Do caregivers use new literacy skills from storytime at home? 

Would a survey work to collect this data? We certainly could ask caregivers all of these questions. But we would immediately bump into some of the problems that come up when people self-report data: 1) we are not great at remembering things accurately and 2) we want to portray ourselves in the best possible light (social desirability bias). Let’s take a look at how those challenges would impact the data collection for our questions. 

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
    • They may not know or accurately remember which skills they knew before attending storytime and which they learned at storytime. 
  • Are caregivers learning new literacy skills during storytime? 
    • They may report that they are learning new literacy skills at storytime because they don’t want to hurt anyone’s feelings—even if they aren’t actually learning those skills. 
  • Do caregivers use new literacy skills from storytime at home? 
    • They may report that they are using new literacy skills at home because that feels nice to say—even if they aren’t actually using those skills at home. 

So we could collect that data using a survey, but it may not be very accurate. We could get more accurate data by observing caregivers at home with children before they ever attended a library storytime and then continuing to observe after they started attending storytime. Then we could see for ourselves what skills they already knew and used at home, and which ones they learned at storytime. We could tally up how often they were using those skills too. Great! Let’s go follow people and their children around their homes 24 hours a day taking notes for several months. 

What? You don’t think that’s going to be a thrilling success? Unlike hummingbirds, who don’t seem to mind too much or alter their behavior a lot while I am watching, humans mind quite a bit and can change their behavior when they are being observed. Additionally, do you know any library staff who have the time to do this kind of intense observational study? Yeah, that’s what I thought. The time involved in observation and successfully navigating privacy concerns are two major elements that you always need to consider. 

What can we do that’s a little more realistic? Collecting data in the real world is often about doing what you can with what you have. In this case, it is unlikely anyone would let us come follow them around their home. We can, however, more easily observe caregivers and their children in the library. This would allow us to observe for indicators of caregivers learning skills during storytime and to observe if families are using early literacy skills while they are spending unstructured time in the library. Intrigued as to how we would do that? Come back for our next post where we’ll get into the nuts and bolts of how you can collect data using observation and pull out important takeaways from that data.

If you are an aspiring birdnerd, the two hummingbirds pictured are both species we have in Colorado, and you can learn more about them here.

Surveys: Don’t just set it and forget it!

Surveys are the rotisserie oven of data collection methods. You simply “set it, and forget it!” Because you can’t adjust on the fly once a survey is out, it’s important to be strategic about how you’re reaching your target population. Otherwise, you may be leaving out key subsets of your audience—which are often voices that are already historically underrepresented.

Is your survey equitable? 

Let’s say you want to send out a survey to library users, so you print off a stack of copies and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (in other words, everyone is being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others’. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, you might conclude that “X% of patrons felt this way about x, y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons.

Who has access to your survey? 

Start by thinking about who you want to answer your survey—your target population. Where do they live? What do they do? What identities do they hold? Consider the diversity of people that might live within a more general population: racial and ethnic identities, sexual orientation, socio-economic status, age, religion, etc. Next, think through the needs and potential barriers for people in your target population, such as language, access to transportation, access to mail, color blindness, literacy, sightedness, other physical challenges, immigration status, etc. Create a distribution plan that ensures that everyone in your target population—whether they face barriers or not—can access your survey easily. Here are some common distribution methods you could use: 

  • Direct mail – Here’s more information about how to do a mail survey and its advantages and disadvantages. 
  • Online – For more information on how to make your online survey accessible, check out this article from Survey Monkey.
  • Telephone – In a telephone survey, someone calls the survey taker and reads them the questions over the phone while recording their answers. 
  • In-person – Surveys can also be administered in person with a printed stack of surveys or a tablet. However, with this approach you might run into the dangers of convenience sampling.

Whatever your target audience, surveys are rarely one-size-fits-all. The best plan is often a mixed-mode approach, where you employ multiple distribution strategies to ensure equitable access for all members of your target population. 

Who is and isn’t taking your survey?

Great! You’ve constructed a distribution plan that you feel can equitably reach your target population, but did it work? The only way to know for sure is by collecting certain demographic information as part of your survey. 

As library professionals, collecting identifying information can feel like a direct contradiction to our value of privacy. Yet, as a profession we are also committed to equity and inclusivity. When administering a survey, sometimes it’s necessary to collect demographic data to better understand who is and isn’t being represented in the results. Questions about someone’s race, ethnicity, income level, location, age, gender, sexual orientation, etc. not only allow us to determine if those characteristics impact someone’s responses, but also help combat the erasure of minority or disadvantaged voices from data. However, it’s important to note that: 

  1. You should always explicitly state on your survey that demographic questions are optional, 
  2. You should ensure responses remain anonymous either by not collecting personal identifying information or making sure access to that information is secure, and 
  3. You should only collect demographic information that’s relevant and necessary to answer your particular research question. 

Compare the data from your demographic questions with who you intended to include in your target audience. Are there any gaps? If so, re-evaluate your distribution plan to better reach those sub-groups, including speaking to representatives of the community or people who identify with the group for additional insight. Make additional efforts to distribute your survey, if necessary.
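As a loose illustration of that gap check, here is a minimal sketch comparing the demographic mix of survey respondents against the target population. All of the groups and numbers are hypothetical; in practice you would pull the population shares from sources like census data and the respondent shares from your survey’s demographic questions.

```python
# Hypothetical shares of the target population vs. survey respondents,
# by primary language spoken at home (values are proportions).
target_population = {"English": 0.70, "Spanish": 0.25, "Other": 0.05}
respondents = {"English": 0.90, "Spanish": 0.07, "Other": 0.03}

# For each group, how far respondents fall short of the population share.
gaps = {
    group: target_population[group] - respondents.get(group, 0.0)
    for group in target_population
}

# Flag groups underrepresented by more than 5 percentage points —
# these are the sub-groups your distribution plan may be missing.
underrepresented = [group for group, gap in gaps.items() if gap > 0.05]

print(underrepresented)
```

Here the Spanish-speaking group would be flagged, which signals that the distribution plan needs additional effort to reach those patrons.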

Conclusion

Inequities are perpetuated by research and evaluation when we fail to ensure our data collection methods are inclusive and representative of everyone in our target group. The absence of an equitable distribution plan and exclusion of relevant demographic questions on your survey runs the risk of generating data that maintains current power structures. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative.

Colorado Talking Book Library 2020

Results from the 2020 Colorado Talking Book Library (CTBL) patron survey are in! Survey respondents gave CTBL high marks again with 99% rating CTBL’s overall service as good or excellent in 2020. This is the ninth survey in a row (over 16 years) where 98% or more of respondents rated CTBL’s overall service as good or excellent.

The Colorado Talking Book Library provides free library services to Coloradans who are unable to read standard print materials. This includes patrons with physical, visual, and learning disabilities. The CTBL collection contains audio books and magazines, Braille books, large print books, equipment, and a collection of descriptive videos. In October 2020, CTBL was serving 6,190 active individual patrons and 605 organizations, which include health facilities and retirement homes.

In partnership with CTBL, the Library Research Service (LRS) has developed and administered a biennial patron survey since the fall of 2004. This year’s survey presented distinct challenges as it was administered during the COVID-19 pandemic. CTBL’s building closed to walk-in service on March 20, 2020, but the library continued to operate and provide services for CTBL patrons despite the extraordinary circumstances. This year’s survey asked questions about the devices patrons use, how they decide what to read next, how they value CTBL, and more.

To read this year’s full report, click here. To view the infographic, click here.

Report:

CTBL 2020 report

Infographic:

CTBL infographic

Guest Post: Why Use Inclusive Language

The Colorado State Library (CSL)’s Equity, Diversity, and Inclusivity Team (EDIT) is dedicated to raising awareness about EDI issues and spotlighting those values in Colorado’s cultural heritage profession. This guest post is the first in CSL’s new blog series that will regularly be posted on Colorado Virtual Library here. Twice a month, members of the LRS team will be looking at EDI research and how it applies to the library profession. We encourage you to visit the CVL website to learn more! 


Using appropriate terminology is a vital part of being an effective communicator. Using inclusive language is a way of showing consideration for everyone we meet. It is a way of recognizing, accepting, and sometimes celebrating personal characteristics such as gender, race, nationality, ethnicity, religion, or other attributes that make up a person’s identity. Using inclusive language centers the individual person and is one way of showing solidarity, allyship, and just plain old kindness. In a profession that aims to foster a welcoming, respectful, and accessible environment, inclusive language should be part of the everyday vernacular of library staff.

So, what is inclusive language?

As the Linguistic Society of America puts it:

Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities.

Inclusive language is the intentional practice of using words and phrases that correctly represent minority—and frequently marginalized—communities, such as LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, and Queer/Questioning), BIPOC (Black, Indigenous, and People of Color), people with disabilities, people with mental health conditions, immigrants, etc. The key is to avoid hurtful, stereotypical language that makes individuals feel excluded, misunderstood, and/or disrespected. The use of inclusive language acknowledges that marginalized communities have ownership over the terminology that they use to refer to themselves, not the majority. It should also be noted that terminology isn’t necessarily ubiquitous across an entire group.

Keeping up-to-date

You might have said to yourself: there are so many new words and phrases nowadays, it’s hard to keep up! You might also worry about “saying the wrong thing.” Rest assured that language is always evolving as social, cultural, and technological changes occur, and you’re not expected to know everything all of the time. A willingness to learn and an awareness that you don’t have all the answers are extremely helpful traits that can aid in building trust with the people you meet.

One resource to keep in mind is Pacific University’s extensive glossary of Equity, Diversity & Inclusion terms. Northwestern’s Inclusive Language Guide also offers many examples of preferred terms.

Centering the individual first

Inclusive language centers the individual by referring foremost to someone as a person. Doing so reinforces the idea that someone is not defined by certain characteristics, such as race, religion, or disability. For example, it is still fairly common to refer to a person with a disability as simply “disabled.” It is now becoming more standard to use the phrase “person with a disability.” The aim is to acknowledge the individual person first; this is also known as person-first or person-centered language. For example, “She is a person with a disability” rightfully acknowledges that this person has a disability, but that the person and the disability are not one and the same. For more on inclusive language with respect to disability, check out this guide by the Stanford Disability Initiative.

Another way of thinking about centering the individual is with respect to race and ethnicity. Instead of referring to “a black” or “a Jew,” simply remembering to add the word “person” (i.e., a black person, a Jewish person) affirms that you are describing a person above all, while making it clear that you are not defining someone based on a single trait.

Pronouns: If you’re not sure, ask

Generally, we use the pronouns that are consistent with a person’s gender expression, regardless of what we think their biological sex might be. If you are unsure how to refer to an individual or what the correct words may be, asking respectful questions creates an opportunity for learning, and the person you are asking may—or may not, as is their right—wish to affirm their identity to you. If you are unsure of a person’s pronouns, and it is appropriate to ask, keep it simple with something like, “Would you mind sharing what pronouns I should use when speaking to you?” In the case of gender identity, it is always better to ask than to assume. For more information on LGBTQ+ inclusive language, check out the Ally’s Guide to Terminology by GLAAD.

Always use a transgender person’s chosen name. Also, a person who identifies as a certain gender should be referred to using pronouns consistent with that gender. When it isn’t possible to ask what pronoun a person would prefer, use the pronoun that is consistent with the person’s appearance and gender expression.

-From GLAAD’s Ally’s Guide to Terminology

Do your research

Inclusive language is a broad and evolving topic. As with most things, doing a little bit of solo research can go a long way. Try to utilize reliable, research-based sources whenever possible, and also seek out the voices of experts from diverse backgrounds.

Conclusion

Intentionally using and remaining receptive to the appropriate terminology are key ways of giving others the dignity they deserve. Library staff engage with people of many intersecting identities on a day-to-day basis. It is critical that we reinforce what libraries represent as inclusive places for all by using language that mirrors our values.

By Michael Peever, Consultant Support Specialist at Colorado State Library

Bad Survey Questions, part 2

Don’t let those bad survey questions go unpunished. Last time we talked about leading and loaded questions, which can inadvertently manipulate survey respondents. This week we’ll cover three question types that can just be downright confusing to someone taking your survey! Let’s dig in. 

Do you know what double-barreled questions are and how to avoid them?

When we design surveys it’s because we’re really curious about something and want a lot of information! Sometimes that eagerness causes us to jam too much into a single question and we end up with a double-barreled question. Let’s look at an example: 

         How satisfied are you with our selection of books and other materials? 

O    Very dissatisfied
O    Dissatisfied
O    Neither satisfied nor dissatisfied 
O    Satisfied
O    Very satisfied

Phrasing the question like this creates two problems. First, if a respondent selected “very dissatisfied,” when you analyzed the data you wouldn’t know whether they meant only the books, only the other materials, or both. Second, if the respondent was dissatisfied with the book selection but very satisfied with the DVD selection, they wouldn’t know how to answer this question. They would have to either choose an inaccurate response or abandon the survey altogether.

Survey questions should always be written in a way that measures only one thing at a time. So ask yourself, “What am I measuring here?” In the example above, the answer is two things, books and other materials, which is a sign the question needs to be split.

Two ways of spotting a double-barreled question are: 

  1. Check if a single question contains two or more subjects, and is therefore measuring more than one thing.
  2. Check if the question contains the word “and.” Although not a foolproof test, the use of the word “and” is a good indicator that you should double check (pun intended) for a double-barreled question.
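If you keep your draft questions in a spreadsheet or text file, the “and” test above can even be scripted as a rough first pass. Here is a small sketch in Python; since “and” and “or” have plenty of legitimate uses, anything it flags still needs a human read.

```python
import re

def might_be_double_barreled(question: str) -> bool:
    """Rough heuristic: flag questions containing 'and' or 'or',
    which often (though not always) join two subjects into one question."""
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE))

drafts = [
    "How satisfied are you with our selection of books and other materials?",
    "How satisfied are you with our selection of books?",
]
for q in drafts:
    if might_be_double_barreled(q):
        print("Double check:", q)  # only the first draft gets flagged
```

This is a lint check, not a verdict: it would also flag a perfectly fine question like “Do you prefer mornings or evenings for programs?”, so treat every hit as a prompt to reread the question.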

You can easily fix a double-barreled question by breaking it into two separate questions.

How satisfied are you with our selection of books?
How satisfied are you with our selection of other materials?

This may feel clunky and make your survey longer, but a slightly longer survey is better than confusing respondents or collecting inaccurate answers.

Do you only use good survey questions every day on all of your surveys, always?

Life isn’t black and white, so survey questions shouldn’t be either. Build flexibility into your response options by avoiding absolutes in questions and answer choices. Absolutes force respondents into a corner, and the only way out is to give you useless data.

When writing survey questions, avoid using words like “always,” “all,” “every,” etc. When writing response options, avoid giving only yes/no answer options. Let’s look at the examples below:

                    Have you attended all of our library programs this summer?  O Yes   O No

The way this question and its response options are phrased would force almost any respondent to answer “no.” Read literally, it asks whether someone attended every single program the library offered this summer, whether or not it was relevant to them or aimed at their age group. Some respondents might interpret the question as you intended, but why leave it up to chance? Here’s how you might rewrite the absolute question:

How many of our library programs did you attend this summer?

Instead of only providing yes or no as answer choices, use a variety of answer options, including ranges. For the rewritten attendance question above, your answer options could be:

O    I have not attended any
O    1-3
O    4-6
O    7-9
O    10+
O    I do not know

Chances are, a respondent would feel like they easily fall into one of these categories and would feel comfortable choosing one that’s accurate.

Have you indexed this LRS text in your brain? 

In libraryland, we LOVE acronyms and jargon, but they don’t belong in a survey. Avoid using terms that your respondents might not be familiar with, even if they’re deeply familiar to you. If you use an acronym, spell it out the first time you mention it, like this: Library Research Service (LRS). Be as clear and concise as possible while keeping the language uncomplicated. For instance, if you ask how many times someone used a PC in the last week, be sure to explain what you mean by PC and include examples like below:

In the last week, how many times have you used a PC (iPad, laptop, Android tablet, desktop computer)? 

Do you remember all the tools and tips we covered in our bad survey questions segment?

Hey, that’s OK if not! Here’s a quick review of dos and don’ts for your surveys:

   Do use neutral language.

      Don’t use leading questions that push a respondent to answer a question in a certain way by using non-neutral language.

   Do ask yourself who wouldn’t be able to answer each question honestly.

      Don’t use loaded questions that force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation.

   Do break double-barreled questions down into two separate questions.

      Don’t use double-barreled questions that measure more than one thing in a question.

   Do build flexibility into questions by providing a variety of response options.

      Don’t use absolutes (only, all, every, always, etc.) that force respondents into a corner.

   Do keep language clear, concise, and easy to understand.

      Don’t use jargon or colloquial terms.

 

Bad Survey Questions, part 1

In our last post, we talked about when you should use a survey and what kind of data you can get from different question types. This week, we’re going to cover two of the big survey question mistakes evaluators make and how to avoid them. Otherwise you could end up with biased and incomplete data; in other words, all your hard work goes straight into the trash!

Do you think a leading question is manipulative? 

Including leading questions in a survey is a common mistake evaluators make. A leading question pushes a survey respondent to answer in a particular way by framing the question in a non-neutral manner, so the responses it produces are inaccurate. Spot a leading question by looking for any of these characteristics:

  • They are intentionally framed to elicit responses according to your preconceived notions.
  • They have an element of conjecture or assumption.
  • They contain unnecessary additions to the question.

Leading questions often contain information that a survey writer already believes to be true. The question is then phrased in a way that forces a respondent to confirm that belief. For instance, take a look at the question below. 

Do you like our exciting new programs? 

You might think your programs are exciting, but that’s because you’re biased! This question is also dichotomous, meaning respondents must answer yes or no. While dichotomous questions can be quick and easy to answer, they don’t allow for any degree of ambivalence or nuance. Using the word “like” also puts a positive assumption right in the question, pushing the respondent in that direction. A better way to write this question would be:

How satisfied are you with our new programs?

In order to avoid leading questions, remember to do the following: 

  • Use neutral language. 
  • Keep questions clear and concise by removing any unnecessary words.
  • Do not try to steer respondents toward answering in a specific way. Ask yourself if you think you know how most people will answer. This might highlight assumptions you’re making.

Why are loaded questions so bad?

Similar to leading questions, loaded questions force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation. These questions often cause a respondent to abandon a survey entirely, especially if the loaded questions are required. Loaded questions commonly:

  • Use words overcharged with positive or negative meaning.
  • Force respondents into a difficult position, such as making them think in black-and-white terms.
  • Presuppose the respondent has done something.

Let’s look at some examples of loaded questions. Put yourself in the shoes of different respondents. Can you think of someone who would have trouble or feel uncomfortable answering them?

Have you stopped accruing late fees?

How would someone who has never accrued late fees answer this question? The question traps them in a false premise. If they answer “yes,” they are saying that they once had late fees. If they answer “no” because they never started accruing late fees, they are saying that they are still getting charged.

Why did you dislike our summer reading program?

How would someone who likes the summer reading program answer this question? The question traps them in a false premise: any answer they select would be inaccurate, because the question presupposes that respondents felt negatively about the program.

When you used our “ask a librarian” service, was the librarian knowledgeable enough to answer your question?

What if the librarian wasn’t knowledgeable, but was helpful? Maybe they didn’t know the answer, but they pointed you in the right direction so that you could find it. This phrasing forces the respondent to think in black-and-white terms: either the librarian gave you the answer or gave you nothing. Not to mention this question assumes you’ve used the service at all! 

Here are some ways to avoid using loaded questions:

  • Test your survey with a small sample of people and see if everyone is able to answer every question honestly. 
  • If you aren’t able to test it, try putting on multiple hats and ask yourself, “Who wouldn’t be able to answer this honestly?” 
  • You can also break questions down further and use what’s called “skip logic.” This means you would first ask respondents, “Have you used our ask a librarian service?” If they answer “yes,” then you would have them continue to a question about that service. If they answer “no,” they would skip to the next section. 
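Your survey platform will configure skip logic through its own settings, but the branching rule itself is simple enough to sketch. Here is a minimal Python illustration; the prompts and the `ask` callback are hypothetical, just to show the flow.

```python
def run_survey(ask):
    """Two-step skip logic. `ask` is any function that takes a prompt
    string and returns the respondent's answer as a string."""
    responses = {}
    responses["used_service"] = ask('Have you used our "ask a librarian" service? (yes/no)')
    if responses["used_service"].strip().lower() == "yes":
        # Only respondents who used the service see the follow-up.
        responses["service_feedback"] = ask("How helpful was the response you received?")
    return responses

# Example: a canned respondent who has never used the service.
answers = iter(["no"])
print(run_survey(lambda prompt: next(answers)))  # {'used_service': 'no'}
```

The point of the sketch is the `if`: respondents who answer “no” never see (and never have to guess at) the follow-up question, so the data you collect about the service comes only from people who actually used it.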

How useful was this blog post for learning about surveys and helping you file your taxes?

As the bad question example above might allude to, we aren’t done with this topic! In our next post, we’ll talk about double-barreled questions and absolute questions, so stay tuned! As always, if you have any questions or feedback we’d love to hear from you at LRS@LRS.org.

Are you ready to learn about surveys? O Yes   O No

1. What is a survey?

If you’ve ever responded to the U.S. Census, then you’ve taken a survey, which is simply a questionnaire that asks respondents to answer a set of questions. Surveys are a common way of collecting data because they efficiently reach a large number of people, are anonymous, and tend to be less expensive and time-intensive than other data collection methods. The purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print. 

2. Should I use a survey to collect data? 

In our last post we talked about how to decide which data collection method fits your evaluation. The first step is figuring out your evaluation question and determining if a survey can answer it. Surveys might be the right option if you want to collect information from a large number of people about their needs, opinions, or behaviors. For instance, they can help you determine what patrons learned from a program, the different ways people use resources at your library, or even what services non-users might be interested in, among other things. 

Surveys might not be the right method if: 

  • You’re primarily trying to answer questions of why or how, as these work best as open-ended questions and are better suited for interviews or focus groups. Surveys can contain open-ended questions, but they are typically supplemental to the closed questions that make up the majority of the survey.

  • Participant self-reported behavior is likely to be inaccurate. For instance, surveying children on how engaging a program was might not be the best approach.

In addition to these criteria, you should also consider the time and costs associated with a survey and whether they line up with the resources you have available. A more thorough breakdown of the costs associated with a survey can be found here.

3. How many of these question types have you used? (Mark all that apply)

Although survey questions can be written in a multitude of ways, ultimately every question is either closed, open-ended, or a combination of both. Open-ended questions ask the survey respondent to provide an answer in their own words, like in the example below.

Why did you decide to read this blog post? 

Open-ended questions allow the evaluator to collect robust data by not limiting the respondent to a list of possible answers. For instance, maybe you’re reading this blog right now because your cat walked across your keyboard and accidentally clicked on the link. You’re unlikely to include that answer option in a closed question, but an open-ended question can capture that sort of qualitative data. 

Although there are many pros to using open-ended questions, there are also some downsides. The qualitative data they produce takes time and a skilled evaluator to analyze. That’s why closed questions are more commonly used on surveys.

Unlike open-ended questions, closed questions provide a set of answer choices and produce quantitative data. Let’s explore some different types of closed questions.

Multiple choice questions allow respondents to select one or more options from a set of answers that you define. A common drawback of multiple choice questions is that they limit answers to a predetermined list like below, which may not reflect everyone’s responses. Often the problem is solved by adding an “other” option where a respondent can write in their answer if it isn’t part of the list. 

How do you feel today?

  Happy

  Sad

  Other, please specify: ___________

Adding an “other” option makes part of this question open-ended. When you analyze the data for this question, pay close attention to the percentage of respondents who chose “other.” If it’s a large portion (usually more than 10 percent), you will need to do some qualitative analysis of these answers.
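That 10 percent check is easy to automate once responses are exported. Here is a small Python sketch, using made-up answers to the “How do you feel today?” question above:

```python
# Hypothetical exported answers to the multiple choice question.
responses = ["Happy", "Sad", "Other", "Happy", "Other", "Happy", "Sad", "Other"]

# Share of respondents who chose "Other" out of all responses.
other_share = responses.count("Other") / len(responses)
print(f"{other_share:.0%} of respondents chose 'Other'")
if other_share > 0.10:
    print("Over 10% -- read through the write-in answers before reporting.")
```

In this made-up data 3 of 8 respondents chose “Other,” well over the 10 percent rule of thumb, so the write-in answers would deserve a qualitative pass before you report the results.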

Likert scale questions give respondents a range of options (usually five or seven choices). They’re often used to gauge someone’s feelings or opinions and can be written as statements instead of questions (see below). Writing a Likert scale can be tricky because you need to make sure your response options are balanced. We’ll talk about that more in depth in our next post. Here’s an example of a Likert scale question.

I am learning something from this post on surveys.

  Strongly agree

  Agree

  Neither agree nor disagree

  Disagree

  Strongly disagree 

Demographic questions ask respondents about descriptive characteristics, such as age, gender, race, income level, etc. Demographic questions allow you to gain deeper insight into your data. For instance, I could use a question that asks a respondent’s age to analyze whether younger respondents were more likely to say they “disagree” or “strongly disagree” on the question above. 
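As a sketch of that kind of analysis, here is a tiny crosstab built with Python’s standard library; the age groups and answers are made up for illustration.

```python
from collections import Counter

# Each row pairs a respondent's age group with their Likert answer.
rows = [
    ("under 30", "Disagree"),
    ("under 30", "Strongly disagree"),
    ("under 30", "Agree"),
    ("30 and over", "Agree"),
    ("30 and over", "Strongly agree"),
]

crosstab = Counter(rows)  # counts per (age group, answer) pair
print(crosstab[("under 30", "Disagree")])  # how many younger respondents disagreed
```

With real data you would compare proportions within each age group rather than raw counts, since the groups are rarely the same size, but the pairing of a demographic with an opinion question is the core move.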

These are the most common question types you’ll find on a survey, but for a deeper dive on different question formats, such as matrix, dropdown, and ranking, check out this article from SurveyMonkey. 

4. Stay tuned for surveys pt. 2?       Yes       Definitely     I wouldn’t miss it for the world

We’ve all probably taken a survey, but there’s a lot that goes into making them balanced, understandable, and unbiased. In our next post we’ll cover why the question above should never be on a survey and other common mistakes people make when writing survey questions.