Nothing About Us, Without Us: Equitable evaluation through community engagement

 

This is a “guest post” from the Colorado Virtual Library Equity, Diversity, and Inclusion blog.

When you wake up, one of the first things you might do is open your weather app to see what the temperature is and if it’s supposed to rain that day. You then use that information—or data—to make important decisions, like what to wear and whether you should bring an umbrella when you go out. The fact is, we are all collecting data every day—and we use that data to inform what we do next.

It’s no different in libraries. We collect data about circulation, program attendance, the demographics of our community, and so on. When we collect the data in a formalized way and use it to make decisions, we call this evaluation. Simply put, “evaluation determines the merit, worth, or value of things,” according to evaluation expert Michael Scriven.

Equitable Evaluation

So what does this have to do with equity, diversity, and inclusion? Well…everything. If evaluation does in fact determine the merit, worth, or value of programs and services, what happens when your library’s evaluation excludes or overlooks certain groups from the data? Let’s take a look:

You are trying to evaluate patron satisfaction at your library, so you print off a stack of surveys and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (or in other words, are being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, you might conclude, “X% of patrons felt this way about x, y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons.

Inequities are perpetuated by evaluation when we fail to ensure our methods are inclusive and representative of everyone in our target group. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative, serving to maintain current power structures.

Evaluation With the Community, not On the Community

That’s a heavy burden to take on as an evaluator and a library professional, especially when taking part in people’s marginalization is the last thing you would want to do. Luckily, the research community has long been working on some answers to this problem. Community-based participatory research (CBPR) is contingent on the participation of those you are evaluating (your target population) and emphasizes democratization of the process. CBPR is defined as:

“focusing on social, structural, and physical environmental inequities through active involvement of community members, organizational representatives, and researchers in all aspects of the research process. Partners contribute their expertise to enhance understanding of a given phenomenon and integrate the knowledge gained with action to benefit the community involved.”

CBPR centers around seven key principles:

  1. Recognizes community as a unit of identity
  2. Builds on strengths and resources
  3. Facilitates collaborative partnerships in all phases of the research
  4. Integrates knowledge and action for mutual benefit of all partners
  5. Promotes a co-learning and empowering process that attends to social inequalities
  6. Involves a cyclical and iterative process
  7. Disseminates findings and knowledge gained to all partners

As one librarian put it, CBPR “dismantles the idea that the researcher is the expert and centers the knowledge of the community members.” When those that you are evaluating (whether it be patrons, non-users, people with a disability, non-English speakers, etc.) are involved in the entire process, your data will invariably become more equitable. As a result, your evaluation outcome will more effectively address real problems for your community. It’s a win-win for everyone.

However, if diving into a full community-based participatory evaluation feels impossible given your time and resources, that’s okay. Think of CBPR as your ideal and then adjust to a level that is feasible for your library. The continuum of community engagement below outlines what some of those different levels might look like.

The continuum of community engagement ranges from total CBPR on the left end of the spectrum to community engagement on the right end of the spectrum. Total CBPR is full involvement in all parts of the study development and conduct. CBPR light is partial involvement in some or all parts of the study development and conduct. Community based research is research conducted in collaboration with members of the community. And community engagement is working with community members and agencies to reach community members.

The Big Takeaway

Evaluating your practices, policies, and programs in a library can lead to better outcomes for your library community. However, even the best of intentions can create harm for historically underrepresented groups when they are excluded from the very data used to make decisions that impact them. When undertaking an evaluation of any kind, think about the principles of CBPR and how you can incorporate them into your plan.

Why Observe? Watch and Learn

When I was a kid, one of my favorite summer activities was staring at hummingbirds. I would sit for hours, moving as little as possible, while I took notes about everything I saw. (Yes, I was a pretty weird eight-year-old.) I wanted to ask the hummingbirds so many questions, but I don’t speak hummingbird! Observing them was my only option for trying to understand their behavior. 

While it is literally impossible to ask a hummingbird to take a survey, there are many times when a survey won’t work to collect the data you need from humans either. Observation can be a great data collection tool when you want to see how different people interact with each other, a space, or a passive program. Observation is also helpful when it would be difficult for someone to answer a question accurately, such as when you ask them to remember exactly what they did or, particularly with children, when you ask for critical or written feedback, both of which can be developmentally inappropriate. 

In this post, I’m going to talk about why you might choose observation as a data collection method. Next time, I’ll talk about the logistics of observations and how you can use observational data. To better understand why you would collect data with observations, let’s use our example evaluation question from throughout this blog series: “Does attending storytime help caregivers use new literacy skills at home?” 

When we first outlined number data and story data, we talked about when to use each. We also outlined how to break your research question down into smaller questions. You really need that groundwork to get to this point, so let’s go back and review what we did. Here are some of the sub-questions we identified within our larger evaluation question:

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
  • Are caregivers learning new literacy skills during storytime? 
  • Do caregivers use new literacy skills from storytime at home? 

Would a survey work to collect this data? We certainly could ask caregivers all of these questions. But we would immediately bump into some of the problems that come up when people self-report data: 1) we are not great at remembering things accurately and 2) we want to portray ourselves in the best possible light (social desirability bias). Let’s take a look at how those challenges would impact the data collection for our questions. 

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
    • They may not know or accurately remember which skills they knew before attending storytime and which they learned at storytime. 
  • Are caregivers learning new literacy skills during storytime? 
    • They may report that they are learning new literacy skills at storytime because they don’t want to hurt anyone’s feelings—even if they aren’t actually learning those skills. 
  • Do caregivers use new literacy skills from storytime at home? 
    • They may report that they are using new literacy skills at home because that feels nice to say—even if they aren’t actually using those skills at home. 

So we could collect that data using a survey, but it may not be very accurate. We could get more accurate data by observing caregivers at home with children before they ever attended a library storytime and then continuing to observe after they started attending storytime. Then we could see for ourselves what skills they already knew and used at home, and which ones they learned at storytime. We could tally up how often they were using those skills too. Great! Let’s go follow people and their children around their homes 24 hours a day taking notes for several months. 

What? You don’t think that’s going to be a thrilling success? Unlike hummingbirds, who don’t seem to mind too much or alter their behavior a lot while I am watching, humans mind quite a bit and can change their behavior when they are being observed. Additionally, do you know any library staff who have the time to do this kind of intense observational study? Yeah, that’s what I thought. The time involved in observation and successfully navigating privacy concerns are two major elements that you always need to consider. 

What can we do that’s a little more realistic? Collecting data in the real world is often about doing what you can with what you have. In this case, it is unlikely anyone would let us come follow them around their home. We can, however, more easily observe caregivers and their children in the library. This would allow us to observe for indicators of caregivers learning skills during storytime and to observe if families are using early literacy skills while they are spending unstructured time in the library. Intrigued as to how we would do that? Come back for our next post where we’ll get into the nuts and bolts of how you can collect data using observation and pull out important takeaways from that data.

If you are an aspiring bird nerd, the two hummingbirds pictured are both species we have in Colorado, and you can learn more about them here.

Surveys: Don’t just set it and forget it!

Surveys are the rotisserie oven of data collection methods. You simply “set it, and forget it!” That’s why it’s important to be strategic about how you’re reaching your target population. Otherwise, you may be leaving out key subsets of your audience—which are often voices that are already historically underrepresented.  

Is your survey equitable? 

Let’s say you want to send out a survey to library users, so you print off a stack of copies and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (or in other words, are being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, you might conclude, “X% of patrons felt this way about x, y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons. 

Who has access to your survey? 

Start by thinking about who you want to answer your survey—your target population. Where do they live? What do they do? What identities do they hold? Consider the diversity of people that might live within a more general population: racial and ethnic identities, sexual orientation, socio-economic status, age, religion, etc. Next, think through the needs and potential barriers for people in your target population, such as language, access to transportation, access to mail, color blindness, literacy, sightedness, other physical challenges, immigration status, etc. Create a distribution plan that ensures that everyone in your target population—whether they face barriers or not—can access your survey easily. Here are some common distribution methods you could use: 

  • Direct mail – Here’s more information about how to do a mail survey and its advantages and disadvantages. 
  • Online – For more information on how to make your online survey accessible, check out this article from Survey Monkey.
  • Telephone – In a telephone survey, someone calls the survey taker and reads them the questions over the phone while recording their answers. 
  • In-person – Surveys can also be administered in person with a printed stack of surveys or a tablet. However, with this approach you might run into the dangers of convenience sampling.

Depending on your target audience, surveys are rarely one-size-fits-all. The best plan is often a mixed-methods approach, where you employ multiple distribution strategies to ensure equitable access for all members of your target population. 

Who is and isn’t taking your survey?

Great! You’ve constructed a distribution plan that you feel can equitably reach your target population, but did it work? The only way to know for sure is by collecting certain demographic information as part of your survey. 

As library professionals, collecting identifying information can feel like a direct contradiction to our value of privacy. Yet, as a profession we are also committed to equity and inclusivity. When administering a survey, sometimes it’s necessary to collect demographic data to better understand who is and isn’t being represented in the results. Questions about someone’s race, ethnicity, income level, location, age, gender, sexual orientation, etc. not only allow us to determine if those characteristics impact someone’s responses, but also help combat the erasure of minority or disadvantaged voices from data. However, it’s important to note that: 

  1. You should always explicitly state on your survey that demographic questions are optional, 
  2. You should ensure responses remain anonymous either by not collecting personal identifying information or making sure access to that information is secure, and 
  3. Only collect demographic information that’s relevant and necessary to answer your particular research question. 

Compare the data from your demographic questions with who you intended to include in your target audience. Are there any gaps? If so, re-evaluate your distribution plan to better reach these sub-groups, including speaking with representatives of the community or people who identify with the group for additional insight. Make additional efforts to distribute your survey, if necessary.
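To make that gap check concrete, here is a minimal Python sketch. The group labels, percentages, and the `find_gaps` helper are all made-up illustrations, not real data or any survey tool’s API: the idea is simply to compare each group’s share of survey responses against its share of your target population and flag groups your distribution plan may have missed.

```python
# Hypothetical shares of the target population vs. survey respondents.
# All numbers below are invented for illustration.
target_population = {"Spanish-speaking": 0.20, "age 65+": 0.25, "rural": 0.30}
survey_respondents = {"Spanish-speaking": 0.05, "age 65+": 0.22, "rural": 0.28}

def find_gaps(target, respondents, threshold=0.10):
    """Return groups whose share of responses trails their share of the
    target population by more than `threshold`."""
    gaps = {}
    for group, target_share in target.items():
        response_share = respondents.get(group, 0.0)
        if target_share - response_share > threshold:
            gaps[group] = round(target_share - response_share, 2)
    return gaps

print(find_gaps(target_population, survey_respondents))
# prints {'Spanish-speaking': 0.15}
```

In this invented example, Spanish-speaking patrons are badly under-represented in the responses, which would prompt you to revisit how the survey reached that group.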

Conclusion

Inequities are perpetuated by research and evaluation when we fail to ensure our data collection methods are inclusive and representative of everyone in our target group. The absence of an equitable distribution plan and exclusion of relevant demographic questions on your survey runs the risk of generating data that maintains current power structures. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative.

Colorado Talking Book Library 2020

Results from the 2020 Colorado Talking Book Library (CTBL) patron survey are in! Survey respondents gave CTBL high marks again with 99% rating CTBL’s overall service as good or excellent in 2020. This is the ninth survey in a row (over 16 years) where 98% or more of respondents rated CTBL’s overall service as good or excellent.

The Colorado Talking Book Library provides free library services to Coloradans who are unable to read standard print materials. This includes patrons with physical, visual, and learning disabilities. The CTBL collection contains audio books and magazines, Braille books, large print books, equipment, and a collection of descriptive videos. In October 2020, CTBL was serving 6,190 active individual patrons and 605 organizations, which include health facilities and retirement homes.

In partnership with CTBL, the Library Research Service (LRS) has developed and administered a biennial patron survey since the fall of 2004. This year’s survey presented distinct challenges as it was administered during the COVID-19 pandemic. CTBL’s building closed to walk-in service on March 20, 2020, but the library continued to operate and provide services for CTBL patrons despite the extraordinary circumstances. This year’s survey asked questions about the devices patrons use, how they decide what to read next, how they value CTBL, and more.

To read this year’s full report, click here. To view the infographic, click here.

Report:

CTBL 2020 report

Infographic:

CTBL infographic

Guest Post: Why Use Inclusive Language

The Colorado State Library (CSL)’s Equity, Diversity, and Inclusivity Team (EDIT) is dedicated to raising awareness about EDI issues and spotlighting those values in Colorado’s cultural heritage profession. This guest post is the first in CSL’s new blog series that will regularly be posted on Colorado Virtual Library here. Twice a month, members of the LRS team will be looking at EDI research and how it applies to the library profession. We encourage you to visit the CVL website to learn more! 


Using appropriate terminology is a vital part of being an effective communicator. Using inclusive language is a way of showing consideration for everyone we meet. It is a way of recognizing, accepting, and sometimes celebrating personal characteristics such as gender, race, nationality, ethnicity, religion, or other attributes that make up a person’s identity. Using inclusive language centers the individual person and is one way of showing solidarity, allyship, and just plain old kindness. In a profession that aims to foster a welcoming, respectful, and accessible environment, inclusive language should be part of the everyday vernacular of library staff.

So, what is inclusive language?

As the Linguistic Society of America puts it:

Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities.

Inclusive language is the intentional practice of using words and phrases that correctly represent minority—and frequently marginalized—communities, such as LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, and Queer/Questioning), BIPOC (Black, Indigenous, and People of Color), people with disabilities, people with mental health conditions, immigrants, etc. The key is to avoid hurtful, stereotypical language that makes individuals feel excluded, misunderstood, and/or disrespected. The use of inclusive language acknowledges that marginalized communities have ownership over the terminology that they use to refer to themselves, not the majority. It should also be noted that terminology isn’t necessarily ubiquitous across an entire group.

Keeping up-to-date

You might have said to yourself: there are so many new words and phrases nowadays, it’s hard to keep up! You might also worry about “saying the wrong thing.” Rest assured that language is always evolving as social, cultural, and technological changes occur, and you’re not expected to know everything all of the time. A willingness to learn and an awareness that you don’t have all the answers are extremely helpful traits that can aid in building trust with the people you meet.

One resource to keep in mind is the Pacific University’s extensive glossary of Equity, Diversity & Inclusion terms. Northwestern’s Inclusive Language Guide also offers a lot of examples of preferred terms.

Centering the individual first

Inclusive language centers the individual by referring foremost to someone as a person. Doing so reinforces the idea that someone is not defined by certain characteristics, such as race, religion, or disability. For example, it is still fairly common to refer to a person with a disability as simply “disabled.” It is now becoming more standard to use the phrase “person with a disability.” The aim is to acknowledge the individual person first; this is also known as person-first or person-centered language. For example, “She is a person with a disability” rightfully acknowledges that this person has a disability but is not synonymous with it. For more on inclusive language with respect to disability, check out this guide by the Stanford Disability Initiative.

Another way of thinking about centering the individual is with respect to race and ethnicity. Instead of referring to “a black” or “a Jew,” simply remembering to add the word “person” (i.e., a Black person, a Jewish person) affirms that you are describing a person above all, while making it clear that you are not defining someone based on a single trait.

Pronouns: If you’re not sure, ask

Generally, we use the pronouns consistent with a person’s gender expression, regardless of what we assume their biological sex might be. If you are unsure how to refer to an individual or which words are correct, asking respectful questions creates an opportunity for learning, and the person you are asking may—or may not, as is their right—wish to affirm their identity to you. If you are unsure of a person’s pronouns, and it is appropriate to ask, keep it simple with something like, “Would you mind sharing what pronouns I should use when speaking to you?” In the case of gender identity, it is always better to ask than to assume. For more information on LGBTQ+ inclusive language, check out the Ally’s Guide to Terminology by GLAAD.

Always use a transgender person’s chosen name. Also, a person who identifies as a certain gender should be referred to using pronouns consistent with that gender. When it isn’t possible to ask what pronoun a person would prefer, use the pronoun that is consistent with the person’s appearance and gender expression.

-From GLAAD’s Ally’s Guide to Terminology

Do your research

Inclusive language is a broad and evolving topic. As with most things, doing a little bit of solo research can go a long way. Try to utilize reliable, research-based sources whenever possible, and also seek out the voices of experts from diverse backgrounds.

Conclusion

Intentionally using and remaining receptive to the appropriate terminology are key ways of giving others the dignity they deserve. Library staff engage with an intersection of many different types of people on a day-to-day basis. It is critical that we reinforce what libraries represent as an inclusive place for all by using the language that mirrors our values.

By Michael Peever, Consultant Support Specialist at Colorado State Library

Bad Survey Questions, part 2


Don’t let those bad survey questions go unpunished. Last time we talked about leading and loaded questions, which can inadvertently manipulate survey respondents. This week we’ll cover three question types that can just be downright confusing to someone taking your survey! Let’s dig in. 

Do you know what double-barreled questions are and how to avoid them?

When we design surveys it’s because we’re really curious about something and want a lot of information! Sometimes that eagerness causes us to jam too much into a single question and we end up with a double-barreled question. Let’s look at an example: 

         How satisfied are you with our selection of books and other materials? 

O    Very dissatisfied
O    Dissatisfied
O    Neither satisfied nor dissatisfied 
O    Satisfied
O    Very satisfied

Phrasing the question like this creates two problems. First, if a respondent selected “very dissatisfied,” when you analyzed the data you wouldn’t know whether they were very dissatisfied with only the books, only the other materials, or both. Second, if a respondent was dissatisfied with the book selection but very satisfied with the DVD selection, they wouldn’t know how to answer. They either have to choose an inaccurate response or abandon the survey altogether.  

Survey questions should always be written so that they measure only one thing at a time. So ask yourself, “What am I measuring here?” In the question above, the answer is two things: books and other materials. That’s the double barrel. 

Two ways of spotting a double-barreled question are: 

  1. Check if a single question contains two or more subjects, and is therefore measuring more than one thing.
  2. Check if the question contains the word “and.” Although not a foolproof test, the use of the word “and” is a good indicator that you should double check (pun intended) for a double-barreled question.

You can easily fix a double-barreled question by breaking it into two separate questions.

How satisfied are you with our selection of books?
How satisfied are you with our selection of other materials?

This may feel clunky and cause your survey to be longer, but a longer survey is better than making respondents feel confused or answer incorrectly. 

Do you only use good survey questions every day on all of your surveys, always?

Life isn’t black and white, therefore survey questions shouldn’t be either. Build flexibility into your response options by avoiding absolutes in questions and answer choices. Absolutes force respondents into a corner and the only way out is to give you useless data. 

When writing survey questions, avoid using words like “always,” “all,” “every,” etc. When writing response options, avoid giving only yes/no answer options. Let’s look at the examples below:

                    Have you attended all of our library programs this summer?  O Yes   O No

The way this question and response options are phrased would force almost any respondent to answer “no.” Read literally, you’re asking if someone went to every library program you’ve ever had, whether or not it was offered this summer or for their age group. Some respondents might interpret the question as you intended, but why leave it up to chance? Here’s how you might rewrite the absolute question:

How many of our library programs did you attend this summer?

Instead of only providing yes or no as answer choices, you should also use a variety of answer options, including ranges. For instance, for the question above about how many library programs the respondent attended this summer, your answer options could be:

O    I have not attended any
O    1-3
O    4-6
O    7-9
O    10+
O    I do not know

Chances are, a respondent would feel like they easily fall into one of these categories and would feel comfortable choosing one that’s accurate.
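Ranged options also pay off at analysis time. As a minimal sketch (the `to_range_option` helper and its buckets are hypothetical, mirroring the example options above), a small function keeps the categories consistent if you ever need to recode raw counts into the same buckets:

```python
def to_range_option(count):
    """Bucket a program-attendance count into the survey's answer options.

    `count` is an integer, or None when the respondent doesn't know.
    """
    if count is None:
        return "I do not know"
    if count == 0:
        return "I have not attended any"
    if count <= 3:
        return "1-3"
    if count <= 6:
        return "4-6"
    if count <= 9:
        return "7-9"
    return "10+"

print(to_range_option(5))
# prints 4-6
```

Because every possible count falls into exactly one bucket, respondents (and analysts) never have to guess which category applies.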

Have you indexed this LRS text in your brain? 

In libraryland, we LOVE acronyms and jargon, but they don’t belong in a survey. Avoid using terms that your respondents might not be familiar with, even if they’re deeply familiar to you. If you use an acronym, spell it out the first time you mention it, like this: Library Research Service (LRS). Be as clear and concise as possible while keeping the language uncomplicated. For instance, if asking how many times someone used a PC in the last week, be sure to explain what you mean by PC, and include examples like below: 

In the last week, how many times have you used a PC (iPad, laptop, Android tablet, desktop computer)? 

Do you remember all the tools and tips we covered in our bad survey questions segment?

Hey, that’s okay if not! Here’s a quick review of dos and don’ts for your surveys:

  • Do use neutral language.
    • Don’t use leading questions that push a respondent to answer in a certain way through non-neutral language.
  • Do ask yourself who wouldn’t be able to answer each question honestly.
    • Don’t use loaded questions that force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation.
  • Do break double-barreled questions down into two separate questions.
    • Don’t use double-barreled questions that measure more than one thing at a time.
  • Do build flexibility into questions by providing a variety of response options.
    • Don’t use absolutes (only, all, every, always, etc.) that force respondents into a corner.
  • Do keep language clear, concise, and easy to understand.
    • Don’t use jargon or colloquial terms.

 

Bad Survey Questions, part 1

In our last post, we talked about when you should use a survey and what kind of data you can get from different question types. This week, we’re going to cover two of the big survey question mistakes evaluators make and how to avoid them so you don’t end up with biased and incomplete data. In other words—all your hard work straight into the trash!

Do you think a leading question is manipulative? 

Including leading questions in a survey is a common mistake evaluators make. A leading question pushes a survey respondent to answer in a particular way by framing the question in a non-neutral manner, so the responses it produces are inaccurate. Spot a leading question by looking for any of these characteristics:

  • They are intentionally framed to elicit responses according to your preconceived notions.
  • They have an element of conjecture or assumption.
  • They contain unnecessary additions to the question.

Leading questions often contain information that a survey writer already believes to be true. The question is then phrased in a way that forces a respondent to confirm that belief. For instance, take a look at the question below. 

Do you like our exciting new programs? 

You might think your programs are exciting, but that’s because you’re biased! This question is also dichotomous, meaning respondents must answer yes or no. While dichotomous questions can be quick and easy to answer, they don’t allow for any degree of ambivalence or intensity of preference. Using the word “like” also puts a positive assumption right in the question, pushing the respondent in that direction. A better way to write this question would be: 

How satisfied are you with our new programs?

In order to avoid leading questions, remember to do the following: 

  • Use neutral language. 
  • Keep questions clear and concise by removing any unnecessary words.
  • Do not try to steer respondents toward answering in a specific way. Ask yourself if you think you know how most people will answer. This might highlight assumptions you’re making.

Why are loaded questions so bad?

Similar to leading questions, loaded questions force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation. These types of questions often cause a respondent to abandon a survey entirely, especially if the loaded questions are required. Common characteristics of loaded questions are: 

  • They use words overcharged with positive or negative meaning. 
  • They force respondents into a difficult position, such as making them think in black-and-white terms. 
  • They presuppose the respondent has done something. 

Let’s look at some examples of loaded questions. Put yourself in the shoes of different respondents. Can you think of someone who would have trouble or feel uncomfortable answering them?

Have you stopped accruing late fees?

How would someone who has never accrued late fees answer this question? It traps them in a false premise. If they answer “yes,” they are saying that they once had late fees. If they answer “no” because they never started accruing late fees, then they are saying that they are still getting charged.

Why did you dislike our summer reading program?

How would someone who likes the summer reading program answer this question? It traps them in a false premise: any answer they select would be inaccurate. The question is loaded because it presupposes that respondents felt negatively about the program.

When you used our “ask a librarian” service, was the librarian knowledgeable enough to answer your question?

What if the librarian wasn’t knowledgeable, but was helpful? Maybe they didn’t know the answer, but they pointed you in the right direction so that you could find it. This phrasing forces the respondent to think in black-and-white terms: either the librarian had the answer or they were no help at all. Not to mention, this question assumes you’ve used the service at all!

Here are some ways to avoid using loaded questions:

  • Test your survey with a small sample of people and see if everyone is able to answer every question honestly. 
  • If you aren’t able to test it, try putting on multiple hats and asking yourself who wouldn’t be able to answer each question honestly.
  • You can also break questions down further and use what’s called “skip logic.” This means you would first ask respondents, “Have you used our ask a librarian service?” If they answer “yes,” then you would have them continue to a question about that service. If they answer “no,” they would skip to the next section. 
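The routing behind skip logic can be sketched in a few lines of code. Here’s a minimal example; the question IDs are invented for illustration:

```python
# Minimal sketch of survey skip logic; the question ids are hypothetical.
# Respondents who haven't used the service skip its follow-up questions.

def next_question(used_ask_a_librarian: bool) -> str:
    """Route the respondent based on their answer to the screening question."""
    if used_ask_a_librarian:
        return "ask_a_librarian_followup"  # continue to questions about that service
    return "next_section"                  # skip past the service questions entirely

print(next_question(True))   # ask_a_librarian_followup
print(next_question(False))  # next_section
```

Most online survey tools build this branching in for you, but the logic is the same: a screening question decides which questions a respondent ever sees.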

How useful was this blog post for learning about surveys and helping you file your taxes?

As the bad question example above suggests, we aren’t done with this topic! In our next post, we’ll talk about double-barreled questions and absolute questions, so stay tuned! As always, if you have any questions or feedback, we’d love to hear from you at LRS@LRS.org.

Are you ready to learn about surveys? Ο Yes Ο No

1. What is a survey?

If you’ve ever responded to the U.S. Census, then you’ve taken a survey, which is simply a questionnaire that asks respondents to answer a set of questions. Surveys are a common way of collecting data because they efficiently reach a large number of people, are anonymous, and tend to be less expensive and time-intensive than other data collection methods. The purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print. 

2. Should I use a survey to collect data? 

In our last post we talked about how to decide which data collection method fits your evaluation. The first step is figuring out your evaluation question and determining if a survey can answer it. Surveys might be the right option if you want to collect information from a large number of people about their needs, opinions, or behaviors. For instance, they can help you determine what patrons learned from a program, the different ways people use resources at your library, or even what services non-users might be interested in, among other things. 

Surveys might not be the right method if: 

  • You’re primarily trying to answer questions of why or how, as these work best as open-ended questions and are better suited for interviews or focus groups. Surveys can contain open-ended questions, but they are typically supplemental to the closed questions that make up the majority of the survey.
  • Participant self-reported behavior is likely to be inaccurate. For instance, surveying children on how engaging a program was might not be the best approach.

In addition to these criteria, you should also consider the time and costs associated with a survey and whether they line up with the resources you have available. A more thorough breakdown of the costs associated with a survey can be found here.

3. How many of these question types have you used? (Mark all that apply)

Although survey questions can be written in a multitude of ways, ultimately every question is either closed, open-ended, or a combination of both. Open-ended questions ask the survey respondent to provide an answer in their own words, like in the example below.

Why did you decide to read this blog post? 

Open-ended questions allow the evaluator to collect robust data by not limiting the respondent to a list of possible answers. For instance, maybe you’re reading this blog right now because your cat walked across your keyboard and accidentally clicked on the link. You’re unlikely to have included that answer option in a closed question, but an open-ended question can capture that sort of qualitative data.

Although there are many pros to using open-ended questions, there are also some downsides. The qualitative data they produce usually takes a long time, and a skilled evaluator, to analyze. That’s why closed questions are more commonly used on surveys.

Unlike open-ended questions, closed questions provide a set of answer choices and produce quantitative data. Let’s explore some different types of closed questions.

Multiple choice questions allow respondents to select one or more options from a set of answers that you define. A common drawback of multiple choice questions is that they limit answers to a predetermined list like below, which may not reflect everyone’s responses. Often the problem is solved by adding an “other” option where a respondent can write in their answer if it isn’t part of the list. 

How do you feel today?

  Happy

  Sad

  Other, please specify: ___________

Adding an “other” option makes part of this question open-ended. When you analyze the data for this question, pay close attention to the percentage of respondents who chose “other.” If it’s a large portion (usually more than 10 percent), you will need to do some qualitative analysis of these answers.
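That 10 percent rule of thumb is easy to check once your responses are tallied. A quick sketch, using made-up sample responses:

```python
# Made-up sample of answers to the multiple choice question above.
responses = ["Happy", "Sad", "Other", "Happy", "Other", "Happy", "Sad", "Other"]

other_share = responses.count("Other") / len(responses)

# More than 10% "other" suggests the answer list is missing common options,
# so the write-in answers deserve qualitative analysis.
needs_qualitative_review = other_share > 0.10
print(f"{other_share:.0%} of respondents chose 'other'")  # 38% of respondents chose 'other'
```

In practice your survey tool will report these percentages for you; the point is simply to watch that “other” share.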

Likert scale questions give respondents a range of options (usually five or seven choices). They’re often used to gauge someone’s feelings or opinions and can be written as statements instead of questions (see below). Writing a Likert scale can be tricky because you need to make sure your response options are balanced. We’ll talk about that more in depth in our next post. Here’s an example of a Likert scale question.

I am learning something from this post on surveys.

  Strongly agree

  Agree

  Neither agree nor disagree

  Disagree

  Strongly disagree 

Demographic questions ask respondents about descriptive characteristics, such as age, gender, race, income level, etc. Demographic questions allow you to gain deeper insight into your data. For instance, I could use a question that asks a respondent’s age to analyze whether younger respondents were more likely to say they “disagree” or “strongly disagree” with the statement above.
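That kind of cross-analysis amounts to tallying answers within each demographic group. A small sketch, with invented responses pairing the Likert answer and an age question:

```python
from collections import Counter

# Invented responses: each pairs a Likert answer with an age demographic answer.
responses = [
    {"age_group": "18-34", "answer": "Strongly disagree"},
    {"age_group": "18-34", "answer": "Disagree"},
    {"age_group": "18-34", "answer": "Agree"},
    {"age_group": "35+", "answer": "Agree"},
    {"age_group": "35+", "answer": "Strongly agree"},
]

# Cross-tabulate: count each answer choice within each age group.
by_group = {}
for r in responses:
    by_group.setdefault(r["age_group"], Counter())[r["answer"]] += 1

for group, counts in sorted(by_group.items()):
    disagreed = counts["Disagree"] + counts["Strongly disagree"]
    print(f"{group}: {disagreed} of {sum(counts.values())} disagreed")
```

With real data you’d have far more respondents, but the idea is the same: the demographic question becomes the grouping key for every other question on the survey.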

These are the most common question types you’ll find on a survey, but for a deeper dive on different question formats, such as matrix, dropdown, and ranking, check out this article from SurveyMonkey. 

4. Stay tuned for surveys pt. 2?       Yes       Definitely     I wouldn’t miss it for the world

We’ve all probably taken a survey, but there’s a lot that goes into making them balanced, understandable, and unbiased. In our next post we’ll cover why the question above should never be on a survey and other common mistakes people make when writing survey questions. 

Does the (Data Collection Method) Shoe Fit?

You wouldn’t go hiking in a pair of dress shoes, right? Like the variety of shoes in your closet, there are a variety of data collection methods in all different shapes and sizes. The trick is finding which data collection method fits! Today’s post will help you determine which method is best for your evaluation.

What are Data Collection Methods?

Data collection is the process of gathering information from different sources with the goal of answering a specific question (your evaluation question). The method, or procedure, that you use to collect your data is your data collection method. Four common ones are: surveys, interviews, focus groups, and observations.

  • Survey: questionnaires that ask respondents to answer a set of questions. While these questions can be closed or open-ended, the purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print. 
  • Interview: a conversation between two people—an interviewer and an interviewee—during which the interviewer asks primarily open-ended questions. Interviews may occur face-to-face, on the phone, or online. Interviews provide qualitative data.
  • Focus group: a dialogue between a group of specifically selected participants who discuss a particular topic. A moderator leads the focus group. Focus groups provide qualitative data.
  • Observation: a person (the researcher or evaluator) observes events, behaviors, and other characteristics associated with a particular topic in a natural setting. The observer records what they see or experience. Observations may yield quantitative or qualitative data.  

How to Pick the Right Data Collection Method

By this point in your evaluation you should have: 

  • Determined the goals and scope of your evaluation
  • Written your evaluation question(s)

If not, you can circle back to those posts here and here, respectively. Now you’re almost ready to start collecting data—the fun part! First you need to decide which data collection method to use. Take a look at the pros and cons of each data collection method in the chart below. Use this to help you narrow down which methods might fit your evaluation.

To further narrow down your data collection method search, ask yourself the questions below. Do your answers rule out any of the methods? Reference the pros/cons chart for help. 

  • What is most essential to you? Consider whether it is important for you to answer questions of how and why (more likely qualitative data) or what, how often, and to what extent (easier with quantitative data).
  • What will you be asking? Complex topics may lend themselves better to methods that allow for follow-up questions. Taboo topics may require additional anonymity. Think about what methods will make your participants feel most comfortable and safe responding to you.
  • What are your constraints? Be realistic about the amount of time and resources you have. Choose a method that meets those constraints.

Conclusion

If none of these methods seem to fit your needs, don’t be afraid to branch out and find a collection method that is best for you, or take a mixed-methods approach and use multiple techniques! For some other interesting ideas, here are some additional articles on a collaborative photography method, oral histories, and other creative evaluation methods.

In our next post we’ll start our deep dive into the most popular data collection method—surveys. Stay tuned!

The Dynamic Data Duo: Quantitative and qualitative data, part 2

In our last post we introduced you to the dynamic data duo—quantitative (number) and qualitative (story) data. Like any good superhero squad, each has its own strengths and weaknesses. Quantitative data can usually be collected and analyzed quickly, but can’t really yield nuanced answers. Qualitative data is great at that! However, it often takes a lot of time and resources to collect. Just like Batman and Robin, who balance out each other’s weaknesses when they’re together but can also have successful solo careers, the two types of data can work together or alone. This post will walk you through a simple process to determine which data hero is right for the job!

Step 1: What is your evaluation question?

Let’s say we’re doing an evaluation where we want to find out if attending storytime helps caregivers use new literacy skills at home. If we go up to every caregiver and simply ask them, we’ll get a lot of yes/no answers, but not a whole lot of details. For example, imagine if we asked you right now: “Is this blog series helping you use new evaluation skills at work?” You might respond: “Uh…I don’t know. Maybe?” It’s a hard question to answer accurately. Often the evaluation question is too complex to directly ask participants.

Step 2: Break your evaluation question down into simple questions. 

Imagine calling up the Justice League and asking, “Hey, can you save the world?” They might answer yes, but would we know whether they have the right skills, or whether they have other plans today? Similarly, our evaluation questions are often broad and abstract. We can’t always ask them outright and get a useful answer. So let’s look at some ways we can break our evaluation question down into simpler questions.

As a reminder, our evaluation question is “does attending storytime help caregivers use new literacy skills at home?” Go word by word and see if you can come up with additional questions that would break the concepts down further. For instance, “does attending…” What are we assuming/what don’t we know? 

  • Did the caregiver attend a storytime session? 
  • Why or why not?
  • How many times did a caregiver attend a storytime session?
  • Which storytime sessions did the caregiver attend? 

Continue on with the rest of the evaluation question, keeping in mind you might not come up with simpler questions for every word or phrase. 

“Caregivers”

  • Who are the caregivers? 
  • Were they already using the literacy skills taught during storytime at home prior to attending a storytime? 

“New literacy skills” 

  • Are caregivers learning new literacy skills during storytime? (If caregivers aren’t learning new literacy skills at storytime, they can’t then use those skills at home!)
  • Why or why not? 
  • What new skills are they learning? 
  • How many new skills are they learning?

“At home”

  • Do caregivers use new literacy skills from storytime at home? 
  • Why or why not?
  • How often do they use new literacy skills from storytime at home? 

Step 3: Determine if each sub-question can be answered with numbers or a story

Go back through your list of sub-questions and try to answer each one with a number. Can you do it? If so, the question would give you quantitative data. If not, it might be a qualitative question. 

Let’s look at the question, “What new literacy skills are caregivers learning during storytime?” We need words to answer this question, not numbers—right? Not necessarily. We could create a list of 10 literacy skills that we taught during storytime and ask caregivers to check which ones they learned. By creating these parameters, we’re limiting the response options to a finite quantity (10 possible choices) and can count how many people choose each skill. This process transforms what would be an open-ended question yielding qualitative data into a question yielding quantitative data. 
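Counting those checkbox selections is all it takes to quantify the answers. A small sketch, with a hypothetical skill list and made-up responses:

```python
from collections import Counter

# Hypothetical checklist data: each respondent checks which of the skills
# taught during storytime they learned.
checked = [
    ["rhyming", "letter sounds"],
    ["rhyming"],
    ["dialogic reading", "rhyming"],
]

# Counting selections turns "what did you learn?" into quantitative data.
skill_counts = Counter(skill for response in checked for skill in response)

for skill, count in skill_counts.most_common():
    print(f"{count} of {len(checked)} caregivers learned {skill}")
```

The finite list of answer options is what makes the counting possible; an open text box would require qualitative coding instead.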

You can generally apply this process to questions that either have a finite number of options or where a Likert scale is appropriate. However, there are numerous (no pun intended) cases where you’ll want more nuanced, qualitative answers. For instance, try answering the question, “Why did you attend storytime today?” with a number! We could still create a list of possible answers, but it’s likely that someone would look at those choices and feel like none of them really fit. If we want to better understand our caregivers’ reasoning, then we don’t want to limit their responses. We want a story—we want qualitative data.

Step 4: Batman or Robin? Or both?

Now that you’ve classified your questions as quantitative or qualitative, do you have the means (capacity, resources, etc.) to collect data on all of them? Remember the pros and cons of each data type and review which questions are most important to you. Are a majority of them qualitative or quantitative? Knowing which type of data you need to collect will help you decide which data collection method to use. Our next several blog posts will address the different data collection methods you can use and their pros and cons, so keep reading!