Are you ready to learn about surveys? ○ Yes ○ No

1. What is a survey?

If you’ve ever responded to the U.S. Census, then you’ve taken a survey, which is simply a questionnaire that asks respondents to answer a set of questions. Surveys are a common way of collecting data because they efficiently reach a large number of people, are anonymous, and tend to be less expensive and time-intensive than other data collection methods. The purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print. 

2. Should I use a survey to collect data? 

In our last post we talked about how to decide which data collection method fits your evaluation. The first step is figuring out your evaluation question and determining if a survey can answer it. Surveys might be the right option if you want to collect information from a large number of people about their needs, opinions, or behaviors. For instance, they can help you determine what patrons learned from a program, the different ways people use resources at your library, or even what services non-users might be interested in, among other things. 

Surveys might not be the right method if: 

  You’re primarily trying to answer questions of why or how, as these work best as open-ended questions and are better suited for interviews or focus groups. Surveys can contain open-ended questions, but they are typically supplemental to the closed questions that make up the majority of the survey.

  Participant self-reported behavior is likely to be inaccurate. For instance, surveying children on how engaging a program was might not be the best approach.

In addition to these criteria, you should also consider the time and costs associated with a survey and whether they line up with the resources you have available. A more thorough breakdown of the costs associated with a survey can be found here.

3. How many of these question types have you used? (Mark all that apply)

Although survey questions can be written in a multitude of ways, ultimately every question is either closed, open-ended, or a combination of both. Open-ended questions ask the survey respondent to provide an answer in their own words, like in the example below.

Why did you decide to read this blog post? 

Open-ended questions allow the evaluator to collect robust data by not limiting the respondent to a list of possible answers. For instance, maybe you’re reading this blog right now because your cat walked across your keyboard and accidentally clicked on the link. The survey is unlikely to include that answer option on a closed question, but an open-ended question can capture that sort of qualitative data. 

Although there are many pros to using open-ended questions, there are also some downsides. The qualitative data they produce takes time and a skilled evaluator to analyze. That’s why closed questions are more commonly used on surveys.

Unlike open-ended questions, closed questions provide a set of answer choices and produce quantitative data. Let’s explore some different types of closed questions.

Multiple choice questions allow respondents to select one or more options from a set of answers that you define. A common drawback of multiple choice questions is that they limit answers to a predetermined list like the one below, which may not reflect everyone’s responses. Often this problem is solved by adding an “other” option where respondents can write in their answer if it isn’t on the list. 

How do you feel today?



  Other, please specify: ___________

Adding an “other” option makes part of this question open-ended. When you analyze the data for this question, pay close attention to the percentage of respondents who chose “other.” If it’s a large portion (usually more than 10 percent), you will need to do some qualitative analysis of these answers.
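As a minimal sketch of that check (the responses below are invented for illustration), you could tally the “other” share like this:

```python
# Hypothetical answers to a "How do you feel today?" multiple choice
# question; the response list is made up for this example.
responses = ["Happy", "Other", "Happy", "Tired", "Other", "Happy", "Tired",
             "Other", "Happy", "Happy"]

# Count how many respondents chose "Other" and convert to a percentage.
other_count = sum(1 for r in responses if r == "Other")
other_pct = 100 * other_count / len(responses)
print(f"'Other' share: {other_pct:.0f}%")

# Flag the question for qualitative analysis if more than 10% chose "Other".
needs_review = other_pct > 10
```

Here 3 of 10 respondents chose “other,” well above the 10 percent rule of thumb, so those write-in answers would need qualitative analysis.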

Likert scale questions give respondents a range of options (usually five or seven choices). They’re often used to gauge someone’s feelings or opinions and can be written as statements instead of questions (see below). Writing a Likert scale can be tricky because you need to make sure your response options are balanced. We’ll talk about that more in depth in our next post. Here’s an example of a Likert scale question.

I am learning something from this post on surveys.

  Strongly agree

  Agree

  Neither agree nor disagree

  Disagree

  Strongly disagree 

Demographic questions ask respondents about descriptive characteristics, such as age, gender, race, and income level. Demographic questions allow you to gain deeper insight into your data. For instance, I could use a question that asks a respondent’s age to analyze whether younger respondents were more likely to say they “disagree” or “strongly disagree” on the question above. 
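As an illustration of that kind of breakdown (the age groups and answers below are invented), you could tally Likert responses within each demographic group:

```python
from collections import Counter

# Hypothetical (age_group, likert_response) pairs for the statement
# "I am learning something from this post on surveys."
records = [
    ("under 30", "Strongly agree"), ("under 30", "Disagree"),
    ("under 30", "Disagree"), ("30 and over", "Agree"),
    ("30 and over", "Strongly agree"), ("30 and over", "Agree"),
]

# Count each (group, answer) combination.
by_group = Counter(records)
for (group, answer), n in sorted(by_group.items()):
    print(f"{group:12s} {answer:25s} {n}")
```

In this made-up sample, “disagree” answers cluster in the under-30 group, which is exactly the kind of pattern a demographic question lets you spot.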

These are the most common question types you’ll find on a survey, but for a deeper dive on different question formats, such as matrix, dropdown, and ranking, check out this article from SurveyMonkey. 

4. Stay tuned for surveys pt. 2?       Yes       Definitely     I wouldn’t miss it for the world

We’ve all probably taken a survey, but there’s a lot that goes into making them balanced, understandable, and unbiased. In our next post we’ll cover why the question above should never be on a survey and other common mistakes people make when writing survey questions. 

Does the (Data Collection Method) Shoe Fit?

You wouldn’t go hiking in a pair of dress shoes, right? Like the variety of shoes in your closet, there are a variety of data collection methods in all different shapes and sizes. The trick is finding which data collection method fits! Today’s post will help you determine which method is best for your evaluation.

What are Data Collection Methods?

Data collection is the process of gathering information from different sources with the goal of answering a specific question (your evaluation question). The method, or procedure, that you use to collect your data is your data collection method. Four common ones are: surveys, interviews, focus groups, and observations.

  • Survey: questionnaires that ask respondents to answer a set of questions. While these questions can be closed or open-ended, the purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print. 
  • Interview: a conversation between two people—an interviewer and an interviewee—during which the interviewer asks primarily open-ended questions. Interviews may occur face-to-face, on the phone, or online. Interviews provide qualitative data.
  • Focus group: a dialogue between a group of specifically selected participants who discuss a particular topic. A moderator leads the focus group. Focus groups provide qualitative data.
  • Observation: a person (the researcher or evaluator) observes events, behaviors, and other characteristics associated with a particular topic in a natural setting. The observer records what they see or experience. Observations may yield quantitative or qualitative data.  
How to Pick the Right Data Collection Method

By this point in your evaluation you should have: 

  Determined the goals and scope of your evaluation

  Written your evaluation question(s)

If not, you can circle back to those posts here and here, respectively. Now you’re almost ready to start collecting data—the fun part! First you need to decide which data collection method to use. Take a look at the pros and cons of each data collection method in the chart below. Use this to help you narrow down which methods might fit your evaluation.

To further narrow down your data collection method search, ask yourself the questions below. Do your answers rule out any of the methods? Reference the pros/cons chart for help. 

  What is most essential to you? Consider whether it is important for you to answer questions of how and why (more likely qualitative data) or what, how often, and to what extent (easier with quantitative data). 

  What will you be asking? Complex topics may lend themselves better to methods that allow for follow-up questions. Taboo topics may require additional anonymity. Think about what methods will make your participants feel most comfortable and safe responding to you.

  What are your constraints? Be realistic about the amount of time and resources you have. Choose a method that meets those constraints.


If none of these methods seem to fit your needs, don’t be afraid to branch out and find a collection method that is best for you, or take a mixed-methods approach and use multiple techniques! For some other interesting ideas, here are some additional articles on a collaborative photography method, oral histories, and other creative evaluation methods.

In our next post we’ll start our deep dive into the most popular data collection method—surveys. Stay tuned!

The Dynamic Data Duo: Quantitative and qualitative data, part 2

In our last post we introduced you to the dynamic data duo—quantitative (number) and qualitative (story) data. Like any good superhero squad, each has its own strengths and weaknesses. Quantitative data can usually be collected and analyzed quickly, but can’t really yield nuanced answers. Qualitative data is great at that! However, it often takes a lot of time and resources to collect qualitative data. Just like Batman and Robin, who balance out each other’s strengths and weaknesses when they’re together, the two data types work best as a team, though each can also have a successful solo career. This post will walk you through a simple process to determine which data hero is right for the job!

Step 1: What is your evaluation question?

Let’s say we’re doing an evaluation where we want to find out if attending storytime helps caregivers use new literacy skills at home. If we go up to every caregiver and simply ask them, we’ll get a lot of yes/no answers, but not a whole lot of details. For example, imagine if we asked you right now: “Is this blog series helping you use new evaluation skills at work?” You might respond: “Uh…I don’t know. Maybe?” It’s a hard question to answer accurately. Often the evaluation question is too complex to directly ask participants.

Step 2: Break your evaluation question down into simple questions. 

Imagine calling up the Justice League and asking, “Hey, can you save the world?” They might answer yes, but will we know if they have the right skills or perhaps have other plans today? Similarly, our evaluation questions are often broad and abstract. We can’t always ask them outright and get a useful answer. So let’s look at some ways we can break our evaluation question down into simpler questions. 

As a reminder, our evaluation question is “does attending storytime help caregivers use new literacy skills at home?” Go word by word and see if you can come up with additional questions that would break the concepts down further. For instance, “does attending…” What are we assuming/what don’t we know? 

  • Did the caregiver attend a storytime session? 
  • Why or why not?
  • How many times did a caregiver attend a storytime session?
  • Which storytime sessions did the caregiver attend? 

Continue on with the rest of the evaluation question, keeping in mind you might not come up with simpler questions for every word or phrase. 

“Caregivers”

  • Who are the caregivers? 
  • Were they already using the literacy skills taught during storytime at home prior to attending a storytime? 

“New literacy skills” 

  • Are caregivers learning new literacy skills during storytime? (If caregivers aren’t learning new literacy skills at storytime, they can’t then use those skills at home!)
  • Why or why not? 
  • What new skills are they learning? 
  • How many new skills are they learning?

“At home”

  • Do caregivers use new literacy skills from storytime at home? 
  • Why or why not?
  • How often do they use new literacy skills from storytime at home? 

Step 3: Determine if each sub-question can be answered with numbers or a story

Go back through your list of sub-questions and try to answer each one with a number. Can you do it? If so, the question would give you quantitative data. If not, it might be a qualitative question. 

Let’s look at the question, “What new literacy skills are caregivers learning during storytime?” We need words to answer this question, not numbers—right? Not necessarily. We could create a list of 10 literacy skills that we taught during storytime and ask caregivers to check which ones they learned. By creating these parameters, we’re limiting the response options to a finite quantity (10 possible choices) and can count how many people choose each skill. This process transforms what would be an open-ended question yielding qualitative data into a question yielding quantitative data. 
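A quick sketch of counting those “mark all that apply” responses (the skill names and selections are invented for illustration):

```python
from collections import Counter

# Hypothetical checkbox data: each inner list is one caregiver's
# selections from a predefined list of literacy skills.
selections = [
    ["dialogic reading", "letter sounds"],
    ["letter sounds"],
    ["dialogic reading", "rhyming games", "letter sounds"],
]

# Flatten the selections and count how often each skill was checked.
counts = Counter(skill for answer in selections for skill in answer)
n_respondents = len(selections)

# Report the percentage of respondents who checked each skill.
for skill, n in counts.most_common():
    print(f"{skill}: {n}/{n_respondents} ({100 * n / n_respondents:.0f}%)")
```

The checkbox format turns a would-be open-ended answer into countable, quantitative data, which is the transformation described above.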

You can generally apply this process to questions that either have a finite number of options or where a Likert scale is appropriate. However, there are numerous (no pun intended) cases where you’ll want more nuanced, qualitative answers. For instance, try answering the question, “Why did you attend storytime today?” with a number! We could still create a list of possible answers, but it’s likely that someone would look at those choices and feel like none of them really fit. If we want to better understand our caregivers’ reasoning, then we don’t want to limit their responses. We want a story—we want qualitative data.

Step 4: Batman or Robin? Or both?

Now that you’ve classified your questions as quantitative or qualitative, do you have the means (capacity, resources, etc.) to collect data on all of them? Remember the pros and cons of each data type and review which questions are most important to you. Are a majority of them qualitative or quantitative? Knowing which type of data you need to collect will help you decide which data collection method to use. Our next several blog posts will address the different data collection methods you can use and their pros and cons, so keep reading!

Ready to meet your (data) match? Introducing number data and story data

Shows two different shapes to illustrate the two different categories of data

Hey, there! Welcome to 2021! We’re glad to see you here. It’s a new year and we’re ready to dive into research methods. Not what you expected to rejuvenate you in 2021? Well, hold on—research methods are actually pretty rad. First, though, what are they? 

Research Methods

Research methods are the different ways we can do the research or evaluation. If you’ve already tried out our tips on doing desk research, you may have found that the data you need is just not out there. You’re going to have to collect some data yourself! 

What kind of data should you collect? Two very broad categories of data are quantitative and qualitative data. Quantitative data are numbers data and qualitative data are story data. Wait—isn’t all data numbers? Nope! Story data are real! 

Quantitative Data: how much or what extent

What kind of information can quantitative data provide? Think about questions that you could answer with a number. Here are some examples from libraries:

  • How many books were checked out this month?
  • How often did families attend more than one storytime in a month?
  • What times for storytime have the highest attendance?
  • What percentage of our patrons rely on mobile services for library access?

You can see from the examples that quantitative data can answer questions about how much, how often, what, and to what extent. Quantitative data can often be collected by consulting data you already track within your library or by distributing a survey. This data can generally be collected and analyzed relatively quickly. The downside to quantitative data is that it can’t tell you how or why something is a particular way. If you collect data on how often families attended more than one storytime in a month, you still don’t know why some families came more often. That’s where qualitative data comes in. 

Qualitative Data: why or how

What kind of information can qualitative data provide? Think about questions that are difficult to answer with a number. The questions below cover the same topics as the quantitative questions above, but approached in a qualitative way:

  • Why are some patrons super-users? 
  • Why do some families attend storytime once and never return?
  • What reasons other than convenience determine whether families attend storytime? 
  • How do patrons who use the mobile services feel about the library in general?

You can collect some qualitative data on surveys by asking open-ended questions. You also can collect qualitative data from observations, interviews, and focus groups. While it yields detailed information, qualitative data collection and analysis can be complex and time-consuming. These data don’t always yield information that is actionable right away. Going back to our storytime example, if we ask why some families attend storytime once and never return, we may get a lot of different answers and need to spend time looking for common themes. 

How to choose?

Now that you know what both types of data could look like, how do you decide what data is the best to collect for a project? Did you notice how those quantitative and qualitative questions matched up on similar topics? That was on purpose! Different types of data can give you insight into different aspects of your evaluation question. 

To get the most meaningful results, it’s a great idea to collect both quantitative and qualitative data for your project. They can work together to provide a more complete picture of the topic. An easy way to incorporate both is to create a survey that includes mostly quantitative questions, but also a few key qualitative questions.

Now, is it always realistic that your organization has time and capacity to collect both types of data? Not really, right? That’s ok. The most important thing is to match the kind of data you collect with your evaluation question. 


Now you have a basic idea of how quantitative and qualitative data are different and how they can be used to find out different kinds of information. In our next post, we’ll show you a simple process for breaking down your evaluation question into smaller questions and determining if you need to use quantitative or qualitative methods. 


RIPL Data Boot Camp Webinar Series

Is one of your new year’s resolutions to get your library’s data in shape? Then, spend the winter with the Research Institute for Public Libraries (RIPL) and participate in our Data Boot Camp Series! This free webinar series features curriculum from the RIPL 2020 national event. These will NOT be webinars where you listen to a talking head the whole time; instead, please come ready to participate in a variety of interactive learning activities, some of which will occur in small groups in breakout rooms.

Here is the schedule – learn more about each webinar and register:

January 27 (1:00-2:30 ET/10:00-11:30 PT): Observations: Data Hiding in Plain Sight

February 2 (1:00-2:30 ET/10:00-11:30 PT): Can You Hear Me Now? Communicating Data to Stakeholders

February 23 (1:00-2:30 ET/10:00-11:30 PT): Nothing for Us, Without Us: Getting Started with Culturally Responsive Evaluation

March 2 (2:00-3:30 ET/11:00-12:30 PT): Meaningful Metrics for Your Organization

March 16 (2:00-3:30 ET/11:00-12:30 PT): Evaluation + Culture = Change

March 24 (1:00-2:30 ET/10:00-11:30 PT): Inclusive Data and Community Engagement: New Roles for Libraries to Shape Knowledge Creation and Use

All webinars will be recorded.


Happy Holidays!

Snow globe with mountain scene inside

We have loved having you all with us on our data journey! We are putting our blog series “Between a Graph and a Hard Place” on hold in December.

We’ll be back in January with more exciting information about doing your own evaluation, including specific ways of collecting data like surveys, focus groups, and observations.

In the meantime, we wish you all happy and safe holidays! Special thanks to Mary Bills for the beautiful artwork.

How to conduct a secondary research evaluation in four steps


In our last post, we assured you that it was possible to complete an evaluation without ever leaving your desk! So as promised, here’s how to conduct a secondary research evaluation in four simple steps.

Remember, in the scenario in our last post, you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to improve your summer learning program. In our last post, we talked about the different ways you can conduct secondary research. Now we’re going to apply the multi-data set approach. Here’s how you can do that in four simple steps.

  1. Identify your evaluation question

We’ve already determined that our evaluation question is: do summer learning programs engage youth with content that is developmentally enriching? If you need help determining your own evaluation question, you can revisit our post on the topic.  

  2. Identify a secondary data set (or sets)

Review the existing literature on your topic of interest. In our last post, we identified different external and internal data sources that you can investigate. You may find other libraries, organizations, or agencies that have explored your topic and collected data. Reach out and ask for permission to use their data if necessary. For this example, let’s say we found this publication of key research findings on public libraries’ role in youth development. To get a well-rounded understanding of your topic and enough data to analyze, you’ll probably need to find multiple data sets. For the purpose of this post, we’ll just look at one.

  3. Evaluate secondary data set

Congrats, you’ve chosen a data set! Sometimes that can be the hardest part. Now we need to evaluate whether we chose the right one. To do so, we’ll try to answer the questions below. If you need additional help understanding how to answer these questions, read this first.

  • What was the aim of the original study?
  • Who collected the data?
  • Which measures were employed?
  • When was the data collected?
  • What methodology was used to collect the data?

Based on what we found, the data set we selected comes from a reliable source and is relatively recent. Some of the libraries in the study also serve a population that is close in size to our own. However, the aim of the original study is a little different than ours (the role of libraries as a whole on youth development). Therefore, we might want to find an additional data set specifically on summer learning to help us answer our evaluation question. If one of the public libraries who participated in the study has a similar population or demographics as our library, we could also reach out to them directly and ask to see their data.

  4. Analyze secondary data set

Pick the variables from your data set that are most relevant to your evaluation question. You may also need to recode variables. For instance, maybe the data set includes a variable for school district, but that’s not important to you. You’re more interested in seeing if there’s a correlation between poverty and youth development. Therefore, you can recode the school district variable by the percentage of people who live below the poverty line in each district (using another data set in tandem!). Here’s a short video on how to recode variables in Excel. Once you’ve got all your ducks in a row, you’re ready to employ all your statistics mastery (mean, median, mode, correlation, etc.) to draw conclusions from your data. 
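As a rough sketch of that recoding step (the post’s video covers Excel; the districts, poverty rates, and scores below are all invented), the same idea looks like this in Python:

```python
from statistics import mean, median

# Invented lookup table: percent of residents below the poverty line
# in each school district (the "other data set in tandem").
poverty_rate = {"District A": 8.5, "District B": 17.2, "District C": 23.9}

# Invented records of (district, youth_development_score) from the
# secondary data set.
records = [("District A", 78), ("District B", 64),
           ("District C", 59), ("District B", 70)]

# Recode: swap the district label for its poverty rate so we can
# examine poverty alongside youth development.
recoded = [(poverty_rate[district], score) for district, score in records]

# Basic summary statistics on the scores.
scores = [score for _, score in recoded]
print("mean score:", mean(scores), "median score:", median(scores))
```

The recode is just a lookup: each categorical label is replaced by a numeric value drawn from a second data set, after which ordinary summary statistics apply.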


There you have it! An evaluation without ever leaving your desk. As always, if you have any questions or comments, please feel free to reach out to us. In our next post, we’ll cover another evaluation methodology, so stay tuned.

Conduct an Evaluation Without Ever Leaving Your Desk

Are you ready to get your hands dirty and start evaluating? After covering outcomes, the logic model, evaluation questions, and research ethics, our next step is to start collecting data. I know many of you might be thinking, “But we’re still in a pandemic. How could we possibly do an evaluation now?” Well that’s one of the many advantages of secondary research.

What is secondary research and why should I do it? 

Secondary research involves data that has been previously collected by someone else. As opposed to primary research, where you collect the data yourself, secondary research uses “available data” and various online and offline resources. Also called desk research because you can do it without ever leaving your desk, it’s a particularly useful evaluation method when you have a limited ability to collect your own data. In many ways, it is similar to a literature review—it gives you an idea of what information is already out there. However, secondary research focuses more specifically on analyzing existing data within the confines of your evaluation question. 

What are different ways I can use secondary research? 

Secondary research can be useful whether you have limited resources and time or have no limits whatsoever. Your evaluation might only consist of secondary research, or it could simply be the first step. No matter what your goal is, secondary research can be helpful. 

Let’s say you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to alter your summer learning program. 

One approach you could take is to conduct a classic literature review and in the process, look for studies on topics that align with your evaluation question. If possible, also look for data that is similar in some aspect (demographics, size, location, etc.) to data you would collect yourself. For instance, you might find a study on how public libraries facilitate youth development. Within the study, you see data was collected from another rural library. Perfect! 

Depending on your evaluation question, you may even find multiple data sets that are useful and relevant. For example, let’s say we find data on summer learning from three different libraries. Each recorded what their main activity was and participation numbers. Great! We can compare these data sets and extrapolate some conclusions. Just remember, when using multiple data sets, it’s helpful to have a variable they all share. In our example, even if one library recorded participation rates in weekly numbers and another in monthly, we can recode the data so that the variables match.
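A minimal sketch of that harmonizing step, with invented participation numbers:

```python
# One library reported participation weekly; the other two reported
# monthly totals (all numbers invented for illustration). Recode the
# weekly counts into a monthly total so all three data sets share a
# comparable variable.
weekly = [12, 15, 9, 14]          # four weeks of one month
monthly_equivalent = sum(weekly)  # weekly counts rolled up to a month

other_libraries_monthly = [42, 61]
all_monthly = other_libraries_monthly + [monthly_equivalent]
print("monthly participation across libraries:", all_monthly)
```

Once every library’s figures are on the same monthly scale, the three data sets can be compared directly.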

Even if you also plan to collect primary data, secondary research is a good place to start. It can provide critical context for your evaluation, support your findings, or help identify something you should do differently. In the end, it could save you time and resources by spending a little extra time at your desk!  

What are the different kinds of secondary data I can collect?

Internal sources

You don’t have to go far to find data. Your library has probably been collecting some sort of data ever since it opened! These are called internal sources—data from inside your organization. Here are a few common examples:

  • Usage data (visits, circulation, reference transactions, wifi, etc.)
  • User data (ex: number of registered borrowers)
  • Financial data 
  • Staff data
  • Program data (attendance, number of programs, etc.)

External sources

Maybe your library doesn’t have the data you’re looking for, like the demographics of children in your service area. Perhaps you are more curious about what other libraries have found successful or challenging in their summer learning programs. Or maybe you want to look at peer-reviewed research about summer learning loss (summer slide). These are all examples of external sources—sources from outside your organization. Here are a few common examples:

  • Government sources
  • State and national institutions
  • Trade, business, and professional associations
  • Scientific or academic journals
  • Commercial research organizations 


Now you have the what of secondary research. Next time we’ll cover how to do secondary research in four simple steps, so stay tuned. As always, if you have any questions or comments, please feel free to reach out to us.

Research Ethics: It’s all fun and games until someone gets hurt

We’ve all heard the old adage “it’s all fun and games until someone gets hurt.” Although most people direct this phrase at children, it can just as well be applied to conducting research. It’s all ethical—until the risks outweigh the potential benefits. It’s all fair—until your participant compensation becomes coercion. It might seem like common sense delineates these areas clearly, but sometimes our good intentions can blur the line between ethical and unethical. That’s why it’s necessary to thoroughly think through these considerations prior to conducting research or an evaluation. 

Do the potential benefits outweigh the potential risks to participants? 

You may not be conducting medical research where the risks can be physical, but that simply means potential risks might be harder to identify. Your responsibility as the evaluator is to 1) eliminate unnecessary risk, and 2) minimize necessary risk. So how do you identify it?

Federal regulations define risk as, “The probability of harm or injury (physical, psychological, social, or economic) occurring as a result of participation in a research study. Both the probability and magnitude of possible harm may vary from minimal to significant.” Risk could include threat of deportation if ICE enters your library, stigmatization if someone is outed for being LGBTQ+, embarrassment if someone is illiterate, or financial loss if someone misses work. It’s impossible to eliminate all risk, but our job as evaluators is to ensure that “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” This is called minimal risk.

Let’s say you’re evaluating the effectiveness of a job skills course that your library has been conducting virtually during the pandemic. You feel more comfortable conducting interviews in person vs. online or by phone, and your library is allowing limited capacity indoors. Wearing a mask and staying six feet apart does minimize risk for participants, but is this a necessary risk? Would eliminating this risk negatively impact your evaluation? These are questions you should continually ask yourself when designing your evaluation. 

In an effort to eliminate unnecessary risk, you decide instead to conduct interviews virtually. Some of your evaluation participants are undocumented immigrants who are very afraid about their personal information being leaked. You’ve done what you can to ensure that their privacy will be maintained (read more on that here), but you know there is always a chance that information gets out, particularly when using the internet and different video call platforms. This is a situation where you need to assess whether the benefit of these individuals participating outweighs the potential risk. Their participation might mean that you identify critical gaps where your course did not address this community’s needs. With their data, the next offering of the course could better serve them and help other undocumented individuals, which is a huge benefit. You can minimize the risk of their participation by conducting the interview over the phone and assigning them an alias in any recordings or notes. Now the potential benefits outweigh the risks, and these participants may feel more comfortable agreeing to participate.  

Is it coercion or compensation?

Under no circumstances should you coerce individuals into participating in an evaluation. It should always be voluntary and individuals should have the choice to stop participating at any time. However, it is appropriate to compensate individuals for their time and effort. It is also appropriate to reimburse participants for any out-of-pocket expenses associated with their participation in the evaluation (such as transportation or parking). 

While reimbursement is pretty straightforward, compensation can be a bit hazy. The important things to remember are that 1) in no case should compensation be viewed as a way of offsetting risk, and 2) the level of compensation should not be so high as to cause a prospective participant to accept risks that would not be otherwise accepted in the absence of compensation. These same principles also apply to parents whose children are prospective participants. 

If your library doesn’t have the means to compensate or reimburse participants, that doesn’t mean you can’t do an evaluation. Whether you are offering compensation or not, this should be discussed in the informed consent process. If you do not have money to compensate individuals, you may choose to explain why and be sure to express your appreciation for their time and effort in other ways. 

We’ve now covered some of the most common issues in research ethics: privacy, informed consent, working with vulnerable populations, risks, and compensation. However, if you have any questions that weren’t answered in these posts, please reach out at

Not creeping continued: may we have this data?

[Image: An older man with a clipboard discusses information with a younger man]

Welcome back! Last time we talked about how to protect the privacy of evaluation participants. Today we’re going to continue our discussion of research ethics with informed consent and how to work with vulnerable populations.

Informed Consent

In order to be a researcher and not a “creeper,” you need to: 1) ask for participants’ permission, 2) be clear with them about what is going to happen, 3) explain the purpose of your study, and 4) give them the option to stop participating at any time. Let’s take a look at one of those examples from the Urban Dictionary definition of creeper again: “stares at you while you sleep.” What if you voluntarily signed up to go into a sleep lab and be monitored, including video recording, while you slept so researchers could learn more about helping people with insomnia? Someone is still staring at you while you sleep—but you gave them permission, you knew what was going to happen, you understood the purpose, and you can stop at any time.

Informed consent often involves a written form, which explains all the relevant information about the study and gives participants a choice—without any negative consequences—to participate or not participate. This information should be provided in the preferred language for the participant and explained verbally if needed. The participant should have a chance to ask any questions they want before they sign the form. The informed consent process should cover the purpose of the study, what data will be collected and how they will be stored, used, and shared, the participant’s rights (which include being able to stop participating at any time), and who to contact for questions. 

In a library context, this means thinking about how you will be collecting data and building informed consent into the process. For example, if you were evaluating summer learning programming, you may decide to collect feedback by interviewing caregivers of participants at the beginning and the end of the summer to know more about their expectations and their experience. In that case, you should include the informed consent process when they register for summer learning, and make sure that it’s extremely clear that if they opt out of the interviews they can still participate fully in summer learning activities. 

Children are another vulnerable group that could be part of a library evaluation. Children under eighteen need a parent or guardian to give consent on their behalf. Even so, it is a best practice to also ask children and teens for their assent. Assent means that you explain what will happen to the child and give them an opportunity to ask questions and agree or decline to participate. More information about this process with children can be found here.

It’s best to make the informed consent process clear and low pressure, so someone can opt in or out easily. This can be as simple as explaining at the beginning of a survey that you’ll use this information to improve the program, and asking the participant if it is ok with them to analyze their survey responses.  

Vulnerable Populations

Vulnerable groups, from a research ethics perspective, are any groups that might be at greater risk due to participating in research and therefore need special consideration. Some of the groups often considered vulnerable are: pregnant women, groups who experience discrimination, children, prisoners, and anyone with limited capacity to consent. 

It’s a great practice to reflect on who you will be collecting data from and if they may feel vulnerable or if the data collection would be risky for them in any way. If so, you need to take extra steps to ensure that your data collection process is respectful, low pressure, and comfortable for these individuals. 

Immigrant and refugee communities are one example of a vulnerable population that might be included in a library program evaluation. To ensure that the data collection process is respectful, low pressure, and comfortable for this population, you might spend extra time going over the informed consent process with them to make sure that they understand whether their data can be identified, who will have access to their data, and how their data will be used. You should consider higher levels of privacy protection for this group as well. When working with any vulnerable group, it is helpful to consult with representatives of the group to get their input on how to work respectfully with them. And it is a best practice to compensate individuals who provide cultural advising for their contributions to an evaluation project.

More next time

A clear and low pressure informed consent process and being thoughtful about working with vulnerable populations are two ways that researchers make sure their work is ethical and respectful to participants. Next time, we will wrap up our discussion of research ethics considerations by discussing access to benefit, incentives, and coercion.