You wouldn’t go hiking in a pair of dress shoes, right? Like the variety of shoes in your closet, there are a variety of data collection methods in all different shapes and sizes. The trick is finding which data collection method fits! Today’s post will help you determine which method is best for your evaluation.
What are Data Collection Methods?
Data collection is the process of gathering information from different sources with the goal of answering a specific question (your evaluation question). The method, or procedure, that you use to collect your data is your data collection method. Four common ones are: surveys, interviews, focus groups, and observations.
Survey: questionnaires that ask respondents to answer a set of questions. While these questions can be closed or open-ended, the purpose of surveys is to collect primarily quantitative data. Surveys can be administered online, by phone, by text, or in print.
Interview: a conversation between two people—an interviewer and an interviewee—during which the interviewer asks primarily open-ended questions. Interviews may occur face-to-face, on the phone, or online. Interviews provide qualitative data.
Focus group: a dialogue among a group of specifically selected participants who discuss a particular topic. A moderator leads the focus group. Focus groups provide qualitative data.
Observation: a person (the researcher or evaluator) observes events, behaviors, and other characteristics associated with a particular topic in a natural setting. The observer records what they see or experience. Observations may yield quantitative or qualitative data.
How to Pick the Right Data Collection Method
By this point in your evaluation you should have:
Determined the goals and scope of your evaluation
Written your evaluation question(s)
If not, you can circle back to those posts here and here, respectively. Now you’re almost ready to start collecting data—the fun part! First you need to decide which data collection method to use. Take a look at the pros and cons of each data collection method in the chart below. Use this to help you narrow down which methods might fit your evaluation.
To further narrow down your data collection method search, ask yourself the questions below. Do your answers rule out any of the methods? Reference the pros/cons chart for help.
What is most essential to you? Consider whether it is important for you to answer questions of how and why (more likely qualitative data) or what, how often, and to what extent (easier with quantitative data).
What will you be asking? Complex topics may lend themselves better to methods that allow for follow-up questions. Taboo topics may require additional anonymity. Think about what methods will make your participants feel most comfortable and safe responding to you.
What are your constraints? Be realistic about the amount of time and resources you have. Choose a method that meets those constraints.
If none of these methods seem to fit your needs, don’t be afraid to branch out and find a collection method that is best for you, or take a mixed-methods approach and use multiple techniques! For some other interesting ideas, here are some additional articles on a collaborative photography method, oral histories, and other creative evaluation methods.
In our next post we’ll start our deep dive into the most popular data collection method—surveys. Stay tuned!
In our last post we introduced you to the dynamic data duo—quantitative (number) and qualitative (story) data. Like any good superhero squad, each has its own strengths and weaknesses. Quantitative data can usually be collected and analyzed quickly, but can’t really yield nuanced answers. Qualitative data is great at that! However, it often takes a lot of time and resources to collect qualitative data. Just like Batman and Robin, the two balance out each other’s strengths and weaknesses when they’re together, but each can also have a successful solo career. This post will walk you through a simple process to determine which data hero is right for the job!
Step 1: What is your evaluation question?
Let’s say we’re doing an evaluation where we want to find out if attending storytime helps caregivers use new literacy skills at home. If we go up to every caregiver and simply ask them, we’ll get a lot of yes/no answers, but not a whole lot of details. For example, imagine if we asked you right now: “Is this blog series helping you use new evaluation skills at work?” You might respond: “Uh…I don’t know. Maybe?” It’s a hard question to answer accurately. Often the evaluation question is too complex to directly ask participants.
Step 2: Break your evaluation question down into simple questions.
Imagine calling up the Justice League and asking, “Hey, can you save the world?” They might answer yes, but will we know whether they have the right skills, or whether they have other plans today? Similarly, our evaluation questions are often broad and abstract. We can’t always ask them outright and get a useful answer. So let’s look at some ways we can break our evaluation question down into simpler questions.
As a reminder, our evaluation question is “does attending storytime help caregivers use new literacy skills at home?” Go word by word and see if you can come up with additional questions that would break the concepts down further. For instance, “does attending…” What are we assuming/what don’t we know?
Did the caregiver attend a storytime session?
Why or why not?
How many times did a caregiver attend a storytime session?
Which storytime sessions did the caregiver attend?
Continue on with the rest of the evaluation question, keeping in mind you might not come up with simpler questions for every word or phrase.
“Caregivers”
Who are the caregivers?
Were they already using the literacy skills taught during storytime at home prior to attending a storytime?
“New literacy skills”
Are caregivers learning new literacy skills during storytime? (If caregivers aren’t learning new literacy skills at storytime, they can’t then use those skills at home!)
Why or why not?
What new skills are they learning?
How many new skills are they learning?
“Use at home”
Do caregivers use new literacy skills from storytime at home?
Why or why not?
How often do they use new literacy skills from storytime at home?
Step 3: Determine if each sub-question can be answered with numbers or a story
Go back through your list of sub-questions and try to answer each one with a number. Can you do it? If so, the question would give you quantitative data. If not, it might be a qualitative question.
Let’s look at the question, “What new literacy skills are caregivers learning during storytime?” We need words to answer this question, not numbers—right? Not necessarily. We could create a list of 10 literacy skills that we taught during storytime and ask caregivers to check which ones they learned. By creating these parameters, we’re limiting the response options to a finite quantity (10 possible choices) and can count how many people choose each skill. This process transforms what would be an open-ended question yielding qualitative data into a question yielding quantitative data.
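The counting step described above can be sketched in a few lines of Python. The skill names and responses below are entirely made up for illustration; the point is that once answers come from a fixed checklist, tallying them is straightforward.

```python
from collections import Counter

# Hypothetical checklist responses: each caregiver checked the literacy
# skills they learned at storytime (skill names are illustrative).
responses = [
    ["dialogic reading", "letter sounds"],
    ["dialogic reading"],
    ["rhyming games", "letter sounds", "dialogic reading"],
]

# Count how many caregivers selected each skill. The open-ended
# "what skills did you learn?" question now yields countable,
# quantitative data.
skill_counts = Counter(skill for checked in responses for skill in checked)

for skill, count in skill_counts.most_common():
    print(f"{skill}: {count}")
```

With three hypothetical respondents, the tally shows "dialogic reading" chosen three times, "letter sounds" twice, and "rhyming games" once.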
You can generally apply this process to questions that either have a finite number of options or where a Likert scale is appropriate. However, there are numerous (no pun intended) cases where you’ll want more nuanced, qualitative answers. For instance, try answering the question, “Why did you attend storytime today?” with a number! We could still create a list of possible answers, but it’s likely that someone would look at those choices and feel like none of them really fit. If we want to better understand our caregivers’ reasoning, then we don’t want to limit their responses. We want a story—we want qualitative data.
Step 4: Batman or Robin? Or both?
Now that you’ve classified your questions as quantitative or qualitative, do you have the means (capacity, resources, etc.) to collect data on all of them? Remember the pros and cons of each data type and review which questions are most important to you. Are a majority of them qualitative or quantitative? Knowing which type of data you need to collect will help you decide which data collection method to use. Our next several blog posts will address the different data collection methods you can use and their pros and cons, so keep reading!
Hey, there! Welcome to 2021! We’re glad to see you here. It’s a new year and we’re ready to dive into research methods. Not what you expected to rejuvenate you in 2021? Well, hold on—research methods are actually pretty rad. First, though, what are they?
Research methods are the different ways we can do the research or evaluation. If you’ve already tried out our tips on doing desk research, you may have found that the data you need is just not out there. You’re going to have to collect some data yourself!
What kind of data should you collect? Two very broad categories of data are quantitative and qualitative data. Quantitative data are numbers data and qualitative data are story data. Wait—isn’t all data numbers? Nope! Story data are real!
Quantitative Data: how much or what extent
What kind of information can quantitative data provide? Think about questions that you could answer with a number. Here are some examples from libraries:
How many books were checked out this month?
How often did families attend more than one storytime in a month?
What times for storytime have the highest attendance?
What percentage of our patrons rely on mobile services for library access?
You can see from the examples that quantitative data can answer questions about how much, how often, what, and to what extent. Quantitative data can often be collected by consulting data you already track within your library or by distributing a survey. This data can generally be collected and analyzed relatively quickly. The downside to quantitative data is that it can’t tell you how or why something is a particular way. If you collect data on how often families attended more than one storytime in a month, you still don’t know why some families came more often. That’s where qualitative data comes in.
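As a quick illustration of answering a quantitative question from data you already track, here is a minimal Python sketch. The sign-in records are invented; a real library would pull these from its program-attendance logs.

```python
from collections import Counter

# Hypothetical storytime sign-in records for one month: each entry is
# the family who attended a session.
attendance = ["Family A", "Family B", "Family A", "Family C",
              "Family A", "Family B"]

# Count visits per family.
visits = Counter(attendance)

# Quantitative answers: how many families attended, and how many of
# them came more than once.
total_families = len(visits)
repeat_families = sum(1 for count in visits.values() if count > 1)

print(f"{repeat_families} of {total_families} families attended more than once")
```

Note that this tells you *how many* families returned, but, as the post says, nothing about *why* — that would take qualitative data.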
Qualitative Data: why or how
What kind of information can qualitative data provide? Think about questions that are difficult to answer with a number. The questions below cover the same topics as the quantitative questions above, but approach them in a qualitative way:
Why are some patrons super-users?
Why do some families attend storytime once and never return?
What reasons other than convenience determine whether families attend storytime?
How do patrons who use the mobile services feel about the library in general?
You can collect some qualitative data on surveys by asking open-ended questions. You also can collect qualitative data from observations, interviews, and focus groups. While it yields detailed information, qualitative data collection and analysis can be complex and time-consuming. These data don’t always yield information that is actionable right away. Going back to our storytime example, if we ask why some families attend storytime once and never return, we may get a lot of different answers and need to spend time looking for common themes.
How to choose?
Now that you know what both types of data could look like, how do you decide what data is the best to collect for a project? Did you notice how those quantitative and qualitative questions matched up on similar topics? That was on purpose! Different types of data can give you insight into different aspects of your evaluation question.
To get the most meaningful results, it’s a great idea to collect both quantitative and qualitative data for your project. They can work together to provide a more complete picture of the topic. An easy way to incorporate both is to create a survey that includes mostly quantitative questions, but also a few key qualitative questions.
Now, is it always realistic that your organization has time and capacity to collect both types of data? Not really, right? That’s ok. The most important thing is to match the kind of data you collect with your evaluation question.
Now you have a basic idea of how quantitative and qualitative data are different and how they can be used to find out different kinds of information. In our next post, we’ll show you a simple process for breaking down your evaluation question into smaller questions and determining if you need to use quantitative or qualitative methods.
Is one of your new year’s resolutions to get your library’s data in shape? Then, spend the winter with the Research Institute for Public Libraries (RIPL) and participate in our Data Boot Camp Series! This free webinar series features curriculum from the RIPL 2020 national event. These will NOT be webinars where you listen to a talking head the whole time; instead, please come ready to participate in a variety of interactive learning activities, some of which will occur in small groups in breakout rooms.
In our last post, we assured you that it was possible to complete an evaluation without ever leaving your desk! So as promised, here’s how to conduct a secondary research evaluation in four simple steps.
Remember, in the scenario in our last post, you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to improve your summer learning program. In our last post, we talked about the different ways you can conduct secondary research. Now we’re going to apply the multi-data set approach. Here’s how you can do that in four simple steps.
Identify your evaluation question
We’ve already determined that our evaluation question is: do summer learning programs engage youth with content that is developmentally enriching? If you need help determining your own evaluation question, you can revisit our post on the topic.
Identify a secondary data set (or sets)
Review the existing literature on your topic of interest. In our last post, we identified different external and internal data sources that you can investigate. You may find other libraries, organizations, or agencies that have explored your topic and collected data. Reach out and ask for permission to use their data if necessary. For this example, let’s say we found this publication of key research findings on public libraries’ role in youth development. To get a well-rounded understanding of your topic and enough data to analyze, you’ll probably need to find multiple data sets. For the purpose of this post, we’ll just look at one.
Evaluate secondary data set
Congrats, you’ve chosen a data set! Sometimes that can be the hardest part. Now we need to evaluate whether we chose the right one. To do so, we’ll try to answer the questions below. If you need additional help understanding how to answer these questions, read this first.
What was the aim of the original study?
Who collected the data?
Which measures were employed?
When was the data collected?
What methodology was used to collect the data?
Based on what we found, the data set we selected comes from a reliable source and is relatively recent. Some of the libraries in the study also serve a population that is close in size to our own. However, the aim of the original study is a little different from ours (the role of libraries as a whole in youth development). Therefore, we might want to find an additional data set specifically on summer learning to help us answer our evaluation question. If one of the public libraries that participated in the study serves a population or demographics similar to our library’s, we could also reach out to them directly and ask to see their data.
Analyze secondary data set
Pick the variables from your data set that are most relevant to your evaluation question. You may also need to recode variables. For instance, maybe the data set includes a variable for school district, but that’s not important to you. You’re more interested in seeing if there’s a correlation between poverty and youth development. Therefore, you can recode the school district variable by percentage of people who live below the poverty line in each district (using another data set in tandem!). Here’s a short video on how to recode variables in Excel. Once you’ve got all your ducks in a row, you’re ready to employ all your statistics mastery (mean, median, mode, correlation, etc.) to draw conclusions from your data.
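The same recode-then-analyze steps can be sketched in Python (the Excel video covers the spreadsheet route). Every district name, score, and poverty rate below is made up for illustration, and the Pearson correlation is computed by hand so the sketch works on any recent Python version.

```python
import statistics

# Hypothetical rows from a secondary data set: a district name and a
# youth-development score per district (values are illustrative).
rows = [
    {"district": "North", "youth_dev_score": 72},
    {"district": "South", "youth_dev_score": 65},
    {"district": "East", "youth_dev_score": 80},
    {"district": "West", "youth_dev_score": 58},
]

# Recode: swap the district name for its poverty rate, pulled from a
# second (also hypothetical) data set used in tandem.
poverty_rate = {"North": 0.12, "South": 0.21, "East": 0.08, "West": 0.30}
for row in rows:
    row["poverty_rate"] = poverty_rate[row["district"]]

scores = [row["youth_dev_score"] for row in rows]
rates = [row["poverty_rate"] for row in rows]

mean_score = statistics.mean(scores)      # central tendency
median_score = statistics.median(scores)

# Pearson correlation by hand: sample covariance divided by the product
# of the sample standard deviations.
mean_rate = statistics.mean(rates)
cov = sum((r - mean_rate) * (s - mean_score)
          for r, s in zip(rates, scores)) / (len(rows) - 1)
r = cov / (statistics.stdev(rates) * statistics.stdev(scores))

print(mean_score, median_score, round(r, 2))
```

In this toy data, higher poverty rates pair with lower scores, so the correlation comes out negative — exactly the kind of pattern the recoded variable lets you look for.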
There you have it! An evaluation without ever leaving your desk. As always, if you have any questions or comments, please feel free to reach out to us at LRS@LRS.org. In our next post, we’ll cover another evaluation methodology, so stay tuned.
Are you ready to get your hands dirty and start evaluating? After covering outcomes, the logic model, evaluation questions, and research ethics, our next step is to start collecting data. I know many of you might be thinking, “But we’re still in a pandemic. How could we possibly do an evaluation now?” Well that’s one of the many advantages of secondary research.
What is secondary research and why should I do it?
Secondary research involves data that has been previously collected by someone else. As opposed to primary research, where you collect the data yourself, secondary research uses “available data” and various online and offline resources. Also called desk research because you can do it without ever leaving your desk, it’s a particularly useful evaluation method when you have a limited ability to collect your own data. In many ways, it is similar to a literature review—it gives you an idea of what information is already out there. However, secondary research focuses more specifically on analyzing existing data within the confines of your evaluation question.
What are different ways I can use secondary research?
Secondary research can be useful whether you have limited resources and time or no limits whatsoever. Your evaluation might consist only of secondary research, or secondary research could simply be the first step. No matter what your goal is, secondary research can be helpful.
Let’s say you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to alter your summer learning.
One approach you could take is to conduct a classic literature review and in the process, look for studies on topics that align with your evaluation question. If possible, also look for data that is similar in some aspect (demographics, size, location, etc.) to data you would collect yourself. For instance, you might find a study on how public libraries facilitate youth development. Within the study, you see data was collected from another rural library. Perfect!
Depending on your evaluation question, you may even find multiple data sets that are useful and relevant. For example, let’s say we find data on summer learning from three different libraries. Each recorded what their main activity was and participation numbers. Great! We can compare these data sets and extrapolate some conclusions. Just remember, when using multiple data sets, it’s helpful to have a variable they all share. In our example, even if one library recorded participation rates in weekly numbers and another in monthly, we can recode the data so that the variables match.
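The recoding step mentioned above can be sketched as a small Python snippet. The libraries and counts are hypothetical; the idea is simply that weekly counts can be rolled up into monthly totals so all three data sets share a comparable variable.

```python
# Hypothetical participation counts from three libraries. Libraries A
# and B reported monthly totals; Library C reported weekly counts.
monthly_reports = {
    "Library A": [120, 135, 110],
    "Library B": [90, 95, 88],
}
weekly_reports = {
    "Library C": [30, 28, 25, 27,   # month 1, weeks 1-4
                  32, 30, 29, 31,   # month 2
                  26, 24, 28, 25],  # month 3
}

def weekly_to_monthly(weeks, weeks_per_month=4):
    # Recode weekly counts into monthly totals (a simplifying
    # assumption of four-week "months").
    return [sum(weeks[i:i + weeks_per_month])
            for i in range(0, len(weeks), weeks_per_month)]

# Harmonize: after recoding, all three libraries report monthly totals.
harmonized = dict(monthly_reports)
for library, weeks in weekly_reports.items():
    harmonized[library] = weekly_to_monthly(weeks)

print(harmonized)
```

With everything on a monthly scale, the three libraries' participation numbers can be compared side by side.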
Even if you also plan to collect primary data, secondary research is a good place to start. It can provide critical context for your evaluation, support your findings, or help identify something you should do differently. In the end, spending a little extra time at your desk could save you time and resources!
What are the different kinds of secondary data I can collect?
You don’t have to go far to find data. Your library has probably been collecting some sort of data ever since it opened! These are called internal sources—data from inside your organization. Here are a few common examples:
Usage data (visits, circulation, reference transactions, wifi, etc.)
User data (ex: number of registered borrowers)
Program data (attendance, number of programs, etc.)
Maybe your library doesn’t have the data you’re looking for, like the demographics of children in your service area. Perhaps you are more curious about what other libraries have found successful or challenging in their summer learning programs. Or maybe you want to look at peer-reviewed research about summer learning loss (summer slide). These are all examples of external sources—sources from outside your organization. Here are a few common examples:
State and national institutions
Trade, business, and professional associations
Scientific or academic journals
Commercial research organizations
Now you have the what of secondary research. Next time we’ll cover how to do secondary research in four simple steps, so stay tuned. As always, if you have any questions or comments, please feel free to reach out to us at LRS@LRS.org.
We’ve all heard the old adage “it’s all fun and games until someone gets hurt.” Although most people direct this phrase at children, it can just as well be applied to conducting research. It’s all ethical—until the risks outweigh the potential benefits. It’s all fair—until your participant compensation becomes coercion. It might seem like common sense delineates these areas clearly, but sometimes our good intentions can blur the line between ethical and unethical. That’s why it’s necessary to thoroughly think through these considerations before conducting research or an evaluation.
Do the potential benefits outweigh the potential risks to participants?
You may not be conducting medical research where the risks can be physical, but that simply means potential risks might be harder to identify. Your responsibility as the evaluator is to 1) eliminate unnecessary risk, and 2) minimize necessary risk. So how do you identify it?
Federal regulations define risk as, “The probability of harm or injury (physical, psychological, social, or economic) occurring as a result of participation in a research study. Both the probability and magnitude of possible harm may vary from minimal to significant.” Risk could include threat of deportation if ICE enters your library, stigmatization if someone is outed for being LGBTQ+, embarrassment if someone is illiterate, or financial loss if someone misses work. It’s impossible to eliminate all risk, but our job as evaluators is to ensure that “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” This is called minimal risk.
Let’s say you’re evaluating the effectiveness of a job skills course that your library has been conducting virtually during the pandemic. You feel more comfortable conducting interviews in-person vs. online or by phone, and your library is allowing limited capacity indoors. Wearing a mask and staying six feet apart does minimize risk for participants, but is this a necessary risk? Would eliminating this risk negatively impact your evaluation? These are questions you should continually ask yourself when designing your evaluation.
In an effort to eliminate unnecessary risk, you decide instead to conduct interviews virtually. Some of your evaluation participants are undocumented immigrants who are very afraid about their personal information being leaked. You’ve done what you can to ensure that their privacy will be maintained (read more on that here), but you know there is always a chance that information gets out, particularly when using the internet and different video call platforms. This is a situation where you need to assess whether the benefit of these individuals participating outweighs the potential risk. Their participation might mean that you identify critical gaps where your course did not address this community’s needs. With their data, the next offering of the course could better serve them and help other undocumented individuals, which is a huge benefit. You can minimize the risk of their participation by conducting the interview over the phone and assigning them an alias in any recordings or notes. Now their access to benefit outweighs potential risks and these participants may feel more comfortable agreeing to participate.
Is it coercion or compensation?
Under no circumstances should you coerce individuals into participating in an evaluation. It should always be voluntary and individuals should have the choice to stop participating at any time. However, it is appropriate to compensate individuals for their time and effort. It is also appropriate to reimburse participants for any out-of-pocket expenses associated with their participation in the evaluation (such as transportation or parking).
While reimbursement is pretty straightforward, compensation can be a bit hazy. The important things to remember are that 1) in no case should compensation be viewed as a way of offsetting risk, and 2) the level of compensation should not be so high as to cause a prospective participant to accept risks that would not be otherwise accepted in the absence of compensation. These same principles also apply to parents whose children are prospective participants.
If your library doesn’t have the means to compensate or reimburse participants, that doesn’t mean you can’t do an evaluation. Whether you are offering compensation or not, this should be discussed in the informed consent process. If you do not have money to compensate individuals, you may choose to explain why and be sure to express your appreciation for their time and effort in other ways.
Conclusion
We’ve now covered some of the most common issues in research ethics: privacy, informed consent, working with vulnerable populations, risks, and compensation. However, if you have any questions that weren’t answered in these posts, please reach out at LRS@LRS.org.
Welcome back! Last time we talked about how to protect the privacy of evaluation participants. Today we’re going to continue our discussion of research ethics with informed consent and how to work with vulnerable populations.
In order to be a researcher and not a “creeper,” you need to: 1) ask for participants’ permission, 2) be clear with them about what is going to happen, 3) explain the purpose of your study, and 4) give them the option to stop participating at any time. Let’s take a look at one of those examples from the Urban Dictionary definition of creeper again: “stares at you while you sleep.” What if you voluntarily signed up to go into a sleep lab and be monitored, including being videotaped, while you slept so researchers could learn more about helping people with insomnia? Someone is still staring at you while you sleep—but you gave them permission, you knew what was going to happen, you understood the purpose, and you can stop at any time.
Informed consent often involves a written form, which explains all the relevant information about the study and gives participants a choice—without any negative consequences—to participate or not participate. This information should be provided in the preferred language for the participant and explained verbally if needed. The participant should have a chance to ask any questions they want before they sign the form. The informed consent process should cover the purpose of the study, what data will be collected and how they will be stored, used, and shared, the participant’s rights (which include being able to stop participating at any time), and who to contact for questions.
In a library context, this means thinking about how you will be collecting data and building informed consent into the process. For example, if you were evaluating summer learning programming, you may decide to collect feedback by interviewing caregivers of participants at the beginning and the end of the summer to know more about their expectations and their experience. In that case, you should include the informed consent process when they register for summer learning, and make sure that it’s extremely clear that if they opt out of the interviews they can still participate fully in summer learning activities.
Children are another vulnerable group that could be part of a library evaluation. For children under eighteen, a parent or guardian needs to give consent on their behalf. Even so, it is a best practice to ask children and teens themselves for assent. Assent means that you explain what will happen to the child and give them an opportunity to ask questions and agree or decline to participate. More information about this process with children can be found here.
It’s best to make the informed consent process clear and low pressure, so someone can opt in or out easily. This can be as simple as explaining at the beginning of a survey that you’ll use this information to improve the program, and asking the participant if it is ok with them to analyze their survey responses.
Vulnerable groups, from a research ethics perspective, are any groups that might be at greater risk due to participating in research and therefore need special consideration. Some of the groups often considered vulnerable are: pregnant women, groups who experience discrimination, children, prisoners, and anyone with limited capacity to consent.
It’s a great practice to reflect on who you will be collecting data from and if they may feel vulnerable or if the data collection would be risky for them in any way. If so, you need to take extra steps to ensure that your data collection process is respectful, low pressure, and comfortable for these individuals.
Immigrant and refugee communities are one example of a vulnerable population that might be included in a library program evaluations. To ensure that the data collection process is respectful, low pressure, and comfortable for this population, you might spend extra time going over the informed consent process with them to make sure that they understand whether their data can be identified, who will have access to their data, and how their data will be used. You should consider higher levels of privacy protection for this group as well. When working with any vulnerable group, it is helpful to consult with representatives of the group to get their input on how to work respectfully with them. And, it is a best practice to compensate individuals who provide cultural advising for their contributions to an evaluation project.
More next time
A clear and low pressure informed consent process and being thoughtful about working with vulnerable populations are two ways that researchers make sure their work is ethical and respectful to participants. Next time, we will wrap up our discussion of research ethics considerations by discussing access to benefit, incentives, and coercion.
In late May 2020, the Colorado State Library surveyed Colorado public library directors about their responses to the pandemic. We received responses from 76 library jurisdictions (67% of Colorado’s 113 public libraries), as well as two of eight member libraries (25%).*
Here is what we learned about public library services in Colorado during the statewide Stay at Home order (March 26-April 26) and first 35 days of the Safer at Home order (April 27-June 1).
Most public libraries closed their buildings to the public for at least 30 days, and many for much longer.
During the initial Stay at Home order, 71 of the 78 libraries in the study closed their buildings to the public.
The remaining seven allowed limited building access during this time. These libraries tended to be small (serving populations of 5,000 or fewer), to have a single outlet, and to be library districts.
Fifteen of the 71 libraries that closed during the Stay at Home order reported opening during the first 35 days of the Safer at Home order.
Alternative & Essential Services
Brad Glover, Adult Services Librarian, provides curbside delivery of collection items early in the pandemic. Photo courtesy of the Ruby M. Sisson Library, Pagosa Springs.
While many buildings were closed during the Stay at Home order and the first 35 days of the Safer at Home order, Colorado public libraries were very much open! Staff responded quickly to the needs of their communities during this time by providing a variety of physical and virtual services. Some of these services were available pre-pandemic, whereas others, such as curbside service and virtual programs, were new for many libraries. One library director responded: “We developed two new services that may be here to stay: Senior Services (dedicated email/chat) for reader’s advisory, questions, etc.; and a Home Delivery service; and Live Chat on the website. The Curbside morphed into “Grab Bags” and a lot of Reader’s Advisory. We have found that many of our new services are very personalized and interactive. Our previous service model was much more passive. The new services are staff intensive and require a lot of work!”
During both the Stay at Home and first 35 days of the Safer at Home order, libraries that served smaller populations (10,000 or fewer) and were district or county libraries were more likely to offer physical services such as curbside pickup, computer access, and/or home delivery.
During the first 35 days of the Safer at Home order:
9 in 10 libraries offered virtual services such as online programs and reference via phone, email, chat, and/or social media.
Nearly 9 in 10 libraries also offered curbside pickup, about 3 in 10 offered home delivery, and about 1 in 10 offered mail delivery. Libraries offering home and/or mail delivery tended to be smaller.
Twenty libraries offered limited or full access to the building, and 18 offered public computer access. These libraries tended to be library districts and serve populations of 10,000 or fewer.
Public libraries cited community need and being a provider of essential services as reasons for providing some services, even when closed due to the Stay at Home order. One library director from a combined school/public library serving a rural area shared their approach: “A closed sign was posted on the Library door with my cell phone number and patrons were encouraged to call if they need books. They would let me know what they wanted & I would meet them outside of the library with their requested materials. Because we are a school library we were available to parents and students when they came to the school to return school [assignments] and pick up new packets. We have one patron who does not own a computer, we did allow that patron to enter the library for computer access. The patron used a computer that was isolated. When finished the computer was sanitized.”
To learn more about how many Colorado public libraries offered various services during the Stay at Home order and the first 35 days of the Safer at Home order as well as how the offerings differed by library size and legal basis (e.g., county, district, etc.), please view this infographic.
Protections for Staff & Patrons
Longmont Public Library staff demonstrate the use of social distancing markers for library patrons waiting to enter the library. Photo courtesy of the Longmont Public Library.
Public libraries prioritized safety precautions to protect staff and patrons as they considered various approaches to reopening buildings and restoring library services. Nearly all libraries indicated plans for additional cleaning, limiting the number of people in the library, and following social distancing recommendations. In addition to these commonly accepted safety practices during the early stages of the pandemic, libraries were also looking at other protective measures, including reducing library seating, installing plexiglass sneeze guards, and providing additional personal protective equipment for staff. A majority of libraries also indicated allowing a telework option for library staff.
One library director expressed how the library planned to provide for public safety: “We have a 17-point plan that we need to comply the best we can. One way-in and another exit is recommended. We are able to do this for most of our buildings. We are quarantining materials and sanitizing them.”
The Colorado State Library recently published guidance for public libraries with recommendations for service modifications and safety measures. These recommendations are organized to align with the prescribed levels of precaution based on COVID-19 incidence in the community. This guidance has been drafted in coordination with other statewide agencies, and with input from public library leadership.
Library staff work the Buena Vista Public Library’s table at back to school night during the COVID-19 pandemic. Photo courtesy of the Buena Vista Public Library.
The results of this initial study provided insight into the decision making of Colorado public libraries during a crisis, while also raising new questions. At the time the survey was conducted, it was widely assumed that life would return to normal within a few months. At the time of writing this blog post, over three months have passed, and Colorado is still under the same Safer at Home order. Over this extended timeline, libraries have gradually modified services, building access, and safety precautions as more information about the virus has become available.
We are interested in learning more about how libraries have continued to modify their operations and services and are considering the following topics for a second survey, to be conducted in late Fall 2020:
How have libraries adjusted the handling of library materials, particularly when returned by patrons, to reduce the potential spread of the virus? How have national research efforts (REALM study) had an impact on decision making about the handling of library materials?
How did staffing changes impact the restoration of services and reopening of library buildings? Did libraries experience reductions in staffing due to furlough, layoff, resignation, retirement, or temporary leave?
What new virtual services were developed in response to the pandemic? How have those services been received by the public? Will they continue to be offered into the future?
What essential services have been provided by Colorado libraries prior to and during the pandemic? How has the demand for these services changed?
What other topics would you like to see covered in a second survey for Colorado public libraries? Please send your ideas to email@example.com.
*Member libraries are part of a public library jurisdiction but make some decisions autonomously. To view response rates by LSA population category and legal basis (e.g., county, district, etc.), please see this resource.
Audry Haws, Youth Services Assistant, assembles summer reading kits during the summer of 2020, which served as a modification to in-person programming during the COVID-19 pandemic. Photo courtesy of Delta County Libraries.