How to conduct a secondary research evaluation in four steps

 

In our last post, we assured you that it was possible to complete an evaluation without ever leaving your desk! So as promised, here’s how to conduct a secondary research evaluation in four simple steps.

Remember, in the scenario in our last post, you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to improve your summer learning program. In our last post, we talked about the different ways you can conduct secondary research. Now we’re going to apply the multi-data set approach. Here’s how you can do that in four simple steps.

  1. Identify your evaluation question

We’ve already determined that our evaluation question is: do summer learning programs engage youth with content that is developmentally enriching? If you need help determining your own evaluation question, you can revisit our post on the topic.  

  2. Identify a secondary data set (or sets)

Review the existing literature on your topic of interest. In our last post, we identified different external and internal data sources that you can investigate. You may find other libraries, organizations, or agencies that have explored your topic and collected data. Reach out and ask for permission to use their data if necessary. For this example, let’s say we found this publication of key research findings on public libraries’ role in youth development. To get a well-rounded understanding of your topic and enough data to analyze, you’ll probably need to find multiple data sets. For the purpose of this post, we’ll just look at one.

  3. Evaluate secondary data set

Congrats, you’ve chosen a data set! Sometimes that can be the hardest part. Now we need to evaluate whether we chose the right one. To do so, we’ll try to answer the questions below. If you need additional help understanding how to answer these questions, read this first.

  • What was the aim of the original study?
  • Who collected the data?
  • Which measures were employed?
  • When was the data collected?
  • What methodology was used to collect the data?

Based on what we found, the data set we selected comes from a reliable source and is relatively recent. Some of the libraries in the study also serve a population that is close in size to our own. However, the aim of the original study is a little different from ours (the role of libraries as a whole in youth development). Therefore, we might want to find an additional data set specifically on summer learning to help us answer our evaluation question. If one of the public libraries that participated in the study has a population or demographics similar to our library’s, we could also reach out to them directly and ask to see their data.

  4. Analyze secondary data set

Pick the variables from your data set that are most relevant to your evaluation question. You may also need to recode variables. For instance, maybe the data set includes a variable for school district, but that’s not important to you. You’re more interested in seeing if there’s a correlation between poverty and youth development. Therefore, you can recode the school district variable by the percentage of people who live below the poverty line in each district (using another data set in tandem!). Here’s a short video on how to recode variables in Excel. Once you’ve got all your ducks in a row, you’re ready to employ all your statistics mastery (mean, median, mode, correlation, etc.) to draw conclusions from your data.
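
If you work in Python rather than Excel, here’s a minimal sketch of the same recoding-and-correlation idea using pandas. The file name, column names, and poverty figures are hypothetical stand-ins for your own data sets:

```python
# Recode a school district variable into a poverty-rate variable,
# then check its correlation with a youth development measure.
# All names and numbers below are hypothetical.
import pandas as pd

df = pd.read_csv("youth_development.csv")  # your secondary data set

# Percentage of residents below the poverty line in each district,
# drawn from a second data set used in tandem (e.g., census data).
poverty_rates = {
    "District A": 18.2,
    "District B": 7.5,
    "District C": 24.9,
}

# Recode: replace each district label with its poverty rate.
df["poverty_rate"] = df["school_district"].map(poverty_rates)

# A simple correlation to help answer the evaluation question.
print(df["poverty_rate"].corr(df["development_score"]))
```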

Conclusion

There you have it! An evaluation without ever leaving your desk. As always, if you have any questions or comments, please feel free to reach out to us at LRS@LRS.org. In our next post, we’ll cover another evaluation methodology, so stay tuned.

Conduct an Evaluation Without Ever Leaving Your Desk

Are you ready to get your hands dirty and start evaluating? After covering outcomes, the logic model, evaluation questions, and research ethics, our next step is to start collecting data. I know many of you might be thinking, “But we’re still in a pandemic. How could we possibly do an evaluation now?” Well, that’s one of the many advantages of secondary research.

What is secondary research and why should I do it? 

Secondary research involves data that has been previously collected by someone else. As opposed to primary research, where you collect the data yourself, secondary research uses “available data” and various online and offline resources. Also called desk research because you can do it without ever leaving your desk, it’s a particularly useful evaluation method when you have a limited ability to collect your own data. In many ways, it is similar to a literature review—it gives you an idea of what information is already out there. However, secondary research focuses more specifically on analyzing existing data within the confines of your evaluation question. 

What are different ways I can use secondary research? 

Secondary research can be useful whether your time and resources are limited or you have no constraints at all. Your evaluation might consist only of secondary research, or it could simply be the first step. No matter your goal, secondary research can be helpful.

Let’s say you are a youth services librarian at a rural public library that serves a population of 4,000. You want to know if your summer learning program is effective at engaging youth with developmentally enriching content (our evaluation question). You don’t have the time or resources to go out and collect your own data, so you decide to conduct secondary research instead to help you make a decision about how to alter your summer learning program.

One approach you could take is to conduct a classic literature review and in the process, look for studies on topics that align with your evaluation question. If possible, also look for data that is similar in some aspect (demographics, size, location, etc.) to data you would collect yourself. For instance, you might find a study on how public libraries facilitate youth development. Within the study, you see data was collected from another rural library. Perfect! 

Depending on your evaluation question, you may even find multiple data sets that are useful and relevant. For example, let’s say we find data on summer learning from three different libraries. Each recorded what their main activity was and participation numbers. Great! We can compare these data sets and extrapolate some conclusions. Just remember, when using multiple data sets, it’s helpful to have a variable they all share. In our example, even if one library recorded participation rates in weekly numbers and another in monthly, we can recode the data so that the variables match.
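
As a concrete illustration of that last point, here is a minimal sketch in Python of recoding weekly counts into monthly totals so two data sets share a variable; the participation numbers are invented:

```python
# Library 1 recorded weekly participation; Library 2 recorded monthly.
# Recode Library 1's weeks into 4-week totals so the two data sets
# can be compared on the same scale. All numbers are invented.
library_1_weekly = [32, 28, 35, 30, 29, 31, 27, 33]  # 8 weeks
library_2_monthly = [118, 125]                       # 2 months

library_1_monthly = [
    sum(library_1_weekly[i:i + 4])
    for i in range(0, len(library_1_weekly), 4)
]
print(library_1_monthly)   # [125, 120] -- now comparable
```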

Even if you also plan to collect primary data, secondary research is a good place to start. It can provide critical context for your evaluation, support your findings, or help identify something you should do differently. In the end, spending a little extra time at your desk could save you time and resources!

What are the different kinds of secondary data I can collect?

Internal sources

You don’t have to go far to find data. Your library has probably been collecting some sort of data ever since it opened! These are called internal sources—data from inside your organization. Here are a few common examples:

  • Usage data (visits, circulation, reference transactions, wifi, etc.)
  • User data (ex: number of registered borrowers)
  • Financial data 
  • Staff data
  • Program data (attendance, number of programs, etc.)

External sources

Maybe your library doesn’t have the data you’re looking for, like the demographics of children in your service area. Perhaps you are more curious about what other libraries have found successful or challenging in their summer learning programs. Or maybe you want to look at peer-reviewed research about summer learning loss (summer slide). These are all examples of external sources—sources from outside your organization. Here are a few common examples:

  • Government sources
  • State and national institutions
  • Trade, business, and professional associations
  • Scientific or academic journals
  • Commercial research organizations 

Conclusion

Now you have the what of secondary research. Next time we’ll cover how to do secondary research in four simple steps, so stay tuned. As always, if you have any questions or comments, please feel free to reach out to us at LRS@LRS.org.

Research Ethics: It’s all fun and games until someone gets hurt

We’ve all heard the old adage “it’s all fun and games until someone gets hurt.” Although most people direct this phrase at children, it can just as well be applied to conducting research. It’s all ethical—until the risks outweigh the potential benefits. It’s all fair—until your participant compensation becomes coercion. It might seem like common sense delineates these areas clearly, but sometimes our good intentions can blur the line between ethical and unethical. That’s why it’s necessary to thoroughly think through these considerations prior to conducting research or an evaluation.

Do the potential benefits outweigh the potential risks to participants? 

You may not be conducting medical research where the risks can be physical, but that simply means potential risks might be harder to identify. Your responsibility as the evaluator is to 1) eliminate unnecessary risk, and 2) minimize necessary risk. So how do you identify risk in the first place?

Federal regulations define risk as, “The probability of harm or injury (physical, psychological, social, or economic) occurring as a result of participation in a research study. Both the probability and magnitude of possible harm may vary from minimal to significant.” Risk could include threat of deportation if ICE enters your library, stigmatization if someone is outed for being LGBTQ+, embarrassment if someone is illiterate, or financial loss if someone misses work. It’s impossible to eliminate all risk, but our job as evaluators is to ensure that “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” This is called minimal risk.

Let’s say you’re evaluating the effectiveness of a job skills course that your library has been conducting virtually during the pandemic. You feel more comfortable conducting interviews in person rather than online or by phone, and your library is allowing limited capacity indoors. Wearing a mask and staying six feet apart does minimize risk for participants, but is this a necessary risk? Would eliminating this risk negatively impact your evaluation? These are questions you should continually ask yourself when designing your evaluation.

In an effort to eliminate unnecessary risk, you decide instead to conduct interviews virtually. Some of your evaluation participants are undocumented immigrants who are very afraid of their personal information being leaked. You’ve done what you can to ensure that their privacy will be maintained (read more on that here), but you know there is always a chance that information gets out, particularly when using the internet and different video call platforms. This is a situation where you need to assess whether the benefit of these individuals participating outweighs the potential risk. Their participation might mean that you identify critical gaps where your course did not address this community’s needs. With their data, the next offering of the course could better serve them and help other undocumented individuals, which is a huge benefit. You can minimize the risk of their participation by conducting the interview over the phone and assigning them an alias in any recordings or notes. Now their access to benefit outweighs potential risks and these participants may feel more comfortable agreeing to participate.

Is it coercion or compensation?

Under no circumstances should you coerce individuals into participating in an evaluation. It should always be voluntary and individuals should have the choice to stop participating at any time. However, it is appropriate to compensate individuals for their time and effort. It is also appropriate to reimburse participants for any out-of-pocket expenses associated with their participation in the evaluation (such as transportation or parking). 

While reimbursement is pretty straightforward, compensation can be a bit hazy. The important things to remember are that 1) in no case should compensation be viewed as a way of offsetting risk, and 2) the level of compensation should not be so high as to cause a prospective participant to accept risks that would not be otherwise accepted in the absence of compensation. These same principles also apply to parents whose children are prospective participants. 

If your library doesn’t have the means to compensate or reimburse participants, that doesn’t mean you can’t do an evaluation. Whether you are offering compensation or not, this should be discussed in the informed consent process. If you do not have money to compensate individuals, you may choose to explain why and be sure to express your appreciation for their time and effort in other ways. 

Conclusion

We’ve now covered some of the most common issues in research ethics: privacy, informed consent, working with vulnerable populations, risks, and compensation. However, if you have any questions that weren’t answered in these posts, please reach out at LRS@LRS.org.

Not creeping continued: may we have this data?

Welcome back! Last time we talked about how to protect the privacy of evaluation participants. Today we’re going to continue our discussion of research ethics with informed consent and how to work with vulnerable populations.

Informed Consent

In order to be a researcher and not a “creeper,” you need to: 1) ask for participants’ permission, 2) be clear with them about what is going to happen, 3) explain the purpose of your study, and 4) give them the option to stop participating at any time. Let’s take a look at one of those examples from the Urban Dictionary definition of creeper again: “stares at you while you sleep.” What if you voluntarily signed up to go into a sleep lab and be monitored, including videotaping, while you slept so researchers could learn more about helping people with insomnia? Someone is still staring at you while you sleep—but you gave them permission, you knew what was going to happen, you understood the purpose, and you can stop at any time.

Informed consent often involves a written form, which explains all the relevant information about the study and gives participants a choice—without any negative consequences—to participate or not participate. This information should be provided in the preferred language for the participant and explained verbally if needed. The participant should have a chance to ask any questions they want before they sign the form. The informed consent process should cover the purpose of the study, what data will be collected and how they will be stored, used, and shared, the participant’s rights (which include being able to stop participating at any time), and who to contact for questions. 

In a library context, this means thinking about how you will be collecting data and building informed consent into the process. For example, if you were evaluating summer learning programming, you may decide to collect feedback by interviewing caregivers of participants at the beginning and the end of the summer to know more about their expectations and their experience. In that case, you should include the informed consent process when they register for summer learning, and make sure that it’s extremely clear that if they opt out of the interviews they can still participate fully in summer learning activities. 

Children are another vulnerable group that could be part of a library evaluation. For children under eighteen, their parent or guardian needs to give consent on their behalf. It is a best practice to still ask children and teens to give assent even when they are under eighteen. Assent means that you explain what will happen to the child and give them an opportunity to ask questions and agree or decline to participate. More information about this process with children can be found here.

It’s best to make the informed consent process clear and low pressure, so someone can opt in or out easily. This can be as simple as explaining at the beginning of a survey that you’ll use this information to improve the program, and asking the participant if it is ok with them to analyze their survey responses.  

Vulnerable Populations

Vulnerable groups, from a research ethics perspective, are any groups that might be at greater risk due to participating in research and therefore need special consideration. Some of the groups often considered vulnerable are: pregnant women, groups who experience discrimination, children, prisoners, and anyone with limited capacity to consent. 

It’s a great practice to reflect on who you will be collecting data from and if they may feel vulnerable or if the data collection would be risky for them in any way. If so, you need to take extra steps to ensure that your data collection process is respectful, low pressure, and comfortable for these individuals. 

Immigrant and refugee communities are one example of a vulnerable population that might be included in a library program evaluation. To ensure that the data collection process is respectful, low pressure, and comfortable for this population, you might spend extra time going over the informed consent process with them to make sure that they understand whether their data can be identified, who will have access to their data, and how their data will be used. You should consider higher levels of privacy protection for this group as well. When working with any vulnerable group, it is helpful to consult with representatives of the group to get their input on how to work respectfully with them. And, it is a best practice to compensate individuals who provide cultural advising for their contributions to an evaluation project.

More next time

A clear and low pressure informed consent process and being thoughtful about working with vulnerable populations are two ways that researchers make sure their work is ethical and respectful to participants. Next time, we will wrap up our discussion of research ethics considerations by discussing access to benefit, incentives, and coercion.

Colorado Public Libraries and COVID-19: Despite unprecedented circumstances, libraries quickly adapted services to safely meet community needs

This blog post was co-authored by Crystal Schimpf and Linda Hofschire, and is also published on the Colorado Virtual Library blog.

In late May 2020, the Colorado State Library surveyed Colorado public library directors about their responses to the pandemic. We received responses from 76 library jurisdictions (67% of Colorado’s 113 public libraries), as well as two of eight member libraries (25%).*

Here is what we learned about public library services in Colorado during the statewide Stay at Home order (March 26-April 26) and first 35 days of the Safer at Home order (April 27-June 1).

Building Closures

Most public libraries closed their buildings to the public for at least 30 days, and many for much longer.

  • During the initial Stay at Home order, 71 of the 78 libraries in the study closed their buildings to the public.
  • The remaining seven allowed limited building access during this time. These libraries tended to be small (serving populations of 5,000 or fewer) and have a single outlet, and they were more likely to be library districts.
  • Fifteen of the 71 libraries that closed during the Stay at Home order reported opening during the first 35 days of the Safer at Home order.

Alternative & Essential Services

Brad Glover, Adult Services Librarian, provides curbside delivery of collection items early in the pandemic. Photo courtesy of the Ruby M. Sisson Library, Pagosa Springs.

While many buildings were closed during the Stay at Home order and the first 35 days of the Safer at Home order, Colorado public libraries were very much open! Staff responded quickly to the needs of their communities during this time by providing a variety of physical and virtual services. Some of these services were available pre-pandemic, whereas others, such as curbside service and virtual programs, were new for many libraries. One library director responded: “We developed two new services that may be here to stay: Senior Services (dedicated email/chat) for reader’s advisory, questions, etc.; and a Home Delivery service; and Live Chat on the website.  The Curbside morphed into “Grab Bags” and a lot of Reader’s Advisory.  We have found that many of our new services are very personalized and interactive.  Our previous service model was much more passive. The new services are staff intensive and require a lot of work!”

  • During both the Stay at Home and first 35 days of the Safer at Home order, libraries that served smaller populations (10,000 or fewer) and were district or county libraries were more likely to offer physical services such as curbside pickup, computer access, and/or home delivery.
  • During the first 35 days of the Safer at Home order:
    • 9 in 10 libraries offered virtual services such as online programs and reference via phone, email, chat, and/or social media.
    • Nearly 9 in 10 libraries also offered curbside pickup, about 3 in 10 offered home delivery, and about 1 in 10 offered mail delivery. Libraries offering home and/or mail delivery tended to be smaller.
    • Twenty libraries offered limited or full access to the building, and 18 offered public computer access. These libraries tended to be library districts and serve populations of 10,000 or fewer.

Public libraries cited community need and being a provider of essential services as reasons for providing some services, even when closed due to the Stay at Home order. One library director from a combined school/public library serving a rural area shared their approach: “A closed sign was posted on the Library door with my cell phone number and patrons were encouraged to call if they need books. They would let me know what they wanted & I would meet them outside of the library with their requested materials. Because we are a school library we were available to parents and students when they came to the school to return school [assignments] and pick up new packets. We have one patron who does not own a computer, we did allow that patron to enter the library for computer access. The patron used a computer that was isolated. When finished the computer was sanitized.”

To learn more about how many Colorado public libraries offered various services during the Stay at Home order and the first 35 days of the Safer at Home order as well as how the offerings differed by library size and legal basis (e.g., county, district, etc.), please view this infographic.

Protections for Staff & Patrons

Longmont Public Library staff demonstrate the use of social distancing markers for library patrons waiting to enter the library. Photo courtesy of the Longmont Public Library.

Public libraries showed interest in taking safety precautions to protect staff and patrons, as they considered various approaches to reopening buildings and restoring library services. Nearly all libraries indicated plans for additional cleaning, limiting the number of people in the library, and following social distancing recommendations. In addition to these commonly accepted safety practices during the early stages of the pandemic, libraries were also looking at other ways to provide protection, including reducing library seating options, installing plexiglass sneeze guards, and providing additional personal protective equipment for staff. A majority of libraries also indicated that they were allowing library staff to telework.

One library director expressed how the library planned to provide for public safety: “We have a 17-point plan that we need to comply the best we can. One way-in and another exit is recommended. We are able to do this for most of our buildings. We are quarantining materials and sanitizing them.”

The Colorado State Library recently published guidance for public libraries with recommendations for service modifications and safety measures. These recommendations are organized to align with the prescribed levels of precaution based on COVID-19 incidence in the community. This guidance has been drafted in coordination with other statewide agencies, and with input from public library leadership.

Future Research

Library staff work the Buena Vista Public Library’s table at back to school night during the COVID-19 pandemic. Photo courtesy of the Buena Vista Public Library.

The results of this initial study provided insight into the decision making of Colorado public libraries during a crisis situation, while also raising new questions. At the time this survey was conducted, it was assumed that life would return to normalcy within another few months. At the time of writing this blog post, over three months have passed, and Colorado is still under the same Safer at Home order. As a result of this timeline, libraries have been gradually modifying services, building access, and safety precautions as more information about the virus becomes available.

We are interested in learning more about how libraries have continued to modify their operations and services and are considering the following topics for a second survey, to be conducted in late Fall 2020:

  • How have libraries adjusted the handling of library materials, particularly when returned by patrons, to reduce the potential spread of the virus? How have national research efforts (REALM study) had an impact on decision making about the handling of library materials?
  • How did staffing changes impact the restoration of services and reopening of library buildings? Did libraries experience reductions in staffing due to furlough, layoff, resignation, retirement, or temporary leave?
  • What new virtual services were developed in response to the pandemic? How have those services been received by the public? Will they continue to be offered into the future?
  • What essential services have been provided by Colorado libraries prior to and during the pandemic? How has the demand for these services changed?

What other topics would you like to see covered in a second survey for Colorado public libraries? Please send your ideas to lrs@lrs.org.

*Member libraries are part of a public library jurisdiction but make some decisions autonomously. To view response rates by LSA population category and legal basis (e.g., county, district, etc.), please see this resource.

Audry Haws, Youth Services Assistant, assembles summer reading kits during the summer of 2020, which served as a modification to in-person programming during the COVID-19 pandemic. Photo courtesy of Delta County Libraries.

Research Ethics: How to collect data without being a creeper

When you read the word “creeper,” you might think of something like this: “A person who does weird things, like stares at you while you sleep, or looks at you for hours through a window.” That definition of “creeper” was written by the user Danya at Urban Dictionary. 

Both of the examples mentioned in that definition are things that evaluators and researchers actually do. And they could be very creepy! Sadly, some unethical, unsavory, and racist things have been done in the name of research and data collection in the past. Not even the distant past. The Tuskegee Study is a particularly devastating example of unethical research. Research ethics are guidelines and regulations in place to keep something like Tuskegee and other kinds of ethics violations from happening. Whenever we collect data, we need to think about ethics.

The most fundamental tenet of research ethics is to minimize risk to participants, but ethical obligations go beyond that. Researchers must be actively respectful towards the individuals in their research. These same ethical goals apply to library evaluation projects. How can you make sure you’re treating study participants ethically? Key issues to consider include privacy, informed consent, treatment of vulnerable populations, risks and access to benefit, incentives, and coercion. In this post we will discuss privacy and how it applies in library evaluations.

Privacy

This is one of our core values in libraries, so we have a nice overlap with research ethics here. Library privacy policies should govern what you do in an evaluation too. You can read more privacy information from ALA here. Often personal information is collected as part of an evaluation study. You might collect people’s email addresses during a program to follow up with them later. Or if you’re interested in knowing if people from under-served communities are attending programs designed to reach them, you could distribute a survey at the end of the program and ask about participants’ community or identity. Regardless of your methods, you should only collect the personal information that you absolutely need to answer your evaluation questions. 

After you have collected the data, it’s your responsibility to keep it safe. Where will you store it? How will you use it? Will anyone else have access to the data? Where will the results be published? How will you present the results to protect the privacy of your study’s participants? 

How to keep the data safe depends on how much personally identifiable information (PII) is in the data. Any time information could be traced back to an individual, it is PII and needs to be protected. Datasets that include medical or legal information, or any information that could harm the individual, should have the highest possible protections in place.

Anonymous datasets don’t contain any identifying information—even the person who collected the data could not trace it back to an individual. This kind of information requires minimal protection. In some cases, the data are confidential, but not anonymous. You can say the information is confidential when you have collected some personal information, but it will be protected and only a limited number of people will have access to it. A common practice is for one evaluator to assign codes to individuals instead of names, and everyone else on the team just sees the codes. This is called de-identifying. As long as a key exists somewhere that connects those codes back to individuals, these data still have PII.
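
If it helps to see what de-identifying can look like in practice, here’s a minimal sketch in Python; the participant names, code format, and file name are all hypothetical:

```python
# One evaluator assigns codes to participants and keeps the key file
# secure; everyone else on the team works only with the codes.
# As long as this key exists, the data still contain PII.
import csv

participants = ["Rosa M.", "Jamal K.", "Ellen T."]  # hypothetical names

# Build the key: name -> code.
key = {name: f"P{i:03d}" for i, name in enumerate(participants, start=1)}

# Store the key separately from the coded data, in a secure location.
with open("participant_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "code"])
    writer.writerows(key.items())

print(list(key.values()))  # ['P001', 'P002', 'P003']
```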

For data with PII, access should be limited to those who are analyzing them. These data should be stored in a location that is secure physically or digitally, like a locked filing cabinet or a password protected and encrypted file. Be careful with cloud-based services and email—these are generally not secure enough for data with PII. Your organization likely stores PII about staff for human resources purposes—you can find out how they keep it safe and see if you could use the same procedure to store research data securely. More information on protecting PII is available here and here.
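
As one illustration of the “password protected and encrypted file” option, here’s a minimal sketch using Python’s third-party cryptography package (pip install cryptography); the file names are hypothetical, and your organization’s own security procedures should take precedence:

```python
# Encrypt a data file at rest. Whoever holds the key can decrypt,
# so store the key separately and securely from the encrypted file.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this key somewhere secure
fernet = Fernet(key)

with open("interview_notes.csv", "rb") as f:   # hypothetical file
    encrypted = fernet.encrypt(f.read())

with open("interview_notes.csv.enc", "wb") as f:
    f.write(encrypted)

# Later, to work with the data again:
# decrypted = fernet.decrypt(encrypted)
```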

More next time

Privacy is one of the key pillars of research ethics and you should think about it anytime you collect data. Next time, we will look at additional research ethics considerations that you need to think about as an evaluator.

42: The answer to every bad evaluation question

In the novel The Hitchhiker’s Guide to the Galaxy, a group of hyper-intelligent pan-dimensional beings build a supercomputer to ask the “ultimate question…the answer to life, the universe, and everything.” After waiting millions of years, the supercomputer tells them the answer to life, the universe, and everything is…42! Some might disagree, but the lesson here is simple—if you want a useful finding, you have to ask the right evaluation question! And you don’t have to be a hyper-intelligent, pan-dimensional being to learn how to do it.

Evaluation questions are developed to guide your evaluation. They allow you to focus your study, clarify your program/service outcomes, and help check or authenticate your work. Your mission is to answer them by collecting and analyzing data. Your findings should give you important insights about your program or service. Depending on the size of your evaluation, you may have anywhere from one to five main questions. To get started, follow these four steps:

Step 1: Clarify goals and objectives of program

You can’t develop an evaluation question if you aren’t clear about the intended outcomes of your program. Otherwise, you might research a question that ends up being entirely irrelevant. Also take the time to review your logic model—you want to ensure that each question ties to one of its components. For instance, if an activity on your logic model is STEM instruction, you might ask “To what extent did staff have adequate training and support to implement proper STEM instruction for children ages 6-12?” Luckily, we’ve recently covered both of these topics more in-depth. Learn more about outcomes here or logic models here.

Step 2: Identify key stakeholders and audiences

It’s helpful to make a list of your evaluation’s stakeholders and audiences, including taking note of their “stakes.” From this stakeholder list, identify who your evaluation serves. Is it to provide data to your library board? Do you intend to use the information to improve a program for library users? Think about whether your evaluation questions will give you answers that serve these groups of people. Additionally, consider whether your key stakeholders or audiences should have an opportunity to provide feedback on your evaluation questions.

Step 3: Write a list of evaluation questions

Now it’s time to put pen to paper and write some questions. Write as many as you can think of and then we’ll eliminate some in the next step. Here are some examples that frame the question around the objective of your evaluation:

Objective: To review the summer reading program.
Question: In what ways are participating children demonstrating interest in reading at home? 

Objective: To provide information on non-library users in the community.
Question: For what reasons do residents within our library service area not use the library?

Objective: To examine library services directed at library users being affected by housing insecurity.
Question: To what extent are library programs and services directed at housing insecure patrons meeting their direct needs?
Sub-question: You can also include a sub-question, such as “What need-gaps still exist that library services could fill?”

Step 4: Evaluate your evaluation questions

Ok, I know this might be starting to feel like the movie Inception, but bear with me. Now we need to evaluate each evaluation question based on these criteria:

Relevant: Does the question clearly apply to an aspect of the program (i.e. design, activities, outcomes)? Does it contribute valuable information to stakeholders?

Answerable: Is it possible to answer this question via empirical research methods? Can you obtain the necessary information ethically and respectfully? 

Reasonable: Can the question be addressed given the resources and constraints (time, budget, staff, etc.) of your evaluation? Is it worth the effort?

Specific: Does the question distinctly target a program component? Are there any ambiguous phrases or undefined target groups? 

Evaluative: Will data related to the evaluation question provide either formative information about the program or service for decision-making and improvement purposes, or summative information to determine the effectiveness? Is your question phrased objectively so that you are not making assumptions about your program or service prior to evaluating it? 

Complete: Will the evaluation question give ample information for stakeholders to move forward?

If questions on your list don’t meet all of these criteria, consider revising or eliminating them. It’s possible you still have too many to be able to accomplish them all within your constraints. If so, go through each one and score them based on the criteria (1 = not very relevant, 5 = very relevant, etc.). Prioritize the questions that score the highest.
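
If a scoring spreadsheet feels clunky, here’s a minimal sketch of that prioritization step in Python, using two of the example questions from Step 3; the scores are invented for illustration:

```python
# Score each candidate question 1-5 on the six criteria
# (relevant, answerable, reasonable, specific, evaluative, complete),
# then rank by total score. Scores below are invented.
scores = {
    "In what ways are participating children demonstrating "
    "interest in reading at home?": [5, 4, 4, 5, 5, 4],
    "For what reasons do residents within our library service "
    "area not use the library?": [4, 3, 2, 4, 4, 3],
}

ranked = sorted(scores.items(), key=lambda kv: sum(kv[1]), reverse=True)
for question, s in ranked:
    print(sum(s), "-", question)
```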

Still have questions about evaluation questions? Feel free to reach out to us at LRS@LRS.org. We’re always happy to talk shop and help you reach your library evaluation goals!

 

 

 

The Logic Model: Take it one step at a time

When your organization designs a program, service, or experience, it’s helpful to think intentionally. What do you hope happens? How would you know if it did? We wrote about determining the outcomes for your efforts last time. Identifying outcomes is an important first step in planning and evaluating a program, service, or experience. What do you need to do after you’ve identified outcomes? It’s helpful to have a model to guide you through your questions, what you hope will happen, how to best collect data, and how it all connects.

There are different types of guides for this process in the evaluation world. The logic model is the one most frequently used in nonprofits and libraries, so we’ll be focusing on it. The key to this process, no matter the model, is to think carefully about the outcomes you have specified, how those outcomes will be achieved, and how success will be measured.

The logic model outlines each component of a program, service, or experience. We’ll discuss each component of the logic model using storytime programming as an example, which is shown below. Keep in mind that terminology and some of the components vary in different versions of the logic model, so what we’re sharing here is not the definitive, one and only way to create a logic model. It’s one example.

Inputs

Inputs are the resources that go into making programs, services, and experiences possible. Almost anything we do in libraries requires staff time, funding or supplies. Staff training or background research could also be inputs.

Activities

Activities include the events, services, or experiences that you hope will achieve the outcome. One of the most important steps of this process is making sure the activities could realistically lead to the outcome. For example, in our logic model our outcome is “Caregivers and children learn early literacy skills.” What activities would make it possible for this outcome to happen? The storytime would need to include instruction on early literacy skills for children and parents to be able to learn them. Logical, right?

Outputs

Outputs are the concrete results of the activities. They are usually things we can count, like the number of attendees at a storytime or circulation statistics. 

Outcomes

Outcomes are how the participants are affected by their participation. Does something change for them? Do they know, believe or can they do something differently from before they participated? Many logic models distinguish between short-term, medium-term and long-term outcomes (also called impacts). In our example, a short-term outcome is the one shown in the diagram: caregivers and children learn early literacy skills. A medium-term outcome would be that caregivers and children enjoy reading together more. A long-term outcome or impact would be that children’s literacy skills improve. The outcomes build on each other over time.

Assumptions & External Factors

The programs, experiences, and services libraries provide exist within the complicated context that is our world. Assumptions and external factors are a place to capture some of that context. Assumptions are just that—the underlying ideas and values that come with us wherever we go. How do we think things work? We often share assumptions as a profession and questioning them can be uncomfortable. It is still important to explicitly discuss our assumptions because the project could go very differently than we planned due to a faulty assumption. External factors are those elements of the world that may play a big role in how the program, experience, or service works in real life. You can think of this as the environment where the project lives. In our case, right now the pandemic has an impact on all our projects. 
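
If it helps to see all the components in one place, here’s an illustrative sketch of the storytime logic model as a simple Python data structure; the entries just summarize the examples discussed above:

```python
# The storytime logic model, written out as a dictionary so each
# component and how it feeds the next one is explicit.
logic_model = {
    "inputs": ["staff time", "funding", "supplies", "staff training"],
    "activities": [
        "storytime with early literacy instruction for children "
        "and caregivers"
    ],
    "outputs": ["number of storytime attendees"],
    "outcomes": {
        "short_term": "Caregivers and children learn early literacy skills.",
        "medium_term": "Caregivers and children enjoy reading together more.",
        "long_term": "Children's literacy skills improve.",
    },
    "assumptions": ["our underlying ideas about how storytime works"],
    "external_factors": ["the pandemic's impact on in-person programs"],
}

for component, detail in logic_model.items():
    print(f"{component}: {detail}")
```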

Conclusion

I hope this post gives you a useful bird’s eye view of the planning and evaluation process. Using a guide like the logic model can help you identify each component of the process and how it leads to the next step. Looking at everything sequentially helps you ensure that each piece works together to achieve your outcomes. 

Further reading

I used several sources to inform this post, and I’d like to credit them for their thoughtful and easy-to-follow explanations of the logic model.

What’s your goal here?

Every day we assess the world around us. We ask ourselves whether that decision we made was a good idea, what makes that person trustworthy, why we should or should not change something. We form a question in our head, collect data, analyze the information, and come to a conclusion. In short, we are all experienced evaluators!  

However, that doesn’t mean setting up an outcome-based evaluation is a cakewalk. It’s important to apply structure to the subconscious process occurring in our heads. So where should you start? At the end. That might sound counterintuitive, but the first step in an outcome-based evaluation is figuring out how you define success for your program or service—what do you hope to achieve?

Think of a program or service you want to evaluate. It could be something already being offered in your library, or something new. What do you want your users to know/do/understand/believe after participating in the program or experiencing the service? Remember that outcomes are goals framed around your users. They’re the impact you hope your service or program has on the people participating—the big “why” of your work.

I’m going to ask you to take a few minutes to think through some potential outcomes, but before I do, we need to talk briefly about outputs. Outputs are the tangible and intangible products that result from program/service activities. If we were talking about summer reading, an example of an output is the number of children who complete the program. So, we may aim to increase the number of completions this year by 20 percent. That’s a great goal, right? Yes, but be careful not to confuse it with an outcome. Increasing completions, even though it addresses users, does not capture the impact we hope summer reading has on children who participate in the program.

So what would be a good outcome for a summer reading program? Here are some potential ideas:

  • Children choose to engage in a reading activity every day.
  • Children believe that reading is an important part of their daily routine.
  • Children return to school without exhibiting effects of “summer slide.”

In each example, the “who” is the user (children) and the “what” is the impact we hope the experience has on them. 

Now it’s your turn! Take a few minutes to write down some potential outcomes for the program or service you’re thinking about. As you’re doing it, remember to ask yourself:

  1. Is it achievable? It’s great to have aspirational goals, but we want to choose something that can be achieved by the program, service, or experience you are offering. We all want to alleviate poverty, but a much more achievable goal might be to create economic opportunity or increase wage-earning potential for a certain target group.  
  2. Is it framed around the user? Think about who you want to have an effect on. Be as specific as possible. 
  3. Does it capture impact? Make sure to be clear in your outcome about what you want your user to know, do, believe, or understand by the end of your program or service.

Congrats! You’re on your way to being an expert evaluator. Having clear and defined outcomes is the first step to designing your evaluation plan. In our next post, you’ll use these outcomes to develop a logic model. Until then, if you have any questions, feel free to reach out to us at LRS@LRS.org.

Finding your way: the difference between research and evaluation

Have you ever stayed up late, staring up at the night sky, wondering “What is the difference between evaluation and research?” No?! Well, even if you haven’t lost sleep pondering this, we think it’s an important topic. Why? In this blog series, we’ll be focused on how to do an evaluation: how to determine the value and impact of programs, services, and experiences. At the same time, we’ll be talking a lot about methods from social science research because those are our tools for collecting and analyzing data. 

Knowing how evaluation and research relate to each other gives you a better understanding of where you are now, where you’re going, and how to get there as you work on a project. It’s like having a map in your head with a little star that says “you are here!”

Let’s start with clarifying what we mean by research. We might say that we’re going to research some recipes for dinner, or some interesting STEM activities for kids. In that context, research means “go find more information about.” When we talk about research in this post, we mean original research: when a study is designed to answer a question by methodically collecting and analyzing data.

Often original research happens at a university, within a specific discipline like physics, psychology, or history. In general, original research

  • aims to answer a question
  • is based in a theory (a set of related ideas about how something works)
  • tests a hypothesis (an idea about what will happen this time)
  • comes to a conclusion that can be applied in a lot of situations (generalized)
  • increases our overall knowledge on a topic

Evaluation and research do have commonalities. They’re both processes of inquiry, or ways of finding out more information in order to answer a question. So what makes them different? The answer to that can depend a bit on who you ask (a recent survey of 522 researchers and evaluators found that they had several ways of thinking about how research and evaluation relate). 

For our purposes you just need to know which it is you are doing—evaluation or research? A broadly accepted way of thinking about how evaluation and research are different comes from Michael Scriven, an evaluation expert and professor. He defines evaluation this way in his Evaluation Thesaurus: “Evaluation determines the merit, worth, or value of things.” He goes on to explain that “Social science research, by contrast, does not aim for or achieve evaluative conclusions…Social science research does not establish standards or values and then integrate them with factual results to reach evaluative conclusions. In fact, the dominant social science doctrine for many decades prided itself on being value free.” This definition and more information are available at the Evaluation Exchange.

Put another way: evaluation and social science research use the same strategies to collect and analyze data, but the goals of each are different. A useful visualization of this concept, created by John LaVelle, is below.

An hourglass showing evaluation and research

Essentially evaluation aims to do exactly what it says—determine value. Did it work? Should we keep doing it or do something else instead? What was the value of what we did? Social science research, on the other hand, aims to maintain a more impartial stance—describe what is happening, as it is, and generally not judge or evaluate it as valuable or not.

As we move forward and learn more about the evaluation process, keep this idea in the back of your mind—that little “you are here!” star. We usually start an evaluation because we want to know if something is working and providing value in the way we hoped. Remembering that’s why you started and where you’re going can help you orient yourself throughout the project. We look forward to seeing you back here next time!