New research: first year college students need support assessing authority

Intro
Can I trust this information? We use information constantly to learn, make decisions, and form opinions. Every day library staff in every setting strive to teach people how to find the information they need and how to identify trustworthy sources. But what is trustworthy? How can you tell? What about when sources contradict each other? What characteristics distinguish sources from each other?

Who
As a former information literacy librarian at a university, I was haunted by these questions when I was teaching. I was lucky to meet two librarians at the University of Denver (DU) who shared a passion for this topic: Carrie Forbes, the Associate Dean for Student and Scholar Services, and Bridget Farrell, the Coordinator of Library Instruction & Reference Services. Together, we designed a research project to learn more about how students thought about “authority as constructed and contextual,” as defined in the ACRL information literacy framework.

Why
As instructors, we had seen students struggle with the concept of different types of authority being more or less relevant in different contexts. Often they had the idea that “scholarly articles = good” and “news articles = bad.” Given the overwhelming complexity of evaluating information in our world, we wanted to help students evaluate authority in a more nuanced and complex way. We hoped that by understanding the perceptions and skills of students, particularly first year students, we could better teach them the skills they need to sort through it all.

How: Data collection
We designed a lesson that included definitions of authority and gave examples of types of information students could find about authors, like finding their LinkedIn page. The goal here was not to give students an extensive, thorough lesson on how to evaluate authority. We wanted to give them enough information to complete the assignment and show us what they already knew and thought. Essentially, this was a pre-assessment. (The slides for the lesson are available for your reference. Please contact Carrie Forbes if you would like to use them.)

The project was approved by the Institutional Review Board at DU. Thanks to a partnership with the University Writing Program, we were able to collect data during library instruction sessions in first year writing courses. During the session, students were asked to find a news article and a scholarly article on the same topic and then report on different elements of authority for both articles: the author’s current job title, their education, how they back up their points (quotes, references, etc.), and what communities they belong to that could inform their perspective. Then we asked them to use all of these elements to come to a conclusion about each article’s credibility, and, finally, to compare the two articles using the information that they had found. We collected their responses using a Qualtrics online form. (The form is available for your reference. Please contact Carrie Forbes if you would like to use it.)

Thanks to the hard work of Carrie and Bridget, and to the generous participation of eight DU librarians, 13 writing faculty, and over 200 students who agreed to participate after reviewing our informed consent process, we were able to collect 175 complete student responses that we coded using a rubric we created. Before the project was over, we added two new coders, Leah Breevoort, the research assistant here at LRS, and Kristen Whitson, a volunteer from the University of Wisconsin-Madison, who both contributed great insight and many, many hours of coding. Carrie and Bridget are analyzing the data set in a different way, so keep an eye out for their findings as well.

Takeaway 1: First year students need significant support assessing authority

This was a pre-assessment, and the classroom instruction was designed to give students just enough knowledge to complete the assignment and demonstrate their current understanding of the topics. If this had been graded, almost half (45%) of the students would have failed the assignment. Most instruction librarians get 45 minutes to an hour once a semester or quarter with a group of students and are typically expected to cover searching for articles and using databases, at the very least. That leaves them very little time to cover authority in depth.

A pie chart showing that 45% of students would have failed the assignment if it had been graded

Recommendations

  • The data demonstrates that students need significant support with these concepts. Students’ ability to think critically about information impacts their ability to be successful in a post-secondary environment. Academic librarians could use this data to demonstrate the need for more instruction dedicated to this topic.
  • School librarians could consider how to address these topics in ways appropriate to students in middle school and up. While a strong understanding of authority may not be as vital to academic success prior to post-secondary education, the more exposure students have to these ideas the more understanding they can build over time.
  • Public libraries could consider if they want to offer information literacy workshops for the general public that address these skills. Interest levels would depend on the specific community context, but these skills are important for the general population to navigate the world we live in.


Takeaway 2: First year students especially need support understanding how communities and identities impact authority in different ways.

On the questions about communities for both the news and scholarly articles, more than a third of students scored at the lowest level: 35% for news and 41% for scholarly. This is the language from the question:

Are there any communities the author identifies with? Are those communities relevant to the topic they are writing about? If so, which ones? How is that relevant to their point of view? For example, if the author is writing about nuclear weapons, did they serve in the military? If so, the author’s military experience might mean they have direct experience with policies related to nuclear weapons. 

Bar chart showing the two areas with the highest percentage of students receiving the lowest possible score

Asking students to think about communities and identities is asking them to think about bias and perspective. This is challenging for many reasons. First, it is difficult to define which identities or communities are relevant to a specific topic and to distinguish between professional identities and personal identities. For many scholarly authors, there is little publicly available information about them other than their professional memberships. As Kristen observed: “it was really common for students to list other elements of authority (work experience, education) in the community fields.”

Second, how can you teach students to look for relevant communities and identities without making problematic assumptions? For example, one student did an excellent job of investigating communities and identities while working with an article entitled Brain drain: Do economic conditions “push” doctors out of developing countries? The student identified that the author was trained as both an economist and a medical doctor and had unique insight into the topic because of these two backgrounds.

What the student missed, though, was that the author received their medical degree from a university in a developing country, which may give them unique insight into doctors’ experiences in that context. In this example, the author’s LinkedIn made it clear that they had lived in a developing country. In another instance, however, a student thought that an article about the President of Poland was written by the President of Poland, which was inaccurate and led to a chain of erroneous assumptions. In another case, a student thought an article by an author named Elizabeth Holmes was written by the Elizabeth Holmes of the Theranos scandal. While it seems positive to push students to think more deeply about perspectives and personal experiences, it has to be done carefully and based on concrete information instead of assumptions.

Third, if librarians teach directly about using identities and communities in evaluating information sources, they need to address the complex relationships between personal experience, identities, communities, bias, and insight. Everyone’s experiences and identities give them particular insight and expertise as well as blind spots and prejudices. As put by the researcher Jennifer Eberhardt, “Bias is a natural byproduct of the way our brains work.”

Recommendations 

  • There are no easy answers here. Addressing identities, communities, bias, and insight needs to be done thoughtfully, but that doesn’t make it any less urgent to teach this skill.
  • For academic librarians, this is an excellent place to collaborate with faculty about how they are addressing these issues in their course content. For public librarians, supporting conversations around how different identities and communities impact perspectives could be a good way to increase people’s familiarity with this concept. Similarly, for school librarians, discussing perspective in the context of book groups and discussions could be a valuable way to introduce this idea.


Takeaway 3: An article with strong bias looked credible even when a student evaluated it thoroughly.

When we encountered one particular article, all of the coders noticed the tone because of words and phrases like “persecution,” “slaughtered,” “murderer,” “beaten senseless,” and “sadistically.” (If you want to read the article, you can contact the research team; some of the language and content may be triggering or upsetting.) This led us to wonder whether it was really a news source and to look more closely into the organization that published it. We found that it was identified as an Islamophobic source by the Bridge Initiative at Georgetown University.

The student who evaluated this article completed the assignment thoroughly – finding the author’s education, noting their relevant communities and identities, and identifying that quotes were included to back up statements. The author did have a relevant education and used quotes. This example illustrates just how fuzzy the line can become between a source that has authority and one that should be considered with skepticism. It is a subjective, nuanced process.

This article also revealed some of the weaknesses in our assignment. We didn’t ask students to look at the publication itself, such as whether it has an editorial board, or to consider the tone and language used. Both would have been helpful in this case. As librarians, we have much more context and background knowledge to aid us in evaluating sources. How can we train students in some of these strategies that we may not even be fully aware we are using?

Recommendations: 

  • It would be helpful to add both noticing tone and investigating the publication to the criteria we teach students to use for evaluating authority.
  • Creating the rubric forced us to be meta-cognitive about the skills and background knowledge we were using to evaluate sources. This is probably a valuable conversation for all types of librarians and library staff to have before and while teaching these skills to others.
  • The line between credible news and the rest of the internet is growing fuzzier and fuzzier and is probably going to keep changing. We struggled to define what a news article is throughout this process. It seems important to be transparent about the messiness of evaluating authority when teaching these skills.


Takeaway 4: Many students saw journalists as inherently unprofessional and lacking skills.

We were coding on the rubric rather than doing a thematic qualitative analysis, so we don’t know exactly how many times students wrote about journalists not being credible in their comparison between the news and scholarly articles. In our final round of coding, however, Leah, Kristen, and I were all surprised by how frequently we saw this explanation. When we did see it, the student’s reasoning was often that the author’s status as a journalist was itself a barrier to credibility: the article could not be credible simply because a journalist wrote it. Sometimes students held this view even when the author was a journalist working for a generally reputable publication, like the news section of the Wall Street Journal.

Recommendations

  • This is definitely an area that warrants further investigation.
  • Being aware of this perspective, library staff leading instruction could facilitate conversations around journalists and journalism to better understand this perspective.


How: Data analysis

It turns out that the data collection, which took place in winter and spring 2018, was the easy part. We planned to code the data using a scoring rubric. We wanted to make sure that the rubric was reliable, meaning that if different coders used it (or anyone else, including you!) it would yield similar results.

The process for achieving reliability goes like this: everyone who is participating codes the same set of responses, let’s say 10. Then you feed everyone’s codes into statistics software, which returns a statistic, Cronbach’s alpha in this case, that indicates how consistently you are all coding. Based on that reliability score, all the coders work together to figure out where and why they coded differently, then clarify the language in the areas of the rubric where the coding wasn’t reliable, so everyone can hopefully be more consistent the next time. Then you do it all over again with a new group of 10 responses and the updated rubric, repeating the process until you reach an acceptable level of reliability. In this case we used a value of .7 or higher, which is generally considered acceptable in social science research.
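If you want to run that calculation yourself, here is a minimal sketch in Python, assuming three coders have each scored the same 10 responses on a three-point scale. All of the scores below are invented for illustration; most statistics software will also compute Cronbach’s alpha for you.

# A minimal sketch of the reliability check described above, assuming three
# coders each scored the same 10 responses on a 1-3 scale (e.g., Beginning = 1,
# Developing = 2, Skillful = 3). The scores are invented for illustration.
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Rows are responses, columns are coders."""
    k = scores.shape[1]                          # number of coders
    item_vars = scores.var(axis=0, ddof=1)       # variance of each coder's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([
    [1, 1, 2],
    [3, 3, 3],
    [2, 2, 2],
    [1, 2, 1],
    [3, 2, 3],
    [2, 2, 1],
    [1, 1, 1],
    [3, 3, 2],
    [2, 3, 2],
    [1, 1, 1],
])

alpha = cronbachs_alpha(scores)
print(f"Cronbach's alpha: {alpha:.3f}")  # aim for .7 or higher before coding the full data set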

In spring 2021, our scoring and rubric tested as reliable on all 11 areas of the rubric. (We rounded up to .7 for the one area of the rubric that tested at .686.) Then Leah, Kristen, and I worked through all 175 student responses, each coding about 58 pieces of student work. In order to resolve any challenging scores, we submitted scores we felt uncertain about to each other for review. After review, we decided on a final score, which is what was analyzed here. Below you can see the percentage of student scores at different levels for each area of the rubric, as well as the reliability scores for each (Cronbach’s alpha).

A table that displays each area of the rubric, what percentage of students scored at each level of proficiency, and the Cronbach's alpha reliability statistic.
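If you want to build a similar summary from your own coded scores, here is a minimal sketch, assuming the final scores live in a spreadsheet with one row per student and one column per rubric area. The file name and column names are assumptions for illustration, not the actual project files.

# A minimal sketch of summarizing final rubric scores; the file name and
# column names are invented for illustration.
import pandas as pd

scores = pd.read_csv("final_scores.csv")  # columns like "news_communities", "scholarly_education", ...

# Percentage of students scoring Beginning, Developing, or Skillful in each rubric area
summary = scores.apply(lambda col: col.value_counts(normalize=True) * 100).round(1)
print(summary)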

We could write a whole different post about the process of coding and creating the rubric. We had many fascinating discussions about what constitutes a news article, how we should score when students have such different background knowledge, and the biases each of us brought to the process, including being prone to score more generously or more strictly. We also talked about times when students demonstrated the kind of thinking we were looking for but came to a different conclusion than we did, or instances where we thought we understood where they were going with an idea but they didn’t actually articulate it in their response. Evaluating students’ responses was almost as messy and subjective as evaluating the credibility of sources is!

Ultimately, we wanted the rubric to be useful for instructors, so that guided the design of the rubric and our coding. “Beginning,” the lowest score, came to represent situations where we thought a student would need significant support understanding the concept. “Developing,” the middle score, indicated that a student understood some of the concept but still needed guidance. “Skillful,” the highest score, meant that we would be confident in the student’s ability to evaluate this criterion independently. We are excited to present, after all those discussions, such a reliable rubric for your use. We hope it will be a useful tool.

But you don’t have to take my word for it!

We have shared the slides for the lesson, the assignment, and the rubric for scoring it. If you would like to use the slides or assignment, please contact Carrie Forbes at the University of Denver. To use the rubric, please reach out to Charissa Brammer at LRS. Have questions? Please reach out. Nothing would delight us more than this information being valuable to you.

On a personal note, this is my last post for lrs.org. I am moving on to another role. You can find me here. I am grateful for my time at LRS and the many opportunities I had to explore interesting questions with the support of so many people. In addition to those already listed in this article, thank you to Linda Hofschire, Charissa Brammer, and everyone at the Colorado State Library for their support of this project.

How to observe without being totally awkward

Dog looks worried or confused in office

Happy fall to all you data nerds out there! We appreciate you being here with us. Last time we discussed how to get permission from your participants when you want to do an observation. You might be wondering how you can actually do the observation without it being completely awkward and perhaps even cringey. Today we are going to discuss just that!

First, let’s review our goal for this project: we want to evaluate whether caregivers are learning skills during storytime and using those skills with their children outside of storytime.

Based on this goal, we decided to do observations of caregivers and children in the library while they are not participating in storytime. Ideally from a research perspective, we would observe them at home, but that would not be practical or comfortable for anyone involved. Even in the library context, we are going to need to be careful to make sure that our participants feel as comfortable as possible. 

Being a participant observer

There are a variety of ways you can behave as an observer. For most library situations, I recommend a version of what researchers call “participant observation.” You’re observing while still interacting with the people you’re observing to a limited extent. This setup feels more comfortable while still giving you, as the observer, some distance from what you are observing. What would this look like for our example project? When the family you are observing tells the children’s desk that they are in the library, you would first introduce yourself to the family. Then during the observation you would talk with them only if it’s really important or necessary.

When is it really necessary to jump out of observer mode? A classic example I lived through with a team of librarian-observers was a child in the group we were observing getting a serious nosebleed. At the time there was only one library staff member who was teaching the group, but three of us were observing. One of us stopped observing and took the child to get medical attention. The instructor who actually knew the content that needed to be covered continued running the group. My best advice for when to break out of observing “mode” is to try to avoid it, but trust yourself when it feels like an appropriate time to spring into action. You are probably right!

Making people feel comfortable

When observing, we’re trying to balance getting quality data with making the participants feel comfortable. Every population, and every individual, has different needs to feel comfortable. It can help to start by thinking back to times you were in potentially awkward situations and someone made you feel more comfortable. What did they do? Remember in this case we want to go a step beyond that and treat people how they want to be treated, not just how we would want to be treated.

In a situation like this with caregivers, we should definitely reassure them that the library staff is not there to judge them. Parents feel judged a lot! It’s helpful to emphasize that you are evaluating storytime and not them. Nonetheless, don’t tell participants “We’re looking to see what early literacy skills you use outside of storytime.” Then they will inevitably show you every early literacy skill they have ever heard of! Instead, you might explain the project like this: “We want to make storytimes better. To do that, we need to understand how caregivers and children are interacting outside of storytime. We are watching so we can learn and make storytime as helpful and fun as possible. We are not evaluating you as a parent. Do you have any questions? Is there anything else I can do that would make you feel more comfortable?”

Working with children 

Children are going to want to interact with you while you’re observing them, especially if they know you. You should explain to them what you’re doing and why you are acting differently. For example: “Today, my job is to be very quiet and pay attention really carefully to the fun time you are having with your caregiver. You can look at me and I’m going to smile at you, but I’m not going to talk with you like I usually would. It doesn’t mean I’m mad at you. I’m just really focused on watching and listening today. I’ll tell you when we’re done and we can talk more then. Do you have any questions? ”

Conclusion

Having these kinds of conversations with participants before you start will help the observation go well. We once observed teens for a project, and teens are perhaps the most self-conscious creatures on the face of the earth. The staff observers introduced themselves to the teens at the beginning of their time together even though we already had informed consent. Although I can’t know for sure, I think we were able to collect valuable data on that project partly because the observers introduced themselves right before the observations and were very friendly and open. Remember, you set the tone for the whole interaction at the beginning.

Up next 

Next time we’ll talk about how to focus your observations and collect data that will be valuable to you. We’ll look forward to seeing you then!

LRS’s Between a Graph and a Hard Place blog series provides instruction on how to evaluate in a library context. Each post covers an aspect of evaluating. To receive posts via email, please complete this form.

How to observe: Ask first!

A kitten peeks out from between books

Welcome back! We left off talking about why you would use observations to collect data. Observation can be a great data collection tool when you want to see how different people interact with each other, a space, or a passive program. Observation is also helpful when it is difficult for someone to answer a question accurately, like when you ask them to remember something they did or—particularly with children—if you ask them to give critical or written feedback, both of which can be developmentally inappropriate.

To review, our big research question is: “Does attending storytime help caregivers use new literacy skills at home?” Our small questions within that big question are:

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
  • Are caregivers learning new literacy skills during storytime? 
  • Do caregivers use new literacy skills from storytime at home?

After thinking through these in our last post, we decided that it would be both helpful and realistic to observe caregivers and their children in the library. Observing in the library won’t tell us what caregivers are doing at home, and following them around their homes isn’t an option, but it will still help us see whether they’re learning skills during storytime that they’re using outside of storytime.

Ok, so how would we actually do this? Here are the key elements of any observation:

  • Get permission from your participants
  • Decide how you will approach the observation
  • Focus in on some specific things you’re looking for
  • Take notes or videos of what you’re observing
  • Code the notes or video to identify patterns

For the rest of this blog post, we will focus on the first point. It’s important to get started on the right foot. 

Why is getting permission important? 

We’ve discussed informed consent before in this blog series, and it is still important with observations. If you came to your library and a staff member followed you around the whole time without an explanation, it would feel weird and even invasive. Now, you may be thinking, “Wait a minute, if people know we’re watching, they’re going to act differently!” You’re exactly right. But the reality is, people are going to know you’re observing them no matter what. Even if you think you could soundlessly skulk around the stacks, most people are going to sense something weird is going on. 

Remember–we don’t want to be creepy when we’re doing research! Watching people without asking their permission is 1) pretty creepy, 2) not good for building and maintaining trust with our users, and 3) violates one of our core library values of privacy. Ethically, we have to ask people if it’s ok to observe them while they’re in the library, even if it does change their behavior.

How do you ask permission to observe?

In this example, you could explain at storytime that you’re doing a research project to improve storytime. You are looking for some volunteers who spend time at the library outside of storytime to let you observe them interact with their child. If they are interested, you have a form they can sign, and the next time they’re hanging out at the library, you’d like them to come say “hi” at the desk in the children’s area. That way your staff know when they are in the library and the staff member observing them can introduce themselves too, which makes the whole thing less awkward.

As part of the informed consent process, remember that you need to address both that the participants can stop participating at any time, for any reason, and how you will protect their privacy. These elements are particularly important during an observation of caregivers and children. What identifying information do you absolutely need? The more anonymous the data can be, the better. Make sure you also establish a clear and easy way for the caregiver to end the observation while it is happening. 

In the next blog post in this series, we will explore the different approaches to observation. After you’ve gotten permission, how do you actually sit, watch, and collect meaningful data? Join us next time to find out!

Nothing About Us, Without Us: Equitable evaluation through community engagement

 

This is a “guest post” from the Colorado Virtual Library Equity, Diversity, and Inclusion blog.

When you wake up, one of the first things you might do is open your weather app to see what the temperature is and if it’s supposed to rain that day. You then use that information—or data—to make important decisions, like what to wear and whether you should bring an umbrella when you go out. The fact is, we are all collecting data every day—and we use that data to inform what we do next.

It’s no different in libraries. We collect data about circulation, program attendance, the demographics of our community, and so on. When we collect the data in a formalized way and use it to make decisions, we call this evaluation. Simply put, “evaluation determines the merit, worth, or value of things,” according to evaluation expert Michael Scriven.

Equitable Evaluation

So what does this have to do with equity, diversity, and inclusion? Well…everything. If evaluation does in fact determine the merit, worth, or value of programs and services, what happens when your library’s evaluation excludes or overlooks certain groups from the data? Let’s take a look:

You are trying to evaluate patron satisfaction at your library, so you print off a stack of surveys and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (or in other words, are being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, conclusions might be made to suggest, “X% of patrons felt this way about x, y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons.

Inequities are perpetuated by evaluation when we fail to ensure our methods are inclusive and representative of everyone in our target group. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative, serving to maintain current power structures.

Evaluation With the Community, not On the Community

That’s a heavy burden to take on as an evaluator and a library professional, especially when taking part in people’s marginalization is the last thing you would want to do. Luckily, the research community has long been working on some answers to this problem. Community-based participatory research (CBPR) is contingent on the participation of those you are evaluating (your target population) and emphasizes democratization of the process. CBPR is defined as:

“focusing on social, structural, and physical environmental inequities through active involvement of community members, organizational representatives, and researchers in all aspects of the research process. Partners contribute their expertise to enhance understanding of a given phenomenon and integrate the knowledge gained with action to benefit the community involved.”

CBPR centers around seven key principles:

  1. Recognizes community as a unit of identity
  2. Builds on strengths and resources
  3. Facilitates collaborative partnerships in all phases of the research
  4. Integrates knowledge and action for mutual benefit of all partners
  5. Promotes a co-learning and empowering process that attends to social inequalities
  6. Involves a cyclical and iterative process
  7. Disseminates findings and knowledge gained to all partners

As one librarian put it, CBPR “dismantles the idea that the researcher is the expert and centers the knowledge of the community members.” When those that you are evaluating (whether it be patrons, non-users, people with a disability, non-English speakers, etc.) are involved in the entire process, your data will invariably become more equitable. As a result, your evaluation outcome will more effectively address real problems for your community. It’s a win-win for everyone.

However, if diving into a full community-based participation evaluation feels impossible given your time and resources, it’s okay. Think of CBPR as your ideal and then adjust to a level that is feasible for your library. The continuum of community engagement below outlines what some of those different levels might look like.

The continuum of community engagement ranges from total CBPR on the left end of the spectrum to community engagement on the right end of the spectrum. Total CBPR is full involvement in all parts of the study development and conduct. CBPR light is partial involvement in some or all parts of the study development and conduct. Community based research is research conducted in collaboration with members of the community. And community engagement is working with community members and agencies to reach community members.

The Big Takeaway

Evaluating your practices, policies, and programs in a library can lead to better outcomes for your library community. However, even the best of intentions can create harm for historically underrepresented groups when they are excluded from the very data used to make decisions that impact them. When undertaking an evaluation of any kind, think about the principles of CBPR and how you can incorporate them into your plan.

Why Observe? Watch and Learn

When I was a kid, one of my favorite summer activities was staring at hummingbirds. I would sit for hours, moving as little as possible, while I took notes about everything I saw. (Yes, I was a pretty weird eight year old.) I wanted to ask the hummingbirds so many questions, but I don’t speak hummingbird! Observing them was my only option for trying to understand their behavior. 

While it is literally impossible to ask a hummingbird to take a survey, there are many times with humans when a survey won’t work to collect the data you need either. Observation can be a great data collection tool when you want to see how different people interact with each other, a space, or a passive program. Observation is also helpful when it would be difficult for someone to answer a question accurately, like when you ask them to remember what they did or, particularly with children, if you ask them to give critical feedback or written feedback, both of which are sometimes developmentally inappropriate. 

In this post, I’m going to talk about why you might choose observation as a data collection method. Next time, I’ll talk about the logistics of observations and how you can use observational data. To better understand why you would collect data with observations, let’s use our example evaluation question from throughout this blog series: “Does attending storytime help caregivers use new literacy skills at home?” 

When we first outlined number data and story data, we talked about when to use each. We also outlined how to break your research question down into smaller questions. You really need to do that work to get to this point, so let’s go back and review what we did.  Here are some of the sub-questions we identified within our larger evaluation question:

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
  • Are caregivers learning new literacy skills during storytime? 
  • Do caregivers use new literacy skills from storytime at home? 

Would a survey work to collect this data? We certainly could ask caregivers all of these questions. But we would immediately bump into some of the problems that come up when people self-report data: 1) we are not great at remembering things accurately and 2) we want to portray ourselves in the best possible light (social desirability bias). Let’s take a look at how those challenges would impact the data collection for our questions. 

  • Were caregivers already using literacy skills at home prior to attending a storytime? 
    • They may not know or accurately remember which skills they knew before attending storytime and which they learned at storytime. 
  • Are caregivers learning new literacy skills during storytime? 
    • They may report that they are learning new literacy skills at storytime because they don’t want to hurt anyone’s feelings—even if they aren’t actually learning those skills. 
  • Do caregivers use new literacy skills from storytime at home? 
    • They may report that they are using new literacy skills at home because that feels nice to say—even if they aren’t actually using those skills at home. 

So we could collect that data using a survey, but it may not be very accurate. We could get more accurate data by observing caregivers at home with children before they ever attended a library storytime and then continuing to observe after they started attending storytime. Then we could see for ourselves what skills they already knew and used at home, and which ones they learned at storytime. We could tally up how often they were using those skills too. Great! Let’s go follow people and their children around their homes 24 hours a day taking notes for several months. 

What? You don’t think that’s going to be a thrilling success? Unlike hummingbirds, who don’t seem to mind too much or alter their behavior a lot while I am watching, humans mind quite a bit and can change their behavior when they are being observed. Additionally, do you know any library staff who have the time to do this kind of intense observational study? Yeah, that’s what I thought. The time involved in observation and successfully navigating privacy concerns are two major elements that you always need to consider. 

What can we do that’s a little more realistic? Collecting data in the real world is often about doing what you can with what you have. In this case, it is unlikely anyone would let us come follow them around their home. We can, however, more easily observe caregivers and their children in the library. This would allow us to observe for indicators of caregivers learning skills during storytime and to observe if families are using early literacy skills while they are spending unstructured time in the library. Intrigued as to how we would do that? Come back for our next post where we’ll get into the nuts and bolts of how you can collect data using observation and pull out important takeaways from that data.

If you are an aspiring birdnerd, the two hummingbirds pictured are both species we have in Colorado, and you can learn more about them here.

Surveys: Don’t just set it and forget it!

Surveys are the rotisserie oven of the data collection methods. You simply “set it, and forget it!” That’s why it’s important to be strategic about how you’re reaching your target population. Otherwise, you may be leaving out key subsets of your audience—which are often voices that are already historically underrepresented.  

Is your survey equitable? 

Let’s say you want to send out a survey to library users, so you print off a stack of copies and leave them on the lending desk for patrons to take. While everyone in your target audience may have equal access to the survey (or in other words, are being treated the same), they don’t all have equitable access. Sometimes people may need differing treatment in order to make their opportunities the same as others. In this case, how would someone who has a visual impairment be able to take a printed survey? What about someone who doesn’t speak English? These patrons would likely ignore your survey, and without demographic questions on language and disability, the omission of these identities might never be known. Upon analyzing your data, conclusions might be made to suggest, “X% of patrons felt this way about x,y, and z.” In reality, your results wouldn’t represent all patrons—only sighted, English-speaking patrons. 

Who has access to your survey? 

Start by thinking about who you want to answer your survey—your target population. Where do they live? What do they do? What identities do they hold? Consider the diversity of people that might live within a more general population: racial and ethnic identities, sexual orientation, socio-economic status, age, religion, etc. Next, think through the needs and potential barriers for people in your target population, such as language, access to transportation, access to mail, color blindness, literacy, sightedness, other physical challenges, immigration status, etc. Create a distribution plan that ensures that everyone in your target population—whether they face barriers or not—can access your survey easily. Here are some common distribution methods you could use: 

  • Direct mail – Here’s more information about how to do a mail survey and its advantages and disadvantages.
  • Online – For more information on how to make your online survey accessible, check out this article from Survey Monkey.
  • Telephone – In a telephone survey, someone calls the survey taker and reads them the questions over the phone while recording their answers.
  • In-person – Surveys can also be administered in person with a printed stack of surveys or a tablet. However, with this approach you might run into the dangers of convenience sampling.

Depending on your target audience, surveys are rarely one-size-fits-all. The best plan is often a mixed-methods approach, where you employ multiple distribution strategies to ensure equitable access for all members of your target population. 

Who is and isn’t taking your survey?

Great! You’ve constructed a distribution plan that you feel can equitably reach your target population, but did it work? The only way to know for sure is by collecting certain demographic information as part of your survey. 

As library professionals, collecting identifying information can feel like a direct contradiction to our value of privacy. Yet, as a profession we are also committed to equity and inclusivity. When administering a survey, sometimes it’s necessary to collect demographic data to better understand who is and isn’t being represented in the results. Questions about someone’s race, ethnicity, income level, location, age, gender, sexual orientation, etc. not only allow us to determine if those characteristics impact someone’s responses, but also help combat the erasure of minority or disadvantaged voices from data. However, it’s important to note that: 

  1. You should always explicitly state on your survey that demographic questions are optional, 
  2. You should ensure responses remain anonymous either by not collecting personal identifying information or making sure access to that information is secure, and 
  3. You should only collect demographic information that’s relevant and necessary to answer your particular research question.

Compare the data from your demographic questions with who you intended to include in your target audience. Are there any gaps? If so, re-evaluate your distribution plan to better reach these sub-groups, including speaking to representatives of the community or people who identify with the group for additional insight. Make additional efforts to distribute your survey, if necessary.
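As a rough illustration of that comparison, here is a minimal sketch that contrasts the share of respondents in each group with that group’s share of your service area. Every number and group label below is invented; you would substitute your own survey results and community data (for example, from the census).

# A minimal sketch of checking for representation gaps; every number and
# group label below is invented for illustration.
respondents = {"Spanish-speaking households": 4, "English-speaking households": 96}   # % of survey respondents
community   = {"Spanish-speaking households": 18, "English-speaking households": 82}  # % of service area

for group, community_share in community.items():
    survey_share = respondents.get(group, 0)
    gap = community_share - survey_share
    if gap > 5:  # flag groups underrepresented by more than 5 percentage points
        print(f"{group}: {survey_share}% of respondents vs. {community_share}% of community -- consider more outreach")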

Conclusion

Inequities are perpetuated by research and evaluation when we fail to ensure our data collection methods are inclusive and representative of everyone in our target group. The absence of an equitable distribution plan and exclusion of relevant demographic questions on your survey runs the risk of generating data that maintains current power structures. The data will produce conclusions that amplify the experiences and perspectives of the dominating voice while simultaneously reproducing the idea that their narrative is representative of the entire population. Individuals who have historically been excluded will continue to be erased from our data and the overarching narrative.

Colorado Talking Book Library 2020

Results from the 2020 Colorado Talking Book Library (CTBL) patron survey are in! Survey respondents gave CTBL high marks again with 99% rating CTBL’s overall service as good or excellent in 2020. This is the ninth survey in a row (over 16 years) where 98% or more of respondents rated CTBL’s overall service as good or excellent.

The Colorado Talking Book Library provides free library services to Coloradans who are unable to read standard print materials. This includes patrons with physical, visual, and learning disabilities. The CTBL collection contains audio books and magazines, Braille books, large print books, equipment, and a collection of descriptive videos. In October 2020, CTBL was serving 6,190 active individual patrons and 605 organizations, which include health facilities and retirement homes.

In partnership with CTBL, the Library Research Service (LRS) has developed and administered a biennial patron survey since the fall of 2004. This year’s survey presented distinct challenges, as it was administered during the COVID-19 pandemic. CTBL’s building closed to walk-in service on March 20, 2020, but the library continued to operate and provide services for CTBL patrons despite the extraordinary circumstances. The 2020 survey asked questions about the devices patrons use, how they decide what to read next, how they value CTBL, and more.

To read this year’s full report, click here. To view the infographic, click here.

Report:

CTBL 2020 report

Infographic:

CTBL infographic

Guest Post: Why Use Inclusive Language

The Colorado State Library (CSL)’s Equity, Diversity, and Inclusivity Team (EDIT) is dedicated to raising awareness about EDI issues and spotlighting those values in Colorado’s cultural heritage profession. This guest post is the first in CSL’s new blog series that will regularly be posted on Colorado Virtual Library here. Twice a month, members of the LRS team will be looking at EDI research and how it applies to the library profession. We encourage you to visit the CVL website to learn more! 


Using appropriate terminology is a vital part of being an effective communicator. Using inclusive language is a way of showing consideration for everyone we meet. It is a way of recognizing, accepting, and sometimes celebrating personal characteristics such as gender, race, nationality, ethnicity, religion, or other attributes that make up a person’s identity. Using inclusive language centers the individual person and is one way of showing solidarity, allyship, and just plain old kindness. In a profession that aims to foster a welcoming, respectful, and accessible environment, inclusive language should be part of the everyday vernacular of library staff.

So, what is inclusive language?

As the Linguistic Society of America puts it:

Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities.

Inclusive language is the intentional practice of using words and phrases that correctly represent minority—and frequently marginalized—communities, such as LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, and Queer/Questioning), BIPOC (Black, Indigenous, and People of Color), people with disabilities, people with mental health conditions, immigrants, etc. The key is to avoid hurtful, stereotypical language that makes individuals feel excluded, misunderstood, and/or disrespected. The use of inclusive language acknowledges that marginalized communities have ownership over the terminology that they use to refer to themselves, not the majority. It should also be noted that terminology isn’t necessarily ubiquitous across an entire group.

Keeping up-to-date

You might have said to yourself: there are so many new words and phrases nowadays, it’s hard to keep up! You might also worry about “saying the wrong thing.” Rest assured that language is always evolving as social, cultural, and technological changes occur, and you’re not expected to know everything all of the time. A willingness to learn and an awareness that you don’t have all the answers are extremely helpful traits that can aid in building trust with the people you meet.

One resource to keep in mind is the Pacific University’s extensive glossary of Equity, Diversity & Inclusion terms. Northwestern’s Inclusive Language Guide also offers a lot of examples of preferred terms.

Centering the individual first

Inclusive language centers the individual by referring foremost to someone as a person. Doing so reinforces the idea that someone is not defined by certain characteristics, such as race, religion, or disability. For example, it is still fairly common to refer to a person with a disability as simply “disabled.” It is now becoming more standard to use the phrase “Person with a disability.” The aim is to acknowledge the individual person first; this is also known as person-first or person-centered language. For example, “She is a person with a disability” rightfully acknowledges that this person has a disability, but they are not one-and-the-same, or synonymous with that disability. For more on inclusive language with respect to disability, check out this guide by the Stanford Disability Initiative.

Another way of thinking about centering the individual is with respect to race and ethnicity. Instead of referring to “a black” or “a Jew,” simply remembering to add the word “person” (i.e., a black person, a Jewish person) affirms that you are describing a person above all, while making it clear that you are not defining someone based on a single trait.

Pronouns: If you’re not sure, ask

Mostly we use the pronouns that are consistent with the person’s gender expression regardless of what we think their biological sex might be. If you are unsure of how to refer to an individual or what the correct words to use may be, asking respectful questions creates an opportunity for learning and the person you are asking may—or may not, as is their right—wish to affirm their identity to you. If you are unsure of a person’s pronouns, and it is appropriate to ask, keep it simple with something like, “Would you mind sharing what pronouns I should use when speaking to you?” In the case of gender identity, it is always better to ask than to assume. For more information on LGBTQ+ inclusive language, check out the Ally’s Guide to Terminology by GLAAD.

Always use a transgender person’s chosen name. Also, a person who identifies as a certain gender should be referred to using pronouns consistent with that gender. When it isn’t possible to ask what pronoun a person would prefer, use the pronoun that is consistent with the person’s appearance and gender expression.

-From GLAAD’s Ally’s Guide to Terminology

Do your research

Inclusive language is a broad and evolving topic. As with most things, doing a little bit of solo research can go a long way. Try to utilize reliable, research-based sources whenever possible, and also seek out the voices of experts from diverse backgrounds.

Conclusion

Intentionally using and remaining receptive to the appropriate terminology are key ways of giving others the dignity they deserve. Library staff engage with an intersection of many different types of people on a day-to-day basis. It is critical that we reinforce what libraries represent as an inclusive place for all by using the language that mirrors our values.

By Michael Peever, Consultant Support Specialist at Colorado State Library

Bad Survey Questions, part 2

Don’t let those bad survey questions go unpunished. Last time we talked about leading and loaded questions, which can inadvertently manipulate survey respondents. This week we’ll cover three question types that can just be downright confusing to someone taking your survey! Let’s dig in. 

Do you know what double-barreled questions are and how to avoid them?

When we design surveys it’s because we’re really curious about something and want a lot of information! Sometimes that eagerness causes us to jam too much into a single question and we end up with a double-barreled question. Let’s look at an example: 

         How satisfied are you with our selection of books and other materials? 

O    Very dissatisfied
O    Dissatisfied
O    Neither satisfied nor dissatisfied 
O    Satisfied
O    Very satisfied

Phrasing the question like this creates two problems. First, if a respondent selected “very dissatisfied,” when you analyzed the data you wouldn’t know if they were saying they were very dissatisfied with only the books, only the other materials, or both. Second, if the respondent was dissatisfied with the book selection but very satisfied with the DVD selection, they wouldn’t know how to answer this question. They would have to either choose an inaccurate response or stop the survey altogether.

Survey questions should always be written in a way that measures only one thing at a time. So ask yourself, “What am I measuring here?” In this example, the answer is two things at once: satisfaction with books and satisfaction with other materials. That is the double barrel.

Two ways of spotting a double-barreled question are: 

  1. Check if a single question contains two or more subjects, and is therefore measuring more than one thing.
  2. Check if the question contains the word “and.” Although not a foolproof test, the use of the word “and” is a good indicator that you should double check (pun intended) for a double-barreled question.
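The second check is easy to automate if you are reviewing a long list of draft questions. Here is a minimal sketch; the example questions are invented, and a human still needs to confirm each flag, since “and” is not a foolproof test.

# A minimal sketch of the second check above: flag draft questions containing
# "and", which may (but does not always) signal a double-barreled question.
# The example questions are invented for illustration.
questions = [
    "How satisfied are you with our selection of books and other materials?",
    "How satisfied are you with our selection of books?",
    "How often do you attend storytime?",
]

for q in questions:
    if " and " in q.lower():
        print(f"Double check (pun intended): {q}")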

You can easily fix a double-barreled question by breaking it into two separate questions.

How satisfied are you with our selection of books?
How satisfied are you with our selection of other materials?

This may feel clunky and cause your survey to be longer, but a longer survey is better than making respondents feel confused or answer incorrectly. 

Do you only use good survey questions every day on all of your surveys, always?

Life isn’t black and white, so survey questions shouldn’t be either. Build flexibility into your response options by avoiding absolutes in questions and answer choices. Absolutes force respondents into a corner, and the only way out is to give you useless data.

When writing survey questions, avoid using words like “always,” “all,” “every,” etc. When writing response options, avoid giving only yes/no answer options. Let’s look at the examples below:

                    Have you attended all of our library programs this summer?  O Yes   O No

The way this question and its response options are phrased would force almost any respondent to answer “no.” Read literally, you’re asking whether someone went to every single library program you offered this summer, whether or not it was relevant to them or appropriate for their age group. Some respondents might interpret the question as you intended, but why leave it up to chance? Here’s how you might rewrite the absolute question:

How many of our library programs did you attend this summer?

Instead of only providing yes or no as answer choices, you should also use a variety of answer options, including ranges. For instance, for the rewritten question about program attendance, your answer options could be:

O    I have not attended any
O    1-3
O    4-6
O    7-9
O    10+
O    I do not know

Chances are, a respondent would feel like they easily fall into one of these categories and would feel comfortable choosing one that’s accurate.

Have you indexed this LRS text in your brain? 

In libraryland, we LOVE acronyms and jargon, but they don’t belong in a survey. Avoid using terms that your respondents might not be familiar with, even if they’re deeply familiar to you. If you use an acronym, spell it out the first time you mention it, like this: Library Research Service (LRS). Be as clear and concise as possible while keeping the language uncomplicated. For instance, if you are asking how many times someone used a PC in the last week, be sure to explain what you mean by PC and include examples, like below:

In the last week, how many times have you used a PC (iPad, laptop, Android tablet, desktop computer)?

Do you remember all the tools and tips we covered in our bad survey questions segment?

Hey, that’s ok if not! Here’s a quick review of dos and don’ts for your surveys:

   Do use neutral language.

     Don’t use leading questions that push a respondent to answer a question in a certain way by using non-neutral language.  

   Do ask yourself who wouldn’t be able to answer each question honestly.

     Don’t use loaded questions that force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation.

   Do break double-barreled questions down into two separate questions.

     Don’t use double-barreled questions that measure more than one thing in a question.

   Do build flexibility into questions by providing a variety of response options.

     Don’t use absolutes (only, all, every, always, etc.) that force respondents into a corner.

   Do keep language clear, concise, and easy to understand.

     Don’t use jargon or colloquial terms. 

 

Bad Survey Questions, part 1

In our last post, we talked about when you should use a survey and what kind of data you can get from different question types. This week, we’re going to cover two of the big survey question mistakes evaluators make and how to avoid them so you don’t end up with biased and incomplete data. In other words—all your hard work straight into the trash!

Do you think a leading question is manipulative? 

Including leading questions in a survey is a common mistake evaluators make. A leading question pushes a survey respondent to answer in a particular way by framing the question in a non-neutral manner. These responses therefore produce inaccurate information. Spot a leading question by looking for any of these characteristics:

  • They are intentionally framed to elicit responses according to your preconceived notions.
  • They have an element of conjecture or assumption.
  • They contain unnecessary additions to the question.

Leading questions often contain information that a survey writer already believes to be true. The question is then phrased in a way that forces a respondent to confirm that belief. For instance, take a look at the question below. 

Do you like our exciting new programs? 

You might think your programs are exciting, but that’s because you’re biased! This question is also dichotomous, meaning respondents must answer yes or no. While dichotomous questions can be quick and easy to answer, they don’t allow any degree of ambivalence or nuance. Using the word “like” also puts a positive assumption right in the question, pushing the respondent in that direction. A better way to write this question would be:

How satisfied are you with our new programs?

In order to avoid leading questions, remember to do the following: 

  • Use neutral language. 
  • Keep questions clear and concise by removing any unnecessary words.
  • Do not try to steer respondents toward answering in a specific way. Ask yourself if you think you know how most people will answer. This might highlight assumptions you’re making.

Why are loaded questions so bad?

Similar to leading questions, loaded questions force a respondent to answer in a way that doesn’t accurately reflect their opinion or situation. These types of questions often cause a respondent to abandon a survey entirely, especially if the loaded questions are required. Common characteristics of loaded questions are: 

  • They use words that are overcharged with positive or negative meaning.
  • They force respondents into a difficult position, such as making them think in black and white terms.
  • They presuppose the respondent has done something.

Let’s look at some examples of loaded questions. Put yourself in the shoes of different respondents. Can you think of someone that would have trouble or feel uncomfortable answering them?

Have you stopped accruing late fees at the library?

How would someone who has never accrued late fees answer this question? It places them in a logical fallacy. If they answer “yes,” they are saying that they once had late fees. If they answer “no” because they never started accruing late fees, then they are saying that they are still getting charged.

Why did you dislike our summer reading program?

How would someone who likes the summer reading program answer this question? This places someone in a logical fallacy. Any answer choices they select would be inaccurate. The question is loaded because it presupposes that respondents felt negatively about the program.

When you used our “ask a librarian” service, was the librarian knowledgeable enough to answer your question?

What if the librarian wasn’t knowledgeable, but was helpful? Maybe they didn’t know the answer, but they pointed you in the right direction so that you could find the answer. This phrasing causes the respondent to think in black and white terms, either they gave you the answer or nothing. Not to mention this question assumes you’ve used the service at all! 

Here are some ways to avoid using loaded questions:

  • Test your survey with a small sample of people and see if everyone is able to answer every question honestly.
  • If you aren’t able to test it, try putting on multiple hats yourself and asking, “Who wouldn’t be able to answer this?”
  • You can also break questions down further and use what’s called “skip logic.” This means you would first ask respondents, “Have you used our ask a librarian service?” If they answer “yes,” then you would have them continue to a question about that service. If they answer “no,” they would skip to the next section. (See the sketch below.)
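Here is a minimal sketch of that skip-logic pattern as a simple script, for illustration only. In practice, most online survey tools (for example, Qualtrics, SurveyMonkey, or Google Forms) have built-in branching or skip-logic features, so you would set this up in the tool rather than code it yourself; the questions and rating scale below are invented.

# A minimal sketch of skip logic for an interviewer-administered survey.
# The questions and rating scale are invented for illustration.
used_service = input("Have you used our 'ask a librarian' service? (yes/no) ").strip().lower()

if used_service == "yes":
    # Only people who used the service see the follow-up question.
    rating = input("How helpful was the response you received? (1 = not helpful, 5 = very helpful) ")
    print(f"Recorded rating: {rating}")
else:
    # Everyone else skips straight to the next section.
    print("Skipping to the next section.")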

How useful was this blog post for learning about surveys and helping you file your taxes?

As the bad question example above might allude to, we aren’t done with this topic! In our next post, we’ll talk about double-barreled questions and absolute questions, so stay tuned! As always, if you have any questions or feedback we’d love to hear from you at LRS@LRS.org.