Reading (and Recording) the Room: Focus Groups

Ready to polish up your people skills? This month we are taking a step back from analysis and turning to data collection again. In previous posts we touched on different data collection methods, and in May we discussed the process of coding qualitative data. Understanding the basics of qualitative analysis opens up a world of possibilities for evaluation, and it definitely helps to know what you are taking on before beginning qualitative research, since coding is an extensive process. With this background on coding qualitative data, hopefully you feel more equipped to begin collecting it. Today we will focus our attention on a qualitative data collection method that we have mentioned several times but have yet to delve into: focus groups!

Introductions

Focus groups consist of several selected participants who take part in an intentional conversation, directed by a moderator, in order to gather community input. They are used to answer open-ended questions and gain a deeper understanding from diverse viewpoints. In this post we will outline what successful focus groups look like as well as acknowledge their limitations.

If you have not participated in or led a focus group before, here’s a general rundown of how they start. Once everyone is brought together in the same space (everyone includes a moderator, usually an assistant moderator or note taker, and the participants), the moderator facilitates introductions. Icebreakers and friendly chatter can help the participants feel more relaxed, which will lead to a more productive session. Ground rules that foster a respectful, safe environment should be clearly established by the moderator, and consent to participate should be obtained from all participants. As the moderator, you want to be mindful of time during the introductions. Remember, the participants are spending their valuable time to be there, and you want to make sure the resources you spent to bring them together yield valuable insights.

Moving Past the Small Talk

Jumping into your predetermined questions will begin the true discussion. Throughout the allotted time, moderators have the incredibly important job of fostering a safe environment for participants to speak candidly. The quality of data you collect hinges on the ability of the moderator to maintain control of the conversation while ensuring participants are at ease. A level of trust needs to be established to ensure that the participants are comfortable speaking their mind, which is essential to gather reliable data. Moderators can foster this environment through these ten actions:

  • Giving verbal and nonverbal listening cues
  • Asking follow-up questions
  • Asking for clarification before assuming what someone means
  • Being flexible if important, unanticipated points arise 
  • Steering the conversation back on topic if it goes astray
  • Emanating confidence
  • Staying neutral 
  • Always being respectful of differing opinions
  • Ensuring everyone has an equal chance to share
  • Avoiding leading questions

If you find yourself moderating a focus group, remember that you are ultimately the leader of the group, which gives you the power to politely direct the conversation. This is particularly important if one participant is dominating the conversation and others are being left out. 

All in all, leading a focus group requires impeccable social and leadership skills. 

Active Listening 

Conducting a focus group can reveal surprising information and perspectives that may be crucial to your research but that you would not have thought to ask about in an interview or survey on your own. Selecting a diverse group of participants can reveal aspects of your study you never knew you were missing. While conducting a single focus group still ranks low on the community-based participatory research (CBPR) continuum, it can be one tool out of many to begin incorporating more CBPR practices into your library.

Because you may learn something significant that steers your research, it is best to conduct a focus group early on, so it is easier to incorporate your findings into your research moving forward. The focus group questions should all directly relate to your purpose, but the unique and most advantageous aspect of a focus group is the social dynamic and dialogue that prompts complex idea sharing. 

Inhibitions

Before you get too excited by all the thought-provoking feedback that focus groups can produce, make sure you are aware of some of their pitfalls as well. For starters, focus groups are often too small to provide a representative sample of the population. If you are hoping to generalize your data to a large population or to make transformative planning decisions, a focus group should not be your only data source, but it can be a powerful source of data when combined with other methods.

Although a single focus group may take less time and money to conduct than five separate interviews, focus groups also take a lot of planning to be well executed and should not be used as a rushed way to collect data. Similarly, focus groups require a skilled moderator in order to collect reliable data, and not having the right person available could cause your focus group to be less effective.

Possibly the most important thing to consider is that focus groups cannot guarantee anonymity due to the number of participants listening to each other. Sensitive topics that participants may not want to discuss in front of others are not well suited for focus groups since people are likely to hold back and not share their true internal reactions. In any focus group, regardless of topic, the moderator’s and participants’ expressed reactions can significantly influence what is subsequently shared. Therefore, focus groups can be susceptible to producing a consensus that may in fact be misleading, otherwise known as groupthink. This is another reason focus groups are often used in conjunction with other data collection methods. 

Takeaways

Clearly, focus groups have their strengths and weaknesses, but if well conducted they can bring invaluable, diverse perspectives to your research. Don’t forget to record the session so you have a complete transcript available to code! To wrap up, here are five key points to remember about focus groups. 

  1. It is essential to have a skilled moderator who will build a safe sharing space. 
  2. Remember to gain consent from each participant and lay ground rules to ensure everyone’s contributions are respected.
  3. Focus groups are generally too small to provide a representative sample, and therefore data from a single focus group should not be generalized to a large population. 
  4. Make sure participants are aware that confidentiality cannot be guaranteed. 
  5. Conduct focus groups early on in your study so that the unique insights they bring can inform your work moving forward. 

Thanks for reading! Next, we will cover some important logistical points such as how to select participants and ask the right questions. If you have any questions or comments we would love to hear from you. You can contact us at LRS@LRS.org.

Mapping the Methods: Content Analysis Part 2

Welcome back! I am excited to dive back into content analysis with you. It is no secret that content analysis can be far from a walk in the park and is possibly more comparable to following a treasure map across a remote island. Therefore, I will fill this post with a review of what we have already discussed, the final steps for analysis, obstacles to be aware of along the way, and a few helpful hints. 

Returning to Coding

We left off last week on coding, where short labels that represent meaning are applied to qualitative data. Returning to this process throughout your analysis allows you to condense repetitive codes and make sure you are heading in a direction that is consistent with your data as a whole, to avoid bias and better answer your research questions (or find the treasure, so to speak).

Coding is actually the culmination of two steps: first, decoding the data to find its meaning, and second, encoding it by applying a word or phrase that represents this meaning. Keeping these two steps in mind helps demystify the process of coding as a whole. The codes you create and how you apply them will depend solely on your data and the questions you are trying to answer. When creating codes it is helpful to think of them as not only labels but also links that piece your data together.

Coding is a key step that organizes your data for further analysis. Once codes are applied to your data, you are ready to begin the more abstract part of analysis by categorizing the codes and searching for themes. 

Categorization of Codes

Categorizing codes is essentially synthesizing them into your analysis by identifying patterns in the coded data. Categories are phrases that encompass an idea that multiple codes fall within. While searching for patterns of underlying meaning in your data, remember that patterns often develop by grouping similarities but can also develop through grouping data by outliers, frequency, order, causes, or other relationships. These possible pattern configurations are all helpful tools for categorizing your coded data.

To continue with our previous example, in a response to the question “Do you feel that the library is an essential community resource, why or why not?” the code “family support” may fall into the category “highly valued early learning programs.” Other codes such as children’s programs, reading development, and storytime may also fall within this category.
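
To make this grouping step concrete, here is a minimal sketch in Python that continues the example above. The category and code names mirror the ones in this post (plus a hypothetical "access to technology" category), and the structure is just one possible way to record which codes fall under which category, not a prescribed tool.

# Hypothetical category definitions: each category collects several related codes
categories = {
    "highly valued early learning programs": {
        "family support", "children's programs", "reading development", "storytime",
    },
    "access to technology": {"public computers", "free wifi"},
}

def categorize(code):
    """Return the category a code falls under, or None if it does not fit yet."""
    for category, codes in categories.items():
        if code in codes:
            return category
    return None  # an uncategorized code may point to an outlier or a missing category

print(categorize("storytime"))      # highly valued early learning programs
print(categorize("meeting rooms"))  # None, so revisit your categories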

Data outliers should not be viewed as problems but as points of interest and discovery. In this case, evidence that some patrons raising children do not use the library’s early learning resources does not necessarily make the previously mentioned category (“highly valued early learning programs”) wrong, but may lead to a new category entirely or reveal how these programs are more accessible to certain patrons than others. 

Developing Themes

Themes are a more abstract level of insight that content analysis might reveal, meaning they are general and applicable beyond a single study. Themes may not always evolve from your coding, and that is OK. Codes and categories can still be informative and point to paths for further research to answer your key questions accurately even if themes do not develop.

Themes develop when you identify consistent patterns that stretch across the coded and categorized data and bring insight to your main research question. Once you have charted these patterns it is time to start digging for the treasure! Themes are not found by leaping to conclusions and away from your data; they develop through careful analysis of codes and categories to triangulate meaning based on evidence.

If multiple categories point to it, a theme developed from our example study could be “early learning programs bring people together from across the community.” This is a concrete answer to the overarching research question “How can libraries increase a population’s sense of community?” and it could be used to inform decision making on future programming in your library.

Obstacles

Content analysis is a time intensive and sometimes frustrating process. You must be willing to dedicate time and effort to it for your conclusions to be accurate and limit bias. Also, it focuses solely on the content of your data without taking into consideration outside factors such as societal context. This limited focus, and the reduction of data to codes, categories and themes, may allow nuances of meaning to be lost. Condensing the data can be problematic if important aspects of it are ignored, or it can be exactly what you need to do to find the buried answers you are looking for. 

A Helpful Hint!

Coding does not have to be a lonely process. In fact, collaborating with a team can help you navigate this work and make the whole journey more enjoyable. As we discussed in the last post, each person will not apply the same codes to each excerpt and that is OK. Being open to a range of perspectives will bring insights to the data that you may never see alone. The possibilities for coding are enormous and narrowly focusing on one route can obscure key information and get you stuck.

Finally, make sure to take careful notes of your process and the codes you use. This will be helpful for you to refer back to throughout your analysis, and it will be helpful for those you share your study with to understand the work you put into it! 

Conclusion

Initially, qualitative data may feel overwhelming or ambiguous, but coding provides a map for condensing the data until you can categorize it and find meaning based within the text. It is rewarding when themes appear that were initially buried in the data. As you are putting time and effort into it, make sure to keep reminding yourself of the research question, the importance of your work, and your end goal. You may uncover key information when you least expect it! Coding reappears in other methods for qualitative analysis, so be sure to keep this information in mind as we continue this chapter.

Mapping the Methods: Content Analysis Part 1

Hello data enthusiasts! Let’s return to our exploration of qualitative analysis. Last time we uncovered a few ways qualitative analysis can expand research findings by looking beyond number data for better insight on human experiences. Now I want to explore strategies for putting qualitative analysis into practice.

Concentrating on Content Analysis

As we discussed last time, qualitative analysis is flexible and adaptive to different types of data. As you may have already guessed, this means there are multiple methods for qualitative analysis depending on the kind of research you are conducting, the form of your data and the questions you are asking. Content analysis is one of many methods of qualitative analysis. It carefully filters, categorizes and condenses qualitative data sets (often text based) to discover hidden (or not-so-hidden) meanings! It is one of the most common methods of conducting qualitative analysis, and so it is a great place for us to start this chapter.

The key question that content analysis helps answer is, how do you categorize this textual data to best identify important patterns, anomalies, and relationships that answer your research question?

Planning for Success

You can visualize the content analysis process as following a treasure map where the treasure (buried in the data) is the insights your analysis will eventually reveal!

First, there’s quite a bit of preparation that needs to take place to ensure your analysis goes as smoothly as possible. For content analysis you should clearly identify the main question you want your data to answer. In other words, what is the treasure that you want to find? Content analysis is a long, strenuous process, and having a specific goal will help direct you along the way. To build off our example from the last post, a survey that asks the open-ended question, “Do you feel that the library is an essential community resource, why or why not?” may have a driving analysis question of, “How can libraries increase a population’s sense of community?”

However, as the analyst, you may now be staring at fifty lengthy responses, all of which have a person behind them with their own unique perceptions and experiences they want to share with you. You know there is useful information within the responses, and you want to make sure you are considering everyone’s responses by using correct research methods. 

This means it’s time to read your data, then read it again! While it takes both patience and time, this step can also streamline the rest of the process. You don’t need to read it with any specific goal in mind except to be open-minded, consciously consider any biases you have, and take note of your general impressions. Instead of fixating on specific responses, try to take a step back and look at the data as a whole. You want to know your data thoroughly as you embark on the next step, just as you would want to know the map before setting off on an adventure!

Next Steps

Once you know your data backward and forward it is FINALLY time to start your content analysis with a method called coding. Coding is essentially categorizing the text with descriptive labels, or codes.

Coding has multiple steps, but the process is also repetitive and cyclical. Remember, you can always return to previous steps and adjust something so your analysis better encompasses the data. It’s unlikely you will find the treasure immediately, so always be willing to backtrack if necessary! 

Meaning Units

Before you create codes for your data, it may help to condense text into sections that hold meaning, called meaning units. Don’t let this step intimidate you. You still want these meaning units to be close to, if not literally, the text of the data. For example, perhaps someone’s response to our example question includes the sentence, “I’ve always enjoyed the library, but it was a particularly great resource for me while raising my children.” A condensed meaning unit you may take from this is “a great resource while raising my children.”

It is important that your meaning units relate as directly to the text as possible, and that you are careful not to over-interpret or otherwise misrepresent the responses in the data set.

Codes

You begin to interpret the data and take it to a slightly more abstract form with the next step, which is applying codes. An example of coding is taking the meaning unit “a great resource while raising my children” and assigning the label “family support.” If these are not the specific words you would use to describe this meaning unit, that is OK. Codes will vary from person to person and also change depending on the focus of your driving question.

As long as you work diligently to keep the coding faithful to the text, while acknowledging and limiting your bias from previous experiences on the subject, codes will not be right or wrong. Many codes will be used repetitively throughout your data analysis. You may assign the code “family support” to other meaning units from other survey participants if appropriate. There will likely be certain codes that are used often and other outlying codes that are not. There is also no right or wrong answer for the number of codes that you use. It depends on the size of your data set and the variations within it. 
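
If you prefer to keep your notes digital, here is a minimal sketch of what this bookkeeping could look like in Python. The extra meaning units and the "job searching" code are hypothetical additions for illustration, and a spreadsheet or qualitative analysis software would work just as well; the point is simply that each meaning unit gets a code, and code frequencies can then be tallied.

from collections import Counter

# Hypothetical code assignments: each meaning unit from the survey is paired with a code
coded_data = [
    {"respondent": 1, "meaning_unit": "a great resource while raising my children", "code": "family support"},
    {"respondent": 2, "meaning_unit": "my kids learned to love reading at storytime", "code": "family support"},
    {"respondent": 3, "meaning_unit": "I use the free wifi to apply for jobs", "code": "job searching"},
]

# Tally how often each code appears across the data set
code_frequencies = Counter(item["code"] for item in coded_data)
print(code_frequencies.most_common())  # [('family support', 2), ('job searching', 1)]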

It can be helpful to use your intuition while creating codes as long as you are still basing these labels in the text and staying aware of how your biases will affect their selection. Similarly, there may be aspects of a map you intuitively understand, but it wouldn’t be very smart to throw the map away entirely and assume you know the way yourself.   

While coding you may need to change meaning units that suddenly don’t make sense moving forward. Remember this backtracking is a normal part of the process to make sure the codes you are using reflect the whole of the data the best that they can. 

Conclusion

After creating and applying codes to your data, your analysis is off to a good start! Stay tuned next week to learn where to go from here. We will explore categorizing the codes you create to find themes and finish your analysis! 

Here is a quick reflection on what this post covered:

  1. Content analysis is a common method for qualitative analysis that categorizes data to reveal key research findings.
  2. There is a lot of preparation involved in this long process. Take notes to track your work and know your research question and data thoroughly!
  3. Meaning units help you identify the meaningful parts of the text that you will code. 
  4. Codes are descriptive labels you apply to meaningful parts of your data to make sense of it all. 
  5. Your intuition can be helpful, but only if you are aware of how your biases may affect your analysis. Find a balance!

Libraries Invested in Our Planet

Happy Earth Day! The theme for this Earth Day is “Invest in Our Planet,” and so it is a fantastic time to reflect on our library’s role within the community, how it supports the American Library Association’s (ALA) core value of sustainability, and the importance of investing our resources, time and energy towards sustainable practices.

As the climate crisis develops, I have been thinking a lot about what I can do to have the most far-reaching impact on this global issue (with very real local effects already plain to see here in Colorado). The answer is quite simple: talk about it! By talking about it, you bring awareness of the climate crisis to others around you, which influences them to act alongside you and participate as one small, but crucial, building block of an international effort. In order to make the large-scale, systemic changes required to reduce greenhouse gas emissions and revive our planet’s ecosystems, all kinds of people from all walks of life must work together, and the first step to this collaboration is talking about the issue. 

As both a community hub and a provider of reliable information, libraries are set up to help facilitate conversations and connections in the community. The ALA outlines three key aspects for fostering sustainability within libraries, described below.

There are close ties between ALA’s three aspects of sustainability and the wording of this year’s Earth Day theme.  

Economy / Invest

As a public service, libraries must be sure they are investing responsibly in order to best meet their community’s needs. The word “invest” in this year’s Earth Day theme implies a focus on monetary resources, as well as time and energy. Bringing economic values to the forefront of sustainability efforts shows how economic and environmental values do not have to contradict each other. Sustainable initiatives are not at odds with economic growth; in actuality, economic growth is unstable if it is based on unsustainable resources and practices. Economic and sustainable goals go hand in hand, and it is crucial that libraries invest their resources, time and energy in a socially responsible manner to support both!

Equity / Our

We need to celebrate the diversity of humanity on our planet. As environmental challenges increase economic disparities and displace large numbers of people from their homes, sustainability will only be achieved if we support each other and strive for equity. Equity refers to addressing the differing needs of human populations so we can all reach our full potential. This planet belongs to all of us, and growing equity requires that underrepresented populations are both included and celebrated within the “our” of “invest in our planet.” Access to information and educational opportunities is key to increasing equity. This serves as a reminder of the library’s responsibility to increase equity by continually responding to the needs of the whole community and paying particular attention to vulnerable or marginalized groups, many of which bear the worst impacts of climate change.

Environment / Planet

The first picture of Earth taken from space mobilized the environmental movement and is said to have led to the first Earth Day! The perspective of this photo forced people to admit, many for the first time, just how finite our planet is. The environment is not a concept that anyone can fully disconnect from because we exist as a part of the environments around us. All our material belongings have their roots in the natural resources that are provided by our planet. Remembering that we only have one planet Earth brings to light the importance of valuing our environment. Our home planet truly does deserve our time and resources. 

Of course, none of this is news to libraries. Libraries at their core are a sustainable system. If a book is borrowed five times, the production costs and paper resources for up to five individual copies are saved, assuming each borrower would otherwise have purchased their own copy. Scale that up to incorporate all the books checked out in every library system and it becomes clear that libraries are a natural model for the sustainable conservation of resources.

Call to Action!

That being said, our rapidly changing times demand evolving solutions. Over thirty years ago the Green Library Movement was created to lead in sustainable practices. Today, one of the greatest challenges the Green Library Movement faces is thinking outside the box so that its impact encompasses efforts such as efficient building design, but also extends beyond a library building’s footprint to influence the community at a larger scale. You can find some more inspiration to incorporate economically feasible, socially equitable, and environmentally sound projects into your library here!

While it may be difficult to quantify, a library’s sphere of influence is powerful. So this Earth Day, and every day, it is worth taking time to reflect on how your library is investing in our planet.

New Season, New Chapter: Qualitative Analysis

Hello Everyone! I am honored to have the opportunity of continuing the LRS series staple Between a Graph and a Hard Place. Our last post began with “happy fall,” and now we are well on our way to Colorado’s mud season. Along with the change in seasons, a shift in topics feels like a great way to begin the series anew.

We left off discussing the use of observation as an important method for data collection and how to observe as unobtrusively as possible. To begin this new chapter, let’s take a step back to explore a type of research where observation is a crucial tool: qualitative analysis. 

The importance of numbers in research is impossible to overstate. While they can still be misleading, poorly displayed, or simply inaccurate, we can all agree there is something reassuring about having numerical data to back up your assertions. However, in a world filled with as many unique human experiences as ours, numbers alone (meaning quantitative research) can’t always give comprehensive and nuanced answers to every question, and that is where qualitative analysis shines.  

Background

In previous posts we established that qualitative analysis delves into data derived from stories and answers questions such as “why” and “how.” Now, let’s dig a little deeper. Qualitative analysis is not only a tool for stories. It is used to examine survey responses, feedback from focus groups, narratives, anecdotes, social media posts, secondary and primary sources and even artwork! In fact, practically anything that includes human expressions, perceptions, emotions, assumptions and/or experiences can be analyzed qualitatively. As muddy as this may seem at first glance, understanding the potential for qualitative analysis through multiple mediums is crucial to identifying where and how you can incorporate it into your own research.

What you might have picked up on already is that, in its broadest and most simplified sense, qualitative analysis is used to make sense of data that is not numerical. Qualitative analysis is a tool for qualitative research where language and behavior are studied to find patterns and/or anomalies that convey information about a data set.

Helpful Aspects of Qualitative Analysis

The social sciences have applied qualitative analysis to their research for over a century, but until the middle to late 20th century, qualitative data collection and analysis were thought to conflict with quantitative research methods. One reason for this is that qualitative research, as opposed to quantitative research, accepts that the researcher is never fully objective and detached from the data being examined. When analyzing language and behavior, it is important for researchers to be aware of and limit their biases by understanding and reflecting upon how their own experiences and assumptions are a lens through which the data is viewed. The researcher will be an aspect of the study in a qualitative analysis. 

The fact that qualitative analysis plays by different rules than quantitative analysis does not lessen the value of the insights that qualitative analysis uncovers. Additionally, quantitative and qualitative research do not have to be at odds with each other. These methods can work together to provide a better picture of the phenomenon under investigation. Here is an example of how qualitative analysis can bring new insights to a study:  

Let’s say a Likert scale, a quantitative tool, is used in an attempt to assess the extent to which library patrons view their library as an essential community resource. Participants in the study are asked how they feel about the statement “libraries are an essential community resource,” and one of many patrons selects the answer “strongly agree.” This is considered quantifiable because their selection of “strongly agree” counts as a tally toward the total number of participants selecting this answer. However, when the same question is asked and the same participant has the opportunity to give a narrative response, they write, “I agree with this statement, but the library is also where I found my community. I met my closest friends through library programs and afterwards became involved in community outreach through them.” 
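
As a small aside, the tallying that makes the Likert item quantifiable is easy to picture in code. Here is a minimal sketch with made-up responses; it shows how the quantitative side reduces each participant to a count, while the narrative responses keep the detail that a tally discards.

from collections import Counter

# Hypothetical survey results: a Likert rating plus an optional narrative answer per participant
responses = [
    {"likert": "strongly agree", "narrative": "The library is also where I found my community..."},
    {"likert": "strongly agree", "narrative": ""},
    {"likert": "agree", "narrative": "Mostly I just use the ebooks."},
]

# The quantitative analysis keeps only the tallies
print(Counter(r["likert"] for r in responses))  # Counter({'strongly agree': 2, 'agree': 1})

# The qualitative analysis works with the narrative text itself
narratives = [r["narrative"] for r in responses if r["narrative"]]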

Of course, this is only one response, and a detailed qualitative analysis would include an extensive data set. However, themes such as finding community and community outreach appear in this short answer and imply that the library builds community as much as it acts as a resource for it. If this is a consistent theme through multiple participant responses, it could be crucial enough to shift the focus of the study or open up avenues to find support for the library system in the future. 

Hopefully this sheds some light on how qualitative analysis can give you insights where quantitative analysis, when used alone, might fall short. Qualitative analysis can feel subjective and potentially problematic when compared to quantitative analysis, but the important thing to remember is that the two methods are apples and oranges and should not be judged by each other’s standards. In fact, the entire order of the research can be turned upside down in qualitative analysis because it is not always necessary to start with a single hypothesis that you are attempting to prove or disprove. Qualitative analysis allows for more flexibility throughout a study because you analyze data as data collection is taking place, not only at the end as is done in quantitative analysis. 

Conclusion

If you’re feeling a bit lost, or are just swamped with things to do today and need something quick to skim, here’s a recap of five points we just covered:

  1. There are times when your research questions will not be comprehensively answered by quantitative data; let qualitative data help! 
  2. Qualitative data takes many different forms. Widening your vision of what qualifies as “data” can reveal new opportunities for learning. 
  3. There are a variety of methods for analyzing qualitative data, but whether the researcher is using their intuition or computer software, they will never be fully removed from the research findings.
  4. Qualitative and quantitative analysis are two completely different approaches to data; viewing one through the lens of the other will only lead to frustration.
  5. Qualitative analysis allows for more flexibility to shift focus throughout the study as the data is analyzed.

These parameters for qualitative analysis will be used throughout the rest of this chapter as we begin to answer questions such as, how do you analyze this data accurately? When is it appropriate to incorporate qualitative analysis into your research? And what limitations does qualitative analysis have? I hope to explore the answers with you in future posts, and in the meantime, happy spring!  

New research: first year college students need support assessing authority

Intro
Can I trust this information? We use information constantly to learn, make decisions, and form opinions. Every day library staff in every setting strive to teach people how to find the information they need and how to identify trustworthy sources. But what is trustworthy? How can you tell? What about when sources contradict each other? What characteristics distinguish sources from each other?

Who
As a former information literacy librarian at a university, these questions haunted me when I was teaching. I was lucky to meet two librarians at the University of Denver (DU) who shared a passion for this topic: Carrie Forbes, the Associate Dean for Student and Scholar Services, and Bridget Farrell, the Coordinator of Library Instruction & Reference Services. Together, we designed a research project to learn more about how students thought about “authority as constructed and contextual,” as defined in the ACRL information literacy framework.

Why
As instructors, we had seen students struggle with the concept of different types of authority being more or less relevant in different contexts. Often they had the idea that “scholarly articles = good” and “news articles = bad.” Given the overwhelming complexity of evaluating information in our world, we wanted to help students evaluate authority in a more nuanced and complex way. We hoped that by understanding the perceptions and skills of students, particularly first year students, we could better teach them the skills they need to sort through it all.

How: Data collection
We designed a lesson that included definitions of authority and gave examples of types of information students could find about authors, like finding their LinkedIn page. The goal here was not to give students an extensive, thorough lesson on how to evaluate authority. We wanted to give them enough information to complete the assignment and show us what they already knew and thought. Essentially, this was a pre-assessment. (The slides for the lesson are available for your reference. Please contact Carrie Forbes if you would like to use them.)

The project was approved by the Institutional Review Board at DU. Thanks to a partnership with the University Writing Program, we were able to collect data during library instruction sessions in first year writing courses. During the session, students were asked to find a news article and a scholarly article on the same topic and then report on different elements of authority for both articles: the author’s current job title, their education, how they back up their points (quotes, references, etc.), and what communities they belong to that could inform their perspective. Then we asked them to use all of these elements to come to a conclusion about each article’s credibility, and, finally, to compare the two articles using the information that they had found. We collected their responses using a Qualtrics online form. (The form is available for your reference. Please contact Carrie Forbes if you would like to use it.)

Thanks to the hard work of Carrie and Bridget and the generous participation of eight DU librarians, 13 writing faculty, and over 200 students who agreed to take part after reviewing our informed consent process, we were able to collect 175 complete student responses that we coded using a rubric we created. Before the project was over, we added two new coders, Leah Breevoort, the research assistant here at LRS, and Kristen Whitson, a volunteer from the University of Wisconsin-Madison, who both contributed great insight and many, many hours of coding. Carrie and Bridget are working on analyzing the data set in a different way, so keep your eyes out for their findings as well.

Takeaway 1: First year students need significant support assessing authority

This was a pre-assessment and the classroom instruction was designed to give students just enough knowledge to complete the assignment and demonstrate their current understanding of the topics. If this had been graded, almost half (45%) of the students would have failed the assignment. Most instruction librarians get 45 minutes to an hour once a semester or quarter with a group of students and are typically expected to cover searching for articles and using databases, at the very least. That leaves them very little time to cover authority in depth.

[Pie chart: 45% of students would have failed the assignment if it had been graded.]

Recommendations

  • The data demonstrates that students need significant support with these concepts. Students’ ability to think critically about information impacts their ability to be successful in a post-secondary environment. Academic librarians could use this data to demonstrate the need for more instruction dedicated to this topic.
  • School librarians could consider how to address these topics in ways appropriate to students in middle school and up. While a strong understanding of authority may not be as vital to academic success prior to post-secondary education, the more exposure students have to these ideas the more understanding they can build over time.
  • Public libraries could consider if they want to offer information literacy workshops for the general public that address these skills. Interest levels would depend on the specific community context, but these skills are important for the general population to navigate the world we live in.


Takeaway 2: First year students especially need support understanding how communities and identities impact authority in different ways.

On the questions about communities for both the news and scholarly articles, more than a third of students scored at the lowest level: 35% for news and 41% for scholarly.  This is the language from the question:

Are there any communities the author identifies with? Are those communities relevant to the topic they are writing about? If so, which ones? How is that relevant to their point of view? For example, if the author is writing about nuclear weapons, did they serve in the military? If so, the author’s military experience might mean they have direct experience with policies related to nuclear weapons. 

[Bar chart: the two rubric areas with the highest percentage of students receiving the lowest possible score.]

Asking students to think about communities and identities is asking them to think about bias and perspective. This is challenging for many reasons. First, it is difficult to define which identities or communities are relevant to a specific topic and to distinguish between professional and personal identities. For many scholarly authors, there is little publicly available information about them other than their professional memberships. As Kristen observed: “it was really common for students to list other elements of authority (work experience, education) in the community fields.”

Second, how can you teach students to look for relevant communities and identities without making problematic assumptions? For example, one student did an excellent job of investigating communities and identities working with an article entitled Brain drain: Do economic conditions “push” doctors out of developing countries? The student identified that the author was trained as both an economist and a medical doctor and had unique insight in the topic because of these two backgrounds.

What the student missed, though, was that the author received their medical degree from a university in a developing country, which may give them unique insight into doctors’ experiences in that context. In this example, the author’s LinkedIn made it clear that they had lived in a developing country. In another instance, however, a student thought that an article about the President of Poland was written by the President of Poland, which was inaccurate and led to a chain of erroneous assumptions. In another case, a student thought an article by an author named Elizabeth Holmes was written by the Elizabeth Holmes of the Theranos scandal. While it seems positive to push students to think more deeply about perspectives and personal experiences, it has to be done carefully and based on concrete information instead of assumptions.

Third, if librarians teach directly about using identities and communities in evaluating information sources, they need to address the complex relationships between personal experience, identities, communities, bias, and insight. Everyone’s experiences and identities give them particular insight and expertise as well as blind spots and prejudices. As put by the researcher Jennifer Eberhardt, “Bias is a natural byproduct of the way our brains work.”

Recommendations 

  • There are no easy answers here. Addressing identities, communities, bias, and insight needs to be done thoughtfully, but that doesn’t make it any less urgent to teach this skill.
  • For academic librarians, this is an excellent place to collaborate with faculty about how they are addressing these issues in their course content. For public librarians, supporting conversations around how different identities and communities impact perspectives could be a good way to increase people’s familiarity with this concept. Similarly, for school librarians, discussing perspective in the context of book groups and discussions could be a valuable way to introduce this idea.


Takeaway 3: An article with strong bias looked credible even when a student evaluated it thoroughly.

When we encountered one particular article, all of the coders noticed the tone because of words and phrases like “persecution,” “slaughtered,” “murderer,” “beaten senseless,” and “sadistically.” (If you want to read the article, you can contact the research team: some of the language and content may be triggering or upsetting.) This led us to wonder whether it was really a news source and to look more into the organization that published it. We found that it was identified as an Islamophobic source by the Bridge Initiative at Georgetown University.

The student who evaluated this article completed the assignment thoroughly – finding the author’s education, noting their relevant communities and identities, and identifying that quotes were included to back up statements. The author did have a relevant education and used quotes. This example illustrates just how fuzzy the line can become between a source that has authority and one that should be considered with skepticism. It is a subjective, nuanced process.

This article also revealed some of the weaknesses in our assignment. We didn’t ask students to look at the publication itself, such as whether it has an editorial board, or to consider the tone and language used. Both would have been helpful in this case. As librarians, we have so much more context and background knowledge to aid us in evaluating sources. How can we train students in some of these strategies that we may not even be very aware we are using?

Recommendations: 

  • It would be helpful to add both noticing tone and investigating the publication to the criteria we teach students to use for evaluating authority.
  • Creating the rubric forced us to be meta-cognitive about the skills and background knowledge we were using to evaluate sources. This is probably a valuable conversation for all types of librarians and library staff to have before and while teaching these skills to others.
  • The line between credible news and the rest of the internet is growing fuzzier and fuzzier and is probably going to keep changing. We struggled to define what a news article is throughout this process. It seems important to be transparent about the messiness of evaluating authority when teaching these skills.


Takeaway 4: Many students saw journalists as inherently unprofessional and lacking skills.

We were coding against the rubric rather than doing a thematic qualitative analysis, so we don’t know exactly how many times students wrote about journalists not being credible in their comparisons between the news and scholarly articles. In our final round of coding, however, Leah, Kristen, and I were all surprised by how frequently we saw this explanation. When we did see it, the student’s reasoning was often that the author’s status as a journalist was itself a barrier to credibility, as if an article could not be credible simply because a journalist wrote it. Sometimes students had this perspective even when the author was a journalist working for a generally reputable publication, like the news section of the Wall Street Journal.

Recommendations

  • This is definitely an area that warrants further investigation.
  • Being aware of this perception, library staff leading instruction could facilitate conversations about journalists and journalism to better understand where it comes from.


How: Data analysis

It turns out that the data collection, which took place in winter and spring 2018, was the easy part. We planned to code the data using a scoring rubric. We wanted to make sure that the rubric was reliable, meaning that if different coders (or anyone else, including you!) used it, it would yield similar results.

The process for achieving reliability goes like this: everyone who is participating codes the same set of responses, let’s say 10. Then you feed everyone’s codes into statistics software. The software returns a statistic, Cronbach’s alpha in this case, that indicates how reliably you are all coding together. Based on that reliability score, the coders work together to figure out where and why they coded differently. Then you clarify the language in the areas of the rubric where you weren’t coding reliably, so you can all hopefully be more consistent the next time. Then you do it over again with a new group of 10 responses and the updated rubric. You have to keep repeating that process until you reach a level of reliability that is acceptable. In this case we used a value of .7 or higher, which is generally acceptable in social science research.
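
For the curious, here is a minimal sketch of that calculation in Python, assuming three coders have each scored the same ten responses on the 1-3 rubric scale. The scores and the cronbach_alpha helper are made up for illustration; we used dedicated statistics software for the actual project.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (responses x coders) array of rubric scores."""
    scores = np.asarray(scores, dtype=float)
    n_coders = scores.shape[1]
    coder_variances = scores.var(axis=0, ddof=1)      # variance of each coder's scores
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of each response's summed score
    return (n_coders / (n_coders - 1)) * (1 - coder_variances.sum() / total_variance)

# Hypothetical scores: 3 coders rate the same 10 responses (1 = Beginning, 2 = Developing, 3 = Skillful)
scores = [
    [1, 1, 2], [2, 2, 2], [3, 3, 3], [1, 2, 1], [2, 2, 3],
    [3, 3, 2], [1, 1, 1], [2, 3, 2], [3, 3, 3], [2, 2, 2],
]
print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")  # aim for roughly .7 or higher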

In spring 2021, our scoring and rubric tested as reliable on all 11 areas of the rubric. (We rounded up to .7 for the one area of the rubric that tested at .686.) Then Leah, Kristen, and I worked through all 175 student responses, each coding about 58 pieces of student work. In order to resolve any challenging scores, we submitted scores we felt uncertain about to each other for review. After review, we decided on a final score, which is what was analyzed here. Below you can see the percentage of student scores at different levels for each area of the rubric, as well as the reliability scores for each (Cronbach’s alpha).

[Table: each area of the rubric, the percentage of students who scored at each level of proficiency, and the Cronbach's alpha reliability statistic for each area.]

We could write a whole different post about the process of coding and creating the rubric. We had many fascinating discussions about what constitutes a news article, how we should score when students have such different background knowledge, and the biases we brought to the process, including being prone to score more generously or more strictly. We also talked about when students demonstrated the kind of thinking we were looking for but came to a different conclusion than we did, or instances where we thought we understood where they were going with an idea but they didn’t actually articulate it in their response. Evaluating students’ responses was almost as messy and subjective as evaluating the credibility of sources is!

Ultimately, we wanted the rubric to be useful for instructors, so that goal guided the design of the rubric and our coding. “Beginning,” the lowest score, came to represent situations where we thought a student would need significant support understanding the concept. “Developing,” the middle score, indicates that a student understands some of the concept, but still needs guidance. “Skillful,” the highest score, meant that we would be confident in the student’s ability to evaluate this criterion independently. We are excited to present, after all those discussions, such a reliable rubric for your use. We hope it will be a useful tool.

But you don’t have to take my word for it!

We have shared the slides for the lesson, the assignment, and rubric for how to score it. If you would like to use the slides or assignment, please contact Carrie Forbes at the University of Denver. To use the rubric, please reach out to Charissa Brammer at LRS. Have questions? Please reach out. Nothing would delight us more than this information being valuable to you.

On a personal note, this is my last post for lrs.org. I am moving on to another role. You can find me here. I am grateful for my time at LRS and the many opportunities I had to explore interesting questions with the support of so many people. In addition to those already listed in this article, thank you to Linda Hofschire, Charissa Brammer, and everyone at the Colorado State Library for their support of this project.