Do the Data Have an Alibi?

It’s hard to know who you can trust out there. “Fake news” is now a prevalent concept in society and the internet makes it possible for anyone to publish—well, anything. As library professionals, you’ve probably already acquired a great deal of experience determining the credibility of sources. Many of the same skills are needed to evaluate the credibility of data. So amid the bombardment of new (sometimes conflicting) data out there, here are some questions to ask when verifying sources and checking for bias. Although not foolproof, these strategies can help you avoid misleading information and false claims.

Let’s establish something important upfront: ANY scientific paper, blog, news article, or graph can be wrong. No matter where it’s published, who wrote it, or how well supported the arguments are. Every dataset, claim, or conclusion is subject to scrutiny and careful examination. Sounds like a lot of work? As we incorporate these strategies into our everyday consumption of information, they become second nature, like checking your blind spots when changing lanes in traffic. You don’t have a sticky note on your dashboard reminding you to do it, but you do it without even thinking about it.

When reviewing any kind of research, we often think of the five Ws—where, when, who, what, and why. The same can apply here.

Question 1: Where are the data published? 

Imagine a spectrum of credibility with reputable scientific journals on one end and blogs on the other. Government, public sector, and news media sources fall somewhere in between. What distinguishes one source from another on the spectrum is its verification process. Most journal articles undergo peer review, which checks whether a researcher’s methods are sound and the conclusions are consistent with the study’s results. But be aware of “predatory journals,” which charge authors to publish their research without providing genuine peer review. Here’s a nifty infographic to help tell the difference.

On the other end of the spectrum, anyone can create a blog and publish unfounded claims based on questionable data. Somewhere between these two extremes are government and public-sector (think tanks, non-profits, etc.) sources. It wouldn’t make much sense for an organization to publicize data that don’t support its mission. So while the data might be accurate, they might not tell the whole story. On the other hand, objectivity serves as a foundation for journalism, but the speed at which journalists are forced to get the news out there means mistakes happen. For instance, data might be misrepresented to visualize the attention-grabbing portion of the story, leading to distortions in a graph’s axes. When mistakes do happen, it’s a good sign if the news source posts retractions and corrections.

Question 2: When were the data collected?

I was looking at a dataset recently on the use of technology in a given population. Only 27 percent accessed the internet and 74 percent had a VCR in their home. In this case it’s probably safe to assume the data are outdated, but it might not always be so obvious. Additionally, some data become outdated faster than others. For instance, technology and medicine can change day to day, but literacy rates don’t skyrocket overnight. Always try to contextualize information within the time it was written. 

Question 3: Who are the authors?

Part of the reason we aren’t talking specifically about COVID-19 data is because we aren’t experts in the fields of public health or epidemiology. When looking at data, take a minute to assess who conducted the research. Google their name to find out if they’re an expert on the topic, have published other articles on the subject matter, or are recognized as a professional in a related field. 

Question 4: What are their sources?

Police don’t take a suspect’s alibi at face value; they corroborate the story. Sources should also corroborate their claims by citing other credible sources. Journal articles should have a lengthy works cited section, and news articles should name or describe their sources. Charts and graphs usually cite a source somewhere underneath the visual. For primary data, try to find a methodology section to check things like how the data were collected and whether the sample size was sufficiently large.

Question 5: Why did they publish this? 

For years, one of the world’s largest soft drink companies funded their own research institute to publish studies that said exercise was far more important than diet in addressing obesity. Their motivation? As obesity levels rose globally, they wanted to ensure people would continue buying their products. In short, everyone has a motivation, and it’s our job to uncover it. A good place to start is by tracing the money trail. Who is funding the publication or study? Do they have an incentive (political, business, financial, to gain influence, etc.) other than getting the data out there? Use this information to decide what kinds of bias might be impacting the findings.

Whew, that was a lot. Here’s a simple chart that summarizes the info if you need a review. Just remember, it’s hard to ever know if something is 100% accurate. By asking these questions, we aren’t just accepting information at face value, but rather taking the time to review it critically.

LRS’s Between a Graph and a Hard Place blog series provides strategies for looking at data with a critical eye. Every week we’ll cover a different topic. You can use these strategies with any kind of data, so while the series may be inspired by the many COVID-19 statistics being reported, the examples we’ll share will focus on other topics. To receive posts via email, please complete this form.

 

Habits of Mind for Working with Data

Welcome back! We’re excited to have you with us on this data journey. To work with data, it helps to understand specific concepts—what per capita means, what an average is, how to investigate sources. These are all valuable skills that help you navigate and understand data. What you may not realize is that the mindset you use to approach data is just as important. That’s what this post is about: how to work with data and not melt your brain. I have melted my brain many times, and it can happen no matter how great your hard skills are.

Imagine that working with data is a bit like working with electricity. Electricity is very useful and it’s all around us. At the same time, you can hurt yourself if you’re not careful. If you need to do something more involved than changing a light bulb, you should turn off the power and take off metal jewelry. Those are good habits that keep you safe. You need good habits to take care of yourself when you work with data too. Today I’m sharing four habits that I learned the hard way—by NOT doing them. Please learn from my mistakes and give them a try the next time you work with data.

Habit 1: Give yourself permission to struggle and permission to get help 

A big part of my job is teaching people to work with data. At the beginning, almost everyone feels self-conscious that they aren’t “numbers people.” Every time I work with a new dataset, I have a moment where I think, “What if I just look at these data forever and they’re gibberish to me?” I have to keep reminding myself that working with data is hard. If you look at a graph and think, “I have no idea what this says,” don’t assume that it’s beyond your comprehension. Talking to other people and asking them what they think is a vital tool for me. Sometimes they understand what’s going on and can explain it to me; other times they are equally confused. Either way, that feedback is helpful. When you get stuck, remember to be patient with yourself. Think about why you’re interested in what these data say, and focus on your curiosity about them. Working with data is a skill you can learn and get better at. It’s not a test of your intelligence. When it’s hard, that’s because working with data is hard. 

Habit 2: Acknowledge your feelings about the topic

It’s natural to have feelings about the world we live in, and data are a representation of our reality. Recently I did some research on suicide rates in rural Colorado, which is an important issue for libraries. I felt sad when I reviewed those data. Feelings can be even more tricky with data we collect about programs and services that directly involve us. Here at the State Library, we ask participants to complete a workshop evaluation whenever we provide training. When I get feedback that someone found my workshop useful, I am so excited. When I get feedback that they were bored, I feel bad. Check in with yourself when you’re working with data that may bring up negative feelings and take a break if you need to. Then see Habit 3.  

Habit 3: Like it or not, data provide an opportunity to learn

Don’t confuse your feelings about the topic with the value of the data or the data’s accuracy. We all have beliefs and values that impact how we see the world. That’s normal. At the same time, our beliefs can make certain data hard to swallow. If the data make us feel bad, and we wish they were different, it’s easy to start looking for reasons that the data are wrong. This applies both to data that directly involve us and large-scale community data. Remember how I said I felt bad when I got feedback that someone was bored in my presentation? I still need to review and use those data. What if I read results from a national survey that a large percentage of people think libraries are no longer valuable? I don’t feel good about that, but it’s still true that the people surveyed feel that way. Try to think of the data like the weather. You can be upset about a snowstorm in April—but that doesn’t mean it’s not snowing. You could ignore those data and go outside in shorts and sandals, but you’re the one who suffers. Better to face the data and get a coat. Data—whether you like their message or not—give you an opportunity to learn, and often to make more informed and effective decisions. Acknowledge your feelings and then embrace that opportunity.

Habit 4: Take breaks

Between trying to understand what the data say, reminding yourself you’re smart and capable, and acknowledging your feelings about the topic, you can wear yourself out quickly. It’s important to take breaks, do something else, and come back when you’re ready. Think of analyzing data like running as fast as you can. You can run really fast for short periods of time, but you can’t run that fast all day every day. Learn to notice when the quality of your thinking is starting to deteriorate. Usually I reach a point when I start to feel more frustrated and confused, and I picture my synapses in workout clothes, and they’re all out of breath and refusing to get up and run more. That’s a good signal for me that it’s time to take a break.

Conclusion

Learning something new is hard. Many of us received very limited training in how to work with and understand data. As you learn these strategies, keep in mind that how you approach data is just as important as the hard skills you’re learning. Take care of yourself out there and we’ll see you back here next week.


How to Compare Apples to Oranges

As our brains process information, we constantly make comparisons. It’s how we decide whether something is good or bad: we judge it as better or worse than something else. However, like apples and oranges, not all things can readily be compared, even if they appear similar enough on the surface. We often make this mistake with data because we want to be able to draw simple conclusions. But when our goal is accurate information, it’s imperative to look at presentations of data through a critical lens by applying these basic strategies.

So who’s better? 

Let’s say you wanted to determine whether Library A or B was doing a better job at reaching its community. To do so, you compare annual visits at both. This chart would lead you to conclude Library B has much more annual traffic and is therefore reaching more of its community than Library A. But are Library A and B comparable?

Library A serves a population of 5,400 while Library B serves a population of 30,500. When making comparisons among different populations, data should be represented in per capita measurements. Per capita simply means a number divided by the population. For instance, when we compare countries’ Gross Domestic Products (GDP), or value of economic activity, we usually express it as GDP per capita because it would be misleading to compare China’s GDP to that of Denmark. China’s GDP trounces Denmark’s, but that doesn’t mean Denmark’s economy is struggling. China is larger both in terms of the land it covers and the number of people that live there. It would be really weird if they had similar GDPs without the per capita adjustment. The same is true in this example. Take a look at how we draw an entirely different conclusion when total visits are expressed in a per capita measurement.

*Due to a 2-month closure, Library B’s data were only collected over a 10-month period

Now we can see that Library A has 18.5 visits per person (100,000/5,400), whereas Library B only has 6.6 visits per person (200,000/30,500). These are the same data, but expressed in more comparable terms.
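If it helps to see the arithmetic spelled out, here is a minimal sketch of the per capita calculation using the figures from this example (the code is just an illustration; Python and the variable names are our choices, not anything from the original charts):

```python
# Per capita = a total count divided by the population served.
libraries = {
    "Library A": {"annual_visits": 100_000, "population": 5_400},
    "Library B": {"annual_visits": 200_000, "population": 30_500},
}

for name, stats in libraries.items():
    visits_per_capita = stats["annual_visits"] / stats["population"]
    print(f"{name}: {visits_per_capita:.1f} visits per person")

# Library A: 18.5 visits per person
# Library B: 6.6 visits per person
```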

Let’s say Library B also closed for two months to do some construction on their building. Therefore, their annual visits account for 10 months of operation, not 12. Contextual information like this – which has a direct effect on the numbers – needs to be clearly called out and explained, like in the example above.

Breaking it down…

To check for comparability, it’s helpful to keep three things in mind: completeness, consistency, and clarity.

Completeness: are at least two things being compared?

It would be incorrect to say “Library B has 100,000 more visits.” More visits than…Library A? Than last year? Also be wary of results indicating that something is better, worse, etc. without stating what it is better or worse than.

Consistency: are the data being compared equivalent? And even if they appear equivalent, what information is needed to confirm this assumption?

One of the best examples of inconsistency occurs when comparing data from different populations, particularly when we focus on total counts. “Totals” are often a default metric because they’re simple for a range of audiences to understand, but they can be very misleading, like in the first chart above. By expressing the data as per capita measurements, we can account for population differences and create a basis of similarity. Additionally, even if data appear similar enough to compare, you also need to review how they were collected. Any reliable research will include these details (big red flag if it doesn’t!). For instance, it would be important to know that Library A and B were counting visits in the same way. If Library A counts visits during one week in the summer and multiplies that by 52, that wouldn’t be consistent with Library B counting visits during a week in the winter.
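To see why that matters, here is a small sketch with purely hypothetical weekly counts (the numbers are invented for illustration and do not come from the charts above). Annualizing from a busy summer week versus a slow winter week produces very different totals, even if the two libraries see similar traffic over the year.

```python
# Hypothetical door counts, invented for illustration only.
# Assume both libraries actually receive similar traffic over the year,
# but each estimates annual visits from a sample week in a different season.
summer_week = 2_500   # Library A samples a busy summer week
winter_week = 1_500   # Library B samples a slow winter week

library_a_estimate = summer_week * 52   # 130,000 "annual" visits
library_b_estimate = winter_week * 52   #  78,000 "annual" visits

print(library_a_estimate, library_b_estimate)
# The estimates differ not because traffic differs, but because the
# counting methods aren't consistent.
```

Before comparing totals, check that they were produced the same way.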

Clarity: Is it obvious and clear what is being compared? 

Data visualizations allow our brains to interpret information quickly, but that also means we may jump to conclusions. Be a critical data consumer by considering what underlying factors might also be at play. The second chart above clarifies that two months of data were missing from Library B. This could be one reason why Library B’s total visits per capita were so much lower than Library A’s. Also beware of unclear claims supposedly supported by the data, like “Library A has higher patron engagement than Library B.” Perhaps Library A defines engagement in terms of number of visits, but Library B’s definition is based on material circulation and program attendance. The data above do not provide enough information to support a comparable claim on engagement.

Comparisons are messy. Whether in library land or elsewhere, keep in mind that comparisons are always tricky, but also very useful. By engaging critically using the strategies above, we CAN compare apples and oranges. They are both fruit, after all…


New blog series: Between a Graph and a Hard Place

Hello, world!

We can all agree that these are strange times we are living through. Here at the Library Research Service, we’ve been thinking about how we can help. What skills could we share that might be useful to library staff and our communities?

As library and information professionals, before this pandemic we already spent a lot of time thinking about information, what it means, and how reliable it is. Here at LRS, we are data geeks in addition to being regular library geeks, so we think about data a lot too—the good, the bad, and the misleading.

Critically analyzing information is what librarians are trained to do. We can’t help ourselves. For me, this means every time I talk to my mom and she shares a statistic with me, I ask her about her source. I’m not trying to be a pain. This is just how my mind works.

Right now, we are all seeing a lot of data about the pandemic, and it can be challenging to understand. And this is where we come in.

Let’s be clear: we are not epidemiologists, we are not medical doctors, we are not experts in public health. We are not going to provide data about COVID-19 or interpretations. There are already good resources for both, and we don’t think it would help to add our voices.

What we can do—and what we are going to do—is share strategies for looking at data with a critical eye. We’re going to cover a different strategy every two weeks, like thinking about the underlying data behind a visualization, identifying bias, evaluating the credentials of different experts, understanding that how the data are presented can impact how you perceive them, and finding multiple perspectives on the same information.

We will also discuss how to engage with data carefully, with your mental well-being in mind. Data can make us feel a lot of things, and we all need to take care of ourselves.

This series is inspired by the current situation, but the examples we will share will focus on other topics. You can use these strategies with any kind of data.

We look forward to seeing you here every other Wednesday and hope that these strategies are helpful in this time of information overload. In the meantime, if you’d like some less serious data about a situation that many of us can relate to right now, check out this pie chart.

If you want to subscribe to receive the blog posts from this series by email, please complete this form.