New research: first year college students need support assessing authority


Intro
Can I trust this information? We use information constantly to learn, make decisions, and form opinions. Every day library staff in every setting strive to teach people how to find the information they need and how to identify trustworthy sources. But what is trustworthy? How can you tell? What about when sources contradict each other? What characteristics distinguish sources from each other?

Who
As a former information literacy librarian at a university, I was haunted by these questions when I was teaching. I was lucky to meet two librarians at the University of Denver (DU) who shared a passion for this topic: Carrie Forbes, the Associate Dean for Student and Scholar Services, and Bridget Farrell, the Coordinator of Library Instruction & Reference Services. Together, we designed a research project to learn more about how students thought about “authority as constructed and contextual,” as defined in the ACRL information literacy framework.

Why
As instructors, we had seen students struggle with the concept of different types of authority being more or less relevant in different contexts. Often they had the idea that “scholarly articles = good” and “news articles = bad.” Given the overwhelming complexity of evaluating information in our world, we wanted to help students evaluate authority in a more nuanced and complex way. We hoped that by understanding the perceptions and skills of students, particularly first year students, we could better teach them the skills they need to sort through it all.

How: Data collection
We designed a lesson that included definitions of authority and gave examples of types of information students could find about authors, such as their LinkedIn page. The goal here was not to give students an extensive, thorough lesson on how to evaluate authority. We wanted to give them enough information to complete the assignment and show us what they already knew and thought. Essentially, this was a pre-assessment. (The slides for the lesson are available for your reference. Please contact Carrie Forbes if you would like to use them.)

The project was approved by the Institutional Review Board at DU. Thanks to a partnership with the University Writing Program, we were able to collect data during library instruction sessions in first year writing courses. During the session, students were asked to find a news article and a scholarly article on the same topic and then report on different elements of authority for both articles: the author’s current job title, their education, how they back up their points (quotes, references, etc.), and what communities they belong to that could inform their perspective. Then we asked them to use all of these elements to come to a conclusion about each article’s credibility, and, finally, to compare the two articles using the information that they had found. We collected their responses using a Qualtrics online form. (The form is available for your reference. Please contact Carrie Forbes if you would like to use it.)

Thanks to the hard work of Carrie and Bridget and the generous participation of eight DU librarians, 13 writing faculty, and over 200 students who agreed to participate after reviewing our informed consent process, we were able to collect 175 complete student responses, which we coded using a rubric we created. Before the project was over, we added two new coders, Leah Breevoort, the research assistant here at LRS, and Kristen Whitson, a volunteer from the University of Wisconsin-Madison, who both contributed great insight and many, many hours of coding. Carrie and Bridget are working on analyzing the data set in a different way, so keep an eye out for their findings as well.

Takeaway 1: First year students need significant support assessing authority

This was a pre-assessment and the classroom instruction was designed to give students just enough knowledge to complete the assignment and demonstrate their current understanding of the topics. If this had been graded, almost half (45%) of the students would have failed the assignment. Most instruction librarians get 45 minutes to an hour once a semester or quarter with a  group of students and are typically expected to cover searching for articles and using databases–at the very least. That leaves them very little time to cover authority in depth.

[Pie chart: 45% of students would have failed the assignment if it had been graded]

Recommendations

  • The data demonstrates that students need significant support with these concepts. Students’ ability to think critically about information impacts their ability to be successful in a post-secondary environment. Academic librarians could use this data to demonstrate the need for more instruction dedicated to this topic.
  • School librarians could consider how to address these topics in ways appropriate to students in middle school and up. While a strong understanding of authority may not be as vital to academic success prior to post-secondary education, the more exposure students have to these ideas the more understanding they can build over time.
  • Public libraries could consider if they want to offer information literacy workshops for the general public that address these skills. Interest levels would depend on the specific community context, but these skills are important for the general population to navigate the world we live in.


Takeaway 2: First year students especially need support understanding how communities and identities impact authority in different ways.

On the questions about communities for both the news and scholarly articles, more than a third of students scored at the lowest level: 35% for news and 41% for scholarly.  This is the language from the question:

Are there any communities the author identifies with? Are those communities relevant to the topic they are writing about? If so, which ones? How is that relevant to their point of view? For example, if the author is writing about nuclear weapons, did they serve in the military? If so, the author’s military experience might mean they have direct experience with policies related to nuclear weapons. 

[Bar chart: the two rubric areas with the highest percentage of students receiving the lowest possible score]

Asking students to think about communities and identities is asking them to think about bias and perspective. This is challenging for many reasons. First, it is difficult to define which identities or communities are relevant to a specific topic and to distinguish between professional identities and personal identities. For many scholarly authors, there is little publicly available information about them other than their professional memberships. As Kristen observed: “it was really common for students to list other elements of authority (work experience, education) in the community fields.”

Second, how can you teach students to look for relevant communities and identities without making problematic assumptions? For example, one student did an excellent job of investigating communities and identities while working with an article entitled “Brain drain: Do economic conditions ‘push’ doctors out of developing countries?” The student identified that the author was trained as both an economist and a medical doctor and had unique insight into the topic because of these two backgrounds.

What the student missed, though, was that the author received their medical degree from a university in a developing country and that this may give them unique insight into doctors’ experiences in that context. In this example, the author’s LinkedIn made it clear that they had lived in a developing country. In another instance, however, a student thought that an article about the President of Poland was written by the President of Poland–which was inaccurate and led to a chain of erroneous assumptions. In another case, a student assumed that an article by an author named Elizabeth Holmes was written by the Elizabeth Holmes of the Theranos scandal. While it seems positive to push students to think more deeply about perspectives and personal experiences, it has to be done carefully and based on concrete information instead of assumptions.

Third, if librarians teach directly about using identities and communities in evaluating information sources, they need to address the complex relationships between personal experience, identities, communities, bias, and insight. Everyone’s experiences and identities give them particular insight and expertise as well as blind spots and prejudices. As put by the researcher Jennifer Eberhardt, “Bias is a natural byproduct of the way our brains work.”

Recommendations 

  • There are no easy answers here. Addressing identities, communities, bias, and insight needs to be done thoughtfully, but that doesn’t make it any less urgent to teach this skill.
  • For academic librarians, this is an excellent place to collaborate with faculty about how they are addressing these issues in their course content. For public librarians, supporting conversations around how different identities and communities impact perspectives could be a good way to increase people’s familiarity with this concept. Similarly, for school librarians, discussing perspective in the context of book groups and discussions could be a valuable way to introduce this idea.


Takeaway 3: An article with strong bias looked credible even when a student evaluated it thoroughly.

When we encountered one particular article, all of the coders noticed the tone because of words and phrases like “persecution,” “slaughtered,” “murderer,” “beaten senseless,” and “sadistically.” (If you want to read the article, you can contact the research team: some of the language and content may be triggering or upsetting.) This led us to wonder if it was really a news source and to look more closely into the organization that published it. We found that it was identified as an Islamophobic source by the Bridge Initiative at Georgetown University.

The student who evaluated this article completed the assignment thoroughly – finding the author’s education, noting their relevant communities and identities, and identifying that quotes were included to back up statements. The author did have a relevant education and used quotes. This example illustrates just how fuzzy the line can become between a source that has authority and one that should be considered with skepticism. It is a subjective, nuanced process.

This article also revealed some of the weaknesses in our assignment. We didn’t ask students to look at the publication itself, such as whether it has an editorial board, or to consider the tone and language used. Both would have been helpful in this case. As librarians, we have so much more context and background knowledge to aid us in evaluating sources. How can we train students in some of these strategies that we may not even be fully aware we are using?

Recommendations: 

  • It would be helpful to add both noticing tone and investigating the publication to the criteria we teach students to use for evaluating authority.
  • Creating the rubric forced us to be meta-cognitive about the skills and background knowledge we were using to evaluate sources. This is probably a valuable conversation for all types of librarians and library staff to have before and while teaching these skills to others.
  • The line between credible news and the rest of the internet is growing fuzzier and fuzzier and is probably going to keep changing. We struggled to define what a news article is throughout this process. It seems important to be transparent about the messiness of evaluating authority when teaching these skills.


Takeaway 4: Many students saw journalists as inherently unprofessional and lacking skills.

We were coding on the rubric rather than doing a thematic qualitative analysis, so we don’t know how many times students wrote about journalists not being credible in their comparison between the news and scholarly article. In our final round of coding, however, Leah, Kristen, and I were all surprised by how frequently we saw this explanation. When we did see it, the student’s reasoning was often that the author’s status as a journalist was itself a barrier to credibility: the article could not be credible simply because the author was a journalist. Sometimes students had this perspective even when the author was a journalist working for a generally reputable publication, like the news section of the Wall Street Journal.

Recommendations

  • This is definitely an area that warrants further investigation.
  • Being aware of this perspective, library staff leading instruction could facilitate conversations around journalists and journalism to better understand this perspective.


How: Data analysis

It turns out that the data collection, which took place in winter and spring 2018, was the easy part. We planned to code the data using a scoring rubric. We wanted to make sure that the rubric was reliable, meaning that if different coders used it (or anyone else, including you!) it would yield similar results.

The process for achieving reliability goes like this: everyone who is participating codes the same set of responses, let’s say 10. Then you feed everyone’s codes into statistics software. The software returns a statistic, Cronbach’s alpha in this case, that indicates how reliably you are all coding together. Based on that reliability score, all the coders work together to figure out where and why you coded differently. Then you clarify the language in the areas of the rubric where you weren’t coding reliably, so you can all, hopefully, be more consistent the next time. Then you do it over again with a new group of 10 responses and the updated rubric. You keep repeating that process until you reach a level of reliability that is acceptable. In this case we used a value of .7 or higher, which is generally acceptable in social science research.
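To make this concrete, here is a minimal sketch of what one reliability round could look like in Python, assuming three coders each score the same 10 responses on a single rubric area; the coder scores below are hypothetical, not our data.

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a responses-by-coders matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    n_coders = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each coder's scores
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores per response
    return (n_coders / (n_coders - 1)) * (1 - item_variances.sum() / total_variance)

# Rows = student responses, columns = coders (hypothetical scores, 1 = lowest, 3 = highest).
round_1 = np.array([
    [1, 1, 2],
    [3, 3, 3],
    [2, 2, 2],
    [1, 2, 1],
    [3, 3, 2],
    [2, 2, 3],
    [1, 1, 1],
    [2, 3, 2],
    [3, 3, 3],
    [2, 2, 2],
])

alpha = cronbachs_alpha(round_1)
print(f"Cronbach's alpha: {alpha:.3f}")
# If alpha is below .7, discuss the disagreements, clarify the rubric,
# and repeat with a new set of 10 responses.
```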

In spring 2021, our scoring and rubric tested as reliable on all 11 areas of the rubric. (We rounded up to .7 for the one area of the rubric that tested at .686.) Then Leah, Kristen, and I worked through all 175 student responses, each coding about 58 pieces of student work. In order to resolve any challenging scores, we submitted scores we felt uncertain about to each other for review. After review, we decided on a final score, which is what was analyzed here. Below you can see the percentage of student scores at different levels for each area of the rubric, as well as the reliability scores for each (Cronbach’s alpha).

[Table: each area of the rubric, the percentage of students who scored at each level of proficiency, and the Cronbach's alpha reliability statistic for each area]
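For anyone curious how a table like this could be generated, here is a minimal sketch in Python, assuming the final scores live in a long-format CSV with one row per student per rubric area; the file name and column names (“rubric_area”, “score”) are hypothetical.

```python
import pandas as pd

# One row per student per rubric area, with a numeric score of 1-3 (hypothetical file).
scores = pd.read_csv("final_rubric_scores.csv")

# Map numeric scores to the rubric's proficiency labels.
labels = {1: "Beginning", 2: "Developing", 3: "Skillful"}
scores["level"] = scores["score"].map(labels)

# Percentage of students at each level, by rubric area.
table = (
    scores.groupby("rubric_area")["level"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
    .unstack(fill_value=0)
)
print(table)
```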

We could write a whole different post about the process of coding and creating the rubric. We had many fascinating discussions about what constitutes a news article, how we should score when students have such different background knowledge, and the biases we each brought to the process–including being prone to score more generously or more strictly. We also talked about when students demonstrated the kind of thinking we were looking for but came to a different conclusion than we did, or instances where we thought we understood where they were going with an idea but they didn’t actually articulate it in their response. Evaluating students’ responses was almost as messy and subjective as evaluating the credibility of sources is!

Ultimately, we wanted the rubric to be useful for instructors, and that goal guided the design of the rubric and our coding. “Beginning,” the lowest score, came to represent situations where we thought a student would need significant support understanding the concept. “Developing,” the middle score, indicates that a student understands some of the concept but still needs guidance. “Skillful,” the highest score, meant that we would be confident in the student’s ability to evaluate this criterion independently. We are excited to present, after all those discussions, such a reliable rubric for your use. We hope it will be a useful tool.

But you don’t have to take my word for it!

We have shared the slides for the lesson, the assignment, and rubric for how to score it. If you would like to use the slides or assignment, please contact Carrie Forbes at the University of Denver. To use the rubric, please reach out to Charissa Brammer at LRS. Have questions? Please reach out. Nothing would delight us more than this information being valuable to you.

On a personal note, this is my last post for lrs.org. I am moving on to another role. You can find me here. I am grateful for my time at LRS and the many opportunities I had to explore interesting questions with the support of so many people. In addition to those already listed in this article, thank you to Linda Hofschire, Charissa Brammer, and everyone at the Colorado State Library for their support of this project.

Pilot to Measure Social and Emotional Learning at Denver Public Library

By Hillary Estner, Katie Fox and Erin McLean

Why evaluate?

How can you measure relationship-building abilities? How can you understand which of your library’s programs best support users’ development of skills like problem-solving? How can you determine whether the youth who come to your library need help learning how to ask a question?

At Denver Public Library (DPL), we wanted to answer these questions, which address a vital set of skills called social and emotional learning, or SEL. A key goal of our public library, like many libraries, is to provide experiences that positively impact participant learning and growth. Particularly with our youth participants, we hoped that library programs fostered SEL, but we had not yet found a way to measure it.

In summer 2017, at the urging of the executive level of our library, we launched a pilot project to explore methods of evaluating youth outcomes from library summer programming, with a focus on SEL. We partnered with the Colorado State Library’s Library Research Service, and the three of us—a reference librarian, branch librarian, and research analyst—set out to measure SEL.

Who participated?

While we assessed several components of the library’s summer programming, here we will focus on a collaboration with the Denver Public Schools (DPS) program, Summer Academy. DPS offers Summer Academy to students whose reading scores are below grade level and students in the English Language Acquisition program. Youth who were invited to Summer Academy were also invited to participate in the library programming. Library programming participants attended literacy instruction during the morning and two hours of library enrichment in the afternoons for four weeks.

Library programming participants were split into two groups based on age, with one group of youth entering first, second, and third grades in the fall and the other entering fourth, fifth, and sixth grades. For both classrooms, typically the youth had some unstructured time at the beginning of the library-led programming, which often was time playing outside or LEGO® free time. After that unstructured time, participants in the younger classroom had a choice of two structured activities which had a clearly defined end product. Participants in the older classroom had several self-directed activities they could choose from and often ended up designing their own projects that did not have a defined end result.  

How did the evaluation work?

We knew SEL would be challenging to measure, so we tried several strategies. Library instructors facilitated individual smiley face surveys about specific activities, youth created end of summer reflective projects to share their experience, and our team observed four days of the program, focusing on SEL behaviors.  Unfortunately, the smiley face surveys did not work because it was challenging to consistently administer them, and participants reported that every activity was fun and easy. Our observations indicated that these reports were not always accurate–we saw youth struggle and disengage at times. The youths’ responses to the reflection prompts were largely positive and vague.

It is very possible that the youth we were working with were too young to share an opinion that was not positive. For example, in response to reflection questions about what they liked and disliked about the program, one youth wrote “I liked everything,” and drew hearts. Another limitation of this assessment was that the participants, particularly the younger age group, were still developing their reading and writing abilities. While we tried to minimize this issue by using smiley faces for response categories, it was still a problem.

Observational rubric

The observational behavior rubric was the most challenging and fruitful component of our project. After reviewing the literature, we were not able to find a freely available observational behavior rubric focused on SEL, so we developed our own. We initially observed youth with some key social and emotional behaviors in mind, and through the process of coding these observations, we developed a coding scheme and observational rubric.  

To create our coding scheme and rubric, we first identified three key areas of SEL, according to the Collaborative for Academic, Social, and Emotional Learning (CASEL). We chose to focus on self-management, relationship skills, and decision making. We used a behavior rubric that the Logan School for Creative Learning generously shared with us as a model to get started.  

After our initial design, we tested and refined the rubric repeatedly so that we could code consistently.  For example, under the category of self-management, the rubric included both “dis-engagement” and “engagement.” Engagement included behaviors like listening, being on task, task-completion, observing peers or teachers, and being responsive to directions. The relationship skills category included behaviors like “kind comment,” “unkind comment,” and “friendly chatting with peers or instructor.” The responsible decision making category included behaviors like “letting someone else do it for you,” “pride in work,” and “helping peers.”
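As an illustration of how observations coded with a rubric like this could be tallied, here is a minimal sketch in Python; the behavior-to-category mapping echoes the examples above, while the activity names and the observations themselves are hypothetical.

```python
from collections import Counter

# Map observed behaviors to the three SEL categories the rubric covers.
BEHAVIOR_CATEGORIES = {
    "engagement": "self-management",
    "disengagement": "self-management",
    "kind comment": "relationship skills",
    "unkind comment": "relationship skills",
    "friendly chatting": "relationship skills",
    "helping peers": "responsible decision making",
    "pride in work": "responsible decision making",
    "letting someone else do it": "responsible decision making",
}

# Each observation is recorded as (activity, behavior) -- hypothetical examples.
observations = [
    ("solar bug", "engagement"),
    ("solar bug", "helping peers"),
    ("button tree", "disengagement"),
    ("slime", "friendly chatting"),
    ("slime", "kind comment"),
]

# Count how often each SEL category appeared during each activity.
tallies = Counter(
    (activity, BEHAVIOR_CATEGORIES[behavior]) for activity, behavior in observations
)
for (activity, category), count in sorted(tallies.items()):
    print(f"{activity:12s} {category:28s} {count}")
```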

Ultimately, coding our observations yielded preliminary, but valuable, results which are being used to inform youth programming and staff training.  

Results

Of the thirty-three enrolled students, twenty-six were in the younger group and seven were in the older group. We received nineteen consent forms for the younger group and six for the older group. There was inconsistent attendance, so the amount of time we were able to observe each participant varied. We observed seventeen participants in the younger group, and five in the older group. Due to the small sample size for the older group, as well as the open-ended design of their program, we decided to only analyze the data for the younger group.

Through analyzing our observational data, we found that during certain activities we saw more youth showing specific social and emotional skills and behaviors. For example, during the complex activity of making a solar-powered toy bug, youth participants were more frequently engaged in positive problem-solving and decision-making than during the simpler activity of painting a tree and attaching buttons to make a “button tree.”

Youth also displayed the highest rates of positive relationship skills–such as friendly chatting and sharing–during slime and leaf imprint activities, which are both open-ended, exploratory activities (projects with multiple ways to successfully complete the task).  Participants also had the highest rate of positive self-management during these two activities. We saw an even higher percentage of positive relationship skills during unstructured activity time, often LEGO® time.

Our sample per activity was quite small (sometimes we observed as few as three students completing an activity), so we are cautious about drawing overarching conclusions. Nonetheless, these results yielded helpful information about which types of activities could provide environments that foster SEL, which can inform our design of programs tailored to SEL skills.  

Resources for libraries

We want the library community to benefit from our experience trying to measure SEL, and in particular we want to share our observational behavioral rubric as a free tool for organizations to use to conduct their own evaluation.  For a copy of our rubric, click here. You may use and modify the rubric as long as you cite us.  For more information about this project, please contact Katie Fox at Library Research Service.

Reported challenges in Colorado’s public libraries nearly doubled from 2016 to 2017

LRS’s latest Fast Facts report summarizes the results of our annual investigation into the materials that are challenged in public libraries across Colorado. This Fast Facts details the number, type, and reasons for the challenges reported in the 2017 Public Library Annual Report. The information that public libraries provided to us about these challenges helps demonstrate the attitude toward intellectual freedom in Colorado now and over time.

The number of challenges reported in Colorado nearly doubled from last year, rising from 22 challenges reported in 2016 to 41 challenges in 2017. It is unclear whether this is due to an actual increase in the challenges that occurred, or if it is a result of more thorough reporting. Despite the increase this year, the number of reported challenges has dropped 47% in the past ten years.

Consistent with previous years, adult materials were challenged more often than children’s and young adult (YA) materials. About half (47%) of the materials challenged were intended for adults. Challenges for YA and children’s materials switched places, with YA challenges making up about a third (34%) of reported challenges and children’s materials a close third at 28%. Nearly three-quarters (72%) of all challenges resulted in no change, which has been the most common result since 2008.

The top reason for a reported challenge was Unsuited to Age Group, making up nearly a third (31%) of reported challenges, replacing Sexually Explicit (25%), which had been the top reason for challenges since 2012. Offensive Language (19%), Other (19%), and Insensitivity (16%) rounded out the top five reasons for a challenge in 2017.

Books were challenged more often than videos for the first time since 2014, accounting for about 3 in 5 (63%) of the reported challenges. Videos made up a quarter (25%) of reported challenges while computer (6%) and periodical (6%) challenges made up the rest.

For more results from the Public Library Challenges Survey, check out the full 2017 Challenged Materials in Public Libraries Fast Facts report. And, more information about intellectual freedom issues in libraries can be found here.

Note: This post is part of our series, “The LRS Number.” In this series, we highlight statistics that help tell the story of the 21st-century library.

More than 75,000 4-Year-Olds Received a Free Book During the 2016 One Book 4 Colorado


One Book 4 Colorado (OB4CO) began in 2012 as a statewide initiative to distribute free copies of the same book to every 4-year-old in Colorado. In 2016, the book chosen was Giraffes Can’t Dance by Giles Andreae, which was distributed in both English and Spanish. More than 75,000 books were given away at more than 500 sites, including Denver Preschool Program preschools and both military and public libraries. LRS surveyed caregivers and participating agencies to learn more about the impact of this year’s OB4CO program on Colorado’s children. The results are compiled in our newest Fast Facts report.

After receiving Giraffes Can’t Dance, nearly three-quarters (72%) of caregivers who responded to a survey agreed that their child was more interested in books and reading, and more than two-thirds (68%) said that their child talked more about books and reading. Caregivers who reported reading to their child less than once a day were more likely to agree that the OB4CO book helped their child become more interested in books and reading. After participating in OB4CO, 4 in 5 (80%) caregivers felt that their community promoted a culture of reading.

The participating agencies surveyed also felt that the program had a positive impact. Nearly all agencies who responded to the survey (98%) reported that the 4 year-olds were excited to receive their copies of Giraffes Can’t Dance, and 9 in 10 (89%) said that the children talked about their book with others. Agencies also noticed an impact on the children’s parents; 7 in 10 (70%) of the participating agencies felt that parents showed an increased awareness of the importance of childhood reading and over half (54%) said that the OB4CO program brought new families to the library.

Voting for next year’s OB4CO will open in early January. Be on the lookout for the 2017 book options and vote for your favorite! More information about the OB4CO program can be found here.

Note: This post is part of our series, “The LRS Number.” In this series, we highlight statistics that help tell the story of the 21st-century library.

We’re hiring!


You are fascinated by statistics. You are deeply passionate about libraries. You understand the importance of data-driven decision making, but most importantly, your driving motivation is to make data and evaluation accessible and useful to a wide variety of audiences, from frontline librarians to policymakers and stakeholders.

So, when your phone buzzed this morning to let you know that a new job was posted to LibraryJobline.org, your heart skipped a beat when you saw five glorious words: Research…Analyst… Library…Research…Service.

The Research Analyst will lead a variety of research and evaluation efforts for and about libraries in Colorado and beyond – designing studies, analyzing the results, and presenting the findings in a variety of formats, ranging from scholarly journal articles to press releases. This person will also share her/his passion for data with the library community by providing training and professional development opportunities about evaluation in venues ranging from regional workshops to webinars to the national Research Institute for Public Libraries. The ideal candidate for this position…

  • Believes that evaluation can transform library practice
  • Has a strong background in statistical analysis, knowing which statistical methods are appropriate and how to correctly conduct data analysis using those methods
  • Has superior writing skills and can make research findings understandable to a broad spectrum of readers
  • Is an experienced trainer who can make data and evaluation topics accessible and interesting to lay audiences
  • BONUS: Has an eye for design and experience creating infographics

For more information and to apply, please see https://www.libraryjobline.org/job/5378/Research-Analyst. The application deadline is April 27, 2016.


Come work with us!


Are you interested in joining the LRS team? We’re hiring for two positions:

Position #1: Research Analyst

Do you hold a firm belief that statistics are fascinating? Do you often find yourself diving deep into data analysis to find the meaning behind the numbers? Do you enjoy making complex research results accessible to a wide variety of audiences? If so, we have the job for you! The Colorado State Library’s Library Research Service (LRS) has an opening for the position of Research Analyst.

The Research Analyst will lead a variety of research and evaluation efforts for and about libraries in Colorado and beyond – designing studies, analyzing the results, and presenting the findings in a variety of formats, ranging from scholarly journal articles to press releases. This person will also share her/his passion for data with the library community by providing training and professional development opportunities about evaluation in venues ranging from regional workshops to webinars to the national Research Institute for Public Libraries. The ideal candidate for this position…

  • Believes that evaluation can transform library practice
  • Has a strong background in statistical analysis, knowing which statistical methods are appropriate and how to correctly conduct data analysis using those methods
  • Has superior writing skills and can make research findings understandable to a broad spectrum of readers
  • Is an experienced trainer who can make data and evaluation topics accessible and interesting to lay audiences
  • BONUS:  Has an eye for design and experience creating infographics

For more information and to apply, see https://www.libraryjobline.org/job/5243/Research-Analyst?ref=page1. The application deadline is March 4, 2016.

Position #2: Research Assistant

Do you have a strong opinion about the Oxford comma and singular “they”? Do you find yourself looking for the “real story” behind the numbers reported in the media? Are you intrigued by the popularity of infographics? Do you like a combination of collaborative and independent work? If so, we’ve got the job for you! The Colorado State Library’s Library Research Service (LRS) has an opening for the position of Research Assistant.

This person will collaborate with LRS colleagues to write about research findings for the library community, develop content for LRS.org, and support evaluation projects. The ideal person for this opening is passionate about libraries, appreciates data and numbers, and is looking for a position that is part job, part discovery, and part learning. S/he is the type of person who knows…

  • how to write well, especially for non-experts and the web
  • why accurate data are so important
  • how to proofread and edit technical documents
  • how to provide excellent customer service
  • how to be part of a team and work independently
  • when to obsess about the small stuff and when to focus on the big picture
  • BONUS: experience or interest in data visualization

For more information and to apply, see https://www.libraryjobline.org/job/5242/Research-Assistant?ref=page1. The application deadline is March 4, 2016.

Half of public library respondents report internet connectivity speeds of more than 10 Mbps


Image credit: Digital Inclusion Survey

We’ve shared the Digital Inclusion Survey with you before, and now new research results dive into data specifically about broadband speeds in public libraries. More than 2,200 public libraries from 49 states reported upload and download speeds at their libraries for wired and Wi-Fi connections. City libraries reported median download speeds of 30 Mbps (wired) and 13 Mbps (Wi-Fi), while rural libraries reported medians of 9 Mbps (wired) and 6 Mbps (Wi-Fi).

According to the most recent data, about half (49.8%) of all libraries reported download speeds of more than 10 Mbps, up from just 18% that had achieved those speeds in 2009. The percentage of libraries with the slowest public Internet speeds of 1.5 Mbps or less dropped to 1 in 10 in 2013 from 42.2% in 2009. While the strides being made are exciting, the reality is that just 2% of public libraries meet national benchmarks set by the Federal Communications Commission for minimum speeds serving smaller communities (100 Mbps) and more than 50,000 people (1 Gbps).

Technical issues also abound, as might be expected when it comes to Internet connectivity speeds. Captured speeds—both at individual user’s devices and for uploads—lag behind subscribed network speeds. Peak use times meant reduced speeds, particularly for city libraries which saw direct download speeds drop 69% during heavy usage when compared to light usage periods.

Read the full report, including additional breakdowns by locale and connection type, here. This broadband discussion is even more timely considering Pew’s recent analysis of Census data about broadband access among households with children and the “homework gap” and what this information might mean for libraries. We’ll bring you more on that research soon.

 Note: This post is part of our series, “The Weekly Number.” In this series, we highlight statistics that help tell the story of the 21st-century library.

LIS starting salaries are up almost 3% for new graduates according to Library Journal survey


Image credit: Library Journal

As part of our periodic look at Library Journal’s Placements & Salaries Survey, we found good news rolling out overall for 2013 graduates. The 2014 survey looked at just over 2,000 of last year’s LIS graduates in order to assess changes in job description, salary, and geographic distribution across the profession. The general trend appears to be positive growth: average starting salaries are up 2.6% across the board compared to 2013, rising above $45,000. The graduates also reported a slightly shorter job search, at an average of 4.2 months.

One component driving this improvement was an expansion of responsibilities across the digital sector of the field. Librarians are increasingly taking on responsibilities such as managing social media, digital assets/content, and digital projects. Of all the positions reported, the highest starting salaries went to data analytics, emerging technologies, knowledge management, and user experience/user interface design, all of which offered an average starting salary over $55,000. Graduates entering user experience/user interface design positions started with salaries a staggering 53% higher than the average LIS graduate, at $70,026.

But here is the catch. Many of these digital positions still only account for a small portion of the total positions being filled by new graduates. For example, digital content management jobs were only a fraction (3%) of the total placements, and while they had a significant concentration in Western states and salaries were slightly higher than average, the overall starting salary for this position actually decreased somewhat from 2013 (by 5%). So what does all of this mean? Positions with substantial digital components are becoming more common, especially in private industry, archives, and public libraries, but this growth is not necessarily consistent across library type and geographical area. In the coming years, we will certainly have to keep an eye on this trend towards the digital LIS professional, as well as how positions and wages compare to those across the field.

Want to see how your library position or region is faring? You can access the full data from the survey here.

Note: This post is part of our series, “The Weekly Number.” In this series, we highlight statistics that help tell the story of the 21st-century library.

New Public Library Data Tools


We are excited to present a brand new set of tools for interacting with data from our Public Library Annual Survey. The new tools are packed with features, including:

  • Quickly locate data for a single year and statistic group
  • Build custom data sets by specifying years, statistics, libraries, etc.
  • Visualize data using graphs and maps
  • Export data in .csv format

Did you know that Library Research Service now has over 25 years’ worth of public library data available? Our new tools make finding and analyzing this data simple!
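As one small example of what you could do with an export, here is a minimal sketch in Python that loads a .csv from the tools and plots a simple trend; the file name and column names (“year”, “total_circulation”) are hypothetical and depend on the data set you build.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export built with the new tools: one row per library per year.
data = pd.read_csv("public_library_annual_survey_export.csv")

# Plot a statewide trend across the years included in the export.
trend = data.groupby("year")["total_circulation"].sum()
trend.plot(marker="o", title="Total circulation by year")
plt.ylabel("Circulation")
plt.tight_layout()
plt.show()
```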

Follow me to the new public library interactive tools