Chapter 5

Assessing for Collection Standards

Abstract

All librarians want to build good collections, but what does that really mean? The answer is different for every library and demand-driven acquisitions (DDA) programs can help libraries achieve this goal. This chapter addresses fears that librarians might have about the destiny of collections when embarking on a DDA program. It offers ideas and examples of institutions that used DDA to meet their collection development goals. It addresses several methods that librarians can use to assess their collections in the context of their users, in light of other libraries, and in line with their future goals for establishing and preserving collections that serve their communities.

Keywords

Demand-driven acquisitions; research; ebooks; collection development; collection management; selection; assessment; preservation; peer institutions; usage; collection quality

How do we gauge the quality of a collection? Is a healthy collection one that satisfies the needs of individual users or one that brands the library as a place to find authoritative information? Are these desires even at odds? Librarians have been struggling with these questions probably forever, but certainly since long before catalog-integrated ebook demand-driven acquisitions (DDA). In 1987, Emory University librarians Elizabeth Futas and David Vidor took to Library Journal with philosophical questions that are still as puzzling today as they were then. They asked about defining good collections, “something that is good is valuable, but to whom? Something that is good is useful, but how is it used? Are useful and used the same?” (Futas & Vidor, 1987). They alluded to the idea that collections become good through some combination of use and value, but the specific parameters of goodness are different for every institution.

Rick Anderson’s article in The Scholarly Kitchen, “What Patron-Driven Acquisition (PDA) Does and Doesn’t Mean: An FAQ,” makes a persuasive point that any “collection that fails to meet the actual real-life needs of the scholarly population it is supposed to serve is not a ‘good’ collection in any meaningful sense” (R. Anderson, 2011c). By this metric, for as long as we have known that a hearty portion of our print collections never circulates (Trueswell, 1969), we have been building collections that do not serve the scholarly needs of our communities as well as they could. There are reasons to keep noncirculating material in a library, including accreditation, preservation, and subject specialization within a greater community of libraries, but each institution should still strive to optimize its budget to produce collections that serve its community to the highest possible degree. DDA has a role to play in this mission, and several institutions have used DDA as a tool in their collection management strategy to ensure that their collections are good by all possible definitions.

Librarians have always worked hard to establish collections that would stand the test of time and be useful for both current and future patrons (Walters, 2012). The data we retrieve from print collections make feedback in this process very difficult. Print circulation data have known holes: they leave out all noncheckout uses and tell us nothing about how material was actually used after checkout. If usage is our hallmark of a good collection, we need to examine the more robust data that come from ebooks and put them into practice to adjust our selections and discovery pools.

There is also the argument that good collections do not necessarily provide for every patron’s immediate needs but rather offer a selection that represents the academic and learning goals of the institution. A librarian’s role might be to select or provide access to the materials with the most educational value. Students may find resources that they think would be good additions to their research, but a good library collection could help steer them towards materials with greater value. This point is fair, but this is no longer the dichotomy that library patrons experience when looking for resources. In a digitally enabled world filled with information, users are not choosing between the resource they want immediately and what the library has available; they are choosing between library resources and the wealth of mixed-quality information at their fingertips on the web. Corrective collection building may not have the power to influence that choice anymore, but with this loss of power comes a great responsibility for libraries to teach information literacy across groups and institutions. Information literacy should not be confined to freshman composition courses in academic libraries but should be a lifelong learning endeavor. A recent Pew Research study (Anderson, 2015) found that 74% of adult learners prefer learning in person, and this is particularly true for adults without college-level information-literacy training. Information literacy is a public good that could open a world of digital resources to our users across all library types. Our collections do not serve as a walled teaching garden anymore, but we are still teachers, and our message is more important than ever.

The exponentially expanding universe of information also makes scholarship more interdisciplinary and fast-moving than traditional library acquisitions strategies can consistently follow. Even the most tuned-in librarian selectors are limited by budgets and ordering cycles that may lock them out of the most current digital research. DDA can overcome this barrier by letting users access materials from a broad pool at the point of need. DDA also lets departmental need shape budget allocations. It is difficult for academic libraries with traditional selector models to reshape their budget allocations each year, even in the face of declining or expanding enrollments in particular disciplines. An interdisciplinary slush fund alongside this model can help fill the gap for rising or falling needs between departments. If one of our criteria for a good collection is that it serves our communities, we will want it to serve all our communities equally (Walters, 2012), not let growing departments suffer shortages while selectors for stagnating departments struggle to spend allocations.

5.1 Measures of Collection Quality

DDA research often examines measures of collection quality, but the criteria are specific to each institution. A common measure of quality is the percentage of holdings shared with peer libraries. Grand Valley State University’s DDA program showed a correlation between items held by one or more peer institutions and increased circulation. The University’s DDA program was set up to serve many disciplines and generate a broad collection tailored to the professional programs offered there. Prior to adopting a DDA program, they had a 31% circulation rate. They also found that their interlibrary loan rate had increased by 330% in 5 years. As a result, they created an interlibrary loan-to-purchase program to reallocate funds toward patron-driven selection. They set up a modest initial pilot that allocated $5000 and limited purchases to items published in the past 3 years that cost less than $75. They found that 36% of the items acquired through this method circulated after the trigger checkout, compared to 19% of traditionally selected titles acquired during the same time period. Titles held by peer institutions circulated at an even higher rate: 42% of titles held by one other institution and 49% of titles held by two or more institutions circulated after the trigger checkout (Way, 2009). DDA was an effective strategy for Grand Valley State University and suggests that there might be value in peer analysis as a measure of collection quality.
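The peer-holdings comparison Way (2009) describes is straightforward to script once a library can export, for each DDA title, how many peer institutions hold it and whether it circulated after the trigger checkout. A minimal sketch; the record layout and the toy numbers below are hypothetical, chosen only to mirror the reported pattern:

```python
from collections import defaultdict

def circulation_rate_by_peer_holdings(titles):
    """Group titles by how many peer institutions hold them and
    compute the share of each group that circulated after the
    trigger checkout. Each title is a dict with the hypothetical
    keys 'peer_holdings' and 'circulated'."""
    groups = defaultdict(lambda: [0, 0])  # holdings count -> [circulated, total]
    for t in titles:
        bucket = groups[t["peer_holdings"]]
        bucket[1] += 1
        if t["circulated"]:
            bucket[0] += 1
    return {k: circ / total for k, (circ, total) in groups.items()}

# Toy data mirroring the Grand Valley pattern: titles held by more
# peers circulated at higher rates (36%, 42%, 49%).
sample = (
    [{"peer_holdings": 0, "circulated": c} for c in [True] * 36 + [False] * 64]
    + [{"peer_holdings": 1, "circulated": c} for c in [True] * 42 + [False] * 58]
    + [{"peer_holdings": 2, "circulated": c} for c in [True] * 49 + [False] * 51]
)
rates = circulation_rate_by_peer_holdings(sample)
print(rates)  # {0: 0.36, 1: 0.42, 2: 0.49}
```

Grouping by peer-holdings count makes it easy to see at a glance whether titles held more widely by peers circulate at higher rates, as Grand Valley State observed.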

A riff on this idea is asking librarians to rank or select titles from DDA discovery records that they would have purchased under ordinary circumstances. Swinburne University was one of the first programs to use integrated catalog records for DDA, starting its project in 2006. The researchers assessed this strategy by comparing patron selections to the items that librarians would select in their subject areas. A total of 100,319 discovery records were distributed to librarians, who marked their selections, and all the selections were compiled into spreadsheets. The librarians selected 8567 titles. The discovery records were then loaded into the catalog for the DDA pilot. The pilot ran for 16 weeks, and 637 titles were triggered. A total of 116 of the student selections were also selected by librarians. The researchers found that the size of a department or discipline did not correlate with the number of titles triggered in that discipline, suggesting that the relationship between enrollment and library research use is nonlinear and that some disciplines rely on library resources more than others. They found a similar disparity in the subject librarian selections: some disciplines suggested a much larger proportion of items for purchase. Though there was not much overlap in the selections, student selections were no less sophisticated than librarian purchases, with most selected titles listed at the advanced academic level. Patrons tended to purchase more titles at the supplementary level, though, suggesting that their needs are perhaps more esoteric than subject librarian selections give them credit for. They did find, however, a disproportionately high overlap rate between librarian and student selections among supplementary titles. Both librarian and student selections averaged less than $100.
They also found that students tended to select titles that were not necessarily in their vendor-designated area of study, suggesting that many of the titles purchased through DDA have interdisciplinary interest. They noted the small portion of overlapping titles and suggested it might reflect a combination of subject librarians being out of touch with the needs of students and students’ needs trending towards the immediate while librarians look to the long haul (Shen et al., 2011).
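A Swinburne-style overlap check reduces to set arithmetic on title identifiers. A hedged sketch, assuming hypothetical ISBN sets exported from the librarian selection exercise and from the vendor’s trigger report:

```python
def selection_overlap(librarian_picks, triggered):
    """Return the triggered titles that librarians also chose, plus
    that overlap as a share of all triggered titles. Both inputs
    are sets of title identifiers (ISBNs, say)."""
    shared = librarian_picks & triggered
    rate = len(shared) / len(triggered) if triggered else 0.0
    return shared, rate

# Hypothetical identifier sets standing in for the two exports.
librarian_picks = {"9780001", "9780002", "9780003", "9780004"}
triggered = {"9780002", "9780004", "9780005"}
shared, rate = selection_overlap(librarian_picks, triggered)
print(sorted(shared), round(rate, 2))  # ['9780002', '9780004'] 0.67
```

Run against real exports, this yields figures directly comparable to Swinburne’s 116 overlapping titles out of 637 triggers.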

There are also advantages to acquiring items that may have been left out of traditional acquisitions programs at other institutions. Assessments of interlibrary loan-to-purchase programs revealed that many programs had significant holes in collection development that were being filled through interlibrary loan. The interlibrary loan-to-purchase program at Purdue University analyzed the items that had been sent to interlibrary loan by subject area and found that 80–99% of these were appropriate for their disciplines and collection (Ward, Wray, & Debus-López, 2003). A similar situation was observed in the Geisel Library at Saint Anselm College. Librarians at the Geisel Library began an interlibrary loan-to-purchase DDA program in 2004. They did a peer analysis of 432 items acquired through their DDA program and found that the overlap between those items and the items acquired by their peer libraries was surprisingly small: only 15% of the items were owned by two or more peer libraries. Despite this, they found that items were being ordered in appropriate subject areas and in subject areas similar to those of items acquired through librarian-selected acquisitions (Waller, 2013). The fact that DDA aligns with the Library’s subject areas of interest could mean that for the Geisel Library, the fear that DDA will not contribute to a good institutional collection is unfounded. Patrons are selecting in their disciplines, even if they are not selecting the same titles that librarians at peer institutions have selected.

There have been no cohesive studies on the appropriateness of DDA as a standalone acquisitions strategy, but there is plenty of research demonstrating that DDA selections are worthy items to hold in a collection, even if they are not exactly what selector librarians would have chosen. In 2011, Loyola Marymount University conducted a DDA pilot in conjunction with a move towards digital items. In 2010, they moved nearly half of their periodicals to digital format, and this laid the groundwork for their successful DDA program. They selected a small but diverse group of subjects for the trial: biology, business, communication studies, philosophy, political science, sociology, and theological studies. They developed separate profiles for each discipline in the trial and pulled weekly purchase reports from Ebrary. They considered the project a success because they stayed within budget, and their purchases were academic in nature while expanding the scope of materials they would have purchased through selection. For the continuation of the trial, they made several tweaks to the profile, including limiting the pool to the most recent edition of each title to prevent duplication and adding turnaway protections to their triggered materials (Hillen & Johnson-Grau, 2011).

Libraries that find that their DDA purchases do not mirror their selected acquisitions might need to adjust their price caps and trigger settings. This was the case when the University of Arkansas undertook a PDA program in 2012 using YBP’s Gobi system. They evaluated the program by circulation, subject area distribution, and academic relevance. They found that 21% of their discovery records were triggered and that half of the titles purchased had over five uses during the trial period. Humanities users triggered the largest share of purchases because the price cap excluded many technical titles in the sciences. Despite these restrictions, they found that the materials were well selected, with 98% of the triggered titles held by 50 or more libraries (Gilbertson, McKee, & Salisbury, 2014).

Librarians can also test the quality of DDA collections by comparing them to the demographics of their institutions. Checking that the disciplines collected through DDA programs match the enrollment and library usage of the same departments at the university, or ensuring that patrons from all community groups are triggering materials, can be an effective test of the collection development value of these strategies.
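One simple way to run this demographic check is to compare each department’s share of DDA triggers to its share of enrollment; large gaps flag communities the program may be over- or under-serving. A sketch with invented department names and counts:

```python
def dda_vs_enrollment(triggers_by_dept, enrollment_by_dept):
    """For each department, return its share of DDA triggers minus
    its share of enrollment. Positive values mean the department
    triggers more than enrollment alone would predict."""
    total_t = sum(triggers_by_dept.values())
    total_e = sum(enrollment_by_dept.values())
    return {
        dept: round(
            triggers_by_dept.get(dept, 0) / total_t - enrollment / total_e,
            3,
        )
        for dept, enrollment in enrollment_by_dept.items()
    }

# Invented counts for illustration only.
triggers = {"history": 120, "biology": 40, "business": 40}
enrollment = {"history": 500, "biology": 900, "business": 600}
gaps = dda_vs_enrollment(triggers, enrollment)
print(gaps)  # history over-represented; biology and business under
```

A department with a strongly negative gap may need outreach or a richer discovery pool rather than a smaller allocation; the numbers only flag where to look.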

The University of Nebraska-Lincoln found that about 74% of the requests in their interlibrary loan-to-purchase program were made by undergraduates and 66.4% of the requests were in the arts, humanities, and social sciences. This echoes previous research (Anderson et al., 2002; Bombeld & Hanerfeld, 2004; Foss, 2007; Houle, 2004; Tyler, Xu, Melvin, Epp, & Kreps, 2010; Ward, 2002; Way, 2009).

The University of Kansas Libraries conducted a citation analysis of their faculty members’ publications in 2012 and 2013 in the humanities, social sciences, and sciences to determine whether the library was providing the kinds of materials needed for research. They found that journal citations accounted for most of the faculty citations (66%) and that the University had access to about 86% of the cited items. A fairly significant portion of these (45%) were available in both print and electronic format, and many of them were journal articles, suggesting a large overlap between the library’s print and electronic collections. The citations that the Library did not provide were evenly split between monographs and journal articles. They found that the citations spanned a larger date range than they were expecting, leading them to consider purchasing more backfiles. They were relieved to find that citation numbers reflected the budgets spent on serials and monographs, respectively (Currie & Monroe-Gulick, 2013).

The University of Minnesota Law Library began an interlibrary loan-to-purchase program in 2007. They used careful evaluative criteria on the interlibrary loan specialists’ side to determine whether a request would be fulfilled via loan or purchase. Staff considered the potential cost of processing, shipping, and fulfilling the request via interlibrary loan, as well as factors like how well the item would fit into the University’s collection, before determining that a monograph would be purchased rather than borrowed (Zopfi-Jordan, 2008). The law library also uses the Copyright Clearance Center to obtain individual articles and issues of journals that the library does not subscribe to in full. Librarians evaluate items, both monographs and articles, by time and cost. Because purchase on demand is less expensive and faster than interlibrary loan, the library finds these methods worthwhile for its students (Zopfi-Jordan, 2008).

In addition to determining whether DDA programs provide quality materials for the community, librarians can also assess the collections built by these strategies to determine whether the materials are useful for research. Loyola Marymount University’s DDA program turned up some unexpected purchases, like duplicates of print copies, less scholarly materials, and more long-tail-type esoteric scholarly materials. These would not have been purchased through approval plans, but they were valuable for research (Hillen & Johnson-Grau, 2011). The University of Nebraska-Lincoln plotted the percentage used and the average circulations per volume for each Library of Congress subclass obtained via the interlibrary loan-to-purchase program and found that 90.8% of the books fell into classes that were above average in either circulations or breadth of use (Tyler et al., 2010). This means that most of the items acquired by this method either fit well into the research demands of the collection or were items of high value to users.
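The Nebraska-style subclass analysis can be reproduced from a basic circulation export. This sketch (with a hypothetical (subclass, checkouts) record format) computes the two measures Tyler et al. plotted: the percentage of volumes used at least once and the average circulations per volume in each LC subclass:

```python
from collections import defaultdict

def subclass_usage(volumes):
    """Aggregate usage per LC subclass: the percentage of volumes
    used at least once and the average circulations per volume.
    `volumes` is a list of (subclass, checkouts) pairs."""
    per_class = defaultdict(list)
    for subclass, checkouts in volumes:
        per_class[subclass].append(checkouts)
    return {
        sc: {
            "pct_used": sum(1 for c in counts if c > 0) / len(counts),
            "avg_circ": sum(counts) / len(counts),
        }
        for sc, counts in per_class.items()
    }

# Hypothetical circulation export: (LC subclass, checkout count).
data = [("PS", 3), ("PS", 0), ("PS", 5), ("QA", 1), ("QA", 2)]
stats = subclass_usage(data)
print(stats)
```

Plotting these two values per subclass, as the Nebraska study did, shows which classes are above average in breadth of use, in circulation intensity, or both.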

Quality collections are those that fulfill users’ needs, so surveys are also a good way to assess these programs for quality. Oregon State University (OSU) Libraries surveyed users of their purchase-on-demand program to see whether they enjoyed the experience and what the library could do to improve. This is a particularly interesting case because OSU made sure patrons were aware of the program when they received their books by way of an informative book band on the physical item that explained the purpose of the POD program. Not surprisingly, patrons liked the items they had recommended for purchase: over 55% said they would borrow the item again, and 53% said they would recommend that the title be added to reading lists (Hussong-Christian & Goergen-Doll, 2010).

5.2 Factors Influencing Collection Quality

Newness is one of the most objective measures we have for collection quality, and it is a powerful predictor of the kind of usage we will observe in materials. A Kent State University study found that 92% of books were triggered within 1.5 years of publication; the average upload-to-trigger time was 300 days. The researchers suggest that removing the “dead weight” of unused discovery records from the system is a worthy goal and one that is not yet adequately discussed in the research (Yin, Downey, Urbano, & Klingler, 2015). The alternative argument, for keeping discovery records in the catalog, is that they cost nothing to maintain and only expand the options for readers (Joyner Cramer, 2013). Each institution should determine its own comfort level with untriggered discovery records, but there is some research suggesting that items become less likely to be triggered as they age.
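Aging analyses like Kent State’s need only two dates per triggered record: when the discovery record was loaded and when it was triggered. A sketch, assuming a hypothetical export of date pairs and using roughly 548 days as a stand-in for the 1.5-year window:

```python
from datetime import date

def trigger_latency_profile(records, cutoff_days=548):
    """Given (loaded, triggered) date pairs for triggered discovery
    records, return the average load-to-trigger latency in days and
    the share triggered within `cutoff_days` (~1.5 years)."""
    latencies = [(triggered - loaded).days for loaded, triggered in records]
    avg = sum(latencies) / len(latencies)
    within = sum(1 for d in latencies if d <= cutoff_days) / len(latencies)
    return avg, within

# Hypothetical export of load and trigger dates.
records = [
    (date(2014, 1, 1), date(2014, 7, 1)),   # 181 days
    (date(2014, 1, 1), date(2015, 3, 1)),   # 424 days
    (date(2014, 1, 1), date(2016, 6, 1)),   # 882 days
]
avg_days, share_within = trigger_latency_profile(records)
print(round(avg_days, 1), round(share_within, 2))  # 495.7 0.67
```

A library whose own share-within-window figure is high can weed aged, untriggered records from the discovery pool with some confidence that few late triggers are being forfeited.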

The University of Iowa showed a fairly linear relationship between triggered titles in their 4-year DDA program, sessions on those titles, and publication year (Fischer & Diaz, 2014). The University of Liverpool found in their examinations of package ebook deals that the newer titles in their collections showed more usage, but there was robust usage throughout the collections, even for materials that were over 5 years old. Users were very interested in accessing new content, but still sought out older content at a slightly declining rate for each year between its publication date and the present (Bucknell, 2010).

In 2008, Ohio State University began building their ordering structure for DDA, launching a pilot in 2009 with a $25,000 deposit, 93,000 ebook discovery records, and a restriction to items under $300 published after 2007. They also limited the publishers and subject headings and excluded manuals, foreign-language titles, technical law, and fiction books. This left around 16,000 discovery titles for the initial pilot. Ohio State University librarians were able to show that imprint date affected student use of the books, but their two trials had an unequal distribution of items (Hodges, Preston, & Hamilton, 2010).

Simple newness might not be the only factor in whether patrons access ebook materials. Alain Lamothe compared static ebooks, whose content stayed the same after purchase, with dynamic ebooks, whose material was updated after purchase, and found circulation advantages for the dynamic titles (Lamothe, 2015). While the two styles represented slightly different types of information and users in the library, this suggests that users may be thinking critically about newness, seeking the best information for their particular project, rather than simply excluding titles from their searches based on publication dates.

Indiana University looked at the number of checkouts and percentage of collection circulation for items purchased in 2003. They found strong variations between publishers in checkout percentage. In contrast to many studies of newness, Indiana University found that books purchased in 2003 that were more than 3 years old at the time of purchase circulated more heavily than the average newly published title. This could be because librarians were very selective about purchasing older titles. They had strong overall usage in their collections, with 75% of the collection circulating and an average of 6.6 checkouts per item, though they did specify that this could be influenced by factors like checkout period and renewals (Adams & Noel, 2008).

5.3 Questions for Assessing Collections Based on Quality

• Are the items acquired through DDA programs similar to items acquired through other acquisitions strategies?

  ◦ Do they represent the same research level?

  ◦ Do they represent research areas that are used in the community?

  ◦ Are they items that are held by comparable libraries?

  ◦ Do they represent a standard of scholarship similar to items acquired in other ways?

• What can we learn from the materials that fall outside of what we traditionally acquire?

  ◦ Do they represent new research trends in particular disciplines?

  ◦ Are they materials that fall in between the disciplines we traditionally serve?

  ◦ Do they represent materials that we have traditionally excluded from our collection building (textbooks, international publications, dissertations) that we should consider collecting?

• Do the materials collected through DDA align with institutional and learning goals?

  ◦ Do they represent the research interests of faculty members and the study needs of students?

  ◦ Are they useful for research in the community?

  ◦ What do librarians think about the content that is being purchased through DDA?

  ◦ Can we conduct a survey to gauge whether patron interests and librarian collection development goals are aligned?

• What can we do to ensure that the collections triggered through DDA are as well developed as possible?

  ◦ Are we using librarian expertise to help develop and refine discovery pools? If not, how can we use these skills better?

  ◦ How can we develop our informal DDA strategies, like faculty requests and librarian/faculty relationships, into formal strategies like catalog-integrated ebook DDA?

• How do our circulations change as materials age?

  ◦ Are there records that we can remove from our discovery profile as they become unlikely to be triggered?

  ◦ Does the circulation on our purchased items decline as these items age?

  ◦ Can we use the data from ebook collections we already hold to determine a good bottom age range for our discovery profile?
