More than 50 years in Library and Information Science
BY NANCY WILLIAMSON
In 1950 this author emerged from the then University of Toronto Library School to begin the first of two careers in the field. Much has happened to change libraries and librarianship in the years since. Those changes were experienced through fifteen years of public librarianship (6 years in reference and 9 years in technical services) followed by 40 years as a library and information science educator with involvement in teaching, research, professional associations and consultancies in North America and abroad. It is from this perspective that this article observes the changes and draws on some personal experiences.
We no longer store, access and retrieve information in the same way that we did in 1950. That stalwart, the catalogue, still exists, but it too has changed. Some of the changes have been rather mundane and some quite exciting. Some have improved things and some have had a somewhat detrimental effect. The changes have been gradual rather than dynamic, and evolutionary rather than revolutionary. Through it all, one thing has remained constant – the two basic needs of information seekers: the need to locate particular items about which they have some information, and the need to browse or survey the contents of a domain or field in which they have some interest. What is stored and how it is stored has a profound effect on what can be retrieved and how. As this paper proceeds, it looks, more or less chronologically, at changes in methods of storing information, changes in methods of retrieval, and the factors that have precipitated those changes – especially, but not only, computer technology.
The 1950s represent a traditional, print environment. As this author prepared to enter the library world, there were plenty of jobs of all kinds. How she got to Hamilton as a reference librarian is somewhat of an accident: she was looking for a cataloguing job and was interviewed and hired for a reference position. Six years later she became a cataloguer. Looking back now, I regard this as the best approach one could have taken at the beginning of a library career. Throughout my career, my advice to students has been that one of the best approaches to a career in librarianship is to gain experience in the methods and practices used in both the storage and the retrieval of information.
The year 1950 was itself a momentous one in Canadian library history. That year, the National Library of Canada came into existence, not as a library but as a service under the name National Bibliographic Centre. The purpose of the Centre was two-fold: 1) the creation and publication of Canadiana, the national bibliography, and 2) the planning and development of a National Union Catalogue that would serve to locate and co-ordinate the totality of Canada's literary heritage. It also became the depository for copyright materials. As a building, the Library itself didn't materialize until 1967, but from 1950 the National Bibliographic Centre began to collect materials by and about Canada and Canadians published anywhere in the world. NLC could never be a definitive collection of all retrospective materials that could be considered “Canadian” [although in a digitized world that might change]. Much early Canadian material was already housed in other libraries – the British Museum, the Bibliothèque Nationale (Paris), the Library of Congress, the New York Public Library and other major libraries within Canada. However, a union catalogue could provide Canadians and Canadian libraries with a means by which to identify, locate and access as many of the materials as possible, and would permit the sharing of resources among libraries across the country.
How was this to be accomplished? It began with a massive filming of main entries from the catalogues of more than 350 libraries across Canada. The cards were filed in the National Union Catalogue in Ottawa, and requests could be sent (or telephoned) to locate specific items in institutions where they could be seen and/or might be borrowed on interlibrary loan. Libraries whose catalogues had been filmed were committed to sending catalogue copy to Ottawa for any future additions to their catalogues: first as cards, later electronically. The National Union Catalogue on cards continued until 1980, when a database was started. This was the beginning of cooperation and sharing of Canadian catalogue data.
What was reference work like in the 1950s? First of all, it was all based on manual searching of the catalogues, pamphlet files, books, the large journal indexes and abstracting services such as the Readers' Guide to Periodical Literature, Business Index, and the many specialized published subject indexes that existed. Secondly, we had to create our own indexes to some materials. For example, there were many periodicals and journals (particularly Canadian periodicals) that were not indexed anywhere (the Canadian Index to Periodicals and Documentary Films began in 1962). As to my own situation, one of my tasks between answering reference questions was indexing a list of journals, issue by issue as they were received. The results were filed in a card file that also served as a “where to look” file. Moreover, this file included local information (e.g., who are the local coin collectors?) and information gleaned from research on hard-to-answer questions or on topics missed by the commercial indexes. Newspapers were marked and clipped for important topics, and the clippings were placed in clipping files. Local history topics (e.g., notorious murder cases) were made into scrapbooks. Subject headings were also assigned to pamphlets, which were placed in pamphlet files. Picture files were also maintained. While there were many good commercial indexes and finding tools, in terms of coverage they were not the universe, and local requirements often had to be taken care of locally.
In some libraries, uncatalogued materials were indexed by methods whose origins lay outside traditional librarianship. For example, edge-notched cards might be used to index technical reports. This approach avoided the complexities of cataloguing, natural-language descriptors could be applied, and the reports could be processed and retrieved quickly. The significance of this particular method was its similarity to Boolean searching by computer, which came at a much later date.
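The parallel with Boolean searching can be illustrated with a small sketch. Edge-notched cards performed this logic mechanically: each hole around the card's edge stood for a descriptor, a notch "set" it, and needling a hole dropped out every card notched for that descriptor. The reports and descriptors below are invented for illustration.

```python
# A minimal sketch of Boolean retrieval over descriptor sets – the
# logic edge-notched cards carried out with needles. Each report is
# indexed by a set of natural-language descriptors (all invented here).

reports = {
    "TR-101": {"metallurgy", "fatigue", "aluminium"},
    "TR-102": {"metallurgy", "corrosion"},
    "TR-103": {"aerodynamics", "fatigue"},
}

def boolean_search(index, all_of=(), any_of=(), none_of=()):
    """Return report ids whose descriptors satisfy the Boolean query."""
    hits = []
    for rid, descriptors in index.items():
        if not set(all_of) <= descriptors:       # AND: must have all
            continue
        if any_of and not (set(any_of) & descriptors):  # OR: at least one
            continue
        if set(none_of) & descriptors:           # NOT: must have none
            continue
        hits.append(rid)
    return sorted(hits)

print(boolean_search(reports, all_of={"fatigue"}, none_of={"corrosion"}))
# -> ['TR-101', 'TR-103']
```

The same AND/OR/NOT combinations later became the standard query model of the online bibliographic databases discussed below.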
Cataloguing in the 1950s also had much about it that was manual. In 1956, when I arrived in the Hamilton Public Library cataloguing department, cataloguing rules were pre-AACR1, and the standard tools (DDC, LCC and LCSH) were in use. Automation had not come to the catalogue or any other operations, and there was little coordination of workflow. Each step in the processing – acquisitions, cataloguing, and physical processing – was handled separately. Books were ordered and orders were recorded on multi-form acquisitions slips in rainbow colours that could be distributed across numerous files. When the books arrived, they were checked in by the acquisitions department and sent on to cataloguing. The next step was to search for Library of Congress copy in the LC catalogues and the proof slip files. If matching copy was found, cards were ordered. One printed card became the working copy and was used to add headings and class numbers to the other cards to make up the card sets for the catalogues. If no copy was found, original cataloguing was done. It was assumed that cataloguers did not type. A master copy was created using “library hand.” From it a typist prepared card sets for the catalogue (multiple sets might be needed). Coverage by LC copy was very low. Even in the mid-1960s the literature reported coverage as low as 45%, and cataloguing-in-publication had not yet been invented. Canadiana was used for copy where possible, but the availability of copy tended to be slow. Gradually this situation improved and cooperative cataloguing entered the picture – but much later. In the late 1950s and early 1960s card production became mechanized – catalogue data was typed on a “master” sheet and run through an offset press. This was a great improvement over typing every card. Full card sets could be produced from one copy, and all that had to be done was to add subject and added entry headings to each set.
Ultimately, the operations were streamlined in what became “technical services.” In the later 1960s it became a mark of prestige to be able to say that your library had a technical services division.
Through the 1950s into the 1960s there were major changes in Canadian library education. This was somewhat related to the automation of libraries, because automation brought about changes in curriculum needs. But it also had to do with the number of librarians thought to be needed. It was the beginning of the age of expansion in the number of libraries and the number of educational institutions. The early 1950s brought the introduction of a sixth-year MLS programme. Then in 1964 McGill went to a two-year MLS, and Toronto followed suit in 1970. The PhD programmes came later, in 1971 at Toronto and in 1973 at Western Ontario (Ex Libris 2004). In the early 1960s a shortage of librarians was predicted. Up until then there were only three schools. In response to the shortage, in quick succession five more schools were founded – Montréal and UBC in 1961, Western Ontario in 1966, Alberta in 1968, and Dalhousie in 1969. It was thought that 400 graduates per year were needed. In preparation for the increase in the number of students, U of T began to expand its faculty and prepare for new quarters. By September of 1965, changes were already in the works. On Labour Day 1965, the School of Library Science (its new name) moved from the Ontario College of Education (OCE) at Huron and Bloor streets to larger temporary quarters at College and McCaul streets, where it remained until 1970, when the present quarters were opened. This author arrived on the scene the day after that Labour Day move. Between 1964 and 1972 the full-time faculty increased from 4 to 23 (Ex Libris 2004, p. 10) and the student body grew from 100 to 200. After that high point, in the leaner years, the faculty shrank to 15 to 17. Currently, approximately 140+ new MISt students are admitted annually and there are approximately 50 students in the PhD programme. At the present time both the student body and the number of full-time faculty are on the increase.
Over recent years the programmes have diversified, and various aspects of information handling have converged at FIS – library science, archives, information systems and, in 2006, museology.
The addition of a PhD programme had a profound effect on the master's programme. The learning environment changed significantly. In particular, the curriculum was enriched in content and there were important and interesting opportunities for students to participate in research and to work more closely with faculty. Faculty members upgraded their qualifications and became more deeply involved in research and the development of standards and participated in international conferences and seminars.
At the beginning of the 1960s both reference and cataloguing were traditional and fairly standard. For example, periodical indexes were similar to each other and catalogues were uniform in format; cataloguing rules and standards had been stable for the first half of the 20th century. Information was contained mainly in paper format, but with the increasing development of non-print media and the introduction of computers into the field all of this began to change. Catalogues were no longer uniform in format. Some began to appear as book catalogues, others went to microform format and ultimately were transformed into OPACs. Electronic databases differed from periodical indexes in print formats and in the ways they were accessed and searched. As automation entered the library world in the mid-1960s, libraries and information centres appeared to be on different wavelengths from each other. In 1965 the first courses in automation were offered at what is now FIS. The MARC record was developed and the first online databases came into existence. In the process, librarianship became much more international in scope.
In 1967 AACR1 was published. Previous catalogue codes had been national in scope. With the development of AACR1 there was much international consultation and discussion. The aim was a code based on logical principles, which could be a model for codes all over the world. Discussions went back to first principles, and at an international meeting the Paris Principles were approved as the basis for catalogue codes generally. The IFLA Standing Committee on Cataloguing subsequently developed the International Standard Bibliographic Description (ISBD) as the international standard for document description. AACR1 and the ISBDs both recognized the importance of rules for handling the cataloguing of non-print materials.
The early years of automation brought serious growing pains. Many mistakes were made and the implementation of the technology was often ill-conceived. Librarians and systems experts had difficulty achieving a clear understanding of each other's needs. Librarians needed to better understand that print formats are not always suitable for computer display, and that information that is implicit in print needs to be made explicit for the machine. They understood electronic databases, which came first, but catalogues were a different kettle of fish: there was less data to play with and the possibilities for retrieval were different. For the systems experts the major problem was understanding the purposes and use of the data being stored. In the broadest sense the advent of the computer brought numerous changes, and the impact of automation was reflected in a number of ways:
The earliest step was a report on Automation and the Library of Congress (King 1963), which spawned the MARC project that created the record format for handling catalogue data in machine-readable form. A pilot project was carried out to test the viability of machine-readable catalogue records. Records on magnetic tape were distributed to 16 specially selected libraries. U of T was the only Canadian library involved. Participation in the project afforded a library a certain prestige, and participants were expected to experiment with the tapes. However, only 2 or 3 of the participants carried out projects. One of these was the University of Toronto. Using the machine-readable data, through ONULP (the Ontario New University Libraries Project), book catalogues were created in 1967 for the collections of 5 new university libraries in Ontario – Guelph, Brock, Trent, and the U of T satellites, Scarborough and Erindale. The catalogues were one source of evidence that the data could be manipulated to produce acceptable cataloguing output. From the results of the project, the format was adjusted and MARC II (forerunner of ISO 2709) was developed and made available in 1968. This meant that libraries could buy and exchange catalogue data in machine-readable form and use it in a local system. While LC developed the original MARC, the format was modified and adapted for use in other countries – France, the UK, Sweden, Canada, etc. Compatibility was an issue: each country had its own version. Ultimately a version called UNIMARC was produced to be used as a kind of switching language (or format), making it possible to exchange data across countries. MARC formats were also created for other kinds of files, e.g. authority records, classification, etc.
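The core idea of the MARC format – labelling each element of the record with a numeric tag so a machine can tell the author from the title or a subject heading – can be sketched as follows. The tags shown (100, 245, 650) are real MARC field numbers, but the record layout is a deliberate simplification for illustration, not the actual ISO 2709 directory-and-offsets byte structure, and the sample data is invented.

```python
# A simplified view of a MARC-style record: numbered tags make explicit
# what a printed card left implicit (which element is the author, the
# title, a subject heading). Illustrative only – not the ISO 2709
# byte layout, which stores a leader, a directory of offsets, and
# variable-length fields.

record = [
    ("100", "Williamson, Nancy J."),                   # main entry: personal name
    ("245", "Is there a catalogue in your future?"),   # title statement
    ("650", "Catalogs, On-line"),                      # subject added entry
    ("650", "Information retrieval"),                  # subject fields repeat
]

def fields(rec, tag):
    """Return every occurrence of a tag (many MARC fields are repeatable)."""
    return [value for t, value in rec if t == tag]

print(fields(record, "650"))
# -> ['Catalogs, On-line', 'Information retrieval']
```

Because every element is explicitly tagged, the same record can be reformatted into a card, a book catalogue entry, or an OPAC display, and exchanged between systems – which is precisely what made shared cataloguing possible.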
Bibliographic databases preceded the OPACs, with the first bibliographic systems appearing in the mid-1960s. They were developed on completely different bases from catalogues. Rules, standards and content were different. Some of them had been print services prior to automation; others began as databases. In most cases the citations were accompanied by abstracts as well as controlled subject descriptors, increasing the potential for successful retrieval. But prior to 1980, “the few available online information retrieval (IR) systems were expensive and complicated to search, so end users either delegated their searches to trained intermediaries or searched card catalogues and print indexes on their own. In the mid 1980s online IR system suppliers introduced simpler front end interfaces to online IR systems and marketed search services to end users, but few end users ever used these services because of their high cost. By the late 1980s, end users could go to libraries to search the most popular online IR databases on CD-ROMS or through the online catalog's interface.” (Markey, 2007, p. 1073).
Catalogue records, on the other hand, had no abstracts. In the early years the computerized catalogues operated as separate and distinct systems with their own approaches to access. The enhanced content of databases meant that they could be better search tools than OPACs. Databases had their own systems of subject descriptors – usually thesauri – not LCSH. The first printed thesauri in the modern sense, the ASTIA Thesaurus of Descriptors and the Chemical Engineering Thesaurus, were published in 1960. As a tool the thesaurus came from outside traditional librarianship, was developed on the basis of accepted standards, and was well suited to design and application in computerized systems. Experimentation with OPACs began in the 1960s, but they did not become really functional until the 1980s. Some libraries (e.g. U of T) went from cards to microform catalogues before they moved to OPACs. Subsequently, OPACs became the predominant form of catalogue in North American libraries. Originally they were developed locally in libraries. Examples are the UTLAS system at U of T, Melvyl at the University of California, and the first catalogue at Northwestern University. In the beginning, small libraries contracted with the larger ones. Over time the production of OPACs was taken over by commercial vendors, often for economic reasons: on annual budgets, libraries had difficulties in handling the financial aspects and experienced problems with cash flow.
The systems themselves grew from various origins. Some expanded out of circulation systems; others were designed as catalogues; still others were one component of integrated online systems. The bibliographic standards did not change, but there were no generally accepted standards for the ways in which data were displayed, the amount of data to be displayed, or the manner in which the user had to interact with the OPAC. There were differences in what you could search on, and varying amounts of authority control. By 2006, many libraries had gone through at least three generations of OPACs. The first-generation OPACs were rather crude. Some were virtually one-liners with the briefest of entries. Typefaces were frequently old-fashioned. Most had no authority control and so lacked cross-references for names and subjects. Subject access was crude by today's standards. Those OPACs that emerged from cataloguing systems had some of the idiosyncrasies of those systems. For example, U of T's first computerized catalogue displayed records in the order they were added to the catalogue. Since it is a union catalogue, 7 copies of a title in 7 different libraries meant separate records for each, and they often were not displayed together. Much was learned, and second-generation catalogues capitalized on fixing mistakes. A dilemma for many libraries was which catalogue to “buy.” As each vendor capitalized on advancements, each new OPAC became better than the last and was soon overtaken by something even more efficient and more elegant. Most third-generation catalogues are “webbed,” or web-based – they are “in the web,” so to speak. This permits hyperlinks to other catalogues and other databases. The result is increased functionality: hyperlinks can be created so that bibliographic records from other institutions can be located and cut and pasted into one's own catalogue, and the searcher can move into various databases.
Automated systems spawned numerous consortia, networks, and cooperative and shared cataloguing agreements among libraries and information centres. Their main purpose was to achieve faster and more efficient processing of materials and, not least, to do it more cheaply. But cheaper didn't mean better. Cataloguing departments began to accept records without change or editing of such data as classification numbers and subject headings. There was less and less tailoring to local needs. OCLC came into being as the Ohio College Library Center in 1967. It expanded nationally in 1971 and changed its name in 1981. The world embraced it as the cheapest and fastest way to online bibliographic data. By 1999, 33,700 libraries across the world were participating. Today OCLC serves 53,548 libraries of all types in the U.S. and 96 countries and territories across the world. Think about it: even Oxford University moved to using LCSH.
With one or two exceptions, the 1980s were primarily years of consolidation and development of the tools and methods that were available. The 1980s also brought the development of online systems for the major classification schemes – first DDC, later LCC and UDC. Dewey and UDC lend themselves to this because of their numerical notations. Many thought that it would be impossible to convert LCC because the hierarchies are reflected in the captions but not in the notation. At LC's suggestion, this author investigated the implications for conversion. Eyeballing 3690 pages of schedules, I produced a report (Williamson, 1995) on the problems with some suggestions as to how they could be solved, and, yes indeed, LCC was converted to machine-readable form.
Another innovation of the 1980s was the technical services workstation, first used in cataloguing departments to facilitate workflow. Designs differed depending on the networking environment, systems capabilities, and customizing by staff. Workstations use microcomputer technology and can be built on PC, Macintosh, UNIX or Power PC systems. There must be enough power to run the programmes, and they should support multitasking with local storage facilities. Access should be to working files and cataloguing support tools. Operating requirements include high-resolution monitors and screens large enough to accommodate multiple windows. A tool of this nature is available from LC as the Cataloger's Desktop, which includes access to AACR2 rule interpretations, the Subject Cataloging Manual, dictionaries, thesauri, etc., and can accommodate locally produced documents. The tools are accessed through a uniform interface.
On the retrieval side, databases grew and thesauri multiplied, but the thesauri were still in paper format. However, there was one major piece of research that had implications for the future. In the early 1980s, research was carried out on systems that were variously called viewdata, videotext, Telidon, and Prestel (UK). These were to be general information systems available to everyone through their television sets. Involved in the research were the Department of Communications of the Canadian government and Bell Canada (Williamson 1980a and 1980b). Access to the systems was hierarchical and menu-driven, using tree-like structures. A few examples were put out for comments from the general public, and specific individuals were asked to subscribe. The content was advertising, and it was assumed that anybody could be a provider. These systems didn't catch on, primarily because while you could follow the trees there was no direct access to topics. One academic was heard to say “52 steps to the weather and you can make a gyroscope on the way.” They would never be suitable for large databases, so most of the projects died. However, they are of great significance because they were underdeveloped forerunners of the Internet.
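The frustration with menu-tree access can be sketched in a few lines: every topic sits at the end of a fixed path of menu selections, and there is no way to jump straight to it by name. The menu tree below is invented for illustration (real viewdata trees ran far deeper, hence the "52 steps to the weather").

```python
# A sketch of hierarchical, menu-driven viewdata access: to reach a
# topic the user must make one menu selection per level of the tree.
# The tree below is invented; real systems were much deeper.

tree = {
    "News": {"Local": {}, "National": {}},
    "Services": {
        "Travel": {"Trains": {}, "Flights": {}},
        "Weather": {"Today": {}, "Forecast": {}},
    },
}

def steps_to(node, target, depth=1):
    """Count the menu selections needed to reach a topic; None if absent."""
    for name, children in node.items():
        if name == target:
            return depth
        found = steps_to(children, target, depth + 1)
        if found:
            return found
    return None

print(steps_to(tree, "Forecast"))
# -> 3 (Services, then Weather, then Forecast)
```

Direct keyword access – the thing these systems lacked – would find "Forecast" in one step regardless of where it sits in the tree, which is why tree-only navigation could never scale to large databases.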
As we all know, the 1990s brought the Internet – an imperfect tool, nevertheless extremely popular and useful as a method of storage and retrieval of information. It was, and still is, unwieldy, and many people have worked at harnessing its power. Individual websites have been organized by librarians, novices and others, with some good results and some bad. Various methods of categorization have been used, including classification schemes, taxonomies, etc., and some very good indexes have been developed. Some people fear that Google will replace the library, and others ponder “the future of the catalogue.” Most notable among these is the Library of Congress itself (Calhoun, 2006). It has been impossible to organize the whole, but there are now some promising efforts to organize parts of the whole.
In 1981, at an ALA conference in San Francisco, in a paper entitled “Is there a catalogue in your future? Access to information in 2006” (Williamson, 1982), I predicted that there would still be a catalogue, but that it would be a very different kind of catalogue – a catalogue that would no longer be the focal point, but one whose role would be defined by our ability to redefine its procedures in terms of the whole needs perspective in relation to other access tools. It would require the development of a service as opposed to separate collections and separate databases. There is definite evidence that this is actually happening, but there is still some distance to go. In another article, entitled “Invisible thesauri: the year 2000,” Jessica Milstead, a private consultant and expert on thesaurus construction, wrote of thesauri that “their uses in information storage and retrieval will be quite different, as they are blended into systems of machine-aided indexing and text retrieval systems in which the boundaries between the 'thesaurus' and other semantic tools are vague and which will aid the user far more in defining research than is commonly the case today” (Milstead 1995, p. 93). These two articles point the way to “whole” or integrated services, i.e., one-stop shopping and the blending and integration of tools and information content. This is already happening. Portals and gateways are customized collections that include the web-based catalogue in a new role.
One example of this integration can be seen in the role that the U of T web-based catalogue has begun to play in retrieval. A search for documents via literature indexes such as Information Science and Technology Abstracts (ISTA) and Library and Information Science Abstracts (LISA) retrieves a list of documents in brief entry form. Included with each entry is a signal, “Get it UTL.” A click on the signal will retrieve an abstract and, where possible, an indication that full text is available; a further click will tell the user whether full text is available online and lead to it. If no full text is available, the user is directed to the U of T catalogue, where it is possible to determine whether a library on the campus has a print copy and where it can be found. From the retrieval point it is also possible to take advantage of two other services. RefWorks allows users to add a citation to their personal collection (i.e. bibliography). The second service permits users to make interlibrary loan requests.
Another approach is to search directly through a portal or gateway, such as Scholars' Portal or CSA Illumina. These are among the newest tools coming to the aid of information seekers. They provide access to a long list of web resources and databases through a single facility. These tools tend to be either client- or subject-oriented and are usually quality controlled, created by human editors and subject specialists to ensure a high level of quality. Completeness and balance are sought in collection development, and policies are set up to ensure that the contents are up to date. Formalized content is recommended, as are deep levels of classification and subject browsing structures, and thesauri and other controlled vocabularies are called for. There are, as yet, no prescribed standards, so portals differ in their approach. As one example, Scholars' Portal gives the user four choices. Searches can be carried out on databases which the user selects from a list. If several databases are chosen, they are searched simultaneously. In other services, specific “electronic journals” owned by the library can be accessed. Again, users can automatically create their own personal databases of citations and can activate interlibrary loan requests. The system connects information seekers to both print and electronic resources, and the catalogue has an important role to play here. To achieve these services libraries subscribe to a system referred to as SFX, which controls the maintenance and updating of the system.
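The simultaneous-search idea at the heart of such portals can be sketched simply: one query is fanned out to every database the user has selected, and the hits are merged into a single result list tagged by source. The database names below are real services mentioned in this article, but their contents here are invented for illustration.

```python
# A sketch of portal-style federated search: one keyword query run
# against several selected databases at once, results merged and
# labelled with their source. Database contents are invented.

databases = {
    "LISA": ["Thesauri in online retrieval", "OPAC display standards"],
    "ERIC": ["School library OPACs", "Thesauri for teachers"],
}

def portal_search(selected, keyword):
    """Search the chosen databases simultaneously; tag each hit with its source."""
    hits = []
    for name in selected:
        for title in databases.get(name, []):
            if keyword.lower() in title.lower():
                hits.append((name, title))
    return hits

print(portal_search(["LISA", "ERIC"], "thesauri"))
# -> [('LISA', 'Thesauri in online retrieval'), ('ERIC', 'Thesauri for teachers')]
```

A production portal adds what this sketch omits – de-duplication, relevance ranking, and the link-resolution step (the role SFX plays) that routes each merged hit to full text or to the catalogue.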
A third approach is through the individual databases (e.g. ERIC, Agricola, etc.), which can still be accessed directly. Recently the ERIC system has been redeveloped to meet the requirements of these reconfigured systems. A small database (e.g. Unesco Documents and Publications) best illustrates what is happening. While these systems are not for the casual Internet user, they will greatly facilitate the work of information professionals and serious researchers.
Yes, Virginia, there will still be a catalogue, and there is a logical role for it to play. There are some who are still debating its future. In the light of the fact that “the destabilizing influences of the Web, widespread ownership of personal computers, and rising computer literacy have created an era of discontinuous change in research libraries” (Calhoun, 2006, p. 5), and given that students and scholars are known to be bypassing the catalogue for other sources, it is important that such debates take place. It does not mean that the catalogue should be eliminated or downgraded. Rather, we need to determine the rightful role of the catalogue in the scheme of things. Catalogue code revisers are certainly not anticipating the demise of the catalogue: they are hard at work on a new code. Originally referred to as “AACR3,” it has been assigned a new working title, “RDA: Resource Description and Access,” and is projected to be published in 2008. In order to accommodate changing conditions it takes a more open approach than its AACR predecessors – a kind of umbrella approach to all information objects.
It is safe to say that the Internet will never be a perfect information system. It is too large and there are too many information providers. Even if guidelines could be crafted to improve quality and consistency, only a very few providers would follow them. However, one can see that some things are moving in the right direction. There are new tools designed to improve searching; hyperlinks are being used to advantage and some well-structured directories are being developed. True to predictions, the thesaurus has become an important tool in retrieval. The guidelines for thesaurus construction and display – American, British and international – have been drastically revised to encompass the electronic environment.
At the same time, there is still much room for improvement in aids to retrieval. Frequently the user is presented with long alphabetical lists to be searched in situations where more use of categorization would help. While the thesaurus has emerged as an important online search tool, many database and website designers seem afraid to indicate its presence to users. In some cases there is an online thesaurus, but it is a stand-alone tool, not hyperlinked to its database. There is hope that this will come. Moreover, it is almost invariably assumed that users consulting a thesaurus know the precise term they need to look for. When the term is input, the system brings up the term asked for and its NTs, BTs and RTs. Some navigation among the related terms may be possible, but in almost every case it is not possible to browse the entire thesaurus. The facility to browse is crucial but not always available. The black box is still with us. Improvements in one direction may create drawbacks in another. It is not easy to predict where things will go or how successful they will be. So far the situation might be summed up in the words of Lewis Carroll in Alice in Wonderland. The Mad Hatter asked, “But my dear, was there progress?” and Alice earnestly replied, “Well, there was change.” In the wonderland of information and technology of the 21st century there must be progress as well as change.
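The kind of thesaurus navigation argued for above – where a user can walk from any entry term to its broader (BT), narrower (NT) and related (RT) terms rather than being shown a single static entry – can be sketched as follows. The vocabulary is invented for illustration; a real thesaurus would follow the published construction standards and be hyperlinked into its database.

```python
# A sketch of hyperlinked thesaurus navigation: from any term the
# searcher follows BT (broader term), NT (narrower term) and RT
# (related term) links, and so can browse outward from an entry
# point instead of needing the precise term in advance.
# All terms are invented for illustration.

thesaurus = {
    "Catalogues": {
        "BT": ["Bibliographic tools"],
        "NT": ["Online catalogues", "Union catalogues"],
        "RT": ["Indexes"],
    },
    "Online catalogues": {"BT": ["Catalogues"], "NT": [], "RT": []},
}

def browse(term, relation):
    """Follow one set of BT/NT/RT links from a term; [] if none or unknown."""
    return thesaurus.get(term, {}).get(relation, [])

print(browse("Catalogues", "NT"))
# -> ['Online catalogues', 'Union catalogues']
```

Hyperlinking each listed term back into `browse` (and each descriptor into the database's search) is exactly the integration between thesaurus and database that the paragraph above finds missing in most systems.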
This is a slightly revised version of a talk presented at the Annual Meeting of CASLIS, May 2006.
Calhoun, Karen. (2006). The Changing Nature of the Catalog and its Integration with Other Discovery Tools: Final Report. Prepared for the Library of Congress. Washington: Library of Congress.
Ex Libris Association. Library Education Anniversary Committee. (2004). “A History of Library and Information Science Studies in Canada.” Special Issue of ELAN, Spring 2004.
King, Gilbert W. (1963). Automation and the Library of Congress. Washington: Library of Congress.
Marcum, Deanna B. (2006). “The future of cataloging.” Library Resources and Technical Services 50 (1): 5-9.
Markey, Karen. (2007). “Twenty-five years of end-user searching. Part 1: Research findings.” Journal of the American Society for Information Science and Technology 58 (8): 1071-1081.
Milstead, Jessica. (1995). “Invisible thesauri: the year 2000.” Online & CD-ROM Review 19 (2): 93-94.
Williamson, Nancy J. (1980a). “Viewdata systems: Designing a database for effective user access.” Canadian Journal of Information Science 6: 1-14.
Williamson, Nancy J. (1980b). “An Optimum Structure for the Viewdata System to be Used in the VISTA Project Field Trial: Final Report.” Prepared for Bell Canada, Headquarters Business Development. (Unpublished)
Williamson, Nancy J. (1982). “Is there a catalogue in your future? Access to information in 2006.” Library Resources and Technical Services 26 (April-June): 113-128.
Williamson, Nancy J. (1995). The Library of Congress Classification: a Content Analysis of the Schedules in Preparation for Their Conversion into Machine-Readable Form. Washington: Library of Congress Distribution Service.