Room | Rotunda Video playlist
| Room 216 Video playlist
| Room 217 Video playlist
|
08:30 - 09:00 | Check-in & Coffee |
09:00 - 09:10 | Welcome |
|
|
09:10 - 10:15 | Kickoff Facilitator: Jason Kovari LD4P began as a tool for transforming library metadata production from workflows based in the MARC formats to linked data. As the project has evolved, however, it has become an opportunity to reevaluate the library’s role in a developing, worldwide information ecosystem. |
While much of the world knows about Wikipedia, Wikidata has only recently emerged as a key global structured data project and a key way to engage libraries, archives, and museums. Andrew will discuss the ways in which the Wikimedia movement has adopted Wikidata and how it interfaces with institutions and collections through linked open data initiatives and innovative reuse. He will highlight notable projects that showcase these efforts and a vision for the future that includes Structured Data on (Wikimedia) Commons, a global citation database in WikiCite, and unified collections-contribution workflows now being engineered together by the Wikimedia community and GLAM institutions. |
Through a case study written for the Design for Diversity project, we will consider how metadata and aggregation across collections hold simultaneous potentials: to perhaps surface a more diverse range of histories and cultures; to perhaps surface those histories through metadata that still lacks cultural relevance or respect; or to perhaps only re-inscribe the largely white, largely male histories represented in U.S. library, archive, and museum collections. Amanda Rust will first briefly introduce the Design for Diversity project, and Dorothy Berry will then discuss her work making African American materials more discoverable through digitization and metadata aggregation in Umbra Search. |
|
|
|
10:15 - 10:30 | Break |
Block 1: 10:30 - noon (90 min) | Discovery 1 Facilitator: Greg Reeve The larger questions of how and where to augment discovery processes with linked data are potentially relevant to end-users and institutions both within and beyond GLAM areas. We propose a session where we can present the design questions, processes, and findings from our LD4P2 work and begin a discussion around related discovery interfaces that use external data and/or linked data to support end-users. |
This talk will present work at UW-Madison to enhance the library catalog with Linked Data-derived info cards for the authors, contributors, and topical names found in bibliographic works. This feature was added to the production search interface for the UW-Madison Libraries in January, and Steve will discuss how the feature works as well as an assessment of it by library staff and patrons. |
| Modeling (Video) Facilitator: Asaf Bartov
I will present analysis and recommendations compiled by the SHARE-VDE Transformation Council’s working groups regarding SHARE-VDE data modelling, with special focus on SuperWorks/BF:Works and Master Instances. Further activities and analysis planned under the direction of the SVDE working groups will also be briefly outlined. |
This presentation explores the intersection of philosophical theories of time and their experimental serializations in RDF and related languages. Implications for the adoption of particular models are addressed, as well as future areas for exploration. |
We consider rich semantics beyond typical approaches to linked data. For instance, we have explored supporting access to the NEH/LC NDNP digitized historical newspaper collection. Retrieval with dynamic, structured “community models” seems more promising than traditional indexing. We also applied rich semantic modeling to scientific research reports so that they would be implemented as knowledgebases rather than as text. Upper ontologies, frame semantics, and object-oriented programming-language semantics are among the approaches that may be incorporated into such “direct representation”. |
This presentation describes a six-month project to build an intermediate knowledge layer between a knowledge base of classical texts built using FRBRoo and the CITE architecture and the Perseus catalog of published works. The process entailed automatic extraction of works and expressions from multiple MODS records and their recombination into FRBRoo statements, which were then made available through a SPARQL endpoint. |
UC San Diego's participation in the LD4P2 Cohort will revolve around describing Open Access (OA) resources. This presentation will cover some of the driving forces behind this focus on Open Access materials, the opportunities and challenges we see for describing them with the BIBFRAME model, and our current training and preparation for the release of the Sinopia editor. |
| Wikidata Tutorial 1 (Video) Facilitators: Amber Billey & Will Kent
The Wikidata Tutorial will provide a hands-on introduction to the basics of Wikidata and the Wikidata Query Tool. Participants will learn the basic functions of both, how to make simple manual edits to Wikidata items, and how to create simple database queries. The workshop will also discuss how institutions can use Wikidata effectively and employ automated tools like the Distributed Game and OpenRefine. A laptop or mobile device will be necessary to fully participate in the workshop. |
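As a taste of the tutorial's query-writing segment, here is a minimal sketch of the kind of simple query participants might build. The identifiers P31 ("instance of") and Q7075 ("library") are real Wikidata IDs; the helper that turns the query into a shareable Wikidata Query Service link is an illustrative convenience, not part of any tool covered in the session.

```python
from urllib.parse import quote

# A simple SPARQL query: find items that are instances of "library"
# (Q7075) and return their English labels. P31 is "instance of".
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q7075 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

def query_url(sparql: str) -> str:
    """Build a shareable Wikidata Query Service URL embedding the query."""
    return "https://query.wikidata.org/#" + quote(sparql.strip())

print(query_url(QUERY))
```

Pasting the printed URL into a browser opens the query, ready to run, in the Wikidata Query Service interface.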
|
noon - 13:00 | Lunch |
Block 2: 13:00 - 15:00 (2 hours) | Discovery 2 Facilitator: MJ Han Stanford implemented Schema.org in its Blacklight-based catalog SearchWorks. This lightning talk will discuss the implementation, plans for enhancements, and the path forward to shareability with other Blacklight-based discovery environments. |
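To illustrate the general shape of such work, here is a hedged sketch of the kind of Schema.org JSON-LD a catalog item page might embed; it is not Stanford's actual SearchWorks markup, and all values are invented.

```python
import json

# A minimal Schema.org description of a book, of the sort a
# Blacklight-based catalog could emit for an item page.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "Example Author"},
    "inLanguage": "en",
}

# Serialized, this is what would sit inside a
# <script type="application/ld+json"> tag in the page source,
# where search engines can pick it up.
markup = json.dumps(record, indent=2)
print(markup)
```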
Learn how bibliographic data provided by libraries is used at Google and why machine-readable linked data is the way to go. |
I will describe my attempt so far to learn what linked data is and to implement a simplified project at my institution. I am using my university’s strategic goals to make the case to our administration and colleagues, and I hope to inspire them with a demonstration of how higher visibility of a unique collection benefits anyone interested in it. |
A visual workflow not only helps students and volunteers with cataloging, but also supports Linked Data with a focus on concepts instead of terms. The worksheet tool at DressDiscover.org explores this approach for describing artifacts of historic clothing, providing visual choices backed by terms from thesauri that are available as linked data. |
One of the chief promises of linked data for libraries is search engine visibility. Few studies, however, have attempted a quantitative analysis of linked data's impact on search engine rankings. This presentation will review a linked data project from a search engine optimization perspective. |
eagle-i (www.eagle-i.net) is an open platform, open ontology discovery tool for research resources, including microscopes, equipment, stem cells, transgenic mice, and software. It is a distributed node network of 40 institutions linked by a central search. For ten years, it has been a stable resource but has been somewhat forgotten. As science and research have moved to a more open model, what role can eagle-i play, and how can the linked data that describes the resources be used in the research and publishing cycles? This presentation shares answers to these questions. |
| Special Formats 1 (Video) Facilitator: Mary Seem This presentation covers current work related to the EU-funded LITMUS (Linked Irish Traditional Music) project at the Irish Traditional Music Archive (ITMA) in Dublin, Ireland. LITMUS is a two-year effort to create linked data tools specific to the needs of Irish traditional music and dance. These linked data tools include the first ontology specific to Irish traditional song, instrumental music, and dance, as well as two bilingual thesauri for traditional instruments and tune types. In this presentation, I will focus on linked data considerations particular to Irish and other traditional musics, and how these challenges were met within the context of the LITMUS project. Examples drawn from LITMUS will illustrate over-arching concepts related to linked data relationships in music and consider future directions for such work. |
This presentation will describe steps taken to integrate Discogs as a copy cataloging source in an RDF cataloging tool for describing Sinatra 45 RPM vinyl records. Because Discogs does not natively offer its data as RDF, it might not seem an obvious choice; why and how this data was chosen for conversion to RDF in this particular project will be discussed, along with lessons learned about generalizing the workflow. |
This session will not be recorded. Representatives of the Rare Materials Affinity Group will take part in a facilitated discussion session on the topic of how linked data might and might not play well with special collections cataloging. We’ll examine ways that linked data promises to make our materials more accessible to users, and identify potential areas of tension with long-standing descriptive bibliographical tradition. |
| Wikidata Tutorial 2 (Video) Facilitators: Amber Billey & Will Kent
For those with a basic understanding of the design and organization of Wikidata items, this session introduces best practices for working with collections of data and their associated ontologies. Ideally, attendees will bring a laptop to better experiment with the tools introduced, but a mobile device can also suffice. The session will cover key tools for discovering, uploading, fixing, and reusing Wikidata content for collections, and include notable case studies with GLAM institutions such as the Smithsonian Institution and The Metropolitan Museum of Art. Tools to be covered include: - Wikidata user-selectable gadgets such as Recoin, EasyQuery and HotCat
- WikiProjects
- SQID for browsing Wikidata entity relationships
- Wikidata Query advanced methods
- Wikidata Graph Builder for investigating ontologies
- TABernacle for editing and fixing collections data
- QuickStatements for bulk uploading and item modification
- Wikidata Game for mobile-friendly crowdsourced Wikidata contributions and fixes
- Petscan for advanced querying across Wikimedia projects
- PAWS and Python tools for advanced scripting and needs
|
|
15:00 - 15:30 | Break |
Block 3.1: 15:30-16:30 (60 min) | Library of Congress Special Topics Facilitator: Greg Reeve From MARC to BIBFRAME and back again The Library of Congress must convert its existing MARC data to BIBFRAME in order to migrate to BIBFRAME. What has always been recognized as equally necessary, but has been far less prominent, is the need to convert BIBFRAME data back into MARC. There are numerous reasons the Library of Congress must do this. After full adoption, LC must still distribute MARC records as part of its distribution service until there is no longer serious demand. LC’s Voyager system, including the OPAC, is MARC-based, and there are innumerable other systems at LC that not only expect but might only be able to consume MARC data. These realities mean that LC will need to convert MARC records to BIBFRAME and then back into MARC, for system consumption and/or distribution. These scenarios raise countless questions: How will more or less granular data be converted? Do we have the data and/or mappings to reconstruct the MARC 007 and 008 fields, or the Leader? How do we merge Works and Instances back into a MARC record? Can we faithfully reconstitute a MARC record based on a given LCCN? What is the future role of the LCCN in this environment? This session will describe how LC is addressing some of these questions and demonstrate some concrete steps toward doing this.
Issues around building a system to house and test BIBFRAME data The presentation will deal with major topics such as issues relating to ingesting converted descriptions and linking them properly, dealing with the differences between editor content and conversion content, and identifiers when URIs do not exist. |
| Digital Collections & Institutional Repositories 1 (Video) Facilitator: Nancy Fallgren This is a pilot project the USF Libraries Linked Data Team undertook to experiment with transforming digital collections into linked data. The team chose a small oral history collection to work on; they were able to reconcile the data, transform it into triples, and design SPARQL queries to support basic search. Throughout the process, the team was inspired to streamline the workflow and further enhance the final product. |
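The core idea in such a pilot can be sketched in a few lines: metadata reconciled into subject-predicate-object triples, then queried by pattern. This is a minimal illustration, not the USF team's actual pipeline; a tiny matcher stands in for a real SPARQL engine, and all names, prefixes, and values are hypothetical.

```python
# Hypothetical oral-history metadata expressed as (s, p, o) triples.
triples = [
    ("ex:interview1", "dc:title", "Oral History Interview 1"),
    ("ex:interview1", "dc:creator", "ex:person1"),
    ("ex:person1", "foaf:name", "Jane Doe"),
]

def match(pattern, store):
    """Return every triple matching an (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# SPARQL-style question: which resources carry a dc:title?
titled = match((None, "dc:title", None), triples)
```

A real deployment would store the triples in an RDF triplestore and expose them through a SPARQL endpoint, but the pattern-matching idea is the same.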
A case study of introducing linked data concepts to a cataloging request and the successful partnership between cataloging and a campus data archive. |
| Omeka Tutorial (Video) Facilitator: Michelle Futornick In this hands-on workshop (using the sandbox at https://omeka.org/s/download/#sandbox) we will begin with a brief introduction to the popular digital humanities publication platform Omeka S and how it implements Linked Open Data. We will focus on content creation and, of course, metadata creation options using a variety of LOD vocabularies, and how content and metadata are represented in the JSON-LD-based API. To get the most out of this tutorial, participants should bring a laptop and have access to a few files (images or other content) and some metadata for those files. |
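Because items come back from the API as JSON-LD, they can be consumed with ordinary JSON tooling. Below is a trimmed, hypothetical example of an item response (real Omeka S responses carry more keys and full URIs) and how a client might pull out the Dublin Core title values.

```python
import json

# A simplified, invented JSON-LD item of the shape an Omeka S
# REST API might return.
item_json = """
{
  "@context": "http://example.org/api-context",
  "@id": "http://example.org/api/items/1",
  "@type": ["o:Item"],
  "dcterms:title": [
    {"@value": "Sample Photograph", "@language": "en"}
  ]
}
"""

item = json.loads(item_json)
# Property values are lists of value objects; collect the literals.
titles = [v["@value"] for v in item.get("dcterms:title", [])]
print(titles)
```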
|
16:30 - 16:40 | Break |
Block 3.2: 16:40 - 17:15 (35 minutes) | The National Archives API: A Five-Year Journey from Idea to Imperative Facilitator: Will Kent This session tells the story of the National Archives' first catalog API—our design choices, use cases, philosophies—and how it has evolved over five years of development and use and become ingrained in our daily work. It is a story not just about the API itself, but about how the act of designing an API from scratch has provoked us to change old ways of thinking about discovery, reference, and, ultimately, archival work itself. |
| Digital Collections & Institutional Repositories 2 (Video) Facilitator: Michelle Durocher This lightning talk will discuss the Perseus Catalog, a research project of the Perseus Digital Library, and current work to convert the legacy metadata collection and related bibliographic data to linked data standards. |
Starting in 2018 as part of the Cultivating a Latin American Post-Custodial Archival Praxis grant funded by the Andrew W. Mellon Foundation, the LLILAS Benson Post Custodial project team began working on developing and migrating the Latin American Digital Initiatives (LADI) digital repository to the Drupal 8/Islandora 8 (formerly CLAW)/Fedora 4 repository framework. One core component of this work includes investigations and implementation of linked data capabilities for better discoverability, access, and analysis. |
| Managing Local Data (Video) Facilitator: Michelle Futornick Implementation of BIBFRAME or other linked data cataloging workflows at scale will require institutions to address both how the data will be managed over time and how administrative activities that have been driven by MARC data will need to adapt. Bring your ideas, concerns, and questions to this conversation about what is necessary in the short term as we experiment with new approaches to description and what is required in the long term to make adoption of best practices feasible. Potential discussion questions include: - If you are cataloging materials in a linked data editor, does your institution plan to represent those materials in the current integrated library system, or other local systems?
- Is it best practice to continue to manage a MARC dataset alongside RDF entities or other linked data datasets? Or, will it be best to transform historic data so that bibliographic data is uniformly represented in a local system? What are the risks and benefits associated with each of the options?
- What are the functional requirements for managing linked data for description in a system such as FOLIO that is intended to be format agnostic for bibliographic data?
|
|
17:15 - 19:15 | Reception |