We used various scripts to analyze different data sources, and we set up code for viewing Fuseki data.

LOC Hub Analysis

  • We used client-side AJAX queries to retrieve the first 10,000 hubs from LOC and then navigated to related works and instances to analyze how many LOC Hubs provide two or more instances with ISBNs or LCCNs.
  • We wrote scripts to further analyze these groupings of hubs to see how many catalog matches we could get (a sketch of the shared pattern appears after this list).
    • LCCN analysis
      • Finding catalog matches for LCCN sets grouped under LOC Hubs
        • This file (HubSetsLccn.csv) lists an LOC Hub on each line followed by a list of LCCNs from instances that fall under that hub.
        • A script (processlccn.rb) reads in this file and then generates the file (lccnhubonlyfirst), which lists the LCCN rows that matched at least two catalog items and ends with a summary.  (The output says "ISBN" but is in fact "LCCN" because the same code was copied from the ISBN analysis.)
      • Finding catalog matches for LCCN sets grouped under LOC Hub to Hub relationships
        • Each line in the file (prophublccnsets.csv) lists the name of the relationship (e.g. "hasTranslation") that links two different hubs, followed by the LCCNs that fall under those hubs. 
        • A script (processrellccn.rb) reads in this file and then generates the file (lcchubrels), which starts with a list of the property and LCCN groups that resulted in at least two catalog matches (e.g. "hasTranslation : 2017328875,92911176,93910013"), followed by a summary of the total number of rows and LCCNs in the original file and the number of matching rows/LCCNs.  In addition, the file lists those hub relationship and LCCN groupings from the original CSV file that resulted in exactly one match in the catalog.
    • ISBN analysis
      • Finding catalog matches for ISBN sets grouped under LOC Hubs
        • The script (processcsv.rb) analyzes the file (HubSets.csv), which lists LOC Hubs with the groups of ISBNs that fall under each hub, and generates a file (tenthousandresults).  This resulting file first lists the sets of ISBNs from the original CSV where each set has at least two catalog matches.  The file ends with a summary of the total number of rows processed from the original file and the number of rows (i.e. ISBN sets) that matched at least two catalog items.
      • Finding catalog matches for ISBN sets grouped under LOC Hub to Hub relationships
        • The script (processrelcsv.rb) analyzes the file (prophubsets.csv).  This CSV file (you can sense a pattern by now) lists the property connecting two LOC Hubs, followed by a list of ISBNs that fall under the two hubs related by that property.  The analysis results in the file (updateHubRelResults), which lists the relationship and ISBN groups that result in at least two catalog matches.  The file ends with a summary of the total rows processed from the original file and the number of rows that resulted in at least two catalog matches.
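
The scripts above all follow the same basic pattern: read a CSV in which each row starts with a hub (or a hub-to-hub property) and is followed by a group of identifiers, look each identifier up in the catalog, and report the rows where two or more identifiers matched, ending with a summary. The Ruby sketch below approximates that pattern only; the Solr endpoint, core name, and the isbn_display field are hypothetical placeholders rather than the values used by the actual scripts, and the same loop applies to LCCNs with a different field.

```ruby
# Hedged sketch of the shared pattern behind processcsv.rb, processrelcsv.rb,
# processlccn.rb, and processrellccn.rb. The Solr URL and the isbn_display field
# are hypothetical placeholders, not the values used by the actual scripts.
require "csv"
require "json"
require "net/http"
require "uri"

CATALOG_SOLR = "http://localhost:8983/solr/catalog/select" # placeholder endpoint

# Returns true if the catalog has at least one record for this identifier.
def catalog_match?(identifier, field: "isbn_display")
  uri = URI(CATALOG_SOLR)
  uri.query = URI.encode_www_form(q: "#{field}:#{identifier}", rows: 0, wt: "json")
  result = JSON.parse(Net::HTTP.get(uri))
  result.dig("response", "numFound").to_i > 0
end

total_rows = 0
matching_rows = 0

# Each row: a hub URI (or a property such as "hasTranslation"), then its identifiers.
CSV.foreach(ARGV[0]) do |row|
  label, *ids = row.compact.map(&:strip).reject(&:empty?)
  next if ids.empty?

  total_rows += 1
  matched = ids.select { |id| catalog_match?(id) }
  if matched.size >= 2
    matching_rows += 1
    puts "#{label} : #{matched.join(',')}"
  end
end

puts "Processed #{total_rows} rows; #{matching_rows} had at least two catalog matches."
```

The sketch would be run against one of the CSVs, e.g. ruby process_sets.rb HubSets.csv (the script name here is hypothetical); the per-hub and per-relationship analyses differ mainly in what the first column holds and in how the matching and single-match rows are reported.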

...

  • We wanted to analyze the POD (Platform for Open Data) transformation provided by Jim Hahn (University of Pennsylvania) to see if we could retrieve matches for our set of ISBNs that fell under the same LOC Hubs and that only had a single match in the Cornell catalog.  This transformation provided sets of CSV files per institution, where the headers represented MARC fields and the rows contained values per MARC record mapped to those fields.
    • This file contains the list of ISBN sets that resulted in a single match in the Cornell catalog.
    • This script reads in this file and compiles the ISBNs that occur in it.  The script then reads in the transformed CSVs, which contain POD data mapped to MARC fields and values.  If the script finds any of the target set of ISBNs represented within an institution's transformed data, it outputs the transformed rows that match any of these ISBNs (a sketch of this scan appears after this list).  The results of this script are included here, with matching rows for Brown, Chicago, Columbia, Dartmouth, Duke, Harvard, Penn, and Stanford.
      • A separate script retrieves the Cornell catalog record information for the matching ISBNs, resulting in this file, which lists the original set of ISBNs we were querying against, followed by the catalog id and title of the record, followed by the ISBNs for that item captured in the record itself.
    • Using the results from the previous step, this script reads in the information for MARC records matching the original set of ISBNs we are querying against and uploads that information to a Solr index we set up specifically to allow us to store and search across these multiple records (see the indexing sketch after this list).  We also add the institution to each record to indicate where the data came from.  If any records are not added due to insufficient information, the output identifies those records; in this case, three records from Brown did not have 001 fields and were not added to our Solr index.
    • This script uses the original file with ISBN sets that result in a single Cornell catalog match, and queries both the LD4P3 copy of the Cornell Solr index and the POD index set up for this analysis to find which catalog records across these institutions match these ISBN sets (see the cross-index query sketch after this list).  The output is generated as an HTML page available here.
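
A minimal sketch of the POD scan described above: collect the target ISBNs, then stream each institution's transformed CSV and print any row whose ISBN column contains one of them. The input file name, the pod/<institution>/ directory layout, and the use of an "020" column for ISBNs are assumptions made for illustration, not details of the actual transformation.

```ruby
# Hedged sketch of the POD scan. The input file name, directory layout, and the
# "020" (MARC ISBN) column name are assumptions, not the actual transformation.
require "csv"
require "set"

# Target ISBNs: one ISBN set per line, identifiers comma-separated.
target_isbns = Set.new
File.foreach("single_match_isbn_sets.txt") do |line|      # hypothetical file name
  line.strip.split(",").each { |isbn| target_isbns << isbn.strip }
end

# Scan every institution's transformed CSV for rows containing a target ISBN.
Dir.glob("pod/*/*.csv") do |path|                          # hypothetical layout: pod/<institution>/*.csv
  institution = File.basename(File.dirname(path))
  CSV.foreach(path, headers: true) do |row|
    isbns = row["020"].to_s.split("|").map { |v| v.strip.split(" ").first }.compact
    next unless isbns.any? { |isbn| target_isbns.include?(isbn) }
    puts "#{institution}: #{row.to_h}"
  end
end
```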
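
For the indexing step, the matched rows can be posted to Solr's JSON update handler with the institution added to each document, skipping anything without an 001 record id (as happened with the three Brown records). This is a sketch only, assuming a local Solr core named pod and hypothetical field names:

```ruby
# Hedged sketch of the Solr indexing step: the core name "pod" and the field names
# are placeholders; only records with an 001 (record id) are added, mirroring the
# behavior described above for the three skipped Brown records.
require "json"
require "net/http"
require "uri"

SOLR_UPDATE = URI("http://localhost:8983/solr/pod/update?commit=true") # hypothetical

def index_records(rows, institution)
  docs, skipped = [], []
  rows.each do |row|
    if row["001"].to_s.strip.empty?
      skipped << row
      next
    end
    docs << {
      "id"            => "#{institution}-#{row['001']}",
      "institution_s" => institution,
      "isbn_ss"       => row["020"].to_s.split("|").map(&:strip),
      "title_s"       => row["245"]
    }
  end

  request = Net::HTTP::Post.new(SOLR_UPDATE, "Content-Type" => "application/json")
  request.body = JSON.generate(docs)
  Net::HTTP.start(SOLR_UPDATE.host, SOLR_UPDATE.port) { |http| http.request(request) }

  puts "Skipped #{skipped.size} record(s) with no 001 field" unless skipped.empty?
end
```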
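
The final comparison then amounts to running each ISBN set against both indexes and writing the results out as HTML. A rough sketch follows; the two Solr URLs, the isbn_ss field, the input file name, and the output file name are all placeholders standing in for the LD4P3 Cornell index and the POD index described above.

```ruby
# Hedged sketch of the cross-index comparison: both Solr URLs, the isbn_ss field,
# and the file names are placeholders, not the actual configuration.
require "cgi"
require "json"
require "net/http"
require "uri"

INDEXES = {
  "Cornell" => "http://localhost:8983/solr/cornell/select", # hypothetical
  "POD"     => "http://localhost:8983/solr/pod/select"      # hypothetical
}

# Return the catalog documents in one index that match any ISBN in the set.
def matches(url, isbns)
  uri = URI(url)
  uri.query = URI.encode_www_form(q: isbns.map { |i| "isbn_ss:#{i}" }.join(" OR "),
                                  rows: 20, wt: "json")
  JSON.parse(Net::HTTP.get(uri)).dig("response", "docs")
end

rows = File.readlines("single_match_isbn_sets.txt", chomp: true) # hypothetical file name
html = +"<html><body>\n"
rows.each do |line|
  isbns = line.split(",").map(&:strip)
  html << "<h2>#{CGI.escapeHTML(isbns.join(', '))}</h2>\n"
  INDEXES.each do |name, url|
    matches(url, isbns).each do |doc|
      html << "<p>#{name}: #{CGI.escapeHTML(doc['id'].to_s)}</p>\n"
    end
  end
end
html << "</body></html>\n"
File.write("isbn_set_matches.html", html)
```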

Fuseki UI

  • We set up this lightweight code to enable the viewing of data accessible through a Fuseki SPARQL endpoint.
    • To try out this UI, please download the contents of this directory to your local machine.
    • The Fuseki SPARQL endpoint needs to be specified in the js/config.js file.  To create this file, copy over this example file to the js directory and set the SPARQL endpoint URL as the value of the "fusekiURL" property, which is currently commented out.
    • You can now open the view file in your browser to review the classes and predicates present in your data.  Clicking the class and predicate links will show example entities and statements, respectively (the sketch at the end of this section illustrates the kind of SPARQL queries involved).  More details and screenshots can be found in this report.
    • If you want, you can see a set of random statements extracted using the "feeling lucky" page.
  • We also started some preliminary work to visualize hubs and relationships here, but this code probably requires a lot more review and work.
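
As referenced in the view-file step above, the UI works by issuing ordinary SPARQL queries against the endpoint configured in js/config.js. The Ruby sketch below shows the kind of class and predicate listing involved; the endpoint URL is a placeholder, and the actual page issues equivalent queries client-side using the fusekiURL setting.

```ruby
# Hedged sketch of the kind of SPARQL queries the view page issues; the endpoint
# URL is a placeholder standing in for the fusekiURL value in js/config.js.
require "json"
require "net/http"
require "uri"

FUSEKI_URL = URI("http://localhost:3030/dataset/sparql") # placeholder endpoint

# Run a SELECT query against the endpoint and return the result bindings.
def sparql_select(query)
  uri = FUSEKI_URL.dup
  uri.query = URI.encode_www_form(query: query)
  request = Net::HTTP::Get.new(uri, "Accept" => "application/sparql-results+json")
  response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
  JSON.parse(response.body).dig("results", "bindings")
end

# Distinct classes in the data (what the class list on the view page shows).
sparql_select("SELECT DISTINCT ?class WHERE { ?s a ?class } LIMIT 100").each do |b|
  puts b.dig("class", "value")
end

# Distinct predicates (what the predicate list shows).
sparql_select("SELECT DISTINCT ?p WHERE { ?s ?p ?o } LIMIT 100").each do |b|
  puts b.dig("p", "value")
end
```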