...

  • A self-contained search application
    • maintains its own index
    • exists as a web application
    • you send it requests to (see the sketch after this list)
      • search
      • add, update, or delete records
  • May live in the same Tomcat instance as VIVO, or in a separate servlet container
    • For example, the standard Solr example server runs in Jetty
  • Built on Lucene
  • Open source
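
As a rough illustration of what those requests look like, the sketch below issues a plain HTTP search request from Java. The host, port, core name, and search term are placeholders, not values taken from an actual VIVO installation.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    public class RawSolrRequestSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder host and core name; the real location depends on the deployment.
            String solrBase = "http://localhost:8983/solr/vivocore";

            // A search is just an HTTP GET against Solr's select handler.
            String term = URLEncoder.encode("chemistry", "UTF-8");
            URL searchUrl = new URL(solrBase + "/select?q=" + term + "&wt=json");

            BufferedReader in = new BufferedReader(new InputStreamReader(searchUrl.openStream(), "UTF-8"));
            for (String line = in.readLine(); line != null; line = in.readLine()) {
                System.out.println(line);   // the raw JSON search response
            }
            in.close();

            // Adds, updates, and deletes are HTTP POSTs of documents to the update handler
            // (solrBase + "/update"); in practice a Java application would use Solr's
            // SolrJ client library instead of raw HTTP, as in the later sketches.
        }
    }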

How does VIVO use Solr?

...

VIVO uses the Solr search engine in two ways:

  • as a service to the end user,
  • as a tool within the structure of the application.

Solr for the end user

Like many web sites, VIVO includes a search box on every page. The person using VIVO can type a search term, and see the results. This search is conducted by Solr, and the results are formatted and displayed by VIVO.

Solr allows for a "faceted" search, and VIVO displays the facets on the right side of the results page. These allow the user to filter the search results, showing only entries for people, or for organizations, etc.
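
A hedged sketch of such a faceted query, written against Solr's Java client (SolrJ, using the Solr 3.6/4.x-era HttpSolrServer class), is shown below. The core URL and the classgroup field name are assumptions for illustration; VIVO's real schema and query code may differ.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FacetedSearchSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/vivocore");  // placeholder URL

            // The user's search term, plus a request for counts per class group (assumed field name).
            SolrQuery query = new SolrQuery("cancer");
            query.setFacet(true);
            query.addFacetField("classgroup");
            query.setFacetMinCount(1);

            QueryResponse response = solr.query(query);

            // Each facet value comes back with a count; VIVO renders these as filter links
            // (People, Organizations, ...) beside the search results.
            FacetField classGroups = response.getFacetField("classgroup");
            for (FacetField.Count value : classGroups.getValues()) {
                System.out.println(value.getName() + ": " + value.getCount());
            }

            // Clicking a facet amounts to re-running the search restricted to that class group.
            query.addFilterQuery("classgroup:\"http://example.org/classgroup/people\"");  // placeholder URI
            solr.query(query);
        }
    }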

Solr within VIVO

VIVO is based around an RDF triple-store, which holds all of its data. However, there are some tasks that a search engine can do much more quickly than a triple-store. Some of the fields in the Solr search index were put there specifically to help with these tasks.

For example, the browse area on the home page shows how many individuals VIVO holds for each class group.

VIVO could produce this data by issuing a SPARQL query against its data model. However, this would take several seconds for a large site, and we do not want the user to wait that long to see the home page. To avoid this delay, the class group of each individual is stored in the Solr record for that individual. Solr can count these fields very quickly, so VIVO issues a Solr query against the index, and displays the results on the home page.
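
The home-page counts could be obtained with a facet-only query that asks for no result documents at all. The sketch below is again illustrative: the classgroup field name and core URL are assumptions, not taken from VIVO's actual code.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class ClassGroupCountSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/vivocore");  // placeholder URL

            // Match every record, return no documents, and ask only for per-class-group counts.
            SolrQuery query = new SolrQuery("*:*");
            query.setRows(0);
            query.setFacet(true);
            query.addFacetField("classgroup");   // assumed field name
            query.setFacetMinCount(1);

            QueryResponse response = solr.query(query);
            for (FacetField.Count count : response.getFacetField("classgroup").getValues()) {
                System.out.println(count.getName() + ": " + count.getCount());   // e.g. "people: 20000"
            }
        }
    }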

Record counts on VIVO's index pages are obtained using the same type of Solr query.

How is the index kept up to date?

  • When an individual is added, edited, or deleted, Solr is given the new information and updates the index (see the sketch after this list).
  • Sometimes the index must be rebuilt
    • Most commonly, after an ingest, since some of the ingest mechanisms bypass the usual VIVO framework
      • It would be too slow to update the Solr index on each new statement from the ingest
      • Work is underway to add a search-aware ingest method, which the Harvester or other tools could use.
    • There is currently no way to rebuild only a section of the index.
      • Either it is up to date, or it must be fully rebuilt.
      • There are plans under discussion to rebuild only the entries for specific named graphs, or for a list of URIs.
    • A rebuild is done on the side; when it is complete, it replaces the previous index and Solr switches to the rebuilt one.
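
As an illustrative sketch (not VIVO's actual indexing code) of the per-individual updates mentioned above, the fragment below replaces the Solr record for an individual after an edit and removes it after a deletion. The field names, hook methods, and core URL are assumptions.

    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexUpdateSketch {

        private final HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/vivocore");   // placeholder URL

        // Hypothetical hook: called after an individual has been added or edited.
        public void individualChanged(String uri, String name, String classGroupUri) throws Exception {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("DocId", uri);              // assumed field names; the real schema may differ
            doc.addField("nameRaw", name);
            doc.addField("classgroup", classGroupUri);
            solr.add(doc);      // replaces any existing record with the same id
            solr.commit();      // make the change visible to searches
        }

        // Hypothetical hook: called after an individual has been deleted.
        public void individualDeleted(String uri) throws Exception {
            solr.deleteById(uri);
            solr.commit();
        }
    }

A full rebuild amounts to running the same add step for every individual into a fresh index, which is why, as noted above, it is built on the side and swapped in only when complete.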

...

Note

(Recap: look through all of the steps with Mark Ludwig)