Major options
Data ingest for VIVO is the process of transforming an existing or new source of data into RDF and loading that RDF into VIVO's data store, called a triple store after the three-part statements (subject, predicate, object) it contains. VIVO ships configured to use Jena SDB, an open source triple store implemented on top of one of several off-the-shelf relational databases, in most cases MySQL. At least one VIVO site, Melbourne's Find an Expert, is exploring IBM triple store technology implemented via Oracle, and developers at Cornell and Florida are leveraging the new RDF API in VIVO 1.5 to experiment with other triple stores, including Sesame, Virtuoso, and OWLIM.
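The transform-then-load pattern described above can be sketched in a few lines. This is an illustrative sketch only, not VIVO's actual ingest code: it turns one tabular source record into RDF statements in N-Triples syntax, which any triple store (such as Jena SDB) can load. All URIs below are hypothetical placeholders, not real VIVO namespaces.

```python
# Sketch (not VIVO's actual pipeline): convert a tabular source record
# into N-Triples. The example.org URIs are hypothetical placeholders.

def record_to_ntriples(record, base_uri="http://example.org/individual/"):
    """Emit one N-Triples statement per (field, value) pair in a record."""
    subject = f"<{base_uri}{record['id']}>"
    triples = []
    for field, value in record.items():
        if field == "id":
            continue  # the id names the subject; it is not a property
        predicate = f"<http://example.org/ontology/{field}>"
        triples.append(f'{subject} {predicate} "{value}" .')
    return "\n".join(triples)

faculty = {"id": "n1234", "label": "Jane Smith", "department": "Physics"}
print(record_to_ntriples(faculty))
```

The resulting N-Triples file can then be loaded into the triple store in bulk, which is essentially what the various ingest tools automate at scale.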
...
The Harvester has been extensively documented throughout its lifetime by its original developers at the University of Florida and through the work of VIVO developers and implementers at other institutions. Please see the Data ingest and the Harvester section under Ingesting and maintaining data for full details.
...
The logic and application of semantic mappings are discussed extensively in the recommended book, "Semantic Web for the Working Ontologist", which includes many short examples and a step-by-step introduction to RDF and OWL capabilities.
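As a concrete flavor of what such a semantic mapping looks like, the sketch below emits owl:equivalentClass statements asserting that a local class and a class in a shared ontology describe the same thing, so a reasoner can treat their instances interchangeably. The owl# URI is the standard OWL namespace; the example.org and VIVO core class URIs here are assumptions for illustration.

```python
# Sketch only: expressing a semantic mapping between a local vocabulary
# and a shared ontology as RDF statements. An owl:equivalentClass
# assertion tells a reasoner that instances of one class are also
# instances of the other. Class URIs below are illustrative assumptions.

OWL_EQUIVALENT_CLASS = "<http://www.w3.org/2002/07/owl#equivalentClass>"

def equivalence_triples(mappings):
    """mappings: dict of local class URI -> shared ontology class URI."""
    return [f"<{local}> {OWL_EQUIVALENT_CLASS} <{shared}> ."
            for local, shared in mappings.items()]

local_to_vivo = {
    "http://example.org/ontology/FacultyMember":
        "http://vivoweb.org/ontology/core#FacultyMember",
}
for line in equivalence_triples(local_to_vivo):
    print(line)
```

Loading such mapping statements alongside the data lets queries written against the shared ontology also find individuals typed with the local class.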
...