Integration test opportunities with the switch to TDB - requires startup/shutdown of an external Solr instance via Maven
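One possible way to wire up the Solr startup/shutdown (a sketch assuming the fabric8 docker-maven-plugin and the official Solr image; the plugin coordinates, version, and image tag are assumptions, not current VIVO build config):

```xml
<!-- pom.xml fragment: start Solr before integration tests, stop it after.
     Plugin version and image tag are illustrative assumptions. -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.43.4</version>
  <configuration>
    <images>
      <image>
        <name>solr:8</name>
        <run>
          <ports><port>8983:8983</port></ports>
          <!-- block until Solr answers HTTP before tests run -->
          <wait>
            <http><url>http://localhost:8983/solr/</url></http>
            <time>60000</time>
          </wait>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <execution>
      <id>start-solr</id>
      <phase>pre-integration-test</phase>
      <goals><goal>start</goal></goals>
    </execution>
    <execution>
      <id>stop-solr</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```

Bound this way, `mvn verify` brings Solr up before the failsafe integration-test phase and tears it down afterward, so tests never depend on a manually managed Solr.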
Tickets
Status of In-Review tickets
Notes
Bruce: Leadership group is moving along with the VIVO in a Box project. Hoping the technical group can define what may be easy to do with respect to this idea and what may require more work. Need to assess what may be possible and the time requirements.
Need better definitions around some portions.
Sense that the kinds of features that have been proposed would make things easier, but are quite different from how VIVO currently operates.
Are we thinking about a new different product? Orthogonal to what current group is interested in?
There was doubt that the VIVO project at Texas A&M would pay off, but there were a few early wins around publications, where info was being harvested, and a profile system that helps with promotions. Simple profiles at first.
If there is buy-in at the organization, more resources can be obtained.
Potential users
Administrator: Button push for deploy and installation
Librarian: a place to put the data but don’t want to install the machine
Making VIVO Studio to help librarians extract data and put it in VIVO. Sites have VIVO but it is not easy to put data in. Want to collaborate with non-developers.
In this studio, instances of VIVO, Kafka, Jena, OWL API
Plugin environment
Quite complicated but nice solution to have
Make the process more like DSpace where librarians can install and run it without additional tech support.
William: VIVO Studio is a packaged Eclipse. Does it run VIVO on a port?
Michel: No. When you open the studio, you have the whole environment to install, start, and stop VIVO.
William: could be interesting if VIVO Studio were an environment where librarians or others could experiment with customizations / ontology extensions / data loads, and when satisfied, request a merge to production VIVO.
With the Remote Application Platform (https://www.eclipse.org/rap/), the need for a local installation could be obviated entirely.
Michel: may be too complex to add this; too many things going on.
William: probably better to use RAP if built in from the ground up.
William: any plans for new data sources?
Michel: Thinking about adding Swagger with a REST API for admin tasks, e.g. adding users, plus middleware that converts those requests to SPARQL UPDATEs.
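A sketch of the kind of translation such middleware might perform. The REST path, graph URI, and property names below are illustrative assumptions, not an existing VIVO API:

```sparql
# Hypothetical: POST /api/admin/users {"email": "jdoe@example.edu"}
# might be translated by the middleware into an update like:
PREFIX auth: <http://vitro.mannlib.cornell.edu/ns/vitro/authorization#>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

INSERT DATA {
  # graph name and vocabulary are placeholders for whatever
  # the Vitro user-accounts store actually uses
  GRAPH <http://example.edu/graph/userAccounts> {
    <http://example.edu/account/jdoe> rdf:type auth:UserAccount ;
        auth:emailAddress "jdoe@example.edu" .
  }
}
```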
William: Studio may be more expedient solution for goals of VIVO in a Box. Could be a good starting point.
William: plans for intermediate database? Or no intermediate database?
William: Active Directory / LDAP consumer for Kafka stream would be the biggest bang for the buck.
Discussion of making source-specific graphs a standard part of the VIVO pipeline. Michel and Brian both like to transform data sources to RDF as quickly as possible, then map into the VIVO ontology using SPARQL and coalesce the different source graphs into a combined VIVO graph. Michel likes having separate Fuseki servers hosting certain sources rather than having to combine everything into one store. RDF and SPARQL are great for the low-level combining of data from different sources; their weaknesses show in supporting real-time queries and in producing the perfect or near-perfect coalesced RDF graph that a public VIVO typically requires in order to look good.
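As a concrete illustration of the "transform early, map with SPARQL" approach discussed above, a CONSTRUCT query could lift records from a source-specific graph into the VIVO ontology. The source graph URI and `src:` predicates here are invented for the example; only the VIVO/FOAF terms are real vocabulary:

```sparql
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX src:  <http://example.edu/source/hr#>   # hypothetical source vocabulary

CONSTRUCT {
  ?person a foaf:Person ;
          rdfs:label ?name ;
          vivo:overview ?bio .
}
WHERE {
  GRAPH <http://example.edu/graph/hr> {        # one source-specific graph
    ?person src:fullName ?name .
    OPTIONAL { ?person src:biography ?bio }
  }
}
```

Keeping each source in its own named graph, as discussed, means a query like this can be rerun per source and the results coalesced (or indexed separately) without re-ingesting the raw data.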
Brian: Is there a way we can avoid the need for a perfect coalesced graph and instead index directly from the source graphs and links between them while hiding problem areas where links may not exist?
William: Are there going to be generalized solutions that work across institutions for the first transformation into the source graphs?