Call-in Information

Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)

To join the online meeting:



(star)  Indicating note-taker

  1. Ralph O'Flinn 
  2. Steven McCauley 
  3. Andrew Woods
  4. Brian Lowe (star)
  5. Don Elsborg
  6. Maria Amalia Florez
  7. Rafael Mancera
  8. Jose Mongui
  9. Huda Khan
  10. Jim Wood
  11. Benjamin Gross


  1. VIVO 1.11.0 release... 
    1. External Search

      1. Merge 'sprint-search' (vivo, vitro) into 'develop'?
      2. Related branches
      3. Related tickets
  2. Integrating content and navigating between VIVO installations
  3. VIVO Scholars Task Force updates
  4. Tickets in-review
    1. Needing one more review - any volunteers? 


  1. Status of In-Review tickets

  2. Received

      1. (re-)Raises interest in reconsidering first-time, every-time, tdbconfig design
        1. Should be low-hanging
      2. Where does this stand? What is needed to add more person identifiers to VIVO?
        1. Mike Conlon:  thoughts on where this stands?

  3. Bugs (1.11)


Draft notes in Google-Doc  



  1. VIVO 1.11.0 release 
    1. External search
      1. Ralph:  Think we’re done.  Plan is to finish up merging and get it all cleaned up.  Was a good experience.
      2. Don: tested two things; one was external search.  No issue there.  Also ran the smoke test; behaved well.  Was pretty straightforward.
    2. Blazegraph integration
      1. Andrew:  documentation related to 1.11 would mention Blazegraph, but with caveats that it is not ready for production.
    3. Other “pet” tickets for release?
      1. Switch to disable TPF server seems like the way to go, at least for now.
        1. Andrew:  would also be good to refactor all the TPF code not directly related to VIVO out of the main codebase: seems to be a lot of source code copied and pasted from another project.
      2. Brian: Triple Pattern Fragment server not respecting visibility settings?
  2. Adagio use case
    1. Maria (she goes by Amalia): received information from Huda via Slack.  Would like to have what Huda did with the Connect project: pretty much what they’re looking for.
      1. Example:  Researcher from University A connected to University B because they have products in common.  But current use case is how to connect people who don’t already have products in common.
      2. Want to be able to search and find everyone who is connected in the same ecosystem, and add them to a capability map (even if they have no works in common).
      3. Want to connect their ecosystem to other VIVOs abroad.
      4. We provide services for universities in Latin America.  Provide an instance where each university can see its researchers.  The project is to connect those people within the same research ecosystem.
    2. Huda: Feature requirements?
      1. If searching for “Biology” from VIVO A, the results of people and research are not just from Institution A but also from other universities?
        1. Amalia:  Want access to the index from another VIVO.  Don’t want to duplicate content but harvest [discover?] content in other VIVOs?
          1. Don’t want everything duplicated.  Just pull content in order to expand the capability map.
      2. Huda: 
        1. Use sameAs relationship from UNAVCO to Cornell, traverse sameAs dynamically in the interface to render content from Cornell VIVO.
        2. For the UNAVCO/Cornell Connect example, there are pieces of code that were added to each VIVO instance in order to bridge across different profiles for the same person.
        3. For the search index, we set up whitelists to allow e.g. UNAVCO to search Cornell’s Solr.
        4. Miles Worthington’s VIVO Searchlight might also be relevant.
      3. Rafael:
        1. Do the VIVO instances communicate directly with one another to read one another’s SOLR index or is there some intermediating layer?
        2. Huda:  Yes. In UNAVCO/Cornell example, each had two SOLR URLs configured (local and remote);  could search both directly, because each server had access to the other’s SOLR.
        3. Code still exists but Cornell’s instance does not, so this setup is no longer running live.
      4. Huda: 
        1. Also a component of reading profile JSON from another VIVO.
      5. Amalia:
        1. Interested also in a JSON approach as we are integrating non-VIVO sites using JSON.
      6. Huda:
        1. VIVO community also working now on GraphQL and other new developments that might also address this use case.
      7. Amalia:  Open to collaborating with others working in this space.
      8. Andrew:  Was there discussion of including the UNAVCO/Cornell work in a release of VIVO?
        1. Huda:  JSON part was shot down because it was a direct translation of Java objects and not the kind of JSON people were expecting to see.  Can’t remember if we discussed the search part.
        2. Benjamin says there was a pull request at some point, but it wasn’t acted on.
        3. Huda:  Can possibly do a code walkthrough on a future call.
      9. Search across multiple VIVOs from one single VIVO website.
      10. Capability Map:  also bring back data from multiple VIVO instances?
  3. VIVO Scholars Task Force update
    1. One GraphQL query mapping to a template
    2. Should be easier to maintain with standard web development skills
    3. Use data from any source and query it in a unified fashion
    4. Jim:  In theory it is possible; don’t know at this time how much work would be involved.
    5. Jim:  The underlying architecture now is Solr.  With the Spring Data abstraction layer, there is the possibility to use Elasticsearch.  Can be done at the lower level of the code so as not to impact the GraphQL endpoint.  Not something to launch with.
    6. Andrew:  Is this a different Solr from the main VIVO Solr?
      1. Jim: Yes.
  4. Don:  Amalia, do you use the editing interface of VIVO?
    1. Plan to develop a workflow where the researcher can request that the university upload something to the platform.
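A minimal sketch of the dynamic sameAs bridging Huda described for UNAVCO/Cornell Connect: a local profile carries an owl:sameAs link to a profile in another VIVO, and the interface follows that link to render the remote content. The profile structure and URIs below are invented for illustration; the actual Connect code differs.

```python
# Sketch only: profile data is modeled as a plain dict of statements;
# all URIs here are hypothetical examples, not real endpoints.
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def remote_profiles(profile: dict) -> list:
    """Return the URIs this local profile is bridged to via owl:sameAs."""
    return [
        stmt["object"]
        for stmt in profile.get("statements", [])
        if stmt["predicate"] == SAME_AS
    ]

# A made-up local profile carrying one cross-instance sameAs link:
profile = {
    "uri": "http://vivo.unavco.example.org/individual/n123",
    "statements": [
        {"predicate": SAME_AS,
         "object": "http://vivo.cornell.example.edu/individual/n456"},
    ],
}
linked = remote_profiles(profile)  # the interface would fetch and render these
```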
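The whitelisted-Solr setup Huda described (each instance configured with a local and a remote Solr URL, querying both and combining the hits) can be sketched roughly as below. The endpoint URLs, field names, and merge-by-score behavior are all assumptions for illustration, not the actual UNAVCO/Cornell code; canned documents stand in for the two HTTP responses.

```python
# Sketch of cross-VIVO search result merging, under assumed field names.
from typing import Dict, List

# Illustrative endpoints only; a real deployment would read these from
# configuration and enforce a whitelist on who may query the remote Solr.
SOLR_ENDPOINTS = {
    "local":  "http://vivo-a.example.edu/solr/vivocore/select",
    "remote": "http://vivo-b.example.edu/solr/vivocore/select",
}

def merge_results(responses: Dict[str, List[dict]]) -> List[dict]:
    """Flatten per-instance Solr docs into one list, tagging each doc
    with its source instance and sorting by descending relevance score."""
    merged = []
    for source, docs in responses.items():
        for doc in docs:
            tagged = dict(doc)
            tagged["source"] = source  # lets the UI label remote hits
            merged.append(tagged)
    return sorted(merged, key=lambda d: d.get("score", 0.0), reverse=True)

# Canned Solr-style docs stand in for the two HTTP responses:
responses = {
    "local":  [{"name": "Ana Ruiz", "score": 2.1}],
    "remote": [{"name": "Bo Chen", "score": 3.4}],
}
top = merge_results(responses)[0]  # highest-scoring hit across both instances
```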
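The Scholars Task Force idea of "one GraphQL query mapping to a template" could look roughly like the toy below: a page declares the single query it needs, and the response payload is rendered directly into an HTML template. The query shape, field names, and template are all made up for illustration; the task force's actual schema and stack may differ.

```python
# Toy illustration of one-query-per-template rendering; not the
# Scholars codebase. Field names ("person", "name", "overview") are assumed.
from string import Template

# The page declares the single GraphQL query it needs:
QUERY = """
query Person($id: String!) {
  person(id: $id) { name overview }
}
"""

PAGE = Template("<h1>$name</h1><p>$overview</p>")

def render(graphql_data: dict) -> str:
    """Render a person page straight from a GraphQL response payload."""
    return PAGE.substitute(graphql_data["person"])

# A canned response stands in for the GraphQL endpoint:
data = {"person": {"name": "Ada Lovelace", "overview": "Analyst."}}
html = render(data)
```

A maintainer with standard web skills would only touch the query string and the template, which is the maintainability argument noted above.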


Previous Actions