...

Background

The VIVO 1.6 development time frame is roughly November through June, with the goal of releasing VIVO 1.6 before the VIVO Conference, to be held this year in St. Louis, Missouri, August 14-16.

...

  • NIHVIVO-1126 support for editing the rdf:type of individuals via the primary editing interface, not just the admin editing interface
    • The question has come up whether to remove statements that are no longer appropriate based on the domain of the property with respect to the new rdf:type.
      • We think not – the VIVO rendering interface will continue to show the statements, and existing statements will remain editable
      • If the user removes a statement, the option to add a new statement may not be offered. This is appropriate.
      • If the user changes his or her mind and changes the individual's rdf:type back, the original statements would still be there
  • NIHVIVO-1125 class-specific forms for entering new individuals
    • These can be implemented as needed without new framework functionality, although each custom form requires adding the appropriate editing configuration to the code

Ingest Tools

  • Integrating Mummi Thorisson's Ruby-based CrossRef lookup tool for searching and loading publications into VIVO, available on GitHub along with OAuth work for retrieving information from a VIVO profile in another application (a rough sketch of this kind of lookup follows this list)
  • Improving and documenting the Harvester scoring and matching functions
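
As a rough illustration of the kind of lookup involved – not the Ruby tool itself – the following Java sketch queries the public CrossRef REST API for candidate publication records. The endpoint, parameters, and class names are assumptions for illustration and are not part of the VIVO or Harvester codebase.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class CrossRefLookup {

        /** Query the public CrossRef REST API for works matching a free-text query. */
        public static String searchWorks(String queryText) throws Exception {
            String encoded = URLEncoder.encode(queryText, "UTF-8");
            URL url = new URL("https://api.crossref.org/works?rows=5&query=" + encoded);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");

            // Read the JSON response; a real tool would map each work to a VIVO publication.
            StringBuilder json = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    json.append(line);
                }
            } finally {
                in.close();
            }
            return json.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(searchWorks("research networking linked open data"));
        }
    }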

Internationalization

Also referred to and documented as Multiple Language Support in VIVO

  • Moving text strings from controllers and templates to Java resource bundles so that other languages can be substituted for English (see the sketch after this list)
  • Internationalization for ontology labels – important because much of the text on a VIVO page comes directly from the ontology
  • Improving the VIVO editing interface(s) to support specification of language tags. VIVO 1.5 will respect a user's browser language preference setting and filter labels and data property text strings to only display values matching that language setting whenever versions in multiple languages are available – but there has not yet been a way to specify language tags on text strings.
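
A minimal sketch of the resource bundle approach, assuming hypothetical bundle and key names rather than the actual VIVO bundle layout: strings currently hard-coded in controllers or templates would instead be looked up by key from locale-specific properties files.

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class I18nSketch {
        public static void main(String[] args) {
            // Hypothetical bundles on the classpath: all_en.properties, all_es.properties, ...
            // each containing lines such as:  home_page_title = Welcome to VIVO
            ResourceBundle english = ResourceBundle.getBundle("all", Locale.ENGLISH);
            ResourceBundle spanish = ResourceBundle.getBundle("all", new Locale("es"));

            // The same key resolves to different text depending on the requested locale.
            System.out.println(english.getString("home_page_title"));
            System.out.println(spanish.getString("home_page_title"));
        }
    }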

Provenance

Adding better support for named graphs in the UI (the application already handles named graphs internally and through the Ingest Tools menu).

...

  • Allowing the addition of statements about any named graph, such as its source and date of last update (a rough sketch follows this list)
  • Making this information visible in the UI (e.g., on mousing over any statement) to inform users of the source and date of any statement, at least for data imported from systems of record
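
A rough sketch of the idea using the current Apache Jena API; the graph URI and the choice of Dublin Core terms are illustrative assumptions, not an agreed VIVO design. The named graph's URI is treated as an ordinary subject so that source and last-update statements can be attached to it and later surfaced in the UI.

    import org.apache.jena.datatypes.xsd.XSDDatatype;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.DCTerms;

    public class GraphMetadataSketch {
        public static void main(String[] args) {
            // A model holding statements *about* named graphs, separate from the graphs' contents.
            Model metadata = ModelFactory.createDefaultModel();

            // The named graph's URI is used as an ordinary subject resource (hypothetical URI).
            Resource graph = metadata.createResource("http://vivo.example.edu/graph/hr-feed");

            // Record where the graph came from and when it was last updated.
            graph.addProperty(DCTerms.source, "University HR system of record");
            graph.addProperty(DCTerms.modified,
                    metadata.createTypedLiteral("2013-05-01", XSDDatatype.XSDdate));

            metadata.write(System.out, "TURTLE");
        }
    }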

Visualization

  • improved caching of visualization data – a student project at Indiana University has investigated and traced the issue to multiple concurrent threads trying to create the cached data for the same type of visualization (a generic sketch of one remedy follows this list)
    • This had been reported instead as a scalability problem (e.g., for a Map of Science built from the more than 32,000 publications in the University of Florida VIVO, or at UPenn)
    • Fixing the concurrency issue may solve the problem and will at least make it easier to determine whether further work on caching is necessary – if so, a solution for caching intermediate data vs. the final resulting page or image is likely to make sense
  • HTML5 (phasing out Flash) – not likely to be addressed in 1.6
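
The concurrency issue described above is the classic situation of several requests building the same expensive result at once. A generic memoizer along the following lines (class and method names are hypothetical, not VIVO's actual cache code) lets the first thread compute a given entry while later threads simply wait for that result:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.FutureTask;

    /** Caches one computed value per key; concurrent callers for the same key share a single computation. */
    public class VisualizationCache<K, V> {
        private final ConcurrentMap<K, FutureTask<V>> cache = new ConcurrentHashMap<K, FutureTask<V>>();

        public V get(K key, Callable<V> expensiveComputation)
                throws InterruptedException, ExecutionException {
            FutureTask<V> task = cache.get(key);
            if (task == null) {
                FutureTask<V> newTask = new FutureTask<V>(expensiveComputation);
                task = cache.putIfAbsent(key, newTask);  // only one thread wins the race
                if (task == null) {
                    task = newTask;
                    task.run();  // the winner computes; everyone else blocks on get()
                }
            }
            return task.get();
        }
    }

A Map of Science request would then ask the cache for a key such as "map-of-science/all-publications" and supply the expensive data-building step as the Callable, so at most one thread regenerates that data at a time.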

Data Query, Export and Reporting

  • Limiting SPARQL queries by named graph, either via inclusion or exclusion (a rough sketch follows this list).
    • This is reportedly supported by the Virtuoso triple store. It would help assure that private or semi-private data in a VIVO is not exposed via a SPARQL endpoint
    • If this functionality is dependent on the underlying triple store chosen for VIVO, it's not something that can easily be managed in VIVO
  • There are other possible routes for extracting data from VIVO including linked data requests – if private data is included in a VIVO, all query and export paths would also have to be locked down. Linked data requests respect the visibility level settings set on properties to govern public display, but separate more restrictive controls may be required for linked data.
  • Being able to get the linked data for a page as N3, Turtle, or JSON-LD, not just RDF/XML
  • Enhancing the internal VIVO SPARQL interface to support add and delete functions, not just select and construct queries – see "Web Service for the RDF API" above
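
To make the inclusion/exclusion idea concrete, the sketch below uses the current Apache Jena API with hypothetical graph URIs. Only the graphs named in the FROM clauses are queried, so excluding a private graph amounts to leaving it out of the list; how FROM and FROM NAMED interact with a store's default union graph does vary by triple store, which is the dependency noted above.

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSetFormatter;

    public class GraphScopedQuery {
        public static void main(String[] args) {
            Dataset dataset = DatasetFactory.create();  // stand-in for the VIVO triple store

            // Only the graphs listed in FROM are queried; a private graph such as
            // <http://vivo.example.edu/graph/private-hr> is simply never listed.
            String sparql =
                    "SELECT ?person ?label\n"
                    + "FROM <http://vivo.example.edu/graph/public-people>\n"
                    + "FROM <http://vivo.example.edu/graph/public-publications>\n"
                    + "WHERE { ?person <http://www.w3.org/2000/01/rdf-schema#label> ?label }";

            QueryExecution qe = QueryExecutionFactory.create(sparql, dataset);
            try {
                ResultSetFormatter.out(System.out, qe.execSelect());
            } finally {
                qe.close();
            }
        }
    }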

CV generation

  • Improving the execution speed and formatting of the existing Digital Vita CV tool as implemented in the UF VIVO, perhaps changing it to email the generated CV asynchronously as a rich text or PDF document (a sketch of the asynchronous approach follows this list)
    • The latter is the more promising approach – if people don't have to wait but can have the document emailed to them, then the perception of slow performance is largely moot
    • That said, it may be possible to improve the queries. Florida and Stony Brook are known to be using this functionality and should be involved in prioritizing any changes
  • Developing a UI for selecting publications or grants and adding required narrative elements, based on the specification developed in the Digital Vita VIVO mini-grant
    • This has been on "the list" for two years, but makes most sense as a different application
    • Many people have voiced the opinion that getting the data out in a form that is editable in Word or other common document editing tools is much more important than managing the selection or ordering of content from within VIVO
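
A minimal sketch of the asynchronous approach; the class and method names are hypothetical stand-ins for the Digital Vita generation and mail code, not the existing implementation. The request handler queues the work and returns immediately, and a background worker emails the finished document.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AsyncCvGenerator {
        // A small pool so CV generation never ties up request-handling threads.
        private final ExecutorService worker = Executors.newFixedThreadPool(2);

        /** Called from the request handler; returns immediately. */
        public void requestCv(final String personUri, final String emailAddress) {
            worker.submit(new Runnable() {
                public void run() {
                    byte[] document = generateCvDocument(personUri);  // the slow part
                    emailAttachment(emailAddress, "Your CV from VIVO", document);
                }
            });
        }

        // Stubs standing in for the real document generation and mail-sending code.
        private byte[] generateCvDocument(String personUri) { return new byte[0]; }
        private void emailAttachment(String to, String subject, byte[] attachment) { }
    }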

Search indexing improvements

  • Provide a way to re-index by graph or for a list of URIs, to allow partial re-indexing following data ingest as opposed to requiring a complete re-index
    • The same applies for re-inferencing, which is typically more time consuming
  • Implementation of additional facets on a per-classgroup basis – appropriate facets beyond rdf:type, varying based on the nature of the properties typically present in search results of a given type such as people, organizations, publications, research resources, or events (an illustrative query follows this list).
    • Huda Khan has been implementing the ability to configure additional search facets for the Datastar project; some improvements may make it into 1.6
  • An improved configuration tool for specifying parameters to VIVO's search indexing and query parsing
    • Question – are any of these run-time parameters or are they all parameters that must be baked in at build time, requiring re-generation of the index?
    • Relates to another suggestion for a concerted effort to explore what search improvements Apache Solr can support and recommendations on which to consider implementing in what order
    • Changes are not expected for 1.6 – more requirements are needed before this work can be prioritized or scoped.
  • Improved default boosting parameters for people, organizations, and other common priority items
    • Here the question immediately becomes "improved according to what criteria"
    • This is a prime area for a special interest group of librarians or other content experts willing to document current settings and recommend improvements, including documenting use cases and developing sample data that could be part of the Solr unit tests listed above under "Installation and Testing"
  • Improving the efficiency and hence speed of search indexing in general – we have no indications at the moment that search indexing is a bottleneck. It can take several hours to completely reindex a major VIVO such as Florida or Cornell, but the ability to specify a single named graph or list of URIs to index would address most of the complaints about the time required for search indexing after adding new data via the Harvester, which does not trigger VIVO's search indexing or re-inferencing listeners
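
As an illustration of the kind of per-classgroup facet and boost configuration being discussed, the sketch below uses the SolrJ client; the field names, boost values, and core URL are assumptions for illustration, not VIVO's actual Solr schema or settings.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FacetedSearchSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr =
                    new HttpSolrClient.Builder("http://localhost:8983/solr/vivocore").build();

            SolrQuery query = new SolrQuery("cancer");
            query.set("defType", "edismax");
            // Boost name and type-label fields more heavily than the full-text field.
            query.set("qf", "nameText^3.0 typeLabel^1.5 allText^1.0");
            // Facets that might suit a "people" classgroup.
            query.addFacetField("department", "researchArea");
            query.setFacetMinCount(1);

            QueryResponse response = solr.query(query);
            System.out.println(response.getResults().getNumFound() + " matching documents");
            solr.close();
        }
    }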

Modularity

Jim Blake did significant work during the 1.5 development cycle learning about the OSGi framework and exploring how it could be applied to VIVO, as documented at Modularity/extension prep - development component for v1.5.

...

There are no plans to implement additional modularity inside VIVO for 1.6, although the Web service for the RDF API work element would enable other applications to write as well as read VIVO data and support a more modular approach to adding functionality to VIVO.

Tools outside VIVO (not linked to the 1.6 release)

Weill's VIVO Dashboard

Paul Albert has been working with a summer intern and others at Weill Cornell to develop a Drupal-based tool for visualizing semantic data. The project provides a number of candidate visualizations and reports that will likely be of interest to other VIVO adopters, and there may be enhancements to VIVO that would make this kind of reporting dashboard easier to implement.

...