Calls are held every Thursday at 1 pm Eastern Time (currently GMT-4, as the U.S. is on daylight saving time) – convert to your time zone at http://www.thetimezoneconverter.com
View and edit this page permanently at https://wiki.duraspace.org/x/pd0QAg, or use the temporary Google Doc for collaborative note taking during the call.
VIVO 2013 Conference
- List of sessions and posters posted here (PDF)
- Chris Barnes is organizing a BOF for a new VIVO Apps & Tools Working Group – stay tuned for the day and time.
- Register before July 19 for best rate and conference hotel availability
- Conference hotel is Hilton St. Louis at the Ballpark Hotel
- Wednesday night baseball game (Cardinals vs Pirates) and cold beer!
- Johns Hopkins
- Stony Brook
- Texas A&M
- Virginia Tech
- Weill Cornell
Notable list traffic
Returning RDF (Tammy)
- I'm reading the wiki at https://wiki.duraspace.org/display/VIVO/Getting+VIVO+Attributes+from+RDF+Using+R and it says "If you issue this URI to an appropriate function, a chunk of RDF is returned." Does anyone know the method that is called once that .rdf link is clicked? (I understand the purpose of that particular wiki page is to introduce a "simpler way to do this," but I'm also interested in how VIVO is handling it right now.)
- The control is in IndividualController, which uses IndividualRequestAnalyzer to redirect to the URL you cite. The actual RDF is assembled by IndividualRdfAssembler.
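As a rough sketch of what happens before the controller classes above take over: a linked-data client requests the individual's URI with an RDF media type in the Accept header, and the server redirects to the RDF document. The individual URI below is hypothetical, and the exact redirect behavior is an assumption based on the discussion above, not a verified trace of the VIVO code path.

```python
from urllib.request import Request

def rdf_request(individual_uri):
    """Build a linked-data request for a VIVO individual.

    Asking for application/rdf+xml should lead the server to redirect
    to the RDF representation rather than returning the HTML profile page.
    """
    return Request(individual_uri, headers={"Accept": "application/rdf+xml"})

# Hypothetical individual URI, for illustration only:
req = rdf_request("http://vivo.example.edu/individual/n1234")
print(req.get_header("Accept"))
# To actually perform the fetch:
#   urllib.request.urlopen(req).read()
```

The request is built but not sent here, so the sketch works without a live VIVO instance; swap in a real individual URI and call urlopen to see the returned RDF.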
"An error occurred on your VIVO site at ..." (Giuseppe)
I've started receiving a number of these e-mails from our VIVO installation and was wondering how to work out what's causing them.
Restricting user access to data from named graphs (Eliza)
Where we left things last week:
- Weill wants to allow people to add publications, but ideally not to edit ones that have come from the separate publications process pulling from Scopus and PubMed.
- Our mental model of the application is still based on triples while in some places we are starting to take advantage of named graphs
- The RDF Service just answers SPARQL queries, so you could put in variables for the graphs -- but at least with SDB, the queries become much, much slower than when you don't ask for graphs. If this feature were a high priority to implement, the recommendation would be to query only at the moment you need very specific information about whether a given triple is editable, rather than trying to revamp the application to always carry a graph identifier with each triple.
- There may be ways to meet our needs in other, more limited, prescribed ways -- for example, the way page management can now be limited to data in a particular graph. The application configuration ontology could also potentially be used to rule out editing of a particular class or property.
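The point lookup suggested above -- asking only at edit time which graph asserts a given triple -- can be sketched as a small SPARQL query builder. The URIs below are hypothetical, and this is only one way to phrase the query; it is not taken from the VIVO codebase.

```python
def graph_membership_query(subject, predicate, obj):
    """Return a SPARQL query listing every named graph asserting the triple.

    This is the narrow question needed to decide whether a triple is
    editable (e.g. whether it lives in the ingest graph fed from Scopus
    and PubMed), avoiding a graph variable on every application query.
    """
    return (
        "SELECT ?g WHERE {\n"
        f"  GRAPH ?g {{ <{subject}> <{predicate}> <{obj}> }}\n"
        "}"
    )

# Hypothetical URIs for illustration:
q = graph_membership_query(
    "http://vivo.example.edu/individual/n1234",
    "http://vivoweb.org/ontology/core#relates",
    "http://vivo.example.edu/individual/n5678",
)
print(q)
```

The generated query would then be run against the RDF Service or SPARQL endpoint only for the triple being edited, which sidesteps the SDB slowdown noted above.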
Latest version of the Harvester (Andy)
Stephen updated links on the Sourceforge site to point to the current Harvester repository on GitHub
URL structure and human readable URLs (Mark)
- I am interested in getting some feedback on approaches to producing human readable URLs that mirror the information hierarchy of a VIVO site. The nature of VIVO tends toward a very flat URL structure -- just a giant bag of URLs at the level of http://sitename/individual/id. With Griffith Research Hub, we provide a degree of structure for the site with our search engine -- linking to a category of results as part of the core navigation, e.g. http://research-hub.griffith.edu.au/researchers -- but subsequent links follow the standard URI pattern. To get around this we provide breadcrumbs on the page, but this contextual information is not part of the URL and is therefore lost when users copy and paste URLs into emails etc.
- The goal for both usability and search engine optimisation would be to replace a standard URL like:
- We would still keep the original URL as the URI of the subject, but standard site links would point to the human readable version. We have systems in place that would ensure these human readable URLs do not collide due to similar names (based on our email address naming rules). An immediate issue with this strategy is that we would have essentially duplicate content, which is strongly penalised by Google. But we could handle this via either a 301 redirect or a rel=canonical strategy (see http://moz.com/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps)
- The bigger problem is how to implement the URL in the first place. Any suggestions on what you think the most generic best practice approach would be? Are there any concerns with the basic goal? Any thoughts would be appreciated.
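One generic shape for the 301-redirect strategy mentioned above is a slug table in front of the individual URIs. The slugs, hostnames, and lookup mechanism here are all hypothetical -- a real deployment would derive slugs from the institution's naming rules and serve the redirect from the web tier (e.g. Apache mod_rewrite or a servlet filter) rather than application code.

```python
# Hypothetical mapping from human readable paths to individual URIs:
SLUG_TO_INDIVIDUAL = {
    "researchers/jane-smith": "http://research-hub.example.edu/individual/n1234",
}

def resolve(path):
    """Return (status, location) for a human readable path.

    A 301 tells search engines the canonical resource lives at the
    individual URI; alternatively, serve the page at the readable URL
    and emit <link rel="canonical"> pointing at the original.
    """
    target = SLUG_TO_INDIVIDUAL.get(path.strip("/"))
    if target is None:
        return (404, None)
    return (301, target)

print(resolve("researchers/jane-smith"))
```

Either direction works for deduplication; the choice is whether the readable URL or the original individual URI is the one search engines index as canonical.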
A thought from Jim
See the vivo-dev-all archive and vivo-imp-issues archive for complete email threads
Date: Every Thursday, no end date
Time: 1:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 641 825 891
To join the online meeting
To join the audio conference only