Deprecated. This material represents early efforts and may be of interest to historians. It does not describe current VIVO efforts.



Business model


Basic elements to determine operational (not development) cost
  • Amount of data from the institution
    • this affects the processing time that must be allocated, as well as the increment to the size of the index (a rough cost sketch follows this list)
  • Frequency of update (again based on the processing and oversight/validation required for indexing)
  • Support
    • providing feedback on bad data, especially to people new to ontologies and RDF
    • addressing performance issues at the distributed data sources (especially if harvesting degrades the function of their production VIVO app)
    • there will likely need to be a startup fee that includes some number of support hours, with further support redirected to a list of consultants or companies willing to provide help
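A rough sketch of how the first two elements (data volume and update frequency), plus included support hours, might roll up into an annual operating figure. Every rate and constant below is a hypothetical placeholder, not a proposed price:

    # Back-of-envelope operational cost model for one institution.
    # HOURLY_RATE and RECORDS_PER_HOUR are made-up placeholders.
    HOURLY_RATE = 60.0          # assumed staff/compute cost per hour
    RECORDS_PER_HOUR = 50_000   # assumed indexing throughput

    def annual_cost(num_records: int, updates_per_year: int,
                    support_hours: float) -> float:
        """Indexing time for each update cycle plus included support."""
        hours_per_update = num_records / RECORDS_PER_HOUR
        indexing = hours_per_update * updates_per_year * HOURLY_RATE
        return indexing + support_hours * HOURLY_RATE

    # e.g., 200,000 records, monthly updates, 20 support hours/year
    print(f"${annual_cost(200_000, 12, 20):,.2f}")   # -> $4,080.00

Even this crude model makes the trade-off visible: doubling the update frequency doubles the indexing line item, which is why frequency appears as its own pricing element above.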
Relationship of services to sponsorship

It will be much cleaner to separate sponsorship from participation in production services.

  • An institution sponsors VIVO to support the effort, as well as to influence and hasten its development
  • An institution signs up for a service if it wants its data to be included in that service
  • Question: what will our policy be around in-kind development/testing/requirements gathering?
    • Simplest answer: the institutions contributing in-kind will have the most influence, but not exclusive influence or veto power. They should contribute in proportion to the importance of search to them, whether in general or for specific features
    • Question: how does the Kuali community handle this?  From what Penn representatives said, I believe they have two distinct aspects to sponsorship:
      • join the Kuali Foundation, investing in all the shared infrastructure and paying for the legal entity
        • in doing so, participate in the foundation governance -- e.g., on the Kuali Rice board
      • then contribute to project costs for the product you want to use
        • e.g., participate on the governance, technical, and functional councils of the OLE project
        • as well as on the strategic governing board

What questions does this leave unanswered?

  • If an institution has no interest in sponsoring VIVO as software – say, if they run Profiles or another tool – but they want to sponsor development and ongoing improvement of the VIVO search tools, do we have a special category of sponsorship for that?
    • Answer: we already offer the standard DuraSpace bronze ($2,500), silver ($5,000), and gold ($10,000) levels for VIVO, but these do not include participation or voting rights on the VIVO Sponsors Committee (see prospectus)
    • Follow-up question: how will non-voting sponsors affect direction/priorities for any aspect of VIVO?
  • Open question: will there be a forum for sponsors to address priorities for search?
  • Open question: will VIVO search be governed differently than VIVO?
Areas that may get messy
  • Founding sponsors may expect not to be nickel-and-dimed
  • Balancing in-kind support, sponsorship, and service fees
  • VIVO multi-institutional search is not entirely separable from internal search at one VIVO institution 
    • There were pilot efforts to extend local search to the 8-institution index in spring 2011
    • This will likely come up again on wish lists
    • There may be interest in doing this from other platforms
      • Is the VIVO searchlight relevant here?
      • Would the OpenSocial platform-neutral approach be relevant?
Limits to what we can charge
  • The code is all open – universities may prefer to run their own instance (in which case we would ask them to sponsor to help keep the code updated)
  • Service providers may decide they could host competitive search services, with their own value added in tweaks to relevance ranking, etc.
  • As the price goes up, a cost-benefit analysis will steer people to other options, including custom Google Search Appliances, etc.

Technical Risks


Indexing is too slow

This could be a problem for two reasons:

  • Indexing consumes too many resources and is costly to support
  • Updates cannot happen with sufficient frequency to satisfy local requirements or to incentivize updates
    • People have more confidence in a system they feel they control, because they can see their corrections happen (one possible mitigation is sketched below)
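One common mitigation (not a committed design for this project) is incremental reindexing, so a correction shows up at the next harvest rather than the next full rebuild. A minimal sketch, assuming each institution can expose some kind of change feed; fetch_changed_uris and reindex are hypothetical helpers:

    from datetime import datetime, timezone

    last_run = datetime(2011, 6, 1, tzinfo=timezone.utc)  # harvest watermark

    def incremental_update(fetch_changed_uris, reindex):
        """Reindex only the entities modified since the last harvest."""
        global last_run
        started = datetime.now(timezone.utc)
        for uri in fetch_changed_uris(since=last_run):
            reindex(uri)       # re-query and re-index just this entity
        last_run = started     # advance the watermark only after success

The shorter this loop's turnaround, the sooner a researcher sees a correction reflected in search, which bears directly on the confidence point above.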

What will contribute to slowness?

  • Indexing more detail, especially detail that is more remotely connected to the individual entity being indexed
    • e.g., if you want to get the names of all co-authors for a researcher, not just the titles of their papers
  • Doing more to align or disambiguate data at indexing time
    • e.g., including in the index a list of possible alternative organizations, people, journals, etc. to facilitate corrections during use
  • Doing more queries, computation or analysis to improve relevance ranking
    • e.g., boosting relevance based on the number of papers or grants (see the sketch after this list)
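To make the first and last items concrete, here is a minimal sketch. The SPARQL follows the general shape of the VIVO 1.x core ontology, but the exact predicates are illustrative and should be checked against a release; the boost weights are made up:

    import math

    # Co-author names require a multi-hop traversal per indexed person,
    # not a single lookup -- this is where the extra indexing time goes.
    COAUTHOR_QUERY = """
    PREFIX vivo: <http://vivoweb.org/ontology/core#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT DISTINCT ?coauthorName WHERE {
      ?researcher vivo:authorInAuthorship ?a1 .
      ?a1 vivo:linkedInformationResource ?paper .
      ?paper vivo:informationResourceInAuthorship ?a2 .
      ?a2 vivo:linkedAuthor ?coauthor .
      ?coauthor rdfs:label ?coauthorName .
    }
    """

    def productivity_boost(num_papers: int, num_grants: int) -> float:
        """Hypothetical relevance boost: log damping keeps a very
        prolific lab from drowning out everyone else."""
        return 1.0 + 0.5 * math.log1p(num_papers) + 0.3 * math.log1p(num_grants)

    print(productivity_boost(40, 5))   # ~3.39

Both pieces run per entity at indexing time, so their cost scales with the size of each institution's data -- exactly the slowness risk described above.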
Indexing interferes with performance on the site being indexed

This may become the countervailing pressure from distributed sites: the more detail we index and the more frequently we harvest, the heavier the query load on each institution's production VIVO.

Relevance ranking proves intractably messy

Some people will be hard to please

 
