  1. Status of current testing
  2. Create graphs and summaries of completed tests
  3. Next steps

    1. Finalize remaining tests?
    2. Investigate other features: versioning? batch-ops?
    3. Make a call to the community?


  • Status of current testing
    • Nick will update the test results. The 2-month test appears to have failed; he will run the tests again on new equipment.
    • He will run the RDF serialization improvements from Aaron Coburn on his new hardware.
    • Yinlin: 100K items, 230 MB files, 20 Mbs per client. The run takes about one week.
  • There was general agreement on the value of aggregating and summarizing the results of the tests we have run so far.
  • There was also general agreement that it would be good to summarize relative improvements between runs of a given test, with graphs in addition to any other details/observations (a sketch of one approach follows this list).
    • Factors that would be good to include in the summary:
      • Hardware specifics
      • Total execution time
      • Average response time over the course of the execution
      • Fedora version
      • Database type/specs
      • Client count
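
As a concrete starting point, a minimal sketch (in Python) of how per-run results could be aggregated into such summaries and graphs. The results.csv layout and its column names (test_name, run_id, fedora_version, total_seconds, and so on, mirroring the factors above) are assumptions rather than an agreed format.

    # Sketch: aggregate per-run metadata and plot relative improvements
    # between runs of a given test. CSV columns are assumed, not agreed.
    import csv
    from collections import defaultdict

    import matplotlib.pyplot as plt

    def load_runs(path="results.csv"):
        """Read one CSV row per test run, grouped by test name."""
        runs = defaultdict(list)
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                runs[row["test_name"]].append(row)
        return runs

    def plot_improvements(runs, test_name):
        """Bar chart of total execution time across runs of one test."""
        rows = sorted(runs[test_name], key=lambda r: r["run_id"])
        labels = [f'{r["run_id"]} (Fedora {r["fedora_version"]})' for r in rows]
        times = [float(r["total_seconds"]) for r in rows]
        plt.figure()
        plt.bar(labels, times)
        plt.ylabel("total execution time (s)")
        plt.title(test_name)
        plt.tight_layout()
        plt.savefig(f"{test_name}-summary.png")
        plt.close()

    if __name__ == "__main__":
        runs = load_runs()
        for name in runs:
            plot_improvements(runs, name)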

  • Colin suggested it might be helpful to have a basic test to establish baseline conditions in the environment, to account for variations in network performance characteristics, disk performance, etc. (a baselining sketch appears after this list).
  • The team sees promise in expending effort to develop an automated system for performance tests (see the runner sketch below) that would:
    • enable us to perform tests on a consistent set of hardware and network resources
    • automatically run the test suite against new tags / branches / forked repos?
    • focus on time-limited tests with known inputs and an expected execution time frame.
  • Aaron Coburn would like a test for understanding how memory is affected by specific kinds of serializations (Turtle and N-Triples) of RDF Sources and by differing degrees of concurrency (see the serialization sketch below).
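
For Colin's baselining idea, a rough sketch of the kind of measurements such a script might record alongside each test run; the write size and the endpoint URL are placeholders, and real runs would likely also use dedicated tools (dd, iperf, etc.).

    # Sketch: record a disk-throughput and HTTP round-trip baseline so test
    # results can be interpreted against the environment they ran on.
    import json
    import os
    import tempfile
    import time
    import urllib.request

    def disk_write_mb_per_s(size_mb=256):
        """Write size_mb of zeros to a temp file; report MB/s."""
        chunk = b"\0" * (1024 * 1024)
        with tempfile.NamedTemporaryFile(delete=False) as fh:
            start = time.monotonic()
            for _ in range(size_mb):
                fh.write(chunk)
            fh.flush()
            os.fsync(fh.fileno())
            elapsed = time.monotonic() - start
        os.unlink(fh.name)
        return size_mb / elapsed

    def http_round_trip_ms(url, tries=5):
        """Average latency (ms) of small GETs against the test endpoint."""
        total = 0.0
        for _ in range(tries):
            start = time.monotonic()
            urllib.request.urlopen(url).read()
            total += time.monotonic() - start
        return total / tries * 1000

    if __name__ == "__main__":
        baseline = {
            "disk_mb_per_s": disk_write_mb_per_s(),
            # placeholder URL; point at the Fedora host under test
            "http_rtt_ms": http_round_trip_ms("http://localhost:8080/rest/"),
        }
        print(json.dumps(baseline, indent=2))  # attach to the run's results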
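
For the automated system, a sketch of the minimal shape a runner could take: check out a given tag or branch, build it, and run a time-boxed workload, failing when a known input exceeds its expected execution time frame. The repository URL, the Maven build step, the run-perf-test.sh driver, and the time budgets are all illustrative assumptions.

    # Sketch: time-limited automated run against a given tag/branch.
    import subprocess
    import sys
    import time

    REPO = "https://github.com/fcrepo/fcrepo.git"  # placeholder URL
    TIME_BUDGET_S = {"ingest-100k": 3600}  # expected time frames (assumed)

    def run(cmd, cwd=None):
        subprocess.run(cmd, cwd=cwd, check=True)

    def test_ref(ref):
        run(["git", "clone", "--depth", "1", "--branch", ref, REPO, "build"])
        run(["mvn", "-q", "package", "-DskipTests"], cwd="build")
        for test, budget in TIME_BUDGET_S.items():
            start = time.monotonic()
            run(["./run-perf-test.sh", test], cwd="build")  # hypothetical driver
            elapsed = time.monotonic() - start
            if elapsed > budget:
                sys.exit(f"{test} on {ref}: {elapsed:.0f}s exceeds {budget}s")

    if __name__ == "__main__":
        test_ref(sys.argv[1] if len(sys.argv) > 1 else "main")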
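
For Aaron's serialization/memory question, a sketch of the shape such a test might take; Python with rdflib and tracemalloc stands in here for the Java stack the real test would presumably target, and the graph size and thread counts are placeholders.

    # Sketch: peak memory while serializing an RDF graph as Turtle vs.
    # N-Triples under varying concurrency. tracemalloc only sees Python
    # allocations, so this illustrates the test shape, not Fedora's numbers.
    import tracemalloc
    from concurrent.futures import ThreadPoolExecutor

    from rdflib import Graph, Literal, URIRef

    def make_graph(n_triples=10_000):
        g = Graph()
        for i in range(n_triples):
            g.add((URIRef(f"http://example.org/s/{i}"),
                   URIRef("http://example.org/p"),
                   Literal(f"value {i}")))
        return g

    def peak_mb(fmt, n_clients):
        """Peak MB while n_clients serialize the same graph in fmt."""
        g = make_graph()
        tracemalloc.start()
        with ThreadPoolExecutor(max_workers=n_clients) as pool:
            list(pool.map(lambda _: g.serialize(format=fmt), range(n_clients)))
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return peak / (1024 * 1024)

    if __name__ == "__main__":
        for fmt in ("turtle", "nt"):
            for n in (1, 4, 16):
                print(f"{fmt:>6} x{n:>2} clients: {peak_mb(fmt, n):8.1f} MB peak")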


  • Colin will look into putting together a script for baselining hardware and network characteristics, to be factored into each test run.
  • Nick: add the log files to the performance test results on the Test Plan page ( )
  • Aaron will create a summary of what he would like to see out of a test.
  • Danny: Summarize the existing test data.



  • Comment on "enable us to perform tests on a consistent set of hardware and network resources": Oxford has proved willing to provide this in the past, via a dedicated VM cluster.
    • Indeed, which was very helpful. They have since scaled back such services. Getting sponsorship for this sort of testing infrastructure from another community stakeholder is a great idea.