Update on graphs and summaries of completed tests
https://wiki.duraspace.org/display/FF/2016-08-15+Performance+-+Scale+meeting#id-2016-08-15Performance-Scalemeeting-CurrentSummaries
Getting community sponsorship for testing infrastructure
Establishing environment baselines
CG: Modeling the time it takes to run tests based on benchmarks of the operating system. First pass: https://gist.github.com/grosscol/f997f2b3cef80edb640266a03f829a77. Measures how long it takes to write to disk, and how long the same operation takes through Fedora. The model yields a weight: faster disks should predict faster performance, though whether the relationship is linear or logarithmic is an open question. The goal is to normalize testing across conditions. Currently working on I/O with JUnit tests; tests for memory, network, and CPU are possible later.
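A minimal sketch of the kind of raw disk-write timing such a baseline might capture (the 100 MB payload, file name, and single-shot timing are illustrative assumptions, not taken from the gist):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class DiskWriteBaseline {
        public static void main(String[] args) throws IOException {
            ByteBuffer payload = ByteBuffer.allocate(100 * 1024 * 1024); // 100 MB, illustrative size
            Path tmp = Files.createTempFile("io-baseline", ".bin");

            long start = System.nanoTime();
            try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
                while (payload.hasRemaining()) {
                    ch.write(payload);
                }
                ch.force(true); // flush to disk so the OS page cache does not hide the cost
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("Wrote 100 MB in %d ms%n", elapsedMs);
            Files.deleteIfExists(tmp);
        }
    }

Comparing this number against the time for the equivalent write through Fedora is what gives the model its weight for a given machine.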
AW: This topic has come up a few times. Tests so far have been capacity tests (small files, large files, empty Fedora resources, Fedora resources with RDF). Another suite could focus on demonstrating particular aspects of performance: short enough to run quickly, but long enough to characterize behavior. One approach is to slice off classes or methods from the existing integration tests for the REST API (see the sketch below).
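As a hedged sketch of what such a sliced-off REST API timing probe could look like (the localhost URL is an assumption to adjust for the instance under test; Fedora 4 creates a child resource on POST to a container):

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestTimingProbe {
        public static void main(String[] args) throws IOException {
            // Assumed local Fedora endpoint; POST to a container creates a child resource.
            URL url = new URL("http://localhost:8080/rest");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setFixedLengthStreamingMode(0); // empty request body

            long start = System.nanoTime();
            conn.getOutputStream().close();      // connect and send the (empty) request
            int status = conn.getResponseCode(); // blocks until the response arrives
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("HTTP %d in %d ms (Location: %s)%n",
                    status, elapsedMs, conn.getHeaderField("Location"));
            conn.disconnect();
        }
    }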
AW: Logic at the ModeShape layer has been the biggest determinant of performance in previous testing, so lower-level environment characteristics may not be visible in test results.
CG: The environment effect may be small, say 10%, but modeling it shows how to account for it when interpreting test results, and it would flag a bigger problem if the effect turned out to be, say, 20%.
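To make the weighting concrete (numbers invented for illustration): if a test run takes 120 s on hardware whose disk benchmark comes in 10% below the baseline, the normalized time would be roughly 120 / 1.10 ≈ 109 s; an observed slowdown well beyond that band would point at the software layers rather than the environment.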
CG: Has tried the HTTP API integration tests. Will try the JMeter performance tests next; may need help limiting the runs (example invocation below).
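One way to keep JMeter runs bounded from the command line (the plan file and property names here are placeholders; this assumes the test plan reads its thread and loop counts via JMeter's __P() property function, e.g. ${__P(loopCount,10)}):

    jmeter -n -t fedora-performance.jmx -l results.jtl -JnumThreads=5 -JloopCount=100

Here -n runs JMeter without the GUI, and each -J flag sets a property the plan can reference, so the same plan can be capped for a quick characterization run or opened up for a full one.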