This page is intended to be moved into the FAQ documentation once it has been more completely populated.
The use cases and expectations that fall under the category of Fedora "performance and scalability" are diverse and evolving. It is therefore important to establish and maintain a set of reproducible benchmarks that can be run against various configurations of Fedora releases.
Team
- Andrew Woods (DuraSpace)
- Yinlin Chen (Virginia Tech)
- Nick Ruest (York University)
- Colin Gross (University of Michigan)
- Danny Bernstein (DuraSpace)
- Trey Pendragon (Princeton University)
- Longshou Situ (University of California, San Diego)
- Kevin Ford (Art Institute of Chicago)
Project Plans
- Performance and Scalability Test Plans (updated for Fcrepo 4.7.x and later versions)
Benchmark Categories
Resource Scale
- Large datastreams (i.e. binaries, non-RDFSources)
- Multi-TB datasets
- Large number of objects (i.e. containers, RDFSources)
- Many members
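The "large number of objects" and "many members" cases above are commonly exercised by spreading resources across a container hierarchy rather than placing them all in one flat container, so that no single container accumulates an unbounded member list. A minimal sketch of such a path scheme (the function name, fanout, and path layout are illustrative assumptions, not taken from the linked test plans):

```python
# Illustrative sketch: map a sequential index to a fixed-depth,
# base-256 hierarchy of container paths ("/xx/yy/obj-N"), so that
# each intermediate container holds a bounded number of members.
def container_path(i, fanout=256, depth=2):
    """Return a hierarchical path for resource index `i`."""
    parts = []
    n = i
    for _ in range(depth):
        parts.append(f"{n % fanout:02x}")  # one hex-encoded path segment
        n //= fanout
    return "/" + "/".join(reversed(parts)) + f"/obj-{i}"

# Example: the first resource lands at "/00/00/obj-0",
# index 255 at "/00/ff/obj-255".
paths = [container_path(i) for i in range(1000)]
```

With a fanout of 256 and depth 2, this layout can hold 65,536 leaf containers, which keeps membership triples per container small even for multi-million-object runs.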
Performance Characteristics
- Ingest rates
- RDFSource property update rates
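As a rough illustration of how the ingest and update rates above are typically derived, the sketch below reduces a list of per-request timings to throughput and latency summaries. The function name and sample data are hypothetical and not part of any test suite linked on this page:

```python
# Hypothetical sketch: summarize per-request timings (from sequential
# ingest or property-update requests) into the figures a benchmark
# run would report: throughput, mean latency, and 95th-percentile latency.
from statistics import mean

def summarize(latencies_s):
    """Given per-request latencies in seconds, return
    (resources per second, mean latency, p95 latency)."""
    total = sum(latencies_s)
    rate = len(latencies_s) / total if total else 0.0
    ordered = sorted(latencies_s)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return rate, mean(latencies_s), p95

# Example: ten ingests at 0.2 s each -> 5 resources/sec
rate, avg, p95 = summarize([0.2] * 10)
```

Real runs would feed this from wall-clock timings around HTTP requests (e.g. JMeter output); the point is only that reported "rates" are request counts divided by elapsed time, with percentiles capturing tail latency.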
Prior Results
- Performance Testing
- Performance Summary
- Assessment Plan - Performance
- Test 1 Results Summary
- Test 2 Results Summary
- Test 3 Results Summary
- Test 4 Results Summary
- Test 5 Results Summary
- Many Members Performance Testing
Project Tools
- Fcrepo performance analysis: https://github.com/fcrepo4-labs/fcrepo_perf_analysis
- Fedora 4 Ansible: https://github.com/VTUL/fcrepo4-ansible
- Many-members testing scripts: https://github.com/dbernstein/fcrepo-performance-test-scripts
Presentation
- Open Repositories: Presentations
Workshops
- Code4lib 2017 workshop: Performance and Scale Testing of Fedora
- Fedora Camp NYC - 28-30 November 2016
Other Tools
- JMeter
- The Grinder
- BlazeMeter (https://blazemeter.com/) - a commercial service for running our JMeter tests