Call in details:
- Dial-in Number: (712) 775-7035
- Participant Code: 479307#
- International numbers: Conference Call Information
- You may also call in using the VoIP dialer from a web browser, or Android/iOS apps
- IRC:
- Join the #fcrepo chat room via Freenode Web IRC (enter a unique nick)
- Or point your IRC client to #fcrepo on irc.freenode.net
Meeting 01 - August 29, 2016
- Nick Ruest
- Justin Simpson
- Joshua Westgard
- Esmé Cowles
- Youn Noh
- Andrew Woods
- Michael Durbin
- Benjamin Armintor
- Yinlin Chen
- Bethany Seeger
Agenda
- Introductions
- Logistics
- Virtual daily stand-ups?
- IRC?
- Conference calls?
- Email check-in?
- Sprint retrospective?
- Phase 1 priorities
- Support transacting in RDF
- Support an option to include binaries
- Support references from exported resources to other exported resources
- Support import into a non-existing Fedora container
- Support export of resource and its "members" based on the ldp:contains predicate
- The URIs of the round-tripped resources must be the same as the original URIs
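The URI-preservation requirement above can be expressed as a simple invariant: the set of resource URIs listed before export must equal the set listed after re-import. A minimal sketch, with hypothetical names and made-up URIs (the sets stand in for repository listings a real test would fetch over the REST API):

```java
import java.util.Set;
import java.util.TreeSet;

// Hypothetical check for the round-trip requirement: after export and
// re-import, the set of resource URIs must be unchanged.
public class RoundTripUriCheck {
    static boolean urisPreserved(Set<String> original, Set<String> roundTripped) {
        return original.equals(roundTripped);
    }

    public static void main(String[] args) {
        Set<String> before = new TreeSet<>();
        before.add("http://localhost:8080/rest/col1");
        before.add("http://localhost:8080/rest/col1/obj1");

        Set<String> after = new TreeSet<>(before);
        System.out.println(urisPreserved(before, after)); // true

        after.add("http://localhost:8080/rest/extra");
        System.out.println(urisPreserved(before, after)); // false
    }
}
```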
- Assign work
Minutes
- Introductions
- Logistics
- Virtual daily stand-ups on IRC by 10 am ET
- What you did yesterday
- What you plan on doing today
- Blockers
- Conference calls on this line if needed
- Sprint retrospective on Sep 9 at 3:30 pm ET
- What worked
- What didn’t work
- Try to wrap up by Thursday
- Phase I Priorities
- Camel serializer
- Supports transacting in RDF and has option to include binaries
- Listens for messages on queues; need to run indexer before running export
- Basic case: dump to disk and gzip
- Need to parse RDF to figure out information required for import, e.g., base url for repository
- Provides options for RDF serialization supported by REST API and supplies default serialization
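The "dump to disk and gzip" basic case can be sketched with only the JDK: write a resource's RDF description to a gzipped file, then read it back to confirm the bytes survive. This is a minimal illustration, not the Camel serializer itself; the Turtle string and file name are made up.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the basic export case: dump an RDF
// description to disk, gzipped, and read it back.
public class GzipDump {
    static void dump(Path file, String rdf) throws IOException {
        try (Writer w = new OutputStreamWriter(
                new GZIPOutputStream(Files.newOutputStream(file)),
                StandardCharsets.UTF_8)) {
            w.write(rdf);
        }
    }

    static String load(Path file) throws IOException {
        try (InputStream in = new GZIPInputStream(Files.newInputStream(file))) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String turtle = "<http://localhost:8080/rest/obj1> a <http://www.w3.org/ns/ldp#RDFSource> .";
        Path file = Files.createTempFile("export", ".ttl.gz");
        dump(file, turtle);
        System.out.println(load(file).equals(turtle)); // true
    }
}
```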
- Looking ahead to Phases II and III
- Versions
- JCR versioning will include any version of the parent
- For Camel serializer, does Fedora send out a message when it creates a version?
- A possible way forward is to treat the basic case as export of the current version, then build in the capability to export a non-current version with additional metadata
- Need to consider layout of versions on disk
- Fedora API specification is leaning towards making versions first-class resources
- Bags
- Client tool
- Generate from REST API calls to repository or from filesystem?
- Only LDP basic containers? Connected graph from root; basic containers if not starting at root
- Stakeholder use cases include exporting individual resources, not entire graph, and being able to generate resources that can be imported into other repositories (such as APT) from documented export formats
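The "connected graph from root" idea above amounts to a breadth-first walk over ldp:contains. A minimal sketch, where the containment map stands in for what a client would discover via REST API calls (the URIs are made up):

```java
import java.util.*;

// Hypothetical sketch of collecting the connected set of resources
// from a root by following ldp:contains.
public class ContainsTraversal {
    static List<String> connectedFrom(String root, Map<String, List<String>> contains) {
        List<String> visited = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(root);
        seen.add(root);
        while (!queue.isEmpty()) {
            String uri = queue.remove();
            visited.add(uri);
            for (String child : contains.getOrDefault(uri, List.of())) {
                if (seen.add(child)) {   // skip already-seen resources
                    queue.add(child);
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> contains = Map.of(
            "/rest", List.of("/rest/col1", "/rest/col2"),
            "/rest/col1", List.of("/rest/col1/obj1"));
        System.out.println(connectedFrom("/rest", contains));
        // [/rest, /rest/col1, /rest/col2, /rest/col1/obj1]
    }
}
```

Starting the walk at a non-root container naturally yields the "individual resource, not entire graph" stakeholder case.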
- Design considerations
- Camel may set the bar too high for users; code may need to be re-implemented
- Some discussion of using bash or Python for prototyping; final consensus was to use Java
- Testing plan should define success, describe tests that will be run, and include test data
- Integration test suite in java; unit tests included with utilities
- Logging
- In-flight tickets
- FCREPO-2127: Document how the Camel RDF Serializer exports content to disk
- FCREPO-2128: Review Camel RDF Serializer implementation
- FCREPO-2129: Create import-export github repository in the fcrepo4-labs organization
- FCREPO-2130 (close out FCREPO-1990): Create skeletal import client utility
- FCREPO-2131: Create skeletal export client utility
- FCREPO-2132: Create a test plan
- FCREPO-2133: Create sample test dataset
- FCREPO-2134: Document sprint resources
- FCREPO-2135: Create user documentation
- FCREPO-2136: Create Import/Export wiki documentation
Meeting 02 - September 2, 2016
- Nick Ruest
- Justin Simpson
- David Wilcox
- Esmé Cowles
- Youn Noh
- Andrew Woods
- Michael Durbin
- Bethany Seeger
- Joshua Westgard (2nd half)
Agenda
- Open issues
- Review ticket process
- Squashing commits
- Review use of IRC
- Helping everyone use the jar and load data
- We need data with cross-references and external references
- Consider refactoring export to have 'ldp:contains' passed in?
- IMPORT!
- Review tickets
- AuthZ
Minutes
Ticket review
- 2127: Justin will add step-by-step instructions with output to this ticket, to provide more info.
- 2130: Esmé will take over the ticket and start on the import utility, getting done what he can for others to review on Monday. This will be a basic, skeletal version, working with a small repository containing a couple of small hand-built resources.
- 2132: We reviewed Youn's test plan document and resolved comments: https://docs.google.com/document/d/1WW0dU9LDWvnRPGbzKpIE-ODkGq39md17sKkWVXC0P8U/edit Nick and Youn will work on adding it to the wiki.
- 2133: Josh has sample data in the sample data repository; the ticket is almost ready to close. Youn has sample data, currently living in a Google Drive folder, and is working on a script to load it into a Fedora repository. Esmé provided feedback on how to modify the existing sample OWL file; Justin will help next week with scripting the loading of the OWL data, if needed.
- 2134: Will leave open for extra comments/input until next week.
- 2135: Leaving open and unassigned until next week.
- 2139: Low priority.
- 2143 and 2144: Leaving open until next week.
- 2146: Discussed; needs to be revisited once the basic import tool exists. Some discussion about how to pass in and save configuration (command line and
- 2155: Andrew is working on integration tests: run export and import from the command line to test round-tripping. Esmé suggested writing the test in the other order: import a fixture, then export it. Andrew will commit initial driver-level tests first, then create a follow-on ticket for sub-module-level tests.
Takeaways from the ticket review discussion:
- General process for making sample datasets:
- Write up a document that provides links to the raw data
- Write a script to import that data into a Fedora repository
- Run fcr:backup on that repository
- Tar up the output of fcr:backup
- Add all the outputs from the above steps to the fcrepo sample data GitHub repository
- Small test fixtures will be exercised by the integration tests
- Larger sample datasets (like those in the fcrepo sample data repository) will be tested against fcrepo-vagrant as the test environment and documented in the test plan, which will be added to the wiki
We reviewed the process for using JIRA:
- Make sure to take ownership of tickets you are working on
- Mark them as in progress, ready for review, etc., as you go
We reviewed the pull request process:
- Put in a pull request
- Others comment
- Based on comments, you make subsequent commits (don't squash)
- Once the pull request is complete, squash down to one commit before merging to master
This is not a hard-and-fast rule; there can be exceptions:
- Sometimes it is better not to squash, or to squash down to a set of commits, if there is a logical separation between commits
- Sometimes the branch has to be rebased due to other work going into master, and in that case it can make sense to squash when rebasing
Esmé provided a link to a good blog post on squashing commits by Jeremy Friesen: http://ndlib.github.io/practices/one-commit-per-pull-request/
We discussed communication:
- Using IRC, make sure your IRC client can notify you when your name is used; try to keep IRC on so you can participate during the sprint
We discussed how to actually get the code, making sure all of us can build and/or acquire the jar file and run it. Ask in IRC if there are questions.
We talked about the process of dependency management and the use of ldp:contains; Esmé and Andrew will do some refactoring.