...
Tim is unavailable next week (May 1), as he will be at a staff retreat. Should we cancel the meeting?
(Ongoing Topic) DSpace 7 Status Updates for this week (from DSpace 7 Working Group)
(Ongoing Topic) DSpace 6.x Status Updates for this week
- 6.4 will happen at some point, but there is no definitive plan or schedule at this time. Please continue to help move forward / merge PRs into the dspace-6.x branch, and we can continue to monitor when a 6.4 release makes sense.
- Upgrading Handle Server:
- (READY for Reviews!) DS-4205 - Upgrading Solr Server for DSpace (Mark H. Wood)
- Auto-reindexing in Solr:
DS-3658 - Should this only happen for major releases? Should it be configurable? Can we find a more precise trigger? When do we need to reindex?
- Dump/restore tool for the authority core. Or should we use DS-4187 (solr-export-statistics)?
- DSpace Backend as One Webapp (Tim Donohue )
- PR: https://github.com/DSpace/DSpace/pull/2265 (PR is finalized & ready for review)
- A follow-up PR will rename the "dspace-spring-rest" module to "dspace-server", and update all URL configurations (e.g. "dspace.server.url" will replace "dspace.url", "dspace.restUrl", "dspace.baseUrl", etc)
- DSpace Docker and Cloud Deployment Goals (Terrence W Brady )
- Update sequences on initialization:
- https://github.com/DSpace/DSpace/pull/2362 - update sequences port
- https://github.com/DSpace/DSpace/pull/2361 - update sequences port
- Brainstorms / ideas (Any quick updates to report?)
- (On Hold, pending Steering/Leadership approval) Follow-up on "DSpace Top GitHub Contributors" site (Tim Donohue ): https://tdonohue.github.io/top-contributors/
- Bulk Operations Support Enhancements (from Mark H. Wood)
- Curation System Needs (from Terrence W Brady )
- Tickets, Pull Requests or Email threads/discussions requiring more attention? (Please feel free to add any you wish to discuss under this topic)
...
These topics are ones we've touched on in the past and likely need to revisit (with other interested parties). If a topic below is of interest to you, say something and we'll promote it to an agenda topic!
- Brainstorms / ideas
- (On Hold, pending Steering/Leadership approval) Follow-up on "DSpace Top GitHub Contributors" site (Tim Donohue ): https://tdonohue.github.io/top-contributors/
- Bulk Operations Support Enhancements (from Mark H. Wood)
- Curation System Needs (from Terrence W Brady )
- Management of database connections for DSpace going forward (7.0 and beyond). What behavior is ideal? Also see notes at DSpace Database Access
- In DSpace 5, each "Context" established a new DB connection. Context then committed or aborted the connection after it was done (based on results of that request). Context could also be shared between methods if a single transaction needed to perform actions across multiple methods.
- In DSpace 6, Hibernate manages the DB connection pool. Each thread grabs a Connection from the pool. This means two Context objects could use the same Connection (if they are in the same thread). In other words, code can no longer assume each new Context() is treated as a new database transaction.
- Should we be making use of SessionFactory.openSession() for READ-ONLY Contexts (or any change of Context state) to ensure we are creating a new Connection (and not simply modifying the state of an existing one)? Currently we always use SessionFactory.getCurrentSession() in HibernateDBConnection, which doesn't guarantee a new connection: https://github.com/DSpace/DSpace/blob/dspace-6_x/dspace-api/src/main/java/org/dspace/core/HibernateDBConnection.java
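The getCurrentSession() vs. openSession() distinction can be illustrated with a minimal thread-local model. This is NOT the real Hibernate or DSpace API, just a sketch of why two Context objects on the same thread end up sharing one session under getCurrentSession(), while openSession() always yields a fresh one:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for Hibernate's SessionFactory: sessions are just ints.
class MiniSessionFactory {
    private static final AtomicInteger ids = new AtomicInteger();

    // getCurrentSession() binds one session per thread (computed lazily, once),
    // like Hibernate's "thread" CurrentSessionContext.
    private final ThreadLocal<Integer> current =
        ThreadLocal.withInitial(ids::incrementAndGet);

    int getCurrentSession() { return current.get(); }         // shared within a thread
    int openSession()       { return ids.incrementAndGet(); } // always a new session
}

public class ContextDemo {
    public static void main(String[] args) {
        MiniSessionFactory sf = new MiniSessionFactory();
        // Two "Context" objects created in the same thread:
        int ctxA = sf.getCurrentSession();
        int ctxB = sf.getCurrentSession();
        System.out.println(ctxA == ctxB);                      // true: same session/connection
        System.out.println(sf.openSession() == sf.openSession()); // false: distinct sessions
    }
}
```

Under this model, a commit or rollback through one Context would affect uncommitted work done through the other, which is exactly the surprise the notes above describe.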
- Bulk operations, such as loading batches of items or doing mass updates, have another issue: transaction size and lifetime. Operating on 1 000 000 items in a single transaction can cause enormous cache bloat, or even exhaust the heap.
- Bulk loading should be broken down by committing a modestly-sized batch and opening a new transaction at frequent intervals. (A consequence of this design is that the operation must leave enough information to restart it without re-adding work already committed, should the operation fail or be prematurely terminated by the user. The SAF importer is a good example.)
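The batch-commit-with-checkpoint idea can be sketched as follows. All names here are hypothetical (the store list stands in for context.commit(), and the checkpoint for whatever restart marker the real importer persists):

```java
import java.util.List;

// Sketch: commit a bulk load in modest batches, advancing a checkpoint after
// each commit so a failed run can restart without re-adding committed work.
public class BatchLoader {
    static final int BATCH_SIZE = 100;

    /** Loads items[checkpoint..), committing every BATCH_SIZE; returns new checkpoint. */
    static int load(List<String> items, int checkpoint, List<String> store) {
        for (int start = checkpoint; start < items.size(); start += BATCH_SIZE) {
            int end = Math.min(start + BATCH_SIZE, items.size());
            store.addAll(items.subList(start, end)); // stands in for context.commit()
            checkpoint = end; // persist this; a crash now loses at most one batch
        }
        return checkpoint;
    }
}
```

A restarted run simply passes the last persisted checkpoint back in, skipping everything already committed, which is the behavior the SAF importer example provides.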
- Mass updates need two different transaction lifetimes: a query which generates the list of objects on which to operate, which lasts throughout the update; and the update queries, which should be committed frequently as above. This requires two transactions, so that the updates can be committed without ending the long-running query that tells us what to update.
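The two-lifetime pattern can be sketched like this (again hypothetical names, not the real DSpace API): the loop variable models the long-running query cursor, while the commit counter models a second, short-lived update transaction that is flushed every few changes without disturbing the cursor:

```java
import java.util.List;
import java.util.Map;

// Sketch: a long-lived read-only iteration selects what to update, while a
// separate "update" transaction commits every BATCH_SIZE changes.
public class MassUpdate {
    static final int BATCH_SIZE = 2;

    /** Applies updates for each query result; returns how many commits occurred. */
    static int update(List<String> queryResults, Map<String, String> db) {
        int commits = 0, dirty = 0;
        for (String id : queryResults) {   // long-running "query" context stays open
            db.put(id, "updated");         // change made via the "update" context
            if (++dirty == BATCH_SIZE) {
                commits++;                 // stands in for updateContext.commit()
                dirty = 0;                 // ...without closing the query cursor
            }
        }
        if (dirty > 0) commits++;          // commit the final partial batch
        return commits;
    }
}
```

If both lifetimes shared one transaction, each frequent commit would invalidate the query cursor mid-iteration, which is why two separate transactions are needed.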
...