
Release Notes

Issues discovered in testing

Issue 1

Re the orphaned chunk issue (DURACLOUD-1155): it appears that orphaned chunks are not removed when the file being updated is identical to the one already in DuraCloud. There is one way around it: jumpstart mode. When the SyncTool is in jumpstart mode, the file is retransferred and the orphaned chunks are removed.

So the scenario we are talking about is as follows: 

  1. A user uploads a 20 GB file with 1 GB chunks, producing 20 chunks.
  2. The user uploads the same file using a pre-6.1.0 version of the SyncTool, but with the chunk size changed to 5 GB.
  3. There are now still 20 chunks: 0000-0003 are 5 GB each, and 0004-0019 are 1 GB each. The 1 GB chunks are the orphans.
  4. Using the new (6.1.0) version of the SyncTool, the orphans won't be removed in non-jumpstart mode, because the SyncTool detects that the unchunked checksum matches the manifest and moves on. When jumpstart is enabled, cleanup is invoked, but at the cost of retransferring the content.
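The arithmetic behind the scenario above can be sketched as follows (illustrative only; chunk index ranges follow the scenario, not actual SyncTool code):

```python
# Sketch of the orphaned-chunk arithmetic from the scenario above.
GB = 1024 ** 3
FILE_SIZE = 20 * GB

def chunk_count(size, chunk_size):
    """Number of chunks needed to hold `size` bytes (ceiling division)."""
    return -(-size // chunk_size)

first_upload = chunk_count(FILE_SIZE, 1 * GB)   # 20 chunks: 0000-0019
second_upload = chunk_count(FILE_SIZE, 5 * GB)  # 4 chunks:  0000-0003

# The second upload only rewrites chunks 0000-0003; chunks 0004-0019
# from the first upload are never overwritten, so they remain as orphans.
orphans = first_upload - second_upload
print(first_upload, second_upload, orphans)  # 20 4 16
```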


We have a tool for identifying orphaned chunks and removing them, which we have used successfully in the recent past. We could force the SyncTool to perform a cleanup whenever it detects matching files; we could support that behavior only when a flag is enabled; or, conversely, we could make cleanup the default and provide a flag to prevent it.

As far as DuraCloud operations are concerned, making cleanup on matching files the default behavior and providing a flag to suppress it for power users is optimal, since we won't have to worry about scrubbing spaces after users upgrade to 6.1.0.
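For illustration, the "cleanup by default, flag to suppress" option could look roughly like this (hypothetical sketch; the `--no-cleanup` flag and function names are invented here and are not actual SyncTool options):

```python
import argparse

def should_clean_orphans(checksums_match: bool, jumpstart: bool,
                         no_cleanup: bool) -> bool:
    """Proposed default: clean orphaned chunks on a checksum match,
    unless a (hypothetical) --no-cleanup flag suppresses it."""
    if jumpstart:
        return True  # jumpstart already retransfers and cleans up
    return checksums_match and not no_cleanup

parser = argparse.ArgumentParser(description="cleanup decision sketch")
parser.add_argument("--no-cleanup", action="store_true",
                    help="suppress orphan cleanup on checksum match")
args = parser.parse_args([])  # parse an empty argv for the demo
print(should_clean_orphans(True, False, args.no_cleanup))  # True
```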

For the users, the best option is for us not to implement "cleanup on checksum match" behavior at all, and instead to scrub their repositories once they upgrade to the new SyncTool.

Thanks for calling this out, Danny. I know that the scenario you describe has occurred, but it is definitely an edge case. Considering that we have other tooling to discover and clean up this issue directly, I don't think it's necessary to burden the SyncTool process with checking for orphaned files on each file it touches. That has the potential to significantly slow down the sync process, especially if many small files are being transferred.

I'll make the call to leave the SyncTool as-is for 6.1. We will definitely want to encourage all users to upgrade to the new SyncTool as soon as it is available.



Issue 2

DWP-802

Resolved in PR: https://github.com/duracloud/duracloud/pull/107



I had this happen to me while I was testing multi-space delete in the last few days.

Issue 3

When deleting more than 2 spaces as a root user, the "Are you sure?" dialog appears (and requires a confirmation click) one time fewer than the number of spaces being deleted. So if attempting to delete 5 spaces, you only need to click "OK" 4 times.
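The off-by-one described above can be illustrated with a minimal sketch (a hypothetical reconstruction of the bug's shape, not the actual DurAdmin code):

```python
def confirmations_shown(num_spaces):
    """Buggy confirmation loop: starting at 1 skips one dialog."""
    clicks = 0
    for i in range(1, num_spaces):  # bug: should be range(num_spaces)
        clicks += 1
    return clicks

print(confirmations_shown(5))  # 4 clicks for 5 spaces, as reported
```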

Resolved in PR: https://github.com/duracloud/duracloud/pull/106



I'm seeing this behavior now, too. Looking into it.

Testing of Completed Issues

Issue | Test results
DURACLOUD-398 | (tick)(tick)
DURACLOUD-680 | (tick)(tick)
DURACLOUD-743 | (tick)(tick)
DURACLOUD-1065 | (tick)(tick)
DURACLOUD-1154 | (tick)(tick)
DURACLOUD-1155 | (tick)(tick)
DURACLOUD-1192 | (warning)(tick)(tick)
DURACLOUD-1203 | (tick)(tick)
DURACLOUD-1227 | (tick)
DURACLOUD-1229 | N/A N/A N/A
DURACLOUD-1231 | (tick)(tick)
DURACLOUD-1234 | (tick)(tick)
DURACLOUD-1237 | (tick)(tick)
DURACLOUD-1240 | (tick)(tick)
DURACLOUD-1243 | (tick) N/A
DURACLOUD-1245 (added as part of release - issue #2) | (tick)

Regression Testing

Task | Result
Perform Regression Tests | (tick)
Use ZAProxy to perform a security analysis:
  • Use a test DuraCloud account with very little content
  • Start a Manual Explore, log in to DurAdmin, perform an "Active Scan"
  • Remove any sites not relevant to DuraCloud from the "Sites" list
  • Generate an HTML and XML report and attach to this page

duracloud-6.1.0-zaproxy-report.html

Build Tests

Test
mvn clean install (full build + integration tests) - DuraCloud

(warning) Integration tests fail frequently due to a "space already exists" error; see issue #2.

(tick) After all updates applied

(tick) (Two tests failed, but upon further investigation it was clear that they were false positives)



Release Actions - for each baseline (in this order): DB, DuraCloud, MC, Snapshot, Mill

