Repairing is the process of resolving corrupt files node-to-node in Chronopolis. Rather than being fully automated, it requires a node administrator to choose which files to repair and which node to repair from. This prevents unnecessary repairs (e.g. false corruption reports caused by a filesystem being offline) and allows for discussion and investigation of the collection before it is repaired.

Installation

Download and install the RPM
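
A minimal install sketch (the package file name and version shown are illustrative):

sudo yum localinstall chron-repair-1.5.0-1.noarch.rpm
# or directly with rpm:
sudo rpm -ivh chron-repair-1.5.0-1.noarch.rpm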

Installation Notes

The RPM creates a Chronopolis user if one does not exist, and creates the following files and directories:

/etc/chronopolis
/etc/chronopolis/repair.yml
/etc/init.d/chron-repair
/usr/lib/chronopolis
/usr/lib/chronopolis/chron-repair.jar
/var/log/chronopolis

Configuration

The repair service is configured in repair.yml under /etc/chronopolis.

repair.yml:
# Application Configuration for the Chronopolis Repair

# cron timers
## cron.repair: how often to check the Ingest Server repair endpoint
## cron.fulfillment: how often to check the Ingest Server fulfillments endpoint
cron:
  repair: 0 0/1 * * * *
  fulfillment: 0 0 * * * *

# general properties
## repair.stage: staging area to replicate files to before they are moved to preservation storage
## repair.preservation: preservation storage area
repair:
  stage: /export/repair/staging
  preservation: /preservation/bags

# Chronopolis Ingest API configuration
## ingest.endpoint: the url of the ingest server
## ingest.username: the username to authenticate as
## ingest.password: the password to authenticate with
ingest:
  endpoint: http://localhost:8000
  username: node
  password: nodepass

# rsync configuration for fulfillments
## rsync.path: used if chrooting users rsyncing - the path under the chroot context
## rsync.stage: a staging area which fulfillments will be copied to
## rsync.server: the fqdn of the server nodes will replicate from 
rsync:
  path: /export/repair/outgoing
  stage: /export/repair/outgoing
  server: loach.umiacs.umd.edu

# ACE AM configuration
## ace.am: the local ACE AM webapp
## ace.username: the username to authenticate as
## ace.password: the password to authenticate with 
ace:
  am: http://localhost:8080/ace-am/
  username: admin
  password: admin

# spring properties
## spring.profiles.active: the profiles to use when running
##                         recommended: default, rsync
spring:
  profiles:
    active: default, rsync

# logging properties
## logging.file: the file to write logging statements to
## logging.level: the log level to filter on
logging.file: /var/log/chronopolis/repair.log
logging.level.org.chronopolis: INFO

Running

The Repair Service ships with a SysV-style init script supporting the basic start/stop/restart options. You may need to customize the script if your Java location must be specified explicitly.

service chron-repair start|stop|restart
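
If the script cannot locate Java, one option (a sketch; the init script's internals may differ) is to set the Java location near the top of /etc/init.d/chron-repair:

# In /etc/init.d/chron-repair (assumed snippet; adjust the path to your environment)
JAVA_HOME=/usr/lib/jvm/java-1.8.0
PATH="$JAVA_HOME/bin:$PATH"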

Workflow

Note that the workflow involves two nodes: one with CORRUPT data and one with VALID data.

  1. CORRUPT notices data in their Audit Manager (ACE-AM) showing File Corrupt, indicating that checksums on disk have changed
    1. Discussion happens internally about who holds a valid copy and can repair it
    2. SSH keys are exchanged so that data transfer can occur for the files to be repaired
  2. CORRUPT logs on to the Ingest server and selects 'Request Repair' in order to create a 'Repair Request'
    1. Inputs ACE AM credentials to query for the corrupt collection
    2. Selects the Collection
    3. Selects the Files to repair and the Node to repair from
  3. VALID logs onto the Ingest server and selects 'Fulfill Repair' in order to stage data for the repair
  4. At this point, both CORRUPT and VALID nodes should start the Repair service
    1. The Repair service running at VALID will stage data and update the Repair
    2. The Repair service running at CORRUPT will (see the sketch after this list)
      1. Pull data from VALID into a staging area
      2. Validate that the data transferred and matches the checksums in the ACE AM
      3. Overwrite the corrupt files
      4. Audit the files in the ACE AM
      5. Update the Repair with the result of the audit
  5. Once complete, the Repair Service at each node can be stopped
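
A minimal sketch of what the Repair service at CORRUPT automates in step 4; host names, staging paths, and the checksum algorithm are assumptions:

# Pull staged data from VALID into the local, namespaced staging area
rsync -av valid-node.example.edu:/export/repair/outgoing/repair-1/ /export/repair/staging/repair-1/

# Validate the transfer against the digests recorded in the ACE AM
# (SHA-256 assumed; use the algorithm your ACE AM is configured with)
sha256sum /export/repair/staging/repair-1/data/file_0

# Overwrite the corrupt file in preservation storage
cp /export/repair/staging/repair-1/data/file_0 \
   /preservation/bags/depositor/collection/data/file_0

The subsequent audit and Repair status update happen through the ACE AM and Ingest Server APIs.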

Repair File Transfer Strategies

During design of the Repair service, it was noted that there are different ways of transferring content between Chronopolis Nodes:

  • Direct transfer
    • through rsync
    • through ACE AM
  • Indirect transfer through the Ingest Server

During development, the framework was built to support each type of transfer, but only the direct rsync strategy was implemented. The direct ACE-AM transfer strategy requires additional development in the Audit Manager to support API Keys which can be used to access content. The indirect transfer through the Ingest Server was omitted because it was not deemed onerous for Chronopolis Nodes to exchange SSH keys.

Repair Types

Currently the Repair workflow handles repairing corrupt files, but does not cover other types of failure which can occur in the system. For example, in the past we have had issues with the Audit Manager (ACE-AM) receiving invalid checksums from the underlying storage system, which then needed to be updated in order for an audit to pass. We have also seen ACE Token Stores be partially loaded, which requires re-uploading the ACE Token Store so that we can ensure we are auditing against the ACE Tokens created on ingestion of the collection.

Release Notes

Release 1.5.0

26 June, 2017

Initial release for the Chronopolis Medic (Repair) software to process Repair requests on the Chronopolis Ingest Server

  • Repairs have namespaced staging areas so as not to interfere with other ongoing Repairs
  • Staging files is done via symbolic links (other staging options may be supported later)
  • Rsync is supported as the main protocol for distributing files (other protocols may be supported later)
  • Comparison with a node's ACE-AM before files are moved into production storage
  • Staging areas for both repairing and fulfilling nodes are cleaned upon completion of a Repair

This page will serve to map out a general process for restoring/repairing content between nodes in Chronopolis, and denote areas where discussion/development is needed.

Chronopolis Repair Design Document

Michael Ritter, UMIACS October 10, 2016

Background

Within the standard operation of Chronopolis, it is likely, given the volume of data we ingest, that we will at some point need to repair data held at a node. In the event a node cannot repair its own data, a process will be in place so that the data can be repaired through the Chronopolis network. This document outlines a basic design proposal for a protocol through which we can repair collections via a combination of manual and automated work.

Considerations

As this design is still living, open questions remain about how everything should be finalized and what impact those decisions will have on the final result.

1. Transfer Strategies

  • Multiple types of transfers are allowed; however, each will need to be implemented.
    • Node to Node: Transfer between replicating nodes using rsync + ssh with no intermediary step
    • Node to Ingest: Push content to the Ingest node, from which a node can repair
    • ACE: Use ACE with https as the transfer mechanism for serving files

2. Should we put a limit on the number of files being repaired in a single request?

  • At the moment this is unbounded, but we may want to look into it in the future

3. Should we include tokens in this process, but leave implementation out for now?

  • Initial version will only handle files, tokens can be added on later

Repair Flow

Basic flow: node_i = invalid; node_v = valid

  1. node_i sees invalid files in ACE_i
  2. node_i gathers invalid files and issues a repair request to the ingest server
    1. POST /api/repair
    2. Handled manually
    3. Consider having multiple requests in the event many files are corrupt
  3. node_v sees the repair request
    1. Handled manually, likely from discussion in the chron group
  4. node_v checks ACE_v to see if the files are valid
    1. POST /api/repair/<id>/fulfill if valid
  5. node_v stages content for node_i
    1. P2P: make a link (or links) to the files in a directory for node_i
    2. Ingest: rsync the files up to the ingest server
    3. ACE: create a token for node_i and make that available
  6. node_v notifies the ingest server that content is ready for node_i
    1. POST /api/repair/fulfillment/<id>/ready
  7. node_i replicates staged content
    1. GET /api/repair/fulfillment?to=node_i&status=ready
  8. node_i validates staged content
    1. communicates with the ACE compare API
    2. if not valid, end here
  9. node_i copies staged content to preservation storage
  10. node_i issues an audit of the corrupt files
  11. node_i responds with the result of the audit
    1. if the audit is not successful a new replication request will need to be made, but we might want to do that by hand
    2. POST /api/repair/fulfillment/<id>/complete
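
The flow above maps to HTTP calls along these lines (host, authentication style, and payloads are assumptions; see the API and model sections below):

# node_i requests a repair (body follows the Repair Request model)
curl -u node_i:pass -X POST http://ingest.example.org/api/repair \
  -H 'Content-Type: application/json' \
  -d '{"depositor": "depositor-with-corrupt-collection",
       "collection": "collection-with-corrupt-files",
       "files": ["file_0", "file_1"]}'

# node_v offers to fulfill repair 1
curl -u node_v:pass -X POST http://ingest.example.org/api/repair/1/fulfill

# node_v marks the fulfillment ready once content is staged
curl -u node_v:pass -X POST http://ingest.example.org/api/repair/fulfillment/3/ready

# node_i polls for fulfillments ready for it
curl -u node_i:pass 'http://ingest.example.org/api/repair/fulfillment?to=node_i&status=ready'

# node_i reports the result of the audit
curl -u node_i:pass -X POST http://ingest.example.org/api/repair/fulfillment/3/complete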

Turning this into a graph might be useful

Transfer Strategies

Node to Node

Node to Node transfers would require additional setup on our servers, and would likely require a look into how we deal with security around our data access (transferring ssh keys, ensuring access by nodes is read only, etc.). A feasible staging process could look like:

1. node_v links data (ln -s) in node_i's home directory

2. node_i rsyncs data from node_v:/homes/node_i/depositor/repair-for-collection
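
A minimal sketch of these two steps (hosts, users, and paths are illustrative):

# On node_v: link the valid copy into node_i's home directory
ln -s /preservation/bags/depositor/collection/data/file_0 \
      /homes/node_i/depositor/repair-for-collection/file_0

# On node_i: pull the staged files, dereferencing the links (-L)
rsync -avL node_i@node-v.example.edu:/homes/node_i/depositor/repair-for-collection/ \
      /export/repair/staging/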

Node to Ingest

Node to Ingest, while lengthy, would have the least amount of development and setup effort associated with it. Since we will most likely not be repairing terabytes of data at a time, one could say this is "good enough". The staging process for data would look similar to:

1. node_v rsyncs data to the ingest server

2. node_v notifies that the data is ready at /path/to/data on the ingest server

3. node_i rsyncs data from the ingest server at /path/to/data
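
A minimal sketch of the indirect path (hosts and paths are illustrative):

# On node_v: push the valid files up to the ingest server
rsync -av /preservation/bags/depositor/collection/data/ node_v@ingest.example.org:/path/to/data/

# On node_i: pull them back down from the ingest server
rsync -av node_i@ingest.example.org:/path/to/data/ /export/repair/staging/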
 

ACE

Repairing through ACE would require additional development on ACE, as it currently does not have any concept of API keys, but otherwise provides the same benefits as Node-to-Node repair with some constraints from http itself. Staging would become quite simple, and amount to:

1. node_v marks the collection as allowing outside access (for API keys only?)

2. node_v requests a new temporary API key from ACE

3. node_i downloads from ACE_v using the generated API key

API Design - Move to Sub Page

The API can be viewed with additional formatting and examples at http://adaptci01.umiacs.umd.edu:8080/

HTTP API

The REST API described follows standard conventions and is split into two main parts: repair and fulfillment.

Repair API

GET /api/repair/requests?<requested=?,collection=?,depositor=?,offers=?>
GET /api/repair/requests/<id>
POST /api/repair/requests
POST /api/repair/requests/<id>/fulfill

Fulfillment API

GET /api/repair/fulfillments?<to=?,from=?,status=?>
GET /api/repair/fulfillments/<id>
PUT /api/repair/fulfillments/<id>/ready
PUT /api/repair/fulfillments/<id>/complete
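
For example, a node might query for fulfillments addressed to it, and the fulfilling node might mark one ready (host and authentication are assumptions):

# List fulfillments staged for node_i that are ready to transfer
curl -u node_i:pass 'http://ingest.example.org/api/repair/fulfillments?to=node_i&status=ready'

# Mark fulfillment 3 as ready once staging is complete
curl -u node_v:pass -X PUT http://ingest.example.org/api/repair/fulfillments/3/ready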

Models - Move to Sub Page

A repair request, sent by a node that notices it has corrupt files in a collection

Repair Request Model

{
  "depositor": "depositor-with-corrupt-collection",
  "collection": "collection-with-corrupt-files",
  "files": ["file_0", "file_1", ..., "file_n"]
}

A repair structure, returned by the Ingest server after a repair request is received

Repair Model

{
  "id": 1,
  "status": "requested|fulfilling|repaired|failed",
  "requester": "node-with-corrupt-file",
  "depositor": "depositor-with-corrupt-collection",
  "fulfillment": 3,
  "collection": "collection-with-corrupt-files",
  "files": ["file_0", "file_1", ..., "file_n"]
}

A fulfillment for a repair, returned by the Ingest server after a node notifies it can fulfill a repair request. Credentials are only visible to the requesting node and administrators.

Fulfillment Model

{
  "id": 3,
  "to": "node-with-corrupt-file",
  "from": "node-with-valid-file",
  "status": "staging|ready|complete|failed",
  "credentials": { ... },
  "repair": 1
}

Credentials ACE

{
  "type": "ace",
  "api-key": "ace-api-key",
  "url": "https://node_v/ace-am"  # ?? Not sure if really needed
}

Credentials Node-to-Node

{
  "type": "node-to-node",
  "url": "node_i@node_v.edu:/homes/node_i/path/to/repair"
}

Credentials Ingest

{
  "type": "ingest",
  "url": "node_i@chron.ucsd.edu:/path/to/repair"
}

-------------------------

Previous iterations:

Repair Design Document, October 2016

...