This command will be introduced in the DSpace 6.4 and DSpace 7.0 releases.
It is recommended that all DSpace instances with legacy identifiers perform this one-time upgrade of legacy statistics records.
This action is safe to run on a live site. As a precaution, it is recommended that you back up your statistics shards before performing this action.
Note: a link to this section of the documentation should be added to the DSpace 6.4 and DSpace 7.0 Release Notes.
Note: the dspace-tech mailing list thread at https://groups.google.com/forum/#!topic/dspace-tech/HbdmAGw2C1E gives instructions for running SolrUpgradePre6xStatistics.
The DSpace 6.x code base changed the primary key for all DSpace objects from an integer id to a UUID identifier. Statistics records created before upgrading to DSpace 6.x still contain the legacy integer identifiers.
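For illustration, the difference looks roughly like this (the field values below are invented; real statistics records carry many more fields):

```python
# Invented example values: a legacy record keyed by an integer object id,
# versus the UUID-keyed form this upgrade rewrites it to.
legacy_record = {"id": "123", "type": "2"}  # pre-6.x: integer DSpace object id
upgraded_record = {"id": "0e2b6d1c-52d4-4bd8-89ea-2f9b7c8f3a10", "type": "2"}

# The legacy id is purely numeric; the upgraded id is a UUID string
print(legacy_record["id"].isdigit())    # True
print("-" in upgraded_record["id"])     # True
```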
If you have sharded your statistics repository, this action must be performed on each shard.
Arguments (short and long forms):
- -i or --index-name: Optional. The name of the index to process ("statistics" is the default).
- -n or --num-rec: Optional. Total number of records to update (default=100,000). To process all records, set -n to 10000000 (10M) or 100000000 (100M).
- -b or --batch-size: Number of records to batch update to Solr at one time (default=10,000).
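A typical invocation might look like the following (the solr-upgrade-statistics-6x launcher name and the [dspace] installation path are assumptions based on a standard DSpace 6.x install; adjust both to your environment):

```shell
# Hedged example: verify the launcher name and install path locally.
# Upgrade up to 10M records in the default "statistics" core:
[dspace]/bin/dspace solr-upgrade-statistics-6x -n 10000000

# On a sharded repository, repeat for each yearly shard, e.g.:
[dspace]/bin/dspace solr-upgrade-statistics-6x -i statistics-2017 -n 10000000
```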
NOTE: This process will rewrite most Solr statistics records and may temporarily double the size of your statistics repositories. Consider optimizing your Solr cores when the process is complete.
Technical implementation details
After sharding, the Solr data cores are located in the [dspace.dir]/solr directory. There is no need to define the location of each individual core in solr.xml because they are automatically retrieved at runtime. This retrieval happens in a static method of the org.dspace.statistics.SolrLogger class, and the cores are stored in the statisticYearCores list. Each time a query is made to Solr, these cores are added as shards by the addAdditionalSolrYearCores method. The cores share a common configuration copied from your original statistics core, so no issues should result from subsequent updates.
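To make the shard mechanism concrete, here is a small illustrative sketch (Python, not the actual Java implementation; the base URL and core names are invented) of what addAdditionalSolrYearCores effectively does: it appends every yearly core to the shards parameter of the outgoing query, so a single request searches all cores at once.

```python
# Illustrative sketch only: models how yearly cores are exposed as Solr
# "shards" on each query. The base URL and core names are made up.
def add_year_shards(params, base_url, year_cores):
    """Append every statistics year core to the query's shards parameter."""
    shard_urls = [f"{base_url}/{core}" for core in year_cores]
    params = dict(params)  # leave the caller's query untouched
    params["shards"] = ",".join(shard_urls)
    return params

query = add_year_shards(
    {"q": "type:2"},                        # original statistics query
    "localhost:8080/solr",                  # assumed Solr base URL
    ["statistics-2016", "statistics-2017"],
)
print(query["shards"])
# localhost:8080/solr/statistics-2016,localhost:8080/solr/statistics-2017
```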
The actual sharding of the original Solr core into individual cores by year is done in the shardSolrIndex method of the org.dspace.statistics.SolrLogger class. The sharding first runs a facet query on time to split the records by year. Once we have the years from the logs, we query the main Solr server for all records belonging to each year and download them as CSV. When we have all the data for one year, we upload it to the newly created core for that year using the CSV update handler. Only after all data for a year has been uploaded is it removed from the main Solr core; this way, if Solr crashes mid-process, we do not need to start from scratch.
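The steps above can be sketched as follows. This is a simplified Python model of the shardSolrIndex flow, operating on in-memory records instead of live Solr cores and the CSV handler; the "time" field name and record shape are illustrative.

```python
from collections import defaultdict

def shard_by_year(main_core):
    """Move records from the main core into per-year cores, one year at a time.

    Simplified model: records are dicts with a "time" field such as
    "2017-03-04T12:00:00Z" (field name and format are illustrative).
    """
    # 1. "Facet" on time: collect the distinct years present in the log
    years = sorted({rec["time"][:4] for rec in main_core})
    year_cores = defaultdict(list)
    for year in years:
        # 2. Download everything for this year (stands in for the CSV export)
        batch = [rec for rec in main_core if rec["time"].startswith(year)]
        # 3. Upload to the newly created yearly core (CSV update handler)
        year_cores[f"statistics-{year}"].extend(batch)
        # 4. Only after the upload succeeds, delete that year from the main
        #    core -- so a crash mid-run never loses uncopied data
        main_core[:] = [r for r in main_core if not r["time"].startswith(year)]
    return year_cores
```

Deleting each year only after its upload completes is what makes the process restartable: a crash leaves the main core holding exactly the years that have not yet been migrated.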