This documentation relates to an old version of DSpace, version 6.x. Looking for another version? See all documentation.
Support for DSpace 6 ended on July 1, 2023. See "Support for DSpace 5 and 6 is ending in 2023" for details.
Please be aware that individual search engines also have their own guidelines and recommendations for inclusion. While the guidelines below apply to most DSpace sites, you may also wish to review the guidelines published by specific search engines (e.g. Google's webmaster guidelines, Google Scholar's inclusion guidelines, or Bing's webmaster guidelines).
Anyone who has analyzed traffic to their DSpace site (e.g. using Google Analytics or similar) will notice that a significant proportion of visitors (in many cases a majority) arrive via a search engine such as Google or Yahoo. Hence, to help maximize the impact of content and thus encourage further deposits, it is important to ensure that your DSpace instance is indexed effectively.
DSpace comes with tools that ensure major search engines (Google, Bing, Yahoo, Google Scholar) are able to easily and effectively index all your content. However, many of these tools require some basic setup. Here's how to ensure your site is indexed.
For the optimum indexing, you should:
We are constantly adding new indexing improvements to DSpace. In order to ensure your site gets all of these improvements, you should strive to keep it up-to-date. For example:
Additional minor improvements / bug fixes have been made to more recent releases of DSpace.
First ensure your DSpace instance is visible, e.g. with: https://www.google.com/webmasters/tools/sitestatus
If your site is not indexed at all, all major search engines provide a way to submit your URL manually, e.g. via Google's or Bing's webmaster tools.
DSpace provides a sitemap feature that we highly recommend you enable to ensure proper indexing. Sitemaps allow DSpace to expose its content in a way that makes it easily accessible to search engine crawlers. Sitemaps also help ensure that crawlers do NOT have to visit every page in your DSpace (which means the crawlers can get in and get out quickly, without taxing your site). Without sitemaps, search engine indexing activity may impose significant loads on your repository.
HTML sitemaps provide a list of all items, collections and communities in HTML format, whilst Google sitemaps provide the same information in gzipped XML format.
To enable sitemaps, all you need to do is run [dspace]/bin/dspace generate-sitemaps once a day.
Just set up a cron job (or scheduled task in Windows), e.g. (cron):
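A sketch of such a crontab entry (adjust the schedule and the [dspace] installation path to your environment):

    # Regenerate DSpace sitemaps daily at 6:00 AM local time
    0 6 * * * [dspace]/bin/dspace generate-sitemaps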
Once you've enabled your sitemaps, they will be accessible at the following URLs:
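In a default installation they are served relative to your configured dspace.url (placeholder shown; substitute your own base URL):

    [dspace.url]/sitemap    (sitemaps.org format, gzipped XML)
    [dspace.url]/htmlmap    (HTML format)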
This command accepts several options:

    -h, --help          Explain the arguments and options.
    -s, --no_sitemaps   Do not generate a sitemap in sitemaps.org format.
    -b, --no_htmlmap    Do not generate a sitemap in htmlmap format.
    -a, --ping_all      Notify all configured search engines that new sitemaps are available.
    -p URL, --ping URL  Notify the given URL that new sitemaps are available. The URL of the new sitemap will be appended to the value of URL.
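For instance, to regenerate both sitemap formats and then notify all configured search engines in one run (using the -a option from the table above):

    [dspace]/bin/dspace generate-sitemaps -a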
You can configure the list of "all search engines" by setting the value of sitemap.engineurls in [dspace]/config/dspace.cfg.
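For instance, the stock configuration pings Google; the exact property value below is illustrative and should be checked against your own dspace.cfg:

    # URL(s) to ping when new sitemaps are generated; the sitemap URL is appended
    sitemap.engineurls = http://www.google.com/webmasters/sitemaps/ping?sitemap=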
Even if you've enabled your sitemaps, search engines may not be able to find them unless you provide them with a link. There are two main ways to notify a search engine of your sitemaps:
Provide a hidden link to the sitemaps in your DSpace's homepage. If you've customized your site's look and feel (as most have), ensure that there is a link to /htmlmap in your DSpace's front or home page. By default, both the JSPUI and XMLUI provide this link in the footer:
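In its simplest form, such a link can be an empty anchor like the sketch below (the hidden class is an assumption; use whatever mechanism your theme provides to keep it invisible):

    <a class="hidden" href="/htmlmap"></a>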
Announce your sitemap in your robots.txt. Most major search engines will also automatically discover your sitemap if you announce it in your robots.txt file. By default, both the JSPUI and XMLUI provide these references in their robots.txt file. For example:
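Using the example hostname from the next section, the announcement is a pair of Sitemap: lines near the top of robots.txt (substitute your own dspace.url):

    # XML sitemap is listed first as it is preferred by most search engines
    Sitemap: http://repo.foo.edu/sitemap
    Sitemap: http://repo.foo.edu/htmlmap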
Search engines will now look at your XML and HTML sitemaps, which are pre-generated (and thus served with minimal impact on your hardware) and link directly to items, collections and communities in your DSpace instance. Crawlers will not have to work their way through any browse screens, which are intended more for human consumption, and are more expensive for the server.
The trick here is to minimize load on your server, but without actually blocking anything vital for indexing. Search engines need to be able to index item, collection and community pages, and all bitstreams within items – full-text access is critically important for effective indexing, e.g. for citation analysis as well as the usual keyword searching.
If you have restricted content on your site, search engines will not be able to access it; they access all pages as an anonymous user.
Ensure that your robots.txt file is at the top level of your site: i.e. at http://repo.foo.edu/robots.txt, and NOT e.g. http://repo.foo.edu/dspace/robots.txt. If your DSpace instance is served from e.g. http://repo.foo.edu/dspace/, you'll need to add /dspace to all the paths in the examples below (e.g. /dspace/browse-subject).
Some URLs can be disallowed without negative impact, but be ABSOLUTELY SURE the following URLs can be reached by crawlers, i.e. DO NOT put these on Disallow: lines, or your DSpace instance might not be indexed properly.
/browse (UNLESS USING SITEMAPS)
/*/browse (UNLESS USING SITEMAPS)
/browse-date (UNLESS USING SITEMAPS)
/*/browse-date (UNLESS USING SITEMAPS)
/community-list (UNLESS USING SITEMAPS)
Below is an example of a good robots.txt. The highly recommended settings are uncommented. Additional, optional settings are displayed in comments – based on your local configuration you may wish to enable them by uncommenting the corresponding "Disallow:" line.
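A sketch of such a file is shown below; it follows the shape of the default DSpace robots.txt, but the exact Disallow paths are assumptions that you should check against your own UI's URL patterns:

    # The FULL URL to the DSpace sitemaps
    # Replace http://repo.foo.edu with the value of dspace.url from your dspace.cfg
    # XML sitemap is listed first as it is preferred by most search engines
    Sitemap: http://repo.foo.edu/sitemap
    Sitemap: http://repo.foo.edu/htmlmap

    ##########################
    # Default Access Group
    # (NOTE: blank lines are not allowed within a group record)
    ##########################
    User-agent: *
    # Disable access to Discovery search and filters
    Disallow: /discover
    Disallow: /search-filter
    #
    # Optional: uncomment ONLY if sitemaps are enabled, as described above
    # Disallow: /browse
    # Disallow: /browse-date
    # Disallow: /community-list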
WARNING: for your additional Disallow statements to be recognized under the User-agent: * group, they cannot be separated from the declared User-agent: * block by blank lines. A blank line indicates the start of a new user agent block, and without a leading user-agent declaration on its first line, such a block is ignored. Comment lines are allowed and will not break the user-agent block.
This is OK:
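Here every Disallow line (and the comment) stays attached to the User-agent: * group, so all of them take effect (paths are illustrative):

    User-agent: *
    Disallow: /discover
    Disallow: /search-filter
    # Comment lines like this do not break the group
    Disallow: /browse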
This is not OK, as the two lines at the bottom will be completely ignored.
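Here the blank line terminates the User-agent: * group, so the two Disallow lines below it belong to no group and are ignored:

    User-agent: *
    Disallow: /discover
    Disallow: /search-filter

    Disallow: /browse
    Disallow: /community-list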
To identify if a specific user agent has access to a particular URL, you can use this handy robots.txt tester.
For more information on the robots.txt format, please see the Google Robots.txt documentation.
It's possible to greatly customize the look and feel of your DSpace, which makes it harder for search engines, and other tools and services such as Zotero, Connotea and SIMILE Piggy Bank, to correctly pick out item metadata fields. To address this, DSpace (both XMLUI and JSPUI) includes item metadata in the <head> element of each item's HTML display page.
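For example, an item page's <head> might include Dublin Core tags along these lines (the values are invented for illustration):

    <head>
      <meta name="DC.title" content="A Sample Item Title" />
      <meta name="DC.creator" content="Doe, Jane" />
      <meta name="DC.type" content="Article" />
      <meta name="DCTERMS.issued" content="2015-06-09" />
      ...
    </head>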
If you have heavily customized your metadata fields away from Dublin Core, you can modify the crosswalk that generates these elements by modifying [dspace]/config/crosswalks/xhtml-head-item.properties.
In addition to Dublin Core <meta> tags in the HTML HEAD, DSpace also includes Google Scholar specific metadata fields in each item's HTML display page.
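These Google Scholar tags look roughly as follows (values invented; the exact set of tags depends on the item's metadata):

    <meta name="citation_title" content="A Sample Item Title" />
    <meta name="citation_author" content="Doe, Jane" />
    <meta name="citation_publication_date" content="2015-06-09" />
    <meta name="citation_pdf_url" content="http://repo.foo.edu/bitstream/123456789/1/sample.pdf" />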
These meta tags are the "Highwire Press tags" which Google Scholar recommends. If you have heavily customized your metadata fields, or wish to change the default "mappings" to these Highwire Press tags, they are configurable in [dspace]/config/crosswalks/google-metadata.properties.
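Each line in that file maps a Highwire Press tag name to one or more DSpace metadata fields. A sketch of the format, with assumed keys and field choices (check the stock file for the actual defaults):

    google.citation_title = dc.title
    google.citation_date = dc.date.issued
    google.citation_authors = dc.author | dc.contributor.author | dc.creator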
Much more information is available in the Configuration section on Google Scholar Metadata Mappings.
Make sure that you never redirect "direct file downloads" (i.e. users who directly jump to downloading a file, often from a search engine) to the associated Item's splash/landing page. In the past, some DSpace sites have added these custom URL redirects in order to facilitate capturing statistics via Google Analytics or similar.
While these URL redirects may seem harmless, they may be flagged as cloaking or spam by Google, Google Scholar and other major search engines. This may hurt your site's search engine ranking or even cause your entire site to be flagged for removal from the search engine.
If you have these URL redirects in place, it is highly recommended to remove them immediately. If you created these redirects to facilitate capturing download statistics in Google Analytics, you should consider upgrading to DSpace 5.0 or above, which is able to automatically record bitstream downloads in Google Analytics (see DS-2088) without the need for any URL redirects.
While DSpace offers a PDF Citation Cover Page option, this option may affect your content's visibility in search engines like Google Scholar. Google Scholar (and possibly other search engines) specifically extracts metadata by analyzing the contents of the first page of a PDF. Dynamically inserting a custom cover page can break the metadata extraction techniques of Google Scholar and may result in all or much of your site being dropped from the Google Scholar search engine.
For more information, please see the "Indexing Repositories: Pitfalls and Best Practices" talk from Anurag Acharya (co-creator of Google Scholar) presented at the Open Repositories 2015 conference.
Feel free to support OAI-PMH, but be aware that in general it is not useful for search engines: no major search engine still consumes it for web indexing (Google, for example, retired OAI-PMH support in its Sitemaps protocol back in 2008).