Many thanks to the Cornell Usability Working Group for carrying out the usability tests and recording the results. Special thanks to Wendy Kozlowski and Kevin Kidwell, who facilitated the tests and took notes. After providing some context around the tests and high-level outcomes, we include the text of the Usability Working Group's final report below. The report is also available here (TO DO: Link).

...

Huda discussed the goal for this study as well as potential tasks with the Usability Working Group. The group discussed the format and helped refine the tasks for the study. Kevin Kidwell and Wendy Kozlowski facilitated the sessions and took notes, and the Usability Working Group formulated the final report included below. The tasks used in the study, with accompanying screenshots showing possible interactions in the interface, are available here. Given the current situation with the pandemic, and an already over-taxed faculty and student population, we decided to recruit participants from user representatives who work with the Cornell Blacklight discovery team. We used Zoom to enable participants to share their screens and provided them a URL they could use to access the website and attempt the tasks. Their interactions with the screen were recorded while not recording their faces. The recordings are linked below (TO DO: provide a link).

...

June 22-25, 2020

KEY LEARNINGS

  • In general, participants were able to use the information displayed in the autosuggest to distinguish between types of entities and to identify headings related to variant labels or connected using "see also" properties.

    • Participants were able to find authors, subjects, locations, and genres when asked to find entities from each of these types in the tasks.

    • Participants were able to use the count shown in the autosuggest results to answer the question about the number of catalog items for a particular topic.

    • Participants were able to use the descriptive text retrieved from Wikidata to distinguish between authors with similar names but different occupations. One participant tried typing the occupation along with the name to see whether it would help retrieve results, but we noted that occupation text is not matched (a sketch of this matching behavior follows this list).

  • For both authors and subjects, all participants were able to search for a variant label and find the preferred heading. Most participants understood what the term "aka" stood for in the results and thought this display of information was useful. When searching for an author, one participant indicated they weren't sure which heading was authorized since both the variant and preferred versions ended with date strings.

  • For the shared pseudonym example, participants were able to find the names linked with "see also", but it was not clear to all participants that the searched name was a pseudonym shared between the linked names. Suggestions for clarifying this connection included using the term "pseudonym".

  • Four out of five participants appreciated the knowledge panel on the results page as a way of confirming the search they had conducted.
  • One participant noted a discrepancy between authors and subjects: the former show descriptive information from Wikidata while the latter do not.
    • We suggest that future work explore how to provide consistent information about the same person regardless of whether the URI returned is for an authorized name or for a subject heading.
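
The matching and display behavior described above (preferred and variant labels are matched, descriptive text is shown but never searched, and a variant match surfaces the authorized heading flagged with "aka") can be illustrated with a minimal sketch. This is a hypothetical Python illustration, not the project's actual implementation; the field names, matching rules, and sample data are all assumptions made for the example.

```python
# Hypothetical sketch only: the fields, matching rules, and sample data are
# assumptions for illustration, not the LD4P2 autosuggest implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Heading:
    pref_label: str                                       # authorized form
    alt_labels: List[str] = field(default_factory=list)   # variants ("aka")
    see_also: List[str] = field(default_factory=list)     # related headings
    description: str = ""            # descriptive text, e.g. from Wikidata
    entity_type: str = "author"      # author | subject | location | genre
    count: int = 0                   # number of catalog items

INDEX = [
    Heading("Twain, Mark, 1835-1910",
            alt_labels=["Clemens, Samuel Langhorne, 1835-1910"],
            description="American writer and humorist",
            count=312),
]

def suggest(query: str) -> List[str]:
    """Match only preferred and variant labels; descriptive text such as an
    occupation is displayed but never matched, so typing an occupation does
    not narrow the results."""
    q = query.lower()
    out = []
    for h in INDEX:
        label = None
        if q in h.pref_label.lower():
            label = h.pref_label
        else:
            for alt in h.alt_labels:
                if q in alt.lower():
                    # Matched via a variant: surface the authorized heading
                    # and flag the matched variant with "aka".
                    label = f"{h.pref_label} (aka {alt})"
                    break
        if label:
            out.append(f"{label} | {h.description} "
                       f"[{h.entity_type}, {h.count} items]")
            # "See also" relations (e.g. a shared pseudonym) point the user
            # at the related authorized headings.
            out.extend(f"see also: {rel}" for rel in h.see_also)
    return out

print(suggest("clemens"))
# ['Twain, Mark, 1835-1910 (aka Clemens, Samuel Langhorne, 1835-1910) |
#   American writer and humorist [author, 312 items]']
```

Under a model like this, a "see also" link for a shared pseudonym would simply list the related authorized headings; per the participant suggestion above, prefixing such links with "pseudonym" where applicable could make the nature of the connection clearer.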

June 2020 – LD4P2 Auto-Suggest Report

...