Responses to "Any other feedback you would like to provide?"

19 responses (a few split for categorization)

Context
As a cataloger, I am typically looking at a resource with some information about an entity and an authority source with some information, hoping to discern points of agreement that add up to sufficient congruence between the two to persuade me they are the same. Contextual information in both the resource and the authority file description is crucial to this process. The use cases which suggest linking or matching without more information, and without a more circumspect decision process about the congruence of the two entities, make me nervous. Whether a quick match is possible depends on how well differentiated each label (or label visibly associated with a URI) is. We haven't yet seen how this discernment of points of agreement could be automated based on available information from a bib resource and an authority source.
Change management
Regarding c-27, it would be ideal to be alerted when changes are made. I would be afraid of those cases where a person is flipped to an incorrect identity based on a similar name. However, this is probably just me thinking in terms of strings and not in terms of things. Of course, if the linked data is derived from an existing faulty MARC authority record that has, for example, conflated two persons, then there is the potential for an incorrect "flip" to take place. MARC-derived linked data is only as good as the original authoritative source.
For c-26 and c-27 we don't see this as an either/or proposition. We would like a combination approach that allows us to specify categories of changes that could be approved automatically and others that can be reviewed first.
C-26 seems obvious - of course the display should match the authoritative source. What else would it match? I guess I don't really understand what the use case is. 
Left-anchored
Creating and maintaining entity identifiers is precise, context-sensitive work. When working with an entity, catalogers build detailed knowledge of the entity's attributes and relationships. Most NACO catalogers rely heavily upon left-anchored browse result lists to navigate entity databases. The prospect of keyword-derived search results makes me extremely uneasy.
Timeouts & accuracy
The biggest issues encountered previously when using lookups were that they either never appeared (the search took too long or timed out) or the returned entities were in an order that was unhelpful or illogical (getting East New York when searching New York, and New York, NY not even appearing as an entity). Those two fixes would be incredibly impactful. And thank you for all of your hard work!
Index structure
Please, please keep in mind that there are vocabularies where a singular/plural distinction indicates a real semantic difference. Stemming in searches should be an option, never a requirement.
URIs
As to the ones I left on the left -- URIs are at this point not a real consideration for me. I appreciate their (future) utility, but at the moment I am more concerned with textual strings because that is how I do my work, especially since the NAF, which I work in, doesn't allow non-Latin script variants as authoritative forms.
My current workflow does not include adding URIs so I left those unranked.
The reason linked data elements (URIs, etc.) are lower in importance right now is because we aren't in an environment that can use them so other parts of authority work have higher priority. If we had an environment where they were used they would be much higher in rank.
Survey - Ranking issues from survey structure
I considered nearly all of the stories to be extremely, very, or moderately important. It was impossible to put them in the slightly or not important categories.
Please note that, once I had placed it within a ranking box, I did not rank a user story against other user stories in the same box.
This was a difficult task as I see many of the options as very similar -- I am not sure I see a significant difference between wanting to SEE broader/narrower terms and being able to STEP INTO broader/narrower terms, which may have pushed others 'down' the list.
I'm working late and it's been a long year. Sorry I haven't the energy to add more feedback. Also, being forced to enter at least 3 use case scenarios may have corrupted the value of what I entered in the last three boxes.
Many of the statements were very similar.
It would be good to have some of these two categories, "MODERATELY IMPORTANT" and "SLIGHTLY IMPORTANT", if possible.
Because of the limit of 6 user stories in each ranking box, some stories received a lower ranking.
I didn't want to say anything was unimportant, but it wouldn't take the form until I did.
The maximum capacity criteria for each level of importance for ranking the features presented in this survey is limiting; I ran out of spots available for feature ranking and was unable to include some features I found important. 
The requirement that each category of importance must include 3 entries appears artificial, especially in the case of the final "Not at all important" category. There should be another "Important" category between "Slightly important" and "Not at all important."
Some of the use cases appear to be duplicative, or at least the distinction they are intended to capture is not clear. Examples: c-10 vs c-9; c-13 vs c-15. c-12 appears to describe a characteristic of the user rather than a capability of the system. It's not totally clear if the use cases describe human agents or potentially also machine agents used by cataloguers. The ranking given to some features, e.g. c-7, may differ according to which type of agent is involved. The restriction to maximum and minimum allowable entries in each category makes this exercise somewhat artificial. I left out c-10 and c-15 because they appeared duplicative and I needed to stay within the maximum of 6 entries per category, and I demoted c-6 to a ranking lower than I thought was warranted just to be able to submit a response to the survey.
Survey - UI issues
Even at the widest screen this format was extremely difficult to use. Once I got toward the end I couldn't get responses into the lowest boxes because they were below the screen and it didn't scroll down. Please don't use this format again.
Survey - Clarity of user stories
I do not understand in (c2) what you mean by "label from an external authoritative sources" - property label? Source label? It would help if you clarified what "standard indexing" is in (c5). Are c8 and c10 the same?
There were a lot where I didn't know what they meant and wanted to leave out. But the survey required at least 3 in each box, so I did the best I could.
The details of the search results were difficult to rank without seeing exactly what was meant (like the ranking questions).
The descriptions of some of the features in this survey are unclear. When read by different library participants at my institution, some descriptions meant different things to different people. That is unfortunate, and probably not helpful for those responsible for interpreting the Survey Results.
For c-12 it was unclear whether you were proposing to search the keywords or just see them on retrieval. C-8 seemed to have a strong value judgment statement that people know what they are looking for, which seemed inappropriate in the survey.
C-12 is not a cataloger need. It indicates what the cataloger knows. Can't rank it.