
Advanced Search Usability Study
Conduct Expert Review and Usability Testing for INsource Advanced Search Website Tool
Client: Wolters Kluwer | Spring 2020
My role: UX Research, Expert Review, Usability Testing, Meeting & Process Facilitation
Overview
Wolters Kluwer is a global provider of professional information, software solutions, and services for clinicians, accountants, lawyers, and tax, finance, audit, risk, compliance and regulatory sectors.
The company’s goal is to provide timely, actionable insights that help customers make informed decisions.
Project Scope & Goals:
Focus on Advanced Search interface within the context of primary task flow and scenario
Verify that recommendations from the previous usability study have been implemented
Assess the overall effectiveness of ‘Advanced Search’
Identify obstacles to completing successful searches
Define key areas to continue conducting usability tests
Key Questions
How learnable is the system for a new user?
How easily can users perform basic search functions?
How closely does the flow of the software reflect users’ mental model?
How easily do users find the tools or options they want?
What obstacles prevent users from completing their tasks?
What indication do users have that their search results are accurate?
Process
The project consisted of two main parts: Expert Review and Usability Testing.
Expert Review
Each team member individually assessed the usability of each step in the NILS INsource task flow, guided by our usability heuristics.
We then individually ranked the priority of the usability findings based on the likelihood and impact of each finding.
We synthesized findings based on themes, determining the highest priority and providing recommendations. Our team identified a total of 142 findings:
Global Findings (System-wide)
Task Specific Findings
Usability Findings Priority Airtable
We used Jakob Nielsen’s 10 Usability Heuristics as the base for our Expert Review
Sample Expert Review Findings
Expert Review Global Findings:
Theme: Make Accessibility a Priority
Finding: Lack of contrast between navigational elements on the page could create a disorienting experience for users with color blindness.
Recommendation: Heighten contrast and create other indicators that rely on form/state change to communicate user’s location within the site.
Theme: Advanced Search Feature Difficult to Locate within NILS INsource Ecosystem
Finding: The ‘Advanced Search’ interface is meant to replace ‘basic search’ in the future. It currently sits under ‘Hub’, making it feel remote and removed from the tool’s main interaction.
Recommendation: Facilitate easier access to the main purpose of the tool: make search the first page users see.
Expert Review Task Specific Findings (Partial):
Theme: Difficult to Determine Relevance of Content in View
Finding: The layout of the results page does not provide significant context to the user to determine the relevance of the results without diving into content individually.
Recommendation: Shift the page format to provide more information on screen to aid in determining content relevance.
Theme: Further Refining of Results Not Consistent Throughout the Experience
Finding: The filter interface is inconsistent, requiring the user to begin a new search if results were not as expected.
Recommendation: The filter interface should mirror the options in initial search, and allow the user to filter dynamically from the results page for further refinement.
Usability Testing
1. Usability Test Participant Recruitment
Recruitment Criteria:
Active insurance compliance professionals/product developers
Min. of 1 year experience in insurance compliance/product development
Access to the NILS INsource product
Ideally a current customer
Ability to share screen and provide access to their webcam
2. Usability Testing Methodologies and Plan
“Within-subjects”: Each participant tests the interface for critical tasks within a scenario
Remote (Zoom Video Call), task-based tests
Each participant was presented with a specific scenario and tasks based on business and user goals
Each usability testing session had 1 moderator and 2 observers/note-takers
3. Set up Severity Scale for Quantitative Analysis
Upon completion of each task, we asked users to rank the following on a scale of 1 - 5 based on their experience:
a) Ease of Use (1 = very difficult, 5 = very easy)
b) Level of Satisfaction (1 = very dissatisfied, 5 = very satisfied)
Based on the average across users, we provided a numerical ranking for both measurements, as well as a severity rating ranging from ‘critical’ to ‘low’. We recommend the severity rating be used to determine which tasks to prioritize.
Severity Scale
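The averaging-and-mapping step above can be sketched in a few lines of Python. This is a hypothetical illustration only: the threshold values and the `severity` helper are assumptions for demonstration, not the study's actual cutoffs.

```python
# Hypothetical sketch of the severity mapping described above.
# Threshold values are illustrative assumptions, not the study's actual cutoffs.

def average(scores):
    """Mean of the 1-5 ratings collected from participants for one task."""
    return sum(scores) / len(scores)

def severity(avg_score):
    """Map an average 1-5 rating to a severity label (assumed thresholds)."""
    if avg_score < 2.0:
        return "critical"
    elif avg_score < 3.0:
        return "high"
    elif avg_score < 4.0:
        return "medium"
    return "low"

# Example: three participants rate Ease of Use for one task
ease_scores = [2, 3, 2]
avg = average(ease_scores)
print(round(avg, 2), severity(avg))  # → 2.33 high
```

With only three participants per task, the average is a rough prioritization signal rather than a statistically robust measure, which is why the study pairs it with qualitative findings.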
4. Usability Test Task Flow
We asked the participants to conduct the advanced search with this scenario in mind:
Scenario: “You are a compliance professional at XYZ Car Insurance Company. You have been asked to identify the maximum amount of time that an insurance company can take to process a car accident claim for a consumer in Illinois and Indiana.”
5. Synthesized findings for each task:
Overall Findings
Task Specific Findings
Usability Testing Overall Findings:
Participants generally rated their ease of use and satisfaction with the system highly across all tasks
Regular users displayed great dexterity with the system, especially for tasks related to typical job functions
Issues of discoverability across several product functions
System feedback absent or difficult to perceive
Task Specific Findings (Partial):
There are a substantial number of filter and index term options to navigate
The sheer number of filter options overwhelmed new users, and expert users found it frustrating to keep re-entering them.
Users struggled to interpret the usefulness of results
The results page does not provide enough meaningful information for users to judge the relevance of individual results
Refining a search can mean starting over
The system provides unclear guidance, leaving users unsure how to refine their search
Sample Usability Testing Findings
Recommendations
Make it easy for users to understand whether they have completed a task successfully or not
All three participants experienced instances where they were not confident that they had completed a task, such as creating a filter, or exporting documents. Provide users system feedback when they complete a task.
Match terminology to the language industry professionals use
All users had difficulty identifying the right search terms, or using the right search terms to either construct a search or refine search results.
Make the system’s constraints clearer to users for critical tasks
The majority of users constructed searches that yielded no results, yet received no indication of what they had done incorrectly, forcing them to discover their errors through trial and error.
Integrate user support
The majority of participants were unclear on how to complete a task within the scenario and did not have a clearly distinguishable mechanism for getting support.
Make useful features visible to users
A majority of users noted that they had never encountered certain features used in the tasks (Cite Watch, Export Documents), despite being long-time users.
Results: Client Stakeholders Love Our Findings!
Our team presented to 10 client leaders and received enthusiastic feedback. The client team is currently implementing our recommendations ahead of their next release.
We are proud of the work!