Jakob Nielsen (1997) asserts that the only way to validate a design's usability is to watch one user at a time use the product, and this is where we focused our attention when testing our prototype.
We developed our test plan, creating activities that encompassed the main journeys through the application. Accompanying questionnaires captured the participants' thoughts after each activity, and before and after the entire session. These are outlined in our test script, which was essential for ensuring consistency between our tests and comparable results. Because the number of test participants was small (6), the insights gleaned were more qualitative than quantitative: the free-text answers in the surveys, and the notes taken by the interview observer documenting what participants said while using the prototype.
Due to issues testing the voice functionality of our prototype over online conference calls, we conducted most of our test sessions in person. The exception was one user, recruited through our user research questionnaire aimed at drivers with a wheelchair accessible parking permit; this session was conducted online, which limited the effectiveness of the voice functionality.
The first activity was designed to bring the user through one of the primary journeys of the app:
You have an appointment in Rathgar, Co. Dublin, and have decided you will drive there. You downloaded a new app to your phone called ParkPal and decide to give this a try. It is your first time using the app. Use the app to find a parking space in Rathgar.
It was designed to see whether participants would elect to skip or read the onboarding section, and whether the map was intuitive and easy to understand. Here is one participant's experience charted for this task (results of the ease-of-use questionnaire):
Issues with the prototype were the main concerns for users here. Not all parking spots and parking spot clusters were implemented as clickable items, and the search bar functionality was not implemented. This led to us prompting users to look for another way to complete the task, steering them back on track towards either the voice assistant or the map functionality.
Having every map item be a clickable element would have exponentially increased the complexity of the prototype, and was not a viable option within our timeframe. Not implementing the map search, however, even with only a single clickable search option, was a mistake. It detracted from the users' experience of the app, and made their observations less about the design elements we wanted to validate and more about the prototype itself. I attribute this oversight to time constraints and to scope creep: we took it upon ourselves to redesign the whole experience of the app without due regard for how many permutations and potential paths there were.
Our results were generally more positive, though. We grouped the qualitative answers and, while limitations of the prototype were raised by a few users, most of the design-related issues were relatively minor.
Our SUS exit questionnaire showed that most people found the app relatively simple, though some participants were concerned that the functionality wasn't always easy to find.
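For reference, SUS results are conventionally reduced to a single 0–100 score per participant: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5. A minimal sketch of that scoring, assuming ten 1–5 Likert responses per participant (the response values below are illustrative, not our actual data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A hypothetical participant answering neutrally (3) on every item
# lands at the midpoint of the scale.
print(sus_score([3] * 10))  # 50.0
```

Averaging these per-participant scores gives the overall SUS figure; with only six participants, we treated it as a rough indicator rather than a statistically robust measure.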
Were we to continue with this project, we would add a label to the 'more' button on the home screen rather than relying solely on an icon that some people didn't recognise, and make it clearer that the numbers on parking spot clusters refer to the number of spaces there.
Nielsen, J. (1997). The Use and Misuse of Focus Groups.