Critical appraisal of Microsoft Teams

Sam Cummins
Nov 7, 2020

Microsoft Teams (referred to as Teams from here on) is a self-described collaboration hub that provides online meetings, chat, video calls, and other collaboration tools aimed at businesses and education. This blog post is the first of four detailing the stages of an iterative investigation and production process my team and I carried out to research and identify a user experience issue in MS Teams, and to propose an interface, in the form of a paper prototype, that would address and improve that experience.

Heuristic Evaluations

To first evaluate Teams, each member of the group individually carried out a UX heuristic evaluation based on the ten usability heuristics developed by Jakob Nielsen, outlined by Nielsen in this article on the Nielsen Norman Group website.

The purpose of this was to evaluate how well Teams conforms to established best practices, and to see whether it would highlight any problem areas of the application that might help us focus our further investigations. Our team then met to discuss our analyses, and our consensus was:

  • MS Teams largely conforms to most of these heuristics very well.
  • There are some idiosyncrasies in app-level navigation: while not dissimilar in functionality to other apps of this sort (e.g. Slack), Teams lays out its functionality in a more novel way (a possible "recognition rather than recall" violation).
  • There were other small problems too, such as call-to-action buttons not having enough visual distinction, but nothing that was an appropriate size for the scope of this assignment.

My Heuristic Evaluation

User Base Research

We then tried to evaluate public opinion of the platform. Because of the scope and time constraints of this project we could not carry out the full user research cycle we would otherwise have liked to, so to get a high-level overview of the issues Teams users were facing we used a variety of online resources.

These investigations did not really help us highlight any key usability issues: the vast majority of complaints related to bugs and configuration problems rather than usability.

Task Analysis

We then performed several task analyses:

  • To better understand what users are trying to achieve when they use Teams;
  • To identify where there may be excess steps or unoptimized flows;
  • To keep our investigations activity-centered, so that we always remained focused on users and their needs:

… understanding the activities people perform with a set of tools can more favorably influence the design of those tools.
(from “About Face: The Essentials of Interaction Design” by Alan Cooper, Robert Reimann, David Cronin, Christopher Noessel, on Activity-Centered Design).

Again, this work was initially carried out separately, and the different tasks we studied were then demonstrated and discussed the next time we met as a group.

Task analysis of a teacher or lecturer trying to pose a question to a classroom full of students and get a high-level view of the class's response in real time.

Above is a task analysis of a teacher or lecturer trying to pose a question to a classroom full of students and get a high-level view of the class's response in real time. With this we were trying to see what tools Teams provides to support audience engagement, which was one of the tasks we settled on as a team to push forward with. We chose it because it was a common theme in our interviews with teachers, and something we had experienced ourselves as students.

The other task we chose to move forward with in our investigation was booking a meeting in Teams, outlined by the task analysis below. We chose this because it was highlighted as a problem area by the Teams users the group interviewed. It did not become the main focus of our project, however, as we did not think any of our proposed solutions were the right scope for this assignment.

Task analysis of booking a meeting in Teams.

My individual task analyses: one and two.

Reflections

Pros:

  • On a general note, we had concerns about combining and evaluating our research over online meetings, as none of this project could be done face to face, but given a little time I feel it worked very well: the group dynamic was strong and our communication became more and more efficient.
  • By doing the work separately and then sharing our findings, I feel we were less likely to be led in any one direction as a group, and our findings were more varied than if we had done each evaluation together.

Cons:

  • Time constraints led to a reduced research phase.
  • Our online research of forums etc. did not yield much, and we found it difficult to identify common issues across posts that would help us focus our research.

My main takeaway is that this research, although limited by time and access to users, laid a strong foundation for the subsequent steps in this process, and helped keep our later decisions user-centric and focused.
