At Etch, usability is at the core of our work. Recently, we got the opportunity to carry out some usability testing with mobile Figma prototypes. It’s been a while since we tested prototypes directly with users, so this project gave us a chance to revisit the process and experiment with some new tools.
Choosing the Right Tool
For this test, we already had a set of completed prototypes in Figma ready to go, so we just needed a tool to distribute them, monitor user interactions, and collect feedback remotely. A quick industry call-out for recommendations narrowed the hundreds of Google search results to four promising candidates.
We uploaded a sample prototype to each tool to further reduce our options and evaluated them based on usability, setup simplicity, and available features. Here’s what we found:
UXTweak

UXBerry

Lookback

Maze (spoiler alert: our winner!)

Uploading Prototypes to Maze
After selecting our tool, it was time to upload our prototype and create the survey. Maze allows Figma prototypes to be uploaded directly and offers a range of tools to guide users through them.
However, we hit a few challenges when uploading to Maze. Here are the issues we encountered and how we solved them:
Handling Multiple Prototypes
Maze allows only one prototype per project, but we found an easy workaround:

File Size Limitations
Uploading large prototypes can be tricky due to file size constraints within Maze. Here’s how we managed:
- Separate Prototype File: We moved the prototype into its own file, containing only the relevant screens, assets, and components.
(Tip: If your prototype involves variables, which won’t copy across to a new file easily, one trick is to instead duplicate the file and delete everything you don’t need in the new copy.)

- Streamline Layers: Hidden layers and deep-linked components can quietly inflate Figma file sizes. To tackle this, we removed unnecessary layers and detached components that weren't essential to interactions. Figma's new AI features helped automate identifying elements to remove.
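We did this clean-up manually inside Figma, but on larger files a quick audit can help you see how much hidden content has crept in. As a rough sketch, Figma's REST API returns a file's full node tree, and hidden layers carry `visible: false` (the property is omitted for visible nodes); the file key and token below are hypothetical placeholders:

```python
# Sketch: auditing hidden layers in a Figma file via the REST API.
# FILE_KEY and FIGMA_TOKEN are placeholders, not real credentials.
import json
import urllib.request

FIGMA_TOKEN = "your-personal-access-token"  # hypothetical placeholder
FILE_KEY = "your-file-key"                  # hypothetical placeholder


def count_hidden_layers(node: dict) -> int:
    """Recursively count nodes marked invisible.

    Figma omits `visible` when a node is shown, so only an
    explicit `false` value counts as a hidden layer.
    """
    hidden = 0 if node.get("visible", True) else 1
    for child in node.get("children", []):
        hidden += count_hidden_layers(child)
    return hidden


def fetch_document(file_key: str, token: str) -> dict:
    """Fetch the document node tree for a Figma file."""
    req = urllib.request.Request(
        f"https://api.figma.com/v1/files/{file_key}",
        headers={"X-Figma-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["document"]


# Usage (requires a valid token and file key):
#   doc = fetch_document(FILE_KEY, FIGMA_TOKEN)
#   print(f"Hidden layers: {count_hidden_layers(doc)}")
```

This only counts hidden layers; actually removing them is still a manual (or plugin-assisted) job in Figma itself.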
Interaction Challenges
Unfortunately, we found that some of Figma’s features didn’t work or translate well into Maze. For example:
- Broken Prototype Interactions: Maze currently supports only a subset of Figma's prototyping interactions. In our case, we had a nice loading screen to test, which transitioned to the next screen after a delay. Maze doesn't currently support delays, so we had to leave it out of this test.
- Missing Content: Our prototype involved a lot of nested component interactions and variable-dependent display values, and we found that some nested content was missing in Maze. Our solution was to copy nested components locally into the prototype file and simplify the makeup of some interactions where possible.
Recruiting Participants
For this project, we needed a simple test group, so we used Maze's built-in link-sharing feature. Maze also offers participant sourcing directly within the application, though we haven't tested this ourselves.

Analysing the Results
Once we had our results, we were able to analyse the data using Maze’s built-in reporting tools.

However, we also found that a number of these tools were buggy, and we had to manually check results and feedback. This was a bit frustrating, but we were mostly able to get the information we needed.
It's worth noting that the heatmaps were particularly buggy on prototypes using component-level interactions rather than page-level clicks. This is something we would bear in mind when choosing to use Maze again.
The long-form user feedback was particularly useful, as it gave us insight into users' thought processes and the reasoning behind their actions. It also allowed us to cross-reference this feedback with the heatmaps and click tracking for a more complete picture of the user experience.
Our Takeaways
Overall, we really enjoyed the process of testing our prototypes with real users and were able to pull out several insights into their behaviours. We are keen to do more user testing in the future.
However, we realised that Maze had some flaws that made it less than ideal for this project. We would recommend it for smaller or more question-based projects. We're also keen to trial it as a way of collecting in-context client and stakeholder feedback more directly.
If we were to run a similar prototype test again, involving complex Figma interactions with variables and components, we would consider using a different tool or approach to testing.