At Etch, usability is at the core of our work. Recently, we got the opportunity to carry out some usability testing with mobile Figma prototypes. It’s been a while since we tested prototypes directly with users, so this project gave us a chance to revisit the process and experiment with some new tools.
Choosing the Right Tool
For this test, we already had a set of completed prototypes in Figma. We just needed a tool to distribute the prototypes, monitor user interactions, and collect feedback remotely. A quick industry call-out for recommendations narrowed the hundreds of Google search results down to four promising candidates.
To narrow the options further, we uploaded a sample prototype to each tool and evaluated them on usability, setup simplicity, and available features. Here’s what we found:
UXTweak
- Pros: UXTweak has a robust feature set, including direct Figma integration and click tracking.
- Cons: We noticed a few UX issues while building a test survey in UXTweak, which made us concerned about the usability of the final survey we’d be sharing with participants.

UXBerry
- Pros: UXBerry has a great range of features, including native Figma prototype imports and a high-quality end survey.
- Cons: Its pricing plans are capped by the number of participants you can survey, which would heavily restrict our use.

Lookback
- Pros: Lookback focuses on capturing voice and video recordings of users, allowing for more in-depth product feedback and insight into user intentions.
- Cons: Lookback’s focus on recordings was less suitable for our needs this time. In the past, we’ve found user recordings can encourage performative behaviours, impacting the authenticity of results. That said, we’d consider Lookback for other use cases in the future.

Maze (Spoiler Alert: Our Winner!)
- Pros: Maze emerged as the top choice for its seamless integration with Figma and its usability-focused features. It’s always a good sign when a usability testing tool is easy to use!
- Cons: Maze operates on a freemium model, which meant that several of the features we would have liked to try were unfortunately locked behind a paywall.

Uploading Prototypes to Maze
With our tool selected, it was time to upload our prototype and create the survey. Maze allows Figma prototypes to be uploaded directly and offers a range of tools to guide users through them.
However, we ran into a few challenges when uploading to Maze. Here are some of the issues we encountered and how we solved them:
Handling Multiple Prototypes
Maze allows only one prototype per project, but we found an easy workaround:
We created a hidden ‘menu’ screen that linked to all of our prototypes. No participant ever sees this screen, but it let us test multiple prototypes under a single Maze project. Once the prototype was uploaded to Maze, we simply selected the ‘real’ starting screen for each test within our survey.

File Size Limitations
Uploading large prototypes can be tricky due to file size constraints within Maze. Here’s how we managed:
- Separate Prototype File: We moved the prototype into its own file, containing only the relevant screens, assets, and components.
(Tip: Figma variables won’t copy across to a new file easily. One workaround is to duplicate the whole file instead, then delete everything you don’t need from the copy.)

- Compress Images: Our prototypes included a few larger images that needed to be compressed. We found the Downsize plugin great for this, as it allowed us to work directly within Figma.
- Streamline Layers: Hidden layers and deep-linked components can quietly inflate Figma file sizes. To tackle this, we removed unnecessary layers and detached components that weren’t essential to interactions. Figma’s newer AI features were useful for automating the identification of elements to remove, and a small script can help too (see the sketch below).
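For the layer clean-up, here’s a minimal sketch using Figma’s Plugin API that surfaces hidden layers on the current page. It assumes you run it through a scripting plugin such as Scripter (or a plugin of your own), and it only finds the layers; deciding what’s actually safe to delete is still a manual judgement call.

```ts
// Sketch: surface hidden layers on the current page so they can be
// reviewed (and deleted) by hand. Uses only the standard Figma Plugin
// API; run it via a scripting plugin such as Scripter.
const hidden = figma.currentPage.findAll(node => !node.visible);

console.log(`Found ${hidden.length} hidden layer(s):`);
for (const node of hidden) {
  console.log(`- ${node.name} (${node.type})`);
}

// Select and zoom to the hidden layers for easy manual review.
figma.currentPage.selection = hidden;
if (hidden.length > 0) {
  figma.viewport.scrollAndZoomIntoView(hidden);
}
```

We’d still review each layer before deleting, since some hidden layers belong to other component variants or interaction states.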
Interaction Challenges
Unfortunately, we found that some of Figma’s features didn’t work or translate well into Maze. For example:
- Broken Prototype Interactions: Maze currently supports only a subset of Figma’s prototyping interactions. In our case, we had a nice loading screen that launched into the next screen after a delay. Maze doesn’t currently support delays, so we had to leave it out of this test.
- Missing Content: Our prototype involved a lot of nested component interactions and variable-dependent display values, and we found that some nested content went missing within Maze. Our solution was to copy nested components locally into the prototype file and simplify the make-up of some interactions where possible.
Recruiting Participants
For this project, we needed a simple test group, so we used Maze’s built-in link-sharing feature. They also offer participant sourcing directly within the application, though we haven’t tested this out ourselves.
While Maze’s free plan doesn’t support its built-in screening questions, our screening requirements were luckily fairly minimal. Instead, we improvised by adding a yes/no question in Maze’s survey builder and asking participants to honestly exit the survey if they didn’t meet the criteria.

Analysing the Results
Once we had our results, we were able to analyse the data using Maze’s built-in reporting tools.

Maze offers a number of pre-built reports and insights, including heatmaps, click tracking, and user feedback. We found these features useful for quickly identifying areas of interest and concern.
However, we also found that a number of these reports were buggy, and we had to check the results and feedback manually. This was a bit frustrating, but we were mostly able to get the information we needed.
It’s worth noting that the heatmaps were particularly buggy on prototypes using component-level interactions rather than page-level clicks. This is something we’d bear in mind when choosing to use Maze again.
The long-form user feedback was particularly useful, as it gave us insight into users’ thought processes and the reasoning behind their actions. It also allowed us to cross-reference this feedback with the heatmaps and click tracking to get a more complete picture of the user experience.
Our Takeaways
Overall, we really enjoyed the process of testing our prototypes with real users and were able to pull out several insights into their behaviours. We are keen to do more user testing in the future.
However, we realised that Maze had some flaws which made it less than ideal for this project. We’d recommend it for smaller or more question-based projects. We’re also keen to trial it as a way of collecting in-context client and stakeholder feedback more directly.
If we were to run a similar test again on a prototype with complex Figma interactions, variables, and components, we would consider using a different tool or approach to testing.