Practices
Testing a Digital Platform for Food Transformation: What We Learned
By EPFL+ECAL Lab
When the Digital Hub Experience reached its first complete version, we needed to understand whether it actually worked for the people it was designed to serve. How do you evaluate a platform meant to connect diverse users - chefs, citizens, policymakers - around sustainable eating?
We designed a formal usability study combining multiple methods: structured task-based testing with 21 participants, standardized UX questionnaires, and qualitative interviews. The approach balanced rigor with flexibility, which is essential when testing a platform that operates across different cultural contexts.
What worked in our testing approach: We used real-world scenarios rather than abstract tasks. Instead of "find a recipe," we asked: "You've just moved to Göteborg and want to explore the local SWITCH hub." This scenario-based testing revealed navigation issues we wouldn't have caught otherwise - participants struggled to distinguish between the homepage and hub-specific pages, finding the dual structure confusing.
Combining standardized metrics (UEQ for user experience, SUS for usability, VisAWI for visual appeal) with open-ended interviews proved invaluable. The quantitative scores told us we were on the right track - a SUS score of 77.4 falls in the "good" range. But qualitative feedback revealed the nuances: while users loved the environmental data in the Food Index, they couldn't interpret the numbers without reference scales.
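For readers unfamiliar with how a figure like 77.4 comes about, here is a minimal Python sketch of the standard SUS scoring procedure (Brooke, 1996): ten 5-point Likert items, where odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the raw sum is multiplied by 2.5 to land on a 0-100 scale. The participant responses below are hypothetical illustrations, not our study data.

```python
def sus_score(responses: list[int]) -> float:
    """Compute one participant's SUS score from ten Likert items (1-5)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 raw sum to 0-100

# Hypothetical questionnaires from three participants.
participants = [
    [4, 2, 5, 1, 4, 2, 4, 2, 5, 2],
    [5, 1, 4, 2, 4, 1, 5, 2, 4, 1],
    [3, 2, 4, 2, 3, 3, 4, 2, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Per-participant: {scores}, study mean: {sum(scores) / len(scores):.1f}")
```

A study-level SUS score is simply the mean of per-participant scores; on Bangor et al.'s adjective scale, a mean in the mid-to-high seventies sits in the "good" band, which is how we read our 77.4.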
We also included a free exploration phase after structured tasks. This is where participants revealed what genuinely interested them, and what confused them. Many gravitated toward recipes but couldn't understand the "Switch Verdict" scoring, asking: "Is this about sustainability, health, or both?"
Key lesson: Test what you can actually test. We deferred testing the AI chatbot and dynamic data features until they were fully functional, avoiding misleading feedback on incomplete functionality. However, we still included them as static elements to gauge user interest.
The study's real value wasn't just validation - it surfaced specific, actionable issues: navigation confusion, unclear scoring systems, missing search functionality. These concrete findings directly shaped the redesign that produced this very app, now fully functional.
Takeaway for others: Multi-method testing with real scenarios and mixed quantitative-qualitative data can give both the validation scores you need for stakeholders and the specific insights you need to actually improve the platform.


