Reflections from executing a Design Sprint
How we (partially) applied the Google Design Sprint Kit in a project, and what we learned along the way.
We're experimenting hard at Grayhat right now to figure out a design flow that actually works.
We're throwing everything at the wall, trying out all the best practices, using both physical tools (like, chart papers on walls, stickies, real Kanbans, paper and pens... the works) and digital ones (Figma, obviously 💃), and seeing how far we can push AI to help us out.
We're not starting from zero, though. We're taking a lot of inspiration from the Google Design Sprint Kit and mixing it with solid SDLC/GDLC principles. It's important that we learn the basics first and don't reinvent the wheel.
We started with a startup called Camply (now Enrichly). We were in the "Planning Phase," just trying to figure out what we were even building, who it was for, and what problem it was going to solve. Camply is a web app for finding and signing up for camps, and the whole goal was to make the experience super smooth and hassle-free for customers.
Our main focus was on fixing two big pain points for users. To do that, we had to dive into research - checking out competitor websites like Outschool and Allschool, you know, the ones our client used and recommended. The cool part was that our client was herself a user, so she knew exactly where all the problems were on these other sites. She was our Onsite Customer.
I moodboarded all these competitor sites in Figma. Compared their features, their flows, what worked and what totally didn't. From talking to our client and looking at the pain points, we realized we needed a design that was just… intuitive. Something that guides the user on its own, you know? Easy to understand and just fixes their problems.
We had a bunch of features to think about - camp listings, search and filters, authentication, all that. But with the budget and timeline, we had to pick what would have the biggest impact. We narrowed it down to the search mechanism and really put our heads together to design the best search experience. We even looked at sites like Amazon because, let's be honest, their search is just good, and users (especially moms!) are already familiar with its patterns.
Then came the system design phase. The team set milestones and planned out the Camply flow for the POC. We had a ton of collaboration sessions just to make sure the design and the tech architecture were compatible and could actually solve the business problems. We did the usual stuff - daily standups, cross-disciplinary meetings, weekly sprints, design handoffs.
During the design phase, I made user personas for the "parent" and the "camp manager." Started with rough sketches, got initial client feedback, and discussed everything with the team. I had to constantly update the system architect on every little design change so we could track compatibility. Every design decision I made, I had to be sure it could actually be built well and on time.
After the initial sketches (which we did with the system architect using paper and colored pens, lol), I turned them into wireframes in Figma and got the client's feedback. Once the flows were locked, I moved to high-fidelity prototypes. I used the Material Design library because it's neat, has great guidelines, and makes everything consistent. Client feedback was a constant loop until everything was finalized.
Once the functional Figma prototypes were ready, we shipped them to the business team. They did the initial user surveys and interviews to test the designs. That led us into the development phase.
When development started and the first version of the app went live, I was deep in design QA. I had to make sure the experience was smooth, the flows were accurate, and all the features worked as designed. We recorded all the issues, prioritized them, and got them fixed by the devs systematically.
After version one was out, we planned to test it with real users: usability testing.
Usability testing is ideal to:
- Validate your hypothesis
- Fix issues before the big launch
- See how users actually use your product vs. a competitor's
You should do it at every step. Depending on the stage, you ask different questions. There are three main times to ask users questions: before the test (to screen them), during the test (to answer your research questions), and after the test (for overall feedback).
The most important thing? You gotta have the correct audience. Not everyone is your user. So you do a quick screening test first with questions like:
- How does your everyday life look?
- How often do you use products like this?
- What do you identify as? etc.
Then, during the test, your questions shouldn't lead the user. Let them think aloud. You just watch them interact, notice their body language, and ask things like:
- I noticed you did [this action]. Why was that?
- What features do you find the most valuable?
- Can you try performing [this specific action]?
After the test, you ask more open questions:
- How was your overall experience?
- How likely are you to use this product?
And as a designer, you HAVE to note the "critical incidents."
- You expected the user to do one thing, but they did something totally different.
- They seemed frustrated, or happy, or confused.
Then you assess which of those moments were successes and which were failures.
Tools like website heatmaps also help a ton. They show you the hottest areas on your site - where people click, scroll, move. It helps you understand where to focus on improving the experience.
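If you're curious what a heatmap tool is actually collecting under the hood, here's a minimal sketch of capturing click positions in the browser. We used an off-the-shelf tool rather than rolling our own, so treat this as an illustration only; the `/api/heatmap` endpoint is a made-up name.

```typescript
// A minimal sketch of collecting click positions for a simple heatmap.
// Illustrative only: real heatmap tools also capture scroll depth, mouse movement, etc.

type ClickPoint = { x: number; y: number; path: string; timestamp: number };

const clicks: ClickPoint[] = [];

document.addEventListener("click", (event: MouseEvent) => {
  clicks.push({
    x: event.pageX,                 // position on the full page, not just the viewport
    y: event.pageY,
    path: window.location.pathname, // which page the click happened on
    timestamp: Date.now(),
  });
});

// Every 10 seconds, drain the buffer and send it to an analytics endpoint
// ("/api/heatmap" is a hypothetical URL, not a real service).
setInterval(() => {
  if (clicks.length === 0) return;
  navigator.sendBeacon("/api/heatmap", JSON.stringify(clicks.splice(0)));
}, 10_000);
```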
And lastly, you HAVE to document all the test feedback properly. Spreadsheets, screen recordings, audio notes… everything. You'll be amazed at what you notice when you rewatch it.
So what do you do after usability testing?
You analyze all that data. Look for patterns and common pain points. You need to look at both the numbers (quantitative data, like task completion rates) and the observations (qualitative data). Then you make informed decisions.
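To make the quantitative side concrete, here's a tiny sketch of turning recorded test sessions into a task completion rate. The data shape and the example results are made up for illustration.

```typescript
// A tiny sketch of computing a task completion rate from usability-test observations.
// The record shape and the sample data below are invented for illustration.

type TaskResult = {
  participant: string;
  task: string;        // e.g. "search for a camp"
  completed: boolean;
  timeSeconds: number;
};

function completionRate(results: TaskResult[], task: string): number {
  const attempts = results.filter((r) => r.task === task);
  if (attempts.length === 0) return 0;
  const completed = attempts.filter((r) => r.completed).length;
  return completed / attempts.length;
}

const results: TaskResult[] = [
  { participant: "P1", task: "search for a camp", completed: true, timeSeconds: 42 },
  { participant: "P2", task: "search for a camp", completed: false, timeSeconds: 95 },
  { participant: "P3", task: "search for a camp", completed: true, timeSeconds: 60 },
];

console.log(completionRate(results, "search for a camp")); // ≈ 0.67, i.e. 2 of 3 participants
```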
But listen, NOT EVERY ISSUE IS IMPORTANT. You have to prioritize. Think about how often a problem happens, how much it messes up the user experience, and the potential ROI for fixing it.
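One simple way to do that prioritization is to score each issue by how often it came up, how badly it hurt the experience, and how much it would cost to fix. The formula and the example issues below are just one reasonable option for illustration, not our actual findings or a standard method.

```typescript
// A rough way to rank usability issues instead of trying to fix everything at once.
// The scoring formula and the example issues are illustrative assumptions.

type Issue = {
  title: string;
  frequency: number;        // how many participants hit it
  severity: 1 | 2 | 3 | 4;  // 1 = cosmetic annoyance, 4 = blocks the task entirely
  effortDays: number;       // rough dev estimate
};

// Higher score = fix sooner: common, severe, and cheap-to-fix problems float to the top.
const priority = (i: Issue): number => (i.frequency * i.severity) / i.effortDays;

const issues: Issue[] = [
  { title: "Filters reset after every search", frequency: 5, severity: 3, effortDays: 1 },
  { title: "Sign-up form label typo", frequency: 2, severity: 1, effortDays: 0.5 },
  { title: "Camp dates unclear on listing card", frequency: 4, severity: 2, effortDays: 2 },
];

issues
  .sort((a, b) => priority(b) - priority(a))
  .forEach((i) => console.log(priority(i).toFixed(1), i.title));
```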
Collaboration between design and dev is crucial here. You gotta discuss possible solutions together.
And then you make iterative changes. Don't try to fix everything at once.
After you've made the initial changes, you go for another round of testing. See how the experience has improved and track the data.
So what could we have improved?
In our case, we did apply the best practices for usability testing. We went through all three stages, but we didn't record the feedback in a systematic way, which led us to assume we had to change everything a user didn't like.
We really needed to group the common user issues and prioritize them properly. And you can't forget that as you move from a POC to an MVP, the client always has new feature demands.
If we had recorded what needed to be worked on after the usability testing, and also tracked the new features, in a proper, systematic way… it would have led to a much happier client and way more realistic deadlines.
We still have a long way to go... We're trying to incorporate technologies like Dev Mode, AI, scripting and more. Also, our Design offering has expanded beyond just UI/UX: We now also focus on Graphic Design, Character Design, Animation and more (coming soon - 3D is exciting!).
I'd love to hear what your design process looks like, and how you think we can improve.
Some resources
- https://designsprintkit.withgoogle.com/
- https://designlab.com/blog/guide-to-usability-testing-methods
- https://designlab.com/blog/what-is-the-ux-design-process
- https://www.lyssna.com/guides/usability-testing-guide/usability-testing-methods/
- https://www.lyssna.com/blog/website-heatmaps/
- https://www.linkedin.com/pulse/next-steps-after-conducting-ux-testing-your/