Equalista’s onboarding course

The opportunity

  • The team behind a bootstrapped education app (Equalista) were on a mission to educate users about gender inequality via bite-sized courses.

  • The interface copy and course content were written by passionate academics. But never tested with users.

  • They reached out for content design help.

Scope the work

  • I paired with a friend and UX writer (because two brains 🧠 are better than one!) to work with the client and break their content needs into chunkable slices.

  • It was pro bono work and we were collaborating across time zones, so we wanted to set clear expectations.

The sliced-up briefs

Brief 1: User-test the intro course to propose a revised flow.

Brief 2: Review the home screen and propose content revisions.

Brief 3: Review the glossary push notification.

Brief 4: Prototype and test new paywall copy.

Spotlight on brief 1

Brief 1: User-test the intro course to propose a revised flow.

(We’ll focus on Brief 1 for now).

  • A compulsory intro course that users must complete to unlock future (paid) content.

  • The client wanted it to give users the basics, and to explain why users were asked to pay.

  • It needed to be engaging enough for users to finish it, and leave them wanting to try the next session.

  • It was 64 screens in total.

Screenshot of the Equalista module 'Introduction to Equalista'. It tells users what to expect in the module and promises to answer common questions about the course. There is a button where users can 'start session'.

Talk to (target) users

🍋 Work with what you’ve got

The client couldn’t access analytics on completion rates or behaviour from early sign-ups to help us understand drop-off. So we made the call to run user tests to inform content revisions. The client was also hesitant to contact subscribers, so we recruited test users who fit their user personas. In an ideal world we’d have used the real data, but we worked with what we had.

We pulled together testing goals and a discussion guide for some user research (1:1 user interviews and a moderated card sort/ranking activity).

We wanted to know:

  • How did users feel during the course? What sparked joy, felt new, or important?

  • What were their key takeaways? Did they match what the client wanted users to walk away with?

  • Did users feel motivated to try another course?

Our client also wanted to know:

  • Is there too much information? Is the course too long?

  • Did users recall the key concept of ‘intersectionality’?

  • Would they be motivated to learn more? Why, why not?

  • Do they understand why we ask them to pay? How do they feel about it?

Equalista research plan. We split it into 5 stages: 1. Pre-knowledge test. 2. Test course flow. 3. Post-course questions. 4. Card sort. 5. Closing remarks.

User testing plan.

Card sort (of sorts)

  • We took the 64 screens and summarised them into 16 key messages* (with the client’s OK, of course).

  • We ran a moderated activity in Miro and asked users to assign each message to one of four buckets, talking out loud about their reasoning.

Users assigned each key message to one of four buckets:

  1. I need to know this is the intro course (keep it).

  2. I don’t need to know this in the intro course (but it’s important, so tell me some other way).

  3. I don’t need to know this (at all).

  4. Undecided.

*This was pre-ChatGPT. I’d love to see whether that would speed this up.

A moderated card sort (of sorts) to understand users’ take on the content.

Deliver

We took the user feedback and proposed a revised (user-informed) onboarding course that reduced the number of screens from 64 to 15.


That’s roughly 77% fewer screens for the user to tap through, while still leaving them with the key takeaways the client wanted (nice!).
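A quick sanity check on that figure (assuming one tap per screen to move through the course): (64 − 15) ÷ 64 = 49 ÷ 64 ≈ 0.77, or roughly 77% fewer screens.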

The new flow presented to the client was 15 screens. The proposed flow starts with 'Our story', followed by 'Gender and bias', 'Credible sources', 'Why pay', 'Meet the characters in the app', 'Data privacy' and a final 'Thank you' screen.

Recommended new flow for the client.

Result

To show how we cut down the content, we contrast 64 pink radishes (the original number of screens) with just 15 radishes (the revised number of screens). "From 64 screens to 15 screens, still a radical radish!"

Fewer screens for the user to move through, but the same educational takeaways. (FYI: the app featured a radish icon.)

The client took the recommendations, but work unfortunately paused.

To measure impact, I would look for:

  • 📈 Increased course completion rate among new users.

  • 📈 Increase in the number of new users starting the next course.

For ongoing improvements, I would:

  • 📈 Explore ways to capture user behaviour data and understand drop-off points: a quicker, more sustainable way to inform future changes.

  • 📈 Explore ways to capture feedback from new users who drop out before completing the intro course.

Next

Verification flow