
Sexcellent App: Progressive Sex Education App For Teens

Project Details

Timeline: July 2020 - September 2020

Team: Manuela Odell - Lead UX Researcher & Designer, Phil Gorindo - Product Manager, Scott O’Toole - iOS Developer

Background

The Sexcellent app is a progressive sex education app for teens. At its foundation is a library of factually accurate articles, written by doctors and other experts, about anything and everything related to sex and teen life. The app is available for iOS on the App Store, but we are still testing it.


The Problem

Teens are complex, intellectually curious people, and as such they have questions about the world - unique questions that may not get answered by the current version of our app. How do we create a platform that allows teens to safely ask questions?

Research

Sexcellent is a unique app in that while the target audience is teens, the purchasers of the app are their parents. It was important that we interview both parents and teens to understand their fears, concerns, and motivations around sex education. We also needed to understand where teens were currently going to ask questions, if they were asking them at all.

Research Questions:

  • How are teens currently asking for advice?

  • Where are teens going to seek advice?

  • What is a teen’s relationship with technology?

  • What are parents’ concerns about online spaces for teens?

  • What are parents currently doing to provide sexual education guidance?

  • What, if anything, are parents currently doing to manage their teens’ online usage?

We read through Common Sense Media’s research on teens’ technology usage, as well as the Guttmacher Institute’s research on how teens receive sex education. Here are some of the most important takeaways:

  • Teens predominantly use Instagram, Reddit, YouTube, and TikTok

  • Teens today have grown up in a world in which data breaches are common, so they are careful about their information leaking; they typically have “finstas” or keep their accounts private. They deeply distrust technology despite using it frequently.

  • Parents have not been successful in fully monitoring their teens’ online behavior, and many parents have found hardcore porn in their teens’ browser history.

  • Teens are not particularly satisfied with formal sex education as their only way to learn about certain things, so they reach out to friends’ older siblings or the internet to complement it.

Our research also gave us insight into the types of users who would be using our platform.

Define

We had started to identify the main issues we needed to consider when building this space for teens. It was not going to be easy. Our main goals for this feature were the following:

  • How do we create a platform that allows kids to ask any question in a safe environment?

  • How do we give parents the comfort that their kids are safe?

  • How do we equip kids with age-appropriate information without encouraging risky behavior?

  • How do we satisfy a kid’s curiosity while providing age-appropriate information?

The easiest and most obvious solution was a forum in which teens could ask other teens questions. However, there were some major issues with an open forum and we had to face some big concerns:

  • Trolling - trolling behavior is especially dangerous for teens. How could we protect them?

  • Teens can lack good judgment or self-regulation. How do we make sure the answers being shared are valuable, based in good judgment, and safe? What happens when they’re not?

  • Is there a way to create a safe space where teens can ask questions without their parents seeing and still give parents the comfort that they are safe?

Finding a solution

There is no perfect solution when you are creating a platform where anyone can contribute. There is always risk, there will always be blind spots, and it is not possible to keep every user safe at every moment. We decided on some key features that would solve the majority of the issues we had identified:

  • Channels: Channels create unique spaces for specific topics, allowing teens to target their questions to an audience interested in the same content.

  • Moderators: Like Reddit, we want to rely on our community to keep everyone safe. By identifying users who consistently comment well, we can grant them moderator privileges to flag content, delete content, and alert us to any behavior that violates our policy.

  • Flagging capabilities: Flagging will be extended to everyone. We will operate on a take-down-first premise: regardless of what the flagged content is, we remove it until a review tells us otherwise.

  • Upvoting/downvoting: Upvoting and downvoting posts, comments, and replies ensures that users see the most valuable content first. We will include nudges encouraging users to upvote content that is helpful, factually accurate, and within our policy, and to downvote content that is not.

  • Age tagging: When users create a post, they choose which age group it is appropriate for: younger teens (13-15) or older teens (16-18). This lets parents set age controls that filter certain content. A sketch of how these pieces might fit together follows this list.
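
Here is that sketch in Swift, the app’s implementation language. It is an illustration of the rules above, not our production code; every type and function name in it is hypothetical.

```swift
import Foundation

// Hypothetical age bands matching the tagging scheme described above.
enum AgeGroup: String {
    case younger = "13-15"
    case older = "16-18"
}

struct Post {
    let id: UUID
    let channel: String
    let body: String
    let ageGroup: AgeGroup
    var upvotes = 0
    var downvotes = 0
    var isFlagged = false

    // Net vote score used to surface the most valuable content first.
    var score: Int { upvotes - downvotes }
}

// Take-down-first flagging plus parental age controls: flagged posts are
// hidden until reviewed, and a "younger" setting hides posts tagged for
// older teens. Results are ranked by vote score.
func visiblePosts(_ posts: [Post], allowedUpTo maxGroup: AgeGroup) -> [Post] {
    posts
        .filter { !$0.isFlagged }
        .filter { maxGroup == .older || $0.ageGroup == .younger }
        .sorted { $0.score > $1.score }
}
```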

Design

Once we had an idea of the features we needed to build, I started to design the wireframes and user stories of how these features would be implemented.


Working with our iOS developer, I made sure that the features we were building could be implemented in a timely manner and would work within the constraints of Swift and iOS.

Since our design system was already built, it was easy to create high-fidelity wireframes that we could put in front of users and iterate on.

[High-fidelity wireframes for Sexcellent]



Validating the idea

We are fortunate to have a group of teen interns who work with us, so we got the prototype into their hands for candid feedback.

We had them go through three tasks:

  1. Apply to become a moderator

  2. Create a post and use relevant tags

  3. Report a post

They had a lot to say about how to keep their community safe and what they felt would work and would not work.

[Affinity map: Your Space]

The affinity map highlighted that we were still not providing value to our users while maintaining their security. Our testers weren’t really interested in being moderators; that doesn’t mean no teen would be willing to moderate, but the value wasn’t there. They also didn’t feel we were doing enough to deter trolls.

Going back to basics

At this point it was clear that our first iterations were not cutting it, so I went back to our user personas. Their main need was to feel safe. Were our solutions actually providing that? I decided to look elsewhere for answers.

I took a deeper look at Instagram, Reddit, and Twitter to learn about some of the moderation tools they use, keeping in mind that we don’t have the same resources as these huge companies.

The Shadow Ban

The shadow ban has long been applauded as a powerful tool for keeping users safe by blocking recurrent trolls without letting them know. Trolls can be banned and removed from a platform, but they can easily create another account. Shadow banning instead hides their content from everyone else while still showing it to them, so it is not apparent to them that they have been blocked.
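
As a sketch of the mechanism (with hypothetical names, not any platform’s actual implementation): a shadow-banned author’s posts stay visible to the author but are filtered out of everyone else’s feed.

```swift
import Foundation

struct ForumPost {
    let authorID: UUID
    let body: String
}

// Hypothetical store of shadow-banned user IDs.
var shadowBannedUsers: Set<UUID> = []

// The banned author still sees their own posts, so nothing looks wrong to
// them, but their content never reaches any other viewer.
func feed(for viewerID: UUID, from posts: [ForumPost]) -> [ForumPost] {
    posts.filter { post in
        post.authorID == viewerID || !shadowBannedUsers.contains(post.authorID)
    }
}
```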

Restricting Specific Users

Instagram added “Restrict”, which allows you to limit a specific commenter without them knowing. This is valuable when there are issues with a particular commenter, and it gives the user more control.
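
A simplified sketch of the same idea applied per user (hypothetical names; Instagram’s real behavior also hides restricted comments from other viewers until approved): each user keeps their own restriction list, and a restricted commenter still sees their own comments.

```swift
import Foundation

struct Comment {
    let authorID: UUID
    let text: String
}

// Hypothetical per-user restriction lists: who each viewer has restricted.
var restrictions: [UUID: Set<UUID>] = [:]

// Restricted commenters are never notified; their comments simply stop
// appearing for the viewer who restricted them, while they continue to
// see their own comments as usual.
func comments(visibleTo viewerID: UUID, in thread: [Comment]) -> [Comment] {
    let restricted = restrictions[viewerID] ?? []
    return thread.filter { comment in
        comment.authorID == viewerID || !restricted.contains(comment.authorID)
    }
}
```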

Prototype Take 2

Conclusion

Although the work of keeping our users safe is not done (in my opinion, it never will be), this was an important feature for us to implement. In fact, after speaking to parents, even they felt better about the forum, knowing that their teens had a bit more control.

  • What went wrong and why? From the get-go, it felt like we were over-complicating what ended up being a simple problem. Creating a moderator system would have taken a lot of development work and time, and it was unclear whether it would provide any value to our users. Age-appropriate tagging, while easy to implement, had its own issues, mainly: how do we define what is appropriate for a given age group, especially in a country as culturally and socially divided as the USA, where teens differ widely from region to region? By really understanding the problem, I was able to develop something that actually solved it.

  • What could I have done better? Interviewing both parents and teens surfaced a unique problem: the needs of our primary users (teens) and our secondary users (parents) were in opposition. It confused me greatly until someone pointed out that the parents’ needs are the business’s needs, while the teens’ needs are the users’ needs. Do you want to design for the business or for the user? By focusing on the users’ needs from the beginning, I believe I could have saved time; instead, I tried to create features that would satisfy both parties.

  • What did I learn? There is no one-size-fits-all solution to keeping users safe. What works on Reddit may not work on Instagram or Sexcellent. Understanding the unique needs of your product’s users is key to creating a feature that will work. I kept looking at what similar products were doing instead of focusing on the actual problem; once I did, the answer was right in front of me.
