Quora, widely recognized for answering users' specific questions, also lets users explore their intellectual curiosities through a personalized feed. Quora infers each user's topics and interests to generate the feed, supplemented with followed content (users, topics, or communities). A successful feed experience, one that keeps users engaged across long sessions, relies on 1) Quora's machine learning (ML) models delivering relevant content and 2) interfaces that let users curate content by providing feedback for the ML models to use. Together, these elements lead to serendipitous discoveries that align with users' personal interests, maximizing engagement and retention and driving revenue growth.
Design lead, experimentation, A/B testing, AI/ML, strategy, systems design, data analysis
1 product manager, 3-4 engineers, 1-2 data scientists, 1 user researcher
Mar 2021 – Mar 2023
Quora's revenue relies heavily on feed engagement, which requires delivering highly relevant, personalized content to each user. To achieve this, ML algorithms need explicit and implicit feedback from users to understand their preferences and tailor the feed accordingly.
Increase feed engagement and revenue by empowering users to personalize their feeds, maximizing feedback collection through iterative weekly experiments.
Machine learning is a method of making predictions based on previous data. Feed systems have endless content options, so the algorithm curates a subset for each user based on their likes and dislikes. This "training data" is used by an ML model to predict the best option to show. Feedback on each prediction refines the training data, creating a continuous cycle that picks the next prediction and generates an infinite, personalized feed.
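The predict-feedback-retrain cycle described above can be sketched in a few lines. This is a minimal illustration, not Quora's actual model: the topics, the simulated user, and the weight-update rule are all assumptions made for clarity.

```python
# Minimal sketch of the feed loop: serve the top-scoring item,
# collect feedback, fold it back into the user's topic weights.
# All names and data here are illustrative assumptions.

def score(item, weights):
    """Score a candidate by summing the user's learned topic weights."""
    return sum(weights.get(t, 0.0) for t in item["topics"])

def feed_loop(candidates, user_reaction, steps=5, lr=0.5):
    """Run the predict -> feedback -> update cycle for a few steps."""
    weights = {}
    for _ in range(steps):
        best = max(candidates, key=lambda c: score(c, weights))
        feedback = user_reaction(best)       # +1 = upvote, -1 = downvote/hide
        for t in best["topics"]:             # refine the training signal
            weights[t] = weights.get(t, 0.0) + lr * feedback
    return weights

# Simulated user who upvotes "science" content and downvotes everything else.
likes_science = lambda item: 1 if "science" in item["topics"] else -1
stories = [{"topics": ["science"]}, {"topics": ["sports"]}]
learned = feed_loop(stories, likes_science)
```

Each pass through the loop is one turn of the cycle: the model's best guess is shown, the user's reaction becomes new training signal, and the next guess improves.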
For each prediction (i.e. piece of content) shown, the ML model needs to learn the user’s reaction to it. Design enables ML to provide relevant content by building interfaces to accurately capture and interpret user preferences and intent through feedback known as signals.
Each user action provides a different feedback signal. These actions are categorized to align the UI with how the ML models interpret them: whether the feedback is positive or negative, public or private, how it affects the user or the system, and which feed content types it applies to.
Feedback taxonomy chart
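One way to picture the taxonomy is as a small schema, where each action carries the attributes listed above. The specific actions and attribute values below are examples for illustration, not Quora's internal schema.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of a feedback taxonomy; actions and their
# attributes are assumed examples, not Quora's actual definitions.

class Sentiment(Enum):
    POSITIVE = 1
    NEGATIVE = -1

@dataclass(frozen=True)
class Signal:
    action: str            # the UI action the user took
    sentiment: Sentiment   # positive or negative feedback
    public: bool           # visible to other users, or private
    content_types: tuple   # feed content types the action applies to

TAXONOMY = [
    Signal("upvote",   Sentiment.POSITIVE, public=True,  content_types=("answer", "post")),
    Signal("downvote", Sentiment.NEGATIVE, public=False, content_types=("answer",)),
    Signal("hide",     Sentiment.NEGATIVE, public=False, content_types=("answer", "post", "question")),
]

# e.g. which actions give the model private negative signal
negative_private = [s.action for s in TAXONOMY
                    if s.sentiment is Sentiment.NEGATIVE and not s.public]
```

Grouping actions this way keeps the UI and the models in agreement about what each tap means.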
Feed design focused on optimizing for a system, using user research to guide what levers to pull. Feed metrics are highly sensitive, so multiple experiments were conducted to A/B test incremental changes. Here are the final versions shipped as a result.
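Because feed metrics are so sensitive, each incremental change was judged against a control group. A standard way to evaluate such an experiment is a two-proportion z-test; the sketch below uses fabricated counts purely for illustration (the real experiments and metrics are Quora's own).

```python
from math import sqrt, erf

# Hedged sketch: comparing an engagement rate between control (A) and
# variant (B) with a two-proportion z-test. All counts are made up.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for rates conv/n."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Fabricated example: 4.8% vs 5.6% feedback rate on 10k users each.
z, p = two_proportion_z(conv_a=480, n_a=10000, conv_b=560, n_b=10000)
significant = p < 0.05
```

Only changes that cleared significance like this would ship; the rest were iterated on in the next weekly experiment.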
Engaging with the action bar is the easiest and most effective way for users to give explicit feedback on stories. While the action bar supports both positive and negative actions, the primary goal is to capture positive sentiment through upvoting content, and the secondary goal is to capture negative sentiment through downvoting.
User research revealed that users found the action bar cluttered and confusing and did not understand what some of the actions represented. By simplifying the action bar and clarifying its actions, users are more likely to participate in giving feedback and upvoting content. Giving feedback allows users to tailor their feed to display the most relevant and interesting content. Data shows that users who actively give feedback, rather than passively scrolling, are also more likely to have longer feed sessions.
Feedback response menus are crucial for understanding why users take specific actions, allowing the ML model to better categorize feedback and make more accurate content predictions, especially for hiding or downvoting content. For instance, does hiding content mean the user dislikes the content, the topic, or the author? Does downvoting indicate they want to read a better answer or dislike the question?
The previous post-hide feedback menu was unengaging, with redundant options, dead ends, and unclear language. The most impactful change was adding an option for users to specify whether they were uninterested in the question or the answer, helping the ML model understand why the content was hidden. Previously, after downvoting an answer, users were shown only a confirmation message, leaving them in a dead-end state without additional follow-up options or the ability to find a more satisfying answer.
The more explicit and frequent the feedback, the more accurately and quickly ML models can adapt to provide the best content. Beyond the action bars and post-feedback response menus, there were limited avenues for additional explicit feedback. The overflow menu offered options like "Thank" and "Downvote Question," and periodic surveys appeared on feed stories, but these methods were infrequently used.
The overflow menu's low usage stemmed from its cluttered design and hard-to-parse options. We removed low-impact options and added icons and sections for quicker scannability, along with a persistent, explicit option to hide irrelevant content. We also revamped the overflow menu to include custom feedback components, such as the feed story survey, to gather explicit feedback more frequently.
Through a number of experiments and projects, the feed design team contributed significantly towards overall feed metric goals:
~10% increase year over year, scaling from ~300 to ~400 million monthly active users over three years, contributing significantly towards a ~$1M increase in revenue per year