Spark +AI
About
Spark is a cross-platform email client with over 17.5 million users worldwide. It's built to help individuals and teams manage email more efficiently.
Spark has introduced powerful AI features that help users draft emails, proofread, rephrase, and summarize threads.
My role
As a Product Designer at Spark, I led the end-to-end design of the "My Writing Style" AI feature, aimed at enhancing adoption and user satisfaction with AI-driven email drafting.
I collaborated closely with the Product Manager to explore solutions and partnered with the Engineering team to enhance prompting for more accurate tone matching of users' writing styles.
My specific contributions included:
• Investigating the underlying reasons for low adoption of Spark’s +AI features through data analysis and user feedback.
• Validating core problems through user interviews and usability testing.
• Developing and testing hypotheses to guide design direction.
• Designing how Spark could effectively learn and adapt to each user’s writing style.
• Prototyping multiple solutions and conducting usability tests to identify the most effective and intuitive experience.
Outcomes
Increase in Spark +AI feature adoption.
Growth in Spark Premium renewal rates.
Increase in user retention among AI feature users.
Problem
Low adoption of a key AI feature: Draft with AI
Pain points
Lack of Authenticity:
Users often felt AI-generated content didn't reflect their voice, limiting adoption. Through user studies, we discovered 78% of users cited "mismatch in voice/tone" as their primary reason for abandoning AI drafts.
High Editing Effort:
Draft +AI required significant editing to add a personal touch and capture the user's unique voice. This additional effort reduced its overall usefulness, resulting in low engagement.
Unclear Value:
Many users didn't see the value in continuing to use the AI. If the output wasn't send-ready and didn't feel like their writing, the feature's promise of convenience wasn't materializing in practice, leading to drop-off after initial use. Retention metrics showed only 12% of users still using AI drafting features after 30 days.
Key insight
Users craved an AI that could maintain their authentic voice. If we could deliver drafts that felt like they wrote them, we'd address both the trust issue and the editing overhead issue. This became the cornerstone of our solution strategy.
Validation
Usage Analytics:
I analyzed in-app data and found a significant drop-off in repeated usage. While a quarter of our premium users tried the AI drafting, the majority stopped after just 2–3 generated emails. This indicated initial curiosity but failure to become a habit. It correlated with users feeling the drafts weren't helpful enough.
User Surveys:
A survey to active Spark users revealed that over half of them felt AI-written drafts “did not sound like something I would write.” Many noted the tone was too generic or overly formal. Notably, 70%+ of respondents said they would use the AI more if it better matched their personal writing style – a huge signal that personalization was the missing piece.
User Interviews & Feedback Sessions:
In one-on-one interviews with a dozen users who had tried and abandoned Spark +AI, users described feeling "detached" from the AI's text – it lacked their typical humor, warmth, or brevity. A common theme: "I spent almost as long fixing the AI draft as I would've writing it myself." This clearly highlighted that the supposed efficiency gain wasn't being realized due to heavy editing requirements.
Beta Feedback on Early Concepts:
We gathered input from an internal beta (around 100 users, including power users and colleagues) on the idea of style personalization. Feedback was encouraging – users loved the concept of the AI learning from them. Some had even tried workarounds like feeding ChatGPT their writing samples. This validated our direction and provided insight into concerns (e.g., privacy of analyzing personal emails, ensuring the AI doesn't pick up embarrassing quirks or errors).
Main
If Spark’s AI could learn each user’s unique writing style and draft messages in their voice, users would trust it more and integrate it into their routine, driving up adoption and satisfaction.
Opportunity
This was a significant opportunity to differentiate Spark in the crowded email client market – no major email apps at the time offered personalized AI writing. We were solving a real user pain point (AI's lack of authenticity).
Key Components
The solution had several key components: an onboarding flow, an AI style-learning process, and seamless integration into the drafting experience, all consistent across desktop, iOS, and Android.
Users were invited via a "What's New" update. With just one explanatory screen and a tap to enable, the process was quick and frictionless.
I designed multiple onboarding variants and conducted A/B testing with 110 users to determine which approach yielded highest opt-in rates. The winning design increased adoption by 22% compared to our control version.
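The case study doesn't spell out the statistics behind the A/B comparison, but with ~110 users split across variants, a quick significance check matters before declaring a winner. A minimal sketch of how such a check could look (the pooled two-proportion z-test; the 55/55 split and opt-in counts below are illustrative, not the actual experiment data):

```python
from math import sqrt, erf

def two_proportion_p_value(success_a: int, n_a: int,
                           success_b: int, n_b: int) -> float:
    """One-sided p-value for 'variant B opts in at a higher rate than A',
    using the pooled two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal survival function via erf: P(Z > z)
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical example: 55 users per arm, 30 vs 41 opt-ins
p = two_proportion_p_value(30, 55, 41, 55)
```

With samples this small, only fairly large differences in opt-in rate clear the conventional 0.05 threshold, which is why the 22% lift was worth confirming rather than assuming.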
If the user skips the introduction screen, we show an additional tip to enable My Writing Style right after they first try to draft with AI.
Spark automatically selects a small sample of the user's recent emails to feed the AI. We settled on three emails to analyze (enough to glean style patterns, but not so many as to raise privacy concerns or slow the setup down).
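The case study doesn't describe the selection logic, but a heuristic like the one below captures the idea: take the three most recent sent emails that are long enough to reveal style and aren't forwards (forwarded bodies are mostly someone else's writing). The `Email` type, thresholds, and filters are all illustrative assumptions, not Spark's implementation:

```python
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    body: str
    is_forward: bool

# Hypothetical thresholds: 3 samples, skip very short messages
SAMPLE_COUNT = 3
MIN_WORDS = 30

def select_style_samples(sent_emails: list[Email]) -> list[Email]:
    """sent_emails is assumed to be ordered newest-first."""
    samples: list[Email] = []
    for email in sent_emails:
        if email.is_forward:
            continue  # forwarded text isn't the user's own voice
        if len(email.body.split()) < MIN_WORDS:
            continue  # too short to carry a style signal
        samples.append(email)
        if len(samples) == SAMPLE_COUNT:
            break
    return samples
```

Presenting the selected samples for review (the swap/remove step described next) then becomes a simple UI layer over this list.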
If a user was uncomfortable with one of the choices (say an email that was too personal or not reflective of their style), they could swap it out or remove it. This gave a sense of control. In usability tests, this step greatly increased trust – even though most left the default samples, knowing they could inspect them made people feel at ease.
I ensured that the core flows (enabling the feature, reviewing samples, generating a draft) were familiar on each device while respecting each platform's capabilities. On desktop, the setup appeared as a modal wizard; on mobile, it was a full-screen in-app experience.
I refined our in-house design system with new AI components for writing that maintained platform-specific patterns while ensuring consistent mental models across devices.
Prompt Accuracy
Through iterative testing with our AI department, we enhanced the prompt with more context. Over multiple rounds of tweaking and blind tests (where testers weren’t sure which version of the prompt was used for a given output), we achieved a notable jump in perceived accuracy of tone-match.
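The exact prompt changes aren't disclosed in the case study, but "enhancing the prompt with more context" typically means a few-shot pattern: the user's own sample emails go into the prompt as style references alongside the drafting task. A minimal sketch of that assembly (the wording and structure here are assumptions for illustration):

```python
def build_style_prompt(samples: list[str], request: str) -> str:
    """Assemble a drafting prompt that includes the user's own emails
    as style references (a common few-shot prompting pattern)."""
    parts = [
        "You are drafting an email on behalf of the user.",
        "Match the tone, sentence length, greetings, and sign-offs "
        "seen in the user's email samples below. Do not reuse their content.",
    ]
    for i, sample in enumerate(samples, start=1):
        parts.append(f"--- User email sample {i} ---\n{sample}")
    parts.append(f"--- Task ---\nDraft an email that: {request}")
    return "\n\n".join(parts)
```

Blind tests like the ones described work well here because testers rate only the output, so any perceived tone-match gain can be attributed to the prompt change itself.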
Outcomes
Time saved on average per email draft when using the 'My Writing Style' feature, a 43% improvement over generic AI drafting.
of active AI users enabled the feature within the first two weeks of release, exceeding our expectations.
In-app feedback mechanisms showed a jump in satisfaction. Before, only about 30% of AI drafts were rated positively; after My Writing Style, positive ratings climbed to ~54%.