VoiceAI
Role:
Lead Product Designer
Timeline:
Jun 2023 - Apr 2024
Team:
UX research, product design, and GTM collaboration
Project Summary
VoiceAI is a web-based AI transcription platform within NeuralSpace. I led the design of the product from MVP to scale, with a focus on reducing adoption friction by translating complex speech and language models into clear, task-driven workflows for both technical and non-technical users.
Impact
30% increase in user adoption post-launch
45% reduction in transcription turnaround time
Expansion across Arabic, Indic, and European languages
Strong internal and external feedback on clarity and usability
Delivering these results meant designing a full-fledged product from MVP to scale while balancing AI capability, usability, and market readiness.
Product Demo
Below is a working demo of the VoiceAI platform, showcasing the core user flows across transcription, AI interaction, usage tracking, and developer configuration.
Problem & Constraints
Although NeuralSpace’s speech and language models were strong, the product was primarily accessible through APIs and technical workflows. I designed VoiceAI to make these capabilities visible and understandable to non-technical users, enabling stakeholders to explore, demo, and evaluate STT and TTS outputs directly and make informed decisions about enterprise adoption.
Design Strategy
The goal was to make a complex AI system usable without requiring technical context.
Focused on task-based workflows instead of model settings or API concepts
Designed for non-technical users exploring and demoing VoiceAI, not just developers
Prioritized visibility into system behavior over exposing advanced controls
The MVP focused on a narrow set of core tasks to validate demand and surface early friction.
Helped identify where users got confused or lost trust
Feedback from usage data, usability tests, and sales conversations informed iteration
Shaped how processing steps, system states, and AI outputs were presented
Design decisions were shaped early with input from product, engineering, sales, and marketing.
Go-to-market teams helped surface non-technical perspectives and customer expectations
Early alignment ensured the product, messaging, and demos evolved together
MVP Scope
The initial MVP focused on validating core transcription workflows and demand.
Features included in the MVP:
Real-time transcription
File-based transcription
Sentiment analysis and transcript summaries
Transcript translation with side-by-side comparison
In-product feedback collection for bugs and workflow issues
This version established a baseline for how users explored speech outputs and where friction appeared across workflows.
Insights
How insights were gathered
In-office usability testing with 10 regular and occasional users
Click tracking to identify drop-offs and friction
In-product feedback submissions
Community discussions and support tickets
What we learned
Once users understood the core transcription workflows, they wanted more flexibility and control
(Seen in usability tests and internal Slack feedback)
Long uploads and processing times created uncertainty when system status wasn’t visible
(Observed through click drop-offs during file uploads)
VoiceAI began to be used as a place to explore speech workflows, not just generate transcripts
(Patterns emerged from repeated internal usage and feedback submissions)
Key Design Changes (Post-MVP)
Expanded configurations through “Ask Me Anything”
Enabled users to ask questions directly within transcripts, giving them more flexibility once they understood the core workflows.
Real-time upload and processing visibility
Made file upload and transcription states visible to reduce uncertainty during long processing times.
Introduced text-to-speech as a core capability
Expanded VoiceAI beyond transcription into a hub for speech workflows, supporting reuse, accessibility, and content generation.
Together, these changes shifted VoiceAI from a transcription tool into a flexible environment for exploring speech-based workflows.
After Launch
Design responsibility extended beyond the product interface to how VoiceAI was introduced to and understood by non-technical and enterprise audiences.
Design Focus
Make AI capabilities easy to understand for non-technical audiences
Reduce perceived complexity during demos
Maintain consistency with the NeuralSpace rebrand across touchpoints
Launch and enablement assets
Feature walkthroughs for Text-to-Speech and Ask Me Anything
Social and launch assets grounded in real-world use cases
Campaign visuals highlighting multilingual and dialectal speech recognition
Learnings and Takeaways
This project reinforced my ability to:
Take a technically complex AI system and turn it into workflows people can actually understand and use
Design for both technical and non-technical users without building two separate products
Make product decisions that balance usability, system constraints, and go-to-market needs
Work closely with engineering, product, data science, sales, and marketing to ship and position a real product
Core takeaway: I can design AI products that are grounded in how systems work, but still feel clear, usable, and ready for real-world adoption.