Design quality at Teradata was inconsistent and evaluated subjectively, leaving the design org with little leverage when pushing back on suboptimal work. Over roughly a year, the Product Design leadership team developed six design principles, translated them into a multi-stakeholder scorecard, and trained designers, PMs, engineers, and user proxies to evaluate work together before shipping. The result was a shared quality language that crossed discipline lines organically, a measurable trend toward higher scores across 12 evaluations, and a direct connection between design criteria and post-ship instrumentation.
Context //
Design reviews at Teradata had no shared standard. Feedback was personality-driven, quality was evaluated inconsistently, and the design org had little credibility when pushing back on work that wasn't ready. Usability and user satisfaction scores reflected the problem. Without an agreed-upon definition of "good," there was no common ground to advocate from, and no mechanism for holding design, product, and engineering accountable to a shared quality bar.
Company //
Teradata
My role //
Design Operations Director; facilitator and program manager
Team size //
Product Design, Content Design, and Design Operations leadership
Timeline //
Q4 2024 (initiated) through Q4 2025 (12 evaluations completed)
Approach //
I facilitated workshops with the full Product Design leadership team, spanning Product Design, Content Design, and Design Operations, to define what quality meant within the context of Teradata's business. We anchored the principles in established design thinking but adapted them to business realities: adoption and consumption are core to Teradata's model, so those lenses were embedded into the principles themselves.
We landed on six principles:
Be Easy: simplify content and interactions; guide users through the easiest path
Be Scalable: cohesive interfaces that grow with users and the ecosystem
Be Trustworthy: clarity, inclusivity, and accessibility
Be Delightful: bold, creative solutions
Be Informed: design with continuous learning from user behavior
Be Open & Connected: interoperability and integration by design
Each principle was stress-tested with design leads, then with partners in product management and engineering. From the principles, we developed specific criteria and a scoring rubric. After implementation but before shipping, cross-functional teams would independently evaluate the work, align on scores in a group session, and share results with design, product, and engineering leadership.
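As an illustration of the evaluation flow (independent scoring, then averaging across evaluators), the scorecard can be modeled as criteria grouped under each principle. The criterion names and the 1-4 scale below are assumptions for the sketch, not Teradata's actual rubric.

```python
from statistics import mean

# Hypothetical rubric: each principle maps to example criteria.
# Names and the 1-4 scale are illustrative, not the real scorecard.
RUBRIC = {
    "Be Easy": ["Simplified content", "Guided path"],
    "Be Trustworthy": ["Accessibility", "Clarity"],
    "Be Informed": ["OKRs defined", "Usage tracking in place"],
}

def score_feature(evaluations):
    """Average each principle's criterion scores across independent evaluators."""
    results = {}
    for principle, criteria in RUBRIC.items():
        scores = [ev[principle][c] for ev in evaluations for c in criteria]
        results[principle] = round(mean(scores), 2)
    return results

# Two independent evaluators scoring the same feature (1 = misses, 4 = meets all)
evals = [
    {"Be Easy": {"Simplified content": 3, "Guided path": 2},
     "Be Trustworthy": {"Accessibility": 4, "Clarity": 3},
     "Be Informed": {"OKRs defined": 2, "Usage tracking in place": 2}},
    {"Be Easy": {"Simplified content": 2, "Guided path": 2},
     "Be Trustworthy": {"Accessibility": 3, "Clarity": 3},
     "Be Informed": {"OKRs defined": 1, "Usage tracking in place": 2}},
]
print(score_feature(evals))
# → {'Be Easy': 2.25, 'Be Trustworthy': 3.25, 'Be Informed': 1.75}
```

Averaging per principle, rather than per feature overall, is what lets low-scoring criteria feed directly into triage later.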
Challenges //
The biggest barrier was evaluation alignment. Even within the design team, it took a training session and multiple evaluations before the process felt natural. New evaluators from other disciplines frequently came in as outliers, which is how we discovered the group alignment session was not optional. We codified a calibration step before finalizing scores, which smoothed outliers without eliminating the value of diverse perspectives.
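The calibration step can be sketched as simple outlier detection: flag any evaluator whose overall score sits far from the group mean, and start the alignment discussion there. The one-point threshold is an assumed value for illustration, not the team's actual rule.

```python
from statistics import mean

def flag_outliers(scores, threshold=1.0):
    """Flag evaluators whose overall score deviates from the group mean
    by more than `threshold` points (assumed value), so the calibration
    discussion starts with them rather than averaging them away."""
    group_mean = mean(scores.values())
    return [name for name, s in scores.items() if abs(s - group_mean) > threshold]

# A new cross-functional evaluator comes in well below the group.
round_scores = {"designer_a": 3.0, "designer_b": 3.25, "pm": 2.75, "eng_lead": 1.0}
print(flag_outliers(round_scores))
# → ['eng_lead']
```

Flagging rather than discarding outliers mirrors the point above: the divergent perspective is discussed, not suppressed.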
There was also predictable resistance to adding process overhead. Our counterargument was grounded in data: usability and satisfaction scores indicated we needed to improve what we were shipping, and the best product companies measure quality systematically.
The results //
Conducted 12 scorecard evaluations across Q2-Q4 2025, spanning multiple product areas and features
Evaluation cycle time improved from approximately 3.5 weeks during pilots to approximately 2 weeks at steady state, while expanding to 6-12 evaluators across 1-3 features per cycle
Scores trended from predominantly 2s ("meets some expectations") early in the program to predominantly 3s and 4s ("meets most" to "meets all expectations") in later evaluations
Shifted team culture from "ship and forget" to "ship and iterate," with low-scoring criteria directly informing triage for upcoming hotfixes and major releases
"Be Informed" criteria created a direct line to post-ship instrumentation: features that scored well had OKRs, Pendo tracking, and feedback mechanisms in place, giving the Insights team what it needed to build proper measurement instruments
Principle language became embedded in day-to-day team conversations beyond formal reviews. Engineers began referencing principles organically in sprint discussions, bringing design and PM into alignment without a formal review
What I learned //
I would have brought Product Operations in earlier. They own cross-functional rollout and training, and we essentially built that function ourselves through the pilot phase. Their involvement from the start would have reduced evaluator ramp-up friction and gotten us to a steady state faster.