A Q&A with User Researcher Amy Dexter
This edition of our Spotlight series features user researcher Amy Dexter, who sat down with us to discuss a career spanning healthcare and consumer technology. From her early curiosity about how people interact with complex systems to leading research strategy for tech teams, Amy’s journey has been defined by exploration, adaptability, and a passion for bringing the user’s voice into product development. In this conversation with Interwoven’s Yukiko Naoi, Amy shares insights from her path into human factors and user research, her approach to testing and validation, and how AI is shaping the future of research practice. She answers fourteen questions about her career, methodology, and perspective on user research.

Amy earned a master’s degree in mechanical engineering from Tufts University after completing a Bachelor of Science in mathematics and statistics at Austin Peay State University. Her career spans roles as a human factors specialist, customer insights and experiences strategist, and user researcher at Sonos, where she optimized user experience and product performance. Amy also worked in the healthcare sector, applying her expertise to improve patient care and medical device usability. She blends rigorous research methods with a deep understanding of human behavior, translating insights into actionable design strategies. Amy’s analytical mindset and passion for understanding people drive her approach to collaboration and innovation, whether working with teams or engaging directly with end users.
Q:
Can you briefly introduce yourself and share how you started your career in user research?
A:
I’ve always been a curious person interested in a wide range of fields, which made choosing a career path challenging. Before college, I spent time job shadowing professionals like a lawyer, an ultrasound technician, and a photographer to get a sense of what their daily work looked like. I interviewed them about what they enjoyed and what was difficult, which helped me think more deeply about different possibilities. In college, I continued to explore by taking courses in photography, medicine, education, and even considered becoming a therapist. Eventually, I chose to major in mathematics and statistics with a minor in photography, reflecting both my analytical and creative sides.
After college, I went to graduate school for mechanical engineering, thinking I wanted to understand how things work and how to make them work better. The turning point came when I volunteered for a research study focused on teamwork under stress. Participating in that study opened my eyes to how fascinating human behavior and team dynamics could be, especially in challenging environments. I realized I was more interested in how people interact with systems and technology than in the mechanics themselves.
That realization led me to human factors engineering and usability testing in medical devices, and eventually into user research in consumer technology. Throughout my journey, curiosity about people, systems, and the places where they meet has always driven me. Looking back, I see that my early career exploration and interviews were actually a form of user research, just focused on understanding my own needs at the time rather than others’. I never knew user research existed as a field, but my curiosity ultimately led me exactly where I was meant to be.
Q:
What do you love about your job?
A:
I support an ecosystem of designers, product managers, and engineers by leading research studies, building research infrastructure, and coaching teams on methodology. My role is a blend of hands-on research and research operations, which includes creating self-service tools, reviewing study protocols, and integrating AI into our workflow. What I love most about my work is being a voice for the user, especially in spaces where their needs are often overlooked or misunderstood. I enjoy surfacing the experiences of underrepresented groups, challenging internal assumptions, and making sure the team hears directly from real people. It’s rewarding to see how research can influence product decisions and ensure that what we build truly reflects the realities and priorities of the people we’re designing for.
Q:
Tell us a little about testing and validation – when do you do it, how do you do it, and why do you do it?
A:
Testing and validation are ongoing processes that happen throughout product development, not just at one stage. Early on, I focus on understanding user needs and validating assumptions using methods like interviews, contextual inquiries, and surveys. As the product develops, I shift to testing concepts and interaction design, often through usability studies and prototype evaluations. Closer to launch, research becomes more formal, with structured usability testing to ensure the product performs well in real-world conditions and meets necessary standards, especially in regulated environments like healthcare.
How I approach testing depends on the product’s maturity and the decisions the team needs to make—it can be quick and exploratory or more structured and data-driven. The main goal is always to reduce risk: making sure we’re building the right thing, avoiding usability issues, and meeting user needs. Testing and validation ensure we make informed decisions and deliver products that truly work for users.
Q:
When you are validating a new product concept, what are the first three things you test for?
A:
When validating a new product concept, I first look for problem and solution fit, asking if we are solving a real and meaningful problem for users or just creating a solution in search of a problem. Next, I check if the concept aligns with users’ mental models, making sure it fits how they naturally think about the task rather than introducing confusion. Finally, I assess the feasibility of interaction, ensuring users can intuitively understand how to engage with the product even at the earliest prototype stage. These three areas help ensure we’re building something valuable, understandable, and usable from the start.
Q:
From your experience, what’s a common pitfall or blind spot that designers often have when it comes to testing and validating their own work, and what’s the best way to overcome it?
A:
A common pitfall I see is confirmation bias. Designers are naturally invested in their work and want to see their solutions succeed, which is a strength, but it can make it difficult to stay neutral during testing. This often leads to unintentionally framing questions or interpreting user feedback in a way that supports the existing design, or overlooking negative signals and interpreting ambiguous feedback as positive. For example, I’ve seen designers react to user struggles by thinking, “They’re just not doing it right,” instead of asking why the user’s approach is different from what was intended.
To overcome this, I recommend using structured and objective research protocols and involving a neutral party, such as a researcher or another designer who wasn’t involved in the project, to help design and interpret the findings. If that’s not possible, it’s helpful to focus on testing hypotheses rather than designs. Instead of asking if users like a layout, ask if they can complete the intended task efficiently with that layout. This shift moves the focus away from seeking approval and toward uncovering the truth about user behavior. It leads to much deeper and more actionable insights, helping teams improve their designs based on real user needs rather than assumptions.
Q:
How do you define the role of a user researcher and strategist, especially in the context of a product’s development cycle?
A:
A user researcher acts as both an investigator and a translator throughout the product development cycle. Their primary role is to uncover users’ needs, behaviors, and mental models, then translate these findings into actionable insights for product and design teams. This ensures that decisions are informed by real user data.
As a strategist, the researcher defines which problems matter and identifies when research will have the most impact. They guide teams on integrating research, considering constraints like time, budget, and business goals. By joining early—often before a roadmap—they help plan user touchpoints and ensure solutions address the right problems. During development, they stay involved to provide both strategic direction and hands-on support.
Ultimately, user researchers don’t just ask if users like something—they dig into the reasons behind user feedback and help teams understand how to improve products based on those deeper insights.
Q:
You’ve worked in both consumer tech and the healthcare industry. What are the key differences in testing and validation between those two very different fields?
A:
The biggest difference between testing and validation in consumer tech and in healthcare is the level of rigor and regulatory oversight. In healthcare, usability issues can have serious consequences, including clinical errors or patient harm. Validation is highly structured and meticulously documented. Strict standards like IEC 62366 govern usability engineering and compliance. Ethics boards or institutional review boards often oversee participant recruitment and protocol approval. Every step follows regulatory frameworks, and the stakes are higher because patient safety is on the line.
In contrast, consumer tech allows for more flexibility and speed. There’s a culture of rapid iteration and experimentation, which is great for innovation but can sometimes come at the expense of thoroughness and depth. While the risks in consumer tech are different, they are not insignificant—especially when products handle sensitive data or reach vulnerable user groups.
Because of this, I try to bring some of the discipline and best practices from healthcare into consumer tech environments. This includes careful documentation, ethical research practices, and a focus on accessibility and long-term impact, not just short-term outcomes. Ultimately, it’s about finding the right balance between moving quickly and ensuring that the products we build are safe, responsible, and truly meet the needs of users.
Q:
What are some of your favorite testing methodologies, and how do you decide which one to use for a given project?
A:
I’m a strong advocate for mixed methods research because combining qualitative insights with quantitative data gives a much fuller picture of user behavior and needs. One of my favorite methodologies is contextual inquiry, which involves observing and talking with people in their own environment as they use a product or complete a task. This approach helps me understand not just what users say, but how they actually interact with products in real-world settings. For example, if we’re studying a grocery shopping app, I might go to the store with participants to see their process firsthand, which reveals details and pain points that wouldn’t surface in a simple interview.
Virtual moderated interviews are another favorite. These allow me to reach participants regardless of location and see how they use products remotely, which is especially valuable in today’s distributed work environment. The ability to connect over video calls means I can gather feedback from a diverse range of users and see their reactions and interactions in real time.
Unmoderated usability testing is useful for scaling research quickly. I can set up scenarios for participants to complete on their own, then observe their behavior and gather feedback without being present. This method is efficient and allows us to collect data from a larger group, but it’s most effective for digital products where hands-on interaction is straightforward.
Surveys are also a key tool, especially for reaching larger groups and adding quantitative context to qualitative findings. By layering survey results with insights from interviews or usability tests, I can validate trends and prioritize next steps based on broader user input.
When choosing a method, I consider the development stage, confidence needed, and the decision we’re supporting. For early exploration, I use interviews and contextual studies to uncover deeper insights. For later validation, I combine unmoderated testing with surveys to scale and triangulate findings. Time, budget, and business constraints matter, but I start with the best-fit approach and adjust. If a decision is critical, I recommend rigorous methods, even if they require more time or cost. With constraints, we may compromise on depth or scale, but I maintain strong standards and ensure insights are actionable.
Q:
Can you share an example of a time when user testing completely changed the direction of a product you were working on? What was the most surprising thing you learned?
A:
At Sonos, I led an in-home study observing how families interacted with audio devices. We expected simple patterns of use, but discovered much more complexity and personalization. Some children used bedroom speakers to create private spaces, shielding themselves from household sounds. Adults set audio timers to structure routines, signaling dinner or the day’s end. We also saw people moving speakers between rooms, challenging our assumptions about fixed device placement.
These observations revealed that audio was not just about individual enjoyment or entertainment. It was deeply tied to family dynamics, daily routines, and personal boundaries. The most surprising thing was how much people customized their use of the technology to fit their unique household needs, often in ways we hadn’t anticipated. This insight shifted our focus from simply improving the product’s core features to considering how we could better support multi-user households and flexible use cases.
As a direct result of this research, our team rethought both hardware and software design. We prioritized features that allowed for easier movement of devices and better support for multiple users. Internally, the conversation changed from “How do we improve the product for one user?” to “How do we create an experience that works for everyone in the home?” This experience reinforced the importance of observing real-world use and being open to unexpected insights that can fundamentally change a product’s direction.
Q:
How do regulatory requirements and patient safety impact your testing process, particularly for medical products?
A:
In medical device development, every usability study is part of risk mitigation, validating that products can be used safely and effectively in real-world clinical settings. This involves formal task analyses, carefully designed protocols, and adherence to standards like IEC 62366 and FDA guidance. Studies often require ethics board approval, and documentation must be audit-ready. Usability is a core safety and regulatory requirement, not just a UX concern.
Q:
Considering your experience with wearable technology, what are the unique validation challenges for products that collect sensitive user data?
A:
Whenever we gather user insights, we work with sensitive data. This is especially true for wearables, which collect continuous personal information. Devices track health metrics, location, and daily habits, raising privacy concerns. A main challenge is ensuring responsible handling through secure storage, anonymization, and proper user consent. In healthcare, frameworks like HIPAA set strict standards for protecting sensitive information. In consumer tech, guidelines are often less rigorous or poorly understood.
Another challenge is that users may not always realize how much personal information they are sharing or how it could be used. During research sessions, people can reveal surprisingly personal details, so it’s important for researchers to be transparent about data practices and make sure participants are comfortable and fully informed about how their data will be used.
It’s also critical to ensure that everyone on the research team understands and follows best practices for privacy and data security, especially when collaborating across teams or using external vendors. Balancing the need for meaningful insights with the responsibility to minimize the collection of sensitive information is an ongoing challenge.
Finally, as wearable devices often collect data continuously and in real-world contexts, validating their accuracy and reliability can be challenging. Researchers must design studies that reflect real-life usage while still protecting privacy, and be prepared to address concerns from users who may be wary of how their data is managed. Ultimately, building trust with users and being transparent about data practices are essential for successful validation of these products.
Q:
How do you see the role of AI impacting user research and testing?
A:
AI is changing research by automating tasks like transcription, summarizing sessions, and analyzing sentiment at scale. This frees researchers to focus on interpretation and strategy. The real value lies in AI augmenting human judgment in tasks such as tagging, synthesis, and participant recruitment. However, it’s important to remain critical, as AI can accelerate both good and bad research. Thoughtful study design and human oversight are still essential.
Q:
What’s one innovation in testing or validation that you’re most excited about right now?
A:
I’m excited about agentic AI, which helps synthesize large volumes of qualitative data. I’m currently using an AI agent to surface patterns and themes across ongoing customer interviews, making insights more accessible and actionable. This saves time and democratizes access to user voice, amplifying the impact of research.
Q:
What’s one piece of advice you would give someone who wants to incorporate rigorous testing into their product development cycle?
A:
Start small, start early, and test often.
You don’t need an elaborate usability lab, a lengthy protocol, or even a fully working prototype to begin gathering meaningful feedback. The most important thing is to make testing a habit from the very beginning of the development process. Validate your assumptions before you focus on finalizing interfaces or features. Even simple methods, such as walking through a concept or asking users to think aloud as they attempt basic tasks, can uncover critical insights early on. This is when it is easiest and least expensive to make changes.
Involve your entire team in the process. When designers, engineers, and product managers watch real users, it builds empathy and alignment. Seeing users struggle or succeed often drives better, more user-centered decisions. Remember, you are not the user, and your assumptions can be challenged in unexpected ways. Start small and include everyone to create a learning culture; that continuous improvement leads to stronger products and better outcomes for users.

—
Check out the rest of our Spotlight series to hear more from leaders in the design industry. Sign up for our newsletter and follow us on Instagram and LinkedIn for design news, multimedia recommendations, and more on product design and development!