00;00;00;00 - 00;00;18;24 Unknown: Let's be honest, navigating the digital world today can feel like you're constantly being asked to, I don't know, juggle chainsaws while solving quadratic equations? Yeah, it's a lot. Whether it's figuring out some complex new software interface or just trying to understand what someone really means when they give you feedback, our attention is stretched so thin.

00;00;18;26 - 00;00;44;12 Unknown: It truly is. It's like this landscape of constant demands, and that mental energy you mentioned, that's a real cost to users, you know. So for anyone building tools or interfaces, understanding how people actually interact, what causes that friction and what genuinely helps them, that's absolutely essential. And that's really the heart of what we're diving into today. We've got a fascinating stack of sources here.

00;00;44;13 - 00;01;07;28 Unknown: Yeah. On one hand, some really concrete, real-world user feedback on two different interfaces: one focused on graph interaction and another on a software bill of materials, or SBOM, tool. And then we're pairing that with some expert guidelines from the Nielsen Norman Group, you know, the heavyweights in the UX world, on the best ways to actually ask for user feedback.

00;01;07;28 - 00;01;31;15 Unknown: So it's effective for everyone involved. Right. So our mission in this deep dive is really to cut through the noise and pull out the most valuable nuggets. Exactly. We want to see how these specific user opinions translate directly into design implications, and, just as importantly, how companies can approach gathering your thoughts, you know, in a way that respects your time and actually gives them valuable information.

00;01;31;17 - 00;01;51;20 Unknown: Yeah, it's about understanding the mechanics of useful feedback, really from both sides of the equation. Okay, let's unpack this. Let's look at the user feedback first. We'll start with the comments on graph interaction. Now, the user looked at the current interface and gave it a rating of "about the same." Right. Not exactly groundbreaking feedback initially. No, it's pretty neutral ground.

00;01;51;24 - 00;02;16;24 Unknown: But even there we find something really interesting: a positive. The user really liked the idea of clicking on a table to interact with the graph. Their explanation was that it would, and I quote, make the graph less overwhelming in general. Interesting. This immediately tells us something important about user preference, right? They find familiar things like tables less intimidating than potentially complex visualizations.

00;02;16;26 - 00;02;41;23 Unknown: This points directly to the value users place on clarity and, you know, leveraging known interaction patterns to reduce that cognitive load, the mental effort you need to process things. That's a great point, connecting the table interaction to reducing mental effort. Right. So the table interaction was a clear positive concept. But then they looked at a proposed change, right, where you would interact directly on the graph itself.

00;02;42;00 - 00;03;07;13 Unknown: And that's where the friction appeared. Exactly. Two main issues were highlighted with that proposed direct graph interaction. First, just a lack of intuitiveness. Okay. The user said it was not intuitive that you should or can click on the graph at all. Even adding a label didn't help, they felt. It just feels like a poor user experience.
And look, if a user doesn't instinctively know how to interact with something, that's a fundamental usability problem right there.

00;03;07;13 - 00;03;27;26 Unknown: It sort of violates their expectations based on other software they use. And the second issue cut right to efficiency: the proposed change actually added a second click to do something that presumably took one click before. Yeah. And the user was very pointed about this: we want to reduce clicks, not add mouse clicks. Can't argue with that.

00;03;27;27 - 00;03;50;04 Unknown: No. It hammers home a core principle: users really care about efficiency. They want to get their goals done with minimal steps. Absolutely. Every unnecessary click or, like, moment of confusion just adds friction. It slows them down, makes the tool feel clunky. So these two points together, just from one piece of feedback, are pretty powerful. Interactions need to be intuitive and efficient.

00;03;50;05 - 00;04;09;23 Unknown: Simple as that. Okay, so let's move to the second piece of feedback, this time about a proposed software bill of materials, or SBOM, interface. Now, this user rated the proposed design as better than the current one. So that's a definite win. Yeah, a solid step forward. And the details tell us why. The pros list was pretty extensive.

00;04;09;25 - 00;04;30;08 Unknown: And the number one item, again, involved the table: the ability to drive via the table. The user loved being able to sort and filter info in the table and then click an item to see its context represented visually, like its blast radius. You know, basically what it connects to or affects, right? The impact. Exactly. This just reinforces the previous feedback.

00;04;30;13 - 00;04;56;00 Unknown: Tables are excellent, familiar control centers for complex data. They let users find what they need in a structured way, then pivot to a visual for context. This speaks to both efficiency, finding data quickly, and clarity, seeing the relationships visually. And the table itself was highlighted as a strength too, because it could show the entire SBOM contents plus valuable extra data like vulnerability or runtime info.

00;04;56;03 - 00;05;18;02 Unknown: And crucially, users could show/hide columns and sort and filter on "what I think is important." Oh, this is huge. It gives the user flexibility and control over information density. They aren't overwhelmed by seeing everything at once, and they can tailor the view to their specific task. That level of customization is highly valued because, you know, different users need different data points at different times.

00;05;18;03 - 00;05;44;12 Unknown: Makes sense. Another positive was placing the details for artifacts in a static spot instead of, say, a slide-out panel. Yes. Slide-outs can be disruptive. They often cover other information, or you need an extra click to dismiss them, right? Keeping details in a static area means that critical information is always visible and consistently located. That enhances clarity and reduces the mental load of remembering where things are or managing multiple overlapping panels.

00;05;44;13 - 00;06;03;27 Unknown: And they really liked the dynamic nature of the visualization itself: "I really like that the blast radius expands when I click on it, and minimizes when I don't care about it." Yeah, that on-demand detail view feels very intuitive, and respectful of screen space too. You get the info when you need it, and it gets out of the way when you don't.
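To ground that table-drives-the-visualization pattern, here's a minimal TypeScript sketch. The Artifact shape, the field names, and the blast-radius traversal are illustrative assumptions, not the actual tool from the feedback: one click on a table row expands that artifact's blast radius, clicking the same row again collapses it, and column visibility stays in the user's hands.

```typescript
// Minimal sketch of the "table drives the graph" pattern from the SBOM feedback.
// The Artifact shape and field names are illustrative assumptions only.

interface Artifact {
  id: string;
  name: string;
  version: string;
  vulnerabilities: number;
  dependsOn: string[]; // ids of artifacts this one pulls in
}

interface ViewState {
  visibleColumns: Set<string>;       // user-controlled information density
  selectedArtifactId: string | null; // the table drives the graph, not the reverse
}

// One click on a row selects that artifact; clicking the same row again collapses
// the visualization. No second click, no hidden graph interaction.
function onRowClick(state: ViewState, artifactId: string): ViewState {
  const selectedArtifactId =
    state.selectedArtifactId === artifactId ? null : artifactId;
  return { ...state, selectedArtifactId };
}

// Show/hide a column so users can tailor the table to what they think is important.
function toggleColumn(state: ViewState, column: string): ViewState {
  const visibleColumns = new Set(state.visibleColumns);
  if (visibleColumns.has(column)) {
    visibleColumns.delete(column);
  } else {
    visibleColumns.add(column);
  }
  return { ...state, visibleColumns };
}

// The "blast radius": everything reachable from the selected artifact.
function blastRadius(artifacts: Map<string, Artifact>, rootId: string): Set<string> {
  const seen = new Set<string>([rootId]);
  const queue = [rootId];
  while (queue.length > 0) {
    const current = artifacts.get(queue.shift()!);
    for (const dep of current?.dependsOn ?? []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        queue.push(dep);
      }
    }
  }
  return seen;
}

// The graph expands only when something is selected and stays minimized otherwise.
function nodesToRender(artifacts: Map<string, Artifact>, state: ViewState): Set<string> {
  return state.selectedArtifactId
    ? blastRadius(artifacts, state.selectedArtifactId)
    : new Set<string>();
}
```

Keeping selection and column visibility as plain state like this is one way to honor the "reduce clicks" and "show/hide columns" feedback: every interaction in the sketch is a single click on a familiar table element.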
00;06;03;29 - 00;06;25;00 Unknown: Again, this points to clarity and efficient use of space. The user also felt the proposed design's layout made it cleaner and simpler to add more items below the table and graph area. That suggests good scalability and a well-thought-out structure contributing to overall perceived simplicity. Okay, but there was a neutral point in their feedback too, wasn't there? There was.

00;06;25;00 - 00;06;47;14 Unknown: Yeah, a slight uncertainty. It was about the blast radius visualization expanding at the same time the table size was reduced. The user wondered if it might be better for the visualization to expand over the table instead. Oh, okay. It's a nuanced point, right, about how dynamic elements interact and use screen real estate. It shows users think quite practically about layout and flow.

00;06;47;15 - 00;07;14;17 Unknown: And they had a couple of suggestions or points for improvement. Yeah, a couple. The main suggestion was driven by a limitation they spotted: the table was based only on artifacts. The user immediately saw the value in being able to see tables for other types of data within the SBOM, like contributors. This led to the idea of using tabbed tables so users could pivot between different data sets, artifacts, contributors, whatever, while still potentially linking them to the visualization.

00;07;14;18 - 00;07;39;18 Unknown: That sounds like a great idea for providing flexibility. Exactly, and access to the full scope of data within a complex system. And finally, a classic problem with data tables came up: the potential for needing to side-scroll, which would be annoying. Yes, the dreaded horizontal scroll. It's almost universally disliked in tables. It breaks your flow and makes comparing data across columns really difficult.

00;07;39;23 - 00;08;02;21 Unknown: This loops right back to efficiency and, frankly, avoiding user annoyance. Users don't want to fight the interface just to see their data. Wow. Okay, so looking at those specific points across both pieces of feedback, the patterns really do jump out. As you've been highlighting, users consistently value efficiency: doing tasks with fewer clicks, driving from familiar places like tables, avoiding frustrating things like side-scrolling.

00;08;02;23 - 00;08;31;28 Unknown: They prioritize clarity: intuitive interactions, static details, clear visualization that doesn't overwhelm. And they demand flexibility: customizing their view, seeing all relevant data, pivoting between different data perspectives. These aren't just, like, abstract design principles. They're direct reflections of what makes a tool usable, even enjoyable, versus frustrating. Exactly. This kind of granular feedback is absolute gold because it gives you the specifics behind the general principle.

00;08;32;00 - 00;08;53;25 Unknown: You know why efficiency matters? Because a user told you adding a click was bad. You know why clarity matters? Because a user found clicking the graph unintuitive. It's concrete. Which brings us perfectly to the second part of our deep dive. Getting that kind of detailed, actionable feedback is only possible if you ask for it effectively. So let's shift gears and look at those expert guidelines from the Nielsen Norman Group on how to request feedback from users.

00;08;53;25 - 00;09;16;28 Unknown: Right. This is where the value of a well-structured voice-of-customer, or VOC, program comes in.
It's all about gathering data during, or maybe after, a user interaction. But doing it poorly can really alienate users and just yield useless information. Okay, so here's where it gets really interesting, because the guidelines give us five core principles for asking effectively.

00;09;17;00 - 00;09;39;22 Unknown: Let's jump into the first one: task, then ask. This is the fundamental timing principle. Basically, you wait until the user has completed, or is significantly into, the task they came to do before you ask them about that experience, right? The sources give us some pretty good cautionary tales about bad timing. Think about those intrusive popups that hit you the second you open an app.

00;09;39;26 - 00;10;04;01 Unknown: Oh yeah. How can you review something you haven't even used yet? The Shark app apparently did this: it forced a choice immediately and then directed negative feedback to support, as if it were a bug report rather than user experience feedback. Well, that's terrible timing, and it frames it all wrong. And there were other examples, right? Like UnitedHealthcare asking before a task if you'd be willing to give feedback afterwards.

00;10;04;02 - 00;10;24;22 Unknown: Yeah. The sources noted that users either just forgot or were simply too distracted by the task ahead to even commit to giving feedback later. Makes sense. An upfront request is just an interruption when they're focused on their goal. Now, a good timing example from the sources was Southwest Airlines. They asked for feedback after a user completed a key task, like booking or canceling a flight.

00;10;24;25 - 00;10;45;09 Unknown: The prompt was relevant to the action they just completed. It didn't cover up critical information like confirmation details. And importantly, there was a persistent feedback tab available, just in case the user wasn't ready right then or closed the initial prompt. So the takeaway for guideline one: make subtle requests, don't block information, and offer on-demand options like a feedback tab. Remember: task, then ask.

00;10;45;16 - 00;11;07;13 Unknown: Makes perfect sense. What's guideline number two? Ease the eager emails. This tackles the problem of feedback fatigue, particularly via email. The source highlighted one user who got five feedback emails in just four days. Five in four days? Wow, my inbox is stressed just thinking about that. You just start ignoring those immediately.

00;11;07;14 - 00;11;36;18 Unknown: Exactly. The recommendation is to be much more selective. Maybe send requests only after make-or-break moments, like a major purchase, a completed customer support interaction, or finishing a really significant process. Not after every minor click. Okay. But maybe even more crucial: try to ask in the same channel as the user's interaction whenever possible. So, like, if I just used the mobile app, ask me there instead of sending an email later.

00;11;36;18 - 00;11;58;01 Unknown: Yes, that's often much better. The Shipt example illustrated this, kind of. They used a push notification to rate a shopper, which was the right use of the mobile channel. But immediately after the user tapped that notification, they were hit with another pop-up asking for app feedback. Okay. Too soon? Yeah, poor timing within the flow. It would have been less jarring to separate those requests just slightly.

00;11;58;03 - 00;12;23;15 Unknown: The even better approach, though, is giving users a choice of channel altogether.
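As a rough sketch of how the task-then-ask and same-channel ideas could be wired up, here's some illustrative TypeScript. The event shape, the make-or-break task list, and the one-prompt-per-week cap are assumptions made for illustration, not anything prescribed by the sources.

```typescript
// Sketch of "task then ask" plus "ask in the same channel". Names are assumptions.

type Channel = "in-app" | "push" | "email" | "sms";

interface TaskEvent {
  kind: "started" | "completed" | "abandoned";
  task: string;     // e.g. "book-flight", "cancel-flight", "support-case-closed"
  channel: Channel; // where the user actually performed the task
}

const MAKE_OR_BREAK_TASKS = new Set([
  "book-flight",
  "cancel-flight",
  "support-case-closed",
]);

interface FeedbackPrompt {
  task: string;
  channel: Channel;  // ask where the interaction happened, not in a cold email later
  blocking: false;   // never cover confirmation details
  dismissible: true; // a persistent feedback tab remains if the user closes this
}

function maybePromptForFeedback(
  event: TaskEvent,
  promptsThisWeek: number
): FeedbackPrompt | null {
  // Task first, then ask: never interrupt before or during the task.
  if (event.kind !== "completed") return null;
  // Ease the eager emails: only key moments, and cap the overall frequency.
  if (!MAKE_OR_BREAK_TASKS.has(event.task) || promptsThisWeek >= 1) return null;
  return {
    task: event.task,
    channel: event.channel,
    blocking: false,
    dismissible: true,
  };
}
```

Because the prompt inherits its channel from the task event, an in-app booking gets an in-app prompt rather than an email days later.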
So letting the user decide if they want to give feedback via text, social media, or something else. Right. Offering options respects user preference and avoids jamming up their email. Examples included Chase providing feedback options on Twitter, Delta reaching out via text after a flight, and another brand offering feedback choices both in-app and via phone.

00;12;23;18 - 00;12;47;16 Unknown: It's really about meeting the user where they are and using channels they actually use for communication. Okay, guideline number three seems incredibly basic, but clearly it's a common failure point: keep surveys short. It sounds simple, doesn't it? But long surveys are just a disservice to everyone. They're time-consuming for the user, and honestly, they generate mountains of data that companies might struggle to analyze effectively anyway.

00;12;47;17 - 00;13;04;29 Unknown: Yeah. The goal, based on the sources, should be a survey that takes roughly one minute to complete. Anything approaching two or three minutes is already pushing it. It really is about quality of questions over sheer quantity. And setting expectations is key here too, right? Like, tell people upfront how many questions there are or how long it should take.

00;13;05;07 - 00;13;27;16 Unknown: Absolutely vital. One example was bad because it requested two minutes during a busy task and had a progress indicator that just made the time commitment look intimidating. Similarly, KLM asked for two minutes of feedback during the stressful process of checking in for a delayed flight: just bad timing and too long for the context. Contrast that with the positive CD Baby example via email.

00;13;27;17 - 00;13;50;26 Unknown: Yeah, that explicitly stated only four questions, and the survey genuinely took about 30 seconds. That's hitting the mark perfectly: clear upfront expectation-setting and delivering on the promise of brevity. The source even pointed out that using the numeral 4 instead of spelling out "four" in the email heading was a smart little detail, because people scan online text so quickly. Little things matter.

00;13;50;27 - 00;14;19;14 Unknown: Okay, guideline four: offer flexible feedback formats. How should companies approach the types of questions they ask? Well, a good approach uses a mix. You need the quantitative data from rating scales, you know, stars, thumbs, numerical scores, but they need to be clearly labeled positive and negative, and potentially multiple-choice questions too. But the source argues strongly, and I definitely agree, that it's crucial to always include an optional open-ended text field where users can elaborate if they want to.

00;14;19;17 - 00;14;40;09 Unknown: See, you get the structured data you can easily analyze, but users aren't trapped if their feedback doesn't quite fit the predefined boxes. Exactly. This gives them a voice to provide context, mention issues you didn't even think to ask about, or offer suggestions. Now, the best practice for placing that optional text field: it's usually at the end of the survey.

00;14;40;11 - 00;15;02;06 Unknown: Right. The IKEA example showed why starting the survey with a broad open-ended question felt a bit annoying and was kind of a barrier to entry for the user; they didn't know where to start. Yeah, it feels like homework right away. Nobody wants that, right? Another smart placement for an optional text field is after a qualifying question. For instance: did you encounter any issues during your checkout?
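Here's a minimal sketch of a survey shaped by these guidelines: a couple of labeled ratings, a qualifying yes/no question, and an optional open-text field that only appears when it's relevant. The question wording, the four-question structure, and the showIf mechanism are illustrative assumptions, not a real survey library.

```typescript
// Minimal sketch of a short, flexible feedback survey. Question texts are assumed.

type Question =
  | { kind: "rating"; id: string; prompt: string; labels: [string, string] }
  | { kind: "yesNo"; id: string; prompt: string }
  | {
      kind: "openText";
      id: string;
      prompt: string;
      optional: true;
      showIf?: { questionId: string; equals: boolean };
    };

// Four questions, roughly a 30-second survey. The open text field is optional
// and only appears after a "yes" to the qualifying question.
const checkoutSurvey: Question[] = [
  { kind: "rating", id: "ease", prompt: "How easy was checkout?", labels: ["Difficult", "Easy"] },
  { kind: "rating", id: "speed", prompt: "How fast did checkout feel?", labels: ["Slow", "Fast"] },
  { kind: "yesNo", id: "hadIssue", prompt: "Did you encounter any issues during your checkout?" },
  {
    kind: "openText",
    id: "issueDetails",
    prompt: "Please tell us more about the issues you faced (optional)",
    optional: true,
    showIf: { questionId: "hadIssue", equals: true },
  },
];

// Only render the open text field when its qualifying answer matches.
function visibleQuestions(
  survey: Question[],
  yesNoAnswers: Record<string, boolean>
): Question[] {
  return survey.filter(
    (q) =>
      q.kind !== "openText" ||
      !q.showIf ||
      yesNoAnswers[q.showIf.questionId] === q.showIf.equals
  );
}

// Microcopy on submit: thank the user and say how the feedback will be used.
const thankYouMessage = "Thanks! Your answers help us improve checkout.";
```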
00;15;02;08 - 00;15;27;06 Unknown: If the user says yes, then you present a text field: "Please tell us more about the issues you faced (optional)." Good examples cited included Minted, using a simple thumbs up/down with an optional character-limited text box, and Sonos, which used rating scales but, you know, could probably be improved by adding that open field for comments. Okay. And finally, guideline number five: appreciate and incentivize.

00;15;27;12 - 00;15;47;29 Unknown: This feels pretty fundamental. It absolutely is. Giving feedback is a favor from the user; they're taking their time and energy to help you improve. Companies should use microcopy, those small bits of text on buttons, forms, and confirmation screens, to briefly explain how the feedback will be used, usually just stating it helps improve the product or service, and to simply say thank you.

00;15;48;02 - 00;16;12;24 Unknown: That's non-negotiable. And offering an incentive can really boost participation, I imagine. It definitely can. Things like loyalty points, discounts, or entry into a sweepstakes acknowledge the value of their time. The Starbucks example offered entry into a $100 gift card sweepstakes for completing a survey about their omnichannel experience. The source noted the survey link could have been maybe a bit more prominent, but the idea of the incentive itself was sound.

00;16;12;26 - 00;16;36;29 Unknown: That makes sense. A little gesture goes a long way to show you value their input. And there was a really interesting idea mentioned from Embrace Pet Insurance: allowing users to reward specific staff members they interacted with. Oh, how'd that work? They had this system where users could conceptually give, like, a beer or a long lunch to a customer service rep who helped them out.

00;16;37;01 - 00;17;00;17 Unknown: Oh, that's great. Yeah. It adds this layer of personal connection and makes the feedback feel like it's directly impacting a person, not just disappearing into some corporate black hole. So, okay, we've dug into what users value in interfaces based on their concrete feedback, and we've covered the expert-backed best practices for actually asking for that feedback effectively. What happens once all this valuable feedback rolls in?

00;17;00;17 - 00;17;24;08 Unknown: It can't just sit there, right? Absolutely not. That's a key point the sources make: the feedback is only useful if it's acted upon. That means reading it regularly, maybe weekly or monthly, depending on the volume. But critically, it means grouping similar feedback together to identify patterns, themes, and recurring issues. You're looking for the signal in the noise, not just individual comments scattered everywhere.

00;17;24;08 - 00;17;43;26 Unknown: Right. This analysis part is where the rubber meets the road, then. It really is. This is what helps prioritize which fixes are most needed, where the team might have accumulated some UX debt that needs clearing, and, importantly, where you might need to conduct deeper or more formal user research to truly understand the root cause of an issue.

00;17;43;28 - 00;18;05;11 Unknown: And it also helps you spot what's working well, so you know when to congratulate the team for positive feedback too. And while this ongoing stream of feedback is incredibly valuable, especially when dedicated user research isn't always feasible or frequent, it shouldn't replace that deeper research. Think of ongoing feedback as your sort of early warning system and temperature check.
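As a rough illustration of that grouping step, here's a keyword-based TypeScript sketch. The theme names and keyword lists are assumptions for the example; a real voice-of-customer program would code responses more carefully, but the shape of the analysis, bucket comments by theme and surface the recurring negatives, is the same.

```typescript
// Rough sketch of grouping raw feedback into themes you can prioritize.
// Theme names and keyword lists are illustrative assumptions.

interface FeedbackItem {
  rating: "positive" | "negative" | "neutral";
  comment: string;
}

const THEMES: Record<string, string[]> = {
  "too many clicks": ["click", "clicks", "extra step"],
  "unclear interaction": ["confusing", "not intuitive", "didn't know"],
  "table and scrolling issues": ["scroll", "column", "table"],
};

// Bucket each comment under every theme whose keywords it mentions.
function groupByTheme(items: FeedbackItem[]): Map<string, FeedbackItem[]> {
  const groups = new Map<string, FeedbackItem[]>();
  for (const item of items) {
    const text = item.comment.toLowerCase();
    for (const [theme, keywords] of Object.entries(THEMES)) {
      if (keywords.some((kw) => text.includes(kw))) {
        groups.set(theme, [...(groups.get(theme) ?? []), item]);
      }
    }
  }
  return groups;
}

// Recurring negative themes are the candidates for UX-debt fixes or deeper research.
function topNegativeThemes(groups: Map<string, FeedbackItem[]>): [string, number][] {
  return [...groups.entries()]
    .map(([theme, items]): [string, number] => [
      theme,
      items.filter((i) => i.rating === "negative").length,
    ])
    .sort((a, b) => b[1] - a[1]);
}
```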
00;18;05;13 - 00;18;27;25 Unknown: Yeah, dedicated research really helps you understand the why behind the what that the feedback often gives you. They work best in combination, really. This has been a pretty comprehensive deep dive into user feedback, hasn't it? Hitting it from both the user's perspective, what they notice and value in interfaces, and the company's perspective, how to ask for that input in a way that gets useful results without annoying people.

00;18;27;25 - 00;18;54;09 Unknown: Yeah. The interface examples really showed us that efficiency, clarity, and flexibility are just non-negotiable in good design. Absolutely. And the expert guidelines gave us that clear, actionable playbook for asking: time it right with task-then-ask, use the right channel, keep it brief, offer flexible formats, especially that optional open text field, and always, always appreciate and maybe consider incentivizing the user.

00;18;54;09 - 00;19;16;20 Unknown: And understanding these points gives you, the listener, a much clearer picture of the forces shaping the digital products you use every day, and it equips you to instantly recognize a well-designed feedback request versus one that's just going to waste your time and maybe cause some frustration. Which leads us perfectly to our final provocative thought for you. Think about the last time you were asked for feedback online or in an app.

00;19;16;22 - 00;19;25;19 Unknown: Now, armed with these insights into what makes feedback valuable and how to ask for it effectively, how well did that specific experience truly respect your time and perspective?