Speaker 1: Okay, let's dive into something that's really buzzing right now: human-centered design, but specifically within generative AI startups.
Speaker 2: Right. And not just any startups. We're talking about those in really high-stakes areas. Think health care, cybersecurity, national security.
Speaker 1: Yeah. Fields where getting it wrong isn't just inconvenient. It can have, well, serious consequences.
Speaker 2: Absolutely. Generative AI is—I mean, it's everywhere. It's exploding. But in these sectors, things like user trust and whether it actually works in the real world, that's paramount.
Speaker 1: Totally. Imagine you're a founder, right? You've got this amazing AI concept. The temptation is just to build it fast.
Speaker 2: Sure, get it out there.
Speaker 1: But what if that AI is meant for doctors and they just hate using it? Or it's a cybersecurity tool that actually creates more noise for already swamped analysts?
Speaker 2: That's precisely the problem human-centered design aims to solve. It prevents those disasters.
Speaker 1: So what we want to unpack today is, what does it actually look like to design AI that genuinely helps people in these critical jobs, especially when lives or major security issues might be on the line?
Speaker 2: We'll look at some specific methods. How do startups really get inside the heads of, say, doctors or security analysts? Even military planners?
Speaker 1: Yeah. How do they build AI that fits seamlessly? And, you mentioned trust. How are some companies making their AI less of a black box, making its reasoning clearer?
Speaker 2: The transparency piece is huge. Definitely no black boxes allowed when the stakes are high.
Speaker 1: Okay, so let's start with the basics. Why is human-centered design so critical, especially for these early-stage startups, pre-seed to Series A?
Speaker 2: Well, at its core, HCD is about flipping the script. It means putting the user's needs, their desires, their context first, sometimes even before you fully know whether the technology is feasible. Desirability before feasibility.
Speaker 1: That sounds almost counterintuitive for a tech startup.
Speaker 2: It can seem that way, but think about the risks of not doing it. You mentioned the doctor example. There are classic stories, even pre-AI, like early electronic medical records.
Speaker 1: Yeah, I remember hearing about those. Technically functional, but the doctors found them so clunky.
Speaker 2: Exactly. Brilliant tech, maybe. But if it disrupts workflow or adds frustration? Adoption tanks. Or cybersecurity—imagine a tool spewing out thousands of alerts.
Speaker 1: That sounds like alert fatigue, a huge problem. Instead of helping, it burns analysts out.
Speaker 2: Precisely. So HCD isn't just a nice-to-have in the discovery phase. It's about making sure you're building the right thing—something people will actually use because it solves a real problem and, crucially, makes their difficult jobs easier.
Speaker 1: Okay, that makes a lot of sense. So if that's the why, what about the how? What are these HCD-driven startups actually doing to get this user insight?
Speaker 2: They have a whole toolkit. One really powerful method is contextual inquiry and interviews. But this isn't just, you know, sitting in a conference room asking questions.
Speaker 1: You mentioned shadowing doctors. So researchers are actually in the hospital, in the ER, or...
Speaker 2: Observing military planners during an exercise, or watching cyber analysts respond to an incident. You need to be in their environment to truly grasp the workflow, the pressures, the real pain points that might not even come up in a standard interview.
Speaker 1: Wow, that's deep immersion. What else?
Speaker 2: Co-design workshops are another great one—picture getting clinicians, nurses, maybe even patients in a room with the AI developers.
Speaker 1: So they're designing together?
Speaker 2: Yeah, brainstorming, sketching interfaces, mapping out workflows side by side. It ensures the tech aligns with their mental models—how they actually think and work—rather than forcing them to adapt to the tech.
Speaker 1: I can see how that would build buy-in right from the start, but it sounds like it could also be messy.
Speaker 2: It needs good facilitation, absolutely. But the value you get from that direct collaboration is immense. Then there's rapid prototyping and usability testing.
Speaker 1: Okay, this sounds more like traditional product development.
Speaker 2: Sort of, but the emphasis is on rapid and early. We're talking low-fidelity stuff first: paper sketches, maybe simple click-through digital mockups.
Speaker 1: Really basic?
Speaker 2: Sometimes, yes. Or even fake chatbot interfaces. The whole point is to get concepts in front of users incredibly quickly, get that gut reaction, and iterate before you've sunk months and serious money into coding something complex.
Speaker 1: Fail fast. Learn fast.
Speaker 2: Exactly. And there's another fascinating technique, especially for complex AI behaviors: Wizard of Oz testing.
Speaker 1: Wizard of Oz. Like, "pay no attention to the man behind the curtain"?
Speaker 2: Pretty much. The user interacts with what seems like a fully functional AI system. They think they're talking to the machine, or the machine is doing the analysis. But behind the scenes, a human expert is actually pulling the levers—simulating the AI responses or actions.
Speaker 1: Oh, I get it. So you can test really sophisticated AI interactions and see how users react to them without having to build the entire complex AI model up front.
Speaker 2: Precisely. It's a fantastic way to validate those core concepts and user experiences very early on, saving potentially huge development costs if the initial idea isn't quite right.
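To make the Wizard of Oz idea concrete: a minimal version can be two terminals and a socket, with the test participant chatting to what looks like an AI assistant while a hidden human expert types every reply. The sketch below is purely illustrative; the file name, port, and prompts are invented for this example, not taken from any startup's actual tooling.

```python
# wizard_of_oz.py -- minimal Wizard of Oz chat harness (illustrative sketch).
# Start the hidden operator first:   python wizard_of_oz.py wizard
# Then start the participant's side: python wizard_of_oz.py user
# The participant sees an "AI assistant"; replies actually come from the wizard.
import socket
import sys

HOST, PORT = "127.0.0.1", 5555

def run_wizard() -> None:
    """Hidden human expert: sees each user message, types the 'AI' reply."""
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("rw", encoding="utf-8") as chat:
            for message in chat:  # one line per user message
                print(f"[user said] {message.rstrip()}")
                reply = input("your reply (sent as the AI) > ")
                chat.write(reply + "\n")
                chat.flush()

def run_user() -> None:
    """Participant-facing console, presented as a real AI assistant."""
    with socket.create_connection((HOST, PORT)) as sock:
        with sock.makefile("rw", encoding="utf-8") as chat:
            print("Connected to the AI assistant. Ask it anything.")
            while True:
                chat.write(input("you > ") + "\n")
                chat.flush()
                print("AI  >", chat.readline().rstrip())

if __name__ == "__main__":
    run_wizard() if sys.argv[1:] == ["wizard"] else run_user()
```

The plumbing is deliberately trivial. The value is in the session: you capture genuine reactions to an "AI" that doesn't exist yet, and the wizard's logged replies become raw material for specifying what the real model needs to do.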
Speaker 1: That's clever. Okay. And eventually they have to test it in the real world, right?
Speaker 2: Yes. That leads to pilot studies and real-world trials. This is where, you know, the rubber meets the road—deploying a prototype or an early version in a real clinic or a specific DoD unit.
Speaker 1: And just watching closely.
Speaker 2: Observing, collecting data, getting structured feedback. How do people actually use it when it's part of their real job? What breaks? What works better than expected? This refines the product before a wider rollout.
Speaker 1: So these methods give you the raw user data. Are there frameworks that help structure this whole HCD discovery process?
Speaker 2: Yeah, definitely. A really common one is design thinking, often visualized as that double diamond.
Speaker 1: I've seen that.
Speaker 2: Right. It represents diverging—first exploring the problem space broadly, really understanding all facets of the user need without jumping to conclusions. Empathize. Define.
Speaker 1: So resist the urge to just start building.
Speaker 2: Exactly. Then you converge to pinpoint the core problem you're actually trying to solve. Only after that deep understanding do you start diverging again to brainstorm potential solutions, and then converge again on the best one to prototype and test.
Speaker 1: Explore the problem. Define it. Explore solutions. Deliver. Makes sense. What's the other framework?
Speaker 2: Lean startup, which fits really nicely with design thinking. Lean is all about building, measuring, learning—rapid cycles.
Speaker 1: The whole minimum viable product idea.
Speaker 2: Yes. Test your core assumptions quickly with an MVP, get it out there, get real user data, learn from it, and then pivot or persevere. It's about continuous feedback loops, constantly validating your hypotheses about the user and the market.
Speaker 1: So both frameworks really push for flexibility and learning rather than just rigidly following a plan.
Speaker 2: Absolutely. It's not about blindly following steps A-B-C. It's about having a structured approach to learning, being agile, and balancing that deep user empathy with rapid iteration to stay ahead of the curve—especially in AI.
Speaker 1: Which seems essential in these fields we're focusing on. Can we dive into some specifics? How does HCD look different in, say, health care versus cybersecurity?
Speaker 2: Yeah, the core principles are the same, but the application gets tailored. In health care, for instance, the complexity of clinical workflows and patient safety concerns are huge.
Speaker 1: So testing needs to be incredibly rigorous.
Speaker 2: Definitely. You might see startups running highly realistic simulation labs. Imagine doctors using a prototype AI assistant during a mock patient consultation.
Speaker 1: So you can see exactly how it fits—or doesn't fit—into that conversation flow. The decision-making moment.
Speaker 2: Exactly. Does it interrupt? Does it provide information in a useful format at the right time? There are examples like Hippocratic AI, which reportedly went to extraordinary lengths, running thousands of safety simulations with actual doctors and nurses.
Speaker 1: Thousands. Wow.
Speaker 2: Yeah. Building that trust, ensuring the AI truly meets clinical needs and safety standards before it gets near real patients. It's about deep collaboration from day one.
Speaker 1: Okay. Now shift to cybersecurity. What are the unique HCD challenges there?
Speaker 2: Well, think about the cyber analyst. They're often overwhelmed—drowning in data and alerts. The pressure is immense.
Speaker 1: The alert fatigue we mentioned.
Speaker 2: So good HCD here isn't just about finding threats faster. It's about reducing the analyst's cognitive load, making their lives less stressful, not more.
Speaker 1: How does that translate into design?
Speaker 2: Take a company like Andesite, for example. Their whole concept was building a bionic SOC—a security operations center where the focus is squarely on empowering the human analyst, not replacing them.
Speaker 1: Augmentation, not automation alone, right?
Speaker 2: They focused on automating the truly tedious tasks and providing really clear explanations for why the AI flagged something. And crucially, giving the analyst control over the level of automation—letting the humans stay in the loop and make the final call.
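That combination of clear explanations and analyst-controlled automation can be sketched in a few lines. The toy below is hypothetical, with invented names and thresholds rather than anything from Andesite's actual product; the point it illustrates is that every alert carries a plain-language rationale, and the system only acts on its own within whatever autonomy level the analyst has chosen.

```python
# triage_policy.py -- illustrative sketch of analyst-controlled automation.
# All names, levels, and thresholds here are invented for the example.
from dataclasses import dataclass
from enum import Enum

class AutomationLevel(Enum):
    SUGGEST_ONLY = 1  # AI annotates alerts; the analyst does everything
    AUTO_TRIAGE = 2   # AI may also close obvious false positives on its own
    AUTO_CONTAIN = 3  # AI may also isolate hosts, pending analyst review

@dataclass
class Alert:
    host: str
    signal: str
    risk_score: float  # model confidence, 0.0 to 1.0
    explanation: str   # plain-language answer to "why was this flagged?"

def triage(alert: Alert, level: AutomationLevel) -> str:
    """Decide what the system does itself versus what it hands to the human."""
    # Every path carries the explanation forward; no black-box verdicts.
    if alert.risk_score < 0.2 and level.value >= AutomationLevel.AUTO_TRIAGE.value:
        return f"auto-closed (low risk): {alert.explanation}"
    if alert.risk_score > 0.9 and level is AutomationLevel.AUTO_CONTAIN:
        return f"host {alert.host} isolated, queued for analyst review: {alert.explanation}"
    # Default: augment, don't act. Surface the reasoning and wait for a human.
    return f"escalated to analyst with rationale: {alert.explanation}"

if __name__ == "__main__":
    alert = Alert(
        host="ws-0142",
        signal="rare parent-child process pair",
        risk_score=0.93,
        explanation="powershell.exe spawned by excel.exe, plus traffic to a domain first seen today",
    )
    # The analyst, not the vendor, chooses how much autonomy the AI gets.
    print(triage(alert, AutomationLevel.SUGGEST_ONLY))
    print(triage(alert, AutomationLevel.AUTO_CONTAIN))
```

The design point sits in those last lines: the human picks the automation level, and even the most autonomous path ends in a review queue rather than a silent action.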
Speaker 1: That sounds key for adoption and security. Giving up control is hard.
Speaker 2: Very hard. Trust and transparency are vital. Now think about national security—even higher stakes, potentially life-or-death decisions.
Speaker 1: Yeah. Supporting military logistics, intelligence analysis. The margin for error is tiny.
Speaker 2: And the operating environments are often incredibly complex and challenging. There's the story of DEFCON AI, which apparently started because a four-star general directly asked for help.
Speaker 1: A general needed AI for what?
Speaker 2: To help the Air Force move critical resources—people, equipment—faster and more efficiently, especially in difficult, contested environments. The key was their incredibly close collaboration with military users throughout the entire design and development process.
Speaker 1: So, built hand in hand with the people who would actually use it in the field.
Speaker 2: Exactly. Ensuring it wasn't just technologically powerful, but also operationally relevant, usable under pressure, and aligned with how the military actually operates. That deep user integration is non-negotiable.
Speaker 1: These examples really highlight how context shapes the HCD approach. Now, I know Red Cell Partners is a firm that specifically incubates startups in these three areas—health care, cyber, national security. Does their approach reflect this heavy focus?
Speaker 2: Very much so. From what's publicly known, HCD seems baked into their incubation process. They appear to invest heavily right at the start in structured discovery programs, meaning dedicated time and resources for that deep user research: the contextual inquiries, the workshops, the early prototyping, setting up pilot engagements. They seem to recognize that this upfront investment pays off massively down the line.
Speaker 1: So they don't rush the early stages.
Speaker 2: Apparently not. They also leverage extensive domain expert networks. So imagine having, you know, retired generals, senior health care executives, top cybersecurity practitioners available to provide insights and guidance during the design process.
Speaker 1: That access must be invaluable for understanding those complex worlds.
Speaker 2: Absolutely. And it seems they often start with problems identified by those industry insiders—a mission-first approach. So they're tackling needs that are already validated by people on the ground, rather than just pursuing cool tech looking for a problem.
Speaker 1: Which circles back to building the right thing.
Speaker 2: Exactly. And a consistent theme seems to be focusing on human-AI collaboration—building tools that augment human expertise, making the doctors, analysts, or commanders better at their jobs rather than trying to replace them wholesale.
Speaker 1: That seems like a smart strategy for adoption and trust in these fields.
Speaker 2: Definitely. And finally, they seem quite transparent about using this HCD process. They talk about it. That signals to potential customers, partners, and investors that user needs are genuinely at the core of how they build companies.
Speaker 1: It builds credibility, showing they're not just tech-focused but user-focused.
Speaker 2: Right. It says, "We understand your world and we're building with you."
Speaker 1: Okay. This has been a fascinating deep dive. As we wrap up, what's the main thing you think our listeners should take away from this?
Speaker 2: I think the biggest takeaway is that, yes, generative AI technology is moving incredibly fast. It's powerful, it's transformative. But technology alone—it's just not enough.
Speaker 1: Especially not in these high-stakes domains.
Speaker 2: Exactly. The real key to success—the differentiator—is deeply understanding and meeting human needs. Building AI that isn't just smart, but usable, trustworthy, and ultimately makes people's lives—and often very critical jobs—better. It's about the human side of the equation.
Speaker 1: That's a great way to put it. So maybe a final thought for our listeners to chew on: what are the potential downsides, the risks, if we don't prioritize this human-centered approach when developing generative AI, particularly in health care, cybersecurity, and national security?
Speaker 2: Yeah. What are the consequences if we get it wrong?
Speaker 1: Something to think about as this technology continues to evolve so rapidly around us.