00;00;00;00 - 00;00;19;05 Unknown Imagine a world where your digital tools, they don't just sit there waiting for you to tell them exactly what to do. Right? Instead, they actually act autonomously, making decisions, executing pretty complex tasks all on their own. Yeah, and this isn't like some far-off sci-fi thing anymore, is it? Not at all. It's happening right now. Yeah. 00;00;19;11 - 00;00;41;07 Unknown And it's fundamentally reshaping, well, everything. How we design, how we build, how we interact with technology. It's a big shift. It really is a seismic shift. Yeah. We're definitely moving beyond just simple automation or even, you know, the really advanced assistance we've gotten used to. We're stepping into an era where AI is becoming, well, a genuine collaborator, an autonomous one. 00;00;41;08 - 00;01;10;06 Unknown Yeah. It's not just your smart assistant anymore. It's like an active participant in the process. Absolutely. An active participant. And that's exactly what we're diving into: this fascinating, sometimes kind of bewildering world of agentic AI. We'll be looking at how we're evolving from AI just assisting users to AI enabling truly autonomous collaboration, and what that means for you listening in, right? 00;01;10;06 - 00;01;31;15 Unknown Whether you're a product leader or a designer or maybe just someone really curious about, you know, the absolute bleeding edge of tech. Exactly. And this whole exploration, it's actually part of a bigger picture for us, isn't it? It is. It's central to our mission. This is a generative AI podcast, and it's curated specifically for your high-level product thinking, keeping you aware of the latest trends. 00;01;31;17 - 00;01;54;22 Unknown Every week, we're digging through developments across UX, AI, tech, always searching for those compelling signals. The stuff that cuts through all the noise out there. There's so much noise, right?
And our goal is to synthesize all that, boil it down into these thematic deep dives, basically giving you a shortcut to being really well informed, hopefully with some surprising facts thrown in and, you know, maybe just enough humor to keep you hooked. 00;01;54;29 - 00;02;19;13 Unknown We try. And for those of you who've been listening for a while, you know, this entire process, this whole deep dive approach, it's really rooted in the rigorous insights curated by principal product designer Chris Mullins. That's right. His weekly workflow is pretty intense, scanning just tons of content, new product releases, tool updates, design frameworks, UX strategy posts, AI experiments, you name it. 00;02;19;13 - 00;02;40;19 Unknown All those breakthroughs in applied tech. Exactly. He's spotting emerging ideas, sharing critical insights publicly. And this deep dive you're listening to right now, it's a direct result of that really meticulous expert curation process, pulling the signal from the noise. Precisely. So as we get into it today, here's what you can expect. We're going to unpack what agentic AI really is. 00;02;40;22 - 00;02;59;29 Unknown We'll explore its real-world shifts. You know, where it's actually making waves in different industries. Okay. Then we'll delve into the critical implications for interface design. Huge changes there. And for the very role of designers themselves. And we have to talk about the guardrails. Absolutely. The essential human-centered guardrails we need for this new era. Can't skip that. 00;03;00;06 - 00;03;22;26 Unknown And then there's this really interesting nuance about AI and creativity. Yes. Connecting back to Chris Mullins's own insights on structured ideation. It's fascinating. It really is. It should be quite an illuminating journey. I think it might challenge some core assumptions along the way. Definitely. So let's zoom out a bit. How did we even get here? What's the journey been like for AI, say, over the past year or so?
00;03;22;26 - 00;03;45;27 Unknown That led us to this point, this seismic shift, as you called it? Yeah. Well, we started with AI basically as passive responders, right? You type in a prompt, it spits back an answer. Simple interaction: question in, answer out. Then, boom, generative AI came along and transformed things like brainstorming, prototyping. Gave us new tools. Big leap. Huge leap. But now we're seeing something. 00;03;45;27 - 00;04;09;01 Unknown I think even more profound emerging: agentic AI. And these aren't just tools anymore that, you know, assist you. They're systems that truly act on their own initiative. And that distinction is just, it's critical. The core difference here. Think of it less like a super smart calculator and more like maybe a junior colleague you can delegate to. 00;04;09;05 - 00;04;39;29 Unknown Okay, that's a good analogy. Because these agents, they don't just process information based on one single prompt, they make a whole series of decisions. They execute multiple tasks, sometimes complex ones. They participate in processes in ways that fundamentally reshape how designers, how engineers, heck, how entire organizations think about user experience. So it's moving from automation, where you script every step. Exactly, where you define every step. To true delegation, where you give the AI a goal, an outcome. 00;04;40;04 - 00;05;01;02 Unknown Yes, give it a mission, not just a single instruction. And it figures out the steps needed to achieve that goal. It plans, it acts, it learns. That's a crucial distinction, that planning and acting part. So okay, what does this actually mean then, when your digital tool isn't just a static canvas or a prompt box waiting for you, but it's a decision-making agent, one that takes initiative? 00;05;01;05 - 00;05;27;01 Unknown It feels like a philosophical shift almost. It absolutely is.
What's the biggest, like, mental hurdle for product teams moving from a "how can AI help me do this specific thing" world to "can I actually delegate this entire outcome to an AI and, crucially, trust it to act appropriately"? That mental model shift is, yeah, yeah, it's significant. We are definitely moving from that old command-and-control paradigm. 00;05;27;02 - 00;05;45;06 Unknown Tell it what to do. Right. To more of an oversight and guidance model. Think about it. Previously, if you wanted, say, marketing copy, you'd prompt an LLM, review it, tweak it, prompt again, iterate. The usual loop. The usual loop. With an agentic system, you might tell it something much bigger, like increase engagement on our social media channels by 15% this quarter. 00;05;45;07 - 00;06;07;27 Unknown Okay, that's a goal, not an instruction. Precisely. Yeah. And the agent would then potentially autonomously research trends, draft different post variations, schedule them strategically, analyze the performance data as it comes in, and then adjust. And then adapt its strategy based on those results. Maybe it tries different tones, different visuals, different timing, all with potentially very minimal human intervention needed along the way. 00;06;08;03 - 00;06;35;10 Unknown The system now has a persistent goal and, importantly, the ability to act iteratively towards achieving it. It's usually built with key components, right? Like a reasoning engine, often a large language model. The brain, kind of. Yeah. Then an action model, which lets it actually use tools or APIs to do things. To do things. Exactly. Yeah. And crucially, memory. So it can track its progress, remember what worked and what didn't, and learn from past actions. 00;06;35;13 - 00;07;00;11 Unknown So it's this iterative planning, execution and learning cycle. That's what makes it agentic. That's the core of it. Yeah. And like you said, this isn't just theory. It's playing out now. We're seeing agentic AI in action. Where are some examples?
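That planning, execution and learning cycle can be sketched in a few lines of Python. Everything below is a hypothetical stand-in for illustration: the goal, the toy "tools", and the rule-based reasoning engine, which a real agent would replace with an LLM and real APIs.

```python
# Minimal sketch of an agentic plan-act-learn loop.
# The "reasoning engine" is a trivial rule-based stand-in for an LLM;
# the tools, goal, and metrics are all invented for illustration.

def draft_posts(memory):
    # Action: generate candidate posts (stubbed).
    return {"drafted": memory.get("drafted", 0) + 3}

def analyze_performance(memory):
    # Action: pretend engagement grows as more posts go out.
    return {"engagement": memory.get("drafted", 0) * 2}

TOOLS = {"draft_posts": draft_posts, "analyze_performance": analyze_performance}

def reasoning_engine(goal, memory):
    """Pick the next action toward the goal (an LLM in a real system)."""
    if memory.get("engagement", 0) >= goal["target_engagement"]:
        return None                      # goal met: stop
    if memory.get("drafted", 0) <= memory.get("engagement", 0):
        return "draft_posts"             # need more content
    return "analyze_performance"         # measure what we have

def run_agent(goal, max_steps=10):
    memory = {"history": []}             # persistent state across steps
    for _ in range(max_steps):
        action = reasoning_engine(goal, memory)
        if action is None:
            break
        result = TOOLS[action](memory)   # action model: execute the tool
        memory.update(result)            # learn: fold results into memory
        memory["history"].append(action)
    return memory

state = run_agent({"target_engagement": 10})
```

The point of the sketch is the shape of the loop: the agent keeps choosing, executing, and learning from actions until the goal is met, rather than executing one instruction.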
Well, take luxury brands, for instance. It's quite interesting. A recent Vogue Business report highlighted how companies in fashion and beauty are already experimenting quite actively with agentic intelligence. 00;07;00;12 - 00;07;27;17 Unknown Okay. And they make this useful distinction between what they call visible agents and invisible agents. Visible and invisible. Okay. What's the difference? So visible agents are probably what most people might picture first. Think of those really sophisticated chatbots you sometimes find on a luxury brand's website, the ones that actually seem helpful, hopefully. Or maybe an AI stylist that a customer interacts with directly, you know, to get personalized fashion recommendations. Like a personal shopper. 00;07;27;17 - 00;08;05;21 Unknown But AI. Exactly. Like a digital concierge that really understands your style preferences, learns from your feedback, and actively curates entire outfits for you. The AI presence is obvious in these front-facing interactions. Got it. So what about the invisible ones? That sounds intriguing. That's where it gets really interesting, I think. These are the agents working purely behind the scenes. Okay. Autonomously automating really complex logistics networks, maybe, or driving dynamic personalization across millions of users on a website, tailoring experiences in real time. Or pricing? Or even implementing real-time pricing strategies. 00;08;05;26 - 00;08;27;23 Unknown Imagine the agent reacting to a sudden surge in demand for a specific handbag, maybe because a celebrity was seen with it, and it dynamically adjusts the price across different regions, checks inventory levels, optimizes shipping routes from various warehouses, and maybe even automatically triggers a reorder of raw materials from suppliers to prevent stockouts. All without a human clicking a button. 00;08;27;23 - 00;08;52;00 Unknown Potentially, yes, all autonomously, based on predefined rules, goals, and real-time data feeds.
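A rough sketch of that invisible pricing agent, assuming invented thresholds, a made-up product record, and hard-coded guardrails; a real system would pull live demand and inventory feeds instead.

```python
# Toy "invisible agent" for dynamic pricing and restocking.
# All thresholds and data are invented for illustration.

def pricing_agent(product, demand_spike, guardrails):
    actions = []
    price = product["price"]
    if demand_spike > guardrails["spike_threshold"]:
        # Raise the price, but never beyond the brand-approved ceiling.
        price = min(price * 1.10, guardrails["max_price"])
        actions.append(("set_price", round(price, 2)))
    if product["stock"] < guardrails["min_stock"]:
        # Autonomously trigger a reorder to prevent a stockout.
        actions.append(("reorder", guardrails["reorder_qty"]))
    return actions

actions = pricing_agent(
    product={"price": 2000.0, "stock": 4},
    demand_spike=3.5,  # e.g. 3.5x normal demand after a celebrity sighting
    guardrails={"spike_threshold": 2.0, "max_price": 2150.0,
                "min_stock": 10, "reorder_qty": 50},
)
# actions → [("set_price", 2150.0), ("reorder", 50)]
```

Note that the autonomy lives entirely inside predefined guardrails, which is exactly the "rules, goals, and real-time data feeds" framing from the discussion.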
That's quite something. It is. And in both scenarios, visible or invisible, the core challenge is immense, right? How do you delegate that level of autonomous decision making? Yeah. While meticulously maintaining that critical brand voice, that specific luxury feel, and maybe most importantly, ensuring user trust. 00;08;52;00 - 00;09;21;15 Unknown Trust is huge, especially in luxury. Absolutely. So brands like, say, Dior or LVMH, they're not just using AI for basic optimization anymore. They're actually creating systems that can suggest campaigns, adapt marketing flows, even dynamically create content or personalized offers with very little human input needed day to day. So the AI isn't just following a template? No, it might be tailoring visuals, text, the emotional tone of an ad, all based on an individual's known preferences or online behavior. 00;09;21;18 - 00;09;46;13 Unknown That's a huge step beyond just assistance. It really speaks to the AI's ability to act on its own behalf. Within guardrails, of course, but still acting. Maintaining the brand identity while acting autonomously. That's the tightrope walk. It is. And we're seeing this kind of profound reshaping happening in other areas, too, like content production and media. Oh yeah, definitely. At the NAB show, that's a big media and broadcast event. 00;09;46;15 - 00;10;08;24 Unknown A major topic was how AI is now handling huge chunks of post-production. Okay, not just as a tool for editing, like a smarter filter or something. Exactly. Not just a tool for the editor, but more like an autonomous process manager. Okay, unpack that. Autonomous process manager. This is where that concept of delegation really comes into focus. Think about the sheer scale and complexity involved in global media distribution today. 00;10;08;28 - 00;10;37;21 Unknown It's massive, right?
So studios are using agentic systems to automatically generate subtitles, not just in one language but potentially dozens, localizing voiceovers in real time for global audiences as content is streamed, even predicting optimal monetization strategies across all these diverse channels: streaming, broadcast, social media clips, everything. So the agent might look at a new series and say, okay, based on audience demographics, competitor releases, ad revenue projections, 00;10;37;23 - 00;10;58;04 Unknown the best release window is X, and we should push these specific clips to TikTok. Precisely. And then it might autonomously coordinate the delivery of all the different versions of the content to the various platforms, making sure all the technical specs, the legal clearances, everything is handled correctly. Wow. That's not an editor using an AI tool. That's the AI managing the workflow. 00;10;58;04 - 00;11;17;11 Unknown It's managing vast interconnected workflows. It's true delegation. And the source you mentioned actually called it UX for machines. Yeah. Where the machine itself is the user of an interface that we've designed specifically for its autonomous tasks. That just kind of bends my brain a little bit, doesn't it? Are we designing UIs for robots to complain about now? 00;11;17;11 - 00;11;39;06 Unknown I'm only half kidding. Well, maybe not complain, but certainly interact with. But UX for machines, that phrase really drives home the shift in thinking, doesn't it? It absolutely does. Which leads us neatly into the broader design implications. What actually happens when the interface itself starts making decisions? Yeah. We look at something like maybe Apple's liquid glass UI concept. 00;11;39;06 - 00;12;00;27 Unknown It seems very visual on the surface, right? But maybe it's a good metaphor for this moment. It's more than just a metaphor.
I think Apple's vision, even if conceptual, hints at a deeper technical reality that's emerging. It reflects this broader shift in how we need to think about interfaces. Away from static layouts. Exactly. Away from layouts where everything is fixed, predefined. 00;12;01;00 - 00;12;23;08 Unknown We're moving towards much more ambient, adaptive, dynamic experiences. Okay, so what does that look like? Imagine a UI that doesn't just change its color scheme based on the time of day. Dark mode, light mode. Right. Basic stuff. But one that autonomously rearranges its entire layout, or prioritizes certain information, or even changes the interaction patterns themselves. Based on what? 00;12;23;09 - 00;12;49;27 Unknown Based on what the background agents are doing, or maybe even predicting what the user will need next. So your screen stops being just a passive window onto a fixed application and becomes more like a dynamic canvas, constantly adapting to an evolving context that's being informed by these autonomous AI actions happening behind the scenes. It's less like a blueprint and more like a, like a living environment. A living, breathing, responsive environment. 00;12;49;27 - 00;13;18;27 Unknown Yes, that's a great way to put it. Okay, so if the interface is becoming this dynamic thing, that has to fundamentally redefine the designer's role, right? Completely. You're not just a layout expert anymore, pushing pixels or defining those neat linear user flows. We all know that model is becoming insufficient. Instead, you become more like, what, system architects? Interaction coaches? Both of those, I think. You're designing for what some people are trying to call a user-agent dyad. A user-agent dyad? 00;13;18;27 - 00;13;45;01 Unknown Okay. Meaning an interaction where an AI agent might actually be the first entity to, quote unquote, touch your UI before the human even gets there. Potentially, yes. Before the human user even arrives.
Think about what that means for the traditional design process. It kind of flips a lot of it on its head, doesn't it? It really does. So designing for this user-agent dyad, it means you're not just thinking about how a human clicks a button, right? 00;13;45;01 - 00;14;12;01 Unknown You're also thinking about how the AI agent perceives the information on the screen, how it decides to act on it. Yes. And crucially, how that action then influences what the human user sees or experiences next. Give me an example. Okay, so in a truly agentic design tool, maybe the AI doesn't just wait for your first prompt. Based on the initial project brief you gave it, it might preemptively populate the design canvas, generating ideas. 00;14;12;02 - 00;14;31;09 Unknown Not just ideas, but maybe entire sections of a design. Or perhaps in a data analysis tool, an agent filters through thousands of data points before you even see them, presenting only what it deems the most relevant insights based on your goals. It acts as an intelligent gatekeeper. So you're designing for the AI's first pass as much as for the human's interaction that follows. 00;14;31;16 - 00;14;55;03 Unknown Exactly. You're essentially designing the conversation between the human and the AI, even when parts of that conversation are happening invisibly, agent to agent or agent to system. It moves us away from thinking about individual screens or linear flows. Definitely. Towards orchestrating these complex behaviors and decision-making systems. You got it. Architects of intelligent behavior, not just UI designers. 00;14;55;03 - 00;15;17;29 Unknown That's a powerful shift. And it brings us to this really fascinating point about AI and creativity. There was a specific comment highlighted in the source material from Chris Mullins that caught my eye. It said Chris Mullins does not believe AI isn't good at brainstorming. The double negative. The double negative. So let's untangle that.
He does believe AI is valuable for ideation, for brainstorming. 00;15;17;29 - 00;15;37;25 Unknown Yes. Yeah. Which is interesting because it cuts against some of the initial, maybe knee-jerk reactions people had. Right? Totally. The immediate fear was, oh no, AI is going to kill creativity. It'll just automate everything and make it bland. Right. The fear it would stifle creativity, not augment it. But the research actually backs up Chris's perspective here. 00;15;37;25 - 00;15;57;23 Unknown Oh, really? Yeah. There's a recent study that confirms this. It showed that designers perceive AI as being most valuable during the divergent thinking phase of the design process. Divergent thinking. So the brainstorming part. Exactly. That's where you're doing the broad ideation, the sketching, the lateral exploration, generating lots and lots of possibilities. 00;15;57;23 - 00;16;22;24 Unknown Okay. It turns out AI isn't just good at replicating existing patterns or automating the grunt work, although it can do that too. Its real power, creatively, seems to be in acting as a genuine creative partner. How so? Well, it can explore the potential solution space vastly more broadly and much, much faster than any single human or even a typical human team could ever hope to. 00;16;22;27 - 00;16;54;08 Unknown So it's not about replacing the designer's ideas, but about expanding the pool of possibilities. Precisely. It's about surfacing surprising solutions you might not have thought of, identifying edge cases you overlooked, or proposing completely alternate paths or approaches that humans, maybe stuck in their own biases or habits, wouldn't immediately jump to. It's like having, I don't know, an infinite number of junior designers who are incredibly knowledgeable but have absolutely no ego or preconceived notions, just constantly churning out ideas. 00;16;54;10 - 00;17;13;22 Unknown That's a pretty good way to think about it.
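That tireless, ego-free idea generator can be sketched in a few lines. The angle lists and the generate_idea stub below are invented placeholders; in a real tool, generate_idea would be a call to a language model, producing far more varied combinations.

```python
# Sketch: an AI "idea generator" flooding the divergent phase with
# independent starting points. Angles, twists, and the generator
# itself are deterministic stubs standing in for an LLM call.

import itertools

ANGLES = ["mobile app", "physical product", "service redesign", "partnership"]
TWISTS = ["for beginners", "subscription-based", "community-driven"]

def generate_idea(angle, twist, problem):
    # A real system would call a model here; this stub just recombines.
    return f"A {twist} {angle} to address: {problem}"

def seed_ideas(problem, n_seeds):
    """Produce n anonymous, independent starting points for a session."""
    combos = itertools.product(ANGLES, TWISTS)
    seeds = [generate_idea(a, t, problem) for a, t in combos]
    return seeds[:n_seeds]   # handed out with no names attached

seeds = seed_ideas("late-night snacking at the office", 6)
```

Because the seeds are generated independently and anonymously, they sidestep the ego and anchoring problems discussed next: no one has to attach their name to a wild idea to get it on the table.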
And the research really emphasized this. Designers don't actually want AI to design for them. No. They want AI to think alongside them. And that's the key phrase, which repositions AI perfectly from just being a command executor to being a true collaborator. And that's central to this whole agent model we're discussing. 00;17;13;25 - 00;17;39;07 Unknown Thinking alongside us, combining human intuition and judgment with the AI's speed and breadth. Exactly. It's a partnership. And this connects so neatly into Chris Mullins's deeper work, doesn't it? His stuff on structured ideation. It really does. He's spent a lot of time analyzing why traditional brainstorming, you know, the classic Alex Osborn method from the 40s, just shouting out ideas, yeah, is often fundamentally flawed. 00;17;39;07 - 00;18;04;17 Unknown Yeah, I've definitely been in brainstorming sessions that felt less than productive. You can see those flaws play out. Oh, absolutely. Mullins points out really specific issues, like production blocking. Production blocking. Yeah, the simple fact that only one person can really speak effectively at a time. So while one person is talking, everyone else is either waiting their turn, maybe forgetting their own ideas, or just not generating new ones because they're busy listening or waiting. 00;18;04;20 - 00;18;26;22 Unknown It inherently throttles the flow, right? Like a single microphone for the whole group. Makes sense. Then there's evaluation apprehension. The fear of looking stupid, basically. Yeah, that fear of judgment from peers or superiors. It leads people to self-censor. They hold back those potentially brilliant but maybe slightly unconventional or out-there ideas because they're worried about how they'll be perceived. 00;18;26;29 - 00;18;50;28 Unknown It stifles the very creative wildness brainstorming is supposed to unleash. Been there. And what was the third one? Cognitive fixation.
This is where hearing other people's ideas, especially early in the session, can unconsciously anchor your own thinking. So you hear a couple of ideas about, say, mobile apps, and suddenly your brain gets stuck in that mobile app groove. 00;18;50;29 - 00;19;18;24 Unknown Yeah, making it much harder to break out and explore totally different directions, like maybe a physical product or a service design solution. Right. So Mullins advocates for these more structured methods instead, like brain writing. Exactly. Brain writing, the nominal group technique, various design thinking methods. They're structured precisely to overcome these inherent human psychological barriers, things like generating ideas independently before sharing them, using anonymity, ensuring everyone contributes. 00;19;18;27 - 00;19;45;25 Unknown It reduces judgment and cognitive bias. Okay, so bringing it back to AI, the insight isn't just that AI is good at brainstorming, full stop, right? It's more nuanced. It's that AI can excel within these structured, divergent thinking processes. Precisely. Agentic AI, with its ability to rapidly generate a huge diversity of ideas, often anonymously and, crucially, without any human ego, evaluation apprehension, or production blocking. 00;19;45;28 - 00;20;09;04 Unknown It perfectly augments these structured ideation methods. How so? Think about it. An AI can act as this tireless, non-judgmental idea generator. It can flood a structured session like brain writing with an unprecedented volume and variety of starting points, drawing on vast data sets and perspectives humans simply don't have access to. It overcomes the psychological barriers that bog down traditional brainstorming. 00;20;09;05 - 00;20;33;29 Unknown Exactly. Which means designers can then focus their valuable human skills. Intuition.
Critical judgment. Synthesis. On curating, refining, and connecting the ideas generated by the AI, rather than struggling against psychological friction just to get ideas out in the first place. It makes the AI a powerful co-designer, especially in that crucial early, fuzzy, divergent phase. A very powerful co-designer. Yeah, which raises a really practical question for everyone listening. 00;20;34;02 - 00;20;58;23 Unknown How can you start leveraging agentic AI to make your own ideation processes more effective, more innovative, especially in those early, messy stages where groundbreaking ideas are born? Imagine using AI to pre-populate a brain writing session with, like, 100 different angles on a problem before the humans even start. The potential is huge. It's about optimizing that human-AI collaboration loop, not just automating tasks away. 00;20;58;28 - 00;21;19;06 Unknown Okay, but with all this power, this autonomy, we absolutely need to talk about the guardrails. We do. It's non-negotiable. And Chelsea Fleming, who's a UX research lead at Google Labs, made a really critical point about this. What was that? She said, basically, AI should support humans, not supplant them, and failed experiments should be treated as valuable learning moments. 00;21;19;06 - 00;21;48;08 Unknown Support, not supplant. That's key. And the bit about failed experiments, that's so important, isn't it? Especially now, in this phase where everything's moving so fast and we're essentially creating new interaction paradigms from scratch. We will have failures. We need to learn from them, not hide them or punish them. Her principle really feels foundational for this whole era, because the success of these agents, their adoption, it hinges entirely on the trust infrastructure we build around them. 00;21;48;08 - 00;22;11;23 Unknown And the failure-tolerant culture. Exactly.
If we don't build systems where people can understand what the AI is doing, why it's doing it, and how to recover gracefully when it inevitably makes a mistake, then adoption will stall. Or worse, the risks could be huge, right? It's not about aiming for perfect systems on day one. That's impossible. It's about building resilient systems that learn and improve, and that users feel comfortable overseeing. 00;22;11;25 - 00;22;33;07 Unknown So what does this mean for UX design? Our responsibilities have to expand, right? Beyond just usability or making things look nice. Far beyond. We now have to explicitly account for several new critical considerations. First one: intent explainability. Explainability? Making it clear why the AI did something. Exactly. It's no longer enough for the AI to just do the thing. 00;22;33;14 - 00;22;55;27 Unknown Users need to understand the reasoning behind its decision or action. Why did it choose this path? Why did it make this recommendation? So how do we design for that? Audit trails. That's one way. Designing clear audit trails that show the agent's decision-making steps, or maybe having why now buttons that give a concise, plain-language rationale for a proactive action 00;22;55;27 - 00;23;18;10 Unknown the AI just took. Okay. Like if an inventory agent reorders stock, right. Good explainability would show why it ordered that specific quantity at that moment, maybe citing sales forecasts, supplier lead times, current stock levels it detected. Transparency in the reasoning. Makes sense. What else? Second big one: user override. This is critical. Letting the human take back control. Absolutely. 00;23;18;10 - 00;23;44;06 Unknown Providing clear, intuitive, easily accessible mechanisms for humans to intervene, to pause the automation, or to completely take back control from the agent when they feel it's necessary. Autonomous does not mean uncontrollable, right? Like an emergency brake, kind of. Think about a smart home system.
If an agent decides to turn off all the lights because its sensors indicate everyone's asleep, but you're actually still up reading, you need an easy way to say nope. 00;23;44;07 - 00;24;11;25 Unknown Exactly. Whether that's a physical switch that always works, a simple voice command, or a big, obvious pause automation button in an app, human agency has to remain paramount. Got it. Explainability, override. What's the third? Shared control models. This is about how we design the actual interaction, the interface, the process itself, so that human and AI collaboration feels seamless, clearly defined, and, importantly, balanced. 00;24;11;25 - 00;24;29;26 Unknown It's a dance, you said earlier. It is a dance, not a monologue where the AI just dictates. Think about a pilot using autopilot in a plane, okay? There are very clear models. They know precisely when it's safe to let the system fly autonomously, when they need to actively monitor it, and exactly how to take full, immediate manual control if needed. 00;24;29;26 - 00;24;57;28 Unknown So we need that level of clarity in our UIs for agentic systems. We do. Things like clear visual cues showing who's currently driving, the human or the AI. Transparent communication of the agent's status. Yeah. Is it planning? Is it executing? Is it waiting for approval? So you know what it's up to. Right. And easy mechanisms for humans to provide feedback or corrections that actually help train the agent, making the collaboration better over time. 00;24;58;03 - 00;25;22;21 Unknown It's about designing for a fluid partnership. Explainability, override, shared control. These feel like the pillars of trust. They really are. And this is incredibly relevant for you listening. If you're a product leader, a designer, how are you going to build these essential guardrails into your next project? How will you foster that trust? How will you ensure human agency stays central when the AI is designed to be autonomous?
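Those three guardrails can be made concrete in a small sketch: an audit log with a "why" for intent explainability, a pause flag for user override, and a visible status for shared control. The inventory scenario and every field name here are illustrative assumptions, not any real product's API.

```python
# Sketch of the three guardrails discussed: intent explainability,
# user override, and a visible shared-control state. The scenario
# and field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Agent:
    status: str = "idle"            # shared control: who/what is "driving"
    paused: bool = False            # user override: humans can always stop it
    audit_log: list = field(default_factory=list)

    def act(self, action, rationale):
        if self.paused:
            return None             # the override always wins over autonomy
        self.status = "executing"
        # Explainability: every action is recorded with its "why now".
        self.audit_log.append({"action": action, "why": rationale})
        self.status = "idle"
        return action

    def pause(self):
        self.paused = True
        self.status = "paused by human"

agent = Agent()
agent.act("reorder 40 units", rationale="stock below 2-week forecast demand")
agent.pause()
blocked = agent.act("raise price 5%", rationale="demand spike detected")
```

The design choice worth noticing is that the pause check comes before anything else in act: autonomy is the default, but human override is structurally guaranteed, not bolted on.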
00;25;22;24 - 00;25;45;02 Unknown These aren't just, you know, nice-to-haves. They're fundamental. It's not just a technical challenge. It's a profound design challenge, maybe even an ethical one. And it will likely define whether these agentic systems succeed or fail in the real world, because ultimately, without trust, even the most technically brilliant, precise AI agent, it's basically useless. People just won't delegate to it. 00;25;45;05 - 00;26;11;01 Unknown Which brings us nicely to what this all means for product leaders specifically. How do we orchestrate this future? Well, the first big implication is that product design itself fundamentally becomes orchestration. Orchestration. Like conducting an orchestra? Exactly like that. You're no longer just laying out static screens or defining simple linear user flows. In a world buzzing with agentic AI, you are composing complex behavior between multiple agents, and some of those agents are human. 00;26;11;01 - 00;26;36;19 Unknown Some are. Precisely. They interact. They collaborate. Sometimes they even delegate tasks to each other. You're coordinating this intricate dance. Okay. So that demands a much more holistic way of thinking, doesn't it? Systemic thinking. Absolutely. You have to understand how an autonomous action taken by an agent in one part of the system, how that ripples through and affects other agents, other processes, and ultimately the end user experience. 00;26;36;25 - 00;27;00;02 Unknown Can you give an example of orchestration? Sure. Think about designing a next-generation customer service platform. It's not just about writing the chatbot script anymore, right? It's about orchestrating the whole flow. How does an initial AI agent triage the incoming request? How does it intelligently route it to the best-suited human agent, providing them instantly with all the necessary context?
00;27;00;05 - 00;27;22;13 Unknown How does another background agent simultaneously pull up relevant past interactions or knowledge base articles? How does yet another agent perhaps suggest the next best action to the human agent in real time? You're conducting all these different players, AI and human, to create one cohesive, effective, user-centric experience. That's the job: orchestration. Okay, so that's the first shift. What's next? 00;27;22;13 - 00;27;44;10 Unknown The second big thing is the rise of what we touched on earlier, these internal AI-facing interfaces. The UX for machines idea again. Exactly. This is a whole new frontier for UX design. Designers now have to think really critically about how the humans who manage these AI systems, the engineers, the data scientists, the operations teams, how do they debug the agents? How do they guide them? 00;27;44;16 - 00;28;11;16 Unknown How do they simply observe what they're doing? So UX isn't just for the end user on the front end anymore? No. It's desperately needed inside the AI toolchain itself, for the AI caretakers, if you will. That makes sense. How do you design an interface that lets a human operator understand what an agent is thinking or trying to do, right? Or intervene smoothly when something goes off track or needs adjustment? Or even figure out how to train the agent more effectively based on its performance? 00;28;11;16 - 00;28;38;22 Unknown It's about transparency and control, but for the internal teams managing the AI? Exactly. Think about, say, a complex robotic arm in a factory controlled by an agentic system. The internal UI for managing that agent might need to show not just its physical position, but maybe its current task, its confidence level in achieving that goal, any anomalies it's detected, maybe even a visual trace of the reasoning steps it took to decide on its current action. 00;28;38;22 - 00;29;05;19 Unknown That level of insight.
It's crucial for safe, reliable and efficient operation. And honestly, it's a rapidly growing area of design that most people haven't even started thinking seriously about yet. Orchestration. Internal interfaces. What's the third major implication? This one might be the most profound shift psychologically: trust replaces precision as the core UX concern. Trust over precision. But surely precision still matters. 00;29;05;20 - 00;29;26;28 Unknown Oh, absolutely. Precision. Accuracy. They're still incredibly important. They're table stakes, really. But the fundamental question users will ask about these agent systems starts to shift from just is this output correct to something deeper: do I understand what this thing is doing, and why? Do I feel comfortable delegating this task or decision to it? That's a huge psychological hurdle for users, isn't it? 00;29;26;28 - 00;29;49;23 Unknown Giving up that direct control? It is. Think about a self-driving car again. Even if it's statistically 99.99% safer than a human driver, people are still wary, right? It might still get rejected by users if they don't understand why it brakes suddenly, or if they feel they can't predict its behavior in slightly unusual situations. Precision is necessary, but it's not sufficient. 00;29;49;26 - 00;30;20;14 Unknown Trust becomes the key differentiator. So the new KPI, the main measure of success, isn't just raw accuracy, it's the perceived trustworthiness and interpretability of the autonomous system. If users don't trust it, they won't delegate to it. They won't use it to its full potential, regardless of how accurate its underlying models might be. Which means as designers, we need to build in not just that explainability we talked about, but also things like predictability cues, ways for the user to get a sense of what the agent is likely to do next, and clear communication of uncertainty. 00;30;20;15 - 00;30;43;01 Unknown How confident is the AI in this decision? Exactly.
How do we design a dashboard that doesn't just show a vague AI working status, but visually represents the AI's confidence levels? Yeah, or maybe its current thinking process in a simplified way. That level of transparency is critical for building trust. Okay, trust over precision. That's big. What's the final implication? 00;30;43;03 - 00;31;13;19 Unknown Finally, our design systems themselves, the very building blocks we use, they have to adapt dynamically. Meaning? Meaning the era of purely static design components, where every button, every field is rigidly defined and predetermined, is gradually losing relevance. In this agentic world, we have to embrace the flux, the dynamism. We absolutely have to embrace flux. Expect to see much more variable styling in UIs, more adaptive states for components, and what some are calling probabilistic interaction design. Probabilistic interaction design? 00;31;13;21 - 00;31;37;26 Unknown These are design systems where components aren't just fixed static blocks anymore. They're more like flexible elements. They can automatically adjust their appearance, their behavior, maybe even their layout. Based on what? Based on real time context, based on the user's current state or inferred needs, and crucially, based on the autonomous actions and predictions of those background AI agents. Can you give an example? Like predictive text on a keyboard? 00;31;38;03 - 00;32;00;02 Unknown But for the whole UI? That's a great starting point. Your phone's keyboard learns your typing habits and predicts the next word. Now extend that kind of adaptive intelligence to the entire interface. So the UI isn't just reacting to my direct input? It's anticipating your next move, or maybe the agent's next required action, and subtly adjusting itself to make that interaction smoother, faster, or clearer.
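As a toy illustration of what "probabilistic interaction design" could mean in code, here is one hedged sketch: a component whose state is derived from real-time signals rather than hard-coded. The function name, field names, and thresholds are all hypothetical assumptions invented for this example, not an established pattern or library.

```python
def button_variant(agent_confidence: float, inferred_urgency: float) -> dict:
    """Sketch of an adaptive component: instead of one fixed style, the
    button's prominence and safety behavior are computed from real-time
    signals. Thresholds (0.6, 0.8) are illustrative, not prescriptive."""
    return {
        # More urgent tasks get visually heavier treatment.
        "visual_weight": "primary" if inferred_urgency > 0.6 else "secondary",
        # Low-confidence agent actions require explicit user confirmation,
        # keeping a human in the loop exactly when trust is weakest.
        "require_confirmation": agent_confidence < 0.8,
        # Surface the uncertainty instead of hiding it.
        "label_suffix": f" ({agent_confidence:.0%} confident)",
    }

print(button_variant(agent_confidence=0.65, inferred_urgency=0.9))
```

The design choice worth noticing: the component communicates the agent's uncertainty to the user and tightens human oversight as confidence drops, which is the trust-over-precision idea expressed as component state rather than as copywriting.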
00;32;00;02 - 00;32;24;28 Unknown So our design systems need to be alive, not just static libraries? Alive, constantly evolving, responsive not just to users, but to the underlying AI as well. They need to reflect the dynamism of the systems they represent, responding to implicit signals and predictive models, not just explicit clicks. Wow. That demands designers think differently too, doesn't it? In terms of ranges and probabilities, not just fixed values. 00;32;25;03 - 00;32;48;26 Unknown Exactly. How does a button maybe subtly change its visual weight based on the inferred urgency of the task? How does a text field dynamically expand or offer suggestions, not just based on what the user typed, but on an agent's assessment of what information is likely needed next? It's designing for a continuous spectrum of possibilities rather than just a few discrete preprogrammed states. 00;32;48;26 - 00;33;15;12 Unknown That's the challenge, and the opportunity. Dynamic. Adaptive. Orchestrated. Okay, so let's try and bring this all together to recap it. Agentic AI. It's not science fiction. It's not coming soon. It's actually here right now. It is. And it's actively reshaping industries. We talked about luxury retail, media production. And it's changing the very foundations of how we design interfaces. And for product designers, for product leaders like you listening in, this really marks a profound turning point, doesn't it? 00;33;15;12 - 00;33;40;26 Unknown It truly does. But the key takeaway, I think, is that it's not about surrendering our craft to automation. It's not about AI simply supplanting human creativity or judgment. It's about evolving our role. Exactly. It's about evolving into designers of delegation, designers of oversight, designers of adaptability. Our role becomes more about orchestrating intelligence, both human and artificial, rather than just designing static interactions or pushing pixels.
00;33;40;27 - 00;34;12;01 Unknown We're moving from being builders of tools to becoming architects of intelligent ecosystems. Architects of intelligent ecosystems, I like that. The next frontier of UX, then, isn't just interactive. It's intentional and it's autonomous. Okay, so as we wrap up this deep dive, we want to leave you with a final provocative thought to chew on. All right. What is one area, maybe in your own work, maybe just in your daily life, where you could start thinking about delegating a task, even a simple one, to an autonomous agent? 00;34;12;04 - 00;34;29;27 Unknown Interesting question. And if you were to do that, what fundamental guardrails (thinking about explainability, overrides, shared control) would you absolutely need to put in place first to ensure you could trust it, to ensure you felt in control? How would you make sure you understood its decisions? How would you make sure you could step in if needed? 00;34;30;01 - 00;34;40;03 Unknown Something to think about, because the future of interaction, this agentic future, it's already taking shape. It's here now, waiting for us to design it responsibly.