00;00;00;00 - 00;00;21;29 Speaker 1 So, the pace of AI integration into our daily work lives. It's, you know, it feels almost dizzying, doesn't it? For a long time, the conversation has been framed as this kind of stark choice: automation versus job loss. But what if we told you that's really just scratching the surface? Today, we're going much, much deeper. We're going to unpack some really groundbreaking research.
00;00;22;04 - 00;00;31;20 Speaker 1 It systematically audits not just what AI can do in the workplace, but what workers actually want it to do, and critically, how that impacts our fundamental sense of control.
00;00;31;23 - 00;00;51;05 Speaker 2 Exactly. And to do that, we've pulled from three truly pivotal sources. We're talking about a major Stanford study that meticulously built this database called WORKBank, also a really comprehensive report from McKinsey on something they call superagency in the workplace, and a fascinating psychological study that introduced the Sense of Agency Scale.
00;00;51;07 - 00;01;13;27 Speaker 1 Okay, great. So our mission for this deep dive is basically to give you a shortcut, a shortcut to understanding the nuanced, human-centric side of AI's transformation of work. You're going to hear some surprising facts, some connections you probably won't find anywhere else. So let's just jump right in. To really grasp the nuances here, we first have to appreciate the sheer scale of this transformation.
00;01;14;01 - 00;01;20;14 Speaker 1 I mean, the McKinsey report really doesn't pull any punches. It calls AI an innovation as powerful as the steam engine.
00;01;20;15 - 00;01;30;04 Speaker 2 Yeah, it's a huge claim. And they frame it as a cognitive industrial revolution. It's not just about automating physical tasks anymore. It's about, well, cognitive functions, thinking tasks.
00;01;30;06 - 00;01;32;02 Speaker 1 Right. And how have these tools actually changed?
00;01;32;03 - 00;01;59;13 Speaker 2 Well, what's striking is how rapidly large language models, you know, LLMs, have evolved. They've become what we now call AI agents. Just think a couple of years back, 2022, 2023: models like Claude or GPT-3.5 were largely text-only, with limited understanding, and couldn't really use external tools. But fast forward to, say, January 2025, and models like Claude 3.5 or Gemini 2.0 Flash.
00;01;59;20 - 00;02;18;20 Speaker 2 It's a monumental leap. They're now multimodal, meaning they can literally see and hear. They handle text, audio, images. They also have much more advanced reasoning for multi-step problems. They can hold surprisingly long, coherent conversations, pull in real-time data, and even personalize their responses much better.
00;02;18;21 - 00;02;22;12 Speaker 1 Wow. So these aren't just advanced chatbots anymore, are they? They're actively doing things.
00;02;22;12 - 00;02;37;06 Speaker 2 Exactly. And that brings us to the core concept of agentic AI. These are systems designed with autonomy, goal-directed behavior. Let me give you a concrete example. In 2023, an AI might have, say, summarized data for a call center rep.
00;02;37;07 - 00;02;38;28 Speaker 1 Okay. Helpful but passive, right?
00;02;39;03 - 00;02;52;21 Speaker 2 But by 2025, an AI agent can actually converse with a customer, understand their needs, and then plan and execute all the follow-up actions on its own, like processing payments, checking for fraud, completing shipping details. They're not just assisting, they're acting.
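For listeners who think in code, here's a minimal sketch of that assistive-versus-agentic distinction. Everything in it is a hypothetical illustration, not anything from the sources: plan() stands in for the model's reasoning step, and the tool names (check_fraud, process_payment, update_shipping) are made up.

```python
# A minimal sketch of the assistive-vs-agentic shift described above.
# plan() is a rule-based stand-in for an LLM's planning step, and the
# tool names are hypothetical, not from any of the cited sources.

def summarize_for_rep(ticket: str) -> str:
    """2023-style assistance: produce text; a human still acts on it."""
    return f"Summary for rep: {ticket[:60]}"

def check_fraud(order_id: str) -> str:
    return f"fraud check passed for {order_id}"

def process_payment(order_id: str) -> str:
    return f"payment captured for {order_id}"

def update_shipping(order_id: str) -> str:
    return f"shipping details completed for {order_id}"

TOOLS = {"fraud": check_fraud, "payment": process_payment, "shipping": update_shipping}

def plan(request: str) -> list[str]:
    """Stand-in for the model deciding which follow-up actions to take."""
    return ["fraud", "payment", "shipping"] if "order" in request else []

def run_agent(request: str, order_id: str) -> list[str]:
    # 2025-style agency: the system plans AND executes on its own.
    return [TOOLS[step](order_id) for step in plan(request)]

print(summarize_for_rep("Customer asks about order #1234 delivery window"))
print(run_agent("resolve order issue", "#1234"))
```

The point of the sketch is the last function: the human is no longer in the execution loop at all, which is exactly the shift the hosts keep returning to.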
00;02;52;22 - 00;02;59;19 Speaker 1 Okay, so if AI is taking on more and more, does that mean humans become less relevant? Because that's the fear, right?
00;02;59;22 - 00;03;06;06 Speaker 2 It is. But Reid Hoffman's concept of superagency, from that McKinsey report, offers a different view.
00;03;06;08 - 00;03;10;10 Speaker 1 Yes, superagency. I found this really compelling. It flips the script, doesn't it?
00;03;10;11 - 00;03;21;07 Speaker 2 It does. It suggests AI isn't about replacing us, but about empowering us. Amplifying our human potential for creativity, productivity and, you know, positive impact.
00;03;21;08 - 00;03;22;14 Speaker 1 Supercharging us, basically.
00;03;22;14 - 00;03;34;02 Speaker 2 Pretty much. And here's kind of a bombshell finding from McKinsey that really underscores that potential: business leaders seem to be dramatically underestimating how much their workforce is already embracing AI.
00;03;34;03 - 00;03;34;29 Speaker 1 Really? How so?
00;03;35;00 - 00;03;47;20 Speaker 2 Well, leaders estimate only about 4% of employees use generative AI for at least 30% of their daily work. But when you ask employees themselves, the self-reported figure is three times higher: 12%.
00;03;47;21 - 00;03;48;27 Speaker 1 Wow. That's a big gap.
00;03;48;28 - 00;04;01;17 Speaker 2 It is. And it gets bigger. Employees are twice as likely as leaders to believe they will use AI for more than 30% of daily tasks within a year. We're talking 47% of employees versus just 20% of leaders making that prediction.
00;04;01;17 - 00;04;05;27 Speaker 1 So workers are seeing the potential much more clearly, or maybe just adopting it faster.
00;04;06;04 - 00;04;21;14 Speaker 2 It seems that way. And interestingly, even the groups McKinsey calls Gloomers and Doomers, you know, the skeptics, even they are surprisingly familiar and comfortable with these AI tools. This isn't just a perception gap. It could be a real blind spot for companies planning their AI strategy.
00;04;21;20 - 00;04;35;28 Speaker 1 Okay, so if employees are actually more ready for AI than leaders think, the really crucial question becomes: what do workers really want from it? And this is where that Stanford study comes in, right? Diving deep by directly asking workers.
00;04;36;01 - 00;04;47;19 Speaker 2 Exactly. They developed this novel auditing framework. They used audio-enhanced mini-interviews, which is cool, to capture really nuanced preferences. And they introduced this new tool called the Human Agency Scale.
00;04;47;19 - 00;04;50;05 Speaker 1 Human Agency Scale. Tell us about that.
00;04;50;05 - 00;05;18;19 Speaker 2 So the HAS basically quantifies the preferred level of human involvement in tasks. The scale goes from H1, where the AI agent handles a task entirely on its own, that's pure automation, all the way up to H5, which signifies essential human collaboration. You absolutely need the human touch. So roughly, H1 to H2 is automation, while H3 to H5 falls more into the realm of augmentation: human plus AI.
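As a quick reference, the scale the hosts just walked through can be summarized like this. The H1, H3, and H5 wordings follow the discussion; the H2 and H4 descriptions are interpolated here for illustration, not the study's exact labels:

```python
# A compact rendering of the Human Agency Scale as described above.
# H1, H3, and H5 follow the hosts' summary; H2 and H4 are interpolated.

HAS = {
    "H1": "AI agent handles the task entirely on its own",
    "H2": "AI leads, with minimal human input",          # interpolated
    "H3": "equal human-AI partnership",
    "H4": "human leads, with substantial AI support",    # interpolated
    "H5": "essential human involvement throughout",
}

def regime(level: str) -> str:
    """Roughly: H1-H2 is automation, H3-H5 is augmentation."""
    return "automation" if level in ("H1", "H2") else "augmentation"

for level, description in HAS.items():
    print(f"{level} [{regime(level)}]: {description}")
```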
00;05;18;20 - 00;05;22;00 Speaker 1 Okay. And what did the research find? What did the WORKBank database show?
00;05;22;00 - 00;05;30;15 Speaker 2 Some really compelling stuff. For nearly half the tasks, 46.1%, workers express remarkably positive attitudes toward AI agent automation.
00;05;30;16 - 00;05;33;23 Speaker 1 Okay, so for almost half, people are saying, yes, please automate this.
00;05;33;23 - 00;05;40;20 Speaker 2 Pretty much. And the primary motivation, this came up in almost 70% of responses, is to free up time for high-value work.
00;05;40;21 - 00;05;42;06 Speaker 1 Makes sense. Get rid of the grunt work.
00;05;42;07 - 00;05;50;22 Speaker 2 Exactly. The other big reasons were, you know, the tasks being repetitive or tedious, stressful, or even seeing opportunities for AI to improve the quality of that task.
00;05;50;23 - 00;05;55;08 Speaker 1 Can you give us some concrete examples? Like, what kind of tasks are workers really eager to offload?
00;05;55;09 - 00;06;05;09 Speaker 2 Absolutely. Think about things like tax preparers' "schedule appointments with clients." That scored a perfect 5.0 on the desire scale. Total desire for automation.
00;06;05;10 - 00;06;06;18 Speaker 1 Okay, nobody likes scheduling.
00;06;06;23 - 00;06;19;01 Speaker 2 Apparently not. Or public safety telecommunicators' "maintain files of information." That scored a high 4.67. Automating those feels like a huge win for their calendar and, you know, cognitive load, right?
00;06;19;05 - 00;06;22;27 Speaker 1 But what about the flip side? What do people not want automated?
00;06;22;29 - 00;06;43;29 Speaker 2 Well, that's just as revealing. Tasks workers least want automated are things requiring sort of nuanced judgment, creativity, or direct human empathy. For example, ticket agents and travel clerks' "trace lost, delayed, or misdirected baggage." That scored a measly 1.5. People want a human helping with that stress.
00;06;43;29 - 00;06;46;21 Speaker 1 Yeah, I can see that. You want empathy when your bag is gone.
00;06;46;21 - 00;06;58;05 Speaker 2 Definitely. Or, crucially, editors' "write text." That scored only 1.6. It really highlights the strong human preference for maintaining control over the qualitative, the creative, the relational aspects of their jobs.
00;06;58;05 - 00;07;05;21 Speaker 1 And here's where we hit kind of a significant snag, right? A major disconnect that the Stanford study really brings into sharp focus.
00;07;05;21 - 00;07;06;24 Speaker 2 Yeah, this is important.
00;07;06;25 - 00;07;25;02 Speaker 1 The top ten occupations where workers most desire automation account for only a tiny fraction, just 1.26%, of actual usage data from live chatbots like Claude.ai. This is looking at data from December 2024 to January 2025.
00;07;25;04 - 00;07;28;14 Speaker 2 So let me get this straight. People really want certain things automated.
00;07;28;17 - 00;07;29;24 Speaker 1 Yeah, the mundane stuff.
00;07;29;24 - 00;07;33;22 Speaker 2 But the AI tools people are actually using right now aren't doing those things much at all.
00;07;33;22 - 00;07;43;03 Speaker 1 Exactly. It shows a significant disconnect between what's maybe most needed or desired and what's actually being adopted, or perhaps what the current tools are best at.
00;07;43;06 - 00;07;45;28 Speaker 2 And if we connect this back to the Human Agency Scale findings, well...
00;07;45;28 - 00;07;46;26 Speaker 1 It gets even more nuanced?
00;07;46;27 - 00;07;55;10 Speaker 2 The findings show that for almost half of occupations, 45.2%, H3, which is equal partnership, is the dominant worker-desired level.
00;07;55;11 - 00;07;58;12 Speaker 1 Okay. So lots of potential for collaboration. People want to work with the AI.
00;07;58;15 - 00;08;20;24 Speaker 2 Precisely. It underscores this enormous potential for human-agent collaboration. However, there's another fascinating divergence. Workers generally prefer higher levels of human agency than experts deem technologically necessary.
About 47.5% of tasks show this gap. Experts think AI could handle more, but workers want to stay more involved.
00;08;20;26 - 00;08;32;10 Speaker 1 So it's not just about what AI can do technically. It's about what humans want it to do alongside them. It's about control and involvement. Can you give us some specific examples of this sort of tension?
00;08;32;12 - 00;08;41;24 Speaker 2 Certainly. Consider editors again. This is the only occupation where workers predominantly desire H5, essential human involvement. They want to keep writing and shaping text themselves.
00;08;41;25 - 00;08;42;25 Speaker 1 Makes sense for editors.
00;08;42;26 - 00;08;52;02 Speaker 2 It does. But interestingly, AI experts assess mathematicians and aerospace engineers as H5-dominant. They think AI has less potential there, technically speaking, right now.
00;08;52;02 - 00;08;58;04 Speaker 1 Well, that's interesting. So the experts see less room for AI in math and engineering than the workers in those fields might even want.
00;08;58;11 - 00;09;16;03 Speaker 2 Or perhaps the workers just see different uses. The nuances from the worker transcripts themselves are incredibly insightful here. Like, an art director uses AI for personal project management, you know, summarizing tasks, improving their own writing. But they said they would never use it to replace actual artists.
00;09;16;03 - 00;09;17;27 Speaker 1 Right. Tool versus replacement.
00;09;17;27 - 00;09;29;23 Speaker 2 Exactly. And mathematicians, currently they see limited use for AI. They seem to crave AI that can, and this is a quote, "come up with new stuffs" rather than just solve existing problems.
00;09;29;23 - 00;09;31;25 Speaker 1 So they want a creative partner, not just a calculator.
00;09;31;25 - 00;09;50;27 Speaker 2 Pretty much, though they do see potential for maybe elaborating proofs. And aerospace engineers primarily consider AI for debugging code or systems rather than core design work. It really sounds like, for some professions, AI needs to evolve beyond mere problem solving or task execution for them to truly embrace it as a collaborator.
00;09;51;00 - 00;10;13;04 Speaker 1 It absolutely does. This highlights that it's not just about a task's technical difficulty, maybe, but its inherent human value, or creativity, or maybe even the satisfaction derived from doing it. Now let's look at the Stanford study's desire-capability landscape. This framework helps categorize tasks into four critical zones. Can you walk us through those?
00;10;13;04 - 00;10;22;17 Speaker 2 Sure. You've got the Automation Green Light zone. That's where there's high worker desire for automation and high technical capability for AI to do it. Seems like a go zone, okay?
00;10;22;19 - 00;10;23;10 Speaker 1 Low-hanging fruit.
00;10;23;11 - 00;10;29;10 Speaker 2 Then there's the Automation Red Light zone. High capability, AI could do it, but low worker desire. People want to keep doing these tasks.
00;10;29;10 - 00;10;30;06 Speaker 1 Tread carefully there.
00;10;30;07 - 00;10;39;06 Speaker 2 Exactly. Then the R&D Opportunity zone: high desire from workers but currently low capability. AI isn't quite there yet, but people wish it was.
00;10;39;06 - 00;10;40;21 Speaker 1 So, places to innovate.
00;10;40;21 - 00;10;47;20 Speaker 2 Right. And finally, the Low Priority zone: low desire and low capability. Probably not worth focusing on right now.
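The four zones amount to a simple two-by-two of worker desire crossed with technical capability. Here's a sketch of that crossing; the 0-to-1 scores and the 0.5 cutoff are illustrative assumptions, not values from the study:

```python
# A sketch of the desire-capability landscape's four zones. The 0-to-1
# scores and the 0.5 cutoff are illustrative assumptions; the study's
# actual axes use worker-rated desire and expert-rated capability.

def zone(desire: float, capability: float, cut: float = 0.5) -> str:
    if desire >= cut and capability >= cut:
        return "Automation Green Light"  # wanted and technically feasible
    if capability >= cut:
        return "Automation Red Light"    # feasible, but workers want to keep it
    if desire >= cut:
        return "R&D Opportunity"         # wanted, but the tech isn't there yet
    return "Low Priority"                # neither wanted nor feasible

print(zone(desire=0.9, capability=0.8))  # e.g. appointment scheduling
print(zone(desire=0.2, capability=0.8))  # e.g. editors' creative writing
```

Crossing the two axes this way is what makes the investment-mismatch finding discussed next possible to state so crisply.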
00;10;47;21 - 00;10;52;27 Speaker 1 Okay, that makes sense. Green light, red light, R&D, low priority. So where's the money going?
00;10;52;29 - 00;11;07;01 Speaker 2 Well, this is what's truly critical. The study found a staggering mismatch in investment. Looking at Y Combinator companies, 41.0% of their company-task mappings are concentrated in the Low Priority and Automation Red Light zones.
00;11;07;01 - 00;11;14;06 Speaker 1 Wait. Seriously? 41% in areas where workers either don't want automation, or the tech isn't ready and they don't want it?
00;11;14;09 - 00;11;26;11 Speaker 2 That's what the data suggests. Current investments seem heavily focused on software development and business analysis tools, leaving many potentially promising tasks within the Green Light and R&D Opportunity zones under-addressed.
00;11;26;14 - 00;11;41;11 Speaker 1 So it's a real puzzle. Like you said, why isn't money flowing where it's most desired and capable, or where there's high desire for future development? Why do you think investors are maybe getting this wrong? Are they just behind the curve, or fundamentally misunderstanding worker needs?
00;11;41;18 - 00;11;59;26 Speaker 2 It's hard to say for sure. It could be lagging indicators, maybe easier targets initially in software, or perhaps a disconnect from the ground-level worker perspective highlighted in the Stanford study. It certainly feels like a disconnect, potentially missing out on where the real human needs and maybe the next breakthroughs could be.
00;11;59;29 - 00;12;05;17 Speaker 1 That's a really important point. Interestingly, though, AI research papers show a more encouraging trend.
00;12;05;17 - 00;12;18;03 Speaker 2 Yes, that's the slight silver lining here. Academic research seems more heavily concentrated in that R&D Opportunity zone, suggesting the research community might be aligning better with future needs and worker desires.
00;12;18;04 - 00;12;27;12 Speaker 1 Okay, so there's hope on the research front. Now, what does all this mean for human skills? If AI takes over certain tasks, what skills become more valuable?
00;12;27;14 - 00;12;36;10 Speaker 2 The study indicates a pretty significant shift. Traditionally high-wage, information-focused skills, think analyzing information, seem to be becoming less emphasized.
00;12;36;15 - 00;12;37;25 Speaker 1 The things AI is getting good at.
00;12;38;03 - 00;12;50;00 Speaker 2 Exactly. Instead, interpersonal and organizational skills, things like establishing and maintaining interpersonal relationships, assisting and caring for others, seem to be gaining more importance. The human stuff.
00;12;50;00 - 00;12;51;18 Speaker 1 So EQ over IQ.
00;12;51;19 - 00;13;16;15 Speaker 2 Almost, in a way. Or at least a rebalancing. There's also a clear trend toward requiring broader skill sets, not just deep specialization in one automatable area. For you, the listener, this could mean really focusing on those uniquely human skills, those areas where AI is likely to be an enhancer, not a replacement: communication, collaboration, empathy, complex problem solving in unpredictable contexts.
00;13;16;18 - 00;13;16;28 Speaker 1 Makes a lot of sense.
00;13;16;28 - 00;13;28;19 Speaker 2 And this connects perfectly back to the McKinsey report, which highlighted the challenges leaders are facing. A significant chunk, 47% of executives, feel the pace of AI tool development is actually too slow.
00;13;28;19 - 00;13;31;10 Speaker 1 Too slow? Despite everything we've heard about how fast it's moving?
00;13;31;10 - 00;13;41;18 Speaker 2 Yeah, but look at why they say it's too slow. Top reasons: talent and skill gaps at 46%,
which directly relates to that skill shift we just discussed, and resourcing constraints at 38%.
00;13;41;18 - 00;13;48;16 Speaker 1 So they want faster development, but they lack the skilled people and resources to actually implement or leverage it effectively.
00;13;48;22 - 00;14;06;06 Speaker 2 Seems to be the case. What's also quite notable is that only 39% of companies currently use benchmarks for their AI tools. And when they do use benchmarks, they primarily prioritize performance and operational ones, you know, speed, efficiency. Much less focus on ethical and compliance benchmarks.
00;14;06;06 - 00;14;11;17 Speaker 1 That feels like a potential risk area. And there's also the black box problem, right? McKinsey mentioned that, too.
00;14;11;17 - 00;14;18;22 Speaker 2 Yes, the black box issue. Many LLMs, even now, don't really reveal why or how they arrived at a particular response.
00;14;18;26 - 00;14;25;05 Speaker 1 It reminds me of those moments when your GPS tells you to turn left into a wall, and you have no idea why it chose that route.
00;14;25;06 - 00;14;33;05 Speaker 2 Exactly. Now multiply that uncertainty by a million for critical financial decisions like credit risk assessment. You can see the trust issue pretty clearly.
00;14;33;05 - 00;14;36;29 Speaker 1 Absolutely. Building that trust is crucial, especially for high-stakes applications.
00;14;36;29 - 00;14;44;16 Speaker 2 And while transparency is increasing, there's still, you know, a long way to go to build the full confidence needed for truly critical applications.
00;14;44;18 - 00;15;04;06 Speaker 1 Okay. This has been fascinating, looking at the workplace dynamics. Now let's kind of elevate this discussion. Let's connect it to a more fundamental human concept: the sense of agency. We're going to explore a really interesting psychological study here, one that developed the Sense of Agency Scale, or SoAS.
00;15;04;06 - 00;15;12;10 Speaker 2 Right. So when psychologists talk about the sense of agency, or SoA, what they mean is that fundamental feeling you have of being in...
00;15;12;10 - 00;15;13;24 Speaker 1 Control.
00;15;13;24 - 00;15;23;01 Speaker 2 Control over your thoughts, your body, your actions in the world around you. It's that internal conviction, that feeling of "I did that" or "I decided this."
00;15;23;08 - 00;15;25;19 Speaker 1 The feeling of being the driver, not the passenger.
00;15;25;19 - 00;15;39;09 Speaker 2 Precisely. And it's interesting, because psychologists distinguish between two aspects. There's the conscious judgment that you're in control, kind of a cognitive assessment. And then there's the more gut-level feeling of agency, which often comes directly from your motor system, from performing actions.
00;15;39;12 - 00;15;41;17 Speaker 1 Okay, thinking versus feeling control.
00;15;41;19 - 00;16;08;12 Speaker 2 Sort of. And the development of the scale, the SoAS, was actually incredibly significant in psychology, because before this there was a real lack of reliable tools to measure the sense of agency in a way that was decontextualized and cross-situational, meaning a way to measure your general, chronic sense of agency, how much control you typically feel you have, rather than just how much control you feel in one specific situation, like, say, during hypnosis.
00;16;08;14 - 00;16;30;04 Speaker 2 Most existing measures were either context-specific like that, or they captured related but slightly different concepts. Things like self-efficacy, which is belief in your ability to succeed, or locus of control, which is about whether you see outcomes as determined by you or by external forces. The SoAS aimed to get at the core feeling of being the author of your actions more directly.
00;16;30;05 - 00;16;33;10 Speaker 1 Got it. And the scale itself breaks down into two factors?
00;16;33;10 - 00;16;51;23 Speaker 2 Yes, positive agency and negative agency. Positive agency is, well, feeling effective and in control. But negative agency is particularly intriguing, especially in our context today. The study describes it as being akin to existential helplessness, a kind of fatalistic, pessimistic, and potentially really demotivating feeling.
00;16;51;23 - 00;16;53;02 Speaker 1 Existential helplessness.
00;16;53;02 - 00;17;07;02 Speaker 2 Yeah, yeah. And it focuses not just on failing to meet big goals, but on a perceived lack of control over really rudimentary, basic faculties, like your ability to move or even control the stream of your own thoughts.
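To make that two-factor structure concrete, here's a toy scorer in the same spirit: two subscales, averaged separately. The item wordings and the 1-to-7 response range are illustrative placeholders, not the published SoAS questionnaire:

```python
# A toy scorer for a two-factor agency questionnaire in the spirit of
# the SoAS: separate positive- and negative-agency subscales, averaged.
# The items and the 1-7 response range are illustrative placeholders,
# not the published instrument.

from statistics import mean

POSITIVE_ITEMS = [
    "I feel in full control of what I do",
    "My actions happen the way I intend them to",
]
NEGATIVE_ITEMS = [
    "My behavior seems to unfold on its own, outside my control",
    "My thoughts drift without my say in the matter",
]

def score(responses: dict[str, int]) -> dict[str, float]:
    """Average each subscale separately; a high negative-agency score is
    what the hosts gloss as 'existential helplessness'."""
    return {
        "positive_agency": mean(responses[item] for item in POSITIVE_ITEMS),
        "negative_agency": mean(responses[item] for item in NEGATIVE_ITEMS),
    }

# Example respondent: high felt control, low helplessness (1-7 Likert).
answers = {item: 6 for item in POSITIVE_ITEMS} | {item: 2 for item in NEGATIVE_ITEMS}
print(score(answers))
```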
00;17;07;02 - 00;17;20;21 Speaker 1 That sounds incredibly significant, especially when we think about a world increasingly powered by AI assistants and agents. Yeah, it's not just about job security then, is it? It touches something far more fundamental to our identity, our well-being.
00;17;20;21 - 00;17;42;20 Speaker 2 It absolutely is. And this, I think, raises a really important question that ties all our sources together nicely. As AI agents increasingly take on more tasks, even the seemingly mundane ones, the scheduling, the file maintenance, maybe even summarizing our own thoughts, how might this profound shift impact our fundamental, chronic sense of agency, our perceived control over our daily actions and thoughts?
00;17;42;22 - 00;17;51;11 Speaker 2 Does offloading too much, even the boring stuff, somehow erode that core feeling of being in charge of our own lives? It stretches far beyond just concerns about job security.
00;17;51;18 - 00;18;13;18 Speaker 1 Yeah, it's about how we define our own contribution, maybe even our own existence, when so much can be outsourced, automated, or done for us. Wow, what a journey we've been on today. Seriously, from the sheer transformative power of AI and that superagency concept, to the really detailed desires of workers captured by the Human Agency Scale. We uncovered those critical mismatches in AI investment.
00;18;13;24 - 00;18;20;29 Speaker 1 Also the fascinating evolution of human skills. And finally, we touched on this profound psychological dimension, our very sense of agency.
00;18;21;00 - 00;18;44;17 Speaker 2 It truly reinforces the idea, doesn't it, that the future of work isn't just about what AI can do. It's about how we collaboratively design and integrate it. How do we do that in a way that genuinely enhances human potential and, critically, preserves our sense of control, our sense of agency? The goal ultimately really should be augmentation, working alongside AI, not just handing everything over for automation.
00;18;44;20 - 00;18;45;08 Speaker 1 Absolutely.
00;18;45;09 - 00;19;03;11 Speaker 1 So as you, our listener, go about your day, we want to leave you with this thought to mull over: what aspects of your own work, or even your life, do you most want to retain human agency over? Where do you want to stay firmly in the driver's seat? And conversely, what tasks would you eagerly, maybe gleefully, hand over to an AI agent if you could?
00;19;03;14 - 00;19;13;14 Speaker 1 And perhaps most importantly, how might your answer to both those questions change as AI capabilities continue their rapid, almost dizzying evolution? This is a deep dive we'll certainly continue to explore.