The L&D Strategist Book Club

Exploring AI & Learning with Ethan Mollick's Co-Intelligence

Join the L&D Strategist Book Club as we explore Ethan Mollick's insightful book, Co-Intelligence: Living and Working with AI.

This conversation offers a multifaceted perspective on the challenges and opportunities that AI presents for L&D strategists, and includes leaders in learning design, technology, marketing, edtech, and human capital management. The discussion covers a wide range of topics, including the four rules of co-intelligence, the integration of AI in the workplace, addressing biases in AI, and the future of work.

Listen to the discussion

  • 00:00 Introduction to the Book Club and Ethan Mollick's Work
  • 00:46 Meet the Participants
  • 02:14 Diving into the Four Rules of Co-Intelligence
  • 02:29 Rule 1: Always Invite AI to the Table
  • 06:31 Rule 2: Be the Human in the Loop
  • 15:26 Rule 3: Treat AI Like a Person
  • 20:07 Rule 4: Assume This is the Worst AI You Will Ever Use
  • 20:39 Future Trends in AI and Their Impact on L&D
  • 23:40 Exploring AI's Interactive Potential
  • 25:01 AI and Empathy: Can Machines Truly Understand Us?
  • 27:50 Future Trends in AI and Organizational Behavior
  • 29:59 The Future of Work: AI's Impact on Employment
  • 35:28 Balancing AI Integration with Human Expertise
  • 40:14 AI's Role in Enhancing Productivity and Innovation
  • 47:59 Concluding Thoughts on AI and Human Connection

[00:00:00] Wesley: Welcome to the L&D Strategist Book Club. We're here to discuss Ethan Mollick's new book, Co-Intelligence: Living and Working with AI. Mollick's book is particularly useful to L&D professionals. He's a Wharton professor of management specializing in entrepreneurship and innovation. He's created numerous education games and business sims, and he's also an early experimenter and innovator with LLMs and one of the most followed voices on AI on social media. He's got this really great, pragmatic perspective on living and working with AI and its implications for business, so that's all relevant stuff.

[00:00:39] And we've got a great crew of folks here to talk about it today. So, I'm going to introduce everybody quickly before we dive in.

[00:00:46] So first of all, we have Greg, who is the CTO of Socratic Arts, and Greg received his PhD from The Ohio State University, where he studied computer science and artificial intelligence. And his current research focuses on how to use AI for education.

[00:01:00] We also have Holly, who's the president and CEO of Socratic, where she leads a team of learning strategists that design award-winning learn-by-doing programs for a range of clients.

[00:01:09] We're also joined by Ian, who is an independent marketing consultant who serves edtech businesses. And Ian served most recently as a director of marketing at Turnitin, where he led the corporate messaging and response to the advent of generative AI for global secondary education.

[00:01:25] We also have Josh. Josh is a co-founder at Covision and an interactive meeting expert with over 25 years of experience in large group engagement processes and technology.

[00:01:36] Lisa is the director of learning design and innovation at McKinsey & Company, who alongside her talented team, is transforming what it means to learn and teach in professional environments.

[00:01:45] And we also have Malika. Malika is a senior global leader with more than 20 years of experience in human capital management with multiple certifications and a doctorate in organizational transformation and learning technologies.

[00:01:58] I'm Wesley. I'm the VP of Design Services at Socratic Arts, where I lead creative teams designing learning experiences for our clients. And I'm also exploring ways we can integrate AI both into those experiences and into our workflow, and I'm going to be hosting book club today. So, let's dive in.

[00:02:14] So, we're going to go to our first topic, which is about the four rules of co-intelligence. A little background on this: Ethan Mollick proposes four rules of co-intelligence, of working with AI, for how to really incorporate it into our work and lives. So, the first rule is: always invite AI to the table.

[00:02:33] And the question really is, how can we apply that rule at work? I'm going to pick on Lisa first. Lisa, what do you think about that one?

[00:02:41] Lisa: Sure. It's interesting; when I read the book and started thinking about this topic, it reminded me of something. This reference will date me a little, but that's okay.

I'm okay with that. But it made me think about how, in college, I was working in a call center and they assigned us an email address. And I thought, what am I ever going to use this for? I don't need an email address. They'd come around and put a flyer on your desk with everything you needed to know.

[00:03:04] And so I'm proud to say that I have shifted my mindset in the last two or three years since college. And I really liked this idea, right? This mindset of, "Always invite AI in." What can I do every day? What's something I'm doing now that I could augment my workstream with? I actually think this is a really helpful mindset: rather than waiting for somebody to tell me what I need to do, I'm going to bring AI in right up front.

[00:03:32] And I liked that frame for the whole rest of the book, right? If it becomes my go-to... that was a really powerful way in for me, I thought.

[00:03:41] Wesley: Holly, what do you think about that first rule about always inviting AI to the table?

[00:03:44] Holly: Sure. I was thinking about this the other day when you and I were talking about how not every opportunity is a great time to use AI, but nearly every opportunity is a good time to practice using it.

Just thinking about it... even last week, I had a big planning doc and a bunch of bullets I wanted to communicate out to the team, and I used it to try to create some kind of synthesis and harmony. And it was fast. It also lost some of its meaning, though, and some of the nuance and some of the personality.

[00:04:20] I think we're going to talk more about that today, but it's amazingly fast. And I think that's really interesting, especially in the world that we're all in with learning and development, where there are so many possibilities. We recently used it here at Socratic to fast-track a proof of concept, both ideating the concept and thinking about the content, and also using it to build a tool to help give feedback on learners' deliverables. The client loved the concept, and we're now working with them. So, pretty interesting. I'm personally really excited for the day when it will find all of the docs and various receipts that I have all over my house and get them ready for my CPA, because that's what I'm really trying to tackle right now.

[00:05:03] Wesley: Malika, I wanted to give you a chance to respond because I know you were interested in this one, and I was just curious about your perspective on that, especially from where you sit within global human capital leadership.

[00:05:15] Malika: Yeah, no, absolutely. I think I had a little bit of a different perspective before reading the book and before I saw Ethan last week. I was with him at a conference where he spoke, and my thought had been to carefully invite AI to the table, right?

[00:05:34] That was my prior notion, and post conversations and post Co-Intelligence, I see the benefit of having AI earlier on, because how it learns and how it contributes to the information we're going to be using matters. Now I'm more educated, right? And so I think that it's truly impactful to bring it in, because it's learning alongside of us.

And we're the humans in the room, right? We're teaching it. We're checking it for accuracy, and we're bringing in our expertise. But what's also really interesting about inviting it to the table is that it has this speed that we don't have, right? It brings in ideas and generates things a lot faster, which gives us something to react to.

[00:06:23] It's not saying that we would go with said strategy or recommendation, but it provides us more options.

[00:06:30] Wesley: Absolutely. And I think you bring up a great point just around the caution, which really leads into the second rule, which is, "Be the human in the loop." I'm just going to pass the baton around because y'all are already talking about it, but Lisa or Holly, do you have anything to add on that, "Be the human in the loop," bit?

[00:06:44] Lisa: Yeah. I liked the first two rules because I felt like I saw him speaking to two archetypes, right? The person who is reticent, who's a little hesitant and maybe not sure they want to do it: always bring AI to the table. And the person who is all in, excited, and ready to delegate every task they can to AI.

[00:07:02] That second rule feels like a little bit of a cautionary moment, right? Be the human in the loop. Bring the empathy. Bring the understanding. Pressure-test. I thought the way that he constructed the rules was so complementary, speaking to different kinds of people and where they're at in their willingness to adopt AI.

And it just made me think, the whole time I was reading, about how much human wisdom we can bring and how we are accountable, right? We're the risk mitigators. We're the ethics. We make sure that the way we're bringing it to bear really is in the best interest of the work that we're doing.

[00:07:40] So I liked the two, and I felt like they were a really nice complement to each other.

[00:07:44] Wesley: Yeah, totally.

[00:07:46] Holly: I think also, just to Malika's point a couple of minutes ago, we need to be able to train it. So, he starts by saying, be the human in the loop, but in the later chapters he really talks about being the expert human in the loop.

[00:08:00] And I think when you add expertise to AI, you train it in the way that we need to train our people, right? We know people learn through doing the work and getting feedback, and then those feedback loops, right? And you need to do that because often, as he explains in the book, those initial responses are pretty vanilla.

They're pretty generic, or they're missing things. So being able to add that human expertise to it is really important. For example, we've found in our work here that it's pretty great at drafting first scenarios, and then we can tweak it. It's less good at other kinds of ideation.

[00:08:42] So yeah, it's important to have an expert running it.

[00:08:44] Wesley: Yeah. I have a friend over at Microsoft who was just like, "garbage in, garbage out." And I think that's the idea.

[00:08:50] Malika: I recently had that conversation with someone, but it's about being the human in the loop. What's really intriguing is watching Ethan, on the spot, create a performance review for someone. And it was very good, right? It gave some insights into the person and said, this is what they're doing well, this is what they're not. And that can be a very helpful tool, especially if you have a lot of performance reviews to write. I'm using this as an example because you cannot remove the human element from it. You need to be able to humanize it and bring in the empathy and the in-person or virtual interactions, if you will, that you have with that person as their leader.

[00:09:32] And so that is, I believe, also very key, as is keeping to a specific tone.

[00:09:38] Wesley: Absolutely. Absolutely. And yeah, and I'm sure as an HR leader you could see the benefit of being able to crank through a lot of performance reviews quickly, but then also the horror of leaving that to purely machine intelligence.

[00:09:51] Like, it's a double-edged sword. And yeah, I like those two rules paired.

[00:09:55] Malika: We may talk about this further as we get into deeper conversation, but being that human, and it goes back to what Holly and Lisa were saying, means we have to step in to remove the biases that are going to be pulled into the work, or into the outcomes or outputs, that the AI is generating.

So again, we're still teaching it, and we know how it's curated. We know how it learns, and we know that there are a lot of biases that can come in that we need to make sure we're looking out for. So, being in the loop is key for that as well.

[00:10:30] Greg: I was going to add something to bringing AI to the table, specifically about what it even means to bring it to the table, which I feel like is changing as we speak. I don't know if people have looked at the new GPT-4o yet that just came out. It's really very impressive, and it gives you different ways to bring it to the table.

[00:10:49] So for instance, yesterday I was out for a walk and was thinking about some work stuff, and I just pulled up ChatGPT-4o on my phone and had a conversation with it while I was walking. And it feels like you're just talking to a colleague as you're out somewhere.

[00:11:04] And it just has a whole different feel now that they're moving in this direction of, multimedia integration.

[00:11:09] Wesley: Yeah, absolutely.

[00:11:11] Holly: I wonder if our grandkids are going to have real friends.

[00:11:14] Josh: I felt a little creepy about writing the stem, but "treat it like a person," endowing it... it's really a relationship. That's what he's talking about. I've been using it since ChatGPT came out, and I realized after reading these rules, "Oh, I'm not leveraging this right."

I put in a prompt, I get an answer. It's only okay. But it's... this is the creepy part... it's like any good relationship. You get out of it what you put into it.

You really have to get to know it and endow it and create a relationship with it. And then you get more out of it, and it becomes much more useful.

[00:11:45] Wesley: I've had to lead various little workshops, both internally and externally, on the basics of using LLMs for L&D tasks, helping us all get our feet wet, both at Socratic and for many of our clients. And one of the little tics of how I interact with it is that I always say "please" and "thank you," and people will giggle when I do that.

[00:12:04] They're like, "Why did you say please?" And I'm like, it's because it might be because I have a liberal arts background versus a tech background, but I'm naturally treating it like a relationship. I also might have a small bit of paranoia that when the singularity comes that it might remember that I was actually nice and I'm a human.

[00:12:19] [laughter]

[00:12:20] Josh: I did that too, and I gave it a name. I'm like, call yourself this, which is weird, but it's because I really wanted to try wading into it: okay, this is going to be a companion I'm going to work with, and let's see what it's like.

[00:12:31] And that was interesting, too. And I told it my name. So, it says, "Hey, Josh."

[00:12:36] Wesley: Yeah, particularly now that it's remembering across conversations.

[00:12:39] Ian: I think, given this audience, and specifically you and me, Wes, we both bring roots in the toy industry and children's play. As we talk about the attributes that people assign to the technology and imbue it with, I think that's a very human and common thing to do. You and I used to make products along those lines with an educational bent, from teddy bears to toys like Forky in Toy Story 4.

[00:13:06] Holly, was that your question about whether or not our grandkids will have actual friends? I think on one hand, yeah, but it's also a very real question when you look at other long-term trends.

One of the timeless books that I keep coming back to, perhaps for a different book club context, is Bowling Alone. When we look at isolation and the various things that are going on as a result of social media, I think you raise a genuine point and a concern in terms of what we have seen over the past 30 years of how humans interact with these technology systems and what it means for human connection.

[00:13:44] It's a deep challenge.

[00:13:45] Wesley: I noticed that too, Ian. There was the part, and we don't need to go too deeply on this, where he was talking about the history of Replika: how it began with somebody replicating a loved one who had passed on, and then how people dealt with that.

[00:14:00] And I was thinking about it the same way in light of the current epidemic of loneliness and lack of belonging that's going on in the world, often exacerbated by how we use technology. And I was just like, good heavens, how is this new tech going to click into that trend in relationships, both soothing and maybe exacerbating it? And how can we use it to help bridge relationships rather than isolate us more in our little screened boxes?

[00:14:28] Ian: There's a piece here where Mollick in the book referenced the ELIZA research from the 1960s, and I remember, again dating myself, playing with that when my dad got a really cool, new, shiny Macintosh Plus in the mid-80s. And from a very real perspective, there are a lot of relationships that form on social media where the human beings never actually connect and meet. From the point of view of any given audience member, that online relationship, even if it's technically with another flesh-and-bone human being, is entirely synthetic.

[00:15:02] And so what's the difference between having an online chat with someone with whom you've never had a relationship that's not technologically mediated, and a persuasive bot? I don't know.

[00:15:13] Wesley: And I'm going to just throw a lasso around that and rein us back in from the deep discussion of embodiment and phenomenological experience.

[00:15:21] There's one thing I wanted to pick up on, which is this third rule, and then I want to move on to the next question soon.

[00:15:26] This third rule is about treating AI like a person, but telling it what kind of person it is. Because, Malika, you brought up something really important that I didn't want to gloss over, which is bias. There's so much inherent bias in AI because it's built off human biases. And so the ghost in the machine... it's still there.

The bias is still with us. It's in the AI, and I'm still grappling with it... I hate to pitch this to you, Malika, because none of us have answers, but have you had any thoughts yet about how we need to be thinking about that? Because I'm so concerned about it.

[00:16:00] Malika: Yeah, it's interesting. With Holly and Ian, actually with all of us, when we're talking about treating it like a person, the biases come with it, right?

Because we all have them. But what we're striving for is better, right? And how do we do that? You're right, there's not an answer, at least not currently, that I'm aware of. When we're treating it like a person, I do think it helps, as someone mentioned, to give it a name, but also to really clearly define what the AI's role is and what the expectations are.

[00:16:31] I haven't played around with 4o as much, but maybe there's a way to teach it about biases. What I haven't seen or heard is a way to do that, because we've all read the book and we've all done our other research. We know that it's pulling in information.

It's basically curating information that's already out there. But maybe there's a way to really partner with the AI, bring it to the table and say, your role is to reduce biases in AI, and really see what happens there and use that as part of the learning as we start to integrate it more. Wesley, you're right, I don't think there's a real, clear-cut response. I would love to take that on as research, and I've been thinking about it. It is crucial. Just in my newsfeed today, something popped up where someone in the NFL made a comment that is now out there, and the AI can use it as truth, and it sets women back: "Women can't be in the workforce and have kids; you should be home, married, cooking dinner."

[00:17:41] And the NFL came back and said, "Hey, that's not how we feel as the NFL," but that's how that person felt, and that's how that information is out there. And now that can become part of treating this AI like a person. But really what we want to do is help it evolve. And it goes back to us finding the governance and the corporate and social responsibility, and writing rules around how we manage this AI, because right now it's a little lackluster.

[00:18:12] Wesley: Yeah, I agree.

[00:18:13] Josh: And it's privatized, right? That's the other thing. What he clearly laid out in the book is that it's only people with a ton of money who can build these.

[00:18:23] And how much of the bias is in the initial training, and how much is in the evolution after?

[00:18:28] Wesley: 100%. There are so many layers of bias. They've got de-biasing mechanisms in the models, but they're not going to catch everything, because of the biases of the people who are training the model.

Which means the point of view is probably going to be Western. It's probably going to be white. It's probably going to be male. And even when those folks are well intentioned, it creeps in. Working with my teams, so far we encounter the worst biases in imagery. But I just think it points back to this: as the technology evolves, we all have to really take that "be the human in the loop" role to heart, and also keep doing all our diversity, equity, and inclusion training among our human teams to catch it, because we can't trust the AI to catch itself.

[00:19:15] Malika: Yeah, I think this is probably going to lead a little into number four. I don't discredit "treat AI like a person," and I understand the concept, but do we really want to treat it like a person? Don't we want to use it to help us evolve, right? Or to find ways to break down the flaws in... war and, what's the word, racism and inequality, and the things that we believe make us human.

[00:19:42] And again, that might lead us into assume the worst of AI, but again, there's still a lot of work to be done in that journey.

[00:19:50] Wesley: Yes, and you're right. It's a powerful tool, and it cuts both ways. Just because it has deep flaws doesn't mean we can't also flip it around and try to leverage it to solve some of our very human faults, precisely because it isn't human, which I think is fascinating.

And I'm excited to hear more about the research angles you take on that, so please keep us posted. I'll take us to number four, "Assume this is the worst AI you will ever use," which is pretty self-explanatory. It just has to do with the hockey stick of tech evolution: this is going to change.

[00:20:17] I don't know if anybody has predictions or ideas about how it will change. I don't know if Greg, you or Josh have thoughts on that yet, but I'm definitely curious what better AI is going to be like.

[00:20:32] Greg: I think, look at the movies, right?

[00:20:33] Just look at all the things in movies and science fiction. And that's what it's going to become.

[00:20:38] Wesley: Yeah, absolutely. Let's move on to our next topic, which is future trends in AI. Mollick brings up a bunch of trends in his book, and I'm really curious to talk a bit about which trends we think are going to impact L&D the most and which ones we're most excited or concerned about.

[00:20:57] And so he talks about a bunch of stuff; I'm going to list a few of them. He talks about information overload, right? Because you can create more stuff faster with AI, all our content channels are likely to clog up even more than they already have. Some people are talking about the end of the internet, which I'm like, okay, what?

One of the biggest ones that I thought was fascinating was this idea of distrust of truth and digital media. The fact that you can make any image, any video... deepfakes are already super easy to make. We're about to go through a super fun election cycle with that technology in place.

[00:21:33] Also, we've got more human-like AI personas. It's going to be increasingly difficult to differentiate between whether you're interacting with a human or AI. We're going to see the deployment of AI in roles that require empathy, like every customer service phone tree ever. We're talking about the transformation of work.

Also, the acceleration of innovation and scientific research, which is fascinating, and then stuff that's a little spooky to me, like algorithmic management. That's a whole mouthful, but Malika, I'm going to pick on you again and just ask: are there trends around AI that you're particularly excited or concerned about from an L&D or HR point of view?

[00:22:14] Malika: Yeah, absolutely. I'm going to talk about the ones I'm excited about because the ones I'm concerned about, we don't have enough time.

[laughter]

[00:22:21] I think for me, it's the algorithmic management and the more human-like AI personas; I'm going to spend a little time talking about those. Think about the opportunity to track activities, behaviors, outputs, and outcomes of workers, right? In L&D, that's key. That's what we're striving for.

And that's one of the areas where I see AI really leaning in on the L&D front: to help identify future behaviors and current behaviors. What are the outputs? What are the potential outcomes? And to be able to create different real-world scenarios that help L&D evolve their learning journeys, their facilitation, and obviously their content, but also how the information is received and perceived.

The other piece would be the more human-like AI personas: chatbots, movies, video games, characters. Some of us are already there, using chatbots in L&D, so evolving that, I believe, would be very useful.

I know that one of the things we created was an inclusive marketing learning journey, and we gamified it. The premise was a combination of Where in the World is Carmen Sandiego? and an interactive conversation with one of your peers, and it really resonated well. But if we had this type of AI available to us, the experience would be even more interactive, like you're having a real conversation with a colleague, talking in real time and getting coaching in real time.

[00:23:59] And so I see that as yet another benefit. And then, going a little futuristic: I was just at the Sphere again in Vegas, and this time I had an opportunity to interact with, I think, the seven or nine different robots they have there. They're using AI to inform these robots.

So I had an interactive conversation with one of the robots. It asked me a lot of questions, and then it said, "What questions do you have for me?" And I said, "Where do you think the interaction between HR and AI is going in the future?" And to hear its answer... it referenced its coworker, the person overseeing the robot, as a friend. I see things like that being beneficial in the future when it comes to L&D and when we talk about what more human-like AI personas we can bring in.

[00:24:52] Wesley: Awesome.

[00:24:54] Yeah, I just love hearing that... the future-casting is super fun for me. So Josh, I'm going to pick on you next. Do you have any thoughts?

[00:25:01] Josh: Yeah, my work is less core L&D and more interactive events, and then I also have the psychology background. So I've thought about it in terms of those two areas, and I have things that are exciting and concerning all in one.

I think mental health is a massive problem, a system that seems to be broken. Can AI be a therapist? I want to say no way, like you were talking about before with the deployment of AI in roles that require empathy. I guess it depends on whether we think it'll ever become sentient.

Yeah. Can AI actually have empathy? So that's the concern. There is something about a human being connecting to a human being on an empathetic level that I don't think can be replicated. I'm happy for someone to argue with me about that.

[00:25:48] Wesley: Yeah, that's something I think about all the time. I'm starting to see all these things being advertised as empathetic or empathic AI, and I'm like, no, it's not, because it has no perception. But it can imitate empathy.

A sociopath can also imitate empathy. But then there's the scaling problem in mental health, where people don't have access. And the parallel over in L&D is around coaching and mentorship, but we'll get to that later. So then I'm like,

[00:26:16] is there a way to scale part of it ethically?

[00:26:20] Or is it just a no go? I don't know.

[00:26:23] Holly: I was just thinking about one of the things that he commented on early in the book, which is that one of the differences between AI and humans is our ability to reflect and the processes that we use to reflect. And we know how important reflection is for learning.

And while I totally agree with everything you're saying, and I'm a psych major at heart too, I also think, because of scalability: could it help us better reflect? Could it help us think about things differently? About ourselves, right?

[00:26:55] Josh: There are lots of methodologies that I think could be done, less the traditional psychodynamic model of sitting with someone, but EMDR, for example, or other cognitive behavioral stuff, where it's exciting to think, "Wow, people could get help."

[00:27:12] And the AI could just help them step through a cognitive behavioral therapy intervention or EMDR or something. I can imagine that working. At the same time, I want to believe that it can never really replace certain types of coaching and therapeutic relationships. Maybe that's just self-protective, because there's rule number four: this is the worst AI we're ever going to deal with.

But I just think that empathy is something that's human. I don't know.

[00:27:38] Wesley: Yeah, no, I wonder about the same thing. He talks about how once you encounter AI, you have at least three sleepless nights, and I haven't had my sleepless nights yet. But I want to pass the baton to Ian.

[00:27:52] Ian, I know you love to think about trends and think about things deeply.

[00:27:56] Ian: I'm going to try to constrain the trend conversation to L&D and organizational behavior, because there is not a conversation going on that isn't shaped by this. And Malika made a comment earlier which provokes a lot of thought here.

When we look at it from an organizational leadership point of view, and Holly was just making some remarks related to this as well, one of the things that thoughtful, intentional leaders have to wrangle with is the gap between best practice, the things we all should be doing, and what most commonly happens in organizations.

And as we look at these future trends, I can see it going different ways in different organizations. On one hand, if an organization has... we'll just call it "wrapping paper values," where words are put up to try to engage people but they don't genuinely care about the human element, then one path for AI is going to be the panopticon example: further surveillance technology, automation, and cost squeezing.

[00:29:08] On the other hand, in the more optimistic case, is there a piece here that can model best practices and model engagement?

I've become a Perplexity fan. And for those of us who are comfortable diving into a role and doing role playing, it can be a fun role-playing buddy, but that in and of itself is a hard skill for people. And does this create more practice opportunities with something that, increasingly... and we can talk about the Turing test... if we all had our screens off and were just voices talking, could one of us be a bot without the others having any idea whatsoever? Probably, at this point.

[00:29:48] Greg: I've been busted.

[00:29:52] Ian: I thought it was the Alan Turing mention... the PhD was like, hold on a second. I got this. Hold my beer.

[00:29:59] Wesley: I'm going to try to hustle through this one, because this book was so rich that it's hard to get through everything. But one of these future trends was this idea of the future of work and how work is going to change. And there was what I think of as the "hot potato quote" in this book from the Turnitin CEO, Chris, where he said at one point, "Most of our employees are engineers and we have a few hundred of them. And I think in 18 months, we will need 20 percent of them. And we could start hiring them out of high school rather than four-year colleges, same for sales and marketing functions."

[00:30:30] And I'm going to ask Holly for a response to that statement first, because I'm just curious, what everybody thinks of that and how we can embrace and prepare for those coming changes.

[00:30:40] Holly: Yeah, I don't know Chris at all, and I know Ian does, so I've been anxious to hear his thoughts on that.

But I'm assuming, at least to some extent, this was intended to be shocking, which for me was useful, because it's one of the things I remember most about this topic, right? It was just a statement, so I think it was highly effective. On the other hand, though, I don't think we should run our organizations with fear.

[00:31:07] I don't think we get the best out of our people. I think we lose engagement. Certainly, lots of problems with that approach. I personally am very excited about what I see as the future of work, especially as it relates to L&D and using AI, and I think that we have a huge opportunity in our organizations to lead out in that way.

[00:31:30] And to help people come along in the process. Cause people do have natural fears. You don't have to tell them they're going to lose their job. They're already thinking about that. So, I think it's very critical to be aware of that and to be thoughtful about even just the way you talk about using it as a tool, I think is really helpful.

[00:31:48] I think, Wesley, you've been really good in our organization about doing this, and I think it's absolutely changing our work today; there's no question about that. And this is the worst it's ever going to be, so it's just going to continue to change and evolve. The release from OpenAI this week is just one example.

It's like week to week, we have to stay on top of those trends. We have to know what's going on and be really open. I'm really trying to create a culture of openness about it and encourage use, while also knowing, practically, that you'll have some people who are going to lead out in this area and some who are very concerned.

[00:32:28] Wesley: 100 percent. And now, Ian, I'm going to pick on you, because you were most recently at Turnitin. So, I was curious for your hot take, as it were, on this one.

[00:32:37] Ian: Two qualifications first, and then we can take this a couple of directions.

First, Chris Caren is just one of the nicest, most thoughtful, and earnest human beings you will ever meet or have a chance to speak with. Genuinely and truly. The second piece is that, in saying this about Turnitin, I have no knowledge of future staffing plans; I love the company and love my former teammates.

But Turnitin is also a really interesting company. It's privately held by Advance, and it's well entrenched as a market leader in edtech, which is public sector and very slow to turn. Chris actually did another piece about CEOs and thoughtful leadership and his focus on profitability, where from a business leadership perspective there was a really strong argument to be made about cost management. The way I can interpret Chris, having worked in his organization for six years, and I would work for him again if that's the way the stars align, is that there's an opportunity here for cost reduction. In any profitability equation, you have both the revenue side and the cost side, and cost reductions are profit increases, too.

[00:33:56] And that is the business metric that Chris has always been very intentional about at Turnitin. So I would caution other business leaders against generalizing from that statement, which comes from a very specific leader at a very specific company in a very specific industry.

[00:34:13] The flip side of this is that Turnitin is an established business with customers in 140 countries. But at the same time, we're enthusiastically looking at the startup space, and people are like, "Oh my goodness, we can now do startups with a team of three people," with the exact same productivity argument. It's just a different business context and a different impact on people; in the entrepreneurial space, this basic argument is completely uncontroversial.

I think this is where, in terms of the substance and the way it's being generally interpreted... I was a marketer at Turnitin the day he said that at ASU+GSV, and that was a fun day. "Oh, Chris, you didn't. Okay, fine." And he's right. It's true.

[00:34:58] But I think this goes back to the organizational leadership and the behavior aspects. And again, I'm going to point to Malika in terms of... we have the companies we have; how do we treat the people and continue to engage them?

[00:35:11] For the entrepreneurial cases, yeah, you can start with your team of 3 and scale very thoughtfully. I think that some people are being bullishly optimistic, but change is not going to come instantly for entrenched firms, and if you change too quickly, that will come at a cost.

[00:35:26] Malika: Yeah, I think that's perfect. And just to add on to this a little bit, I'm all about being proactive and the future of work.

And in some cases, when it comes to resourcing and how companies are looking at their resources now that they're implementing a more strategic AI plan, I almost feel like we need to be reactive. The reason I say that is I think we're jumping the gun, jumping ahead a little too much, without truly understanding what roles humans will play as it pertains to AI, and as it pertains to being that human in the room, right?

And maybe it's looking at taking your individuals and changing their business acumen. Maybe they're learning no-code or low-code, or there's an organization you set up within your company that focuses on governance and responsibility and the economic impacts.

So I feel like, yes, there is going to be a shift in work. And you hear leaders, I won't name names, but some individuals at Microsoft, which I recently left, talking about how it's not going to replace your job and it's not going to replace workers. And it already has.

You're already downsizing for the future of AI, but you truly don't have a clear roadmap of what that work is going to look like in the future and what roles you'll need. So maybe just slow down a little bit. I'm with the people in the book, and I knew this even before the book, who say, "We're not saying no to AI, we're just saying let's slow down and put some parameters around it, and let's learn from our lessons." And I am going to say Microsoft, because in the book there are two situations discussed where Microsoft jumped the gun a little bit when it came to AI, and we saw what happened from that. So again, I think it's the shiny new toy and people are being reactive, but let's ease into this and have more governance and more of a plan in place.

[00:37:24] Ian: Yeah, if I can grab this, and this is actually a jump back to the prior topic. One of the things Malika implied is that Chris, in his most Chris way, was earnest and honest in his response and assessment. The flip side is some of what Malika is describing, where there are organizational leaders who are being like, "Oh no, you're absolutely fine. I'm giving you a pink slip tomorrow. AI is not going to take your job. We're going to just eliminate your job. And it's just not going to be a thing."

[00:37:53] And so there has to be some honesty and candor. Jumping back a moment into the tradeoffs and training conversation: one of the pieces here is that we're acting like AI and chatbots are all new.

[00:38:07] Malika, I don't know if you were part of Microsoft when Tay went live on what was then Twitter in 2016. Microsoft did a chatbot experiment on Twitter, and within 24 hours it became a vicious, racist, awful example. There is a huge amount of risk in the literature, and speaking as a marketer, I think there are a lot of people cashing in on the novelty.

[00:38:29] I've done it myself; I admit it. We also have to take the longer view of how this technology has come along and unfolded. Greg, I may drag you into that one a little bit and put the PhD to work: really look at how we have interacted with these things in the past and what has happened in these moments of persuasion.

[00:38:49] And what does that reflect about us? Tay is a fascinating case study.

[00:38:53] Greg: And the real question from this one, for me, isn't so much about the company; it's about society. Because we're going to get to a point where most of us don't have to work anymore, because AI and computers and robots and such will be able to do a lot of the things that need to be done to get us everything we need as a society.

[00:39:14] And when we get to that point, how do we structure society? How do people get rewarded? What do people do with their time? And it seems like a really big kind of societal question, more than a company question, in my mind.

[00:39:25] Wesley: Yeah, I'm fascinated too, and I want to move us along because we have so much to talk about. But I did want to mention one point that Mollick made about this in the book that I thought was just great, which is that you can take the pure productivity model and be like, "Okay, I can do the same thing I was doing before with fewer people, faster." And that's one way to do it. But the other way, more in line with your perspective, Greg, is: this is going to be a whole new world. What are we doing differently?

[00:39:51] So instead of thinking, "How can I do the same thing with fewer folks?" you can flip it and say, "I have this amazing team. What more can we do? What can we do differently? How can we then stand on the shoulders of AI and do so much more, better, or differently and innovate?" which I thought was a really lovely perspective. And also, as a human who'd like to keep my job, I liked that perspective.

[00:40:14] But moving on with this idea of productivity: we really wanted to talk about productivity and innovation and how we can encourage them, because AI promises these huge wins in organizational productivity and innovation.

[00:40:28] But Mollick points out that a lot of people are making shadow use of AI at work precisely because of all these fears. They're afraid of consequences, like they might get in trouble. Or they're afraid they're training their own replacement: as soon as I use AI to do this and show my company, they're going to eliminate me and just use the AI to do it. And there's also a lack of clarity around security and IT policies. So, in this context, what can we do to really help drive organizational productivity and change when there's all this fear? Josh, I'm going to pick on you.

[00:41:01] Josh: The thought I had about this is what he talks about with still needing human expertise, and the danger that we're going to lose it. If no one has to go through the boot camp of learning, then where are the experts? I wonder, can AI get you from the 25th percentile to the 75th? But the 99th percentile is still going to be a human being. So do we encourage people to become experts who can then leverage AI? Maybe it's going to make fewer people experts, but they're going to be highly revered and paid. I don't know. And I think organizations, you tell me, because I'm not an L&D person, are still going to be motivated to keep expertise, because if they lose all human expertise, how are they going to know that what they're getting from AI is what they need?

[00:41:45] Wesley: Yeah. Yeah, absolutely. Holly, did you have some thoughts on this one?

[00:41:49] Holly: Yeah. We hire interns, and we've been lucky to get some great people out of Boston College; Janet Kolodner has a great program there that trains learning engineers. And I think about what they do versus what an AI can do.

[00:42:05] And I think that it's a great opportunity for us to help new people in the organization at whatever level they come in to be able to get better with it. They can start to increase their skills and we can focus our feedback.

[00:42:22] And even just, I'm thinking, we've been toying with the idea: do we create a semi-Socratic tutor that could give them feedback on their deliverables?

[00:42:29] And then we could have our human mentors focus more on the complex and nuanced parts of their jobs, because the roles that we have here are very nuanced and complex. We find it hard to hire people; even people who come in with a really strong foundation in cognitive science just need time with it.

[00:42:47] And so I feel like that's a huge opportunity for us long term: again, not necessarily to replace people, and not just to speed up their tasks, but hopefully to actually get them more expert at using the AI to do some of the things that are less interesting and, frankly, boring.

[00:43:04] Wesley: Yeah, 100%.

[00:43:06] What we were going to talk about next is this idea of closing the performance gap. I'm going to mention it quickly and then segue into the next topic, which I think all of you are actually more interested in talking about. So just for those who are listening, a few interesting stats.

The reality is that AI does close the performance gap between high and low performers, particularly in more junior roles. Mollick mentions that participants in an MIT study who used ChatGPT to accomplish a range of tasks got a 37 percent reduction in task time, and it also helped reduce productivity inequality.

[00:43:39] The folks who were super fast and the folks who were slower all balanced out. Similarly, Boston Consulting Group did a study of 800 consultants who used AI to conduct research, analysis, and ideation around launching a new product. The ones who used AI beat their non-AI-enabled peers hands down: their results were faster, more creative, better written, and more analytical.

[00:44:03] And in law, we're seeing that folks near the bottom of their class in law school who used AI equalized their performance with students at the top of the class. So, AI is leveling the playing field, at least among people who are earlier in their careers, more at that novice level.

[00:44:20] And this is an interesting point, because it really feeds into this idea of AI as a threat to human expertise. I want to ask Lisa about this first: how will AI use hinder the processes that we know are involved in learning, and what can we do about it?

[00:44:41] Lisa: I think it's a really important and pretty existential question for our industry, right? I have a particular passion about this because I lead an expertise-based team in a firm that really values expertise and knowledge and what we know, and bringing the best of that to bear.

[00:44:58] And so I think this is a super important question for us. As I've been listening to this conversation, I've just been thinking that there are a lot of places where the opportunity of AI kind of seems to be amplifying some of our worst impulses.

Our brain loves shortcuts, so what if we lean into those shortcuts, right? We'd love for somebody to do the work for us, so if we lean into that, do we lose our expertise? I think this is also a pretty incredible opportunity, though, for us to make new choices, right?

We are still agents. We still get to choose. And I really deeply value expertise, and he made a point that basically says the expert is who will get the best out of AI, right?

[00:45:44] I think we've all kind of seen that. In our organization, in a meeting recently, somebody put up an agenda that they built for a learning program. And they were like, I built this in three minutes using our proprietary GPT. And I was like, oh, and it's not that great. Like, it's not that great.

Yeah, you're right. You got an agenda in three minutes out of the GPT, but it's not great. And so I think there's something really interesting for us to think about. A few things I am personally doing with my team: encouraging them to use AI. We've built a whole crowdsourced prompt library, and we are doing lots of sharing sessions, where it's like, by the end of the year, I want you to tell me all the ways that you have found productivity in your work, and then tell me what you did with that time you gained back, right?

Where did you lean into new ideas? Where did you lean into new research? How did you contribute to somebody else's learning and their development? I do a lot of my work in apprenticeship, and so I'm thinking a lot about what kinds of apprenticeship models I can build with AI that could augment what human beings can do.

How can I free them up on the little stuff so they can take on more nuanced, interesting, and complex work? So I took away from this book a really empowering message on the value that human expertise will still bring, and we just have to figure out what the new starting point of that expertise is, right?

[00:47:02] I'm not going to write the dirty first draft anymore. I'm going to get that first draft, and that's going to save me a little bit of time. So what are the new skills I'm teaching people at the beginning? I hear us all on the existential threats, and maybe I just don't personally want to dwell in that moment too long, but I also think there are some really incredible opportunities for us to personalize learning in a way we've always dreamed about, and to bring apprenticeship into the workflow in a way we've always dreamed about. So I hear you on the threat to expertise, but I think it's also a real opportunity for a tremendous leveling up of what we're capable of doing.

[00:47:43] Wesley: Yeah, absolutely. Absolutely. And yeah, I don't mean to over index on that, the threat part, so much as say, given that we've got this new tech, which is so powerful, what's the best practice around that? But Josh, I really want to hear from you on AI and human expertise.

[00:47:59] Josh: Yeah, I said some of it before around the coaching and therapy, and whether AI is ever going to be capable of empathy. But I love what Lisa said, and it goes back to what I was saying in the beginning: it's really about us developing our relationship with it.

[00:48:16] I also wouldn't mind if we shift into a universal basic income model while AI is doing a lot of the work... I don't know if that's dreaming an ideal, but I do think it's exciting. It's also scary to me, because the trend of the technology is distancing us more than it's bringing us together. That's the bottom line. And I am concerned that the trend will go more in the direction of minimizing our connections to each other.

[00:48:42] Wesley: Yeah. No, that makes total sense.

All right. I wish we had more time, because there's so much to talk about. I really wanted to talk about the two-sigma problem, the idea of AI as tutor and mentor, and how we could close the gap on that. And I really wanted to talk about AI as creative with all of you. But I know we've got to go, because time isn't infinite.

[00:49:01] So thank you so much to each of you for participating. I've learned so much from hearing each of your perspectives, and I hope we can do it again soon.

Participant Bios

Greg Saunders

Greg Saunders, Ph.D., is the CTO of Socratic Arts, where he is responsible for all aspects of technology. He holds a Ph.D. in Computer Science and a B.S. in Mathematics from Ohio State University, where he studied artificial intelligence and neural networks. He has held research positions at Northwestern and Carnegie Mellon universities and is currently focused on the use of AI for education.

Holly Christensen Sestak

Holly Christensen Sestak is CEO and President of Socratic Arts, where she leads a team of learning strategists that design innovative, award-winning learn-by-doing programs for a range of clients. Holly is most passionate about supporting emerging leaders and their unique needs through formal and informal learning, and about exploring how AI can both enhance and scale solutions for her clients at Socratic.

Ian McCullough

Ian McCullough is an EdTech business leader whose experience spans marketing, product, supply chain, and learning & development. Ian was a member of the first class of Carnegie Mellon University's Master of Entertainment Technology program in the late 1990s, where he worked on the bleeding edge of now-popular technologies like conversational software agents and virtual reality under Donald Marinelli and Randy Pausch. He started his career creating educational content at pioneering toy maker LeapFrog, managed software engineering teams at Electronic Arts on The Sims 3, and co-founded Maker Faire award winner Giant Cardboard Robots. Most recently he originated the K-12 marketing function at Turnitin, where he led the company's message to that market in response to the advent of generative AI.

Josh Kaufman

Josh Kaufman is a co-founder at Covision and an expert in interactive meeting technology. With over 25 years of experience, he has developed tools and processes that maximize participant engagement for major organizations like The World Economic Forum and Fortune 500 companies. His innovative approaches are featured in the book Virtuous Meetings (Jossey-Bass, 2014). Josh is particularly excited to leverage AI to rapidly identify common themes and foster alignment across large groups during live problem-solving and ideation sessions.

Josh holds a master's degree in clinical psychology and has completed coach training from the Coach Training Institute. An avid adventurer, he enjoys running, improv theater, rock climbing, and hitting the ski slopes whenever possible.

Lisa Christensen

Lisa Christensen is the Director of Learning Design and Innovation at McKinsey & Company, leading a global team to innovate exceptional learning and leadership development offerings. She founded and heads McKinsey's Learning Research and Innovation lab, translating academic research into cutting-edge learning practices. A recognized thought leader, Lisa has authored influential articles on intentional learning, modern apprenticeship, and feedback culture, showcasing her commitment to science-backed innovation. As a founding member of CLO LIFT, she collaborates with top industry leaders to tackle systemic challenges in learning and development. Before McKinsey, she was a senior leader at a boutique learning design firm.

Malika Viltz

With over 25 years of experience in Human Capital Management, Malika Viltz is a Senior Global Leader, overseeing the full ecosystem of people experiences and business optimization for a diverse and dynamic workforce. Malika has multiple certifications and a doctorate in Organizational Transformation and Learning Technologies. Her mission is to identify and address real-world challenges and opportunities for institutions & companies and their people, while also designing and implementing optimal solutions that leverage the latest tools, technologies, and processes. She's passionate about developing innovative and sustainable ways of working and enabling the readiness and success of people in an agile and evolving world.

Wesley Hall Parker

Wesley Hall Parker is VP of Design Services and a Learning Design Director at Socratic Arts, where she focuses on practical applications of Gen AI and other tech to accelerate the design and development process. She also works with teams to consider Gen AI-enabled features for client programs. Ms. Parker has 20 years of experience designing award-winning, performance-driven learning solutions and games across a wide array of industries, modalities, and learner levels (from partners to preschoolers). A passionate advocate for user-centered design, Ms. Parker champions learning experiences that are story-centered, hands-on, engaging and memorable. She holds a master's degree in literature from Stanford University.


Socratic Arts, Inc.

Science-based learning that improves performance.


©2024 Socratic Arts, Inc.