
In this episode of 10X AI—a show exploring how AI amplifies people rather than replaces them—host Julius Neal speaks with Brooke Struck about facilitating AI transformation in organizations. The conversation examines the critical distinction between AI for efficiency (reducing costs within existing models) and AI for disruption (fundamentally reimagining value propositions and operating models).
Brooke emphasizes that successful transformation begins not with technology selection but with deep problem framing—understanding what outcomes organizations truly care about enough to sustain difficult change. The discussion explores how leadership buy-in emerges through facilitative processes that give teams ownership over transformation direction, rather than having change imposed by external consultants. Throughout, Brooke reveals how AI implementation often requires addressing foundational organizational issues: establishing consistent processes before automating them, breaking down silos through cross-functional problem framing, and recognizing that the biggest barrier to AI adoption isn't technical capability but organizational trust and cultural readiness.
- Problem Framing First
"When someone approaches me, regardless of the door that they're coming from, the first thing is to really have that problem framing conversation. What is it that we actually want to solve for? What is the outcome that we are worried about that would not be acceptable?"
- Efficiency vs. Disruption
"AI for efficiency being fundamentally, we're not overhauling the kind of value proposition that we're delivering to clients. We're looking primarily at reducing the cost of delivery. But AI for disruption being something quite different, where we're actually really overhauling what the value proposition is that we're delivering to clients."
- Process Prerequisites
"The efficiency gains come often when we say, here is a well articulated process and we choose certain pieces of that process that we are then going to augment through AI to make it much more efficient. But if there isn't really anything that we can describe as a process to start with, then we're just throwing stuff at the wall."
- Portfolio Approach
"I think about this as an investment portfolio problem. A better way to make investment decisions is to say, I want to allocate my portfolio a certain percentage to low risk, a certain percentage to medium risk, and a certain percentage to high risk."
- Leadership Modeling
"If the leadership team shows that they are taking a different mindset and taking a different approach towards the way that they are acting with each other, that sets the pace for people elsewhere in the organization to say, okay, when they're talking about this behavior, that's what it looks like."
- Trust Over Information
"People are not showing up to talk to me because of the information that I'm giving them. What people are showing up for is the confidence to push through even when things get hard, that trust in each other to keep moving it forward."
- Human Connection
"The biggest use case for ChatGPT and other LLMs is therapy and companionship. We've created a society where real people are so disconnected from each other that the connection that they're seeking, they are finding through these digital avatars, and that's eroding our capability to connect with other real people."
AI for Efficiency vs. Disruption
Julius Neal: Welcome to 10X AI. This is the show where we explore how AI doesn't replace people, it amplifies them. I'm your host, Julius Neal, and in every episode we talk with innovators, builders, and thinkers who are using AI to unlock human potential, not eliminate it. Today I'm joined by Brooke Struck, the founder of Converge, who facilitates transformation for different companies. I'd like him to introduce himself and tell us how he went from facilitating traditional programs to AI integration in those companies. Over to you, Brooke.
Brooke: Hi, Julius. Thanks for having me here today. The practice that I'm running at Converge is really focused on helping leadership teams to come together, align and commit around a transformation, and then really guiding them through that transformation and helping them to stay focused and stay disciplined, to stick with it. And the reason that I take this facilitative approach is that I see so much of what's needed in that kind of commitment and that kind of stickiness of transformation is about that leadership buy in. It's about that feeling within the leadership team that this direction of transformation is something that belongs to us. I know that the death knell of any project is when someone says, oh, yeah, well, that's just Brooke's thing. If someone internally on the client side starts to associate a transformation initiative or a change or a strategy with an external consultant, that's often a very clear sign that in the very near future, they are going to stop putting in the work that's needed to transform that thing. That's really why I take this facilitation approach to pull it out of leadership teams.
Julius Neal: So when companies approach you, do they tell you, okay, we wanted AI transformation, or they just, you start with transformation, traditional transformation of the organization, efficiency, operational excellence, and all this? Or is it already straightforward, direct? Okay, Brooke, we wanted AI transformation.
Brooke: That's an interesting question. I mean, conversations come in through a lot of different doors, and AI these days is a very popular door. And so when those conversations started, people say, hey, Brooke, we really want to work with you on this AI transformation. The first thing I'm going to do is dig a little bit deeper beneath the surface and ask, what is it about AI transformation that has you so excited or on the other side maybe has you so scared? What is the outcome that you actually care enough about to be willing to put in the work that's needed for transformation? Because there are always difficult moments during a transformation process. And sticking with it during those difficult moments is the key thing. So if we don't tap into that root of why the transformation is worth doing in the first place, it's much, much harder to sustain motivation.
So when we talk about AI, and the AI doorway is the way that people come in, that first question for me is: what is it that you want AI to achieve for you? Or what is it that you're worried AI is going to undermine or dig out from underneath you so that things might collapse? We use that to define a kind of end state that we want to achieve. And then we can work backwards and say, okay, now, in terms of the way that you're going to adopt AI and the way your business might change, how does that feed into getting us where we want to go?
Julius Neal: Okay, so in terms of that, how do you prepare the program for the facilitation? Once you know all the problems, their targets and objectives, how do you prepare that? Do you start with AI tools, something very technical, or do you start on the human side?
Brooke: It actually starts on the value proposition side as well as the human side. So the value proposition side is really about getting clarity within the leadership team about what they see as the future of their business in terms of the kinds of clients that they're serving and the kinds of pain points that they're solving for those customers and the value proposition that they're using to win with those customers. So making sure that we're really clear on that and AI already comes up in those conversations. How do we see the pain points of customers evolving as a result of AI? How do we see our industry evolving and the other offerings out in the industry evolving? That's already part of the conversation.
So that's the value proposition piece, but the human piece is also there. When we think about how we envision delivering that value proposition in the future, which of those pieces can be automated or augmented with AI, and which are the pieces where we see the human component as non-displaceable, irreducible, an essentially human element of how the value is going to be delivered to clients?
Julius Neal: And then how do you face or answer the anxiety of the employees who are going to be affected by this AI transformation? Because when you do the facilitation, of course they will ask whether these tools or this transformation will affect their jobs. Why would I want to buy into this AI transformation? How do you deal with that?
Brooke: So part of this is about making sure that the employees themselves have a voice in the conversation. As I mentioned earlier, buy in is such a key factor in making sure that transformation moves forward. And it's not just the buy in around things at the C suite table. It's also making sure that middle managers and frontline employees see themselves in the vision that's defined for the future. So making sure that those middle managers and those frontline employees have an opportunity to contribute their insights and contribute their creativity to defining that future vision. And that gives them an opportunity to feel heard and to see that their inputs are actually influencing the outcome, which makes it much, much easier for them to double down and buckle down and do the work that's needed for that transformation.
Julius Neal: Yeah. So on the financial side, do you also discuss how much we're saving, how cost-efficient or cost-effective this AI transformation will be, or is that not on the table for discussion?
Brooke: For sure, the financial piece has to come into that. When we're talking about the value proposition and talking about the strategy of the organization, we need to be keeping an eye on the balance sheet and asking what is the price tolerance that we expect in the market and what is the pricing or what is the cost of delivery that we will need to achieve in order to reach the profitability targets that we set for ourselves. For sure, that's got to be part of the conversation.
One of the things that for me is very important in there, especially in the context of AI, is looking at where we're thinking about AI for efficiency and where we're thinking about AI for disruption. AI for efficiency being fundamentally, we're not overhauling the kind of value proposition that we're delivering to clients. We're looking primarily in those instances at reducing the cost of delivery, some of which we may pass on to clients, some of which we may also keep for ourselves to change our profitability profile. But AI for disruption being something quite different, where we're actually really overhauling what the value proposition is that we're delivering to clients, or in some cases, really overhauling the way that we think about how the work is delivered. So not just saying there's a traditional operational model that we've been following and now we just want to make it run faster, but saying there is a completely different operating model that we could use to deliver this service or to deliver an adjacent service that we're going to evolve into, and that's going to start from this premise of building AI tools natively.
Julius Neal: Yeah, building on that, you mentioned AI for efficiency and AI for disruption. How do you balance the pursuit of these two?
Brooke: Yeah, for me, it's really important to acknowledge that these are two very different pathways. When we talk about AI for efficiency, we're usually talking about things that are, I would say, fundamentally more predictable, more foreseeable. And therefore we will have higher confidence in the guesses that we make, the projections we make of what kinds of efficiency gains we'll be able to achieve and what kind of impact that will have on our profitability, these kinds of things. So I think there's just much more predictability in the AI for efficiency category. And when you get into AI for disruption, you're looking at much less predictable outcomes.
And so I think about this as an investment portfolio problem. If we look at investment behaviors, if we present two investment options to an investor, one that's high risk and one that's low risk, usually that investor will choose disproportionately from the low risk bucket. But the end result of that is if all of your head to head investment choices always go towards low risk, you will have a very low risk portfolio overall. So a better way to make investment decisions is to say, I want to allocate my portfolio a certain percentage to low risk, a certain percentage to medium risk, and a certain percentage to high risk. And then we look at the low risk options and we compare them low risk against low risk. Similarly within the medium risk bucket, medium against medium, and then high against high. So that's a way that I've found it really helpful to work with clients in terms of thinking about their investments in AI for efficiency versus AI for disruption. Let's look at the efficiency gain options assessed against each other and then look at the disruptive options assessed against each other and make our bets in a way that maintains that allocation of the portfolio between low and high risk.
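To make the bucketed comparison Brooke describes concrete, here is a minimal illustrative sketch in Python; the initiative names, scores, costs, and allocation percentages are invented for the example and are not from the conversation.

```python
# Illustrative sketch only: hypothetical initiatives and numbers, not client data.
# The idea: fix a budget allocation per risk bucket first, then compare options
# only against others in the same bucket (efficiency vs. efficiency, disruption vs. disruption).

BUDGET = 100_000
ALLOCATION = {"low_risk": 0.60, "medium_risk": 0.25, "high_risk": 0.15}

# Candidate AI initiatives, tagged by risk bucket with a rough expected-value score.
initiatives = [
    {"name": "Survey messaging optimization", "bucket": "low_risk", "score": 7, "cost": 20_000},
    {"name": "Faster quantitative analysis", "bucket": "low_risk", "score": 9, "cost": 35_000},
    {"name": "Internal documentation assistant", "bucket": "medium_risk", "score": 6, "cost": 15_000},
    {"name": "AI-native qualitative research offer", "bucket": "high_risk", "score": 8, "cost": 15_000},
]

for bucket, share in ALLOCATION.items():
    budget = BUDGET * share
    # Compare only within the bucket, best score first, and fund until the bucket budget runs out.
    for option in sorted((i for i in initiatives if i["bucket"] == bucket), key=lambda i: -i["score"]):
        if option["cost"] <= budget:
            budget -= option["cost"]
            print(f"{bucket}: fund '{option['name']}' (remaining {budget:.0f})")
```

The point of the design is the one Brooke makes: a high-risk disruption bet never has to win a head-to-head comparison against a safe efficiency project, because the allocation between buckets is decided up front.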
Julius Neal: Okay. If possible, can you give us a case study from one of your clients where you presented this is AI for efficiency and this is AI for disruption? A real-case scenario.
Brooke: I'll need to give it some thought. Obviously, client confidentiality keeps me from getting into too much detail, so just on the surface: here's a case study that I can present from a professional services business. They've been thinking about data collection and really focusing on quantitative data recently, and the efficiency gains pieces are how can we use AI to improve the messaging in our survey distribution to get higher response rates? How can we use AI to accelerate the kinds of analysis that we can provide and the kinds of insights that we can extract from the quantitative data analysis?
The more disruptive piece is to say, okay, well, one of the reasons we were leaning into quantitative analysis is because qualitative analysis has been traditionally so time consuming and therefore expensive. The cost of qualitative analysis is getting now very, very cheap. Obviously, there's a whole bunch of methodological and technical pieces that need to be sorted out in order for that qualitative analysis to be robust and trustworthy. That's part of what goes into the investments of AI for disruption. But posing that question of saying, okay, if we were to now really ramp up the kind of qualitative data collection that we do, because we now have a much, much more efficient engine for doing the analysis, what kinds of data would we collect now that we just didn't think about collecting before because it was simply too expensive to get the value out of it that we were looking for?
Julius Neal: Okay. So when you come into a company, how excited are they about transforming with AI? Is it out of fear, or is it excitement, okay, we want to make this company AI-ready for the future? Or is it, hey, Brooke, we are very fearful about the disruption this AI is bringing, can you help us transform or at least elevate our current status from here to there?
Brooke: Yeah, that's a very interesting question. And I would say I'm seeing different patterns across different industries, and across different ages of companies as well. Companies that are quite young, I would say, are much more in the excitement frame around AI.
Julius Neal: That's interesting.
Brooke: Yeah. Companies that are still finding their feet, or ones that are already very active in iteration. If the companies are already changing a lot, then the prospect of now being able to integrate AI into those next cycles of change is a very exciting one for them. Older, more established companies that change on a slower cadence, I think, are approaching AI more through the threat framing.
Julius Neal: Oh, okay, okay.
Brooke: And I think a lot of that is connected to industry as well. If we think about the tech industry, for instance, the tech industry is more or less a group of young companies on the grand scheme of history. We're not thinking about tech companies that have 100 or 150 years of history, these very slow-moving arcs. Basically every tech company is a fast-moving company, more or less. It's not entirely or universally true, but more or less, all tech companies are fast moving. And so in that industry you've got young companies that are fast moving, so they tend to take the very positive, excited frame around AI. If you look at organizations in retail, for instance, these are companies that often are older, more established, in an industry that doesn't evolve as quickly. And therefore, if I didn't know anything else about the company, I would say their framing around AI is more likely to be the fear framing, because they see new entrants into the market as ones who are really going to disrupt the industry as a whole.
Addressing Employee Anxiety & Buy-In
Julius Neal: Okay, so have you ever had a client who came to you and said, Brooke, can you help us, because we are already affected by AI, already a victim of this disruption?
Brooke: I would say companies that I've worked with that are feeling the early effects of AI, it's often in the AI for efficiency piece. And so the way that it's manifested itself so far in the work that I've done with clients is that clients are coming to me and saying I can't keep up with price competition in the market because there are some others who have found some early efficiency gains and that's allowing them to hit price points that are dramatically lower than they used to be. And so that's creating price competition for me. Those are the early signals of where the AI impacts have been seen and felt in the clients that I've worked with.
Julius Neal: Oh, okay. So across leadership and the supporting teams in an organization, how do you drive strong execution and alignment around AI?
Brooke: Yeah, so as I mentioned earlier, when someone approaches me, regardless of the door that they're coming from, the first thing is to really have that problem framing conversation. What is it that we actually want to solve for? What is the outcome that we are worried about that would not be acceptable? Or in the more positive framing, what is the opportunity that we are not willing to leave unexplored and untapped into? So having that really honest and frank conversation early on to agree on what the problem is that we're looking to solve, that's super important.
One of the downfalls that I've seen in transformation initiatives is that once people start proposing solutions and strategies and these kinds of things, there's a discussion about those, and we need to make decisions about which solutions and strategies we are going to integrate into our transformation pathway. And that conversation gets bogged down because people have not agreed on what problem they're looking to solve. And so if you and I, for instance, are talking about a solution, you might say, Brooke, this solution is great, look at how much efficiency we can gain with these things. But if we've never had a conversation about whether what we're looking to achieve is primarily efficiency or the evolution to a new business model, if you just have in your mind a tacit belief that our transformation is about efficiency and I have in my mind a tacit belief that it's actually about radical transformation, you can say, Brooke, this solution is amazing, look at all the efficiency gains that we'll get.
And my reception of that is, that's completely irrelevant. It's not that the solution is not a good one for efficiency gains. It's that efficiency gains are not the thing that I care about. I actually care about this other thing. But if we never have that conversation openly, when we start talking about solutions, we will really struggle to have a productive conversation about which ones to prioritize. Because actually, there's a whole conversation in the background that hasn't happened about what a viable solution, what an interesting solution, even is, and what metrics we should be considering to assess the quality of solutions. If we never have that conversation, we're going to get so bogged down in trying to figure out what it is that we should actually do. And the end result of those conversations is often just a compromise. We say, okay, Julius, I see you're pushing for these five things. We'll just go ahead with two of your things and two of my things, slam them together, and then we'll go. And the reason we do that is I'm tired of this conversation already. We're misunderstanding each other. We're not making any progress. It's frustrating. Time is ticking. We need to get moving. And so we just end the conversation, compromise, and then we're off to the races in a transformation that actually no one is all that excited about.
Julius Neal: Oh, yeah, I agree. Some of the companies thinking about AI transformation were already inefficient before they even reached that stage of AI conversion. And then the top management, which is not connected to the bottom, just decides, okay, we want to convert to AI. Do you think it's beneficial for them to go straight to AI, or should they make some transformation first before thinking about AI?
Brooke: So a pattern that I've seen that resembles what you just described, where you have this disconnect between senior leadership and frontline employees. Something that I've seen often connected with that is that there are not very robust systems for managing the business that are currently in place. Let's go back again to the professional services example that I talked about before. In this professional services company, there were not a lot of standard operating procedures. There were not a lot of tools to help employees to do their work effectively or in a consistent way. And so when senior leadership says, we want to implement AI, one of the challenges there is the efficiency problems that they're having are the result of a lack of systematization.
And in practice, what this feels like from the employee's perspective is: every time this professional services firm signs a new project, more or less, we need to figure out from zero, from the ground up, what this project is and how we're going to execute it. More or less, some senior partner in the organization said yes to a project that we've never done before and now we're the ones left holding the bag to figure it out. Bringing AI into that kind of context is not going to be easy at first. And the reason for that is that the efficiency gains come often when we say, here is a well articulated process and we choose certain pieces of that process that we are then going to augment through AI to make it much more efficient. But if there isn't really anything that we can describe as a process to start with, then we're just throwing stuff at the wall.
And so what I ended up counseling in that instance is actually to use AI for documentation. Just start embedding AI into your project workflows so that, to really simplify it, if you're working with Claude or ChatGPT, one of the last things you do in your project is say, okay, LLM tool, I want you to create documentation of the process that we followed for this project and basically spit out the intellectual property. And we're going to use that to start to build a library of playbooks that we can use for our client projects. And so as you start to build that library of documentation, it becomes easier and easier to establish standard operating procedures. And once we have those standard operating procedures, then we can have a more critical look at where in that workflow the greatest opportunities are for AI to bring efficiencies. If this is a five-step process, maybe step two is very labor intensive, and if we can start to think in a really targeted way about the AI tools we can use at step two, then we can find some efficiency gains that will really have a big impact on project profitability.
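As a rough illustration of the documentation step described above, here is a minimal sketch using the Anthropic Python SDK (chosen only because Claude is the tool Brooke mentions); the model name, prompt wording, helper function, and file layout are assumptions, not his actual workflow.

```python
# Minimal sketch, not Brooke's actual workflow. Assumes the Anthropic Python SDK
# (`pip install anthropic`) and an ANTHROPIC_API_KEY in the environment.
from pathlib import Path
from anthropic import Anthropic

client = Anthropic()

def document_project(project_notes: str, playbook_dir: str = "playbooks") -> Path:
    """Ask the model to turn raw end-of-project notes into a reusable playbook."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model name is an assumption; use whatever is current
        max_tokens=2000,
        system="You document consulting projects as step-by-step standard operating procedures.",
        messages=[{
            "role": "user",
            "content": "Create a playbook for this project: list the steps we followed, "
                       "the inputs and outputs of each step, and who was involved.\n\n" + project_notes,
        }],
    )
    playbook = response.content[0].text
    out = Path(playbook_dir)
    out.mkdir(exist_ok=True)
    path = out / "project_playbook.md"
    path.write_text(playbook)  # each finished project adds one more playbook to the library
    return path
```

The design choice mirrors the advice in the conversation: documentation is generated as a low-effort final step of each project, and only once a library of playbooks exists do you look for the specific steps worth automating.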
Julius Neal: Yeah, that makes sense. And the issue there is that AI works in a very structured way, so if you don't have a process, you have to be able to make the steps clear for AI to understand what your process is, where the inefficiencies are, and where the models will be able to help you.
Brooke: It's interesting that you say that. So first of all, in one sense I agree. It's a tool that works best in structured environments. Also, if you put it into an unstructured environment, it will just spit out slop and never tell you that the thing that it's giving you is a bad result. That's something to be very worried about is if you put it into an environment to succeed, it will succeed. And if you put it into an environment to fail and you're not careful about it, it will still look like success. It's never going to say, I don't have enough information or context to give you a high quality answer to this question. It's always going to answer. But if you put it in a situation to fail, the answer that it's going to provide to you will be bullshit.
Julius Neal: Yeah, yeah, I understand that. So looking at the organizations you're helping, how do you give clarity of message when it comes to the biases of AI? There are hallucinations sometimes and all of this. How would you tell them, okay, we have to have some categories or guidelines on how we interpret the AI for ourselves: is there some bias, is it hallucinating, is it bullshit, all these things?
Brooke: Yeah, so the first thing I'll say is I am not an AI integrity expert. There are people whose day job is basically just that. So I can help clients to get started in those conversations. But beyond a certain frontier, that's where I would reach out to a partner in my network and say, we are now past the threshold of what it is that I can help a client to do. So that's when I would recommend bringing in a specialist partner to help with that piece of the transformation. But some of the early stuff for me is asking critical questions about what kinds of environments we're bringing AI into.
So for me, one of the key approaches to this is to ask: when we're asking AI to do a task in this context, are we feeding it a very clear kind of structure or framework for the task, and are we giving it the raw material that it needs for the task? If we think about the worst spaces to use AI, the ones where you're going to end up with the most slop, it's when the framework or structure that you're asking about is really unclear and the substantive raw material is absent. You're saying, go and get the raw material, and I'm going to give you a very vague ask. People in the agency world will recognize this: I'm going to give you a terribly under-defined brief and tell you to go out and just gather the information. The results of that are almost always going to be poor.
So the more structure you can bring to what it is that you're asking the AI to do, the better the quality of the results that you will get. And similarly, the more of the raw material you give it to work with, rather than asking it to go and identify and collect that raw material itself, once again, the better the results you're going to get. I talked earlier about this example of the professional services firm that was getting into more qualitative analysis. The kinds of places where we could see success there are not where we ask the AI to go out and scrape a whole bunch of websites to pull in unstructured data and then just say, what are the insights that you see in this collection? That would give you a terrible result. But if you go in and say, okay, here is the interview protocol that I used to interview these 50 different customers of a client base, here is the workflow of the overall project, here are the kinds of questions that I need answered to feed into conversation with the managing directors, and here is what I need in terms of language that we think will resonate on our website in terms of how we frame these decisions. If you give it really clear instructions and dump in a lot of the raw material yourself, rather than asking it to scrape it, you can get really, really good results. Essentially you're saying, take this raw material and fit it into this shape, which is very detailed, as opposed to saying, here's a shape that is not detailed and I don't have any raw material, you need to go and find it.
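To illustrate the contrast Brooke draws, clear structure plus supplied raw material versus a vague ask with nothing to work from, here is a hypothetical prompt-building sketch; the protocol fields, variable names, and helper function are invented for the example.

```python
# Hypothetical example of the "structure + raw material" pattern described above.
# Both the interview protocol and the transcripts are supplied by us, not fetched by the model.

def build_analysis_prompt(interview_protocol: str,
                          transcripts: list[str],
                          deliverable_questions: list[str]) -> str:
    """Assemble a tightly scoped prompt: explicit framework, explicit raw material, explicit output shape."""
    questions = "\n".join(f"- {q}" for q in deliverable_questions)
    material = "\n\n---\n\n".join(transcripts)
    return (
        "You are analyzing customer interviews for a professional services firm.\n\n"
        f"Interview protocol used:\n{interview_protocol}\n\n"
        f"Answer only these questions, citing which interview supports each point:\n{questions}\n\n"
        f"Raw material (full transcripts):\n{material}"
    )

# The anti-pattern, for contrast: vague shape, no raw material supplied.
vague_prompt = "Research what our customers think about us and give me insights."
```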
Julius Neal: Yeah. You mentioned data integrity. For example, have you been with a client where you discussed, okay, I need this and this data, and there were apprehensions about giving too much data, about data privacy? Of course, we're talking about AI needing very good data: if you feed it good data, you get good results. But incomplete data, even if it's good, is also going to give you incomplete decisions, incomplete guidelines.
Brooke: Yeah, for sure. So there are two issues there that I want to highlight. The first is the privacy issue. The data privacy issue of what are the tools that we can use where we have high confidence that privacy is going to be preserved. That's one question. But then the other is the quality of the underlying data in the first place. And in fact, this is a conversation that I had with JD St. Martin recently. He's the president of Lightspeed, the point of sale company based here in Montreal. And one of the things he was talking about, and this connects back to an earlier point that you and I were discussing, Julius, is that processes are often a key ingredient for high data quality and high data integrity.
So if you don't have consistent processes internally for data creation, you will probably have really low fidelity internal data to work with. And so often the conversations around organizational data quality and organizational data integrity have been focused on technology. What's the technology that we need in order to make sure that we are centralizing our data effectively. But one of the underappreciated aspects of high data quality is that we also need to treat the human side of that equation. Who are the people that are actually using these tools that we are then repurposing? The way that they're interacting with the tools is the passive engine that's creating the data that's extrapolated from their interactions. If those interactions with the tools are really inconsistent, then the data that we infer and that we create passively out of those interactions is also going to be inconsistent.
And so coming back to this professional services firm example: if you have no processes, then for found data, data that's created passively, where you don't need to actively go and survey people and these kinds of things, the quality of that found data will be proportional to the consistency of the application of processes.
Julius Neal: Nice. So going into a more structured organization, one that's more mature, with more processes already established: a lot of them are working in silos. Each department has its own systems. And then you come in as a facilitator: okay, we will have some AI. How will you deal with them? Because they have their own problems, and since they are in silos, they are not coordinating; the processes of departments A, B, and C are not streamlined. And then you go in there with an AI transformation program for all of them, to streamline the process. Have you had some experience with that?
Breaking Down Silos Through Problem Framing
Brooke: Yeah, and I'll come back to a point that I know I've hit on several times already in our conversation. It starts with the problem framing. When we frame the problem, we need to frame it in a way that cuts across functions. The problem is not that sales is doing this and that IT is doing that and that HR is doing this other thing. We need to frame problems in a way that cuts across the organization, that the entire organization feels, and that then pulls through to the strategy and the value proposition that we articulate. We need to articulate the value proposition in a way that it takes the whole company to deliver.
And when we do that, when we have this conversation about what the whole company needs to be coordinated in order to deliver effectively, we then arrive at something where we say, okay, here are the actions that are needed in transformation, and those actions are not going to be siloed. We need to take these actions across the company, across all of these different functions. And so that influences the way that the leaders in the organization need to collaborate with each other. That's a big thing, is that for me, often when I've seen siloization within functions, that's a reflection of the siloization or the lack of collaboration at the leadership level.
If your head of marketing and your head of sales don't work together, it's a very good bet that your marketing team and your sales team also don't work well together. Once you get your chief marketing officer and your chief revenue officer, or whatever those titles are within your organization, collaborating really deeply on outcomes that they can only deliver jointly, where neither of them could do it on their own and therefore they are seeing the importance and the value of collaborating instead of working in silos, that really has a profound effect on the rest of the organization.
So part of that is interpersonal, part of that is the soft cultural side. But then it also needs to be complemented by the more process oriented pieces. If the CRO and the CMO are going to collaborate with each other, what are the kinds of new forums that we need to create for that? What are the new check in cadences that we need to create for this initiative that they are jointly spearheading for them to check in with each other, to update each other on progress, to discuss observations, new opportunities, emerging risks, these kinds of things to be able to manage that initiative effectively. And then from there the actions that are flowing out of that decision making forum, how is that being delegated to other forums where you have operational members of those two teams collaborating with each other? So that's where we start to see those silos coming down. It starts with framing a problem that it takes everyone to solve. From there it flows into positing actions that it takes the whole organization to execute and from there into working formats, working structures, working containers that bring together the participation of members of those various teams.
Julius Neal: Yeah, so one problem that always comes up in organizations is communication. They will always say, okay, we're not communicating, we don't know what's happening up, down, sideways. In your facilitation, does this also come up as the number one issue? From my experience it always does, and I don't really understand why communication has always been a problem for so many organizations.
Brooke: Yeah, it's interesting. When we say, oh, communication is weak, one of the things that I think we often ignore or don't realize is the very successful and effective communication that is, in a certain sense, hiding in plain sight within a silo. When we talk about siloization, one of the things I come across frequently is that within the silos, the communication is really good. It's across silos that the communication is very bad. And that, I think, is an important thing to recognize. It's not that communication is weak. Thinking about it in terms of network analysis, there are certain lobes of the network, certain subgroups, within which the communication is very strong: there are tools, there are communication forums, weekly update meetings and those kinds of things, and there is a culture and a set of habits that entrenches that strong communication. And so rather than saying we have weak communication and need to go from no communication tools to a set of communication tools, I try to reframe that and say, actually, you have a current pattern of strong communication and you want to move from that pattern to a different one. Let's look at what it is that makes your current communication pattern the one that it is. Why is it that these people always know what's going on with each other but never with folks across the organization? Well, they coordinate with each other in these different ways and at these different moments. Okay, what are the kinds of ways and moments and tools that would be required to have that communication jump from this group over here to that group over there that also communicates internally very, very well?
Julius Neal: Okay, so how do you think AI will play a role in that communication? Streamlining, standardizing the communication across departments, across the levels of management from the top to bottom?
Brooke: Yeah, so there are a couple of things. One that comes to mind in terms of top down is customization. If these are the communications, or the insights, realizations, future directions, and so on, that are coming out of high-level decision making, AI, I think, can be very effective at translating that into something that's closer to the local reality of the different recipients of the messages. So here are these high-level organizational decisions that have been taken: what does that mean for me as a marketing analyst sitting at my desk today? I think AI can offer a lot of value in terms of that customization, in terms of helping people to interact with that message in a way that feels more concrete, more relevant, more actionable.
And the second is from the bottom up. One of the things that always comes up in strategy is identifying risks and opportunities, and identifying leading indicators: what are the early signals that we want to be paying attention to, to know as soon as possible whether a certain risk is materializing, or to understand as early as possible whether this new initiative is actually going to provide the kind of breakthrough that we hope it will? There are all of these subtle signals that different people inside an organization will be hearing. Someone from customer success might hear this, someone from marketing might hear that, and someone from sales might hear another thing. You have all of these partial signals received at different places across the organization.
And one of the big challenges traditionally has been how to create a mechanism to centralize those partial signals and make sense of them, especially when the pyramid in an organization is big. You've got a small group of people at the top who need to make sense of stuff, and you've got a huge group of people at the front line who are picking up all kinds of different partial signals. So how do we centralize those signals and make sense of so many sources? How do we get the signal out of the noise? That is another problem that AI, I think, is really effective at helping to address. One of its great use cases is aggregating a bunch of partial signals into something meaningful.
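As a loose illustration of that bottom-up aggregation, here is a hypothetical sketch that pools partial signals logged by different teams and asks a model to weigh them against leadership's leading indicators; the signal entries, indicator names, and model call details are assumptions, not anything described in the episode.

```python
# Loose illustration only: the signal format, indicator names, and prompt are invented.
# Frontline teams log partial signals; a model is asked to aggregate them against
# the leading indicators leadership said it cares about.
from anthropic import Anthropic  # assumes the Anthropic SDK and an API key, as in the earlier sketch

client = Anthropic()

signals = [
    {"source": "customer success", "note": "Two accounts asked if we can cut delivery time in half."},
    {"source": "sales",            "note": "Lost a deal to a competitor quoting 40% below our price."},
    {"source": "marketing",        "note": "Webinar questions keep circling back to AI-assisted analysis."},
]
leading_indicators = ["price pressure from AI-enabled competitors", "demand for faster turnaround"]

prompt = (
    "Here are partial signals collected by different teams:\n"
    + "\n".join(f"- [{s['source']}] {s['note']}" for s in signals)
    + "\n\nLeading indicators we are watching:\n"
    + "\n".join(f"- {i}" for i in leading_indicators)
    + "\n\nWhich indicators do these signals support, and what is still just noise?"
)

summary = client.messages.create(
    model="claude-3-5-sonnet-latest",  # model name is an assumption
    max_tokens=800,
    messages=[{"role": "user", "content": prompt}],
)
print(summary.content[0].text)  # a synthesized view for the small group at the top of the pyramid
```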
Julius Neal: That's a very good point. And when you're in a company, how do you usually facilitate this end to end? For example, you start with the transformation, the problem, root cause analysis, problem solving, and go all the way until they have this transformation. How do you do that?
Brooke: Yeah, so it starts out with that problem framing and design of the strategy, then shifting into implementation. My role is usually to support those leadership groups in what is usually a very refreshed, updated look at how they do their check in cadence. So for instance, if your C suite is meeting every two weeks, or they have a meeting, a bigger, meatier meeting every month or every quarter, whatever that cadence is, that's usually the main place where I'm going to come and intervene, where I'm going to help them to monitor and manage the transformation and also to adopt those new ways of collaborating with each other.
So this is where the work that I'm doing in strategy and culture really comes together. We're going to frame a strategy in a way that it takes the whole organization to deliver. And then on the cultural side, I'm going to help the organization to deliver in a more coherent, consistent way across its different parts. So the implementation phase is often the main spot where people will see me showing up: in those really high-level leadership meetings where we're doing the steering and the guidance of the implementation of that transformation. Sometimes that will also trickle into how the collaboration forums further down the pyramid need to evolve. What's the training that we need to give to our middle managers to start running their meetings in a way that reflects and mirrors the type of collaboration that we're now creating at the leadership level? How can we get that culture change from the very top to percolate all the way down into the interactions between everyone inside the organization?
Cultural Keys to Successful Transformation
Julius Neal: Yeah, so you mentioned the culture side and then the technical side. What are some keys to success on the cultural front?
Brooke: Yeah. One of the things that I see often when we talk about culture is that there's a tendency, in the context of change, to point at all of the ways in which we're failing and to say this is wrong and that's wrong, and this is wrong and that's wrong. And in doing that, we forget that actually there are a lot of things that have to be going right in order for us to even get to this point. Especially working with more established companies: if your company has been running for seven decades, don't tell me everything is going wrong, because the fact that you survived for seven decades shows how much needed to go right.
And so not forgetting that history of success. There are a couple of reasons for that. One is that it's very demotivating to focus solely on failure. And even if we frame it as saying we didn't do this well, we didn't do that well, we didn't do the other thing well, when employees hear that, even if what you're saying is this decision was bad and that action was bad and we did this wrong, a lot of what employees are hearing is you, you were wrong. There's this very easy transition from the action to the person. This action was the wrong one to take. You made a mistake, you weren't working hard enough, you weren't, whatever that is. So there's a lot of blame that comes with that. So that can be very demotivating.
And the other thing is we can inadvertently change things that are actually a key to our success. There are some parts of the recipe you don't want to mess with, because those things already work. And if you only focus on the things that aren't working, without taking an assessment of what's actually working really well and what you want to keep, then unbeknownst to you, or despite your intent, you might accidentally change something that was really essential to your success. So those are a couple of things to keep in mind on the cultural front: let's have a balanced perspective of what's working well and what isn't.
One of the things I'm seeing that personally I've just found really interesting is that often, when we say these things aren't working well, I'm starting to see a pattern where the things that aren't working well are in fact things that we do really, really well inside the organization, but not consistently. We're not doing this thing well in two thirds of the circumstances, but over here on the left, we're actually really good at doing it. And this provides a really natural framing for cultural change: look, this thing that you're succeeding at over here, we want you to do more of that, we want you to do it across the board, because the success we're seeing here, we think, can actually work everywhere. So that's been a really interesting insight over the last little while.
Julius Neal: That's nice. That's a very interesting observation from you. And I think it's also a change of perspective, like you said. Two thirds of the time, everybody will say, ah, it's not working well, instead of saying, okay, we did this very well one third of the time, so why not standardize it, do it better, do it 100% of the time, and focus on that? That is a very powerful insight from you.
Brooke: There's another piece there that I want to touch on, and I mentioned it earlier: part of it is structural. We need to set up these new meetings and we need to adopt these tools and there's this training, there's all of that structural stuff. But in terms of culture and collaboration, there's also an interpersonal piece to it. We need to learn to ask each other different kinds of questions. We need to start treating each other differently. We need to change our mindset about how we think about each other. That's a part that I think often gets neglected, because people feel that it's a little too squishy, too nebulous. What does it mean to actually change a mindset?
And one of the things that I've seen is most powerful there is that it's important to model the behavior. And that modeling really has to come from the leadership team itself. If the leadership team shows that they are taking a different mindset and a different approach towards the way that they are acting with each other, that sets the pace for people elsewhere in the organization to say, okay, when they're talking about this behavior, that's what it looks like. I don't just have it written down somewhere; I feel in my gut that's the kind of interaction, that's the kind of behavior that they've been talking about. I now have a model that I can follow. And because that model is from somebody very senior inside the organization, I'm also very motivated to follow it.
Julius Neal: Wow, that's nice. That's a very good insight from you. It has been a very interesting and insightful conversation; thank you for that. Now we're going to the rapid 10x round: I'm going to ask you 10 questions about you and how you've integrated AI into your life. Question number one: what's one AI tool you actually use every day?
Brooke: Yep, Claude, that's my tool of choice. Yep.
Julius Neal: Why?
Brooke: A lot of the work that I'm doing is linguistic analysis, or, from an AI lens, linguistic analysis is the closest bucket. And I just love the way that it treats language. I love the way that it writes, I love the way that it conducts analysis. For the use cases that I have, it's a much better alternative than ChatGPT or Gemini or Copilot or any of the others.
Julius Neal: Okay, what's one task AI has already 10x'd in your life?
Brooke: I talked about this professional services firm and documentation, and I learned a lot for my own practice from doing this project with them. And so now I'm creating my own library of documentation in such a low-effort way.
Julius Neal: Nice, nice. Okay, third one: what's one human skill AI can never replace?
Brooke: Connection between people.
Julius Neal: I agree. Yeah, it will never replace that. If you could automate one annoying thing forever, what would that be?
Brooke: There are a lot of annoying things. I'm going to say there are a few ingredients, and I'm working on this already, and they're all around project management: meeting summaries, next steps, and then all of the communication and tracking that goes with that. In my ideal world, I come into a conversation, I have a meeting, and then magically out of the box pops a summary that's actually good and informed by the context of the whole project, with next steps that are, again, good and informed by the context of the project, and then that is automatically packaged into an email to send to clients and goes automatically into my task-tracking and task-management tools. If that could just be a click of a button and completely outsourced, I'd be a very happy man.
Julius Neal: What's the biggest misconception about AI now?
Brooke: I listened to a podcast episode yesterday that really opened my eyes to something that I don't think I've been fully appreciating. I see a lot of people in their work approaching AI from a productivity perspective or an information management perspective. And one of the things it opened my eyes to is that apparently the biggest use case for ChatGPT and other LLMs is therapy and companionship. And that, to me, was so sad, such a gut punch, that we've created a society where real people are so disconnected from each other that the connection that they're seeking, they are reaching out and finding through these digital avatars, and that that's eroding our capability to connect with other real people. So that really reset a big misconception for me. Do you mind if I give you a bit of context about this?
Julius Neal: Yeah, no problem.
Brooke: This came out of a conversation I had with a partner of mine. He and I were talking about creating the Brooke AI chatbot. What if I just took all of my intellectual property and all of my writing and all that kind of stuff and just jammed it into a custom GPT and made that available to people? And what he and I were discussing is actually the biggest challenge that my service helps clients to overcome is a challenge of trust, a challenge of confidence. It's not that my clients couldn't take all the stuff that they say to me and jam it into ChatGPT and get an answer. It's that if they did that, they wouldn't have confidence in that answer, they wouldn't trust the answer that they arrive at. And in particular, they wouldn't do it as a group. It would be very siloed. It would be very one to one.
And so what people are looking for there is trust. The way that I was thinking about that problem, creating this Brooke alter ego, the idea there was, I'm going to help them with their productivity, I'm going to help them with their work. But actually, what they might be showing up for with me is trust. And so that was a big misconception that I had. People are not showing up to talk to me because of the information that I'm giving them. And in fact, this is something I should have realized, because it's so obvious to me in my own work: what people are showing up for is the confidence to push through even when things get hard, so that even in those moments where they ask themselves, did we make the right decision, they will maintain that confidence in the decision and in each other, and that trust in each other, to keep moving it forward. Yeah, that was a big misconception for me that got righted.
Julius Neal: Okay, so what advice would you give to someone who wants to become 10x starting today?
Brooke: Think really deeply about what you want to be doing more of and where your unfair advantages are. What are the things that you do super, super well that people around you don't find easy to do? And then use AI for all the connective tissue around that; use it to clear as much space as possible to let you do your thing.
Julius Neal: Oh, okay. And it's hard to do that. Yeah, for sure. What would you tell your younger self about the future?
Brooke: Don't worry so much about appearances, because nobody's watching as closely as you think they are. You can make a lot more mistakes than you think you can.
Julius Neal: I know. So what's one book, podcast, or resource that shaped your thinking about AI or innovation?
Brooke: Around innovation? Probably one of my favorite books is one called Teaming by Amy Edmondson.
Julius Neal: Can you tell me more about it?
Brooke: Yeah. It's looking at the dynamics within teams that are the most effective and the most innovative. Amy Edmondson is probably most famous for her work on psychological safety and its importance for innovation, incentives, safety culture, and these kinds of things. And Teaming, for me, was just an amazing, super concentrated dose of exploring the dynamics within teams that psychological safety makes possible, especially around innovation.
Julius Neal: Wow, nice. So if you had to bet on one AI trend that will be massive in three years, what is it?
Brooke: I talked earlier about the misconception around AI, thinking that it's for information or for productivity, when really a lot of it is about trust. And the intersection of those things, I think, is really customized coaching. I think that professional coaching is probably under threat by AI now, because there is a lot of self-service that you can do through AI chatbots. Now, the quality of that self-service is highly variable, and you should be critical and skeptical of it. But again, the AI is never going to tell you that; on the surface it will feel like you're getting really good coaching. So I think that's an AI trend that we'll really see exploding in the next three years.
Julius Neal: That will be something to see. Okay, what does 10x mean to you personally?
Brooke: I'm going to come back to one of the questions you asked me earlier, and that's about opening the space to be more myself. If I can get more opportunity to do the things that I'm best at, that I enjoy most, and that add the most value in the interactions and relationships that I have, and all the other stuff can be cleared away, that's what 10x is for me.
Julius Neal: Nice, nice. Very refreshing. All right, thank you very much for those insights. I really enjoyed all the insights and experiences that you shared, and I think most of the listeners or viewers of this podcast will be able to take something from you and bring it into their lives or their organizations. If they want to reach out to you, can you tell them how?
Brooke: Yeah, for sure. You can find Converge at convergehere.com and if you want to reach me, it's brooke@convergehere.com.
Julius Neal: Yes, thank you very much, Brooke, for your time. It was a really enjoyable time for me, and I'm rooting for your success.
Brooke: Thank you, Julius.
