Ground-Up Governance
Sound-Up Governance
BONUS: Board evaluations don't work...but they could!

We've been missing the whole point of board evaluations all along. They could be easier, more fun, higher impact, and actually worth the money.

TRANSCRIPT

Hi, my name is Matt Fullbrook. Today I want to do a detailed examination - takedown, maybe - of the board evaluation, but we'll get to that in a second. It's been more than a year and more than 50 episodes of One Minute Governance since I put out one of these longer-form episodes. I don't do many of these because - I think I've said this before - these are really hard to make. If you haven't ever written or recorded a podcast or other scripted audio thing, it turns out that the trickiest part is making it sound easy.

I'm not complaining. Or at least I'm not only complaining. These episodes are good opportunities for me to get a little silly when some dorky governance topic gets me thinking to the point where I'm feeling irritable. Today's snark builds directly off my last long episode, which was titled Why Do Corporate Governance Consultants Mostly Suck? I've been inspired by what I perceive, probably unfairly, as a notable decline in the quality of the discourse around board effectiveness in the past year.

I'm probably largely reacting to the ongoing enshittification of LinkedIn, which is the only social medium I use and is my main point of access to board effectiveness discourse other than in-person. If you're one of the folks who's sharing half-hearted hot takes accompanied by context-free selfies, then I just want to acknowledge it’s the algorithm I hate, not you. Get those likes.

Anyway, where my last episode took aim at governance consulting in general, today I'm zeroing in on what might be the main revenue generator in the governance consulting arsenal: board evaluations. I suspect I'm preaching to the choir here. Everyone seems to have a gripe with the way OTHER people approach board evals, so you're probably already nodding along a bit in anticipation that I'll be validating your opinions, which I might.

Let's set the stage. A bit of an update on my journey. As I write this sentence, it is July 14th, 2025. Editor’s aside: this revision of the script is from August 8th, 2025, but that previous sentence is still untouched in the past three weeks. Did I mention that making these episodes is hard? Anyway, that means it's been about 20 years since I first worked on a board evaluation. I've been self-employed for more than four years since I left my role at the University of Toronto. Although I'm currently sort of looking for a job, which is neither here nor there…unless you have some cool ideas about what I should be doing with my life or an opportunity you think might be interesting. Those four years of self-employment have been amazing for a lot of reasons, but the greatest gift has been having the space and freedom to loosen my grip on the conventional thinking about my line of work.

To put it another way, I've started gleefully embracing doubt. Some of that doubt originated in my realization that board evaluations, such as they are, are pretty awful. And let me tell you: I worked *really* hard for years on making my surveys and interviews as smart and customized and relevant as possible. Each one better than the last. And still the ROI for me and my clients was embarrassingly small. But I think I’m onto something that really works. Let me summarize, just to give you a sense of where we’re going, and then we’ll dive in – including my usual musical allegory. Here’s the summary:

- A significant majority of the time and effort in a typical board evaluation goes to gathering data on how boards have done things in the past. Nobody needs that. Boards already know where they want to improve without you needing to do any research. Besides, you and I can already guess with a high degree of accuracy what problems a board wants to work on without knowing anything about them. Measuring the past should be less than 5% of the project.

- At least half of your time and effort in a board evaluation should be in designing interventions that will likely help this client to get better.

- You can’t design those interventions without learning about the people in the room and what types of recommendations they are likely to embrace and carry out.

- Unless you studied survey design, you’re probably *really* bad at survey design, and also bad at analyzing the results. Get some help, or ditch the survey altogether.

- Consider working hands-on with your client to model the behaviours you hope they will adopt. Don’t expect them to just read your report and magically understand how to do what you’re asking them to do.

If any of that sounds interesting to you and you want to know more about the why and how of the whole thing, then this episode is for you. If not, this is a great point to bail.

[Music plays]

In a weird way, the best place to start is with a new paper in preprint, meaning not peer reviewed, from the Max Planck Institute for Human Development in Germany. It's called Empirical Evidence of Large Language Models’ Influence on Human Spoken Communication. In short, the researchers believe they have evidence that people are starting to talk like ChatGPT. In the paper's abstract, they write that “this marks the beginning of a closed cultural feedback loop in which cultural traits circulate bidirectionally between humans and machines. Our results motivate further research into the evolution of human-machine culture and raise concerns over the erosion of linguistic and cultural diversity and the risks of scalable manipulation.” Yikes.

I'm sure you know this already, but ChatGPT and other LLMs don't really know anything, and they certainly don't believe in anything. They're programmed with super complex prediction algorithms that draw on massive amounts of existing material, all for the purpose of guessing what word you probably want to consume next based on the prompt that you've provided, and they ultimately generate a legible response. But: no feelings, no consciousness, no nothing. Most of the chat interfaces are programmed to demonstrate a level of civility, and you might ask them to generate something that conveys an emotional sentiment, but behind it all, it's just a guessing machine.
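For the code-curious, here's a toy sketch of that guessing game – a tiny "bigram" model that only knows which word tends to follow which. Real LLMs are giant neural networks trained on unimaginably more text, but the job description is the same: pick a plausible next word.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "guess the next word". Real LLMs use deep neural
# networks over vast training data; this just counts word pairs in a
# tiny made-up corpus. No feelings, no beliefs -- just frequencies.
corpus = "the board meets monthly and the board reviews the reports".split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Pick a likely next word, weighted by how often it appeared."""
    options = following.get(prev)
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

# Generate a few words from a one-word prompt.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```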

But that doesn't mean LLMs don't have preferences, better known as biases, based on what training material they've consumed and how the prediction formulas have been designed. Every LLM generates its own distinct flavor of vocabulary and turns of phrase that are different from the way that most people talk. Until now.

Back to the paper in question. The researchers analyzed a bunch of unscripted spoken content – academic talks on YouTube and interview podcasts and stuff – and found that, after ChatGPT came out, the way people talked started to change and absorb some of ChatGPT's weird quirks and turns of phrase. The researchers’ fear is that ChatGPT then gets trained on all that content that sounds like itself, potentially causing a vicious cycle and leading to the death of all creative human thought.

For the record, I'm not as alarmed as all that. We've been told a million times that this or that new technology or social movement will be the death of everything we hold dear. The human drive to be creative always seems to win for some reason.

What *has* happened over time, at an accelerating pace as far as I can tell, is that we pay more money for an ever-increasing number of dumb shortcuts that are bad for us. I’m talking things like fast food and bite-sized rage bait disguised as news and artificially tuned singing and now using ChatGPT so much that you start to sound like it. We love shortcuts in general, and we especially love them when they satisfy our basic desires like eating and feeling smart and thinking we're better than people who disagree with us. But creativity persists nonetheless.

My design thinking hero friend, Andrew Seepersad, explained to me that everybody is creative. He said once, “have you ever used a knife like a screwdriver? That's creativity!” And that's never going away. And there will also always be creative outliers, great artists creating revolutionary work that alters the course of history.

But when it comes to board evaluations, there's a well-worn path that for some reason literally every governance consultant I've ever met follows. There's creativity among the people on the path, for sure, but, from what I can tell, even the most creative haven't noticed that they could walk in a completely different direction if they wanted.

Let me ask a question. Imagine you're a director on a board and you hope to do a better job – both individually as a director and collectively as a board – tomorrow than you did yesterday. And let's say that we're in an alternate universe where board evaluations have never existed, so there's no convention or best practice that you need to feel obligated to conform to. What are some steps you'd take to increase the probability that you'd actually be a better director tomorrow than you were yesterday?

I'm going to pause for three seconds for you to plant a seed of creativity.

Okay, with those seeds planted, let me dismissively summarize a conventional approach to board evaluations. Let's say a conventional board evaluation costs 100 Mattbucks to conduct. Typically, we would spend about 90 of those Mattbucks measuring and analyzing potential problems from the past. We'd do this by conducting some interviews, maybe circulating a survey, maybe also reviewing some documents like board packets and mandates and so on. And then we have 10 Mattbucks left over to craft some recommendations about how to get better and write a long and detailed report about the whole thing.

And that's the defining characteristic of the well-worn board evaluation path: measuring and reporting on problems in a way that makes it clear we’ve done lots of work, but without caring much about whether we’re doing the right kind of work in the first place.

But let's put ourselves back in the position of that director who's in a world without board evaluations and wants to get better tomorrow – including the seeds that you might have planted in those three seconds of silence.

You know what question you're probably not asking yourself? “What are some things my board is bad at?” Why? Because you already know. You can name a whole bunch of things you hope to get better at, both individually and collectively. What you don't know is exactly what to do about it. Or how to get everyone in the room to agree on what *they* want to do about it. And then making sure those things are designed to actually address the problems. And then following through.

In other words, we were about to allocate 90% of our budget to answering a question that nobody really needed the answer to. We tell our clients what they already know and charge a tonne of money for it. They’re better off spending it on pizza, which almost certainly will improve their board meetings. Am I arguing that a pizza will improve your board effectiveness more than a board evaluation will? Let me put it this way. I'm not NOT saying that.

[MUSIC PLAYS]

Advances in technology have really changed how music gets recorded. This is a big reason why a song by the Ronettes or the Beatles or whatever can still sound great 60 years later in a playlist of current music. And why Bing Crosby or the Ink Spots from 20 years before that might be great, but they sound old as dirt.

If the history of recording technology is of interest to you, this is one of the many, many reasons you should check out the amazing podcast, A History of Rock Music in 500 Songs by Andrew Hickey. You could start at the beginning – highly recommended – or just dive into a relevant episode like the one on Telstar by the Tornados. Seriously, I won't be mad if you ditch me right now and go listen to that. It's so good.

Regardless of what era or songs you prefer to listen to, there's a huge difference in how the work got done in the 50s and 60s compared to how it gets done now, partly thanks to technology and partly thanks to a shift in the economy of the music industrial complex.

Imagine your band has written what you know could be a hit song. If it's 1965, then it's pretty normal for studio recordings to just capture a live performance of a band with maybe an overdub of a background vocal or guitar solo or whatever.

So that's what you do. And your first take is absolutely brilliant, except for one glaring mistake, or what musicians call a “clam”. A clam is the opposite of subtle. You can't use the take as is. So, the 1965 solution is to do a bunch more takes. But none of them come close to the magic of the first one. Sigh.

At least your recording engineer is a brilliant editor. In 1965, that means physically finding the spot on the tape with the clam, hoping you can do some splicing – like, literally cutting and pasting – to put together a clam-free final version of the song where the edits aren’t too noticeable. Or, otherwise, living with one of your inferior takes.

Make sense?

If you're making that same song in 2025, some parts of the experience might feel familiar. Although, as far as I know, recent pop hits are almost never recorded by a live band playing together in the studio at the same time. It's still something that happens on occasion, just not usually in the mainstream. Have a look at the hugely popular YouTube videos by Snarky Puppy, for example. Not that those count as pop hits.

More likely, the elements of the song are recorded at different places and different times. Beyonce's Cowboy Carter album, for example, was apparently recorded at 14 different studios. Even just the song Texas Hold 'Em was recorded at five different studios. And even if a band is captured live in the studio, there's one huge difference between 2025 and 1965.

Computers.

All the sounds are carefully isolated and then captured in software called a digital audio workstation, or DAW. You might have heard of Pro Tools or GarageBand or Ableton or Logic Pro, for example, where anything can be mixed and manipulated in countless ways and undone if you don't like it.

It's honestly kind of a miracle. I will use these techniques when recording this podcast. I'll do several different takes of the voiceover and choose the bits I like and piece them together in ways you'll never notice.

Miracle though it might be, there's a cost. It's a cost that's analogous both to people starting to talk like ChatGPT and to people missing the point on board evaluations. What started as a whole new world of possibilities for recording, editing and mixing music has now become a set of conventions that are generally accepted even if they aren't desirable. Rhythms are corrected to align perfectly among the instruments and perfectly to the tempo of the song, what we call “the grid”. Tuning of vocals and instruments is corrected to avoid any potential tension or appearance of imperfection. The dynamic range of recordings, also known as the variance between the quietest and loudest parts, is reduced to the point where every song is exactly as loud as every other song, which is to say, as loud as possible.

FYI, the process of correcting rhythms is called quantizing.
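If you're curious what quantizing actually does under the hood, here's a minimal sketch of the idea – snap each played note to the nearest point on a tempo grid, with an optional "strength" knob like the partial-quantize settings in most DAWs. The numbers are made up, obviously:

```python
# Minimal sketch of quantizing: snap note onsets (in seconds) to the
# nearest point on a tempo grid. strength=1.0 snaps fully to the grid;
# lower values move notes only partway, keeping some human feel.
def quantize(onsets, grid=0.25, strength=1.0):
    quantized = []
    for t in onsets:
        nearest = round(t / grid) * grid  # closest grid point
        quantized.append(round(t + strength * (nearest - t), 3))
    return quantized

# A slightly sloppy performance against a 0.25-second grid:
played = [0.02, 0.27, 0.49, 0.78, 1.01]
print(quantize(played))                # [0.0, 0.25, 0.5, 0.75, 1.0]
print(quantize(played, strength=0.5))  # halfway: [0.01, 0.26, 0.495, 0.765, 1.005]
```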

So now let's allow *these* ideas to plant some mental seeds. Quantizing, pitch correction, narrowing the dynamic range to make things as loud and similar as possible. These approaches have been applied to basically every mainstream pop song for the last 20 years, to increasing degrees. None of this is a comment on how good a song is. We're just talking about recording and production techniques.

Ask yourself as honestly as you can: to what extent are your board evaluations treating your clients the way modern production treats songs? Snapping things to the grid. Reducing human imperfection. Making each board a bit more similar to each other board.

Oh, and I didn't mention earlier that some of the hints at the ChatGPTification of speech are the increased use of the words “delve”, “comprehend”, “boast”, “swift” and “meticulous”. I find that hilarious, honestly. I like and use the word “meticulous” on the reasonable regular. I use the word “swift” kind of frequently because my better half and I often remark on the chimney swifts flying around in our neighborhood. It's a type of very twittery little bird. Cute as hell. But the idea that thinkers and content creators are consuming so much ChatGPT content that they've unconsciously started saying “delve” all the time is more than a little hilarious to me.

[MUSIC PLAYS]

My friend, Paul Smith, was the first I know of to refer to the Corporate Governance Industrial Complex. The constellation of consultants, educators, lawyers, regulators, software providers, pundits and so on who all make our living off of whatever we think corporate governance is. I’m sure the Complex has existed in some form since the formation of the first corporation. But the major collapses of the early 2000s started an explosion, a chain reaction that hasn’t really stopped. It’s how I put food on the table, and why you’re listening to this podcast. Good for us!

But it’s time to make an uncomfortable admission: Boards are and have always been fine.

Before The Complex existed, boards were fine. Since the emergence of The Complex, boards are still fine. On average, they're probably no better or worse than they ever were. Before The Complex, there were no real board evaluations. Boards did their jobs, such as they were, and did them fine enough in most cases without much of a thought about some formal process to assess their potential shortcomings. And in nearly all cases, everything was just fine.

That doesn’t mean board evaluations can’t be good. Just that they probably AREN’T good in most cases. As in, they don’t provide value that’s anywhere close to in line with their cost. I think I have a useful hack to push us in the right direction: replace the expensive research phase of the evaluation process with a completely free educated guess.

Remember, we typically spend 90 out of 100 Mattbucks on problem identification. What if I told you that without doing any kind of survey or interview, I can guess with a high degree of confidence that any and all of your clients would be really happy to improve on at least five of the following 10 issues? Probably more than five, actually.

1. They have too much stuff to do and too little time.

2. The board doesn't always get the right information in the right ways at the right time.

3. Some people in the room talk way too much.

4. Some people in the room talk way too little.

5. They often feel like they are in the weeds or too operational.

6. Management comes out of board meetings with a bunch of stuff to do that doesn't really help anybody.

7. People ask questions that feel like a waste of time to everyone else.

8. The board rarely disagrees and/or when they do disagree, it's rarely constructive.

9. Management presentations could be a lot better.

10. Division of labor among board and committees is less than great.

I actually like this list, but I should admit that I wrote it in less than one minute and could come up with way more examples. But this is actually a really good start because, in my experience, literally every board wants to improve in most of these areas, if not all of them. So even if we limit our scope to just these 10, think of how happy your client would be if we could make some significant progress on them. Amazing, right?

So, we still have 100 Mattbucks left because we've done no work. Still, we don't just want to force our client to focus on these 10 things without getting their buy in, right? Why don't we have a quick conversation with the board chair or governance committee chair and ask them which of the 10 things are most relevant to them and maybe get a rough sense of their priorities? Let's say the cost of that conversation is one Mattbuck. We've now spent one Mattbuck to reach the exact same point in the process that we'd usually spend 90 Mattbucks on, albeit with lower resolution because we don't really have any data. But that's okay, because gathering a lot of data to tell our client what they already know and that we could have just guessed ourselves was probably not an awesome use of time and money anyway. And we still have 99 Mattbucks left to allocate to actually useful stuff.

This is the first and probably most important mindset shift that can transform board evaluations from expensive, time consuming, low impact, and boring to something better.

[MUSIC PLAYS]

Let's get back for a second to the weird music phenomenon I was talking about, where on the one hand, the sonic variance of pop music has decreased dramatically. Every hit song from the past 20-plus years has been corrected rhythmically and pitch-wise and squeezed dynamically. The net result of all this is a decrease in diversity in how pop music sounds. Meanwhile, some songs from a time before all that are still insanely popular. More popular than you probably realize. Apple Music creates an annual playlist of the top 100 songs that Shazam, the music-recognition app, hears the most on the radio around the world. Shazam apparently hears more than 1 million hours of music every day. Here are some examples of the songs on that list that are, well, not so new at all. In fact, 16 of the top 100 are from 1993 and earlier.

Here are a few examples, starting in 1993 and moving back in time. What's Up by 4 Non Blondes from 1993 was at number 99. Everybody Wants to Rule the World by Tears for Fears from 1985 was at number 94. I Want to Know What Love Is by Foreigner from 1984, ranked at number 90. Billie Jean by Michael Jackson from 1982 was at number 56. Africa by Toto from 1982 was at number 61. Side note, Toto was basically the backing band for MJ's Thriller, so 1982 was a pretty big year for them. And the oldest song on the list was Hotel California from 1977 at number 73.

The highest-ranking old song was Take On Me by A-ha from 1985 at number 36. The 36th most Shazam'd song of 2024, right above Birds of a Feather by Billie Eilish. Also, I learned today that A-ha was Norwegian. Think about this, though. 16% of the top 100 songs on the radio in 2024 were from 1993 and before. These songs are still huge. And I promise you'd think most of them sound freaking perfect in a playlist of Dua Lipa, Teddy Swims, Benson Boone, Sabrina Carpenter and the rest of the artists in the top 10.

As an aside, Benson Boone, whose Beautiful Things was number four on the list – ahead of Beyonce, ahead of Taylor Swift – has about the same number of monthly listeners on Spotify as Queen, which is cool because his whole vibe is basically a Freddie Mercury pastiche.

I like current music, to be honest, but I think part of what we're seeing here is some evidence that the trade-off of human imperfection for technological precision in music might not be worth it in the long run. And by “worth it” I mean both artistically and commercially.

One way to summarize how the workflow of music production has changed in general is that there's still a tonne of work that goes into songwriting. But then we've divested from the pre-production and performance part of the process and reallocated that investment to the post-production – all the correction and tuning and stuff we've been talking about. But when the objective of all that post-production is to ultimately reduce the variance, the human imperfection from song to song and artist to artist, then the potential for genuine surprise or even revolution also decreases. And people's reaction to art comes in large part from surprise, from confronting the unexpected. So when something truly unexpected breaks through and has a massive impact on tastes overall, you get those revolutions. The emergence of a new paradigm: jazz, rock and roll, electronic, new wave, hip hop. But since then, what? No disrespect to trap, which I like, but it ain't a revolution.

And it all sounds a lot like what those researchers are worried about with people starting to talk like ChatGPT, right?

So, I'm making a soft argument that we might find great artistic value in reallocating our investment to the performance part of the process, similar to our reallocation of Mattbucks away from the value-neutral act of telling the client what they already know and into something much more exciting. And this is the first mindset shift that I think will finally make board evaluations great.

The second shift is inspired by the design thinking folks I've been hanging out with lately, which is invest in understanding your audience. And I'm not talking about anything as generic or general as customizing a survey or conducting one-on-one interviews or whatever. I'm talking about literally trying to understand the preferences of the individual people in the room.

Here's a trivial example. Some people love consuming large quantities of information. Data, storytelling, analysis, tables, graphs. They want it all. In bulk. And can consume it quickly and retain everything. That's not me. At least not most of the time. I can hear and recall music really well, far better than average. But the same doesn't apply for numbers or conversations or names or even stories. It's probably not you either. But some people seem to remember everything and can read super fast. Yes, I know that the pace of reading affects comprehension in general, but there really are people out there who have the skill or talent to just read way faster than other people and still have a high level of comprehension and retention.

Nearly every board evaluation leads to a recommendation related to the content or quantity or timing of information flow to the board. How can we possibly hope to provide a useful recommendation if we don’t understand how the individual people in the room prefer to consume that information?

On a related note, on the day I wrote this sentence, Kelsey McKinney published a really nice essay called “Shakespeare Makes Me a Slower and Better Reader” – good shit.

Without understanding your audience, the probability that your work will, well, work is low. You're basically relying on luck. So maybe let's allocate a few Mattbucks to learning about the people – the directors and executives – so that you can design recommendations that will have the desired impact in the right circumstances.

And please include management here. Don't forget that management will not only be affected by your recommendations, but they'll also bear most of the burden of carrying them out and will also be the ultimate beneficiaries (or victims) of the result.

My preferred approach here is super fast and cheap but not intended to be anywhere near perfect. Let's say it costs another one Mattbuck. FYI I like my approach so much that I’m gonna keep it to myself for now, but you can definitely come up with something good without too much effort. You might prefer to take a more meticulous approach than me, but I encourage you not to do more than 10 Mattbucks of work here. So now we have somewhere between 89 and 98 Mattbucks of budget left to work with.

Now we can do the important part. We know what challenges we want to address and we understand our consumer. For example, our client board and executive team have admitted to us that they tend to get sidetracked in meetings by conversations about the granular details of the music playing in the elevators at their head office. Last meeting they spent 15 minutes unable to choose whether the elevator theme song should be the original version of Charli xcx's Guess or the remix with Billie Eilish. Anyone on the remix side of the argument is objectively correct. I mean, Charli’s verse is good and everything, but who thought it was good enough to repeat twice and call it a song?

In any case, it's not that the conversation was pointless, but everyone regretted it afterward. Even if the board's time is best spent on mundane topics like this – which is questionable – considering a broader range of song options from different artists would have been more useful than adjudicating whether Guess was 2024's thirstiest hit. Anyway, the client has informed you that this type of thing happens all the time.

In your research about people's preferences, you've also learned that a third of the board is obsessed with green things, which in hindsight helps to explain the whole Charli xcx fixation. You recommend a few potential interventions to improve things in the future. Such as:

1. Management can take some time to think about exactly what conversations they hope to have with the board, and then color code elements of the pre-reads and presentations to guide the green-obsessed board members’ attention accordingly.

2. Management can craft some specific questions, the answers to which they sincerely want the board's insight on, and put them in sparkly green letters on the first page of the pre-read. And

3. Be sure to provide at least two versions of every video update – apparently they already do video updates, which is cool – one where the background music is the OG version of Guess and the other with the remix, and maybe a third with something completely different just to offer options.

Reflecting on this story, I bet the part of your brain that's been conditioned by the Corporate Governance Industrial Complex is instinctively assuming this whole example is a joke. It's not. The only part that's mostly unrealistic is the Charli xcx part. And maybe the detail your board gets distracted by is more relevant to your organization than elevator music. But everything else about this example illustrates stuff real boards and executives could try in the real world. People ARE influenced by visual features like colors. Managers SHOULD have a plan for what conversation they hope to have and design board material accordingly. People DO have idiosyncratic preferences that affect the way they engage in group discussions and decision making. Video updates ARE a good idea and super easy to create.

And the probability of these things making a real difference on the challenges at hand is way bigger than big difficult changes like rebuilding agendas or crafting entirely new reports or changing the functioning of committees or whatever. Frankly, most normal recommendations from board evaluations are both too difficult to carry out and not very well designed to solve the problem at hand. And that's a bad combination.

Anyway, so let's say we spend somewhere between 50 and 60 Mattbucks designing interventions that have a high probability of improving the challenges we've identified. In some cases, we might propose a few different options for the same challenge to suit a wide variance in preferences among board members and executives. By then we've got between 29 and 48 Mattbucks left, and we're already way ahead of the traditional approach. If we'd done things the normal way, we'd have already started eating into our last 10 Mattbucks and our recommendations would be too difficult and poorly designed.
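And if you want to sanity-check my play-money math, here's the whole alternate budget tallied in one place. Toy code, obviously – Mattbucks are not a real currency:

```python
# Tallying the alternate board-evaluation budget from this episode,
# in Mattbucks (out of 100). Each line is a (low, high) spending range.
TOTAL = 100
spend = {
    "priorities chat with the chair": (1, 1),
    "learning people's preferences":  (1, 10),
    "designing interventions":        (50, 60),
}

low  = sum(lo for lo, _ in spend.values())
high = sum(hi for _, hi in spend.values())

print(f"spent so far: {low}-{high} Mattbucks")
print(f"left over:    {TOTAL - high}-{TOTAL - low} Mattbucks")
# -> spent so far: 52-71, left over: 29-48
# Versus the conventional approach: ~90 burned on problem-hunting alone.
```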

A side benefit of this alternate approach is that the workload so far for the client is really low. Depending on what you did to learn about people's preferences, they might have only spent a few minutes each. 10 to 15 minutes at most, I hope. Way lighter lift than some hour-long survey and an hour-long interview.

By the way, speaking of surveys, can I please just say that if you haven't had formal training in survey design, then you're almost certainly making lots of mistakes. I'll add that, similar to what we've just been discussing, your survey really ought to be designed for your audience. Not only in the sense that you should be asking questions that are relevant, but in another important sense too.

For example, if you learn that your audience is particularly crunched for time at the moment, then you really should reduce the survey burden from an hour to as close to zero minutes as possible.

Or if one of the challenges you're trying to work on is that your client tends to avoid conflict or disagreement, then you should put some time into designing questions where every possible answer feels just as safe as any other answer. As in, don't ask questions like “to what extent do you agree with the statement: My board is really frickin awesome?” because nobody is going to disagree no matter how they feel. Choose a different approach.

If you don't have formal survey design training and you still want or need to design a survey, hire someone to help you. Preferably someone super nerdy who did their PhD in survey design or something.

And, coincidentally, we've reached a point in our process where designing and delivering a super easy survey might be a good idea. The main complaint that boards, executives and consultants have about board evaluations is that there's usually not a lot of follow through. Good thing we have a bunch of Mattbucks left to invest in increasing the probability of real change. What if we crafted a short survey that's intended to measure and increase the probability of follow through, however imperfectly?

Since there's no hope of scientific validity in a survey with a sample size this small, our aim is to have a psychological impact rather than to present irrefutable data. In short, the survey will present participants with the interventions we hope to try and they can tell us how much they like them and how impactful they think they’ll be. And just the process of thinking about the recommendations and thinking “hm, you know what? I think this might work!” gets them into a mindset where they are more likely to buy in going forward.

Hey, I also have a couple things to say about the validity of analysis when we're dealing with small samples. And yes, no matter how large your board is, the sample is still small. Number one, averages don't tell you anything on their own. Illustrating the distribution of answers is much more informative than an average. In my opinion, you should just delete any averages or references to averages from your reporting to your client. Tell them about the distribution of answers, the alignment, the variance, and don't frame disagreement as a problem to be solved when it's equally possible that it could just be a difference in preferences or styles or perspective. Or because your question was ambiguous.
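Here's a quick, made-up illustration of that point: two hypothetical boards answering the same 1-to-5 question. Identical averages, completely different stories – which is exactly why the average belongs in the bin:

```python
from collections import Counter

# Two hypothetical boards answering the same 1-5 survey question.
# Both average exactly 3.0, but the distributions tell opposite stories.
aligned = [3, 3, 3, 3, 3, 3, 3, 3]  # everyone agrees: "meh"
split   = [1, 1, 1, 1, 5, 5, 5, 5]  # the room is polarized

for name, answers in [("aligned", aligned), ("split", split)]:
    avg = sum(answers) / len(answers)
    dist = Counter(answers)
    # Show the whole distribution -- the variance is the story, not the mean.
    bars = "  ".join(f"{s}:{'#' * dist.get(s, 0)}" for s in range(1, 6))
    print(f"{name:>7} | avg={avg:.1f} | {bars}")
```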

And for fuck's sake, don't compare one organization's answers to another's, or even worse, to an average of a bunch of other organizations. First of all, any question generic enough to be relevant to a bunch of different organizations is probably of limited value. Also, averages. Also, what if a significant variance from normal is a good thing?

[MUSIC PLAYS]

And before presenting your client with recommendations, take some time to test your beliefs against some kind of evidence. By evidence, I don't mean looking around and saying “people say this is best practice so it must be good.” I don't mean saying to yourself that your other clients have praised you for this recommendation in the past. I mean some kind of measurable proof that your recommendations will actually cause something better to happen.

I've talked before on various platforms about the lack of evidence supporting generally accepted positions on director independence, executive compensation, and board skills and diversity – not to mention the fact that boardrooms should probably be laid out differently, or that nobody agrees on what governance even means. So, if you come into or out of a board evaluation with some strident perspective on what good looks like in any of those areas, then you've either discovered something new that the rest of the world needs to know, or you've been understandably duped by the Corporate Governance Industrial Complex's embrace of its own mythology. Sorry about that.

Instead, aim to come out of the engagement with experiments for your client to try. Low cost, low risk interventions that are designed to increase the probability of something good happening in their boardroom. Things they can try, learn from, and then build on or discard as they see fit, depending on the result. You may need multiple prototypes to address the same challenge for the same organization. Like a few minutes ago I recommended three different interventions to solve the same problem for that green-loving board, right? And even suggested that they might need multiple versions of just the video update. That's the shit I'm talking about.

Make it your problem when the recommendations don't work or when your client doesn't follow through on them. If your recommendations are awesome, the client will try them. If they have a few options, then it's more likely that one of them will work or that all of them will work a little in different ways.

So let’s say you did in fact use a survey to build some buy-in to your recommendations. At that point you have a well-defined problem, an understanding of your audience, some carefully designed recommendations, and some confidence about which interventions will resonate most and actually get followed through on.

If you happen to have any Mattbucks left over, then you can spend them on the fun stuff: working – in person, preferably – with the client. Modeling how to carry out the interventions, giving them a tangible experience of what it will look and feel like to follow through on the findings of your board evaluation. Not training, exactly, but practicing. Getting some reps in before the game in a low stakes environment.

The worst case scenario here is that they'll realize that they don't like your recommendations after all. But that's not such a disaster, in my opinion. We all tried our best.

What's more likely is that they'll discover ways to tailor your recommendations to their real world and their own skills and vibes. Instead of a report telling them the what and why, they have hands-on experience of the how.

And with that, we’ve reached the end of the board evaluation.

In my experience, this approach, or something like it, will actually make your client better. It's the board evaluation version of giving Benson Boone an opportunity to record the way Freddie Mercury recorded, even if it turns out there's not much money left over for post-production. Nobody ever listened to We Are the Champions and said “man, I really wish they'd tuned the vocals.” And it's not just because Queen was chock full of great singers and songwriters – which helped a lot, obviously. It was also because the process was designed to increase the probability that it would produce A Kind of Magic. Pun intended.

And that’s what you want. Not the “normal” approach of spending tonnes of time and money on stuff that doesn’t matter. And not trying to force your client into some artificial set of conventions that has no basis in real evidence. Not walking into the room with your ChatGPT vocabulary. But instead leaning into the human imperfection, which is where the magic really comes from in the real world.

So, if you find yourself delving meticulously into your work and boasting about how swiftly you comprehend things, then maybe slow down and read some Shakespeare for a few minutes every week and see if that cleanses your palate of the ChatGPT aftertaste. My off the beaten path favorite has always been Richard II, but your mileage may vary.

Before we wrap up, it's important to acknowledge that we sincerely don't know what a perfect board evaluation process might look like. Nobody ever conducted an experiment to isolate which methodologies are most effective. So, to the extent that your board evaluation is similar to what other boards or consultants do, remember that nobody has any reason to believe that one approach or tool is any better than any other.

Why, then, do I believe in the approaches I’ve shared today? Partly because of the simple fact that the normal way doesn’t work, so there’s no sense in hanging onto it. Partly, though, because the combination of designing for your customer, investing in solutions instead of validating problems, and focusing on balancing simplicity with impact and fun seems to ACTUALLY cause change – improvements in areas that have been resistant to change for decades.

Do you have your own ideas that are better than mine? If so, get out there and try them! Since we have no reason to stick with the status quo, you're free to try anything you like. See if you can find an approach that leads to something like those 16 songs that have lasted for generations instead of what we do now, which makes even great songs disappear overnight.

Thanks for listening. Once again, my name is Matt Fullbrook. I'm based in Toronto and spend most of my time working with boards and executives on various forms of pain relief. They have really hard jobs and very few resources that prepare them for how best to work together. If you enjoyed this episode, the best thing you can do is share it with a friend who might also like it. You can also shoot me a note by heading to mattfullbrook.com. If you have an idea of a topic you'd like me to cover in the future, I'm all ears.

A full transcript of this episode is available at groundupgovernance.com which will include some notes on the music you’ve heard throughout the episode, which, incidentally, has some subtle relevance to the narrative.

Catch you next time.

MUSIC NOTES:

I figure there’s a high probability that you didn’t pay much attention to the little music interludes in this episode. They went by pretty fast. When I started thinking about the connection between music production and board evaluations, I thought it might be fun to do a bunch of different versions of the same theme using various performance, recording and production approaches. I also gave myself a super small time constraint for each one (30 mins *at most*) just to see what weird stuff happened when I didn’t have the liberty to just mess around forever.

TL;DR: What did I learn from all this? It was really interesting how much the constraints affected my thinking and creativity and performance. The equipment I used - instruments, recording gear, computers, etc. - made me focus on different things depending what I had at hand. The time limit forced me to choose intentionally whether I would focus on getting good sounds or capturing tight performances or getting nice sounding mixes or something else. Would I do this again? Probably not as a creative exercise. It’s a bit better for quantity than for quality. But it was definitely a fun way to test some assumptions while quickly creating a bunch of different podcast interludes.

NOTE: The links embedded below point to Soundcloud, where you can hear the unadulterated versions of all the little music clips from the show. Bloopers included.

The musical theme, such as it is, was the bassline from this little loop I made on the Teenage Engineering OP-1 Field. The drums and funny Theremin-style lead line were kind of throwaways I added afterward. The OP-1’s recording workflow is what production nerds would call “destructive”. Once you delete or record over something, it’s gone forever. And for someone with clumsy fingers like me, it’s also really easy to accidentally press the wrong button and lose something. It’s fun, exciting, and annoying. Why did this make a good theme? It’s short, simple, and still kind of interesting. Something I could potentially try a bunch of different things with. This one shows up in the episode about 36 minutes in.

The next one I made used a similar production approach on a different piece of equipment: the MPC Live. MPC workflow is non-destructive. It’s also got lots of functionality like quantizing, a million different sounds, and a bunch of internal post-production capability. I played all the drums and synth bass “finger drum” style with no quantizing. Super simple and kinda funky. Perfect background music for a podcast, honestly. This happens at about 5 minutes into the show.

Next was the one that took the most effort in the end. I sampled some acoustic guitar - Fender Acoustasonic Telecaster, actually - into the MPC and chopped it up into samples, then added some drums. This is all quantized directly to the grid. Then I dumped all that into my DAW (I use Reaper) and added some slap bass - Fender Deluxe 5-string Jazz Bass with cool butterfly painting by Mathias Chau - and super weird guitar stabs on my gold foil Jazzmaster through the Hologram Chroma Console pedal. I think this is the one I like the least. It’s sloppy and a bit annoying. If I were going to spend more time on it, the first thing I would do is re-think the drums. This is at about 25:45 in the show.

At this point I was a bit tired of the digital tools and just played the bassline through some heavy distortion and recorded that. It didn’t make it into the show, but it was a bit cathartic.

Up to this point I had made all the music sitting at my desk in my office. A few days later I was in my basement feeling a bit more unconstrained and did the fun dub version that shows up in the show at about 17:20. The bass is an American Standard Fender Precision compressed to hell and the guitar is my Gretsch Sparkle Jet through the Universal Audio Dream 65 amp emulator and Strymon Timeline delay pedal. The drums and synthesizers are all on the OP-1 Field. I recorded and mixed all of this on the TX-6 and TP-7, which I really enjoy because it’s all kinda manual. No precise editing, no automation. So all the fade ins/outs and weird effects throughout are things I added in real time while the mix was happening.

The last one I made was the straight up rock one at 11:40 in the show. This was the easiest to make and the one that, at least to me, ultimately sounded the best. I gave myself a limit of two takes per instrument just for fun. That means if my first take was *almost* perfect, I’d have to gamble whether to do another and possibly mess up even worse. I started by doing a take on the Fender Acoustasonic where I was making up some chords that moooostly worked with the original bassline theme, but were a bit more straight ahead folk-rock - a bit less tense. It was super messy, so I did another. I fudged through the bassline in one take with a few funny fills that I kinda liked. There’s some ambient electric guitar chords on my Gretsch Roundup that I added a lot of reverb to because…well, if you could hear my performance with no effects on it you would be embarrassed for me. And the drums are just finger drummed on the OP-1 Field. I wish I had a real drum kit, because that would’ve made this feel a lot sexier. This was also all recorded and mixed on the TX-6 and TP-7. Nothing fancy like the dub one, though.

OTHER TUNES:

The intro/outro music isn’t new. I recorded it a couple of years back for the very first long-form episode of the show. I also failed to take very detailed notes on how I recorded it. It sounds to me like the bass is my MusicMan Stingray 5 and guitar is my Gretsch Sparkle Jet. The synth was on the Nord Lead A1 and drums on the MPC Live. I have no recollection of the inspiration or creative process here. But I still kinda like it. Moody and twangy.

I also did a dumb little performance of the synth theme from Take On Me. Nothing to see here, really. This was on the OP-1 Field.

More fun was the few bars I did of A Kind of Magic by Queen. Queen is one of those bands where more than a few of their songs are baked entirely into my unconscious. Every sound and nuance. A Kind of Magic isn’t even close to my Mount Rushmore of Queen songs, but it’s still seared into my music memory. I didn’t even bother to check the original recording to see if I got the key right, or if I had lifted the guitar solo excerpt accurately. Other than a few liberties with the bassline, I know I got it lol.
