TRANSCRIPT
Matt Voiceover: 00:00
Welcome back to Sound-Up Governance. My name is Matt Fullbrook and today we have the ninth installment of my conversation with Andrew Escobar, a corporate director, open finance expert and keen AI tinkerer. Andrew was one of the first people I knew who built their own LLM, so I was interested to ask him what AI-related stuff might be on his mind as a director. For reference, this conversation took place in September 2024, which in AI terms is like 500 years ago. Still, in my opinion at least, Andrew's insights are just as relevant as they were a few months back...even if my references to current events are a bit out of date. Let's hear what he had to say.
Matt: 00:55
Let's imagine you're joining a new board. And you, more than most, are familiar with what AI is and how it may or may not be useful or relevant or harmful or whatever in general to organizations. So you've started on a board or maybe you're about to start. What's something you actually want to know with respect to a new organization and AI? We could just be generic. If you're a new director, what's something someone actually should want to know?
Andrew Escobar: 01:31
To be clear, I am not an expert on AI.
Matt: 01:33
But you know more than most.
Andrew Escobar: 01:36
Maybe not. Maybe. Actually, I don't know. But you know what? Maybe more than most directors, sure. I think few directors have actually tried to build a model themselves. I'd want to know what that organization is doing today. That's a great starting point. I'm going to assume you've considered how AI is going to impact your business. I think that's a conversation that most boards and management teams have had. But what have you done to actually build, to think this through, to be more hands-on? And are you doing it in a safe way? I'd want to know your comfort level as an organization with what I do think is a pretty big challenge and a big opportunity, and a bit of upheaval in the regular course of, honestly, business and society. And if the answer is, well, we haven't done much, I don't know if I'd be concerned or not, because again, every organization is different. But I need to know what the starting point is before I do anything next.
Matt: 02:48
Are there answers? And maybe you can't be universal about this, but give it a shot. Are there answers you can imagine getting that would automatically make you go, oh, this is bad?
Andrew Escobar: 03:01
Yes, yes. I need to be very careful here. I actually don't think this is something that a specific organization did. They were asking, I believe it was ChatGPT, a specific question about their specific business. And they got a very specific answer back. And it was revealing of that organization's strategy. That would be concerning to me, just to know that that was out there. But that might just have been regular IT practices, a data leak. I don't know what it could have been. But if you were telling me that you were doing something with your data with a very public model, where that data is exposed, I would be concerned, because I actually don't know how it's being used. I don't know what the impact is to our data, because it wasn't being done in house. And that early experimentation can happen very much segregated and separate from the outside world. It is absolutely possible to dip your toe into AI and to do it in a safe and contained way. But if the answer was, we were using public models and we gave it a small data set and we wanted to see what it would do, that would raise a red flag, because you didn't approach the problem with a lot of intent.
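[Editor's note: for anyone wondering what "safe and contained" experimentation can look like in practice, here's one minimal sketch. It assumes you've installed Ollama (ollama.com) and pulled a model locally. The tool choice, model name and prompt are ours for illustration, not anything Andrew named. The point is simply that the model runs entirely on your own machine, so nothing in the prompt is sent to an outside provider.]

```python
# A sketch of a contained experiment: query a model served entirely on
# your own machine via Ollama's local HTTP API, so the data in your
# prompt never leaves it.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",                  # any model you've pulled locally
        "prompt": "In two sentences, what should a board ask about AI risk?",
        "stream": False,                    # return one complete reply
    },
    timeout=120,
)
print(resp.json()["response"])
```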
Matt: 04:32
This is related, but not in a governance way. I don't even remember where the study happened. Did you see, and if you did see, did you believe, that over 40% of new content on the internet is AI-generated right now, according to some source? Which means that, of course, the models, and the language models especially, are increasingly being trained on AI-generated content. Which, you know, doom loop, obviously. But it just seemed like such an excessive number to me, you know?
Andrew Escobar: 05:14
New content? I'd probably think it's more, right. And that's a question of quality versus not quality. But I almost think that's the wrong metric, because if I have a question, I might use Perplexity to ask a very particular question and get a really precise answer back. Is that new content? That's new content. That's new content that's been written, that's more unique to my situation. And so I don't know, I don't worry about the deluge of what's being called slop on the internet created by AI. I'd be much more worried about what we're doing to protect ourselves and our data.
Matt: 05:58
I can't remember if it was announced today or yesterday, but a reasonably significant news outlet fired its art critics and replaced them with AI.
Andrew Escobar: 06:08
Oh God.
Matt: 06:10
Who wants to read AI-generated art criticism? Like, who, literally, who in the world would want to read that? And I think the answer is literally nobody.
Andrew Escobar: 06:22
Nobody right now, maybe.
Matt: 06:25
Yeah. Until... And what's going to happen is AI will be judging AI-generated art for the purpose of AI consuming AI-generated art criticism. And then we can just ignore it.
Andrew Escobar: 06:39
Hopefully.
Matt: 06:40
Yeah, right.
Matt Voiceover: 06:43
One of the things that's so important about Andrew's perspectives here is that the temperature is low. The future of anything is, by definition, uncertain, and if you add AI to the mix, well, I challenge you to predict next month, let alone five years down the road. A lot of the dialogue around boards and AI suggests that being behind the curve is basically like having a terminal illness without knowing it. Or, at the other extreme, boards are told that everything's going to be fine as long as AI is at the core of every agenda item. Neither of those extremes is practical for most organizations, nor are they rooted in the reality of this whole uncertainty thing. In hindsight, I wish we'd done a deeper dive here, but so it goes. You could always make an AI-generated podcast if you want to hear more. Thanks for listening to Sound-Up Governance. If you enjoy the show, please spread the word.
MUSIC NOTES:
Today’s music started with me listening to the end of the song Hearts Alive by Mastodon. The end, in this case, means the last three minutes of a roughly 14-minute prog metal journey. It’s one of my favourite rock guitar moments ever. Anyway, the connection between Hearts Alive and this gentle, chill, trippy guitar thing will probably only make sense to someone who’s got a guitar in their hands and plays both tunes. Then the lineage will be at least a little clearer. I played this on my Fender gold foil Jazzmaster. All the rest of the sounds were done on my Moog Sub 37, which I’m always delighted to have an excuse to bust out.