Good morning, good afternoon to everyone out there. Welcome to episode 61 of the Leading IT podcast. Today we’re going to talk about AI: what works, what doesn’t work. We’re going to cut through the hype. Our podcast, for new listeners, is for Australian IT leaders who are looking to stay up to date with the latest IT news and trends in AI, cybersecurity, cloud and infrastructure, strategy, and leadership. Your hosts are Tom Leyden, the CIO at Longview, and me, Josh Rubens, CEO at consulting firm Empyrean IT, and we’re going to tackle the fast-changing IT landscape from both sides of the client-vendor relationship with pragmatic, actionable advice.
And today, we’re excited to be joined by Dave Phung. He’s the Chief Disruptor at GPT Strategic, and we’re going to discuss the current state of play for AI in the Australian market and what’s working and what’s not. But first, as always, we’ll get into a bit of news.
Good day, Tom.
Good day, Josh. You’re excited about Hawthorn this Friday?
I am, I am. It’s good to, uh—yeah, didn’t expect it, and you know, anything’s a bonus from here.
Ah mate, I reckon they’ll give the Doggies a chance, give them a run for their money.
I reckon.
Hope so.
I’ve got some news. First one: Windows. Windows 10 reaches end of life in October 2025. Not news in itself, of course, but reports are now surfacing of a lot of compatibility issues with upgrading to Windows 11, along with some changes to the patch rollouts and the upgrade process that are throwing up errors. So if you’re thinking about Windows 11 upgrades, there’s a lot of advice around what you need to do, but you should really start by checking the hardware you’ve actually got out there. Now, the cynics would say this could be a ploy to sell more hardware, but I think Microsoft’s point is that they need a bit more control over the hardware you’ve got so they can secure it better.
Josh: Do you believe that?
I have a couple of thoughts. One is, if you’re only thinking about it now, you’re a bit late. Six weeks before the end of support for Windows 10, you should probably have been planning it already. I’m not having a go at you; I’m just saying that, in general, customers should have been planning it for a while. And yes, they want people to buy the Copilot Plus PCs. It seems a bit dodgy, so it’s something you need to be aware of, right?
Yeah, you’ve got to follow the process and do all the app compatibility testing, and not assume it’s just going to be a very easy transition. It really depends on what you’re running, I think.
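For listeners wanting a starting point, the hardware check Tom mentions can be as simple as triaging an inventory export against Microsoft’s published Windows 11 minimums (TPM 2.0, 4 GB RAM, 64 GB storage, UEFI Secure Boot). The sketch below is illustrative only: the device-record field names are assumptions, not a real SCCM or Intune schema.

```python
# Illustrative triage of a hypothetical hardware inventory export against the
# published Windows 11 minimums (TPM 2.0, 4 GB RAM, 64 GB storage, Secure Boot).
# Field names below are assumptions, not any vendor's real schema.

MIN_RAM_GB = 4
MIN_STORAGE_GB = 64
MIN_TPM_VERSION = 2.0

def win11_ready(device: dict) -> list[str]:
    """Return the reasons a device fails the Windows 11 check (empty list = ready)."""
    problems = []
    if device.get("tpm_version", 0) < MIN_TPM_VERSION:
        problems.append("TPM below 2.0")
    if device.get("ram_gb", 0) < MIN_RAM_GB:
        problems.append("less than 4 GB RAM")
    if device.get("storage_gb", 0) < MIN_STORAGE_GB:
        problems.append("less than 64 GB storage")
    if not device.get("uefi_secure_boot", False):
        problems.append("no UEFI Secure Boot")
    return problems

fleet = [
    {"name": "LT-001", "tpm_version": 2.0, "ram_gb": 16, "storage_gb": 256, "uefi_secure_boot": True},
    {"name": "LT-002", "tpm_version": 1.2, "ram_gb": 8, "storage_gb": 128, "uefi_secure_boot": True},
]

for d in fleet:
    issues = win11_ready(d)
    status = "ready" if not issues else "blocked: " + ", ".join(issues)
    print(f"{d['name']}: {status}")
```

In practice you’d feed this from your endpoint-management tool’s export and also check the supported-CPU lists, which change over time.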
Yep.
And what’s your news?
So, uh, well, the VMware annual event was last week.
I missed it.
I know. Well, I think a lot of people did. Normally we do a whole show on it, but since the Broadcom acquisition, VMware’s popularity has decreased, and I’d imagine there was a much smaller turnout than usual last week. There were a few things, so I’m going to hit the top five or six briefly. One that I saw in the keynote that was quite interesting came from a Barclays CIO survey: they reckon that 83% of enterprises are planning to move workloads back to private cloud from public cloud.
How many?
85%?
83%, so back out of public to private.
Yeah, and primarily because of cost, or—
Yes, cost. It does seem like a very self-serving statistic, and that sounds like a very high number. I mean, people are certainly starting to look at their workloads a bit more closely and move back things that are just costing a lot of money and probably don’t suit the public cloud; storage in particular is very expensive. But that feels a little high and self-serving, and I was interested to see that Michael Dell amplified it on X, which, you know, doesn’t surprise me very much.
Couple of things. Because everyone’s been complaining about their pricing, they had IDC do a report saying that, compared to public cloud, VMware Cloud Foundation reduces infrastructure costs by 34%, increases efficiency by 50%, and delivers a 42% lower overall cost of operations.
Do you believe that?
Do you believe that?
I don’t know. I’m getting a little cynical in my old age, but I feel they’re trying to justify the fact that they’ve tripled the price and customers are really annoyed. And are Gartner and IDC pay-for-play?
I don’t know.
Yeah. The problem is perception. They charged a certain amount, and now they charge much more. It doesn’t matter what the underlying cost savings, or perceived savings, are; the reality is you’re comparing against your last bill, right?
Yeah, probably. You can’t just triple things, tell customers to suck it up, and expect people to stick around. So, there were a few interesting things. I’ll cover them quickly because I want to get into our chat today. They announced VMware Cloud Foundation version 9. They’ve consolidated about a dozen management consoles into one, so you’re no longer jumping to a different console for every task. There’s an improved VCF import tool, so you can easily import data from different tools and older versions. There’s advanced memory tiering with NVMe (virtualized memory tiering), which will work really nicely for machine learning and AI applications. And multi-tenancy: they used to have a separate solution called vCloud Director for multi-tenancy, and now enterprises will be able to run that themselves in their own environments.
And the main thing, which will not surprise you: they announced the VMware Private AI Foundation with Nvidia. Everyone wants to get in bed with Nvidia. So now there are things like a model store, where enterprises can present a store of curated LLMs that users can self-service; you can get Nvidia AI Enterprise software with it; and there are Nvidia NIM microservices that people can launch. For customers who want to run AI on-prem, they wanted to provide a really simple stack for doing that. And maybe that’s worth a chat when we bring Dave in, because the word around is that the cloud is good for AI when you’re doing your proofs of concept and testing, but once you know what you’re doing, bring it on-prem: you buy the Nvidia infrastructure once instead of continuing to pay a high amount each time. I’m interested to hear Dave’s view on that.
So that’s pretty big. And they had a couple of very big customers: the United States Senate Federal Credit Union got up on stage and talked about their experience with this private AI on VCF. The only other one was that they’ve improved the vSAN data protection snapshots, because data protection was always a bit of an issue with vSAN; now you can have immutable snapshots to roll back to, which is really good for ransomware recovery. And the final bit, which is also related to today’s chat, is that they’ve enhanced their edge compute stack. The concept is that with AI, IoT devices, and streaming data, the edge is going to be a really important place to do AI and data processing.
So yeah, that was the main thing. Not a heap. Normally we do an hour on the VMware conference, but honestly, that was it. I’m sure there was more, but I certainly couldn’t find a lot more. I don’t know about you, but I was a little disappointed with that; normally there’s a lot more there.
It’s the AI stuff. The on-prem AI stuff is interesting. That would be worth talking about in the future: how you actually make all of that come together.
It is interesting, you’re right. And obviously they’re setting themselves up for that on-prem return-to-prem play that may or may not actually exist.
I’m not sure.
Yeah, yeah.
Did you have any other news, Tom?
I do have a quick news item about AI, actually, and a very consumer-focused one. I got a Samsung fridge recently.
Ah, yeah.
They’re building AI into consumer things.
What?
So they’re building it into consumer-grade products. I got a Samsung fridge, and they’ve put AI into it to tell you what’s in your fridge and what you can cook with what’s there.
I need that. I have no idea what’s in my fridge. I certainly have no idea how to cook what’s in my fridge!
So I thought there’s a good use case for you, Josh.
Yes, that’s an enterprise use case. Let’s sort out a proof of concept immediately. It’s a marvellous idea from the point of view of food waste, right?
There you go. I’ve got half a celery stick left; what am I going to do with this?
Yeah, well, how many times do I go to the supermarket and buy stuff I already had? It’s really—it’s really silly.
Right, yeah, so get the, uh, get the cottage cheese out and you’re good to go with the—with the…
So quickly, Dave, let me do an intro, and then we’re keen to get into it. Dave is a specialist in transforming enterprises through disruptive technologies and human-centered design. With a career spanning over two decades, he’s delivered cutting-edge digital innovations for the likes of Fidelity Investments, General Electric, the Australian Open, Penfolds, the US Golf Association, Sony PlayStation, and many others. In 2022, Dave and his team received a Cannes Lion award for innovative use of technology in sport, one of the highest honours in the creative industry. Currently, as Chief Disruptor at GPT Strategic, Dave leads the implementation of Gen AI strategies for a diverse array of clients. Welcome to the Leading IT podcast, Dave.
Thanks for having me, Josh, thanks for having me, Tom. It’s great to be part of these conversations.
It’s a pleasure. So, to get us started: what are your thoughts on on-prem private cloud, VMware, Nvidia? It feels like the orthodontist telling you you need braces, doesn’t it?
Yes, it does. I think the real answer is: it depends on your use case, your industry, your sector, the specific data you’re handling. There are all sorts of requirements in different jurisdictions on how data is stored, transited, received, and archived, plus all the disaster recovery and business continuity planning requirements across industries. So I think the answer really is, it depends. I haven’t heard of a great many projects kicking off to migrate cloud workloads back on-prem, but hey, let’s see what happens.
Oh, come on, 83%, Dave?
Yeah, so, let’s see, you know. But I did read a contradictory statistic around laptops. It could have been on one of your podcasts where a CIO was saying—
—that laptop refresh cycles are essentially getting longer and longer because the workloads are no longer on the machine. I’ve found that in my personal experience. I’ve got an old (well, not that old) three-year-old M1 MacBook; it doesn’t skip a beat, and I have no reason to replace it. Everything’s through a browser; everything’s done somewhere else, to put it in layman’s terms. So I’m not sure I need another laptop, unless I wanted a fancier screen with more pixels. There’s no real requirement for me, anyway.
So what do you think of Microsoft’s push around these Copilot Plus PCs, running small language models on your device with those sorts of local use cases? Have you got any thoughts on that?
I think there’s an inevitability to that, ’cause—
—you know, right now the Microsoft narrative is very, very well played out, right? It feels like 100 years ago, but about eight months back they backed up the truck, unloaded a heap of cash into the OpenAI vault, and said, “Hey, let’s do this.” You can see it’s triggered this wave (and you guys, I’m sure, are seeing it as well), a wave of generative AI adoption through Copilot, which is easy, right? It’s there; you might as well turn it on. Your data is already behind SharePoint, it’s on your Azure instance, so the story writes itself. And I think what’s going to happen from there, if we project out, is deeper specialization, not only at the business level but at the function level and then the individual level, and that’s all going to be Copilot-enabled and OpenAI-LLM-powered. I think that’s the narrative.
Because the form factor is still one that remains unsolved, right? The form factor hasn’t changed; it’s still screens and keyboards and mice. But will that remain the case? There have been a few false starts. I’m sure you guys have seen the likes of the Rabbit R1 and other hardware plays to really deeply embed generative AI into our lives; it was a bit of a false start, let’s be frank. But that’ll change, I suspect, very quickly. And I’d say in Australia right now, the general attitude is that of cautious optimism. More and more projects, and the general theme now is improving productivity through off-the-shelf tools. At a very high level, it’s things like, “Hey, give everyone ChatGPT Pro licenses,” or “Turn on Copilot for a certain number of people and let’s see what happens.”
And Josh and I were just saying, before we hit the record button, that every couple of days now you see a headline: some percentage of projects failed, some percentage of pilots failed to operationalize, and so on. I’m genuinely glad those headlines are starting to make their way into the business media, because over the past 18 months the story has been completely overhyped and over-sensationalized. It turns out we all still need to work and trawl through emails and answer things. It’s ridiculous how much we’ve inflated expectations in the short term and completely forgotten about really basic business fundamentals. What’s the problem you’re trying to solve, guys? What’s the opportunity we’re trying to address? Someone told me once that we’re just humans, just monkeys looking at shiny lights. We’re hopeless, right? Absolutely hopeless. It’s so easy for us to get dazzled by the shiny lights of AI.
So on that point, Dave: the Gartner hype curve, right? We’re all familiar with the Gartner hype curve. It feels like we’re very much on the downward slope for AI, where it’s all been overhyped and overpromised, and now we’re actually dealing with it. We’re realizing that, you know what? It’s not the answer for all things. It’s the answer for some things, though.
Well, don’t forget that it’s still the answer for some of those things.
So do you think we’re coming down to the bottom of that slope, or do you still think there’s a fair way to go around expectations, realization, normalization?
I’d like to think we’re closer to the bottom, and I say that because, in the various webinars and panels and what have you, the tone of the conversation and the questions seems quite different. Maybe six months ago, the general question was, “Is this stuff sentient? Tell me about hallucinations.” Now, the general questions are, “How do I ensure I can safely and quickly adopt this technology to yield the productivity benefits? What are the risks, and how do we manage them?” So it’s really about the practical execution, and that’s why I think we’re closer to the bottom. People are just practical now, in the sense that it’s not going to solve everything; we still need the people to do the work. Pragmatism has finally reared its head and said, “Hey, we’re still here. Let’s have a talk about what problems we’re trying to solve.” So it’s a good thing. And we’re fortunate in technology, because the likes of OpenAI and Microsoft have marketed Gen AI to every person on the face of the planet, it seems, and with that lens, the average information worker can now look at their role, their tasks, their business function, and their industry, and figure out where it might actually generate some benefits. So: closer to the bottom, to answer your question.
So, Dave, can I ask: from your anecdotal view, what’s the state of AI in Australia? I’m talking from a business, enterprise-wide perspective.
Yeah, look, I think what’s been happening over the past 6 to 12 months is that CIOs, CTOs, and CDOs have gone with the ‘pay and play’ model, right? Let’s get users familiar with the technology; let’s get the organisation comfortable with what it does, what it’s good at, what it’s not. And now, with that foundational layer of knowledge, organisations are starting to look at more specific use cases. It would be inaccurate to broadly categorize all businesses that way, but many businesses I speak to are now running a series of different pilots, while many other organisations are a bit earlier, still on off-the-shelf productivity tools. My belief is that general-purpose productivity tools will not solve business-specific problems, but you need to go through that stage before you can look at your business with the right lens.
So, we’re starting to see a lot of people with the right lens, really figuring out the pilots and where to integrate generative AI to yield the most benefit. I think there’s a very thin wall between success and failure. The two things that really stand out right now are:
- Chief Technology Officers and technology leaders, of various titles, have a good opportunity to take a leadership position in the AI and Gen AI conversation. That’s because most people are still foraging around to figure out how to piece it together. IT leaders have the opportunity to say, ‘Hey, we are here to enable the business with the technology.’ We don’t know the use case because we don’t feel the pain, but we have the wherewithal to bring the right tools, techniques, frameworks, and supplies to solve your business problem. So, I think it’s a wonderful opportunity for IT leaders to step up to the plate, so to speak.
- It doesn’t really matter where it comes from, but infusing a mindset of innovation is critical. Whether it be techniques like human-centered design or emergent thinking concepts, there needs to be an ‘okay-ness’ in experimenting, trying new things, failing, and moving on. That’s crucial because for a technology that has gone from a garage project to global regulation in 18 months, no one really knows where it will be in another 18. Making your organisation’s leaders okay with experimenting and adopting that ‘failing fast, failing forward’ mindset is powerful, especially with generative AI, where every second day there’s a new earth-shattering headline about who has invested in what, Nvidia’s stock, or the latest advancements in language models.
Being in the eye of the storm, IT leaders should step up and say, ‘Hey, we can help you with a framework to integrate Gen AI into your business functions safely and securely.’
You mentioned safety quite a bit there, and I think there’s a lot to unpack around that. Do you want to start with a few points about what you mean by safety?
Yeah, that probably needs another 7 hours, doesn’t it? I think there was an article I read a while ago that said the most dangerous thing about generative AI is still the users. It’s malicious intent, bad actors—they exist in every system, in every corner of the globe, whether we like it or not. So, I think a general level of education about what these tools are, for example, why you shouldn’t upload commercially confidential information into publicly available language models—just really fundamental stuff like that—is crucial.
There are layers, right? The first layer is people: get your people proficient with the tools, what they are, what they do. That’s absolutely crucial for any IT leader in Australia right now. The second is the organisational level: what are the technology boundaries? What tech stack do you put in place to cordon off boundaries, for want of a better label, so that your users are safe? Once you have those two in place, once people understand what they can and can’t do with publicly available tools and within the tools they’ve been provided, you’re most of the way there. Then, of course, there are the security controls you can put in place, and the various standards and compliance regimes that I’m not an expert in, but I know they exist and need to be overlaid onto all of that.
Everyone wants the benefit, but no one wants the risk. It’s on IT leaders to hold the business’s hand and say, ‘Here are the risks.’ You can Google it, right? Hundreds of them are known.
Sorry, I was going to say, from a risk point of view, there’s a data loss risk that IT leaders need to be 100% across. Then there’s the quality issue where people overtrust AI and end up doing something absolutely stupid or detrimental to the business by depending too much on its code or whatever the AI says, assuming it’s gospel. Have you come across that, Dave? How do you mitigate that sort of risk?
Yeah, it’s a good one. I liken it to any of your co-workers, right? If you ask them a question, they’re going to try to be helpful. They’re going to try to give you something that points you in the right direction. Language models are exactly the same, especially off-the-shelf consumer tools like ChatGPT, Perplexity, Claude, or what have you. They’re no different. That’s where I think a threshold level of awareness and understanding comes in. Even if it’s self-directed education—understanding what these tools are, how they’re created, what language models are, what they’re designed to do—once that understanding is in place, users should be able to figure out how to engage. But it’s easier said than done.
The consumer tools, especially, are made to be as addictive as possible, or maybe ‘sticky’ is a better word. They’re made so that people go back easily, and they’re made so that your confidence in them grows with use. They’re not made for accuracy or comprehensiveness—they’re made for user acquisition and referrals. I sometimes liken it to the biggest social experiment the world has ever seen.
So, Dave, there’s a lot of fear out there, especially at the C-level, and maybe one of the biggest fears is FOMO. The CEO is coming to the CIO or CTO saying, ‘What are we doing? We’re going to be out of business in five years if we’re not doing AI.’ How do you balance caution and doing things properly without betting the farm, but also without getting left behind?
These things can’t be 18-month waterfall projects where I take my time to build the data lake, connect everything, get cybersecurity in place, and bring in consultants to work out all my business processes. That takes too long. The traditional way of delivering, like an SAP project or whatever, is too slow. So, what do you suggest as a way to strike that balance?
What I see the early adopters, the early-adopter CTOs, doing is looking at it like an investment portfolio. You’re not going to dump everything into high-risk tech stocks, but you want a little bit of exposure to them, just like you probably want to allocate a small, single-digit percentage of your budget to generative AI projects: see where it adds value, test and validate some ideas, prove concepts to the business, get some runs on the board, build early traction, and build confidence in its application for your business. Then go from there. It’s actually really simple. Whatever your IT budget might be, carve out a little bit, discuss with department heads what it might fund, and, with a threshold level of understanding of what the technology is, look at your business functions with that tinted lens and figure out where it might add value.
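To make the portfolio idea concrete, here is a toy worked example of the carve-out arithmetic. Every number and use-case name is made up for illustration; the only point is that the Gen AI pool stays a small, single-digit slice of the overall budget.

```python
# Toy illustration of the "investment portfolio" budgeting approach:
# carve a single-digit percentage out of the IT budget for Gen AI pilots
# and split it across candidate use cases. All figures are invented.

def genai_allocation(it_budget: float, pct: float, use_cases: list[str]) -> dict[str, float]:
    """Split pct% of the IT budget evenly across pilot use cases."""
    if not 0 < pct < 10:
        raise ValueError("keep the Gen AI carve-out to a single-digit percentage")
    pool = it_budget * pct / 100
    return {uc: round(pool / len(use_cases), 2) for uc in use_cases}

# Hypothetical $2M IT budget with a 5% carve-out across three pilots.
pilots = genai_allocation(2_000_000, 5, ["proposal writing", "knowledge search", "reporting"])
print(pilots)
```

An even split is just the simplest starting point; in practice you would weight the pool toward whichever pilots show early traction.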
The typical tasks are easy to spot: the things you don’t like doing, the dull, the drudgery, the dangerous (perhaps not so much in the information-worker context), and the things that should be fast and easy but are slow…
That was my next question, Dave. What are the main use cases you’re seeing as a practitioner out there? What’s working, what’s not? Some real-life stuff if you’re able to share.
Absolutely. A really common on-ramp at the moment is putting Gen AI in front of your knowledge portal. I’ve said this before in other conversations, but I liken generative AI to the third wave of knowledge management. The first was SharePoint, where you dump all your information and three weeks later it’s completely unusable; we’ve all seen that. The second wave was more collaborative tools, where you work on organisational knowledge together as a team: Notion, Coda, Google, or M365, for example. Now, the third wave is generative AI, where, with a corpus of organisational IP, you can create new knowledge.
That’s a very common on-ramp and a very common use case. Very few organisations have issues with having data; they have issues with access, timeliness, accuracy, traceability, and all that good stuff.
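As a minimal sketch of what “Gen AI in front of your knowledge portal” can look like: a retrieval step pulls the most relevant internal documents, and only that retrieved text is handed to the language model as grounding. The document store, the naive keyword-overlap scorer, and the prompt wording below are all illustrative assumptions, not any specific product; a real deployment would typically use vector embeddings and send the prompt to an actual LLM API.

```python
# Minimal retrieval-augmented prompting sketch. The corpus, the keyword-overlap
# scorer (a stand-in for embedding search), and the prompt template are
# illustrative assumptions only.

KNOWLEDGE_BASE = {
    "leave-policy.md": "Staff accrue four weeks of annual leave per year.",
    "vpn-guide.md": "Connect to the corporate VPN before accessing internal tools.",
    "expense-policy.md": "Expenses over $500 require manager approval.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by how many question words they share with the text."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many weeks of annual leave do staff accrue?"))
```

The instruction to answer only from the supplied context is also what gives you the traceability Dave mentions: every answer can point back to the documents that were retrieved.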
Interestingly, the flavour of the month, or maybe the quarter, seems to be proposal writing. We’ve got a whole host of different pilots out there for proposal writing, which isn’t altogether a huge shock. We’ve got a wonderful partnership with a group based in Perth called Bid Right, who have spent maybe 20 or 25 years writing bids. They had that Kodak moment, as they say, recognising that the technology might quite crudely just put them out of business, because their work is copywriting: writing compelling language to help organisations win work. So proposal writing, tender writing, bid writing is a big body of work for us right now. No organisation is going to argue with “Hey, help us win more work,” so that seems to be quite prevalent. Another is reporting and insights, which I guess is also not a huge surprise. We work with a lot of large organisations that have commissioned research, maybe academic research, plus internal knowledge, troves and troves of it, structured data and unstructured data. A request would come into their data or reporting or insights team, three people would get involved, and four days later, here’s the result. Whereas with a language model, with enough elbow grease and enough data, you can build something that can essentially be queried for that type of knowledge. We’re also now starting to look at social listening platforms (the name escapes me right now) that essentially track the health of your company’s brand across the various social media outlets, and you can bring that data in. So it’s really limited only by your imagination (you’ve heard that before) and by your access to data, of course.
So yeah, proposal writing is a big one; knowledge management and knowledge access is another big one; and reporting and insight generation is another big one. We’re also starting to look at interfaces that are not text, because for some reason or another we’ve all been trained that Gen AI equates to a text-based interface, which is frustrating, because it’s probably the worst UI for most use cases. So we’re starting to experiment with non-chat interfaces: perhaps more point-and-click, even voice-based experiences. It just depends on the use case; it depends on the end user.
Is that with Omni? The GPT Omni stuff, is that what’s enabling that?

Yeah, yeah. I mean, we chose to build exclusively on the Microsoft stack because there are 300 million users on the Microsoft 365 platform, right? It was just an argument we didn’t have to have and didn’t want to have. So I said, “Look, we’ll stick with Microsoft until that’s proven to be not required.”
Yeah, we’re the same, 100%. Are you finding there are particular industries or organisational sizes where it’s working, or are you seeing it across industry?

Good question. No, we’ve got projects in every conceivable industry: consulting, professional services, FMCG and consumer packaged goods, superannuation, the whole box and dice, really. Where it makes most sense, though, especially in the work that we do, which is custom UI and custom models, is probably the upper end of small to medium: where there’s enough technology overhead, enough FTE, enough labour behind a process to warrant some easy-to-measure productivity gains, if that makes sense.

Yeah. And is it in the business case? Are they productivity-gain-type business cases? This job used to take four weeks and now takes four days, that kind of thing?
Yeah, great question. That, again, depends. Productivity is the easy one to measure, because at some point in time the CFO will say, “Hey, what am I getting for this? What’s my ROI?” So yes, that’s an easy one to measure. But in practice, we encourage our clients to look beyond productivity, because while it’s easy to measure hours saved, if every single person and every single organisation got access to these tools, there’s no competitive advantage; everyone just emails everyone else faster, right? So we encourage people to look beyond productivity. What that means is, once you’ve achieved that productivity unlock, what should be called into question is the work itself and how it’s done. And that’s harder to measure and harder to quantify, especially for a CFO, than hours saved.

I’ll give you an example. There’s a listed company where the gentleman we work with runs the UK business. He was with the team in the UK, and they had a meeting. We’d built a chat UI over their proprietary data. His team asked him, “What do we know about this particular product line?” On the spot, he actually didn’t know. Then he realised, “Hold on, I’ve got this tool that was built for us.” So he typed the question in, all the facts were presented, and they all walked away with clear actions. Had it not been for that tool, they would have walked away and, two or three weeks later, been asking: who was going to follow up? Who took the action? It just goes into the machinery. Then, fast forward a little bit, and what’s happened is that the AI is now invited to meetings: to listen, to partake, to summarise notes, to provide critique.
Um, and, interestingly, for the same team we’ve built an AI assistant that essentially hallucinates on purpose. It’s fed facts and insights, and it deliberately hallucinates because we want imaginative and creative ideas that are not strictly fact-based, right? The inputs are the facts, but the ideas generated are actually quite imaginative.
Dave, I don’t know if you saw this article I saw in the AFR last week—I’m not sure whether to be excited or alarmed by it. Basically, they were talking about how the big four, and KPMG particularly, had profits down 5 or 10% or whatever, and I think it was the head of consulting who said, “Yeah, we budgeted for 125 digital FTEs this year; we’ve already done 20.” So, what do you mean by digital FTE? That’s where an AI bot replaces a human. Yeah, that’s pretty full on; that’s intense. What was alarming was that it was said out loud—not everyone’s been willing to talk about it, maybe a little bit afraid to—but someone as conservative as the big four is already doing it, and it’s happening. So, yeah, I wasn’t sure how to feel about it, but I think in a lot of ways we all know that the job landscape is going to change. Where do you think we’ll be in a couple of years around role changes? I know there’s been talk that new types of jobs will emerge which we can’t foresee at the moment. I’m just keen to hear your thoughts on all of that. I guess we could look at it from a macro and a micro view, right? If we zoom out a little bit, we humans have been trying to automate things for at least 300 years, so it’s not new, right? From the times we rolled out of caves clashing rocks together to get heat a little bit faster, to generative AI creating copy for us—it’s part of that one continuum, right? So that’s the macro view. But no one here can guarantee there’s not going to be job loss, right? If we look at it really closely, though, what is a role? What’s a role in an organisation? It’s a collection of tasks, right? And what’s a collection of tasks?
There are some inputs, some processes, some outputs, and if it can be systematised, we should systematise it, right? So we can free up human labour and allocate that work to digital labour; we can free up human labour to do things that are more meaningful, more strategic, or we could just do less. We can finish work early, right? That’s another alternative. Conversely, there are a lot of articles about burnout, right? Certain people in certain industries—teachers, as an example, nurses—are burdened by administrative tasks. Maybe generative AI should be pointed at that so they could leave work early and be more present, or be more refreshed for their next shift, right? So, I think there’s a sort of pessimism versus optimism, and depending on what article it is and what day of the week it is, it can swing either way. But I think at least in the near term, the shift is this: if you’ve got a role that is effectively a collection of tasks that can be systematised, you should systematise it. It’s more efficient, it will free you up to do more meaningful work, and it will ultimately deliver better outcomes for your customers and your organisation. So, yeah, I think it’s a bit of both, really. It might sound very idealistic, but I think ultimately we can evolve as a species. There is a pathway there, and it doesn’t have to be a binary outcome. Yeah, I’m really keen to hear how that flows into what you’re doing with some of your clients. Yeah, sure. Probably over 80% of our projects have been either exploratory or proof-of-concept projects, and I would say that’s fairly similar across the industry. So, if we look at that exploratory or discovery phase, we need to make sure that we’re designing with purpose, right?
What that means is making sure we’re looking at the end-to-end journey of that digital process. And that’s where a lot of clients, or even internal teams, trip themselves up, because they’re looking for the shiny toy, right? They’re looking at it as, “Okay, I want to build this one use case.” But if you zoom out a little bit and look at it through a systems lens, you can see that it’s actually part of a bigger end-to-end process, right? And how do you make sure there are no bottlenecks between the AI and the humans, the digital and the physical? So, a lot of our work has been, as I said, exploratory—making sure that when we go into client meetings, we can apply those principles to get the best possible outcomes. The first principle is very simple: design with the end user in mind, right? And secondly, making sure we’ve got a fully integrated pathway with no breakdowns or gaps in that integration, because that’s where we lose efficiency, right? So if we can give people, whether that’s clients or teams, that bit of a guiding light, it should help them navigate some of the tricky conversations around the exploratory phase. Then, moving on to the proof-of-concept phase, where some of those exploratory projects land: with a digital proof of concept, you can give it to the human end users in their workflows and iterate quickly, so they can see the benefits of AI and of that integrated pathway for their workflow, right? And the more they’re involved, and the more they can test and learn in that space, the more you build their buy-in as well. When they understand how this works and how it fits into their world, it’s going to be easier for them to sell it up and out, right?
So, a lot of our work is helping build up the confidence of those stakeholders to have those conversations, so they can go back and make the business case to the executive team: “Look, this is how AI can free up our labour, and this is how it can generate better customer outcomes.” Yeah, I think it’s a big deal, and it’s only going to continue to grow. Yeah. It’s an exciting time, that’s for sure. So what’s next? We’re looking at building some more tailored offerings around the proof of concept to help bring clients in, and ideally be a little bit more agile as we go through that process with them, to meet them where they’re at. But, yeah, it really is a good time for everyone to lean in and explore what’s possible. Yeah, I agree. Well, good to talk to you today, mate. Thanks, Dave. Thanks.