17 Mar 2025 - Company & Industry News

acQuire Connected episode 38: Top 3 Data Management Lessons the Mining Industry Can Learn from the Energy Sector

Many oil and gas companies have rebranded as energy companies or expanded into low-carbon projects such as carbon sequestration, critical mineral exploration, and geothermal energy.  

To stay ahead, they’re leveraging legacy data to cut costs, accelerate the energy transition, and meet new regulations. Operating in high-risk, highly regulated environments means these companies have developed rigorous data management standards to drive confident decision-making. So, what can the mining industry learn from them? 

In this episode of the acQuire Connected podcast, Jess Kozman, Data Practitioner at Katalyst, and Steve Mundell, Director of Product at acQuire, discuss what mining companies can learn from the way the energy sector adopts data management practices for more efficient and sustainable operations. Here are our top three lessons from the episode: 

Lesson 1: The value of legacy data 

For decades, energy companies have collected and sat on large geotechnical datasets in their search for oil and gas. Now, as they transition to low-carbon energy projects, they’re realising that their historical data can be a valuable asset rather than a sunk cost.  

Instead of starting from scratch, these companies can reprocess and reinterpret information such as old seismic surveys, well logs and reservoir data to source new opportunities. 

“There’s this huge legacy, 40, 50 years in some cases, of digital data acquisition in the search for oil and gas. A lot of effort and expense is being put into finding, locating that data and making it more accessible,” said Jess Kozman. 

The mining sector faces similar challenges as high-grade, near-surface deposits become more scarce, deeper and more complex to mine. By applying the energy sector’s approach to legacy data, mining companies can digitise and analyse historical drill core data, geophysical surveys and geochemical assays to find more efficient and cost-effective ways to mine. This also supports sustainability efforts by using historical information as a baseline for land rehabilitation activities. 

Steve Mundell shares, “Areas of mines that were put to the side in the past because they were too complex or the metallurgy wasn’t right… are coming into the spotlight now as being potentially viable with the development of new techniques.” 

Lesson 2: Establish industry-wide data standards 

Mining companies still struggle with fragmented data systems, where drill hole logs, geophysical surveys, and environmental data are often stored in disconnected formats across multiple departments.  

The energy sector identified early on the importance of industry standard ways of working with data to improve efficiency.  

Wellsite Information Transfer Standard Markup Language (WITSML) was an early data standard for wellbore data. As Steve Mundell explains, “It was an earlier way of thinking about how there can be a standard that allows multiple functions to work around a single thing and work on their specialties to get the job done.”  
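To make the benefit concrete, here is a minimal Python sketch of several teams reading the same wellbore record from one shared XML document, which is the kind of interoperability a standard like WITSML enables. The element names, attributes and values below are simplified placeholders invented for illustration, not the actual WITSML schema:

```python
# Illustrative only: a simplified, WITSML-style XML fragment (hypothetical
# element names, not the real schema) that any consumer can parse the same way.
import xml.etree.ElementTree as ET

FRAGMENT = """<wellbore uid="WB-001">
  <name>Example Well 1</name>
  <log mnemonic="GR" unit="gAPI">
    <value depth="1500.0">85.2</value>
    <value depth="1500.5">90.1</value>
  </log>
</wellbore>"""

def read_log(xml_text, mnemonic):
    """Return (depth, value) pairs for one log curve from the shared record."""
    root = ET.fromstring(xml_text)
    for log in root.iter("log"):
        if log.get("mnemonic") == mnemonic:
            return [(float(v.get("depth")), float(v.text))
                    for v in log.findall("value")]
    return []  # curve not present in this record

# Drilling, petrophysics and data management teams all call the same reader.
print(read_log(FRAGMENT, "GR"))
```

Because every function works against one agreed structure, the handoffs and ad hoc format translations the speakers describe largely disappear.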

Jess Kozman adds, “The interoperability of that data has always been so critical to running a highly complex technological operation that is focused, as you say, on a single business object, which is a hole in the ground. But when you’ve got a high cost, high risk environment like a deep offshore well, it forces a certain degree of interoperability…” 

More recently, the Open Subsurface Data Universe (OSDU) has emerged as an open-source, vendor- and technology-agnostic data platform used in the energy sector to standardise and streamline subsurface data management, recognising that certain workflows and data practices are not proprietary and that everybody can benefit from sharing.

The mining industry is beginning to explore how to leverage this model for its own data needs. Jess Kozman states, “[The mining industry] basically looked at this data platform and said, here’s a data platform that allows you to have some standardised ways of dealing with geotechnical data collected from a hole in the ground that is applicable across any kind of an earth resource project”.

Lesson 3: Leveraging new technology for more confident decision-making 

Both the mining and resources industry and the energy industry are looking at Artificial Intelligence (AI) and Machine Learning (ML) to enhance operational efficiencies, automate processes and improve decision-making.

Despite the growing adoption of AI, there are still misconceptions about its capabilities. AI isn’t a replacement for data management; it’s a tool that enhances it.

“If I hear one more person come to me and say, well, we don’t have to collect metadata or do data management anymore because I can just ask ChatGPT where my data is, right? I’m going to give you a hard no on that,” says Jess.

When AI is applied to poorly managed data, the results can be unreliable. Instead, Jess highlights its real value, “Where we are seeing real value to machine learning and AI is in augmenting those data management workflows that need to be done to develop a standardised, well-curated dataset.”  
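As a loose illustration of the kind of curation step such workflows automate (a generic statistical screen, not any specific acQuire or Katalyst tooling), a basic outlier check over assay-style values might look like this. The threshold and sample numbers are illustrative assumptions:

```python
# Sketch of a simple curation check: flag values that sit far from the mean
# before a dataset is used for AI/ML training or validation. The z-score
# threshold and the sample assay values are illustrative, not real data.
from statistics import mean, stdev

def flag_outliers(values, z_threshold=2.0):
    """Return indices of values more than z_threshold std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical, nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

assays = [1.2, 1.1, 1.3, 1.25, 1.15, 12.0, 1.18]  # one suspect entry
print(flag_outliers(assays))  # -> [5]
```

Checks like this surface candidates for a human to review; they don’t replace the data manager, which is exactly the augmentation point made above.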

One of the biggest success stories in using AI in the energy sector is extracting metadata from legacy reports. Companies are leveraging AI for pattern recognition and metadata extraction from historical datasets.

“A 1950s mag acquisition report may talk about line direction in 12 different ways… An AI tool can potentially say, ‘I found three things that I think you might be interested in. Look here first.’” 
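The first pass of the workflow Jess describes can even be approximated without a large language model. Below is a hedged, rule-based sketch of that candidate search; the phrase patterns and the sample report text are invented for illustration, not drawn from a real acquisition report:

```python
# Illustrative sketch: scan legacy report text for the many ways "line
# direction" might be phrased and surface the top candidates for a human to
# check first. Patterns and sample text are hypothetical.
import re

PATTERNS = [
    r"line\s+direction",
    r"flight\s+(?:line\s+)?(?:heading|bearing|azimuth)",
    r"traverse\s+(?:direction|azimuth)",
]

def find_candidates(text, max_hits=3):
    """Return up to max_hits (position, snippet) pairs worth reviewing."""
    hits = []
    for pat in PATTERNS:
        for m in re.finditer(pat, text, flags=re.IGNORECASE):
            start = max(0, m.start() - 20)  # include a little context
            hits.append((m.start(), text[start:m.end() + 20].strip()))
    return sorted(hits)[:max_hits]  # earliest mentions first

report = ("Survey flown 1956. Flight line heading was N45E throughout. "
          "Traverse azimuth noted in appendix B. Line direction varies.")
for pos, snippet in find_candidates(report):
    print(pos, snippet)
```

In practice an LLM generalises far beyond a fixed pattern list, but the output shape is the same: “look here first” pointers rather than a final answer, which is where the time savings come from.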

For both industries, the most important step is ensuring proper data governance is in place. “That underlying infrastructure, data infrastructure, and data platform has to be in place before you even start embarking on some of those kinds of projects,” says Jess. 

The energy sector’s data management strategies offer some valuable insights for the mining industry through recognising the value of legacy data, implementing industry-wide data standards, and leveraging new technologies with a solid data governance foundation.

To hear more insights from Steve Mundell and Jess Kozman, listen to the full episode here:

 Like what you’ve heard? 

Follow us on Spotify or Apple Podcasts to find out about the latest episode drops. We also love to hear your feedback, so please leave a review! 

Read the full transcript below: 

Jaimee Nobbs (00:00):

Hi, Steve and Jess. Thanks for joining us today.

Steve Mundell (00:03):

Good evening. Thanks for the invitation.

Jaimee Nobbs (00:05):

So it’s great to have you on this episode. Today, we’re exploring what the mining and resources industry can actually learn from the energy sector around their data management standards and practices, in an industry that’s even more regulated and subject to scrutiny than the mining and resources sector. So it should be a fascinating chat. Let’s get into it. Can you start by telling us a bit about yourself and your background? Jess, would you like to start?

Jess Kozman (00:30):

Sure. So I’m Jess Kozman. I’m based in Houston, Texas. I work for Katalyst Data Management. I am a geophysicist by training and experience, and a data manager and a data management practitioner. Katalyst manages a large volume of data on the cloud for the resource sector. I’ve got about 40 years in the global industry working with that data and jumping back and forth between the mining and oil and gas resource sectors, but always with a focus on geotechnical data.

Jaimee Nobbs (01:01):

Fascinating. And how about yourself, Steve?

Steve Mundell (01:04):

Yeah, I’m Steve Mundell. I’m Director of Product at acQuire. I’m based in Perth. My background is in geology, but that was more than 22 years ago now, so I don’t think I’m qualified anymore to be called a geologist. I’ve been working in the technology space for the last 22 years as a part of acQuire, working in everything from doing implementations and training and support of our technology, right through now to my role where I head up the R&D function within acQuire.

Jaimee Nobbs (01:36):

So if we jump straight into it. Jess, you said you’ve been working across the mining and resources, and also the energy sector. In specifically the energy sector, what would you say are some of the biggest changes to the way companies manage their data, that you’ve seen over your time working in the industry?

Jess Kozman (01:54):

Well, if you go back to the beginning, of course, I started with analogue data. So there was the big transition from analogue to digital data and to computerised workflows. But more recently, and I think more significantly, in the last couple of years in the oil and gas industry, we have seen a trend toward repurposing and reuse of legacy data sets, geotechnical data sets, for things other than hydrocarbon extraction. So there’s this huge legacy, 40, 50 years in some cases, of digital data acquisition in the search for oil and gas. And now every oil and gas company has either repurposed itself as an energy company or they have divisions that are working in geologic evaluation, but looking for what we call low carbon energy projects. So that could be CO2 sequestration, it could be critical mineral exploration, it could be geothermal, it could be repurposing an offshore rig for a wind turbine farm. And all of those geotechnical data sets are valuable in those new evaluations. So we’re really seeing a trend toward recognition that legacy data is valuable, and a lot of effort and expense being put into finding, locating that data and making it more accessible to the teams that are working in those fields today. And that’s been a huge change.

Steve Mundell (03:20):

So there’s a big change there from being an oil company looking for hydrocarbons, not just oil, but hydrocarbons, then moving into these other energy types. So there’s certainly a data aspect to that, but what about the cultural shift, the mindset shift, of the organisations themselves? I mean, that surely is not insurmountable, but it’s a huge change.

Jess Kozman (03:45):

Yeah, it is. And what we’re finding is that there’s actually more of an overlap in the skillsets required than you would think just looking at it on its face value. So it’s not just the geotechnical knowledge. You have to know something about a reservoir if you’re going to inject into it, just as you do if you’re going to extract something from it. You have to understand pore space and the saying that’s plastered all over our walls: “It’s all about the rocks.” It’s always about the rocks, if you’re working with an earth resource. But also there are soft skills and there’s a mindset around oil and gas exploration and the idea that you can use the earth as a multi-resource asset, and that innovative thinking, the ability to work with teams, the ability to work in a highly regulated environment, the ability to work across a multidisciplinary team, those things all carry over very well to any one of those kinds of capital-intensive, data-driven, highly regulated projects.

(04:49):

We also find that people in the oil and gas industry are very used to working on decadal timeframes, a project that expands past their likely tenure in a given asset team, or even in a given company. So there’s a lot that translates very well. There is also a cultural mindset in the way that we use that data and the data workflow that kind of gets flipped on its head. If you’re putting something into the earth instead of taking it out, you kind of have to reverse your thinking about the asset lifecycle. But the data that supports it is still highly applicable.

Steve Mundell (05:24):

And so have you seen any examples where it’s done well and not so well, and what are those factors that make it a success or it’s doomed for failure?

Jess Kozman (05:34):

Yeah, so I think what we’ve come to see is one of the success factors is recognition of that change management aspect of the organisation. Rebranding an organisation is one thing, but finding the right people to populate those new asset teams is another. And I guess I’d turn that question around to you, Steve. Do you see kind of the same thing, for instance in Australia, I know the mining industry is moving from kind of easy-to-find superficial deposits to undercover deposits. Do you see the same kind of thing in the transfer of skill sets and data?

Steve Mundell (06:10):

Yeah, so I mean that’s been reasonably long; I would say a longer history in Australia of that type of thing happening. If you look at the discovery of Cannington and Olympic Dam, they’re all deeper deposits where there’s no real surface expression, so to speak. So there was a fair use of geophysics and what have you to start off that kind of search. But I think looking at other areas, for example, with Fortescue, looking at them being an iron ore miner, but also looking at their Future Industries as well. So, the energy side of things, and seeing a transfer of skills not only just within the way that they might be looking for the resource or developing that, but also around managing the land and things like that; there’s a huge number of aspects that are transferable between the two. And I think from those sorts of aspects, there’s a bit of a cultural shift or recognition that it can be done and it can be of benefit and you can do these things. It doesn’t just need to be in one single box. You can do it to initiate a big change.

Jess Kozman (07:28):

Yeah, I think that the ability to think outside of your current workflow is very important in all of those aspects of the industry. I just came back this week from talking to a marine research institute here in the US on the southeast coast who are looking at seabed mining. Of course, they’re looking to the oil and gas industry for best practices around infrastructure and offshore exploration, which has never been a big area for the mining industry, but they acknowledge very openly that it’s going to take some innovative thinking in adopting those existing workflows.

Steve Mundell (08:03):

And touching back on looking at deeper deposits and things like that, as we all know, one of those trends throughout the minerals industry is everything getting deeper, more complex, et cetera. So certainly the days of old, where high-grade, high value, large deposits were ones you could, I’ll just say, “relatively easily” extract. I’m sure there are engineers and metallurgists and whatever cringing when I say easily, but with relative ease in the past. But the metallurgy becoming a lot more complex with the deposits, and them being deeper, raises the bar on the thinking and the technology that needs to be applied to extract the metals. Particularly when you think of the amount of metal that is going to be required over the coming years. The increase in copper, for example, and other metals that are required for the various transformations happening throughout the world.

(09:04):

So a big focus in those areas is required, and I think there is definitely momentum growing around that. It’s not sort of just going, “yeah, we’ve got to think about it sometime later”. There’s definitely a lot of effort being put in that area now. Seeing that, yeah, we’ve got these data sets, we’ve even got these deposits that may have been overlooked in the past. But now, they’re coming back into light. Or even areas of mines that were put to the side in the past because they were too complex or the metallurgy wasn’t right or what have you, are coming into the spotlight now as being potentially viable with the development of new techniques and all of that.

Jaimee Nobbs (09:48):

You mentioned that the energy sector has probably got a more innovative way of thinking that the mining industry can learn some things from. Perhaps from a data management standards perspective, what areas do you think the mining and resources sector is still catching up on?

Steve Mundell (10:05):

Yeah, I think, I haven’t worked in the energy industry. I’m an observer of it, I guess, and I can only talk at it from that perspective. But if you think about an offshore exploration situation, where there’s all of these different functions, all of these different skill sets working on this hole that’s being drilled. So not only operationally drilling it, but then extracting data from it that’s going to be used for exploration or for determining where is this reservoir, how big it is, the quality, et cetera. And, from what I can understand, through the history of the development of certain standards that allow all of those different teams to work around this hole that’s being drilled, and to reduce all of the handoffs and the translations that happened because everyone’s got their own proprietary way of dealing with something, if I remember correctly, that was WITSML.

Jess Kozman (11:13):

WITSML has been around for a while. Yep.

Steve Mundell (11:15):

Exactly. And maybe there are people out there cringing at the mention of that one there. I’m not too sure, but maybe it was just an early stage or earlier way of thinking about how can there be a standard that allows multiple functions to work around a single thing and work on their specialties to get the job done.

Jess Kozman (11:33):

Yeah, that’s really interesting that it’s blindingly obvious to someone just looking from the outside. And I think maybe we take it for granted sometimes if we’ve been in the oil and gas industry for quite a while. But yeah, the interoperability of that data has always been so critical to running a highly complex technological operation that is focused, as you say, on a single business object, which is a hole in the ground. It forces that kind of data interoperability to happen in spite of itself. And I’m sure we both know there’s always this danger of data becoming siloed and having bottlenecks and passing data back and forth between different disciplines, between different functions, between different parts of an asset team. But when you’ve got a high cost, high risk environment like a deep offshore well, it forces a certain degree of interoperability and those data standards, many of those data standards of which WITSML is one.

(12:39):

A venerable one, because it’s focused on well bore data, which is critical. I think the oil and gas industry has progressed further, not in a linear fashion by any means, right? It’s been forward and back and missteps, and it’s taken quite a while for the industry to get to a widely accepted data standard. But yeah, I agree that I think it’s an area where there are some best practices and lessons learned, specifically on the hydrocarbon exploration side, that the mining and minerals resource industry is starting to pick up on. And some of that accelerated adoption is being driven by the fact that a lot of that data is now available on the cloud, it’s available in the public domain. There’s technology available that scales to global reach and large data volumes. So it makes it easier to have those conversations about why industry standards for data formats and data exchange are so important.

Steve Mundell (13:47):

Maybe going on that sort of train of thought of standards, so to speak, there’s a history there with WITSML as one of those standards, but then going along to PPDM and OSDU, what have been the triggers of moving through what look like milestones, I guess, in development and change?

Jess Kozman (14:09):

Well, I think it was the earliest attempts to develop a standard data model built around relational databases, things like that. It took a lot of effort. There were a lot of missteps there. You always have to have that balance between technology providers and software vendors being able to stay competitive and have a product to sell and remain profitable, but recognising that certain workflows and best practices are not proprietary and everybody can benefit from that sharing. And I really think having data on the cloud has made that more visible. It’s made it easier to implement. So the standards that we’re seeing adopted broadly now by both oil and gas and the mining industry, it’s the same standard. I don’t know if you’re working in the OSDU space. So here we have an industry-standards-based, cloud-native, technology-agnostic, vendor-agnostic set of tooling that, when the mining industry looked at it, they basically looked at this data platform and said, here’s a data platform that allows you to have some standardised ways of dealing with geotechnical data collected from a hole in the ground that is applicable across any kind of an earth resource project.

(15:33):

So that kind of broad scale standard, I think, is widely being adopted by operators who have the most skin in the game in terms of cost reduction. I think that’s driving this uptake of standards like OSDU on both sides, in oil and gas and in the mining industry.

Steve Mundell (15:53):

And just on that, maybe going off on a little bit of a tangent, thinking about OSDU and it being open source, has that caused concern with any companies, or do they look at the type of open source licencing it operates under and see that it’s workable? What have been the questions?

Jess Kozman (16:14):

Yeah, the initial discussions always come back to that. From the operator’s point of view, it’s always, do I still have a way of controlling my own proprietary data? Am I sharing data? And of course, the answer to that is yes, proprietary data remains the property of one company on the platform, but the platform gives you the opportunity to share with the regulators that you have to submit to with joint venture partners from the vendor side, it’s always about where’s that boundary between the open source and the competitive space. And I think OSDU has done a really good job of defining that, letting it evolve with input from both operators and technology vendors to make sure that that competitive space is still there. And I think what we’re starting to find is even from a technology service provider’s point of view, we found, for instance, when we had a mining customer come to us and say, can you help us to adopt OSDU for our purposes? We were able to look at that and say, look, 60 to 70% of what we need to do is available there in the open source code base, so it’s going to be easier for us to deliver that and focus on the places where we provide a competitive advantage because we don’t have to reinvent the wheel for every customer that comes to us. So yeah, those benefits are being recognised, they’re being documented, they’re being talked about, and that kind of gets you over that hump.

Steve Mundell (17:35):

Yeah, yeah. No, that’s awesome.

Jaimee Nobbs (17:38):

From a digital transformation perspective, what role does technology like AI and ML play in this space in both the energy and the mining sectors?

Steve Mundell (17:49):

Who wants to take that up first? Jess?

Jess Kozman (17:51):

I’ll let you go. Yeah, I could start on that so I could tell you what it doesn’t do. Okay. Right.

Jaimee Nobbs (17:57):

Let’s start with that.

Jess Kozman (17:58):

So if I hear one more person come to me and say, well, we don’t have to collect metadata or do data management anymore because I can just ask ChatGPT where my data is, right? I’m going to give you a hard no on that, okay. We’ve seen people try to do that, and we’ve seen what happens when you turn any kind of AI or ML, any kind of machine learning, loose on an uncurated public domain data set: you’re going to get some oddball answers. Where we are seeing real value to machine learning and AI is in augmenting those data management workflows that need to be done to develop a standardised, QC’d, well-curated data set that is ready to use as both a training and validation tool for those kinds of advanced technologies. And when you do that, then you don’t just get the answers to questions like, where should I drill my next well?

(19:01):

But you start getting responses that trigger you to think about things you wouldn’t have thought about otherwise. And we’ve been able to demonstrate this by pointing AI tools, even commercial AI tools that have been pre-trained on a public domain data set, turn them loose on a proprietary data set and ask the right kind of questions with that curated indexed dataset behind it with all of the knowledge graphs that you build up with the understanding of what the relationships between metadata look like. And then you start to get some insightful responses from a natural language type AI tool, but you can’t skimp on that first step of cleaning up your data. And I think, again, there’s so much of activity going on in that space that people are starting to catch onto that pretty quick, that what it’s doing is it’s highlighting the need and in a lot of cases, the need to go back and curate those legacy data sets that might have some real gems of information buried somewhere in them. And you can use that AI and ML to point you toward the data that’s going to be important.

Steve Mundell (20:10):

Yeah, certainly seen a lot in that kind of area, coming back to having quality curated data that you can trust to point these tools at, and not only just for learning, but then onwards for looking for insights. But I think even looking at data management practices and the whole scope of data management, not just so much the looking after it, but the collection of data as well, we’re certainly seeing a lot of application of AI and ML in the collection of data out of imagery, for example, turning just images of core or rock or what have you into observations that you can use in modelling or downstream analysis type purposes, or using those tools to identify outliers, looking at the quality of data or monitoring your data against other streams to see what outliers there are. Also, as these tools, generative AI, can provide answers to a question, I guess if you’ve got a data set pointed towards it, there’s also that aspect of using it to generate tools to help the data manager, so generating forms or whatever it might be to help with input. So if we’ve got these inputs that are coming in, well, they help start creating templates or what have you to be able to get this data processed and input. So there’s those sorts of applications that we’ve certainly seen as well and been able to demonstrate.

Jess Kozman (21:43):

Yeah, I’d say one of our big success stories has been around extracting metadata from legacy reports using AI and natural language processing

(21:55):

Large language models to help us understand that a 1950s mag acquisition report may talk about line direction in 12 different ways, and what are the phrases and what’s the wording that you might be looking for? So those kinds of processes could be very labour intensive if somebody had to open up a 50 page report and look for where it might be. Whereas an AI tool can potentially say, I found three things that I think you might be interested in. Look here first. And then those kinds of time savings translate directly into shorter time to value, higher value of information, and that’s the business we’re in. So that’s really getting some value out of it.

Steve Mundell (22:38):

And in the energy area, in terms of the application of AI, how have companies approached that or received the use of AI? What kind of policies need to be generated to make them usable within the organisations?

Jess Kozman (22:56):

So the idea of machine learning helping with pattern analysis, image analysis, things like that. That’s been around for quite a while, right?

(23:05):

Automated picking of seismic traces, things like that. The things that are becoming interesting with things like generative AI are those concerns that you alluded to there: how do you make sure that if you’ve got a whole staff of 50 data managers working on 15 different clients’ data sets, how do you make sure that their use of generative AI is not leaking data from one set to another? And again, that brings you back to best practices around data management, data governance, data lineage, curated data sets, and so that underlying infrastructure, data infrastructure, and data platform has to be in place before you even start embarking on some of those kinds of projects. What we get a lot of requests for is a data platform that will allow these companies to basically plug in and unplug these new AI tools as they become available, as a microservice or as an API, so that you’re investing in that curated data set so that you can use the technology as it evolves. That’s the mindset people are starting to get into. Yeah,

Steve Mundell (24:18):

Yeah, yeah. But I guess from that perspective, there’s still that need for the people and process aspects to be bedded down first in those organisations, so that it can actually be implemented and they have the right structure and governance around the tech.

Jaimee Nobbs (24:38):

How does this new technology coming in impact a company’s data privacy and security as well? What challenges does that open a company up to?

Jess Kozman (24:49):

Steve, you want to take that one first?

Steve Mundell (24:50):

Yeah. Well, I think coming straight back to it, as Jess mentioned, there are the possibilities of data being leaked out and lost, but it comes back to that focus on having the right structures put in place, the right policy. So looking at your data governance to ensure that there are the right measures, the right controls put around the data to reduce the risk of these things happening. And I guess it all comes back to the why when you’re putting those things in, because you can try and do everything, but it’s going to be difficult to do that. But coming back to why, what are we trying to solve? And so looking back at those points that you raised around privacy or your confidential information, it’s understanding what needs to be protected and then putting those controls into place. There are various frameworks out there, whether it’s ISO 27001 or SOC 2 or what have you, that give pretty good starting points to create an information security management system. Those are good starting points for organisations, I think, to start thinking about how they’re going to put controls around their data, and those approaches are almost technology agnostic. It’s just going through a structured way of thinking about the data sets and how they’re used and by whom.

Jess Kozman (26:16):

And I think people usually think of that in terms of preventing the risk of data not being where it should be. But there’s a positive value statement in there as well, which is that your ideal working environment, as you said at the very top of our conversation, Steve, is this amalgamation of potentially pre-competitive open source public domain data research and academics, your own proprietary data, potentially shared data sets as parts of consortia or multi-client speculative data sets. And all of those different entitlements and obligations have to be properly managed in there for you to get the maximum benefit. But what you find is when you have those data classifications and data governance policies in place, everybody doesn’t have to do the same things to the same open file data sets, right? Everybody doesn’t have to go to three different states within Australia, download the data from the same API and clean it up in their own data sets because it is public domain, it can be shared.

Jaimee Nobbs (27:19):

I was wondering how, from an open source perspective, if you’re sharing data, how that works from a privacy perspective, but that does make sense when you’ve got industry standards and frameworks in place that protect all companies using that. That makes sense. If we’re talking about, I guess, industry standards and frameworks, Jess, the mining and energy industries are both heavily reliant on maintaining accurate and auditable data for regulatory reporting, like you both have touched on. How do the regulatory requirements differ between the two industries?

Jess Kozman (27:54):

So just as operators are rebranding themselves as energy companies, what we find is that a lot of regulatory agencies that were previously charged only with regulating oil and gas wells, where the product goes one way in the well bore, from the earth up, are now finding themselves having to think about CO2 injection wells, radioactive waste disposal wells, solution mining wells where something goes down and comes back up, and geothermal closed-loop systems where something goes down but never touches the rocks. So the regulations that were put in place 40 years ago to drill an oil and gas well and take oil and gas out of it are being highly stressed by these new projects, these new technologies and these new approaches to the earth as a source of energy and as a sink for carbon. And frankly, the regulators are struggling to keep up.

(28:58):

You guys working in Australia will understand very well the complications of working in environments that are regulated at the state, local, perhaps Indigenous-ownership, and Commonwealth federal levels. So think about that, and then put yourself back over here in the US, where we have 50 states, each one with its own idea about how to regulate a Class VI injection well. So this issue of who has primacy in regulating these new energy technologies is really causing a mess in the regulatory environment, because legislative processes do not move as fast as technology does. And we’re starting to see regulators come to the conclusion that they’re going to have to pass regulations that address the outcomes and results, not the technology that’s used to get there, acknowledging the fact that the data and the technology are going to evolve much quicker than their regulations can. In the case of CO2 sequestration in geologic reservoirs in particular, you’ve got to remember, we’re designing projects that are meant to put carbon back into a mineralogical environment where it’s going to stay for the next 20,000 years.

(30:18):

So we’re designing data systems for data types that haven’t been thought of yet, technology that hasn’t been invented yet, and people who haven’t been born yet, and trying to create a regulation that addresses all of that is very difficult. What we are seeing is that the open-source environment and cloud data storage are allowing operators and regulators to start to work together on that: to understand what those data sets are going to look like, what is a reasonable amount of data to ask somebody to store, manage and deliver to the government agency responsible for it, and how you store data in a way that somebody can come back in 40 years and use it as a baseline to compare against a project that’s been injecting CO2 for 20 years and is 20 years into its post-injection closure stage. Those are really complex data technology questions, and I’ll tell you, it’s fun working in that environment, but there’s a lot of complexity in it as well. I’d like to get Steve’s take on that. I mean, how much do you have those conversations with regulators, with surveys, with the Commonwealth agencies, about the best way to capitalise on the potential value of that pre-competitive data they manage?

Steve Mundell (31:40):

Not a huge amount, in terms of the work that we do, but it is something interesting when we look at that regulatory space. So I’ll try and cover a couple of things here. Looking at the moment, you’ve got the JORC Code, which is proposed to change in the not-too-distant future. So there are a number of changes there, but in particular, looking at modifying factors, there’s a big presence of ESG within the requirements now coming in, and other items as well. So it’s putting a different lens over the top of the resource, and it raises the question: where is the data going to come from to inform these factors that are new? Do organisations have the data they can use, or where are they going to find it? Because there are resources already reported, and they’re now going to be modified by new factors.

(32:40):

So how does that change? There’s this interesting aspect of the regulations changing and then having to make retrospective changes with new kinds of data. But then also look at it from the perspective of land permitting. To your point, every state in Australia, and we don’t have as large a number as the US, but every state does it differently, with overlapping requirements, whether it’s different mining acts, native title, heritage acts, water or shires, all of these things crosscutting. And then you’re adding on other aspects coming in from the environmental or social side, layer upon layer of regulation that organisations need to navigate in order to make the best decision they can about every single bit of land they’re looking to access, continue accessing and work on. And that presents a pretty difficult challenge, especially if they’re holding hundreds of these licences: what’s the most optimal configuration they can have for their organisation? It’s becoming quite a complex issue to deal with. It’s

Jess Kozman (33:58):

Ever-changing, yeah. I guess we also see it as a business opportunity and a value case, right? In that if you can accommodate a data workflow that gives you access to information about not just your geological risk but your geopolitical risk, and you can amalgamate all of the data from all of those different sources to make an evaluation of where you’re going to develop and where you’re going to invest, then that gives you a competitive advantage as a resource operator. So

Steve Mundell (34:31):

It’s a good point to highlight. I would say, when I was doing geology twenty-something years ago, it was a rock, it had a colour and it had assay results, and that sort of defined the resource and the size of it. But now there are even social impact aspects coming in to modify that resource, and very different data sets coming together to make up your resource. I’m not saying they’ve never been there before, but it’s coming to light that all of these things come together, and that points to the requirement for the data to exist together, not to be siloed out and considered as separate domains, because all of it in some way has relationships with the rest and comes together to define this asset that’s of interest to an organisation, or the world.

Jess Kozman (35:26):

Yeah, and sometimes one of the most important answers an organisation can get about its data comes from using that data provenance and lineage to say: what did we know about this project at the time we made this decision? What was the totality of the data and information we had, not only to satisfy a regulatory request, but to understand our own work process and go back and do continuous improvement? What do we know now? What kind of data do we have available to us now that we didn’t have 20 years ago when we drilled this kind of well? So those are very valuable questions to be able to

Steve Mundell (36:06):

Answer. Certainly is, certainly is. Yeah. With
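The point-in-time question Jess raises, "what did we know when we made this decision?", comes down to filtering records by when they entered the database. This is only a minimal sketch; the record shape, field names and dates are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Record:
    """One observation plus the date it entered the database."""
    dataset: str
    value: float
    recorded_on: date

def known_as_of(records, cutoff):
    """Everything the organisation had on file by the cutoff date."""
    return [r for r in records if r.recorded_on <= cutoff]

history = [
    Record("assay-hole-42", 1.8, date(2004, 6, 1)),
    Record("assay-hole-42", 2.1, date(2019, 3, 15)),  # re-assay, new method
]

# What did we know when the 2005 resource estimate was signed off?
snapshot = known_as_of(history, date(2005, 1, 1))
print(len(snapshot))  # 1 - only the original 2004 assay existed then
```

Real systems track this with versioned or bitemporal storage rather than a list filter, but the principle is the same: the lineage metadata, not the value alone, is what lets you reconstruct the decision context.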

Jaimee Nobbs (36:09):

Regard to the regulatory requirements becoming more complex, and regulators trying to keep up with companies changing technologies and creating new data sets, what onus is on companies to actually take charge in terms of being transparent about that data, particularly from an environmental perspective? For companies at the moment, it’s taking a lot of time and resources to get this information, so why

Jess Kozman (36:41):

Would they share it?

Jaimee Nobbs (36:42):

That’s it, why would they share it? Why would they want these regulatory changes? Isn’t that going to make it more complex for them? I guess I’m just playing devil’s advocate here, but

Steve Mundell (36:52):

Yeah, go ahead, Steve. Yeah, I think, just as a start into that conversation, the social licence to operate is ever present and ever growing. And if there is scepticism about what you’re doing and how you’re going about it, even though you’re just putting it in a one-liner saying, yeah, we are meeting our environmental regulations, I feel there is a growing question of “prove it, show me”. And without that, you’re not going to have acceptance from the local communities, or even a global stakeholder community, of the way that you’re operating, particularly if you’ve had incidents in the past and there might be scepticism around them. So what’s driving it, I feel, is, and I want to make this sound right, being good corporate citizens. That sounds like an awfully fluffy thing to say, but I can’t think of another way of saying it: just getting out there and doing the right thing, so that there’s acceptance and value is generated by doing the right thing.

Jess Kozman (38:09):

So stakeholder engagement is a line item on every resource project, and it generates a lot of data. Who did you talk to? Who have you gotten permits from? What was the date of a signature? How long do you have to execute on a project plan under the terms of a particular agreement? So regulation and transparency of ESG goals, of carbon-counting goals, of carbon offsets in jurisdictions that have a carbon market, proving what you’re doing with those carbon molecules, is a data-rich environment. And it’s really driving a lot of the work that we’re doing, especially in the CO2 sequestration space, where regulations are demanding some sort of transparent calculation of mass balance showing that the CO2 going in at the wellhead is going into the geologic reservoir and staying there, which means you have to find ways to measure all the potential ways it could fail to get there.

(39:16):

All the potential escape paths, including facilities and engineering equipment, real-time monitoring of pipelines, everything down to modelling that CO2 plume as it migrates through the reservoir. And transparency, consistency and standard ways of doing that are incredibly important to, as you said, Steve, establishing, developing and maintaining the trust of the stakeholders who give you that licence to operate. And the biggest problem with that is that what people remember are the anecdotal horror stories. Nobody says, isn’t it great that the oil industry drilled 5,500 oil and gas wells in the Gulf of Mexico without an unplanned release of hydrocarbons? What they do is go to the movies and watch Deepwater Horizon, and listen, that was a data management failure. So nobody ever comes down the hall to me as a data manager and says, hey, thanks, I opened up my laptop today and all my data was right there. I’ll retire when that happens. But boy, if the data they need to support a business decision is not there when they need it, do I hear about it, right? So it’s a data-intensive and data-rich environment, and it’s only getting more so. With this demand for transparency and data democracy, there’s an expectation from people who work with data in a data-rich environment out on the web that all of that data will be available to them if they want to fact-check their local corporate citizens.
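The mass-balance idea Jess outlines, that CO2 metred at the wellhead must be accounted for as stored mass plus every measured loss path, can be sketched in a few lines. The loss categories and figures here are hypothetical examples for illustration, not a regulatory formula:

```python
def stored_mass(injected_t, losses_t):
    """Tonnes of injected CO2 not explained by measured loss paths.

    injected_t: metred mass at the wellhead, in tonnes
    losses_t:   measured losses by path, in tonnes, e.g. facility
                venting, pipeline leaks, migration out of the reservoir

    The remainder is what must be demonstrably retained in the
    geologic reservoir for the mass balance to close.
    """
    return injected_t - sum(losses_t.values())

# Hypothetical reporting period: 100,000 t injected, small measured losses.
retained = stored_mass(100_000, {
    "facility_venting": 120,   # metred at surface equipment
    "pipeline_leaks": 30,      # from real-time pipeline monitoring
    "plume_migration": 0,      # from reservoir plume modelling
})
print(retained)  # 99850
```

The hard part, as the episode makes clear, is not the arithmetic but producing defensible, auditable numbers for each loss term over decades of injection and post-closure monitoring.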

Steve Mundell (40:55):

I think, from a data perspective, some of these initiatives that organisations put in place aren’t just running for a week, a month or a year; they may run for many years. Take the example of funding bursaries to improve skills within the local community: that’s not something that’s going to deliver all its results in the first year. You’re thinking about people going through school or education, so it might be five or seven years until you see someone coming through the other end and actually taking a job in the mine or the operation you’re looking at. So there’s that time aspect as well. Having good data practices, so that the data is curated, looked after, presentable and transparent over a long period of time, and available, is important too.

Jess Kozman (41:53):

And that goes all the way back to your initial comment about skill sets and resourcing for the new energy environment. I mean, we saw this in Australia when we did STEM and K-through-12 outreach in the schools. When you talk about the mining industry, everybody immediately thinks of somebody in a hard hat out there driving a loader around, and then you start talking to them about the 50-person data science team over here writing Python code and managing algorithms for pattern recognition. And that’s mining too, right? So yeah, I think having those skill sets available is a big part of that equation and of meeting those transparency requirements.

Jaimee Nobbs (42:41):

I think that’s a fantastic place to stop, Jess. We’ll probably have to get you on again soon so you two can riff off each other, I think.

Jess Kozman (42:49):

Sure thing.

Jaimee Nobbs (42:50):

You’ve opened up a few topics there, but we’ll leave this episode for today. Thank you both so much. I thoroughly enjoyed listening to you both.

Steve Mundell (43:00):

Thanks, Jess. Thanks, Jaimee.
