Joe Ales & Jason West
Season 2: Episode 7 – Data
Joe Ales: Welcome to the Underscore Transformation Podcast. My name is Joe Ales.
Jason West: And I’m Jason West.
Joe Ales: And together, we’re the founders of Underscore. In season two, we’re focusing on implementation and the challenges that surround making changes to policies, processes, and team structures. If you’d like to know more about scoping a transformation programme, please take a listen to Season One.
Today’s episode is number seven in our 10-point transformation checklist, and we’re focusing on data. Data is the lifeblood of business: it helps you understand past performance, predict future performance, and informs decisions every day. If you are implementing new systems as part of a transformation initiative, it really shines a spotlight on your data structure, your data architecture, and the maturity of the data within your systems. Understanding the data you have today, the data you need tomorrow, how you’re going to extract data from your current systems, and possibly load it into new ones, is essential to the success of your programme.
So, Jason, this is a subject quite close to your heart. Last week I was the integrations and reporting geek, this time it’s you. What are the common challenges, Jason, around data that we’ve found in delivering our various transformation programmes over the years?
Jason West: I’ve got a feeling I’ll just be referring a lot of this right back to you – but we’ll see how I do.
Joe Ales: Yeah, this is your examination [LAUGHS].
Jason West: OK, so you’re going to give me real-time feedback. [LAUGHS].
I think the first place to start is really your current data. When you scoped out your programme, hopefully you found all of the nooks and crannies that data has been hiding in. It’s not uncommon, as you start delving into it, to find lots of data in lots of different places, in numerous formats: in dusty old systems, in people’s drawers, in Excel files. Normally, as you start delving in, you start getting minor stress headaches around data retention policies and all that sort of stuff – probably rightly so, because as soon as you start looking at anything, you tend to uncover some stuff. So, the first thing is really to make sure you’ve uncovered everything, so you understand the full extent of the data that you’re having to bring, or potentially bring, into your new system. And don’t be surprised if, during the course of your programme, you find new data sets that you suddenly need to do something with. But that first port of call is making sure, during scoping, you’ve really found everything that you need to find, and that you spend some real time and energy on it.
Typically, what you’ll find is that your data is not as clean, or reliable, or accurate as you really need it to be. In fact, a lot of the reason you could be starting this transformation path in the first place is that you don’t have good, solid, reliable data on which to base decisions. That’s great for a business case, but when it comes to the practicalities of doing something about it, it can be a real challenge if you don’t have that ‘single version of the truth’ for your data today. You’re starting from quite a messy place, and that’s often quite a challenge. So, you really do need to make sure that you apply enough resource, and time, to cleaning up the data in your current systems before you think about moving it into the new world. And that often raises the question: “Well, who owns this data?” In a lot of organisations it’s not clear; there’s nothing documented to say that these data sets are owned by these individuals. So you’re having to address some of that as part of this programme. Before you get to actually implementing systems, do some work around “Where is the data? Who owns it?”, then clean it up and make sure that people are made accountable for the quality of that data.
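To make that “who owns what, and how bad is it?” conversation concrete, here is a minimal sketch of the kind of data audit a process owner could run before clean-up work is assigned. The file and column names are hypothetical, and a real programme would use whatever profiling tooling it already has:

```python
import pandas as pd

def profile_data_set(df: pd.DataFrame, key_column: str) -> pd.DataFrame:
    """Per-column audit: how much is missing, how many distinct values."""
    audit = pd.DataFrame({
        "missing": df.isna().sum(),
        "missing_pct": (df.isna().mean() * 100).round(1),
        "distinct_values": df.nunique(),
    })
    duplicates = int(df[key_column].duplicated().sum())
    print(f"Duplicate '{key_column}' values: {duplicates}")
    return audit

# e.g. audit a legacy employee extract (hypothetical file and key column)
employees = pd.read_csv("legacy_employee_extract.csv")
print(profile_data_set(employees, key_column="employee_id"))
```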
Joe Ales: And typically, a project like this is probably the first time the spotlight is placed on the quality of those data sets; they’ve probably never been so exposed. When you implement a new system, and inevitably if you push out self-service, individuals will have access to more information than they probably had in the system you’re trying to replace. And you don’t want to be in a position where you turn it on, say “presto”, and what people see is a load of data from your legacy system that’s just not a great data set.
Jason West: When you think it through, who are the people who should be accountable for cleaning up that data? Who’s responsible?
Joe Ales: It’s process owners. It’s got to be those people who are responsible for the data that is processed, and who are invested in those processes. And if you’re going to push out self-service as well, make individuals accountable for maintaining their own data too, and make sure you describe that accountability to them. But do some groundwork on that data set before you make it visible to those individuals. Like you said: data quality, data audits, etc.
Jason West: And fix it in your current systems today, rather than through your data Extract, Transform, and Load (ETL) process – we’ll get on to that later.
Joe Ales: Yes, where it makes sense for you to do so. What you shouldn’t do is try to build the data architecture that’s going to be used in the future state inside your current system.
Jason West: Yes, so clean up what you’ve got there today, but don’t create a whole load of new data sets in your new system.
Joe Ales: Yes, to almost simulate what the data set is going to be like in the new system. Don’t do that. But make sure that whatever data you’re transforming and loading, or pushing, into your future system is nice and clean.
Jason West: And the reason why you want to fix it in your core system is twofold. First, cleaning up data as you’re extracting it, transforming it, and loading it into the new system is complex and risky, and is just going to slow everything down. So just fix it where it lies.
Joe Ales: Yeah, do that whole data dictionary piece – last week we touched on master data management structures. But I’m not going to say too much about data dictionaries, as I’ll get slammed yet again for the little knowledge I have around master data management.
But the point is that you’re going to have to look at your master data structure, your data tables, your architecture, before you start moving data into a new system. Because that transformation, which you will inevitably have to do, is much more easily done from a set of robust data structures than from random ones.
A very simple example: Mr/Mrs as data points. If you’ve not got that stored in a structured way in the system you’re moving data from, the new system will still expect it. So if you have a system that stores “Mr” with a dot, in upper case, in lower case – all of these variations – that’s going to give you more pain transforming the data set, because you’re pushing it into a system that will inevitably hold that type of data in a structured table format. So spend time tidying stuff up as far as you can.
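As a minimal illustration of that tidy-up (the mapping here is made up; the real controlled list comes from your new system’s data dictionary):

```python
# Collapse the many legacy spellings of a salutation into one structured value.
SALUTATION_MAP = {
    "mr": "Mr", "mr.": "Mr", "mister": "Mr",
    "mrs": "Mrs", "mrs.": "Mrs",
    "ms": "Ms", "ms.": "Ms",
    "dr": "Dr", "dr.": "Dr",
}

def normalise_salutation(raw):
    """Turn 'MR', 'mr.', 'Mister' etc. into the controlled value 'Mr'."""
    if raw is None or not raw.strip():
        return None  # genuinely missing: leave for a human to resolve
    # Flag anything unexpected rather than letting it slip through silently
    return SALUTATION_MAP.get(raw.strip().lower(), f"UNMAPPED:{raw.strip()}")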
Jason West: The second reason why you want to make sure you’re cleaning your data up in your current system is testing – especially if you’re running parallel tests, if you’ve got payroll in scope, for example. If you’re fixing your data errors in your extract process, then there’s a good chance your parallel testing is going to show up a whole load of issues, because the data sets weren’t aligned to begin with. So, it’s another good reason to get that done up front.
The other area that you need to focus on is new data: all of the new structured data and transactional data that’s going to get created as you switch on new functionality. With that functionality comes a need to create a whole new set of data, because you’ll perhaps be transacting processes that you haven’t transacted before, and switching on areas of systems that require new information. That covers configuration data – those dropdown lists – master data, and any reference data that might be required. And again, the accountability for designing those data sets, those structures, and those hierarchies should really rest with the process owners. So it’s not just the data clean-up that sits with the process owners; it’s the design of new data.
Joe Ales: Yeah, true. We’ve seen far too many examples where people talk about “Oh look, we’ve got that feature in that system, can we just turn it on, please, and enable it?” Be careful. Understand, what’s the purpose? What is it that we’re trying to achieve? Just because the system has the ability to capture certain data, doesn’t necessarily make it appropriate to do so.
Jason West: Or this might not be the right place to capture it. There might be another system, as part of the overall architecture, that’s a better place to hold it.
Joe Ales: But look at it strategically – what is it that you’re trying to do with the processes that you’re pushing through, or the policies you’re trying to embed within the system? And don’t be dazzled by the bright lights of a feature that exists, where you think “sure, it would be great to have it” – actually, does it deliver any value, any real benefit, against the business case that you created at the outset?
Most likely it just adds complexity to your build and your data points, and ultimately you’ve got an end user who ends up with a system in front of them that they don’t quite understand, and that makes them ask “why?”
Jason West: Yes, “why are you asking me for that information? What are you going to do with it?”
Joe Ales: Just make sure that you’ve got a purpose for any data that you’re asking people to complete, and that it’s well understood, and well defined, with a solid policy idea behind it, as well. So that people can understand the value behind them giving that information, especially as we’re pushing more and more self-service functionality to these individuals. And, again, projects can sometimes get a little bit excited about the ability to capture all of this information; “we’ve got all this information we’ve not had before, wouldn’t it be great to capture it all”.
Jason West: Yeah, or “wouldn’t it be great if we could ask people for their shoe size”. But why? What exactly are you using that for?
Joe Ales: “Because it’s good to have”, but no it’s not.
Jason West: Unless you have to order PPE or something.
Joe Ales: But the likelihood is you’d already have the process embedded in your ways of working. So just be mindful of that.
Jason West: Yeah, I think it helps to have somebody play that kind of external advisory role, as we talked about in another episode – someone able to sense-check some of this: “Are you sure you really need that?” Because this is data that is going to need to be maintained. Somebody’s going to have to input it; someone’s going to have to report on it. So, just because you can, doesn’t mean you should.
Joe Ales: Absolutely.
Jason West: So, the other person to involve in those design decisions around data sets, alongside your process owners, is your data architect. Because as much as finance or HR are going to be coming up with all this cool new functionality, and the new data sets it requires, it has got to fit into the rest of the enterprise architecture. This data probably isn’t just residing in your system; it needs to be fed out into the bigger architecture. So having a data architect involved in those design meetings, as part of the design process, is important to make sure that anything you do will fit with the reporting and data requirements of other systems.
Joe Ales: Absolutely, and this applies to any system that you’re implementing. You should have a cross-functional view of the data sets, because in a lot of cases the data stored in your system is probably mastered elsewhere; cost centre structures, for instance. We’ve seen many projects – HR projects, for instance – that don’t pay much attention to finance information. They just think “well, cost centre structures are really not that important to the HR world”. Well, guess what: they are. Because ultimately you’re going to design approval processes that rely on delegated authority, and the processes that finance have in place will probably rely on their cost centre structure.
Jason West: And it’s not unheard of for people to want to know how many heads or FTEs they have, or the cost of headcount in particular cost centres. So getting that right is going to make your life easier when it comes to switching this thing on; when people ask for reports, you won’t have to say “that’s going to be a bit tricky”.
Joe Ales: Just make sure that you’re not developing data structures in isolation. And don’t just lift and shift what you’ve got from your legacy system into the new one. Really think holistically about the entire data set and all the processes that you’re trying to execute now – processes you were probably executing manually before, but are now going to digitalise. So you’ve got to bring in people from across the organisation, and the data architect will hopefully help support the programme lead, the transformation lead, and provide some assurance to the programme sponsor that the dots are being joined across the enterprise-wide systems.
Jason West: The other thing that process owners need to think about when they think about data is, weirdly, people. What is the user experience? It’s all very well having this ability to create lots of dropdown lists, or whatever it is, but try to describe things in plain language that people will understand. Legacy systems often require some pretty arcane alphanumeric codes; try not to carry them into your bright new shiny system if you can avoid it. Explain things in plain language. If you’ve got an office in Frankfurt, for example, call it Frankfurt; don’t call it DE003 because that’s how it was coded in some ancient Oracle system down the road. Just because whatever old system it was – take your pick of old databases – needed particular coding, try not to put that into the new system. Deal with that translation in an integration. That’s where the design of your integrations, the design of your processes, and your user experience all come together.
Joe Ales: And this is your opportunity to design the data set and the data structure. Again, going back to my specialist subject of master data management: you’re going to create your new data dictionary, your data catalogue, and then use your transformation toolset to convert the old legacy codes you brought from your legacy system into a shiny new “Frankfurt” label, or whatever it is. I totally agree with what you said, Jason; there’s nothing worse than seeing, frankly, garbage from the old systems landing in the new one. It’s such a wasted opportunity.
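In code, that translation step can be as simple as a lookup table that lives in the integration layer rather than in the new system’s data model. This sketch reuses the DE003/Frankfurt example from the conversation; the other codes are invented, and in practice the table would come from your data dictionary or middleware:

```python
# Illustrative legacy-code-to-label table; maintained in the data dictionary.
LOCATION_LABELS = {
    "DE003": "Frankfurt",
    "GB001": "London",
    "FR002": "Lyon",
}

def translate_location(legacy_code: str) -> str:
    """Map an arcane legacy code to the plain-language label users will see."""
    try:
        return LOCATION_LABELS[legacy_code]
    except KeyError:
        # Fail loudly: an unmapped code is a gap in the data dictionary,
        # not something to let slide silently into the new system.
        raise ValueError(f"No plain-language label for legacy code {legacy_code!r}")
```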
Jason West: Of course, the other area that process owners need to focus on is the new transactional data that’s going to get created with the implementation of new technology. If that data is going to add any value, then it’s got to be reported on; someone has got to review it, interpret it, and then take action. Otherwise, back to your earlier point: why are you collecting it? It’s really the process owners who need to decide who can view that new transactional data, who’s going to transact it, and who’s going to report on it. That ties into the whole security design of the system. These things are all related, because it’s not just the system; it then needs to line up with your target operating model. You need that read-across all the way from your vision, your strategy, and your target operating model, down into the processes and, ultimately, the data – who can see and do what with that data.
Joe Ales: Yes, absolutely. Process owners should be designing the future processes and these data sets knowing full well what the target operating model is for the function or the business; otherwise they’ll be making decisions in isolation. Having your target operating model articulated up front will inform how the process owners design their processes and policies, and how they’re going to transact these various data sets. So you’re absolutely right, and one of our future episodes is going to be on the target operating model.
Jason West: And, of course, we covered it in the scoping series as well; in Season One.
I think the other area where programmes get into difficulty is when design teams or process owners wait until the system implementer is sat in front of them in the room to start thinking about new data sets, and are then rushing around trying to figure out “crikey, we’re going to have to design a new job architecture, a new general ledger. We’ve got all these suppliers to include”.
Joe Ales: I wouldn’t want to be the programme manager, or the transformation lead, or indeed the sponsor, if that’s the case. Finding out what data you need to think about when you’ve got the system integrator or implementation partner sat in front of you, asking you to make decisions about how you want to transact a process in the future – it’s not a good place to be.
Jason West: No. If you’re rushing around creating and designing new data sets on the fly, at best it’s going to have repercussions on the usability of the system. At worst it could seriously delay your programme – you might have to push the go-live back – or, even worse, you could be making major structural changes to your data once the thing’s in production. And then your costs just go through the roof. So really focus on getting this right up front, and get the process owners thinking about data design before you get the system implementers in the room.
Joe Ales: And there are prerequisites, aren’t there, in different functions. Having your GL, for example: if, in finance, you’re moving from a number of different ERP systems into one, defining the structured GL for the future is something you want to do before you get a system integrator in the room. If you’re implementing an HR system, have your job architecture and your reward architecture well defined before you get the system integrator into the room. Because an awful lot of processes, and system design, will rely on having that data structure already in place. And if you haven’t got that, before you sign contracts with system integrators and before you start going into workshops, make sure that you’ve done the groundwork on that data set.
Jason West: Of course, it may be that you’ve got a number of different types of supplier in procurement, and you miss one – one type of service, or some category of supplier. If you then need to add that in post go-live, you have got to work through which process is going to transact it, what configuration changes you’re going to have to make, and where that data set is going to need to be transacted, or at least make decisions about not transacting it. So you’ve got to think about security, about reporting, about integrations – even if that data is not required by downstream systems, once you put it into your shiny new system you’ll probably need to exclude it.
Joe Ales: Yeah, or it will appear on some integration downstream and make that system fail. The amount of regression testing that you have to do when you’re making changes to core data structures can’t be underestimated. It is much better to have done all of that design up front, while you’ve got a set of development systems and a project team, and everyone is equipped to test all of these processes and systems; use that time to do the activity. Don’t do it post go-live. Post go-live is absolutely roadmap time, and you will inevitably pick up some bits and pieces as the end users get their hands on the system for the first time. You will be making changes. But making fundamental changes to core data structures post go-live – on day one, day two, months one or two – is somewhat risky.
[Intermission]
Jason West: So, it’s always best to err on the side of caution if you’re considering including, or not including, a certain class of data, whether that’s a type of headcount or a type of customer. You’re better off including it during implementation, even if you don’t actually have any data to put in it or any processes to transact today, so you’ve got those structures to hang your data off when it arrives. It’s just very painful adding those major set-up structures later.
Joe Ales: Technology vendors, or system integrators and implementation partners, might tell you otherwise: “it’s easy to make configuration changes, it’s easy”. And it may well be easy, but it’s the impact of making that change that can have a profound effect.
Jason West: It’s easy making changes to processes; it’s not easy making changes to data, and data structures.
Joe Ales: Yeah, absolutely. So, try and get it right first time.
Jason West: So, you’ve done your design, you understand where all of your legacy data is now and how clean or otherwise it is, and you’ve designed new data structures. You’ve been through your design workshops and everything has gone swimmingly. But now you need to extract data from legacy systems, transform it into a format that the new system can recognise, and then load it in: the whole data ETL, as it’s known in the trade – Extract, Transform, Load. With software as a service, cloud technology – and nearly all of these functional transformation programmes are now enabled by cloud – you typically have several prototypes that need to be built during the life of the programme.
You’ll need a really robust and repeatable process for extracting data, transforming it, and loading it. This is central to the success of the technical parts of the programme because, at best, if you don’t have it really slick, you cause delays; at worst, you introduce errors as you transform. So when it comes to that data ETL side, what is the best approach? What are the key considerations from your perspective?
Joe Ales: Understanding the data structure that you’ve got within your systems today, and understanding the data structure that your new system requires. And then there’s the bit in the middle: how do I convert one data set into the other?
As much as possible, depending on the size of the programme, get some sort of middleware transformation software – middleware that does the conversion in an automated way, so that the process is repeatable. It cannot take an eternity to gather data from different systems and different countries and then sit there manually transforming it. That just wouldn’t be sensible.
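The shape of that repeatable process is simple, even if the real thing is not. Here is a toy sketch in Python – every file, column, and mapping name is hypothetical, and a larger programme would use dedicated middleware or an ETL platform rather than scripts – just to show why scripting the whole run matters:

```python
import pandas as pd

# From the data dictionary: legacy code -> plain-language label (illustrative)
LOCATION_LABELS = {"DE003": "Frankfurt", "GB001": "London"}

def extract() -> pd.DataFrame:
    # In reality: a database query or API call against the legacy system
    return pd.read_csv("legacy_extract.csv")

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Every agreed mapping and clean-up rule lives here, in one scripted place
    df = df.rename(columns={"EMP_NO": "employee_id", "LOC_CD": "location"})
    df["location"] = df["location"].map(LOCATION_LABELS)
    return df

def load(df: pd.DataFrame) -> None:
    # Write the load file in whatever format the new system's import expects
    df.to_csv("new_system_load.csv", index=False)

if __name__ == "__main__":
    # Scripted end to end, the same run can be repeated for every prototype
    # build and again at cutover, in minutes rather than weeks of manual work.
    load(transform(extract()))
```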
Jason West: So, if you find you’ve got people manipulating stuff in Excel to load into the new system, be really worried. That’s not what you want to see.
Joe Ales: If it’s a single-country implementation in a fairly small business, fair enough; it’s probably doable in that scenario. But for a multinational, we would really not recommend it. It just consumes an awful lot of time, in a period where you’re so constrained by time that you just don’t have a lot of it.
Jason West: That’s true. How much time do you typically get given, as the client, to provide your data? What’s the window?
Joe Ales: It’s probably about a week or two, if you’re lucky. And, of course, a client might turn around and say: “Well, I’ve got to give you the data in this week, so I’ll run an extract four weeks before you need it and start working on it now”. But, of course, by the time they hand it over, four weeks have passed, and the data set they extracted is no longer accurate. So you’ve got to have a set of ETL processes that are routinely executed and that extract the data set the system needs as close as possible to the point it needs it, so that you don’t have an awful lot of lag and data catch-up, and so on and so forth.
Jason West: Because the start point for each of these prototype builds is when you provide your data to the implementation partner. So the clock doesn’t start running on that build until you’ve given them the data.
Joe Ales: Exactly, and when it comes to go-live in production, you’ve really got to be extracting the data set as close to that go-live as possible.
Jason West: At the last possible minute, yeah. Otherwise there’s just a whole lot of catch-up to do.
Joe Ales: And interestingly, one of the things you need to think about: if you’re going into a transformation programme that runs over two or three years, with different phases across different geographies, don’t assume that those geographies are going to be ready to give you the data set you need when they’re ready to go live. Make sure that, globally, you do that work up front. Even if those geographies don’t go live for two or three years, they need to be operating to the same data structures, the same data sets, from the outset.
Jason West: Yeah, so you’ve documented your data strategy, your ETL process, so all that is well understood by everybody, globally. And you’ve rehearsed it, as well.
Joe Ales: Yeah, absolutely. And you need to be sure that those geographies, if they’re coming online a year or two later, have got the data set, and have structured data as the new system requires it. What you don’t want to do is rock up to a country and say “right, I need this data, in this way”, only to be told “well, we’ve never had that data set”. If you had known that, you might have made some fundamentally different decisions about your data structure from the outset. Especially in federated business models, where a country or a business unit has the right to govern or manage itself as it wishes, these things become more and more relevant; you can end up in a situation where the master data you’re trying to add to the new system just doesn’t exist.
Jason West: So this is a deeply technical area that’s fundamental to the success of the programme; it’s another one of those areas where you need technical expertise. And that’s twofold: the data ETL process requires people who understand the data structures and the operation of your current systems, and who are able to extract the data effectively, quickly, and reliably. But you also need experts in the data structures and requirements of your new system, who do the ‘transform and load’ piece. So it’s a combined team: the external people you bring in, on a contract or consultancy basis, working on your side – not the system implementation partner’s side – alongside your internal IT team. There’s got to be a really good relationship there, because they need to form a really effective, repeatable process between them. And it’s an area where it’s unlikely those skill sets will sit inside your organisation, so be sure, as you’re planning the resources on this programme, that you’ve got sufficient budget and headcount to bring those people in.
Joe Ales: Yeah, and again, we’ve seen programmes that resourced these data leads internally, and it really wasn’t until the second or third iteration of the design, of that prototype, that they became fully aware of what data was required, in what format, and for what purpose. In some projects we’ve seen data-gathering workbooks, or whatever you want to call them, with specific nuances that many people don’t understand – and there are so many of them. It’s really hard for a person who has no experience of implementing a particular technology to get their head around, and they don’t know what questions to ask of the systems implementation partner either. So the more knowledge of that system, and of the data it requires, you’ve got within your team up front, the better you’ll be able to manage your risk.
Jason West: Because, outside of integrations – and it’s heavily related to them – data is the other major risk area for any programme implementing technology. When it comes to those data loads, one of the things that not everybody does is rehearse the ETL process. Whether that’s rehearsing before each prototype, or certainly prior to pushing all this new data into what will be your production environment: it can be really problematic if all you do is a one-off load and you haven’t had time to practise, to make sure it’s all working as expected – to check, before you hit the loads, that the extract and transformation process doesn’t introduce errors or cause issues. So, as a transformation lead, when you’re talking to your data lead, you want to make sure they’ve included within their plan sufficient time to test and rehearse this process ahead of the final loads into prototype or production.
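A rehearsal is only useful if you check its output. As an illustrative sketch, reconciliation can start as simply as comparing what went into the extract with what came out of the load; the file names and key column below are hypothetical:

```python
import pandas as pd

def reconcile(source: pd.DataFrame, loaded: pd.DataFrame, key: str) -> list:
    """Basic checks that a rehearsal load didn't drop or duplicate records."""
    issues = []
    if len(source) != len(loaded):
        issues.append(f"Row counts differ: {len(source)} extracted vs {len(loaded)} loaded")
    missing = set(source[key]) - set(loaded[key])
    if missing:
        issues.append(f"{len(missing)} records lost in transit, e.g. {sorted(missing)[:5]}")
    if loaded[key].duplicated().any():
        issues.append("Duplicate keys created during transform/load")
    return issues

# Compare the rehearsal's input and output files (hypothetical names)
problems = reconcile(pd.read_csv("legacy_extract.csv"),
                     pd.read_csv("new_system_load.csv"),
                     key="employee_id")
print("\n".join(problems) or "Rehearsal reconciled cleanly")
```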
Joe Ales: Yeah, because you haven’t got many opportunities to do this. In a typical project you might provide that data set two or three times. You might get it wrong the first time, but you shouldn’t be getting it wrong the second time, and the third time is likely to be the production environment. So you’re going to be struggling if you’ve not got that process totally nailed down.
Jason West: And if you are a programme sponsor, one of the real watch-outs is seeing the data quality in your prototypes falling rather than rising. If the data quality is getting worse with each iteration, that’s a real red flag that you’ve got serious problems in your data ETL process; something to really look out for. As a sponsor, it’s one of those things that should be on the regular check-in with the technical team, or certainly part of the exec steerco: have a standing agenda item on data throughout the life of the implementation.
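One way to give that standing agenda item some teeth is a single completeness score tracked per prototype build. This is a crude, illustrative measure – the list of required fields would come from your own design, and the example numbers are invented:

```python
import pandas as pd

def completeness_score(df: pd.DataFrame, required_columns: list) -> float:
    """Percentage of required fields that are actually populated."""
    return round(float(df[required_columns].notna().mean().mean()) * 100, 1)

# Tracked per build, the trend matters more than any single number:
# e.g. prototype 1: 82.4 -> prototype 2: 91.0 -> prototype 3: 96.7 is healthy;
# a falling line is the red flag described above.
```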
Joe Ales: When you go live, you don’t want to be in a situation where the sponsor is getting feedback from the end-user community about poor data quality and poor data structure, because not enough care was taken in transforming this data into the new system.
Jason West: You can take a strategic view as part of your programme. You can say: “You know what, we actually don’t know what the correct data is for a bunch of this, because it relates to individuals and there is no good-quality data anywhere; the only way to find out is by asking them”. So it’s not uncommon, or unreasonable, to use the launch of the system to clean up some of that data. Which is fine, as long as you communicate it right up front.
Joe Ales: And that it’s a conscious decision from the programme sponsor, who understands the ramifications. There are certain data sets where this makes sense: if you don’t know someone’s emergency contact details, for example, because you’ve never captured them before, but it’s the right thing to do going forward and it would be painful to capture them in your legacy system – fine, absolutely. If it can wait until you’re live and you capture it at that point, why not?
Jason West: So, the final thing I think we need to touch on is that whole area of data access, data privacy, and security around data. It was always important, but in a world of GDPR – with almost infinite fines, it seems – you’ve got to pay close attention to this, because the law applies equally during implementation as it does when you’re in production. So, what are the good practices around data security and data governance that people need to ensure are in place to keep them safe?
Joe Ales: You’re right, because a lot of these system implementations are actually using real data from the legacy system, loading it into the new system as a prototype or an iteration of the design, and people are seeing data about real people. I’ve seen some organisations put good practices and checkpoints in place; things like Non-Disclosure Agreements (NDAs), for instance. There are individuals on a project team who wouldn’t typically have access to this information, but because they are part of the project they will inevitably see it. This is something for organisations to think about, and to really drive home the message: “You’re in a project team; you’re going to see information about accounts, people, procurement, and finance that you wouldn’t usually be given access to. Don’t disclose it – this access is just for the confines of the programme”.
The other thing to think about is access to these environments. Even though you’re accessing a test environment, the security protocols for accessing it should be exactly the same as those in production. So you need your security specialists, your security architects, from within your organisation to sign off the way in which you access that environment: making sure that you’ve got single sign-on enabled, two-factor authentication, all of the latest security protocols out there. Whatever practices you have in your organisation in production, have the same processes and protocols in your development environment.
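A lightweight way to hold that line is to compare a test tenant’s access settings against the production baseline before anyone is invited in. The setting names below are entirely made up for illustration; real platforms expose these through their own admin consoles or APIs:

```python
# Hypothetical production security baseline for any non-production tenant
PRODUCTION_BASELINE = {
    "sso_enabled": True,
    "mfa_required": True,
    "open_signup_allowed": False,
}

def tenant_gaps(tenant_settings: dict) -> list:
    """Return every setting where a tenant falls short of the baseline."""
    return [
        f"{name}: found {tenant_settings.get(name)!r}, expected {expected!r}"
        for name, expected in PRODUCTION_BASELINE.items()
        if tenant_settings.get(name) != expected
    ]

# e.g. tenant_gaps({"sso_enabled": True, "mfa_required": False})
# -> ["mfa_required: found False, expected True",
#     "open_signup_allowed: found None, expected False"]
```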
Jason West: So that the whole data governance piece needs to be as robust as it is in your production environment.
Joe Ales: Yeah, and then you’ve got your security models, right – your security roles. Make sure you’re applying the same security standards to the security model in your development system that you ultimately intend to have in production. It’s not a free-for-all; people shouldn’t have uncontrolled access. Even within the project team, there are some members who may not be allowed to see certain data. They may still come across it, because you’re in project mode and these things happen – that’s why an NDA might be important, along with the message “if you see it, report it”. Something else to think about: as you’re doing your build, you’re not going to have a robust security model until further down the line – until your second or third iteration of the design, for example – so don’t invite end users in too soon, because your security model is probably not fully built yet, and you could expose data sets within that system to individuals who wouldn’t typically see them. You do not want to start setting hares running. It’s really important that you invite end users in for user acceptance testing at the right time in the project, which is probably quite far into it, quite close to go-live.
Jason West: And it’s frighteningly easy to check a box on one of these new cloud systems and accidentally send emails off to people, or give people random access to your new tenant, before the security model’s in place. So it’s just something to be highly aware of; I’ve certainly had some first-hand experience of that on a programme back in the day, and sometimes you have to learn from your mistakes.
So, we’ve covered a lot of ground on data here. You might look at that and think it’s not the sexiest topic – sorry, Joe – but hopefully you now have a greater appreciation of the opportunities, challenges, and risks involved in re-architecting your data landscape.
Next week we move on to testing. So, you’ve built all this wonderful stuff, you’ve designed fantastic systems, your data is in wonderful order, you now need to make sure it really is as good as you think it is.
Joe Ales: Thanks for listening, we really appreciate your support. This episode focused on one of 10 critical success factors in the build phase of transformation. We make references in this season to the scoping phase of transformation; to learn more about scoping, please head over to Season One and look out for future seasons on Transition and Optimisation. If you’d like to be at the front of the queue for next week’s episode please hit the subscribe button, and don’t forget to like and review the show if you found it useful.