Joe Ales & Jason West

Season 2: Episode 8 – Testing, Testing, Testing (part 1)

[Pre-episode intro by Jason West: Welcome to the Underscore Transformation Podcast. Before we get into this week’s episode, we felt that we had to address what’s happening in the world today; how the global pandemic that we’re all living through has resulted in a radical transformation of society and business in just a matter of days. Now, we’ve been talking about business transformation over the past few months, but that’s a structured approach to delivering positive change in organisations. The situation that people and organisations face today is markedly different. In a crisis we all have to make really tough decisions to protect ourselves, our loved ones, the teams we work in, and wider society, and we all know 2020 is going to be a tough year. Governments and organisations are going to be radically altered, as will the way we work and our relationships with our teams, family and friends. So, if you or your family have been directly affected by the coronavirus, we wish you a speedy recovery and return to health.

Now, when we get to the other side of this, and we will, the short-term decisions that we’ve made to deal with the crisis are going to need to be reviewed in the light of a new normal. It’s going to be hard, and the need to transform will ultimately be greater than ever. Which is why we’re going to continue the podcast series as we had planned; we will also have some episodes coming out on working in remote teams, mental toughness in challenging times, and practical guidance for running a transformation programme with a 100% remote team.

So, once again, thank you so much for listening. We hope you’re safe and well, and we hope that you find this week’s episode, and those in the coming weeks, useful.]

Jason West: Welcome to the Underscore Transformation Podcast. My name’s Jason West.

Joe Ales: And my name’s Joe Ales.

Jason West: And together we’re the founders of Underscore. In Season two, we’re focusing on implementation, and the challenges that surround making changes to policies, processes, systems, and team structures. If you would like to know more about scoping a transformation programme, please take a listen to Season one. Today’s episode focuses on testing.

One of the fastest and most assured routes to transformation failure is to shortcut testing. It’s a common story when speaking to operational teams struggling with new systems and processes: either end users weren’t involved in testing, testers weren’t trained before being asked to test, or testing was simply cut short to hit a go-live date.

This episode aims to give functional leaders, who may be new to transformation and large-scale system implementations, a broad introduction to testing and how to avoid these and other common mistakes that cause programmes to fail.

So, Joe, I think we can all think back to systems being rolled out in our careers that didn’t quite work out as expected. What can our listeners do, to avoid following others down this well-trodden path?

Joe Ales: One of the first things to do is recognise that testing is more than just testing that the system works. You’re making significant changes to policies, processes, ways of working, organisation structures etc. So you should test all of these thoroughly, to make sure that the system works as intended and, from an end-to-end process, policy, and roles and responsibilities viewpoint, validate that what you’ve designed is fit for purpose. Having said that, I’m sure we’ll be focusing a lot of this podcast on systems. And rightly so, but we mustn’t forget that there are important principles and practices that need to be considered when you’re implementing broad changes.

Jason West: Yeah, so where do we start? What are the mistakes that people should watch out for?

Joe Ales: The first mistake to avoid is believing that your software-as-a-service (SaaS) solution is a plug-and-play, out-of-the-box solution that doesn’t require much testing. Even the most well-developed software-as-a-service products and technologies require extensive testing to avoid configuration errors, process errors, data breaches (particularly given data protection legislation that is becoming tougher and tougher), integration failures (because you’re implementing a system that connects to other downstream systems), and any operational issues that you might experience. You’re implementing changes to policy, process, operating model, ways of working, roles etc., so all of these things need to be thought through. And to avoid pitfalls, one of the first things I think organisations should do is hire an experienced test manager.

Jason West: Yes, and that’s typically someone you need to bring in from outside.

Joe Ales: Yeah definitely.

Jason West: Unless you’ve got people in-house with extensive testing experience in the technology that you are implementing, rather than a generic test manager.

Joe Ales: Yeah, you need a test manager with, ideally, experience of the product that is being deployed. Because it is easy to approach test management of software-as-a-service technologies in a very ‘blueprint’ way. That approach might have worked well enough with ‘on-premise’ technologies, but the same principles don’t really apply to cloud. So, the organisation should really think that through.

There are material differences between testing an off-the-shelf SaaS solution and testing customised on-premises products. You are testing configuration choices; you’re not testing anything that the organisation has customised from the ground up. And having that knowledge of the system does help enormously.

Jason West: Yeah, because you’re not testing code as such, are you? It will work, as long as you bolt it together in the right way.

Joe Ales: Exactly, you’re testing the design decisions that you’ve made, and that the configuration choices you’ve made have been applied correctly to a particular piece of technology. The testing is different to a typical on-premise implementation.

Jason West: It is really easy to waste a lot of time if you approach testing a cloud-based product in the same way as you would an on-premise ERP or HCM system. You waste a huge amount of time checking for all these different bugs in code, and all the rest of it, and frankly you’re not going to find them. You just spend a huge amount of resource and time looking for stuff that just isn’t there.

Joe Ales: Yeah, things like the login button appears in the same place every time, so you don’t have to test that the login button exists. You don’t have to test that the user can enter their ID and password – that’s just part of the product.

What you do need to test is “I’ve made a set of decisions in the way I want to run a process, now let’s play back that process in a testing world, to see if it works out as we intended it to.” It’s the most simplistic way of looking at it, but you don’t have to spend thousands and thousands of hours testing things that frankly the software-as-a-service technology provider will have done themselves, to release the product out to the masses.

Jason West: And there’s some jargon around this (actually, there is a lot of jargon around this); it’s quite systems-y and IT-y, so it’s new language that we just need to learn. One of those is test scripts versus test scenarios. I think it would be helpful if you could expand on that a bit more: what is a test script? What’s a test scenario? And what’s the difference between the two?

Joe Ales: You’re right actually; test scripts have probably come from on-premise solutions, where everything had to be really prescriptive, because you’re testing something almost from the ground up, that’s been designed and custom-built to a particular organisation’s requirements. So you needed to go to that incredible level of detail, to make sure that no stone’s left unturned, so to speak, with step-by-step instructions covering every data dimension: making sure that we can enter data with decimal points, or values greater than a million in one particular field, and these types of things that you shouldn’t have to worry about in a software-as-a-service deployment. So you’ve got test scripts, which are really quite prescriptive, detailed instructions: “log in as X. Enter this. Open this tab. Open this bit of functionality in the system. Enter these values. Press X, or press submit, or next, or whatever.” And you go through these really prescriptive, detailed instructions. But to be honest, in a SaaS solution, if you’re going into that level of detail, mapping out and describing what somebody else should do, it’s just quicker to do it yourself frankly. It’ll take less time for you to execute that test than it would to write these prescriptive scripts for somebody else to execute.

A test scenario is different. A test scenario is where you’ve made a set of decisions in your design workshops, that might be “I’m going to create a purchase requisition, or create a purchase order. And anything with a value of ‘X’ needs to be approved by ‘Y’, and anything with a value of ‘Z’ is approved by ‘Y’ and ‘Z’.” So you’re making these design decisions.

Jason West: It’s rules really, isn’t it, it’s business logic.

Joe Ales: Exactly, and that’s what you should be testing. So, you’re now creating a scenario that says “right, you are going to launch a series of processes, create a series of purchase orders, and let’s make sure that all the purchase orders we create test the various business rules that we built into our business process.” And you might create different types of scenarios: you might need a scenario where individual ‘X’ creates a purchase order, but if individual ‘Y’ creates a purchase order then the rules might be slightly different. So, these are the scenarios that you have to think about. They’re not scripts, they’re not detailed instructions on how to execute something. They are very rule-based testing scenarios.
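
To make that concrete: stripped of any particular product, a test scenario is essentially a set of business rules played back as checks. Here is a minimal sketch in Python, where the approval_chain() helper, the thresholds and the role names are all hypothetical stand-ins for an organisation’s actual design decisions:

```python
# A toy stand-in for the configured business rules under test; in a real
# programme these rules live in the SaaS product's configuration.
def approval_chain(amount, raised_by):
    approvers = ["line_manager"]
    if amount > 10_000:              # design decision: threshold approval
        approvers.append("finance_director")
    if raised_by == "contractor":    # design decision: who raises it matters
        approvers.append("procurement")
    return approvers

# Each scenario checks one rule from the design workshops, not UI mechanics.
def test_po_under_threshold_needs_only_line_manager():
    assert approval_chain(5_000, "employee") == ["line_manager"]

def test_po_over_threshold_adds_finance_director():
    assert "finance_director" in approval_chain(25_000, "employee")

def test_contractor_po_routes_via_procurement():
    assert "procurement" in approval_chain(5_000, "contractor")
```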

Jason West: So, if you’re a programme sponsor or a transformation lead and you’re having a bit of a check on what’s going on with testing, I think something to look at, from that 60,000-foot view, is “do we have tens of thousands of test scripts, or have we got a few hundred test scenarios?” If you’re implementing on-premise ERP or HCM, then absolutely, you want the tens of thousands. But if it’s a SaaS solution, you really want to be seeing a few hundred test scenarios, and you can get into real trouble if you apply the wrong methodology to the technology.

Joe Ales: Yeah, you will, and what will end up happening (again, we’ve seen some of these things with some of the organisations we’ve worked with) is you end up focusing your energy on testing the wrong things. And ultimately, you get a suboptimal solution that hasn’t been thoroughly tested in the areas you needed to. The other group to include in this, and perhaps we will expand on this a little bit later on, is process owners, taking a lot more accountability around “right, I’ve designed my processes to work in a certain way, with these business rules embedded within my processes. I want to make sure, as a process owner, that the processes are going to work the way I want them to work.”

So, they [process owners] are instrumental in helping the test manager carry out tests for each of the functional areas that are impacted by this change. And the process owner then really thinks through the impact, not just on a system, but also on their operating model, the team behind the system, the policies they’re looking to push out across the organisation, and so on. The test manager is there to pull all this together, but it does require significant involvement from those across the programme, to help pull together all those test scenarios: what is the business logic that they [process owners] really want to test thoroughly?

Jason West: Yeah, absolutely. Okay, so we’ve hired a test manager, they’ve worked on a couple of implementations of the SaaS solution that we’re implementing before, and we’ve made sure that they’re going to take a proportionate approach to system testing. What’s the first thing that your test manager really should be doing when they come on board?

Joe Ales: They should come up with a test strategy; that is really the first thing to do.

Jason West: Got ya. And what should a good test strategy include?

Joe Ales: We talked a little bit about the approach to testing, scripts versus scenarios, so they’ve got to describe their overall approach. How are they going to approach testing? What environments or technologies are going to be used? Break down the components of testing – in a minute we will probably go into a bit of discussion around the different types: smoke testing aspects of the system, processes etc., unit testing, end-to-end testing, parallel testing if it’s needed, and user acceptance testing (UAT as some people refer to it). But other components include: what data or security is going to be tested? What data quality is going to be tested as part of the test strategy? How are individuals going to be trained to effectively test the system? Who’s going to do what? Who’s going to take accountability for which functional areas of the system, end-to-end processes, policy, operating model etc.?

And then, they’ll also need to include in their strategy: “OK, so we’ve got all these tests, but how are we going to document changes? How are we going to document defects? How are we going to document issues? And how are we going to triage all of those, to actually unpick the ones that are perhaps due to a lack of training, versus those that are real defects with a process, a system, a policy etc.?”
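
As an illustration of what documenting and triaging might look like in practice, a defect log needs, at a minimum, a severity, a root-cause bucket, and a view on whether the issue blocks phase exit. A minimal sketch in Python, with illustrative categories rather than any prescribed standard:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    P1 = "show-stopper: a process cannot complete"
    P2 = "major: a workaround exists"
    P3 = "minor: cosmetic, e.g. help text wording"

class RootCause(Enum):
    DEFECT = "configuration or process error"
    TRAINING = "tester misunderstanding, not a real defect"
    ENHANCEMENT = "works as designed, but could be better"

@dataclass
class TestIssue:
    id: str
    raised_by: str
    description: str
    severity: Severity
    root_cause: RootCause
    blocks_exit: bool  # does this stop us leaving the current test phase?
```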

Jason West: Yes, it’s that high level overview of: what, how, and why various things need to be tested, and who’s going to carry those out?

But it really has to be formally documented. This isn’t something that you can just have floating around: you have got to document it, you need to push it through your governance structure, and you need to make sure the right people see it, review it, and approve it.

But it’s a fairly static document isn’t it?

Joe Ales: Yeah, from the very outset, yes. It doesn’t change that much; you might make iterations based on lessons learned from different phases of testing. But realistically you come up with an approach at the very beginning, and that approach should remain throughout the life of the programme. You might change a few things, but it won’t fundamentally change.

And these documents, and this strategy, should be defined at the very beginning of the programme. It’s not something that you do right at the end to pass some sort of quality audit. You do it at the very beginning, and the sponsor and the governance structures, like you described, will need to validate the approach that the programme is taking in its testing, and that comes at the very beginning. And if the sponsor starts to see reams and reams of paper, and detailed scripts, then please challenge the programme manager/director/test manager on their approach, because in a SaaS implementation particularly, they shouldn’t be going down that path.

Jason West: But if you are building software from scratch, or implementing on premise, then yeah you want to see that.

Joe Ales: Yes, absolutely. And if you’re seeing just high-level test scenarios against something that has been developed from the ground up, then be worried, because it doesn’t have enough detail.

Jason West: So, the strategy is fairly static, but must be done up front and approved up front. The plan is different isn’t it? The plan’s a lot more fluid.

Joe Ales: Yeah, totally, because it largely depends on availability of resource; availability of the technology, the system; availability of test environments etc. And the programme manager, together with the test manager, will need to line all that up. Again, when you’re talking about testing, you ideally want people that understand the processes and have got good background knowledge of the system they are testing. And these resources are not typically readily available, right, so you need to have your testing well planned, so that resources are going to be available, and potentially backfilled for the period of time that you’re going to be testing. You’ve got to line all of that up.

Jason West: So, that resource planning and scheduling is one of the really important parts of the test plan. What else? What are some of the other major elements of the test plan you’d want to see?

Joe Ales: So firstly, who will be carrying out the testing? Again, very similar to the resource plan, but also the logistics: where is testing going to be done? Are you going to have testing done in a room with everybody together? In some cases, that is the most effective way of doing it. Or are you going to get people testing remotely? Who’s going to have access to the environment? What test platforms are you going to use? And prior to that, have you made sure that those test platforms are readily available, with users able to connect to them? And are the individuals that you’re inviting in to do testing adequately trained, so they know what they need to do?

Jason West: Trained on what particularly? Because we do talk about this quite a lot.

Joe Ales: Trained on the system and on testing practices. Very often, I’ve seen organisations just give responsibility to individuals to undertake testing. They say “right, go and run a process.” But these individuals don’t know what they’re doing; they’re just following an instruction. Actually, what you want is individuals that are willing to break a system and take pride in that. They say “you’ve given me a beautiful, shiny product that you think works beautifully; I’m going to try and break it, and I am going to do stupid things to break this beautifully designed process.”

Test managers will probably hate it, because these individuals will be picking up on an awful lot of issues and defects and bugs. Sure, when you put them through triage, some might be things that realistically are unlikely to happen, because users are typically not that, dare I say, stupid. But nonetheless, you need to put these weird scenarios and weird human behaviours into technologies, just to see what happens, so at least you’re half prepared. You need to have people with that sort of mindset, and let them loose on systems and processes.

Jason West: So, I think the other thing you need is entry and exit criteria.

Joe Ales: Yes, I was just about to mention that. And be very clear what that is. Get your executive governance structures to approve it. It might be that each phase has a 10-point checklist that says: “right, I’m going to come out of each phase with no P1 (priority one) issues; I’m going to come out of each phase with a user experience that is in large part positive, where users can execute the system processes they need to,” etc. So, you come out with these 10 key bullet points, and all of your scenarios (or scripts, if you’re doing an on-premise deployment) should underpin the exit criteria that you’ve developed and got your exec to sign off.
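
By way of illustration, exit criteria like these can be made explicit, and even machine-checkable. A minimal sketch, assuming thresholds of the kind an exec board might sign off (the numbers are examples, not recommendations):

```python
# Agreed exit criteria for a test phase; every value here is illustrative.
EXIT_CRITERIA = {
    "max_open_p1_issues": 0,       # no show-stoppers outstanding
    "max_open_p2_issues": 3,       # a handful of P2s, each with a fix plan
    "min_scenarios_passed_pct": 95,
}

def phase_can_exit(results: dict) -> bool:
    """Return True only if every agreed criterion is met."""
    return (
        results["open_p1_issues"] <= EXIT_CRITERIA["max_open_p1_issues"]
        and results["open_p2_issues"] <= EXIT_CRITERIA["max_open_p2_issues"]
        and results["scenarios_passed_pct"] >= EXIT_CRITERIA["min_scenarios_passed_pct"]
    )
```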

Jason West: It really needs to be the process owners, back to your earlier point, who set the criteria for the processes in their scope – the ones they are accountable for. It can’t be the programme director.

Joe Ales: Absolutely, and then it’s for the programme director or the transformation leader to join the dots across all of it, so that the handoffs between process A and process B, and from process owner to process owner, are done seamlessly. So, each of the process owners absolutely takes accountability for their functional areas, but they shouldn’t lose sight of the fact that many processes, and people’s experience of a life cycle process, will touch different process owners. The whole end to end needs to be managed really, really well.

Jason West: Yeah, because, after all, it’s the process owners who are accountable for the efficiency and effectiveness of these new systems, processes and ways of working that are being implemented. So it makes sense that they’re the ones to say “okay, we must meet this set of acceptance criteria before we’re ready to push the programme onto the next phase, whether that’s the next prototype, a deeper level of testing, or go-live.” Having really clearly defined and documented entry and exit criteria for each of the test phases is massively helpful, because it means that you make well-founded decisions as you move through. But you can set the bar too high; as you alluded to Joe, it might be no priority ones, or only three, or whatever is appropriate, and if the bar is too high you just get stuck chasing down every single minor bug that actually isn’t an issue.

Joe Ales: Yeah, exactly. So if you have a plan or exit criteria that say “I’m going to clear every single bug that’s been identified before I move on to the next phase,” then you’ll be there for a long time. Because some of them, frankly, are not possible to fix, because the technology doesn’t allow it. You’d have to fundamentally re-engineer something you’ve designed to address maybe a P3 issue, and it just doesn’t warrant fixing. So you document it and say “I’ll close that one.” Or you identify issues that shouldn’t block entry into the next phase, because they can still be resolved in that phase.

So, you just have to be pragmatic in the way you react, but there are certain things that you shouldn’t allow to progress until you fix them.

Jason West: Any examples?

Joe Ales: Say you’ve got a fundamental error in one of your processes, or in the configuration of one of your processes, that doesn’t allow the process to complete. In that case, you need to make sure that you can complete that process – that issue is a ‘show stopper.’ But a minor one might be “we’ve got some help text, or a bit of instruction, that’s missing or doesn’t have the right wording.” Again, you can fix that later on. So, it’s about being pragmatic. Unless, say, your integration is failing dramatically.

Jason West: Or you’re missing a vital piece of data architecture, and you don’t really have a plan for how you’re going to come up with it.

Joe Ales: Exactly, yeah. You do not want to close out one phase of the system with questions on your design: “I’ve got a flaw in my process and I need to redesign it.” Don’t move that into the next phase, because you’re just limiting your ability to test that process before you go live. Use the windows of testing that you have, and there won’t be that many; especially in SaaS implementations, you have probably got two or three windows for testing. So if you start pushing issues from one window to the next, and deferring issues, you’re just limiting your ability to test effectively. It is really vital that people don’t brush aside issues to allow the programme to continue. Sometimes you have to make those hard calls and say “right, let’s just stop for a second, fix this, and then allow it to progress.”

Jason West: Yeah, those windows are only about two to three weeks at a time, so not long. It’s not a lot of time to get these things tested and send them out into the world. And these are the things that are going to pay your suppliers, allow you to collect money, pay employees, handle all that personal data. So yeah, don’t take it lightly.

We’ve talked about setting the bar too high, but you can also set it too low, and you risk going live with a solution that either doesn’t work, or isn’t fit for purpose, or has got some horrific hole in it that’s going to cause you a major issue once it hits general release. I think the test manager has got to be working with the process owners as well, because, frankly, a process owner is not a systems implementer or a test manager. They’ve got to give that subject matter expertise and guidance around how the various criteria are going to be assessed, who’s going to sign them off, and what you will do if the criteria aren’t met. And that’s the real value of having a test manager that has been there and done it.

Joe Ales: Yes, you definitely don’t want the blind leading the blind. Having someone experienced helps identify the important things that individuals need to test: focus on these things, and don’t worry about X, Y and Z, because that works fine. And actually, the system implementers have got a role to play as well in guiding organisations through what testing to do. But the system implementers are there to configure the system against the specification; it’s up to organisations to test the logic, like we talked about just a minute ago. So the system implementers have got a role to play in guiding organisations: “okay, these are the key areas that you need to focus your testing resources and activities on.” But the test manager is the one who’s really helping to pull those plans together, to help the process owners define “these are the key things we need to really prioritise here, because these are issues that I’ve experienced in previous projects, that will cause pain, so let’s really spend a bit of time on these sorts of areas.” They provide that bit of guidance to the process owners, and the process owners say “I want this process to run from A to B in this way, with these various approvals and these system activities happening, and the roles and responsibilities of the various people well described, well-articulated, and well defined.”

That’s the role of the process owner, and the test manager is sitting there going “how am I going to create the various test scenarios to meet all of those criteria?”

[Intermission]

Jason West: I think the system implementation partners can help you with one part of testing, which is verifying that it meets specifications. But they’re never going to help you with validating that it’s fit for purpose, for your business.

Joe Ales: Absolutely not.

Jason West: They will have no clue about that, and don’t expect them to help you with it.

Joe Ales: Exactly, and they shouldn’t do it.

Jason West: They’d be stepping well beyond their boundaries.

Joe Ales: Exactly, yeah. Who are they to know what’s right or wrong for your business? Their line is: “You give me a set of logic and business rules that you want me to configure in a system. I have done that.”

Their role is to make sure that they have configured it correctly, which sometimes causes a little bit of frustration, because there are inevitably going to be errors in the business logic that’s been added by the system integrators – a condition or rule or something that they put into the system to send the process through one channel or another.

Jason West: Silly things like giving an approval if it’s over this amount, instead of if it’s under this amount. It’s just basic logic that gets switched around as people are working long hours to get stuff configured.

Joe Ales: Exactly. Sometimes those things happen, they don’t get configured correctly, but it’s human nature. But these are the key things that businesses should be focusing their testing activities on.

Jason West: So, we’ve got our sign-off criteria sorted, but what about all the different types of system testing that go on when you’re implementing a new piece of technology? Because some of this language is pretty opaque.

Joe Ales: Yes, let’s demystify it a little, shall we?

Jason West: Yes, we should just unpack it a bit. We’re not going to go into a massive amount of detail here, but I think it would help just to do a bit of a quick fire of “what are these things?” Because you’ll hear about them, you’ll see them written on plans, and sometimes it can be disconcerting to have to ask “what is this exactly?”

So, let’s just take it in order, a vaguely chronological order. Let’s start with smoke testing. What is smoke testing?

Joe Ales: What a good one to start with. Smoke testing is where you’re basically doing some checks. You’re not doing detailed testing; this is simply asking things like “is your data in the system? Have I configured it correctly?”

And this will typically be done by the system integrators to start off with, before they hand the system to the client. They will go in and check that they have configured all of the functional areas they need to configure. Has the data set been loaded? Have integrations been turned on? They will have a checklist of the things they need to validate have been configured.

Smoke testing also comes into play much further down the line, on the client side. Typically, just prior to a system going into production, some clients may decide to do smoke testing in certain parts of the system prior to turning it on in a live environment, as well as perhaps doing detailed unit testing in other parts of the system.

So, to summarise, smoke testing is done by the system integrator prior to giving a prototype to the organisation, to the client, for them to do their various unit testing etc. But it can also be done by clients at the very end of the programme, prior to switching the system into live mode. They will go in and just validate that the things the customer is expecting to see in the system are in the system, prior to turning it on.
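
In practice, a smoke test is little more than a checklist run quickly against an environment. A minimal sketch, where the env object and its helpers are hypothetical stand-ins for however you query your tenant:

```python
# Quick existence checks, not detailed testing; every name is illustrative.
SMOKE_CHECKS = [
    ("employee records loaded", lambda env: env.count("workers") > 0),
    ("cost centres present", lambda env: env.count("cost_centres") > 0),
    ("payroll integration enabled", lambda env: env.integration_on("payroll")),
]

def run_smoke_tests(env) -> bool:
    all_passed = True
    for name, check in SMOKE_CHECKS:
        passed = check(env)
        all_passed = all_passed and passed
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all_passed
```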

Jason West: Yeah. Next up is unit testing.

Joe Ales: With unit testing, you are testing in detail. This is where the scenarios that we talked about a minute ago come in, for each of the functional areas of your system. You might be testing the purchasing elements of it, or the finance elements, or the printing elements, or the payroll elements. You’re doing detailed testing against the scenarios, not the scripts – especially in SaaS mode; we’re talking about software as a service here, not on-premise. This is where the majority of your effort will go during testing.

Jason West: You can expect a lot of changes in the first round of unit testing, can’t you.

Joe Ales: Yeah, this is the first time your design team will fully understand the design decisions that they’ve made in the various workshops. It’s the first time they are seeing the output of that in the technology, in the system. So they are going to identify defects and issues, and say “wouldn’t it be great if we could do this as well?” So, enhancements etc. It’s not just about fixing defects and issues; actually, it’s about enhancing the experience for the user.

Jason West: And sometimes you have a couple of decisions in front of you, “we could do it this way, or that way. Well we chose that way, and now we see it in the flesh, actually that’s not great. We should have gone the other way.” It’s that evolving, iterative design feedback loop. Unit testing is essential to that iterative approach to design.

Joe Ales: And you should have all of these sorts of conversations, and discussions, and questions in the first iteration of your prototype design. You do not want to be having these types of conversations when you’re quite close to go-live. It’s less than ideal, let’s put it that way. This is your opportunity to really refine and review the design decisions and their effect.

Jason West: It’s the difference between a defect and an enhancement, isn’t it. So in your first prototype, it’s absolutely fine to have maybe more enhancements than you have defects. But if you’re in your second round of testing, or getting towards your production prototype, and you’ve still got enhancements going on, then that is a real red flag.

Joe Ales: Yes, you do not want to have changes at that stage, and system integrators will typically be really uncomfortable with lots of changes going through. Because if you’re making enhancements and changes at such a late stage, it just means that you’ve not taken those changes through the testing cycles that everything else has had.

So, yes, it is a little bit of a worry if you start to see huge amounts of change. And sometimes, it is what it is; it has to be done, because otherwise the user experience will be compromised etc. So, sometimes you have to make these hard decisions, but the test manager, programme lead, transformation lead etc. will need to be very careful about making enhancements later in the programme, and really sign those off as “these are absolutely critical; they must be done prior to us turning the system on” or “actually, you know what, they are not critical prior to go-live. But they are really great ideas. Put them into a road map that we will address three or six months down the line.”

Jason West: Yes, yeah. Okay, so that’s unit testing, let’s move on to security testing.

Joe Ales: Yes; this is a really, really key piece of testing that needs to be performed. This is where you’re testing security within the system: what users can access, what they can see, the various components of the system that users can see. And it obviously becomes more important as legislation on data privacy etc. gets tighter and tighter, and tougher on organisations should they fall foul of it.

Look at user rights; with GDPR now, it’s about making sure that people are only allowed to see what they should be able to see. You’re going to have to have those conversations, during design workshops, about determining who can see what information about people, finance, procurement, whatever it is.

Jason West: Yes, and we covered it in last week’s episode on data, but it’s always worth repeating: before you open up your new system to be tested by a broader audience than just the core project team, make sure you’ve done all your security testing, and that no one’s going to see any data they shouldn’t see.

Joe Ales: Yeah, absolutely. We also talked about how organisations can ask individuals to sign non-disclosure agreements, just in case they come across data that they wouldn’t typically come across. So, for listeners out there: if you’re thinking about inviting individuals from outside your project team into testing, which inevitably you will do as part of UAT, these individuals may come across data they wouldn’t typically see. So put some policies in place around that.
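
As a flavour of what a security test might look like, here’s a minimal sketch, assuming a hypothetical test client that can authenticate as different test users; the endpoints, user names and pytest-style fixtures are illustrative only:

```python
# Security scenarios: can each user see only what they should?
def test_employee_cannot_see_another_persons_salary(client):
    client.login("ordinary_employee")
    response = client.get("/workers/12345/compensation")
    assert response.status_code == 403  # access denied, nothing leaked

def test_hr_partner_sees_only_their_own_population(client):
    client.login("hr_partner_uk")
    response = client.get("/workers?country=DE")
    assert response.json() == []  # no records outside their scope
```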

Jason West: Makes sense. So, moving on: performance testing.

Joe Ales: This is an interesting one, actually. For smaller organisations this is probably not something that they will do, because they don’t have the volume of transactions to put through – especially with cloud technology that’s built to be used by organisations of significant size. If you’re doing something on-premise, you probably do want to do performance testing, because you’re testing the capabilities of your internal networks etc.

Jason West: Yes, “have we bought a fast enough server? Have I got enough storage?” for example.

Joe Ales: Exactly. But a lot of organisations are going down the SaaS route, so if you are a 5-10,000 employee business, you probably don’t need to worry too much about transactional performance.

You do, however, need to worry about reporting performance. If you’ve built a series of reports with lots and lots of complex logic inside, and you expect these reports to be run during peak times of usage, then they might affect the performance of the system. I would suggest that some performance testing is done against reporting, but not at the transaction level.

If, however, you’re an organisation of significant size – 60/70/80,000-plus employees – then you have to do performance testing. You have to put thousands of transactions through at the same time, and run those reports and those integrations all at the same time, to make sure that the system doesn’t collapse. And the system providers will also expect you to do some of those tests, because obviously they don’t want to compromise their other client base, especially if it’s a shared environment with many clients.

Jason West: It tends to happen because, as much as you have stuff in the cloud, you can have latency issues. Especially if you’ve got large populations in Australia or New Zealand; they tend to get a bit of a bum rap when it comes to internet access and access to cloud services. So you have got to check this stuff out, because it can decimate the performance of your system if you haven’t. It’s something we’ve had direct experience of.

Joe Ales: Yeah that’s true. If you’ve got a European based data centre, and your Aussie colleagues are accessing it from down under, they might be struggling with latency. But more recently a lot of networks and data centres are more robust.

Jason West: Yes, it doesn’t happen so much now, does it? I’m showing my age, aren’t I.

Joe Ales: No, I’m sure we’ll get write-ins saying “we’re still experiencing that here in the middle of the Outback.”

But if you’ve got lots and lots of transactions, and lots of employees, absolutely simulate a whole load of transactions being put through. And your reporting and integrations; test them to death.
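
For illustration, a report-performance check can be as simple as firing a batch of concurrent report requests and timing them. A rough sketch, where run_report() is a stand-in for however your platform triggers a report, and the user count is arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_report(report_name: str) -> float:
    """Time a single report execution; the actual call is platform-specific."""
    start = time.perf_counter()
    # ... call the reporting API or tool here ...
    return time.perf_counter() - start

def load_test(report_name: str, concurrent_users: int = 50) -> None:
    # Simulate peak usage: many users requesting the same heavy report at once.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(run_report, [report_name] * concurrent_users))
    print(f"worst: {max(timings):.1f}s, mean: {sum(timings) / len(timings):.1f}s")
```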

Jason West: Absolutely. So, we’ve covered off some different aspects of testing individual areas of the system. What about when you get to end to end testing? What is it? What should we be thinking about?

Joe Ales: This is where you take all of the various components and make sure they work seamlessly end to end. Your order-to-cash, your hire-to-retire, your procure-to-pay: making sure that all of the elements of the system work together seamlessly. It’s about getting all the process owners in a room and saying “here’s my hire-to-retire process: if I raise a requisition to recruit somebody, can I hire them? Can I onboard them? Can I then make some changes to their compensation once they’re here? Can I put them on a leave of absence? Can I get them to raise a purchase order? To claim expenses?”

So you can see the entire life cycle of an individual, that whole end-to-end experience. And, again, as you go through that end-to-end experience, you will have integrations that kick off when certain events happen. For example, “I’ve recruited somebody, I’ve hired somebody, I’ve onboarded somebody, so maybe I’m going to issue a series of integrations to inform other downstream systems that this employee has arrived.” Are those integrations kicking in or not? So it’s about joining up, not just the process; it’s also about testing that those integrations are working, that those reports are still working and being distributed against the schedules that you’ve created, and that this ‘Donald Duck’ hire (i.e. a fake hire) that you’ve put through the system has ended up in the downstream systems. And ultimately, you terminate your ‘Donald Duck,’ and Donald Duck disappears from the downstream systems that you are integrating with. This is a really, really key testing phase, so do not underestimate it. This is not unit testing, this is about joining the dots, and a lot of organisations and individuals I’ve spoken to don’t approach it that way. They still approach it like unit testing, joining up one or two areas of the process, but that isn’t end to end: “we’ve now done a requisition and we’ve now hired.” But you haven’t onboarded, you haven’t done the life cycle changes etc.
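
The shape of such a scenario, sketched in code: one fake worker pushed through the whole life cycle, with downstream checks at the joins. Every helper here is hypothetical; the point is that the test chains the steps together rather than exercising them in isolation:

```python
def test_hire_to_retire_end_to_end(system, downstream):
    # One fake hire travels the whole journey, not one unit at a time.
    req = system.create_requisition(role="Analyst")
    worker = system.hire(req, name="Donald Duck")   # fake test hire
    system.onboard(worker)
    assert downstream.has_record(worker)            # integration fired on hire

    system.change_compensation(worker, new_salary=40_000)
    system.start_leave_of_absence(worker)
    system.raise_purchase_order(worker, amount=250)
    system.claim_expenses(worker, amount=80)

    system.terminate(worker)
    assert not downstream.has_record(worker)        # and removed on leaving
```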

Jason West: And this is where all your ‘off system’ activity comes in.

Joe Ales: Yes, exactly. All your policies come in, and your processes, your handoffs, your standard operating procedures for each of your teams, the documents and guides that each of the teams and individuals have to support those processes. Maybe the payroll team has guides on how they process somebody through payroll. And it’s at this point that you start to document that, and get it to a level of detail that allows you to then train those users, and those teams, in the use of the system.

Jason West: It’s an area where, as you say, people often just focus on the system. And do more unit testing, rather than properly joining it up end to end.

So, before we wrap up on this, we’ve just got two or three more [testing types] to go. And we will get to UAT (User Acceptance Testing), but one that I know is dear to your heart, Joe, is parallel testing.

Joe Ales: Yes. The purpose of parallel testing is to verify that two systems [the legacy system and the new one] can complete complex calculations, or operations, with the exact same results. In payroll, for instance, you’d test that your gross-to-net calculations are exact matches between the two systems.

There are some tolerances; you always have some tolerances of maybe 1p here or 1p there. But largely, if I’m running my payroll in one system, and I’m running the payroll in my new system, then the results should be exactly the same. That’s the purpose of parallel.

You’re not using parallel testing to test the configuration of your payroll engine; that’s not what you’re doing. You’re testing that the calculations of your payroll are exactly the same in both systems. You would have done your configuration testing during your unit testing, of course, and during your end to end.

For your payroll systems, and maybe even the finance systems if you run parallels in finance too, you use your unit testing to make sure that the configuration and the elements are all working correctly, and so on. This [parallel testing] is to make sure that when you pull together all the various components of the payroll – your earnings, deductions, your taxes etc. – and push your payroll through, all of that calculates correctly.

Jason West: Yeah. And where you do identify discrepancies as you’re going through your parallel testing, what happens next? What’s the process on that and how does it work?

Joe Ales: You have to understand and triage: why do we have differences? The differences may well be due to configuration, for instance, or maybe data. And in some cases, organisations actually find that the legacy system in which they were operating their payroll and finance wasn’t correct. You’d be surprised how many times that happens, which gives the organisation a different set of problems: “We’ve now found an issue in a legacy system; what are we going to do? This issue goes back six months/three years/10 years, where individuals’ pay has perhaps been miscalculated or overpaid.”

Jason West: Or worse, their pension contributions have been miscalculated or overpaid.

Joe Ales: Yes, the sponsor’s worst nightmare is hearing that news: “our new system’s going to work beautifully; your legacy system, however – we found this issue.”

Anyway, you are going to come up with differences. Some are differences that you have to fix, because you’ve now identified that there is an issue with the configuration of the system that’s causing the difference between the two systems. So maternity pay isn’t matching between the two systems, for example, and it’s because a calculation rule in the new system hasn’t been configured correctly. So, let’s fix that now, let’s unit test it, let’s rerun the parallel data to see if that’s addressed all of the issues.

And then you’re going to have some other differences, where you say “you know what, there’s a clothing allowance that we forgot to load for 500 employees, and that’s causing a difference of X. That’s a known difference. We can live with that.”

So, it’s about understanding which are the configuration differences, where you have to fix your configuration and run the data through again to make sure the issues are addressed. And then there are others that may be a data load issue. It’s just human error – we’re all human, and sometimes these things happen – and you can say, “for those 500, I understand why we have a difference.”

As long as the exec and the governance of the programme are comfortable with the parameters that you set in your parallel. So you’re going to identify your levels of tolerance in your payroll. Some organisations will say “anything over 10p on net pay, or on gross pay, will need to be justified.” Other organisations are perhaps a bit more relaxed. So again, it depends on the organisation’s appetite for risk. That’s what it means.
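
A minimal sketch of that comparison step, using the 10p tolerance above as the example threshold; how you extract net pay per employee from each engine is left hypothetical:

```python
TOLERANCE = 0.10  # pounds; mirror whatever tolerance your exec signs off

def compare_parallel_runs(legacy: dict, new: dict) -> list:
    """legacy/new map employee_id -> net pay; returns differences to triage."""
    exceptions = []
    for emp_id, legacy_net in legacy.items():
        new_net = new.get(emp_id)
        if new_net is None or abs(new_net - legacy_net) > TOLERANCE:
            exceptions.append((emp_id, legacy_net, new_net))
    return exceptions  # every entry must be explained: config fix or known difference
```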

Jason West: In terms of the number of parallel runs, how many parallel test cycles should you be running? What’s the minimum?

Joe Ales: A minimum of two. And if you’ve got serious issues with your second one, you have to run a third one.

Jason West: Yeah that makes sense.

And I think it’s worth pointing out, for anybody listening, that this whole area of payroll implementation and payroll parallel testing is highly specialised, complex, detailed work. So really do get external assurance from people that have got real hands-on operational experience of your new payroll system, if you possibly can. It can save you a huge amount of time, as well as really reducing the risk of going live with a payroll system that hasn’t been fully tested.

So, we’ve covered parallel. The next phase on from parallel – or does it run in parallel with parallel testing? – is UAT.

Joe Ales: Gosh, that’s a lot of parallel [laughs].

But, yes, User Acceptance Testing. This is where you’re bringing individuals from across the organisation to test your processes, your system, just making sure it’s fit for purpose from their perspective. It comes towards the end of your system implementation.

Jason West: Yes, it’s really about validating that this is fit for purpose; that it’s going to work in all these different areas of the business; and that the user experience makes sense and won’t cause operational issues as this thing goes live. If they [the users] are finding defects and design enhancements during your UAT, you’ve got real issues.

Joe Ales: This shouldn’t be the first time that the end user experiences the system. You should bring some end users in during your end to end, or maybe even unit testing, with the caveat that you’ve tested security to death. You don’t want to bring individuals into a system that hasn’t been security tested; that’s dangerous. Or maybe do it through presentations and showcase what you’ve designed, so they’re not seeing the system for the first time. Do a demo: “this is what we’ve designed, what do you think? Have we missed anything?” Get their input during unit testing so they can say “you’ve not thought about X, Y and Z.” Then during end to end testing, again, you do what I just described. And then you invite your end users to have a play. But, as I said, it’s not (or shouldn’t be) the first time that they will have seen the system. You want them to have some familiarity, to avoid exactly what you just said Jason, where they say “I want to raise all of these defects and issues” when, actually, they’re not real defects or real issues, they’re just user misunderstandings. If you have them involved earlier on, it will make your UAT process much more effective, because you ultimately want constructive feedback, and confidence from the end users that what you’re doing will land in your organisation, will be adopted, and will be used by the individuals that need to use it.

Jason West: Yeah, absolutely. So, I think what we’re going to do now is draw to a close for this week’s session. Testing is a vast topic and we’ve got more to cover. I think we’ve talked about all those major phases within testing, the importance of getting the right test manager on board, and the key documentation. But we do have a bit more to cover, and I’m very mindful that there’s only so much people can handle in any one sitting.

We will come back to this next week, and go through a bit more detail around some of those really key areas. Because if you get this right, you will go live with something that is really fit for purpose, that is not going to cause you operational issues, and that ultimately is going to deliver the business case that you committed to when implementing this new technology and this transformational change. So please do listen in next week for the second part of our testing deep dive. We look forward to talking to you then.

[OUTRO: Thanks for listening. We really appreciate your support. This episode focused on one of 10 critical success factors in the build phase of transformation. If you’d like to be at the front of the queue for next week’s episode please hit the subscribe button, and don’t forget to like the show if you found it useful.]