[Image: Jason West and Joe Ales at a river bank]

Joe Ales & Jason West

Season 2: Episode 9 – Testing, Testing, Testing (part 2)

Jason West: Welcome to the Underscore Transformation Podcast. My name’s Jason West.

Joe Ales: And my name’s Joe Ales.

Jason West: And together we’re the founders of Underscore. In Season two, we’re focusing on implementation, and the challenges that surround making changes to policies, processes, systems, and team structures. If you would like to know more about scoping a transformation programme, please take a listen to Season one.

This week is the second part of our deep dive into testing. In part one we discussed how testing covers a lot more than just systems, and the value that an experienced test manager brings. We took a look at test strategy and its contents; we also covered how to construct a really effective test plan, and how important it is to keep that updated throughout the implementation. And then we got into the various different types of testing: smoke, unit, performance, security, UAT and the like. So we covered a lot of ground; it was a big topic and we're not quite finished with testing just yet.

There really is a lot going on in testing, and next we need to figure out how we're going to coordinate all this activity. Any new ERP or HCM system throws up hundreds, possibly thousands, of design enhancements and defects that need fixing. So how do you keep track of all this activity, when you've got process owners demanding changes, test teams identifying defects, and functional consultants running around making changes all over this new system that you're implementing? How do you deal with that, Joe?

Joe Ales: Well Jason, good question. The first thing you need is a robust defect management system, and really a defect management process. This is the process whereby any defects or enhancements that you inevitably find during testing get recorded somewhere, reviewed for severity, categorised, and assigned: to functional consultants, or a pool of functional consultants, to solve technical issues, or to the process owner for resolution in terms of policy and process, and so on. And it's something you'll do continually throughout your project: as you identify issues and apply fixes, you then have to regression test those areas.
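
[Illustration: purely as a sketch of the kind of record a defect management process keeps, and not something the hosts prescribe, the Python below models a single log entry and a log that items are recorded in and assigned from. The field names, severity levels and statuses are assumptions; in practice this would live in a tracking tool rather than hand-rolled code.]

```python
# A minimal sketch of a defect/enhancement log, with assumed field names,
# severity levels and statuses; real programmes would keep this in a tool
# such as JIRA or ServiceNow.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ItemType(Enum):
    DEFECT = "defect"
    ENHANCEMENT = "enhancement"


class Severity(Enum):
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4


@dataclass
class TestItem:
    item_id: str
    title: str
    item_type: ItemType
    severity: Severity
    functional_area: str                 # e.g. "recruitment", "purchasing"
    assigned_to: Optional[str] = None    # functional consultant or process owner
    status: str = "open"                 # open -> in progress -> fixed -> regression tested -> closed


class DefectLog:
    """One place to record, review and assign everything found during testing."""

    def __init__(self) -> None:
        self.items: list[TestItem] = []

    def record(self, item: TestItem) -> None:
        self.items.append(item)

    def assign(self, item_id: str, owner: str) -> None:
        for item in self.items:
            if item.item_id == item_id:
                item.assigned_to = owner
                item.status = "in progress"
```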

Jason West: We covered a whole lot of definitions around different types of testing in the last episode. But one we didn't cover was regression testing.

Joe Ales: Perhaps we should talk a little bit about what that means, because that is a term that is going to become very familiar, even when the system’s in production, as it is something that organisations are going to have to get used to doing.

So, put simply, regression testing is where you're testing areas of the product that have been tested before. Because you're now applying a change or an enhancement to an area of the system, you have to go back and re-test some of those functional areas.

Jason West: It's what you would do on a live system, isn't it?

Joe Ales: Absolutely. And software as a service technology will tend to update itself every three to six months, and the technology suppliers or vendors will naturally, and rightly so, encourage organisations to have regression testing processes in place. Because they are applying enhancements and adding new features to the existing technology, and to existing functional areas of their system. So organisations have to test how that new feature fits into their system and, ultimately, make sure the new feature isn't going to break anything that's been designed and developed previously.

Jason West: Yeah, so some of the biggest areas to test when you're doing your regression testing are often reporting and integrations, because that tends to be where things most often break, and it can be really quite serious if it does.
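
[Illustration: one way to picture a regression pack is as a small, tagged set of automated checks that gets re-run after every fix or vendor release. The sketch below uses pytest markers; the two helper functions are stand-ins for calls to the real reporting and integration layers, and everything here is assumed for the example.]

```python
# Sketch of a regression pack: a tagged set of checks re-run after every fix
# or vendor release. The two helper functions are stand-ins for calls to the
# real reporting and integration layers.
import pytest


def run_report(report_name: str) -> list[dict]:
    """Placeholder for a call to the reporting layer of the live system."""
    return [{"department": "Finance", "headcount": 42}]


def route_expense(amount: float) -> str:
    """Placeholder for the expense approval routing rule in the live system."""
    return "line_manager" if amount < 5000 else "finance_director"


@pytest.mark.regression
def test_headcount_report_still_returns_rows():
    assert len(run_report("headcount_by_department")) > 0


@pytest.mark.regression
def test_small_expense_still_routes_to_line_manager():
    assert route_expense(150.00) == "line_manager"
```

Running `pytest -m regression` would then re-execute just this pack after each change; the custom `regression` marker would normally be registered in the project's pytest configuration.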

Joe Ales: Yes, it could be security as well. You might be making changes to a particular business process, for instance, that requires certain roles to approve or see things, and all of a sudden, guess what: they can't see it because the security is not there. So you have to make changes to your security policy to allow those individuals to see those bits and those features, and by enabling that security on a user you may inadvertently be opening up a whole load of visibility of the system to that user that you hadn't really intended to.
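
[Illustration: a hedged sketch of the kind of security regression check Joe describes, with made-up roles and data domains. One test confirms the intended access still works after a change; the other guards against visibility being opened up inadvertently.]

```python
# Illustrative security regression check with made-up roles and data domains.
VISIBILITY = {
    "hr_partner": {"compensation", "absence"},
    "line_manager": {"absence"},
}


def can_see(role: str, data_domain: str) -> bool:
    """Placeholder for the security policy evaluation in the real system."""
    return data_domain in VISIBILITY.get(role, set())


def test_line_manager_can_still_see_absence():
    # Intended access still works after the business process change.
    assert can_see("line_manager", "absence")


def test_line_manager_has_not_gained_compensation_visibility():
    # Guards against a security change opening up more than intended.
    assert not can_see("line_manager", "compensation")
```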

There's an awful lot to think about when you add an enhancement or a new feature onto something that's already been tested thoroughly. So paying attention to regression testing is very important, and that's why adding enhancements and additional features close to go live is quite dangerous, because you've not had time to really thoroughly test, and regression test, everything you need to before you turn the system on. So, a word of warning to anybody out there: if you've got enhancements or good ideas prior to go live, ask yourself how close you are to go live, because you've got to drive regression testing right through the system to make sure you haven't inadvertently broken something before go live. So that's really important.

Jaon West: Yeah. So, back on the defect management side, we’ve got this process where we are identifying lots of things that need to change. Some of them defects, some are enhancements, and there’s this process of reviewing those, categorising them, putting a severity on them, and then feeding them to the right people – it’s the triage process, or triage within testing.

Who should own that? Where does that sit in the programme structure?

Joe Ales: Typically, it should be the test manager that owns the triage process, and it's owned by the customer first and foremost, not the technology vendor. Bugs and issues, and the importance of those bugs and issues, are categorised by the customer, because ultimately it's up to them to say "this particular issue has this much impact on my system and my user experience, and this is how much I care about it." So that's really important.

And it's all coordinated and controlled by the test manager, and the accountability for fixing those issues should sit with the process owners for each of their respective functional areas. They are the ones prioritising how important it is to get a given issue fixed. And it could be a technology issue, but this is much broader than technology. I know we're focusing a lot on technology, because this is the language typically used within systems implementations, but we must not forget that any form of transformation is not just about system implementation. It's people, it's policy, it's process, it's technology, it's culture. It's a whole bunch of different things that the process owner should be held to account for.

Jason West: Yeah, and in that whole triaging piece, it's really important that the test manager, or whoever owns this, has depth of understanding of the technology; this is where it really comes to the fore. Because they're able to make distinctions: "no, that is actually a defect that needs to be investigated; but that one's actually an enhancement, you've changed what you designed, so there's a difference between those. And actually, was that just a training issue? Or hold on, we've got a problem here with policy or process that we need to get back to the right people." Being able to understand all that, and push it to the right people, is what a really great test manager brings to your programme.
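
[Illustration: the triage decision Jason describes can be pictured as a simple routing table, deciding who picks an item up and how urgently, once the customer has set its category and severity. The categories, owners and target timescales below are assumptions, not a template from the episode.]

```python
# Illustrative triage routing: map each categorised item to an owner and a
# target timescale. Categories, owners and timescales are assumptions.
ROUTING = {
    "defect": "functional consultant",
    "enhancement": "process owner (design change request)",
    "training": "change and training team",
    "policy_or_process": "process owner",
    "not_possible_in_product": "close with workaround",
}

TARGET = {
    1: "before the next test cycle",
    2: "before end-to-end testing",
    3: "before go-live",
    4: "post go-live backlog",
}


def triage(category: str, severity: int) -> dict:
    """Return who picks the item up and how urgently (1 = most severe)."""
    return {
        "owner": ROUTING.get(category, "test manager to review"),
        "target_fix": TARGET.get(severity, "to be scheduled"),
    }


# Example: a reported "bug" that turns out to be a training issue.
print(triage("training", 3))  # {'owner': 'change and training team', 'target_fix': 'before go-live'}
```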

Joe Ales: Absolutely. If you have a test manager without experience in the technology, their role becomes more of a test coordinator, to be honest: someone who tracks issues, pushes them along, and signposts where they're meant to go, but doesn't really understand the impact of the issue. And they aren't really able to articulate, across a number of stakeholders, the importance of getting that fix. So it then puts an awful lot of pressure on process owners to articulate back to the technology vendor, for instance, the technical nuances within the system and how to get the system to do what it needs to do.

And, again, one of the other things a good test manager with solid experience in the technology will do is not waste the time of technical consultants on things that actually are not possible within the technology. They will quickly be able to filter those points out in triage and say "you know what, what you're asking isn't technically possible within this product, so let's not waste the precious time of the technical consultants; instead, let them focus on what they need to focus on, which is fixing real issues, rather than investigating, or answering, almost moot points." So having a test manager with plenty of experience will be beneficial to the customer as well as to the system implementers, system integrators and so on.

Jason West: Yeah, and they sometimes need to have quite robust conversations with people. You'll have somebody, and I've been this person, who goes "but it's rubbish, why is it doing that? There's a bug that needs fixing." And the test manager can say "nope, that's just a feature of the system that you bought" [laughs], "find a way around it."

Joe Ales: "I can't fix it, just create a workaround." Yeah. You're asking for a unicorn, and unicorns don't exist, as far as I'm aware.

Jason West: Who knows in these times. [laughs].

But there’s a lot that’s going on here, so how do you keep track of all of that?

Joe Ales: I would really recommend not keeping it or tracking it on post-it notes. That's probably a good start. Or Excel spreadsheets. Most organisations will have some sort of case management system that they use, and really you need to start with that. Start by asking your IT function; they will have a product they use to track defects and issues, an issue resolution system. They'll have a JIRA or a ServiceNow.

Jason West: Or an extension of something more specific, whether it's Bugzilla or that sort of thing. There's lots of stuff out there.

Joe Ales: Yes, there are lots and lots of systems, and actually, in many cases, the system implementers, those sitting there configuring your system, will probably have one of their own. And this becomes the one version of the truth as far as which issues you're dealing with, how, and which ones you prioritise. You keep a log, and you can use it almost as your change control document at the very end of the implementation, so you can track: "these were my design requirements, I've made some design changes, I've updated my design specifications along the way, and there have been some enhancements or issues that we've fixed along the way." You can extract all of that, and it becomes your change control documentation as well. Meanwhile, you're still updating process flows and so on with the changes you're making along the way. You're potentially making changes to the target operating model, changing roles and responsibilities, and all of that will have to be maintained. But documenting all of these issues, defects, enhancements and so on in a system where you are able to easily export and extract that data, and keeping it stored somewhere safe as your change control document, is good practice.
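
[Illustration: a minimal sketch of that "one version of the truth" idea. Whatever tool you track in, being able to export the log, with decisions and rationale, into a flat change-control record is the point. The records and field names below are invented for the example; in practice the export would come from JIRA, ServiceNow or whatever tool you use.]

```python
# Minimal sketch of exporting the defect/enhancement log, with decisions and
# rationale, into a flat change-control record (CSV). Records are invented.
import csv

log = [
    {"id": "DEF-101", "type": "defect", "area": "payroll integration",
     "decision": "fixed: legal entity mapping corrected",
     "rationale": "entity codes changed in design specification v1.2"},
    {"id": "ENH-042", "type": "enhancement", "area": "approvals",
     "decision": "deferred to post go-live",
     "rationale": "low impact; avoids late regression risk before go-live"},
]

with open("change_control_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=log[0].keys())
    writer.writeheader()
    writer.writerows(log)
```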

Jason West: Yeah. And keeping a record of the business rationale behind why these changes were made. Because it can be so easy to go round and round in circles on some of these, and you can waste time. And you might make some really great decisions at the time, that make absolute sense, and then post go live you look back and go "why did we do that?"

Joe Ales: Just keep your decision logs; this is gold dust, because you will be live for three or six months and somebody will ask the question: "okay, so tell me why you've designed it like that." And especially, as you said, if the process owner is no longer around. If the thinking as to why they designed a policy, process, or the TOM in a certain way exists only in that process owner's head, you need to have it documented, and then you'll be able to refer to it. Absolutely you can make changes, and you will make changes; things evolve when you're in production. And we absolutely encourage every single organisation to continually optimise what you've got, but you need to have a solid baseline: "okay, this is what we've got, this is why we made a decision based on the information we had, this is the set of decisions we made, and this is why we've designed what we've designed."

Jason West: But the accountability for tracking that sits with the client, not the system implementation partner.

Joe Ales: Yeah, yeah. I don't know if, in years gone by with on-premise implementations, a lot of that responsibility might have sat with the technology provider, perhaps. I'm sure our listeners will tell us whether that's the case or not. But now, in the software as a service deployments we've experienced over the years, it's always, always been the client's responsibility to make sure that this happens.

Jason West: So whilst it's fine to use your system implementation partner's defect tracking tool during the project, if you don't have a suitable one available, you really do need to sort out one of your own by the time you go live. Because otherwise, what are you going to do with this information? You'll output it into an Excel spreadsheet, and that dies a death, and that ongoing governance, those control processes and systems, are going to be essential to maintaining and continuing to develop this new product, and all the processes and policies attached to it, for many years.

Joe Ales: Yeah, absolutely. And IT organisations and functions have had these for years. We talked previously about the right structure for these programmes, and how some software as a service technology providers try to keep the IT function almost at arm's length and sell directly to the functional heads: the CFO, the CPO and so on. But we always talk about making sure the IT function is integrated into the project and has a role in it. The IT function should be providing the infrastructure, tools and processes they've used for years around change control, configuration control and so on. They've got an awful lot of those processes already there, so leverage them during your project. The triage process the IT function has may or may not be appropriate for your project, so speak to IT; they've got tons and tons of experience in this space. And when you're setting up your governance structure, talk the language the organisation already understands in terms of triage, defect management and prioritisation: how do you prioritise an issue, how do you categorise an issue? If you're using language that the IT function understands, and ultimately the business understands, it just puts you in a little bit of a better position.

Jason West: Yes. If your IT function has a 1-4 priority ranking, or 1-3, or ABC, just use that.

Joe Ales: Yes, don’t reinvent the wheel. Don’t come up with something new and upset your CIO.

Jason West: On that people element of it, and I'm sure we've covered it in a previous podcast, but just a word of advice for any sponsor: keep checking in with your CIO colleague on the board. Just make sure that they are hearing good things from their team, that your programme manager is talking to the right people in the IT function, and that they are working together as a properly integrated team. If the CIO's nervous, if they're seeing that their team are being kept away, or kept in the dark, or that this finance or HR project is just off doing its own thing and hasn't been using any of the pre-existing processes and systems, that's a worrying sign. And you need to make sure that your programme manager isn't building a wall around their programme and repelling all boarders. We've seen that happen and it doesn't end well.

Joe Ales: No, because ultimately, when the product goes into operation the IT function will play a bigger role than they would have done during implementation, because all of a sudden you've got users trying to access systems, and print documents from them, and you do not want the CIO to say "well thanks very much, I don't know anything about this, but now I'm having a problem that I need to deal with." You don't want to be a CPO or CFO in that space.

The IT function is absolutely integral to the success of any programme, so they have to be integrated.

[Intermission].

Jason West: Actually, that's probably a handy hint. If you're in a position where you're hiring a programme manager to come in and run this programme for you, alongside your transformation lead, absolutely take references on them from whoever they give you, whether that's a CFO, CPO or whoever they were directly working for, the head of that function. But also speak to the CIO of that business. Just check it out and ask "how did the relationship work?" It can cause an awful lot of heartache if you don't do that cross-check, at least with the CIO, if you're implementing new technology, and it makes absolute sense to do it.

So, on the people aspects of testing, you require a lot of input from multi-functional test teams in end-to-end testing and user acceptance testing. These are people drawn from across the business: different business units, different geographies, countries, whatever it may be. You need to test this thing in multiple different ways, because it's going to get used in very different ways in different places. So deciding who's going to test in each phase is something that really does require quite a lot of careful consideration. We touched on some of that in the last episode, but just to build on it: everyone that's involved in testing should be absolutely clear about what their roles and responsibilities are during these various test phases.

And you’ve got to make sure that they are trained before testing begins, otherwise it’s just like wading through treacle, because they don’t really understand what’s in front of them.

Joe Ales: And actually, it becomes really ineffective. There is a danger around the first time the organisation gets its hands on the system to test it: you've had the design workshops, the system integrators have gone away and done some beautiful things with the system, configured it as per the specifications from the design workshops, and then the customer is given access to the system to test that first iteration of the design, and there is a very short window in which to test. These things are typically implemented in seven, eight, nine months, and you don't have a lot of time to do lots and lots of testing. So maybe you're looking at a three or four week unit testing window the first time you get hands on it, and we talked about the definitions of the different testing phases last week. If your team isn't ready, isn't technically able to jump straight in and start identifying issues from the get-go, you will end up spending two or three weeks just getting people familiar with the system, and then, ultimately, the unit testing window will have gone by. The individuals start to become more and more familiar with the system towards what we called end-to-end testing, and it's at that point they start identifying an awful lot of issues, defects and enhancements, and the system integrators hate it, because at this stage the system should be ready to go live. Why are you making so many changes?

And the reason you're making so many changes is that unit testing wasn't effective, because people didn't know what they were doing, people weren't trained, people didn't know what they were testing. We talked last week about the difference between scripts and scenarios; if you're having to give somebody a script, it means they don't understand the technology and they don't know what they're doing. So you give them a scenario: you want them to test the purchasing process, say: add a new supplier to the system, raise the first purchase requisition, send it through approval, raise the purchase order. If they don't know how to conduct those simple business processes and simple tests, it means you're going to have to spend an awful lot of time training them during that unit testing window, and the time will have gone by. So, actually, forget about it: make sure people are trained prior to them getting their hands on it.
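
[Illustration: to make the script-versus-scenario point concrete, the purchasing scenario Joe describes could be expressed as a single end-to-end check like the sketch below. The three helper functions are stand-ins for whatever the real system exposes; the assertions are the part the tester owns.]

```python
# Sketch of the purchasing scenario as a single end-to-end check. The three
# helper functions are stand-ins for the real system.
def add_supplier(name: str) -> str:
    """Placeholder: register a new supplier and return its id."""
    return f"SUP-{abs(hash(name)) % 1000:03d}"


def raise_requisition(supplier_id: str, amount: float) -> dict:
    """Placeholder: raise a purchase requisition awaiting approval."""
    return {"supplier": supplier_id, "amount": amount, "status": "pending approval"}


def approve_and_raise_po(requisition: dict) -> dict:
    """Placeholder: approve the requisition and raise the purchase order."""
    requisition["status"] = "po raised"
    return requisition


def test_new_supplier_through_to_purchase_order():
    supplier = add_supplier("Acme Stationery Ltd")
    requisition = raise_requisition(supplier, amount=1200.00)
    purchase_order = approve_and_raise_po(requisition)
    assert purchase_order["status"] == "po raised"
```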

Jason West: And that's training them on the system, task, process, role. But I think mindset's really important too, isn't it; a really good test team has a particular mindset, don't they?

Joe Ales: Yes, they've got to be interested in breaking it, and take pride in it. Saying "you've given me a shiny new process; let me see if I can tear this apart." They're not being disruptive or destructive; they need to go in with the mentality that "this needs to land well across the organisation, so it needs to be foolproof, and it needs to work really well." So they need to understand it, and one of the other things, actually, is that they need to be involved in those design workshops. They need to understand the rationale for making those design decisions, and the rationale for creating that business logic in the process, policy and so on. Then, when I get my hands on it, I'll come up with testing scenarios and execute tests that actually go up against some of those design decisions we've made. Say you've created logic that says anything over £1,000,000 will go for approval by X. I'm now going to make sure I test that, and I also want to know: if I send it through at £900,000, will it still go to that approver or not? It's that type of mindset that you need to have.
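
[Illustration: a small sketch of that test-to-break mindset, checking either side of the approval threshold. The £1,000,000 rule comes from Joe's example; the routing function and role names are assumed, and the behaviour at exactly the threshold is something the tester should confirm against the design decision.]

```python
# Boundary checks around the £1,000,000 approval threshold from Joe's example.
# route_approval is a stand-in for the routing rule configured in the system.
def route_approval(amount_gbp: int) -> str:
    """Placeholder for the configured approval routing rule."""
    return "senior approver" if amount_gbp > 1_000_000 else "standard approver"


def test_just_over_threshold_goes_to_senior_approver():
    assert route_approval(1_000_001) == "senior approver"


def test_900k_stays_with_standard_approver():
    assert route_approval(900_000) == "standard approver"


def test_exactly_on_threshold_matches_the_design_decision():
    # Assumed here: "over £1,000,000" means strictly greater than; the real
    # expectation should come from the documented design decision.
    assert route_approval(1_000_000) == "standard approver"
```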

Jason West: Yeah. Now, I think there's probably something we should cover off before we close out this piece on testing. Hopefully we've provided a relatively in-depth exploration now; it was meant to be a fairly high-level introduction, but I guess we just like talking about testing.

Joe Ales: It is so important. We’ve seen so many projects fail, haven’t we Jason, because of poor testing.

Jason West: Absolutely. But we're in this strange new world where we can't get teams together right now, to put them in a room and spend a week or two testing something, with your process owners there, your functional consultants, your test manager there answering questions and dealing with stuff in the room. For people with live system projects that are still continuing, and who are looking at test sessions coming up, what are the main considerations they need to think about? And what advice can we give them at this early stage, as we're all getting to grips with this?

Joe Ales: Actually, on the programmes I'm leading right now we've managed to mobilise virtual testing, which is interesting. This situation we're in now, and depending on when you're listening to this podcast, obviously COVID-19 has impacted an awful lot of organisations and people.

Jason West: Exactly, we're recording this on 31st of March 2020 and we're based in the UK, so right now we're on lockdown.

Joe Ales: Yep, we're on lockdown. We're doing this podcast remotely as well. So yeah, we've launched some testing activities virtually, and the way we've organised it, actually, is that we brought the process owners onto a virtual meeting this morning, using Microsoft Teams. We went through the scenarios we expect them to perform, the tests we need the team to do; we've set this up, they've gone off on their own, and we're having regular daily check-points to make sure we're on track against the plan.

Ideally, you'd like to have people in the room, bouncing ideas off each other, so we keep the channel of communication open. We're keeping the chat going, so people are bouncing queries off each other, and if there are particular issues we'll quickly jump on a call to address or triage whatever the individuals have come across, to unblock it and allow them to move on.

So, this is perhaps the new reality of deploying programmes now, where we are having to rely much more on virtual working.

Jason West: And frankly, you know, we've done this enough times in the past on global projects that it's quite normal. You just have virtual teams, because not everyone can be in one place at the same time.

Joe Ales: Exactly, and training becomes much more important in this virtual way of working, because you don't have the support of someone sat by your side, looking over your shoulder, to provide a bit of steering or guidance on what you're doing. If you're doing virtual testing it requires, a) more discipline from individuals, making sure that things get executed in a timely manner, because everything is time-bound with these programmes; you don't have an infinite amount of time, like we talked about before, you've got three to four weeks at best. And b) individuals need to be pretty effective, so training is key, having the right mindset is key, as well as having good, strong coordination from a test manager, frankly.

Jason West: Yeah, and having virtual facilitation of the group, keeping that going throughout the days of testing, and keeping everybody connected. And you have to over-communicate in these situations, to make sure that you don't end up with people going off in weird directions.

But it's all doable. You might need to allow a little bit more time, perhaps, in your plan to take into account that this is now virtual, and that it might need a little bit more space to get this stuff done. But training is really important, as you said, and mindset too, and I think that's probably a good place to draw to a close, because we've talked about testing an awful lot.

Testing is really important. We covered off previously the importance of getting the right test manager in, strategy, test plans, all the different types of testing. Today we’ve covered off the team, who to get involved, how to run things virtually. We’ve covered the defect management process.

We talked a lot about testing. So, let’s look forward now, to next week, where we will be talking about governance and control during this implementation phase, during the build phase of your transformation.

And what we had planned was some round table discussions on the broad topics of these ten key success factors from our build checklist. That's quite difficult to do now, given that we can't really sensibly get a lot of people in a room together. So we are going to focus more, in some future episodes, on how to deliver a transformation programme in a virtual world, where everybody is working 100% remotely, with some practical guidance and advice on how to make that work.

So, that will be coming up and we’ll be inviting our head of leadership development, Lucy Finney, onto the podcast. We’ll be doing that in the next couple of weeks, so listen out for that, and we will catch you next week, when we will talk about governance and control.

[OUTRO: Thanks for listening, we really appreciate your support. This episode focused on one of 10 critical success factors in the build phase of transformation. If you’d like to be at the front of the queue for next week’s episode, please hit the subscribe button, and don’t forget to like the show if you found it useful. If you have any questions please contact us, Joe Ales or Jason West on LinkedIn, or via our website underscore-group.com.]