If It Wasn't For The Last Moment

Published on 05 May 17
There is a fridge magnet that says, “If it wasn’t for the last moment, nothing would get done!” Nothing could be truer in a lot of the situations we find ourselves in – except that it shouldn’t be true of critical business releases!
There have been a fair number of instances when, very close to release day, our Delivery Assurance teams have been invited in to help assess the risks of taking a business-critical application live. It is usually a key stakeholder who is nervous and wants to be assured that the release won’t fail and the business won’t suffer. And more often than not, it has been our job to deliver the bad news.
One part of our work has been to focus on the business needs, the business context, and the business criticality of the applications. Working backward from there, we have looked at the business coverage achieved through tests, the levels of testing that have been done, and how the risks have been mitigated across those levels, given the context of the release.

In a lot of situations, we have had to look further, at aspects less considered by the test teams – the areas where fewer people ask the hard questions:

  • How will the release behave once it hits the production environment?
  • Are the production support and the business support teams ready to handle the functionality within the release?
  • Are there fallback mechanisms in place, should there be issues with the release?
  • What are the support processes in place during the critical care phase (immediately post go-live)?
More often than not, we find teams are underprepared in their assessment of risks across these parameters. They tend to focus on what they have found, not on what the implications of those findings are. Two examples from our assessments highlight what I mean.
In one instance, the program was a migration of application functionality from an older system to a newer technology. Given that the migration was done from the code up (and not by re-engineering the business processes; yes, it does happen!), the test strategy, to put it very simplistically, was centered around taking test data for certain business days from the base system, replaying it on the target system, and comparing the end-of-day outputs. Which was fine as a strategy, until we uncovered that everyone on the team (customer team included) assumed all of the business scenarios were covered by this approach, yet no one could pinpoint what percentage of business functionality had actually been covered. Strangely enough, no one had analyzed the test data sufficiently to figure out how much of it represented unique scenarios. The questions left on the table were: what wasn’t covered, what business impact would that have, and did we need to do something to cover it? Sadly, when the assessment was done, the gaps were serious enough to delay the actual go-live date.
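The missing analysis is not hard in principle. Here is a minimal sketch of the kind of check that would have answered the coverage question, assuming the replayed transactions sit in a CSV file and that a handful of fields together identify a distinct business scenario; the field names and file layout are hypothetical, not from the actual program.

```python
# A hypothetical sketch of the missing coverage analysis: how many unique
# business scenarios do the replayed transactions actually represent?
# Assumptions (not from the actual program): transactions are in a CSV file,
# and the combination of a few fields identifies a distinct scenario.
import csv
from collections import Counter

SCENARIO_FIELDS = ("product", "channel", "txn_type")  # hypothetical key fields

def scenario_coverage(csv_path, required_scenarios):
    """Count unique scenarios in the replayed data and compare them against
    the catalogue of scenarios the business says must be covered."""
    seen = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            seen[tuple(row[field] for field in SCENARIO_FIELDS)] += 1

    required = set(required_scenarios)
    covered = required & set(seen)
    missing = required - set(seen)
    pct = 100.0 * len(covered) / len(required) if required else 0.0
    return pct, missing, seen

# Usage: pct answers the "x% covered" question no one could answer; missing
# drives the business-impact conversation; seen shows how much of the replay
# volume is mere repetition of the same scenarios.
# pct, missing, seen = scenario_coverage("eod_trades.csv", business_catalogue)
```

Even millions of replayed transactions can collapse into a surprisingly small set of unique scenarios, which is precisely what no one had measured.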
Another situation we encountered was one where the go-live date was absolutely critical to the business. The test teams were continuing to uncover defects even as the release date approached (albeit in reduced numbers), and everyone was working hard to plug all the gaps. Our assessment boiled down to answering one question: we are going live; can we survive it? Amongst other things, we looked at the nature of the defects to get a sense of what was broken, and discovered critical business functionality that had to be focused on. We also looked at the defect arrival rates and extrapolated that, if they continued in live, the team would need to extend its support capacity to cope with the volume of fixes required.
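That extrapolation is simple arithmetic. As a minimal sketch, assume we have weekly defect arrival counts from test and fit a simple exponential decay to project arrivals in the first weeks of live running; the numbers below are illustrative, not the actual program’s data.

```python
# A hypothetical sketch of the arrival-rate extrapolation described above.
# Assumptions (illustrative): weekly defect counts from test, a simple
# exponential-decay fit, and a projection over the first weeks in live
# to size the fix/support capacity needed.
import math

weekly_defects = [120, 95, 80, 61, 50, 41]  # hypothetical arrivals per week

def fit_exponential_decay(counts):
    """Least-squares fit of log(count) = a + b*week, i.e. count ~ exp(a + b*week)."""
    n = len(counts)
    xs = range(n)
    ys = [math.log(c) for c in counts]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def project(a, b, start_week, weeks):
    """Projected defect arrivals for `weeks` weeks starting at `start_week`."""
    return [math.exp(a + b * w) for w in range(start_week, start_week + weeks)]

a, b = fit_exponential_decay(weekly_defects)
live = project(a, b, start_week=len(weekly_defects), weeks=4)
print("Projected arrivals in the first 4 live weeks:", [round(x) for x in live])
print("Total fixes to staff for:", round(sum(live)))
```

A crude fit like this is enough for capacity planning; the point is that the projection gets done at all, not that it is statistically sophisticated.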
The conclusions in both situations seem obvious in hindsight, but reaching them comes from taking two steps back from the release, gazing at the program as a whole, and asking the hard questions that the people within it don’t want to ask, or are afraid of what they will find should they ask.
In all of the instances where we have been called in at the last moment, our discovery has been that most of them could have been avoided through early strategizing and checks at defined points in the lifecycle. Most of the time, checkpoint reviews are just ticks in boxes, carried out to move from one stage to another. Most fail to ask, upfront, the questions that get asked at the very end (because by then the impact is so obviously staring everyone in the face). Delivery assurance is not about asking the questions closer to release time; it is about continually asking whether the release risks are being mitigated. So why wait until the Last Moment?