Data II – The QA & Testing Challenge

This is the second of three articles on Continuous Data Modelling. It describes the complexity challenge that affects all capital market firms, large and small. A specific and practical data modelling solution is outlined, and the benefits of that solution are explored. The author has purposefully removed all reference to specific tooling and named technology products. For further information, please contact Norton Edge.

The last few decades of technology advancement have been transformational, increasing real capabilities, speed and scale, but also increasing operational and technical complexity. Often, mismanagement of technology and misunderstanding of the business have compounded problems and created organisational friction. In our analysis, this complexity causes three core challenges: (1) the Production Support Challenge, where many firms are at a tipping point and infrequent technical outages and issues are exceptionally hard to troubleshoot; (2) the Quality Assurance and Testing Challenge, where testing strategy and frameworks are often detached from the “too difficult to replicate” live environment and can be relegated to a crude checkbox culture; and (3) the Sustainable Compliance Challenge, where partially documented Policies and Procedures (P&P) are out of date, impractical and burdensome. In each of these three challenges, the solution requires an ongoing commitment to understand the data and operating model. In today’s climate of increased cost pressure, it is all too easy for firms to ignore these complexity challenges. However, by maintaining focus on the data model and by embedding the right data model tooling, these challenges can be readily solved.

This article repeats the brief introduction to Data Modelling and then outlines the second of these complexity challenges: the Quality Assurance and Testing Challenge.

Data Modelling: to begin with, Data Modelling always appears difficult. Management want to obtain insight, but don’t want to get “too into the weeds”. Junior staff struggle to understand the business context of specific activities & infrastructure, or struggle to articulate it in business-friendly language. An impasse occurs, and widens over time. Applications are endlessly adjusted and integrated in response to fluid business & regulatory requirements. A perpetual cycle of migration, enhancement and retirement is set in motion. At no time is there a pause where the business can stop, take stock of the situation, identify common ongoing problems and work out how they can be solved. Can the analysis be done in parallel, or as part of BAU?

Continuous Data Modelling: if the business cannot pause to review, then the review needs to be done continuously. Data modelling is the mapping of processes across technical architectures, business silos and shared services. Done correctly, it measures change over time, provides auditability, and can drive standardisation. By-products include data dictionaries and business glossaries, ensuring all staff benefit from using a common taxonomy. Data modelling can provide the building block for true digital transformation, identifying common processes and candidates for microservices, and aiding the command & control analytics that business owners need. In short, you need to know your business to change your business. There is no logical impediment to doing this continuously. At a basic level, it already exists in restrictive change control (depending on how far that is a checkbox exercise or abused to dilute responsibility), but ideally it should be embedded into day-to-day business activities to provide further value. One such activity is testing.
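To make the idea of a living data model concrete, the sketch below shows, in plain Python (3.9+, standard library only), how a glossary term and a mapped data flow might be represented as simple structures. The system and field names (“OMS”, “Settlement Engine”, “settlement_date”) are hypothetical illustrations, not a reference to any specific product; this is a minimal sketch of the concept, not an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class GlossaryTerm:
    """One entry in the shared business glossary / data dictionary."""
    name: str                  # business-friendly name, e.g. "Settlement Date"
    definition: str            # plain-language definition agreed across silos
    owner: str                 # accountable business owner or shared service
    systems: list[str] = field(default_factory=list)  # where the field physically lives

@dataclass
class DataFlow:
    """A mapped hand-off of data between two systems or silos."""
    source: str
    target: str
    terms: list[str]           # glossary terms carried by this flow

# A tiny, hypothetical slice of a model: one term and one flow between systems.
glossary = {
    "settlement_date": GlossaryTerm(
        name="Settlement Date",
        definition="Date on which the trade is contractually settled.",
        owner="Operations",
        systems=["OMS", "Settlement Engine"],
    ),
}

flows = [DataFlow(source="OMS", target="Settlement Engine", terms=["settlement_date"])]

# Because the model is plain data, simple checks can run continuously,
# e.g. flagging flows that reference terms missing from the glossary.
undefined = {t for f in flows for t in f.terms if t not in glossary}
print(undefined or "all flow terms are defined in the glossary")
```

The point of such a structure is not the code itself but that the model is explicit, versionable and checkable, which is what allows it to be maintained continuously rather than revisited in occasional pauses.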

The Quality Assurance and Testing Challenge

Background: The business expects both functional and technical unit & integration testing to be more effective than it currently is, but is often unable to provide guidance as to what would make it better. Testing Strategies should not fly at 30,000 feet; they require Test Manager pilots who can glide at lower altitudes over complex architectural landscapes. Connected third-party systems also bring greater complexity, be they, for example, counterparties, trading venues, margin utilities, clearing houses, custodians, or simply outsourced or managed services. Coordinating relevant Test Environments across systems and third parties becomes a vast orchestral task, leaving little time to focus on the value of the testing itself. For example, Trade Reporting (TR), whilst not in the critical path of execution, is so prone to errors that further services are offered to enhance TR testing and improve troubleshooting. Despite the improved performance and resilience of systems generally, we still see the occasional hardware burnout and full-day outage at exchanges, retail broker platforms suffering multi-day capacity issues, telco failovers not performing correctly, and even central banks failing to publish announcements simultaneously across multiple channels. These recent issues might have been prevented, or at least mitigated, by improved testing. Testing often fails to take account of real-life use cases, and User Acceptance Testing (UAT) typically takes place without the involvement of actual end users. Continuous Delivery may reduce bugs in new commits, but it creates other testing blind spots. At the crux of it, Quality Assurance and Testing need a degree of independence to be truly effective. Although new testing aids, tools and harnesses are commonplace, they often improve the efficiency of testing rather than anchor it to the actual functioning of the business.

Solution: Anchoring QA and Testing in the business activity can be done by using the business’s living Data Model. Firstly, the Test Plans & Strategy can be enhanced, with the data model embedded in the various testing tools and in intuitive visual test documentation. Accurate (“database integration style”) mapping of fields creates structures, which can be overlaid with activities and data flows. Advanced tooling can map and visualise historical and real-time flows, which may be extremely helpful in identifying capacity issues; but at a bare minimum, static reference data modelling will help identify fragility and pain points. The model also provides a means to quantify the testing and build measurement into the testing process. Secondly, post-testing, the data model can be leveraged in reviews, with traceability providing a means to understand changes to the data model, technical architecture or testing strategy over time. Audits of testing quality can draw on the same traceability, and approval sign-offs will be vastly more meaningful.
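As a rough illustration of what “quantifying testing against the model” could look like, the sketch below (plain Python, with hypothetical identifiers such as `flow:OMS->SettlementEngine`) tags each test case with the model elements it exercises and derives a simple coverage figure. It is a minimal sketch under those assumptions, not a description of any particular testing tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelElement:
    """An entity or flow in the living data model (hypothetical identifiers)."""
    element_id: str            # e.g. "flow:OMS->SettlementEngine"
    description: str

@dataclass
class TestCase:
    """A test case annotated with the model elements it claims to exercise."""
    test_id: str
    covers: frozenset[str]     # element_ids this test touches

# Hypothetical model snapshot and test plan.
model = [
    ModelElement("entity:Trade", "Executed trade record"),
    ModelElement("flow:OMS->SettlementEngine", "Trade hand-off to settlement"),
    ModelElement("flow:SettlementEngine->Custodian", "Settlement instruction out"),
]

tests = [
    TestCase("T-001", frozenset({"entity:Trade"})),
    TestCase("T-002", frozenset({"flow:OMS->SettlementEngine"})),
]

# Quantify testing against the model: which elements have no test anchored to them?
covered = {e for t in tests for e in t.covers}
untested = [e.element_id for e in model if e.element_id not in covered]

coverage = 1 - len(untested) / len(model)
print(f"model coverage: {coverage:.0%}; untested elements: {untested}")
```

Because both the model snapshot and the test plan are versioned data, the same comparison can be re-run after each change, giving the post-test reviews and sign-offs a recorded, repeatable basis rather than a judgement made from memory.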

Benefit: Ultimately, linking the data model in this way provides a more iterative, more effective and recorded feedback loop. It provides a greater degree of independence, giving Test Managers the means to more easily understand the business process and the actual real-life activities and data flows. A by-product is improved reporting to management, making it far more intuitive and encouraging genuine engagement and constructive guidance. Lastly, it can be used evidentially for Auditors, Regulators and others.

In summary, reasons abound for why firms should focus more on data modelling, and for doing so as a continuous, ongoing exercise. Many institutions, both large and small, will appreciate that complexity has been a side effect of the recent, rapid technical advances. A specific scenario is the challenge of providing Quality Assurance and Testing. In the last couple of decades, testing has become an even more complex challenge. Despite improved tooling, it is still too far removed from the end users, the real-life business activities and the data flows. Production issues may be fewer, but testing needs to go further to overcome the blind spots. The solution is to anchor QA and Testing in the real business activities through a Data Model. This can be done with Lo-Tech or Hi-Tech solutions, both able to enhance everything from the initial planning and strategy phases of Testing through to final post-test reviews. There are ongoing benefits: more iterative and effective feedback mechanisms brought into the test process, greater QA independence, and more intuitive reporting. Continuous data modelling keeps this information current, and using it in QA and Testing improves their effectiveness. To reiterate the mantra: know your business to change your business.

More information on continuous data modelling and how it applies to your organisation is available upon request.

Norton Edge provides Subject Matter Expertise from seasoned industry practitioners, helping you know your business to change your business.