Author: Graham Cassidy, Solution Architect
"Over the last two years alone 90 percent of the data in the world was generated." Forbes.com (2018)
“In 2004, banks faced an average of 10 regulatory changes per day. This has jumped to over 100 in 2020.” Regpac.com (2020)
We are living in the digital era. Disruption events are more frequent than ever, with billion-dollar companies being created overnight, and the primary market value is no longer product, software or experience. The new primary market value is data. Data grows every day and it needs to travel or it loses value. Humans can no longer be the couriers of this data. A tipping point has arrived at which data is no longer manageable by individuals. The image below illustrates this point: disruption events in the digital era (blue markers) are more frequent than in any other age.
IMAGE: Frequency of disruption events between eras (Source: Microsoft)
The 2008 financial crisis created a seemingly perpetual snowball effect in which the days of soft-touch regulation have been wiped out. Financial information is coming under ever more scrutiny. On top of this, an increasing number of countries are agreeing to financial inclusion commitments, so the regulatee pool is diversifying and expanding. What this ultimately means is that the amount and complexity of data required for effective supervision is constantly increasing.
“IDC predicts that the collective sum of the world’s data will grow from 33 zettabytes this year to 175 zettabytes by 2025, for a compounded annual growth rate of 61 percent” Networkworld.com – Andy Patrizio (2018)
This increase means that the future sustainability of Regulatory Reporting is dependent on the role of the regulator evolving from that of partial administrator/data steward to that of purely supervisor.
If this does not happen, the future will be humans working solely on data stewarding activities with little time left for actual supervision. Regulators will try to mitigate the problem by increasing headcount, but this will be tantamount to plugging the dam with their fingers. Inevitably the tide of data will overwhelm human attempts, and the practice of regulation will become unmanageable, risking market sustainability. The world is changing, and regulators need to move with the times to stay ahead of the tide or be swept aside by it.
“To keep up with the increasing competition from firms inside and outside the industry, banks should provide innovative services at the same rate as other smaller, leaner organizations do….Open APIs and open platform banking are set to change the shape of financial services completely.” Accenture.com (2019)
The widespread adoption of Machine to Machine (M2M) reporting has a major part to play in the current era, where it is being elevated from a powerful auxiliary solution to part of the new norm. It is also evident that many regulators realise this. Vizor previously conducted a poll in which we asked regulators to vote for key areas of interest. As illustrated in the graphic below, API came second only to Regulatory Data Management itself, proving that regulators are already thinking about it.
IMAGE: Top areas of interest for leading Financial Regulators
To read more about this, please refer to the Vizor article Regulatory Data Management and Data on Demand are key areas of interest for leading Financial Regulators across the world.
Machine to Machine is one of a set of modern technologies and strategies that can be used to respond to the aforementioned challenges and ensure regulatory efficiency into the modern age.
Indeed, there are many key technologies and strategies, some of which are listed below, that Vizor has identified and observed increasing adoption of. These are areas that Vizor has invested in and will continue to invest in, to ensure we provide optimal solutions to our clients and so keep ahead of the tide:
IMAGE: Set Of Modern Technologies And Strategies
With the advent of the digital era, Granular Data is becoming a popular concept for some use cases. It offers a low-level, form-free approach that simplifies the process for the regulatee. Naturally this means larger volumes of data need to travel, which in the past would have been undesirable; today, with advances in processing power, M2M, the price of storage, AI and more, that is no longer the case.
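As a minimal, hypothetical illustration of the difference between template-based and granular reporting (all field names and figures are invented for the example):

```python
# Hypothetical illustration: the same information reported two ways.

# Template-based reporting: the institution pre-aggregates into one form cell.
template_return = {"total_loans_outstanding": 1_250_000}

# Granular reporting: the institution submits the underlying records and
# the regulator derives whatever aggregates it needs.
granular_records = [
    {"loan_id": "L-001", "balance": 500_000, "sector": "agriculture"},
    {"loan_id": "L-002", "balance": 750_000, "sector": "manufacturing"},
]

# The regulator can reproduce the template figure from the granular data...
assert sum(r["balance"] for r in granular_records) == template_return["total_loans_outstanding"]

# ...and also answer questions the original template never anticipated.
by_sector = {}
for r in granular_records:
    by_sector[r["sector"]] = by_sector.get(r["sector"], 0) + r["balance"]
print(by_sector)  # {'agriculture': 500000, 'manufacturing': 750000}
```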
Machine Learning / AI is beginning to develop within regulatory systems. The long-term vision would be to utilize algorithms that can detect deviations in patterns so that potential crisis situations can be forecast in advance.
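As a sketch of the idea, an off-the-shelf anomaly detector such as scikit-learn's IsolationForest can flag submissions that deviate from historical patterns. The data below is synthetic and the parameters are illustrative; this is a concept sketch, not a production supervisory model.

```python
# Sketch: flag unusual submissions with an off-the-shelf anomaly detector.
# Requires numpy and scikit-learn; all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Historical ratios reported by institutions (e.g. a capital ratio and a
# liquidity ratio): normally clustered, with occasional outliers.
history = rng.normal(loc=[0.15, 1.2], scale=[0.02, 0.1], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=42).fit(history)

# Score a new batch of submissions; -1 marks a deviation worth a closer look.
new_batch = np.array([
    [0.16, 1.25],  # in line with historical patterns
    [0.02, 0.30],  # sharp deterioration, likely flagged for review
])
print(model.predict(new_batch))  # e.g. [ 1 -1 ]
```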
Regulatory Data Management is a program that introduces, amongst other things, the benefits of standardizing data across the entire industry so that systems can speak the same language. This will also be a great enabler of machine readability.
As the digital era propels further into the future, more use cases exist than ever before, and Machine to Machine Reporting has a big part to play in enabling them.
Of course, the future of Regulatory Reporting will always evolve and the prediction is that existing concepts will be advanced in combination with new concepts being invented. Some new concepts we may expect to see in the future include:
Advanced AI techniques will be applied to find more anomalies in data and detect them earlier.
The industry will advance towards its vision of a Global Data Model so that the world's data can be standardised per industry. When this is realised it will vastly reduce individual design, development, rollout, support and training efforts. The benefits are massive, but so is the challenge of getting global agreement and adoption.
With precipitating events such as Brexit, the goal of Cross Border Regulation will become more prominent.
It is important to clarify that any solution would require a combination of the above strategies and technologies to be effective. Each newly introduced concept has combined with earlier ones to build up a powerful solution portfolio to draw from. It is the role of the regulator, with the appropriate guidance, to determine the most suitable mix of technologies and strategies for each use case. No single approach should be considered a panacea; however, a diverse set of solutions is now available to facilitate Regulatory Reporting sustainability into the digital era.
Vizor has published articles on all these topics and the rest of this article aims to explore the benefits of Machine to Machine Reporting specifically as well as providing some case studies. We believe that Machine to Machine Reporting is becoming more necessary than ever as part of any chosen solution set.
Machine to Machine Reporting can be leveraged to reach an ideal level of efficiency where data is processed, validated and analysed without human interaction. This does not preclude the design of manual rules or alerts that call for user supervision when certain triggers are invoked, but the point is that humans will no longer be doing the cumbersome and distracting work of scrubbing large amounts of data and moving it around. Their time will be freed up to supervise effectively and intervene only where warranted. This will become a predominant method as the volume of use cases fitting this solution increases into the future.
There is more data than ever before, and it appears to be growing at a compounded rate. This means the risk of human error is greater than ever. It is reasonable to assume that a user analysing 2,000 points of data in a day is more likely to make a mistake than one analysing 500. Therefore, as the amount of data increases, so does the likelihood of human error. On top of this, the repetitive nature of the tasks means a human can enter the same data correctly the first time and make a mistake the second.
Machine to Machine Reporting can vastly mitigate this threat, as the data can be pre-configured with the appropriate level of validation and rules already in place. Once a Machine to Machine solution has been delivered and thoroughly tested, it does not make mistakes, no matter how much data it processes. If a bug is introduced that leads to inaccurate supervision, that bug only has to be fixed once. If a human makes a mistake, there is no guarantee they won't make it a second or third time.
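To make this concrete, here is a minimal sketch of what pre-configured, declarative validation can look like. The rule names, thresholds and submission fields are invented for illustration; this is not the actual Vizor rule engine.

```python
# Sketch: declarative validation rules applied uniformly to every submission.
# Rule names, thresholds and fields are illustrative only.

RULES = [
    ("assets_non_negative", lambda s: s["total_assets"] >= 0),
    ("balance_sheet_balances",
     lambda s: abs(s["total_assets"] - (s["total_liabilities"] + s["equity"])) < 0.01),
]

def validate(submission: dict) -> list[str]:
    """Return the names of every rule the submission fails."""
    return [name for name, check in RULES if not check(submission)]

submission = {"total_assets": 100.0, "total_liabilities": 80.0, "equity": 15.0}
failures = validate(submission)
if failures:
    # Only now is a human supervisor alerted; the machine did the scrubbing.
    print("Flag for review:", failures)  # ['balance_sheet_balances']
```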
Another challenge of greater volumes of data is that if humans are the main administrators and couriers, the data takes time to compile and submit. It may already be stale by the time the regulator gets around to viewing it, and all that human effort will have amounted to is low-quality data that inhibits the ability to regulate effectively.
If the task of moving this data from regulatee to regulator and beyond is delegated to machines, it can move at up to real-time speeds. The data will be fresh and relevant, there will be less back-and-forth between parties, and the chances of "right first time" submissions will increase greatly; in many cases the first revision will be the final revision.
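As an illustration of this push model, a minimal institution-side client might look like the sketch below. The endpoint URL, token and payload fields are hypothetical; a real deployment would follow the regulator's published API specification.

```python
# Sketch: an institution-side push client. The endpoint URL, token and
# payload fields are hypothetical placeholders.
import requests

REGULATOR_API = "https://reporting.example-regulator.org/api/v1/submissions"
API_TOKEN = "replace-with-issued-credentials"

payload = {
    "institution_id": "FI-0042",
    "return_type": "LIQUIDITY_DAILY",
    "reporting_date": "2020-06-30",
    "data": {"liquid_assets": 1_200_000, "short_term_liabilities": 900_000},
}

response = requests.post(
    REGULATOR_API,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
# A synchronous acknowledgement means validation feedback arrives in seconds,
# not days: the data is still fresh when the supervisor sees it.
print(response.status_code)
```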
NATIONAL BANK OF RWANDA
As part of an enterprising dedication to financial inclusion, the National Bank of Rwanda has developed an electronic data warehouse (EDW) system. The EDW system is designed to “pull” data from 8 banks, 3 microfinance institutions, 2 money transfer operators and 1 MNO on a regular basis. This has proved successful for the use case and has generated a lot of interest globally, as well as ongoing monitoring of the solution's long-term sustainability.
IMAGE: National Bank of Rwanda API Solution
BANK OF GHANA
The Bank of Ghana recently worked with Vizor Software to roll out the Vizor APIs in 2020. The Vizor APIs offer submission endpoints that allow financial institutions to push submissions to the regulator.
They have made API submission an option for all 41 of their reporting obligations. Initially they see 23 banks as the main consumers of the APIs, but they are committed to onboarding hundreds of smaller institutions in the near future and are currently working with these institutions to plan this.
Their ultimate aim is that absolutely everything travels through the API and human interaction is kept to a minimum, which would make them one of the first regulators globally to achieve this level of adoption. More can be read on the Ghana rollout in the article: Vizor Software achieved the first milestone with Bank of Ghana.
IMAGE: Bank of Ghana API Solution
The implementations adopted by Rwanda and Ghana can be considered very effective in addressing their respective challenges. The two approaches are in fact very similar, with one key difference: Rwanda pulls data from each financial institution's API, while Ghana provides its own API to which the financial institutions push.
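For contrast with the push client sketched earlier, the pull model inverts the responsibility: the regulator maintains a connection (and a set of credentials) for every institution endpoint. A minimal sketch, with hypothetical URLs:

```python
# Sketch: the regulator-side "pull", for contrast with the earlier push.
# Each institution exposes its own endpoint (URLs here are hypothetical).
import requests

INSTITUTION_ENDPOINTS = {
    "FI-0001": "https://api.bank-one.example/regulatory/daily",
    "FI-0002": "https://api.bank-two.example/regulatory/daily",
    # ...one endpoint (and one credential set) per institution to maintain.
}

def pull_all(reporting_date: str) -> dict:
    """Collect the latest return from every registered institution."""
    collected = {}
    for fi_id, url in INSTITUTION_ENDPOINTS.items():
        resp = requests.get(url, params={"date": reporting_date}, timeout=30)
        resp.raise_for_status()
        collected[fi_id] = resp.json()
    return collected
```

The code is almost symmetrical to the push case; the real operational difference is who has to stand up, secure and maintain the endpoints.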
These opposing yet strategically similar concepts of push and pull are at the heart of the current interest in data on demand.
The term data on demand usually conjures up an image of standardised data sitting in each financial institution's environment, waiting to be pulled by the regulator. However, the reality within the regulatory field is very different, for two main reasons.
1. Ease of Machine to Machine Roll-out and Maintenance
Consider a widespread roll out of a pull based data on demand solution in which hundreds or even thousands of institutions needed to make their data available:
Each of these institutions would have to develop, host, secure and maintain its own API, and keep the regulator supplied with working credentials for it. The regulator, in turn, would have to build and maintain a separate connection, and manage a separate set of credentials, for every one of those institutions. It is easy to imagine how such an undertaking could be fraught with challenges.
If the number of financial institutions is small, this strategy can work. If it is not, a more appropriate solution is for the regulator to stand up a single API; each institution then develops the ability to connect to it and simply maintains its own access credentials (ideally through the accompanying web portal).
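A minimal sketch of that single regulator-hosted endpoint, assuming a generic Python web framework (FastAPI here) and invented field names; this is a concept sketch, not the actual Vizor implementation:

```python
# Sketch: the single regulator-hosted API that every institution pushes to.
# Framework choice (FastAPI) and field names are illustrative assumptions.
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In reality, credentials would live in a secure store managed via the portal.
KNOWN_TOKENS = {"token-fi-0042": "FI-0042"}

class Submission(BaseModel):
    return_type: str
    reporting_date: str
    data: dict

@app.post("/api/v1/submissions")
def receive(submission: Submission, authorization: str = Header(...)):
    token = authorization.removeprefix("Bearer ")
    institution = KNOWN_TOKENS.get(token)
    if institution is None:
        raise HTTPException(status_code=401, detail="Unknown credentials")
    # Validation rules would run here before the data is accepted.
    return {"status": "accepted", "institution": institution}
```

Each institution only has to manage its own token; the regulator only has to operate one endpoint.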
2. Pull is Push in Disguise
Another thing to note is that (as described above and illustrated in the image below) the data would have to be taken from numerous legacy systems and consolidated into an API environment, often passing through various approval checkpoints on the way. In such a scenario, what is seen superficially as a pull is actually a push in disguise.
IMAGE: The Concept Of “Pull” Can Often Be A “Push” In Disguise
We have found that, currently within the regulatory field, the scenario of the institution pushing to the regulator is often the most suitable. And since it is Machine to Machine, it is equally effective in terms of real-time data collection.
We see this solution evolving further as regulators get more involved in stipulating the data structures that financial institutions should adopt. If financial institutions choose (or are mandated) to store their data in the same structure as that recommended by the regulator, the doors will be opened to standardised data. All financial institutions will then speak the same language, and we will see alignment across the industry. Generic shared solutions can be rolled out with a greater economy of scale, enabling RegTech innovation and reducing regulatory burden, as opposed to each financial institution making parallel individual efforts.
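One plausible mechanism for stipulating such a structure is for the regulator to publish a machine-readable schema that every institution validates against before submitting. A minimal sketch using JSON Schema (the schema content is illustrative only):

```python
# Sketch: a regulator-published schema that every institution validates
# against before submitting. Requires the jsonschema package.
from jsonschema import validate

LOAN_RECORD_SCHEMA = {
    "type": "object",
    "required": ["loan_id", "balance", "currency"],
    "properties": {
        "loan_id": {"type": "string"},
        "balance": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
}

# Every institution runs the same check, so the regulator receives data
# that already "speaks the same language". Raises on any violation.
validate(
    instance={"loan_id": "L-001", "balance": 500000, "currency": "RWF"},
    schema=LOAN_RECORD_SCHEMA,
)
```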
For further reading on the subject you can refer to the article: Next Generation Regulatory Reporting: Data On Demand.
When it comes to Machine to Machine, much of the focus tends to be on APIs. While APIs are pivotal to the overall strategy, they are useless without something to consume them. They sit at rest until a consumer either pushes data to them or pulls data from them; in essence, something needs to connect to them.
Historically this has been achieved by having software houses write case-specific service request clients, which are costly to develop.
To join the wider effort to remove the need for bespoke software applications for each API service request, Vizor has created a new component within its software stack known as the API Connector.
The API Connector is highly configurable and can be used to connect to any number of different APIs, and it is compatible with modern API standards.
Our clients are already taking advantage of this functionality for use cases such as pushing data into their Hadoop system and pulling financial institution information from their native MDM system.
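The article stops short of detailing the Connector's configuration format, so the sketch below is purely hypothetical: it illustrates the general idea of configuring, rather than hand-coding, each connection, echoing the push and pull use cases just mentioned.

```python
# Purely hypothetical connector-style configuration, NOT the actual Vizor
# API Connector format. It illustrates configuring (rather than coding)
# each API connection; every name and URL is invented.
CONNECTIONS = [
    {
        "name": "push-submissions-to-warehouse",
        "direction": "push",
        "endpoint": "https://warehouse.example.org/ingest",
        "auth": {"type": "bearer_token", "secret_ref": "WAREHOUSE_TOKEN"},
        "schedule": "0 * * * *",  # cron-style: hourly
    },
    {
        "name": "pull-institution-register",
        "direction": "pull",
        "endpoint": "https://mdm.example.org/api/institutions",
        "auth": {"type": "api_key", "secret_ref": "MDM_KEY"},
        "schedule": "0 6 * * *",  # cron-style: daily at 06:00
    },
]
```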
We foresee greater and greater uptake of this functionality in the future as APIs (and therefore the need to connect to them) become the norm.
From before the 2008 financial crisis to the current digital age, the diversity of use cases within regulatory reporting has increased dramatically. This increase introduced a requirement for much greater volume, frequency and, often, complexity of data. As the rate of regulatory change increases, so do these data metrics.
Technologies and strategies have been introduced to facilitate the efficient transfer and analysis of this data. However, it is in the interest of regulators to adapt before these metrics become overwhelming and lead to a tipping point of inefficient and ineffectual supervision.
To adapt, regulators need to strategize by choosing the optimal set of solutions from the powerful portfolio of options available. It is our belief that Machine to Machine Reporting will more and more often be chosen as a vital component of each set.
We've mentioned the core strategies and technologies at a high level in this article, but the deep dive has been on Machine to Machine. As previously mentioned, Machine to Machine should not in itself be considered a magic bullet; it should be combined with a set of other options to be effective. Mix and match depending on your use cases.
The age of data is upon us. The management of regulatory data is no longer achievable by humans alone. Now more than ever it is vital that machines deal with the processing of data so that humans can be freed up to become full-time regulators.