NAB is “progressively building towards” a multicloud environment for hosting elements of its open banking platform, having initially built it to be “all based in AWS”.
Enabling components of the open banking platform to run across multiple clouds mirrors a broader effort within the bank to move beyond its initial heavy alignment with AWS and run more workloads in Azure - work that has scaled up substantially this year.
Head of microservices and open banking Damian Fitzgibbon told a webinar last week that the bank has moved its consent data store out of an AWS-specific service and into open source PostgreSQL to aid portability, should the store ever need to be re-hosted.
Customer consent is a key piece of information banks must capture and manage under open banking, since it is the explicit permission from the customer for how and where their banking data can be shared.
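NAB has not published its consent schema, but as an illustration only, a consent record under the Consumer Data Right typically captures who granted permission, to which accredited data recipient, for what data scopes, and until when. The sketch below is hypothetical; the field names are assumptions, though the scope strings shown are real CDR banking scopes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentRecord:
    """Hypothetical CDR consent record; field names are illustrative only."""
    customer_id: str       # the bank customer granting consent
    data_recipient: str    # accredited recipient, e.g. an app like Frollo
    scopes: tuple          # data clusters the customer agreed to share
    granted_at: datetime
    expires_at: datetime

    def is_active(self, now: datetime) -> bool:
        """Consent is explicit and time-bounded; expired consent is invalid."""
        return now < self.expires_at

# Example: a 90-day consent to share account and transaction data
granted = datetime(2020, 7, 1, tzinfo=timezone.utc)
consent = ConsentRecord(
    customer_id="cust-123",
    data_recipient="frollo-app",
    scopes=("bank:accounts.basic:read", "bank:transactions:read"),
    granted_at=granted,
    expires_at=granted + timedelta(days=90),
)
print(consent.is_active(granted + timedelta(days=30)))   # True - within the window
print(consent.is_active(granted + timedelta(days=120)))  # False - after expiry
```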
“With consent, [when] we originally started out we were going to build a Lambda service utilising a DynamoDB [instance] as its data store,” Fitzgibbon said.
“We actually ended up moving the data store to Postgres because some of the requirements around our multicloud capability meant that if [we] wanted to move the data we needed to have a bit more [of a] portability consideration for that consent data, given it was an enterprise store, so we refactored to Postgres and now have that running.
“We’re soon to build and release a process where we can take that Postgres data store and port it across to Azure if we needed to spin up the service across Azure.
“That’s still underway, but [it’s] something [where] we’re progressively building towards a multicloud environment for open banking.”
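One way to read the refactor is that the consent service was decoupled from any cloud-specific storage API, so the backing database can be re-hosted (DynamoDB to Postgres, AWS to Azure) without touching service code. A minimal sketch of that pattern, assuming a simple repository interface - all class and function names here are hypothetical, and the in-memory backend stands in for a real Postgres driver:

```python
from abc import ABC, abstractmethod

class ConsentStore(ABC):
    """Storage interface the consent service depends on, rather than a
    cloud-specific API; any Postgres host (AWS or Azure) can sit behind it."""
    @abstractmethod
    def save(self, consent_id: str, record: dict) -> None: ...
    @abstractmethod
    def fetch(self, consent_id: str) -> dict: ...

class InMemoryConsentStore(ConsentStore):
    """Stand-in backend for demonstration; a real implementation would wrap
    a PostgreSQL driver instead."""
    def __init__(self):
        self._rows = {}
    def save(self, consent_id, record):
        self._rows[consent_id] = dict(record)
    def fetch(self, consent_id):
        return self._rows[consent_id]

def revoke(store: ConsentStore, consent_id: str) -> None:
    """Service logic touches only the interface, so swapping the backing
    store (e.g. re-hosting Postgres on Azure) needs no service changes."""
    record = store.fetch(consent_id)
    record["status"] = "revoked"
    store.save(consent_id, record)

store = InMemoryConsentStore()
store.save("c-1", {"status": "active"})
revoke(store, "c-1")
print(store.fetch("c-1")["status"])  # revoked
```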
Fitzgibbon said that NAB handled “the very first consent that was created in the consumer data right [open banking] ecosystem ... between a customer using the Frollo app and NAB as the data holder.”
Frollo is a fintech company that produces an app that helps people to better manage their money.
Fitzgibbon said that NAB had needed to build a “number of core enterprise capabilities” in order to stand up its open banking platform.
He displayed an architecture diagram with components newly built for open banking; others that “represent enterprise capabilities that we’ve either built - in the case of consent - from scratch, or a number of different enterprise services we’ve leveraged”; and more still that were “existing services we’ve utilised - most of those are the systems of record.”
Where a cloud-based enterprise service was not ready, the bank plugged into an existing system of record through what it calls the ‘OB proxy’.
“Eventually as services come online we’ll move to the cloud version of a service, but for some services we have to go back to our data centre to get some of the system of record data,” Fitzgibbon said.
“That OB proxy is basically a facade that we can point to a different system of record to get the information.”
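The facade Fitzgibbon describes can be sketched as a small routing layer: each data domain resolves to whichever source currently holds the system of record, and re-pointing a domain from the data centre to a cloud service is a configuration change rather than a change to consumers. This is an illustrative sketch only - the names and routing mechanism are assumptions, not NAB's implementation:

```python
class OBProxy:
    """Facade that resolves each data domain to whichever source currently
    serves it; re-pointing a domain swaps the backend without changing
    callers. All names are illustrative."""
    def __init__(self):
        self._sources = {}

    def point(self, domain: str, fetch_fn):
        """Register (or re-register) the source for a data domain."""
        self._sources[domain] = fetch_fn

    def get(self, domain: str, key: str):
        return self._sources[domain](key)

# Initially the proxy goes back to the data centre for account data...
legacy_accounts = {"acct-9": {"balance": 100, "source": "legacy"}}
proxy = OBProxy()
proxy.point("accounts", lambda k: legacy_accounts[k])
print(proxy.get("accounts", "acct-9")["source"])  # legacy

# ...and is re-pointed once the cloud enterprise service comes online.
cloud_accounts = {"acct-9": {"balance": 100, "source": "cloud"}}
proxy.point("accounts", lambda k: cloud_accounts[k])
print(proxy.get("accounts", "acct-9")["source"])  # cloud
```

Callers of `proxy.get` never learn which backend answered, which is what lets services "come online" behind the facade over time.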
Fitzgibbon said NAB is continuing to build out the cloud-based “enterprise domain services”.
“What they really are are trusted copies of the [systems] of record that sit on the legacy systems, so that eventually they become much more scalable and reliable than some of the core services, and actually become a way of almost caching the data that we can access,” he said.
“Over time, progressively it allows you also to hollow out some of the capability in the legacy systems and I guess that is part of [NAB’s] underlying tech strategy.”
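The "trusted copy" idea Fitzgibbon outlines resembles a cache-aside read path: requests are served from a scalable copy, and only a miss goes back to the legacy system of record, which is what progressively takes load off (and hollows out) the legacy estate. A minimal sketch under that assumption - class and data names are hypothetical:

```python
class TrustedCopy:
    """Cache-aside sketch of an 'enterprise domain service': reads are
    served from a scalable copy, falling back to the legacy system of
    record only on a miss. All names are hypothetical."""
    def __init__(self, system_of_record):
        self._sor = system_of_record
        self._copy = {}
        self.legacy_reads = 0

    def get(self, key):
        if key not in self._copy:
            self.legacy_reads += 1            # only misses hit the legacy system
            self._copy[key] = self._sor(key)  # hydrate the trusted copy
        return self._copy[key]

legacy = {"cust-1": {"name": "A. Customer"}}
svc = TrustedCopy(lambda k: legacy[k])
svc.get("cust-1")
svc.get("cust-1")
print(svc.legacy_reads)  # 1 - repeat reads no longer touch the legacy system
```

A production version would also need invalidation and write-through logic, which this sketch deliberately omits.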
The use of cloud - and ultimately multicloud - will help the bank to address the “high performance requirements” that open banking platforms must be capable of meeting.
“We need to be able to support 300 transactions per second (tps) on authenticated endpoints and 300 transactions per second on unauthenticated endpoints, so all up 600tps,” Fitzgibbon said.
“At the moment we’re well under 1tps so it’s still very early days but scalability and performance is a really big consideration with this infrastructure and architecture.”