ANZ Banking Group is looking to its Chinese operations to help unlock “the power of [its] data” by applying techniques such as deep learning.
The group has been recruiting for a number of roles in what it calls its “next generation big data ecosystem” at ANZ Technology Chengdu, which it set up in 2011.
The Chengdu service centre employs over 800 people, including “experts who manage a range of complex functions including technologies used to support ANZ's business globally.”
It appears advanced data analytics will become a reasonably sizable support function for Chengdu; back in January, the bank sought an initial seven roles, including a lead engineer, technical architect and analysts to build out the “ecosystem”. This month, it has 11 open roles.
ANZ said it needed people on the ground in Chengdu to “define and drive the architecture of [the] big data ecosystem”.
This work covered “platforms, tools, patterns and APIs to source, ingest, consume and analyse data across the enterprise”; building frameworks to bring real-time and batch data into ANZ’s big data lake; and using “tooling and advanced data science techniques (machine learning, deep learning) to create business value”.
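The pattern described — sourcing both batch and real-time data into a single lake through a common ingest path — can be illustrated with a toy sketch. This is not ANZ’s actual architecture: in production the lake would sit on a Hadoop/Cloudera stack fed by streaming and batch pipelines, and the function names here (`ingest_batch`, `ingest_event`) are invented for illustration. A Python list stands in for the lake.

```python
import csv
import io
import json
import time

# Toy stand-in for the data lake (in reality: HDFS or cloud object storage).
lake = []

def ingest(record, source):
    """Common write path: tag every record with its source and arrival time."""
    lake.append({"source": source, "ingested_at": time.time(), "data": record})

def ingest_batch(csv_text, source="core-banking-batch"):
    """Batch path: land a CSV extract row by row (hypothetical source name)."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        ingest(row, source)

def ingest_event(json_line, source="realtime-stream"):
    """Real-time path: land a single streamed event (hypothetical source name)."""
    ingest(json.loads(json_line), source)

# Both paths converge on the same store, so downstream analytics
# can consume batch and real-time records uniformly.
ingest_batch("customer_id,balance\n1001,2500\n1002,-40\n")
ingest_event('{"customer_id": 1001, "event": "login"}')
```

The point of the sketch is the shared `ingest` API: however data arrives, it lands in one place with consistent metadata, which is what makes enterprise-wide analysis over the lake tractable.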
iTnews asked ANZ on January 21 about the extent to which the Chengdu-based team would run or support the group’s data analytics needs, and where the “next generation big data ecosystem” infrastructure would be hosted.
An ANZ spokesman provided a brief response on March 14.
“We are looking for some digitally skilled people in the region to join our established Chengdu service centre,” the spokesman said.
“The infrastructure is Australian-based and we have some engineers in Chengdu to complement our Australian-based workforce on the platform.”
Recruitment advertisements pointed to the platform being a commercially-supported Hadoop stack from Cloudera, which ANZ has now confirmed.
“Cloudera is a production platform that forms part of our strategic stack and assists with our ability to adopt cloud technologies to take advantage of modern data and analytics capabilities,” the spokesman said.
That ANZ is focusing on Hadoop for the next generation “ecosystem” isn’t much of a surprise; ANZ’s enterprise data lake is Hadoop-based courtesy of a 2017 data consolidation project.
Just what ANZ’s Chengdu-based data team could get up to was partially flagged late last year when the Chinese operation ran a data science competition targeting university students.
The challenge was to “predict (using provided datasets) whether the client will subscribe to a term deposit.”
“We are looking for participants who can design a customer response model with high differentiation power and high precision by analysing customer's multidimensional banking information and customer behaviour characteristics, using data analysis and advanced machine learning algorithms,” ANZ said.
The test dataset was not from ANZ but consisted of decade-old data from “a Portuguese banking institution”.
Prizes of cash or an “internship/job opportunity” at the Chengdu service centre were offered.
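The competition task — predicting whether a client will subscribe to a term deposit — is a standard binary classification problem, and a minimal version can be sketched in a few lines. Everything below is invented for illustration: the features (balance, prior campaign contacts) and the synthetic data are not from ANZ or the competition dataset, and a from-scratch logistic regression stands in for the “advanced machine learning algorithms” the brief mentions.

```python
import math
import random

random.seed(42)

def make_toy_data(n=200):
    """Generate made-up customer records with a noisy linear subscribe rule."""
    rows, labels = [], []
    for _ in range(n):
        age = random.uniform(18, 70)
        balance = random.uniform(-1, 50)   # account balance, in thousands
        contacts = random.randint(0, 5)    # prior campaign contacts
        # Toy assumption: wealthier, more-contacted clients tend to subscribe.
        score = 0.08 * balance + 0.6 * contacts - 2.0
        labels.append(1 if score + random.gauss(0, 0.5) > 0 else 0)
        # Bias term plus features scaled to roughly [0, 1].
        rows.append((1.0, age / 70, balance / 50, contacts / 5))
    return rows, labels

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=300):
    """Fit logistic regression weights by stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i, xi in enumerate(x):
                w[i] += lr * (y - p) * xi
    return w

def predict(w, x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x))) >= 0.5 else 0

rows, labels = make_toy_data()
w = train(rows, labels)
accuracy = sum(predict(w, x) == y for x, y in zip(rows, labels)) / len(labels)
```

A real entry would of course work on the provided multidimensional dataset, with proper train/test splits and precision measured against a held-out set, as the brief’s emphasis on “high differentiation power and high precision” implies.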
ANZ's spokesman did not respond to questions on the scope, timelines and aims of the next generation big data ecosystem project.
It is not clear to what extent, if at all, the project relates to a known three-year reset of how ANZ creates and manages its extensive data holdings. That reset, described at an IBM conference last month, is also about deriving more value from the bank's data.