Business Function
Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.
Responsibilities
- Create and maintain optimal data pipelines.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing jobs and code for greater scalability, etc.
- Work with stakeholders, including the Product Owner and the Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Requirements
- Advanced working knowledge of SQL and experience with relational databases, Hadoop and NoSQL databases.
- Experience building and optimizing ‘big data’ pipelines, jobs and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with structured and unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency management and workload management.
- A successful history of manipulating, processing and extracting value from large datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, with experience using the following software and tools:
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Data pipeline and workflow management tools: Airflow, etc.
- AWS cloud services or GCP.
- Stream-processing systems: Spark Streaming, Flink, etc.
Apply Now
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.