- ETL Design: Design and implement ETL processes within MaPS architectural patterns to extract, transform, and load data from various source systems into our reporting solutions.
- Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools such as Apache Spark to ensure seamless data flow.
- Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity.
- Utilise Azure cloud-based data technology platforms, data-lake-based storage, and cloud data pipeline/orchestration tools for efficient data storage, processing, and transfer.
- Ensure the scalability and reliability of data solutions using Azure's suite of services.
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
- Provide technical leadership and mentorship to junior data engineers, fostering a culture of continuous learning and improvement.
- Ensure data governance policies are adhered to and maintain high standards of data quality and security.
- Develop and enforce best practices for data management and integration.
- Ensure that only thoroughly tested, high-quality code, architecture, and resulting Data Products are delivered to the MaPS business.
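The "metadata-driven pipeline" responsibility above is a design pattern rather than a specific tool. As a minimal sketch in plain Python (the source names, columns, and `run_pipeline` helper are all hypothetical, not MaPS specifics), a single generic loader can be driven by a control table of per-source metadata instead of one hand-written pipeline per source:

```python
# Hypothetical sketch: a control table of metadata rows drives one
# generic extract-transform-load path, rather than bespoke code per source.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SourceConfig:
    name: str                          # logical source system name
    extract: Callable[[], list[dict]]  # how to pull raw rows
    key_column: str                    # column used for de-duplication

def run_pipeline(configs: list[SourceConfig], sink: dict) -> dict:
    """Generic loader: iterate metadata rows, extract, transform, load."""
    for cfg in configs:
        rows = cfg.extract()
        # transform: drop duplicates on the configured key column
        seen, cleaned = set(), []
        for row in rows:
            key = row[cfg.key_column]
            if key not in seen:
                seen.add(key)
                cleaned.append(row)
        sink.setdefault(cfg.name, []).extend(cleaned)
    return sink

# Two illustrative sources handled by the same generic code path
configs = [
    SourceConfig("crm", lambda: [{"id": 1}, {"id": 1}, {"id": 2}], "id"),
    SourceConfig("billing", lambda: [{"ref": "a"}, {"ref": "b"}], "ref"),
]
lake = run_pipeline(configs, {})
```

In Azure Data Factory the same idea is typically expressed as a parameterised pipeline iterating over a configuration table, with Spark handling the transforms.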
- Understanding of, and interest in, the broad architectural principles underlying modern data architecture, e.g. data lakes, external tables, medallion architecture, loose coupling.
- Reporting and analysis tooling: an understanding of reporting tools such as Power BI, and of the needs of the analysts who consume engineered data through them.
- Proven experience as a Data Engineer, with a focus on Azure services.
- Strong expertise in designing and implementing ETL processes.
- Experience in using SQL to query and manipulate data.
- Proficiency in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory.
- Spark/PySpark or Python skills are a bonus, as is a willingness to develop them.
- Experience with monitoring and failure recovery in data pipelines.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
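The "monitoring and failure recovery" experience asked for above often starts with detecting unusual data profiles. A minimal, hypothetical sketch (the threshold and baseline logic are illustrative; a real pipeline would hook into the orchestrator's alerting):

```python
# Hypothetical sketch: flag a load whose row count deviates sharply
# from the mean of recent loads. `tolerance` is a fractional deviation.
def unusual_row_count(history: list[int], today: int, tolerance: float = 0.5) -> bool:
    """Return True if today's row count deviates more than `tolerance`
    (as a fraction) from the baseline mean of recent loads."""
    if not history:
        return False  # nothing to compare against yet
    baseline = sum(history) / len(history)
    return abs(today - baseline) > tolerance * baseline
```

A flagged load would then trigger the recovery process, e.g. quarantining the batch and re-running the extract, so that data integrity is maintained.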
- Caring: We care about our colleagues and the people whose lives we are here to transform.
- Connecting: We will transform lives through our ability to make positive connections.
- Transforming: We are committed to transforming lives and making a positive societal impact.