• Experience building and optimising data pipelines, architectures and data sets
• Advise on building scalable ETL jobs across the data pipeline from ingestion to data warehousing and semantic models
• Experienced in exploring and debugging data quality issues and defining data quality rules across complex data pipelines
• Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver innovative and improved data solutions
• Build processes supporting data transformation and data structures using cloud platforms such as Azure (Databricks, ADF, Data Lake) and Snowflake
• Build strong relationships and liaise with different interfacing development teams
The Successful Applicant
• Bachelor's degree in a relevant field with at least 3 years' experience in a Data Engineer role.
• Must have experience with programming languages such as Scala and Python.
• Good knowledge of SQL, particularly Hive
• Be able to demonstrate strong technical experience using a wide variety of technologies in environments such as Kafka, Spark, Hadoop, Azure and Snowflake.
• Excellent exploratory testing skills applied to big data environments
• Exceptional communication skills with the ability to effectively collaborate with multiple teams.