Big Data Engineer

Optus (Macquarie Park NSW, Australia) 20 days ago

The Data Platform Management team has been established within the Group IT team at Optus to help realise the vision of becoming a customer-centric organisation, driven by a data and analytics capability that enhances customer interactions and revenue generation.

The Big Data Engineer is responsible for the development and automation of Big Data ingestion, transformation and consumption services; adopting new technology; and ensuring modern operations to deliver consumer-driven Big Data solutions.

The role

  • Implement requests for the ingestion, creation, and preparation of data sources
  • Develop and execute jobs to import data periodically or in (near) real time from external sources
  • Set up streaming data sources to ingest data into the platform
  • Deliver the data sourcing approach and data sets for analysis, including data staging, ETL, data quality, and archiving
  • Design a solution architecture to meet business, technical and user requirements
  • Profile source data and validate that it is fit for purpose
  • Work with the Delivery Lead and Solution Architect to agree on pragmatic means of data provision to support use cases
  • Understand and document end-user usage models and requirements
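The source-data profiling mentioned above can be sketched in miniature. This is purely illustrative and not Optus's tooling: the field names and CSV sample are hypothetical, and at platform scale the same checks would typically run in Spark against Hive/HDFS tables rather than in plain Python.

```python
import csv
import io
from collections import Counter

def profile_source(csv_text):
    """Profile a delimited source: per-column row, null, and distinct counts.

    A toy stand-in for "profile source data and validate fit-for-purpose".
    Treats empty strings as nulls and reports the most common non-null value.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    profile = {}
    columns = rows[0].keys() if rows else []
    for col in columns:
        values = [r[col] for r in rows]
        non_null = [v for v in values if v not in ("", None)]
        profile[col] = {
            "rows": len(values),
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
            "top": Counter(non_null).most_common(1),
        }
    return profile

# Hypothetical sample: one customer record is missing its plan.
sample = "customer_id,plan\n1,prepaid\n2,postpaid\n3,\n4,prepaid\n"
print(profile_source(sample))
```

In practice this kind of profile (null ratios, cardinality, dominant values) is what drives the fit-for-purpose conversation with the Delivery Lead before ingestion jobs are built.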

The perks

We offer all kinds of benefits, such as:

  • Onsite facilities at Macquarie Park such as a Gym, GP, Mini-Mart, Cafes
  • Training, Mentoring and further learning opportunities
  • Staff buses to Epping and Wynyard, and back again

About you

Preferred skills and experience include:

  • Bachelor’s degree in mathematics, statistics, computer science, information management, finance or economics
  • 8+ years’ experience working in Data Engineering and Warehousing
  • 3–5 years’ experience integrating data into analytical platforms
  • Experience in ingestion technologies (e.g. Sqoop, NiFi, Flume), processing technologies (Spark/Scala) and storage (e.g. HDFS, HBase, Hive)
  • Experience in data profiling, source-target mappings, ETL development, SQL optimisation, testing and implementation
  • Expertise in streaming frameworks (Kafka/Spark Streaming/Storm) is essential
  • Experience in building microservices, REST APIs, and Data-as-a-Service architectures
  • Experience managing structured and unstructured data types
  • Experience in requirements engineering, solution architecture, design, and development / deployment
  • Experience in creating big data or analytics IT solutions
  • Track record of implementing databases and data access middleware and high-volume batch and (near) real-time processing

You are a self-starter with the ability to work independently and multitask across several activities with critical deadlines in a high-pressure environment.
