1) Strong data engineering skills using Azure and PySpark
2) Good knowledge of SQL
3) Preferred experience in big data / Hadoop technologies such as Spark, Hive, HBase, and Kafka
4) Preferred experience in ETL processes
5) Good communication skills
Desired Experience Range: 5–10 years
Location of Requirement: Perth, Australia
Desired Competencies (Technical / Behavioral Competency)
Must-Have
1. Strong data engineering skills using Azure and PySpark (or Databricks, or Hadoop / Spark using Java / Scala)
2. Experience in Azure Data Factory and other Azure services
3. Experience in loading and transforming data using Spark or other big data technologies (Hive, Kafka, HBase, Spark, or Storm)
4. Very good SQL knowledge
Good-to-Have
1. ETL process experience on any cloud or on-premise big data platform