The role requires a hands-on technologist with expertise in Data Engineering and a strong programming background in Hadoop, Hive, Spark, and Scala.
Should have hands-on knowledge of the GCP cloud platform.
Provide technical leadership and play a hands-on implementation role across data engineering, including data ingestion, data access, data modelling, data processing, design, and implementation.
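As a rough illustration of the GCP-plus-Spark ingestion work these responsibilities imply, the sketch below reads a BigQuery table and lands a curated copy in Cloud Storage. It assumes the open-source spark-bigquery connector is on the classpath (standard on Dataproc); the project, dataset, table, and bucket names are hypothetical placeholders.

```scala
// Hedged sketch: reading from BigQuery with Spark on GCP (e.g. on Dataproc).
// Assumes the spark-bigquery connector is available; all names are placeholders.
import org.apache.spark.sql.SparkSession

object BigQueryIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bigquery-ingest") // hypothetical job name
      .getOrCreate()

    // "my-project.sales.transactions" is a hypothetical table reference.
    val txns = spark.read
      .format("bigquery")
      .option("table", "my-project.sales.transactions")
      .load()

    // Land a filtered copy in Cloud Storage as Parquet
    // (the GCS connector, standard on Dataproc, handles gs:// paths;
    // the bucket name is a placeholder).
    txns.filter("amount > 0")
      .write.mode("overwrite")
      .parquet("gs://my-bucket/curated/transactions")

    spark.stop()
  }
}
```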
Key Skills/Knowledge:
Must have knowledge of Big Data architecture patterns and experience delivering Big Data and Hadoop ecosystem solutions.
Should have experience with SQL and data warehousing.
Expert in programming languages such as Java and Scala, with strong hands-on Hadoop experience.
Expert in at least one distributed data processing framework, such as Spark (Core, Streaming, SQL), Storm, or Flink (see the sketch after this list).
Should have worked with orchestration tools such as Oozie, Airflow, Control-M, or similar, as well as Kubernetes.
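A minimal sketch of the Spark Structured Streaming expertise listed above: a running word count over a socket source, in Scala. The host and port are demo placeholders, not a production configuration.

```scala
// Hedged sketch: Spark Structured Streaming word count (demo source/sink only).
import org.apache.spark.sql.SparkSession

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-wordcount") // hypothetical job name
      .getOrCreate()
    import spark.implicits._

    // Read a text stream from a socket; host and port are placeholders.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    // Split each line into words and keep a running count per word.
    val counts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy($"value")
      .count()

    // Continuously print the full set of running counts to the console.
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```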
Preferred Experience and Knowledge:
Excellent understanding of the data technologies landscape and ecosystem.
Certification in at least one major cloud platform (GCP, AWS, or Azure).