Big Data Engineer: Bachelor's or Master's degree in Computer Science, Engineering, Software Engineering, or a related field.
Around 8-10 years of software development experience building large-scale distributed data processing systems/applications, data engineering pipelines, or large-scale internet systems.
At least 4 years of experience developing or leading enterprise-scale Big Data solutions, including at least one end-to-end implementation.
Strong programming experience in Java/J2EE and Scala.
Good experience with Spark, Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases (a minimal sketch of this stack follows the list).
Experience with batch processing and AutoSys job scheduling and monitoring.
Performance analysis, troubleshooting, and resolution, including familiarity with and investigation of Cloudera/Hadoop logs.
Work with Cloudera on open issues that require cluster configuration changes, and implement those changes as needed.
Strong experience with data stores such as SQL databases, Hive, Elasticsearch, HBase, etc.
Knowledge of Hadoop security, data management, and governance.
Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD, etc.
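For context on the kind of work this stack implies, below is a minimal, illustrative Scala/Spark sketch of a streaming ingest from Confluent Kafka into an HDFS-backed, Hive-registered table. The broker address, topic name, JSON schema, and paths are placeholder assumptions, not a description of any actual production pipeline; it assumes Spark 2.4+/3.x with the spark-sql-kafka-0-10 connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object OrdersIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-ingest")
      .enableHiveSupport()          // allows the table to be registered in the Hive metastore
      .getOrCreate()

    // Expected JSON payload on the Kafka topic (illustrative schema)
    val schema = new StructType()
      .add("order_id", StringType)
      .add("amount", DoubleType)
      .add("event_ts", TimestampType)

    // Read a stream from a Kafka topic (broker and topic are placeholders)
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .option("startingOffsets", "latest")
      .load()

    // Parse the Kafka value column and derive a date partition column
    val parsed = raw
      .select(from_json(col("value").cast("string"), schema).as("o"))
      .select("o.*")
      .withColumn("dt", to_date(col("event_ts")))

    // Append micro-batches as partitioned Parquet under a placeholder HDFS path
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "/data/ods/orders_raw")
      .option("checkpointLocation", "/chk/orders_raw")
      .partitionBy("dt")
      .start()

    query.awaitTermination()
  }
}
```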