Must-Have:
1. Minimum of 5 years of experience developing Spark applications in Scala
2. Experience designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
3. Strong experience writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing (see the Spark sketch after this list)
4. Experience writing and optimizing complex Hive and SQL queries over large datasets; comfortable with UDFs, tables, joins, and views (see the Hive query sketch after this list)
5. Experience debugging Spark code
6. Working knowledge of basic UNIX commands and shell scripting
7. Experience with Autosys and Gradle
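
For illustration only, here is a minimal Spark Scala sketch of the batch and streaming work described in item 3. The HDFS paths, Kafka broker, topic, and column names are hypothetical placeholders, not details of the role; the streaming part assumes the spark-sql-kafka connector is on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BatchAndStreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("batch-and-streaming-sketch")
      .getOrCreate()

    // Batch: read Parquet from HDFS, aggregate, write back.
    // Paths and column names (order_date, amount) are assumed examples.
    val orders = spark.read.parquet("hdfs:///data/orders")
    val dailyTotals = orders
      .groupBy(col("order_date"))
      .agg(sum(col("amount")).as("total_amount"))
    dailyTotals.write.mode("overwrite").parquet("hdfs:///data/daily_totals")

    // Streaming: windowed counts over a Kafka source with Structured Streaming.
    // The broker address and topic below are hypothetical.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
    val counts = events
      .groupBy(window(col("timestamp"), "5 minutes"))
      .count()
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
    query.awaitTermination()
  }
}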
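And a minimal sketch of the Hive/Spark SQL work in item 4: registering a Scala function as a Spark SQL UDF and running a join-and-aggregate query against Hive tables. The sales.orders and sales.customers tables, their columns, and the normalize_region UDF are assumed examples.

import org.apache.spark.sql.SparkSession

object HiveQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-query-sketch")
      .enableHiveSupport() // requires a Hive metastore to be configured
      .getOrCreate()

    // Register a simple Scala function as a Spark SQL UDF.
    spark.udf.register("normalize_region",
      (r: String) => Option(r).map(_.trim.toUpperCase).orNull)

    // Join two Hive tables and aggregate; table and column names are hypothetical.
    val result = spark.sql("""
      SELECT normalize_region(c.region) AS region,
             SUM(o.amount)              AS total_amount
      FROM   sales.orders o
      JOIN   sales.customers c
        ON   o.customer_id = c.customer_id
      WHERE  o.order_date >= '2024-01-01'
      GROUP  BY normalize_region(c.region)
    """)

    result.show()
    spark.stop()
  }
}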