Minimum 2 years of experience in application development.
Strong hands-on experience with Apache Spark, Scala, and Spark SQL for distributed data processing.
Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
Familiarity with other Big Data technologies, including Apache Flume, Oozie, and NiFi.
Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data.
Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL.
Knowledge of data warehousing concepts, dimensional modeling, and data lakes.
Ability to troubleshoot and optimize Spark and Cloudera platform performance.