Location - Bangalore / Noida / Gurgaon

# Experience in Big Data distributed ecosystems (Hadoop, PySpark, Hive), working with large amounts of data
# Experience using Python in a data engineering context - data transformations, data wrangling, ETL, API interaction
# Excellent knowledge of SQL (optimizations, complex aggregations, performance tuning) and relational databases
# Experience building data processing frameworks and big data pipelines

Job Requirements: Hadoop, PySpark, Hive, Python, Apache Spark, Programming Development, ETL Pipeline
Job Requirements: PySpark, Hadoop, Hive, Python, Database Design, Infrastructure Design Document
Job Type: Full Time
Location: BANGALORE
Mandatory Skills: