Location: Bangalore / Noida / Gurgaon
# Experience in Big Data distributed ecosystems (Hadoop, PySpark, Hive), working with large amounts of data
# Experience using Python in a data engineering context - data transformations, data wrangling, ETL, API interaction
# Excellent knowledge of SQL (optimizations, complex aggregations, performance tuning) and relational databases
# Experience building data processing frameworks and big data pipelines
Job Requirements: Hadoop, PySpark, Hive, Python, Apache Spark, Programming Development, ETL Pipeline
Job Requirements: Hive, Hadoop, Python, PySpark, Data Analysis, ETL Pipeline
Job Type
SUBCONTRACT
Location
NOIDA
Mandatory Skills