Design, develop, and deploy data pipelines, including ETL processes, using the Apache Spark framework (a minimal sketch follows this list).
Monitor, manage, validate, and test (including with synthetic data) the extraction, movement, transformation, loading, normalization, cleansing, and updating of data in product development.
Coordinate with stakeholders to understand their needs and deliver with a focus on quality, reuse, consistency, and security.
Collaborate with team members on data models and schemas.
Collaborate with team members to document source-to-target mappings.
Conceptualize and visualize frameworks.
Communicate effectively with various stakeholders.
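For illustration, here is a minimal PySpark sketch of the kind of ETL pipeline described above; the S3 paths and the `customer_id`/`email` columns are hypothetical, not taken from this role's actual systems:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-etl").getOrCreate()

# Extract: read raw CSV records (hypothetical source path).
raw = spark.read.csv("s3://raw-bucket/customers/", header=True, inferSchema=True)

# Transform: normalize and cleanse -- trim and lowercase emails,
# drop duplicate customers, and discard rows missing the key.
clean = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .dropDuplicates(["customer_id"])
       .filter(F.col("customer_id").isNotNull())
)

# Load: write the cleansed data as Parquet (hypothetical target path).
clean.write.mode("overwrite").parquet("s3://curated-bucket/customers/")
```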
What we're looking for
--------------------------
Basic Qualifications
Bachelor's degree in computer science or a related field
3 years of relevant experience in ETL processing/data architecture, or equivalent education and experience
3+ years of experience working with big data technologies on AWS/Azure/GCP
2+ years of experience with the Apache Spark/Databricks framework (Python/Scala)
Databricks and AWS developer/architect certifications are a big plus
Other Qualifications
Strong project planning and estimating skills related to your area of expertise
Strong communication skills
Good leadership skills to guide and mentor less experienced personnel
Ability to be a high-impact player on multiple simultaneous engagements
Ability to think strategically, balancing long and short-term priorities
What you should expect in this role
---------------------------------------
Working environment: Remote