- 4-6 years of experience developing in Informatica Big Data Management (BDM)/Data Engineering Integration (DEI) and Informatica PowerCenter.
- Experience working with Informatica BDM for data clustering.
- Proficient in ingesting, processing, and extracting data to and from Hadoop.
- Experienced in ETL, SQL, Sqoop, Spark, Hive, Hadoop, and Kafka.
- Knowledge of PySpark highly preferred.
- Skilled in delivering data pipelines using Informatica on Big Data Clusters.
- Hands-on experience extracting unstructured and semi-structured data using the Data Processor transformation.
- Thorough knowledge of designing and creating complex data models.
- Strong programming skills in SQL.
- Expertise in data architecture, workflow design, and building ingestion frameworks.
- Proficient in working with enterprise big data distributions such as Cloudera.
- Exposure to big data components such as HDFS, HiveQL, Spark, HBase, and Impala is essential.
- Responsible for building and optimizing data pipeline architectures and data sets using Informatica BDM.
- Design, develop, and customize mappings/workflows for different types of loads using Informatica BDM.
- Use Informatica BDM tool for data ingestion, data profiling, and data quality.
- Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver solutions.
- Support and play a pivotal role in building the data lake on Cloudera for Enterprise analytics.
- Hands-on experience with Spark and Kafka in addition to the Informatica platform.
- Design best approaches for data movement from different sources to HDFS.
- Ensure optimum performance of the Informatica mappings and workflows.
- Collaborate with cross-functional teams including business stakeholders, solution architects, and data engineers.
- Bachelor's degree in Computer Science or equivalent; Master's degree preferred.