Design, code, and enhance custom software components across systems and applications, using modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Position Overview
We are seeking a skilled and motivated Data Engineer to join our dynamic analytics and data transformation team. The ideal candidate will have hands-on experience with Palantir Foundry, PySpark, Python, and advanced SQL. As a key contributor, you will help design, develop, and optimize data solutions that empower business stakeholders and drive impactful data-driven decisions.

Key Responsibilities
- Develop and maintain robust data pipelines and workflows within the Palantir Foundry platform.
- Leverage PySpark and Python for large-scale data processing, transformation, and integration across diverse data sources.
- Design, implement, and optimize advanced SQL queries for data extraction, manipulation, and analysis.
- Collaborate closely with business analysts, data scientists, and other engineers to translate business requirements into scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Troubleshoot, debug, and optimize data workflows for performance and reliability.
- Document architecture, processes, and best practices for ongoing support and knowledge sharing.

Required Skills & Qualifications
- Palantir Foundry: Proven experience building data pipelines, managing datasets, and deploying applications within Foundry. Specific requirements are as follows:
  - Practical skills required to build and maintain production-grade data pipelines, data connections, and ontologies.
  - General knowledge of platform capabilities and of the specific applications within the Foundry suite that are useful for performing the job of data engineer.
  - Data pipeline development in Foundry: develop transforms on structured (tabular) and unstructured datasets, applying best practices when building data pipelines (an illustrative sketch appears at the end of this posting).
  - Data pipeline maintenance in Foundry: effectively investigate and fix common issues in data pipelines; contribute to logic changes and performance improvements in transform pipelines feeding mission-critical workflows; familiarity with recommended support structures.
  - Data connection and integration in Foundry: familiarity with the architecture and capabilities of Data Connection; set up sources and syncs ingesting tabular data or raw files from external systems into Foundry.
  - Ontology design and development in Foundry: provide data engineering context during ontology design and implement pipelines backing ontology objects and links based on application requirements.
- Spark/PySpark: Strong expertise in distributed data processing and transformation using Spark/PySpark.
- Python: Proficiency in Python for scripting, automation, and data wrangling tasks.
- SQL: Advanced skills in writing efficient SQL queries for complex data manipulation and reporting.
- Familiarity with data modeling, ETL processes, and best practices in data engineering.
- Experience working with large and complex data sets from multiple sources.
- Excellent problem-solving, communication, and collaboration skills.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- Prior experience in a data engineering or analytics-focused role within a fast-paced environment.
- Knowledge of additional big data technologies or cloud platforms (e.g., AWS, Azure, Databricks) is a plus.
- Certification in Palantir Foundry (Foundational/Professional) will be an added advantage.
Education: 15 years of full-time education.
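For illustration only (not part of the role description): below is a minimal sketch of the kind of Foundry data pipeline work described above, written as a PySpark transform against Foundry's Python transforms API. The dataset paths, column names, and aggregation logic are hypothetical placeholders, not details of this position's actual pipelines.

# Minimal sketch of a Foundry PySpark transform.
# Assumptions: the dataset paths and column names below are hypothetical
# examples; a real pipeline would follow the team's naming, data-quality,
# and review conventions.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Company/analytics/clean/orders_daily_summary"),  # hypothetical output dataset
    orders=Input("/Company/raw/orders"),                       # hypothetical source dataset
)
def orders_daily_summary(orders):
    """Filter out cancelled orders and aggregate order count and revenue per day."""
    return (
        orders
        .filter(F.col("status") != "CANCELLED")
        .withColumn("order_date", F.to_date("order_timestamp"))
        .groupBy("order_date")
        .agg(
            F.countDistinct("order_id").alias("order_count"),
            F.sum("amount").alias("total_revenue"),
        )
    )

In Foundry, a transform like this is checked into a code repository, built by the platform, and scheduled so the output dataset stays in sync with its upstream sources; the same logic could equally be expressed as an advanced SQL query where that better fits the pipeline.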