Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
About Tredence
Tredence focuses on last-mile delivery, turning powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. Some of the largest companies across industries engage us to deploy their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia.
Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees.
Visit our website for more details.
Role Overview
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP.
As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities
Develop robust and scalable data pipelines using PySpark on cloud services such as Azure Databricks or GCP Dataflow (a brief sketch follows this list).
Write optimized SQL queries for data transformation, analysis, and validation.
Implement and support data warehouse models and principles, including:
Fact and Dimension modeling
Star and Snowflake schemas
Slowly Changing Dimensions (SCD)
Change Data Capture (CDC)
Medallion Architecture
Monitor, troubleshoot, and improve pipeline performance and data quality.
Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
Communicate technical updates and contribute to sprint-level delivery.
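To give candidates a feel for the day-to-day work, below is a minimal PySpark sketch of a medallion-style bronze-to-silver load with a Slowly Changing Dimension (Type 2) merge. It assumes a Databricks/Delta Lake runtime; every table, path, and column name (bronze.customers_raw, silver.customers, address, updated_at) is illustrative, not part of the role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver-customer-load").getOrCreate()

# Bronze -> silver: cleanse and deduplicate raw ingested records.
updates = (
    spark.read.table("bronze.customers_raw")
    .filter(F.col("customer_id").isNotNull())
    .dropDuplicates(["customer_id", "updated_at"])
    .withColumn("effective_from", F.col("updated_at"))
)

# SCD Type 2 needs two actions for a changed customer: close the current
# row and insert the new version. A single MERGE cannot do both for one
# source row, so changed rows are staged twice -- once with a NULL merge
# key (forcing the INSERT branch) and once keyed normally.
current = spark.read.table("silver.customers").filter("is_current = true")
changed = (
    updates.alias("s")
    .join(current.alias("c"), "customer_id")
    .filter("s.address <> c.address")
    .select("s.*")
)
staged = (
    changed.withColumn("merge_key", F.lit(None).cast("string"))
    .unionByName(updates.withColumn("merge_key", F.col("customer_id").cast("string")))
)
staged.createOrReplaceTempView("staged_updates")

spark.sql("""
    MERGE INTO silver.customers AS tgt
    USING staged_updates AS src
      ON tgt.customer_id = src.merge_key AND tgt.is_current = true
    WHEN MATCHED AND tgt.address <> src.address THEN
      UPDATE SET is_current = false, effective_to = src.effective_from
    WHEN NOT MATCHED THEN
      INSERT (customer_id, address, effective_from, effective_to, is_current)
      VALUES (src.customer_id, src.address, src.effective_from, NULL, true)
""")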
Mandatory Skills
Strong hands-on experience with SQL and Python
Working knowledge of PySpark for data transformation
Exposure to at least one cloud platform (Azure or GCP)
Good understanding of data engineering and warehousing fundamentals
Excellent debugging and problem-solving skills
Strong written and verbal communication skills
Preferred Skills
Experience working with Databricks Community Edition or enterprise version
Familiarity with data orchestration tools like Airflow or Azure Data Factory
Exposure to CI/CD processes and version control (e.g., Git)
Understanding of Agile/Scrum methodology and collaborative development
Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
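As a small illustration of the semi-structured data handling mentioned above, the PySpark sketch below flattens nested JSON and persists it as partitioned Parquet; the input fields (event_id, event_ts, payload.user_id) and paths are placeholders assumed for illustration only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Spark infers a nested struct schema from the JSON documents.
events = spark.read.json("/data/raw/events/")

flat = events.select(
    "event_id",
    F.to_date("event_ts").alias("event_date"),   # derive a date for partitioning
    F.col("payload.user_id").alias("user_id"),   # promote nested field to a column
)

# Columnar Parquet output, partitioned by date for downstream pruning.
flat.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events/")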