Associate Data Engineer

Bangalore, Karnataka, India

Job Description
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
About Tredence
Tredence focuses on the last mile of analytics delivery, turning powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries engage with us to deploy their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia.
Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees.
Visit our website for more details.
Role Overview
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP.
As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities
  • Develop robust, scalable data pipelines using PySpark on cloud platforms such as Azure Databricks or GCP Dataflow (a short illustrative sketch follows this list).
  • Write optimized SQL queries for data transformation, analysis, and validation.
  • Implement and support data warehouse models and principles, including:
      • Fact and Dimension modeling
      • Star and Snowflake schemas
      • Slowly Changing Dimensions (SCD)
      • Change Data Capture (CDC)
      • Medallion Architecture
  • Monitor, troubleshoot, and improve pipeline performance and data quality.
  • Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
  • Communicate technical updates and contribute to sprint-level delivery.
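
To make the responsibilities above concrete, here is a minimal, hedged PySpark sketch of a medallion-style pipeline step: raw "bronze" orders are cleaned, validated, and joined to a customer dimension to build a "silver" fact table. All paths, table names, and columns are hypothetical examples for illustration only, not actual Tredence or client systems.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline_sketch").getOrCreate()

# Read raw (bronze) orders and a conformed customer dimension (hypothetical paths)
bronze_orders = spark.read.parquet("/lake/bronze/orders")
dim_customer = spark.read.parquet("/lake/silver/dim_customer")

# Clean, validate, and enrich into a silver-layer fact table
silver_orders = (
    bronze_orders
    .dropDuplicates(["order_id"])                        # basic data-quality rule
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize the event timestamp
    .filter(F.col("order_amount") > 0)                    # simple validation filter
    .join(dim_customer, on="customer_id", how="left")     # enrich with dimension attributes
)

silver_orders.write.mode("overwrite").parquet("/lake/silver/fct_orders")

In practice the same pattern is typically wrapped in an orchestration tool (Airflow, Azure Data Factory) and extended with SCD or CDC handling on the dimension side.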
Mandatory Skills
  • Strong hands-on experience with SQL and Python
  • Working knowledge of PySpark for data transformation
  • Exposure to at least one cloud platform: Azure or GCP
  • Good understanding of data engineering and warehousing fundamentals
  • Excellent debugging and problem-solving skills
  • Strong written and verbal communication skills
Preferred Skills
  • Experience working with Databricks (Community Edition or enterprise)
  • Familiarity with data orchestration tools such as Airflow or Azure Data Factory
  • Exposure to CI/CD processes and version control (e.g., Git)
  • Understanding of Agile/Scrum methodology and collaborative development
  • Basic knowledge of handling structured and semi-structured data such as JSON and Parquet (see the short sketch after this list)
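
As a small illustration of the last point, a hedged PySpark sketch of reading semi-structured JSON, flattening nested fields, and persisting the result as Parquet; the file paths and field names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("semi_structured_sketch").getOrCreate()

# Read semi-structured JSON events; Spark infers the nested schema
events = spark.read.json("/lake/raw/events/*.json")

# Promote a couple of nested attributes to top-level columns
flattened = events.select(
    "event_id",
    F.col("payload.user_id").alias("user_id"),
    F.col("payload.items").alias("items"),
)

# Persist as columnar Parquet for downstream pipelines
flattened.write.mode("append").parquet("/lake/bronze/events")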
Skills:

  • Azure Databricks / GCP
  • Python
  • SQL
  • PySpark
About Company:
Welcome to Tredence! Since our founding in 2013, we have been dedicated to transforming data into actionable insights, helping over 50 Fortune 500 clients win in their respective industries. With our headquarters in San Jose and a presence in 5 countries, our mission is to be the world's most indispensable analytics partner. At Tredence, we blend deep domain expertise with advanced AI and data science to drive unparalleled business value. We are excited to have you join us on this innovative journey.

Job Detail

  • Job Id: JD3815804
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Bangalore, Karnataka, India
  • Education: Not mentioned
  • Experience: 12 to 20 months