Lead II Data Engineering

KA, IN, India

Job Description

7 - 9 Years
1 Opening
Bangalore


Role description




Role Proficiency:



This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. The position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
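
For illustration only, a minimal PySpark sketch of the ingest-wrangle-join pattern this role works with; the file paths, column names, and output location below are hypothetical placeholders, not project specifics.

# Illustrative only: hypothetical sources, columns, and output path.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest two hypothetical sources (CSV and JSON).
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
customers = spark.read.json("s3://example-bucket/raw/customers/")

# Wrangle/transform: cast types, drop incomplete rows.
orders_clean = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "customer_id", "amount"])
)

# Join and aggregate into a reporting-friendly shape.
daily_revenue = (
    orders_clean.join(customers, on="customer_id", how="left")
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_segment")
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("order_id").alias("orders"))
)

# Land the result in Parquet, partitioned by date (a Delta write would be analogous).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)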



Outcomes:



Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and by reusing proven solutions. Support the Project Manager in day-to-day project execution and account for the developmental activities of others.

Interpret requirements and create optimal architecture and design solutions in accordance with specifications.

Document and communicate milestones/stages for end-to-end delivery.

Code using best standards, and debug and test solutions to ensure best-in-class quality.

Tune code performance and align it with the appropriate infrastructure, understanding the cost implications of licenses and infrastructure.

Create data schemas and models effectively.

Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes (a brief sketch follows this list).

Validate results with user representatives, integrating the overall solution.

Influence and enhance customer satisfaction and employee engagement within project teams.
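
As referenced above, a brief sketch of defining an explicit schema and landing it as a Delta table; the paths and columns are hypothetical, and it assumes a Spark environment with the delta-spark extension configured.

# Illustrative only: hypothetical columns and paths; requires the Delta Lake extension.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("schema_demo").getOrCreate()

# An explicit schema keeps ingestion deterministic instead of relying on inference.
schema = StructType([
    StructField("customer_id", StringType(), nullable=False),
    StructField("customer_segment", StringType(), nullable=True),
    StructField("signup_date", DateType(), nullable=True),
    StructField("lifetime_value", DoubleType(), nullable=True),
])

customers = spark.read.schema(schema).option("header", True).csv(
    "s3://example-bucket/raw/customers/"
)

# Write as a Delta table partitioned by segment.
customers.write.format("delta").mode("overwrite").partitionBy("customer_segment").save(
    "s3://example-bucket/lake/customers/"
)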


Measures of Outcomes:



Adherence to engineering processes and standards

Adherence to schedule / timelines

Adherence to SLAs where applicable

Number of defects post delivery

Number of non-compliance issues

Reduction in recurrence of known defects

Quick turnaround of production bugs

Completion of applicable technical/domain certifications

Completion of all mandatory training requirements

Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)

Average time to detect, respond to, and resolve pipeline failures or data issues

Number of data security incidents or compliance breaches


Outputs Expected:



Code:



Develop data processing code with guidance, ensuring performance and scalability requirements are met.

Define coding standards, templates, and checklists.

Review code for team and peers.



Documentation:



Create/review templates, checklists, guidelines, and standards for design/process/development.

Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.




Configure:



Define and govern the configuration management plan.

Ensure compliance from the team.



Test:



Review/create unit test cases, scenarios, and execution.


Review test plans and strategies created by the testing team.

Provide clarifications to the testing team.



Domain Relevance:



Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs.


Learn more about the customer domain and identify opportunities to add value.

Complete relevant domain certifications.



Manage Project:



Support the Project Manager with project inputs.

Provide inputs on project plans or sprints as needed.

Manage the delivery of modules.



Manage Defects:



Perform defect root cause analysis (RCA) and mitigation.

Identify defect trends and implement proactive measures to improve quality.



Estimate:



Create and provide input for effort and size estimation, and plan resources for projects.




Manage Knowledge:



Consume and contribute to project-related documents, SharePoint, libraries, and client universities.


Review reusable documents created by the team.



Release:



Execute and monitor the release process.



Design:



Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.




Interface with Customer:



Clarify requirements and provide guidance to the Development Team.

Present design options to customers.

Conduct product demos.

Collaborate closely with customer architects to finalize designs.



Manage Team:



Set FAST goals and provide feedback.

Understand team members' aspirations and provide guidance and opportunities.

Ensure team members are upskilled.

Engage the team in projects.

Proactively identify attrition risks and collaborate with BSE on retention measures.



Certifications:



Obtain relevant domain and technology certifications.


Skill Examples:



Proficiency in SQL, Python, or other programming languages used for data manipulation.

Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.

Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).

Conduct tests on data pipelines and evaluate results against data quality and performance specifications (a brief example follows this list).

Experience in performance tuning.

Experience in data warehouse design and cost improvements.

Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.

Communicate and explain design/development aspects to customers.

Estimate time and resource requirements for developing/debugging features/components.

Participate in RFP responses and solutioning.

Mentor team members and guide them in relevant upskilling and certification.
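
As referenced above, a brief example of lightweight data-quality checks run against a pipeline output; the table location, column names, and thresholds are hypothetical.

# Illustrative only: hypothetical table location, columns, and thresholds.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/curated/daily_revenue/")

# Completeness: key columns must not contain nulls.
null_keys = df.filter(F.col("order_date").isNull() | F.col("customer_segment").isNull()).count()

# Validity: revenue should never be negative.
negative_rows = df.filter(F.col("revenue") < 0).count()

# Volume: guard against an empty or suspiciously small load.
row_count = df.count()

failures = []
if null_keys > 0:
    failures.append(f"{null_keys} rows with null keys")
if negative_rows > 0:
    failures.append(f"{negative_rows} rows with negative revenue")
if row_count < 1000:  # hypothetical minimum expected volume
    failures.append(f"only {row_count} rows loaded")

if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))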


Knowledge Examples:



Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.

Proficient in SQL for analytics and windowing functions (illustrated after this list).

Understanding of data schemas and models.

Familiarity with domain-related data.

Knowledge of data warehouse optimization techniques.

Understanding of data security concepts.

Awareness of patterns, frameworks, and automation practices.
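
As noted above, an illustrative analytic query using window functions, expressed through spark.sql(); the table and columns are hypothetical.

# Illustrative only: hypothetical table and columns.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window_demo").getOrCreate()

spark.read.parquet("s3://example-bucket/curated/daily_revenue/").createOrReplaceTempView("daily_revenue")

ranked = spark.sql("""
    SELECT
        customer_segment,
        order_date,
        revenue,
        -- rank each day within its segment by revenue
        RANK() OVER (PARTITION BY customer_segment ORDER BY revenue DESC) AS revenue_rank,
        -- 7-row trailing moving average per segment
        AVG(revenue) OVER (
            PARTITION BY customer_segment
            ORDER BY order_date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS revenue_7d_avg
    FROM daily_revenue
""")

ranked.show(10, truncate=False)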


Additional Comments:



Job Title: Tech Lead - Data Engineering
Location: [Specify - e.g., Pune / Bengaluru / Hybrid]
Experience: 8-12 years
Employment Type: Full-time

Role Summary

We are looking for an experienced Tech Lead - Data Engineering to design, lead, and deliver scalable data solutions in a modern cloud environment. The ideal candidate will have deep hands-on expertise in ETL/ELT development, data lake architecture, and data warehousing, along with strong command over AWS data services, Python, and Spark/Databricks. The candidate will act as a technical lead and mentor, guiding a team of 3-7 engineers, ensuring delivery excellence, and aligning technical execution with architectural best practices and organizational data strategy.

Key Responsibilities

o Lead the end-to-end design and delivery of modern data engineering solutions, ensuring performance, scalability, and reliability.
o Architect and develop ETL/ELT pipelines using tools such as AWS Glue, DBT, and Airflow, integrating multiple structured and semi-structured data sources (a brief orchestration sketch follows this section).
o Design and maintain data lakes and data warehouse environments on AWS (S3, Redshift, Athena, Glue).
o Build and optimize Spark / Databricks jobs for large-scale data transformation and processing.
o Define and enforce best practices in coding, version control, testing, CI/CD, and data quality management.
o Oversee infrastructure setup and automation using Terraform, Kubernetes, and Docker for data environments.
o Collaborate closely with data architects, analysts, and business stakeholders to translate business needs into robust data models and pipelines.
o Manage and mentor a team of 3-7 engineers, conducting technical reviews, workload planning, and skill development.
o Monitor, troubleshoot, and optimize data pipelines in production to ensure reliability and SLAs.
o Drive continuous improvement initiatives for pipeline automation, observability, and cost optimization.

Technical Skills and Tools

Core Technical Expertise:
o Programming: Python (preferred), SQL, and scripting for data transformation and automation.
o ETL/ELT & Orchestration: AWS Glue, DBT, Airflow, Step Functions.
o Cloud Platforms: AWS (S3, Glue, Lambda, Redshift, Athena, EMR); exposure to Azure Data Services a plus.
o Data Processing: Apache Spark, Databricks.
o Databases: PostgreSQL, Snowflake, MongoDB.
o CI/CD & DevOps: GitHub Actions, CircleCI, Jenkins, with automation via Terraform and Docker.
o Infrastructure Management: Kubernetes, Terraform, CloudFormation.
o Data Modeling & Warehousing: Dimensional modeling, partitioning, and schema design.

Good-to-Have:
o Exposure to streaming data platforms like Kafka or Kinesis.
o Familiarity with data governance, metadata management, and data cataloging tools.
o Experience with cost optimization and performance tuning on cloud environments.
o Knowledge of DevOps for data engineering and infrastructure automation best practices.

Leadership & Soft Skills

o Proven ability to lead and mentor engineering teams, fostering a collaborative and growth-oriented culture.
o Strong analytical and problem-solving mindset with a focus on delivery ownership.
o Effective communication and stakeholder management, bridging technical and business domains.
o Hands-on leadership with a proactive approach to technical challenges.
o Strong organization skills with the ability to manage multiple priorities in an agile setup.

Education & Experience

o Bachelor's or Master's in Computer Science, Information Technology, or a related field.
o 8-12 years of experience in data engineering, with at least 2-3 years in a technical lead or architect role.
o Proven track record of delivering data platforms and pipelines using AWS and open-source technologies.

Preferred Qualifications

o AWS Certified Data Analytics - Specialty, Solutions Architect - Associate, or equivalent.
o Experience leading data platform modernization or migration initiatives.
o Background in implementing CI/CD pipelines for data workflows and data infrastructure automation.
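
As referenced in the responsibilities above, a minimal Airflow DAG sketch (Airflow 2.4+ syntax assumed) showing how a daily transformation step might be orchestrated; the DAG id and task body are hypothetical placeholders.

# Illustrative only: hypothetical DAG id, schedule, and task body (Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_transform(**context):
    # Placeholder for the real transformation (e.g. triggering a Glue job,
    # a dbt run, or a Spark submit); here it only logs the logical date.
    print(f"Running transform for {context['ds']}")


with DAG(
    dag_id="example_daily_revenue",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    transform = PythonOperator(
        task_id="transform_daily_revenue",
        python_callable=run_transform,
    )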


Skills




Python, ETL, SQL, AWS



About UST




UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.



Job Detail

  • Job Id
    JD4478305
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type:
    Full Time
  • Salary:
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    KA, IN, India
  • Education
    Not mentioned
  • Experience
    Year