CAI is a global technology services firm with over 8,500 associates worldwide and annual revenue of more than $1 billion. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
Job Summary
We are seeking a motivated Data Engineer with strong experience building cloud-based data lake and analytics architectures on AWS and Databricks, and proficiency in Python for data processing and automation. This is a full-time, remote position.
What You'll Do
Design, develop, and maintain data lakes and data pipelines on AWS using ETL frameworks and Databricks.
Integrate and transform large-scale data from multiple heterogeneous sources into a centralized data lake environment.
Implement and manage transactional data lake architectures using Delta Lake (Databricks Delta) or Apache Hudi.
Develop end-to-end data workflows using PySpark, Databricks Notebooks, and Python scripts for ingestion, transformation, and enrichment.
Design and develop data warehouses and data marts for analytical workloads using Snowflake, Redshift, or similar systems.
Design and evaluate data models (Star, Snowflake, Flattened) for analytical and transactional systems.
Optimize data storage, query performance, and cost across the AWS and Databricks ecosystem.
Build and maintain CI/CD pipelines for Databricks notebooks, jobs, and Python-based data processing scripts.
Collaborate with data scientists, analysts, and stakeholders to deliver high-performance, reusable data assets.
Maintain and manage code repositories (Git) and promote best practices in version control, testing, and deployment.
Participate in making major technical and architectural decisions for data engineering initiatives.
Monitor and troubleshoot Databricks clusters, Spark jobs, and ETL processes for performance and reliability.
Coordinate with business and technical teams through all phases of the software development life cycle.
What You'll Need
5+ years of experience building and managing data lake architectures on AWS.
3+ years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift.
3+ years of experience building data warehouses on Snowflake, Redshift, HANA, Teradata, or Exasol.
3+ years of hands-on experience with Apache Spark or PySpark on Databricks.
3+ years of experience implementing transactional data lakes using Delta Lake (Databricks Delta) or Apache Hudi.
3+ years of experience in ETL development using Databricks, AWS Glue, or other modern frameworks.
Proficiency in Python for data engineering, automation, and API integrations.
Experience in Databricks Jobs, Workflows, and Cluster Management.
Experience with CI/CD pipelines and Infrastructure as Code (IaC) tools like Terraform or CloudFormation is a plus.
Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
Physical Demands
This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc.
Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.
Reasonable Accommodation Statement
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.