Kadel Labs is a leading IT services company that has delivered top-quality technology solutions since 2017, with a focus on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.
Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur, Kolkata
We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.
Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
Collaborate with data analysts, data scientists, and product teams to understand data needs
Optimize queries and data models for performance and reliability
Integrate data from various sources, including APIs, internal databases, and third-party systems
Monitor and troubleshoot data pipelines to ensure data quality and integrity
Document processes, data flows, and system architecture
Participate in code reviews and contribute to a culture of continuous improvement
Required Skills:
4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
Strong command of SQL for data transformation and performance tuning
Experience with Python and related data tooling (e.g., pandas, Spark, Azure Data Factory)
Solid understanding of ETL/ELT processes and data pipeline orchestration
Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
Basic programming skills
Excellent problem-solving skills and a passion for clean, efficient data systems
Preferred Skills:
Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
Exposure to enterprise data platforms (e.g., Databricks, Azure Synapse)
Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
Background in real-time data streaming and event-driven architectures
Understanding of data governance, security, and compliance best practices
Prior experience working in an agile development environment
Educational Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.