Around 5 years of IT experience, including at least 3 years in ETL/pipeline development using tools such as Azure Databricks/Apache Spark and Azure Data Factory, with development expertise in both batch and real-time data integration
Experience in programming using Python
Good knowledge of RDBMS concepts and the ability to write complex SQL queries
Experience with data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements
Ability to understand system architectures involving data lakes, data warehouses, and data marts
Experience with relational data processing technologies such as Microsoft SQL Server, Delta Lake, and Spark SQL
Experience owning end-to-end development, including coding, testing, debugging, and deployment
Extensive knowledge of ETL and data warehousing concepts, strategies, and methodologies
Experience working with structured and unstructured data
Familiarity with Azure services such as Azure Functions, Azure Data Lake Storage, and Azure Cosmos DB
Must be team-oriented with strong collaboration, prioritization, and adaptability skills
Excellent written and verbal communication skills, including presentation skills
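
As a hypothetical illustration of the "complex SQL queries" expectation above, the sketch below uses Python's built-in sqlite3 module as a lightweight stand-in for a production RDBMS such as SQL Server (the orders table and its columns are invented for this example) to combine a common table expression with a window function:

```python
import sqlite3

# In-memory database standing in for a production RDBMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES
        (1, 100, 50.0), (2, 100, 75.0), (3, 200, 20.0), (4, 200, 90.0);
""")

# CTE + window function: find each customer's highest-value order.
query = """
WITH ranked AS (
    SELECT customer_id,
           order_id,
           amount,
           RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
    FROM orders
)
SELECT customer_id, order_id, amount
FROM ranked
WHERE rnk = 1
ORDER BY customer_id;
"""
top_orders = conn.execute(query).fetchall()
print(top_orders)
```

The same CTE-plus-window-function pattern carries over directly to Spark SQL and SQL Server; note that window functions in SQLite require version 3.25 or later.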