Senior Data Engineer

Remote, India

Job Description


Job Summary

Our company is searching for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architect, BI developers, and data scientists on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This is a full-time position located in Mumbai.

Responsibilities
  • Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Migrate on-premises data servers to a cloud data platform using AWS and Snowflake.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Skills Required
  • 10+ years of strong work experience in data engineering using AWS, PySpark, Python, and SQL.
  • Ability to operate in the capacity of an AWS Solutions Architect and design complex AWS solutions and pipelines.
  • Experience constructing AWS data pipelines using VPC, EC2, S3, Auto Scaling groups, IAM, CloudWatch, CloudFront, Lambda, DMS, Glue, etc.
  • Strong understanding of Python and PySpark data engineering.
  • Experience working with CI/CD pipelines using tools such as Terraform, Jenkins, AWS CodeCommit, CodePipeline, and CloudFormation.
  • Hands-on experience in data modeling and dimensional modeling.
  • Exposure to the full lifecycle of data warehouse projects.
  • Experience designing and developing data models and writing complex SQL queries to extract, manipulate, and analyze data.
  • Experience working with streaming data.
  • Ability to build end-to-end ETL pipelines from AWS S3 to a Snowflake data warehouse for analytics.
  • Ability to build Glue jobs to extract, transform, and load data using common PySpark code and models.
  • Experience working with Airflow for automation of AWS and Snowflake workloads.
  • Snowflake expertise is good to have.
  • Strong experience working with SQL and PL/SQL procedures.

Good to Have
  • Familiarity or working experience with a BI and analytics application such as Qlik Sense, Microsoft Power BI, or Tableau.
  • Certifications in any cloud application, database, data integration, or business intelligence application are preferable.
  • Exposure to data streaming, and data API design and development experience, would be preferable.
  • Familiarity or working experience with GitHub, JIRA, and CI/CD tools.

Benefits
  • Salary package in line with job responsibilities
  • Great work atmosphere
  • Health insurance and benefits
  • Work remotely temporarily due to COVID-19

Job Types: Full-time, Permanent
Salary: ₹70,000.00 - ₹250,000.00 per month
Benefits:

  • Cell phone reimbursement
  • Health insurance
Schedule:
  • Day shift
  • Monday to Friday
Supplemental pay types:
  • Performance bonus
Work Location: Remote
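For candidates gauging fit, the S3-to-Snowflake ETL work described in the skills list can be sketched in miniature. The snippet below is a dependency-free stand-in for the transform stage of such a pipeline; in the actual role this logic would run as a PySpark job on AWS Glue, and every field name and rule here (order_id as the business key, the audit column, the defaults) is an illustrative assumption, not part of the posting.

```python
# Minimal sketch of an ETL transform stage (a stand-in for a
# Glue/PySpark job). All column names and cleaning rules below
# are hypothetical examples, not from the job description.
from datetime import datetime, timezone

def transform(raw_rows):
    """Clean raw source rows into load-ready records.

    - drops rows missing the business key (order_id)
    - normalizes customer names and amounts
    - stamps each record with a load timestamp (audit column)
    """
    loaded_at = datetime.now(timezone.utc).isoformat()
    out = []
    for row in raw_rows:
        if not row.get("order_id"):
            continue  # reject rows without a business key
        out.append({
            "order_id": row["order_id"],
            "customer": (row.get("customer") or "UNKNOWN").strip().upper(),
            "amount": round(float(row.get("amount") or 0), 2),
            "loaded_at": loaded_at,
        })
    return out

raw = [
    {"order_id": "A1", "customer": " acme ", "amount": "10.50"},
    {"order_id": "",   "customer": "noid",   "amount": "5"},
    {"order_id": "A2", "customer": None,     "amount": None},
]
clean = transform(raw)
print(len(clean))            # → 2 (row without order_id rejected)
print(clean[0]["customer"])  # → ACME
```

In a production Glue job the same shape holds at scale: the per-row dictionary logic becomes DataFrame column expressions, and the cleaned output is staged back to S3 for a Snowflake `COPY INTO` load.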


MNCJobsIndia.com will not be responsible for any payment made to a third-party. All Terms of Use are applicable.



Job Detail

  • Job Id
    JD3233947
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Remote, India
  • Education
    Not mentioned
  • Experience
    Year