Se Big Data

Kolkata, West Bengal, India

Job Description



Responsibilities

  • Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (onshore and offshore)
  • Highly proficient in the architecture and development of an event-driven data warehouse: streaming, batch, data modeling, and storage
  • Advanced database knowledge: creating and optimizing SQL queries, stored procedures, and functions; partitioning data; indexing; and reading execution plans
  • Skilled in writing and troubleshooting Python/PySpark scripts to generate extracts and to cleanse, conform, and deliver data for consumption (see the sketch after the qualifications list below)
  • Expert-level understanding and implementation of ETL architecture: data profiling, process flow, metric logging, and error handling
  • Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board
  • Develop and ensure adherence to published system architectural decisions and development standards
  • Lead and mentor junior data engineers to produce higher-quality solutions at a faster velocity through optimization, training, and code review
  • Multi-task across several ongoing projects and daily duties of varying priorities as required
  • Interact with global technical teams to communicate business requirements and collaboratively build data solutions

The duties listed above are the essential functions, or fundamental duties, within the job classification. The essential functions of individual positions within the classification may differ. Reasonably related additional duties may be assigned to individual employees consistent with standard departmental policy.

Qualifications

  • Bachelor's degree in computer science or an MIS-related area required, or equivalent experience (industry experience substitutable)
  • 10 years of experience in data development
  • 3 years of experience in the banking and financial domain
  • Expert level in data warehouse design and architecture, dimensional data modeling, and ETL process development
  • Advanced-level development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus)
  • Extremely proficient in Python, PySpark, and Java
  • AWS expertise, including Kinesis
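
For context, a minimal sketch of the kind of PySpark extract/cleanse/deliver script referred to above. It is not part of the posting itself: the source and target paths, the column names (txn_id, amount, txn_date), and the app name are hypothetical placeholders, assuming a simple batch job over Parquet data.

# Illustrative only: a minimal PySpark batch job that extracts raw data,
# cleanses and conforms it, and delivers a partitioned extract downstream.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse_and_deliver").getOrCreate()

# Extract: read raw batch data from a (hypothetical) landing zone
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Cleanse and conform: deduplicate, standardize types, drop invalid rows
cleansed = (
    raw.dropDuplicates(["txn_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Deliver: write a partitioned extract for downstream consumption
(
    cleansed.write.mode("overwrite")
            .partitionBy("txn_date")
            .parquet("s3://example-bucket/conformed/transactions/")
)

spark.stop()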

Job Requirements: PySpark, Data Processing, ETL Pipeline

Job Type

Full Time

Location

KOLKATA

Mandatory Skills

  • PySpark



Job Detail

  • Job Id
    JD2950531
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Kolkata, West Bengal, India
  • Education
    Not mentioned
  • Experience
    Year