Senior Software Engineer

Bengaluru, Karnataka, India

Job Description

  • Kafka, Spark, Python, PySpark


  • Bangalore


  • Full Time






Experience
2 - 8 years



Offered Salary
15.00 - 30.00



Notice Period
Not Disclosed
Requirements:
  • 2 - 8+ years of IT experience / Engineering/Tech Stacks / Frameworks / Programming Languages
  • 2 - 5+ years in data engineering, ETL/ELT, Data Analytics and Reporting
  • 3 - 5+ years of experience programming in a backend language (Java, J2EE, Kafka, Spark, Python, Pyspark, Glue, API etc.)
  • Advanced working SQL and NoSQL knowledge, and experience with a variety of databases (MySQL, MongoDB, Hadoop, etc.); good knowledge of SQL, ETL and ELT concepts in data warehousing; proficiency in SQL, data modelling and data warehouse concepts
  • Experience in building pipelines to migrate data from on-prem to cloud data repositories (e.g., Snowflake, Redshift, Synapse, Databricks)
  • Experience in ingesting data into cloud repositories from sources including flat files, SQL server, Kafka, CDC and Web APIs
  • Experience with development work to support data migration, cloud applications and related work
  • Experience with Spark, or the Hadoop ecosystem and similar frameworks
  • Strong analytic skills related to working with unstructured datasets
  • Familiarity with various cloud technologies such as AWS (EMR, RDS, Redshift, etc.) and Azure
  • Experienced in designing and developing data pipelines using PySpark in any public cloud (e.g. AWS, GCP, Azure) or hybrid environments, using AWS Glue, Glue Studio, Blueprints, etc.
  • Proficient in building large-scale systems end to end using Java/Python/Go or other high-performance languages, developer tools, CI/CD, DevOps, GitHub, Terraform, monitoring tools, and engineering cloud migration solutions
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet business requirements and/or build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
  • Hands-on coding (~80%), code walkthroughs, etc.
  • Able to develop complex PySpark code using SparkSQL, DataFrames, joins, transposes, etc. to load data into an MPP data warehouse, e.g. Snowflake
  • Experience in all facets of software development life cycle like analysis, design, development, data conversion, data security, system integration, and implementation
  • Experience working with modern IDEs (such as Visual Studio Code, IntelliJ)
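The SQL, ETL/ELT, and SparkSQL-join requirements above can be illustrated with a minimal, self-contained sketch. The table names, columns, and data here are hypothetical, and Python's built-in sqlite3 stands in for the MPP warehouse (the posting names Snowflake/Redshift) purely so the example runs without external services; in practice the same join and aggregation would be written against Spark DataFrames or warehouse SQL.

```python
# Hypothetical ELT step: land raw rows, then transform with a join + aggregate.
# sqlite3 is a stand-in for the warehouse so the sketch is runnable as-is.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Extract/Load": land two raw tables (names and data are made up).
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE customers (customer_id INTEGER, region TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 10, 99.5), (2, 11, 15.0), (3, 10, 42.0)])
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(10, "south"), (11, "north")])

# "Transform": the same join + aggregation one would express in SparkSQL
# (or as DataFrame .join()/.groupBy()) before loading a reporting table.
rows = cur.execute(
    """
    SELECT c.region, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.region
    ORDER BY c.region
    """
).fetchall()
print(rows)  # [('north', 15.0), ('south', 141.5)]
```

The shape of the query (join on a key, aggregate, order) is what carries over to PySpark or Snowflake; only the connection and dialect details change.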

Education
  • Bachelor's or Master's degree from an accredited college/university in a business-related or technology-related field.
  • Relevant certifications in AWS, e.g. Cloud Practitioner, Solutions Architect Associate
  • Migration and Data Integration strategies/certification

Required Knowledge, Skills, and Abilities


  • Kafka
  • Spark
  • Python
  • PySpark




Job Detail

  • Job Id
    JD2981841
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type:
    Full Time
  • Salary:
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Bengaluru, Karnataka, India
  • Education
    Not mentioned
  • Experience
    2 - 8 years