Data Warehouse Engineer

Chennai, Tamil Nadu, India

Job Description

Responsibilities

  • Design, code, and deploy high-performance applications for the Energy Analytics platform.
  • Analyze and resolve complex problems associated with the development of the Energy Analytics platform.
  • Gather and address technical and design requirements; build reusable code and libraries for future use.
  • Perform user testing, A/B testing, and other continuous improvement projects.
  • Develop communication, admin, and management tools for the Fusion platform using AWS and related technologies.
  • Stay abreast of new innovations and the latest technology trends, and explore ways of leveraging these to improve the product in alignment with the business.
Qualifications/Skills
  • Experience:
  • B.Tech/B.E. in Computer Science or allied fields from top schools.
  • 5+ years of experience as a Data Warehouse Developer in a high-performing, scalable environment.
  • Preferable:
  • Experience with and understanding of the government services, security, and mobility domains.
  • Must have skills:
  • Ability to work in ambiguous situations with unstructured problems and to anticipate potential issues and risks.
  • Demonstrated experience building data pipelines in data analytics implementations such as a Data Lake or Data Warehouse.
  • At least two end-to-end implementations of a data processing pipeline.
  • Experience configuring or developing custom code components for data ingestion, data processing, and data provisioning, using big data and distributed computing platforms such as AWS Firehose, Kafka, Glue, Athena, S3, and Apache Spark, on cloud platforms such as AWS or Google Cloud.
  • Hands-on experience developing enterprise solutions (designing and building frameworks, enterprise patterns, and database design and development) in two or more of the following areas:
  • End-to-end implementation of a cloud data engineering solution: Kafka and Spark Streaming for transformation, with storage in HDFS.
  • Real-time solutions using Spark Streaming, Kafka, or Kinesis; distributed compute solutions (Spark, Storm, Hive).
  • Distributed storage and NoSQL storage (Cassandra, S3).
  • Batch solutions and distributed computing using ETL (Spark SQL, Spark DataFrames, AWS Glue).
  • Performance tuning, memory optimization, and partitioning.
  • Development of frameworks, reusable components, accelerators, and CI/CD automation.
  • Expert-level Python, including working with Kafka.
  • Proficiency in data modelling for both structured and unstructured data, across various layers of storage.
  • Ability to collaborate closely with business analysts, architects, and client stakeholders to create technical specifications.
  • Ensure the quality of delivered code components by employing unit testing and test automation techniques, including CI in DevOps environments.
  • Ability to profile data, assess data quality in the context of business rules, and incorporate validation and certification mechanisms to ensure data quality.
  • Ability to review technical deliverables, and to mentor and drive technical teams to deliver quality work.
  • Understand system architecture and provide component-level design specifications at both high and low levels.




Job Detail

  • Job Id
    JD2972268
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Chennai, Tamil Nadu, India
  • Education
    Not mentioned
  • Experience
    Year