Consulting SA&MA A&C Big Data (PySpark) Engineer

Bangalore, Karnataka, India

Job Description


Big Data Engineer role

Looking for a competent big data engineer with the following technical and functional skill sets:

  • 4+ years of overall experience, with hands-on development experience in the big data stack
  • Must have experience implementing complex data pipelines and data processing jobs using PySpark
  • Strong technical fundamentals in SQL and hands-on experience with it
  • Excellent communication skills and a solution-oriented approach to problem solving
  • Experience working with a large team (preferably cross-geography) in client-facing roles
Notice Period: Immediate joiners or candidates with ≤ 30 days of notice period

Location: Hyderabad preferred; other locations with a Deloitte presence only for exceptional candidates

Data Platform Engineer Role

Job Summary
Looking for talented, passionate, and results-oriented individuals to join our team to build the data platforms and tools that will craft the future of commerce and Apple Pay. You will design and implement scalable, extensible, and highly available platforms capable of processing large-volume data sets, turning a sea of data into insights and strategy that are actionable for payment products. Our analytics platform needs innovative ideas and out-of-the-box thinking to handle the challenges of building and maintaining large-scale distributed systems. This person will collaborate with various infrastructure, data engineering, analyst, and operations teams to identify the requirements that will drive the creation of a smart platform for building and executing data pipelines. The ideal candidate is a self-motivated team player with excellent programming, problem-solving, and communication skills, and the ability to adapt and learn quickly, deliver results with limited direction, and make the best possible architectural decisions.

Key Qualifications
1. 4+ years of professional experience with Scala/Java based data platforms.
2. 2+ years of professional experience with Kafka, HDFS/S3, and data lake table formats like Apache Iceberg.
3. Deep, hands-on understanding of technologies like Apache Spark/Flink.
4. Hands-on experience with one or more orchestration frameworks like Airflow, Argo, or Kubeflow.
5. Expertise in one or more programming languages (Scala/Java/Python), NoSQL technologies like Apache Cassandra, and building API services.
6. Knowledge of and experience working with Docker/Kubernetes, Splunk, and Grafana is a big plus.
7. Excellent time management skills with the ability to manage work to tight deadlines.

Description
1. In this role, you will be responsible for crafting, building, integrating, and maintaining an analytics data platform on hybrid cloud infrastructure.
2. Enable continued innovation and progress within the platform through research and development.
3. Build self-serve and one-touch tooling capabilities into the analytics platform.
4. Manage and visualize use of the platform by building intelligent telemetry.
5. Build data ingestion pipelines for batch and near-real-time analytics use cases.
6. Have a strong desire to educate others and implement standard design methodologies.
7. Position yourself as a go-to consultative resource and solution specialist for data engineers and analysts.
8. Work collaboratively with the data operations team to develop monitoring and alerting on data platforms and services.


Deloitte




Job Detail

  • Job Id: JD3207940
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Bangalore, Karnataka, India
  • Education: Not mentioned
  • Experience: Year