Azure Databricks with PySpark


Job Description

Skill: Azure Databricks with PySpark



Experience: 6 to 13 Yrs



Location: AIA Kolkata



Responsibilities:



• Designing, building, and deploying data pipelines and workflows using Databricks.
• Developing and optimizing Apache Spark applications for data processing and analytics (a brief sketch follows this list).
• Integrating Databricks with various data sources and platforms.
• Implementing data quality checks and monitoring processes.
• Managing and configuring the Databricks environment, including clusters and notebooks.
• Troubleshooting and resolving issues related to Databricks performance and functionality.
• Collaborating with data scientists and analysts to implement machine learning models.
• Staying up to date with the latest Databricks features and best practices.
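
For context on the pipeline and Spark-development work listed above, here is a minimal PySpark sketch of the kind of Databricks workload the role describes. The source path, table name, and columns are hypothetical placeholders for illustration only, not details taken from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` already exists; getOrCreate() reuses it.
spark = SparkSession.builder.appName("daily-orders-pipeline").getOrCreate()

# Ingest raw data (hypothetical path, used purely for illustration).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/")
)

# Basic data-quality checks: require a key column and non-negative amounts.
cleaned = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") >= 0)
)

# A typical analytics aggregation: daily totals and distinct order counts.
daily = (
    cleaned.groupBy(F.to_date("order_ts").alias("order_date"))
           .agg(F.sum("amount").alias("total_amount"),
                F.countDistinct("order_id").alias("order_count"))
)

# Persist the result as a Delta table, the default table format on Databricks.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")

In practice a notebook like this would typically be scheduled as a Databricks Job, with cluster sizing and autoscaling configured as part of the environment-management duties mentioned above.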

Qualifications:



• Strong proficiency in Python or Scala programming languages.
• Experience with Apache Spark and big data processing frameworks.
• Knowledge of cloud platforms like AWS, Azure, or GCP.
• Familiarity with data warehousing and ETL processes.
• Understanding of machine learning concepts and algorithms.
• Strong problem-solving and analytical skills.

Additional Skills (depending on the specific role):



• Experience with Databricks Delta Lake or MLflow (see the sketch after this list).
• Knowledge of infrastructure as code (IaC) tools like Terraform.
• Familiarity with CI/CD pipelines for data and machine learning workflows.
• Experience with containerization technologies like Docker.
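
As context for the Delta Lake and MLflow items, a brief sketch of each follows; the table path, experiment name, and logged values are illustrative assumptions, not details from this posting.

import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Delta Lake time travel: read an earlier version of a table by its Delta version number.
# The path below is a placeholder for wherever the table's Delta files live.
previous = (
    spark.read.format("delta")
         .option("versionAsOf", 0)
         .load("/mnt/delta/daily_orders")
)

# MLflow experiment tracking: record parameters and a metric for a training run.
# The experiment path and values are examples only.
mlflow.set_experiment("/Shared/example-experiment")
with mlflow.start_run():
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("rmse", 0.42)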

Beware of fraud agents! Do not pay money to get a job.

MNCJobsIndia.com will not be responsible for any payment made to a third party. All Terms of Use are applicable.


Job Detail

  • Job Id: JD4848365
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: WB, IN, India
  • Education: Not mentioned
  • Experience: 6 to 13 Yrs