Description
Overall Purpose: Responsible for the development of high-performance, distributed computing tasks using Big Data technologies such as Hadoop, NoSQL, text mining, and other distributed-environment technologies. Familiarity with JVM-based functional languages, including Scala and Clojure; Hadoop query languages, including Pig, Hive, Scalding, Cascalog, and PyCascading; and alternative HDFS-based computing frameworks, including Spark and Storm, is desirable.
Key Roles and Responsibilities: Uses Big Data programming languages and technologies; writes code; completes programming and documentation; and performs testing and debugging of applications. Analyzes, designs, programs, debugs, and modifies software enhancements and/or new products used in distributed, large-scale analytics and visualization solutions. Interacts with data scientists and industry experts to understand how data needs to be converted, loaded, and presented. Works in a highly agile environment.
Job Contribution: Senior-level technical expertise. Deep technical knowledge and subject matter expert on AT&T technologies.
Education: Bachelor of Science in Computer Science, Math, or Scientific Computing preferred.
Experience: Typically requires 5-8 years of experience.
Skill sets: * Databricks expertise