Software Engineer 2

Bangalore, Karnataka, India

Job Description

Company Description

When you're one of us, you get to run with the best. For decades, we've been helping marketers from the world's top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon's best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents on proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified(TM). Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world.



Why we are looking for you

  • You have experience in Product Engineering & Software Development.
  • You are self-driven and enjoy working collaboratively as part of a team.
  • You have strong experience building products and platforms at scale.
  • You take pride in producing well-written code that is thoroughly tested.
  • You design and develop big data solutions using Spark and Python (PySpark), along with Hadoop ecosystem components such as HDFS, Hive and SQL.
What you will enjoy in this role
  • This position is responsible for hands-on analysis, design and implementation of business requirements around big data and the Hadoop framework.
  • Hands-on development with PySpark and Spark SQL that exercises your analytical and debugging skills.
  • Development work building new solutions around Hadoop and automating operational tasks.
  • Assisting teams and troubleshooting production issues.
  • An open and friendly work environment that values communication and efficiency.
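The filter-map-aggregate pipeline shape that PySpark work follows can be previewed without a cluster. This is a minimal sketch using only Python built-ins to imitate the structure of an RDD pipeline such as `rdd.filter(...).map(...).reduce(...)`; the event records and field names are hypothetical, not from this posting.

```python
from functools import reduce

# Hypothetical event records, standing in for rows read from HDFS/Hive.
events = [
    {"user": "a", "action": "click", "value": 3},
    {"user": "b", "action": "view", "value": 1},
    {"user": "a", "action": "click", "value": 2},
]

# The same filter -> map -> reduce shape a PySpark RDD job would use.
clicks = filter(lambda e: e["action"] == "click", events)
values = map(lambda e: e["value"], clicks)
total = reduce(lambda a, b: a + b, values, 0)
print(total)  # 5
```

In a real PySpark job the same three steps execute distributed across a cluster, which is where the analytical and debugging skills mentioned above come into play.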
What you will do
  • Design, develop and code around the Hadoop ecosystem using HDFS, Hive, Spark, Python, SQL, shell scripting, etc.
  • Apply experience with RDDs and DataFrames within Spark.
  • Apply data analytics experience and working knowledge of big data infrastructure, including Hadoop ecosystem components such as HDFS, Hive, Spark and Python.
  • Work with gigabytes to terabytes of data and understand the challenges of transforming and enriching such large datasets.
  • Provide effective solutions to business problems, both strategic and tactical.
  • Collaborate with team members, project managers, business analysts and business users to conceptualize, estimate and develop new solutions and enhancements.
  • Work closely with stakeholders to define and refine the big data platform to achieve company product and business objectives.
  • Collaborate with other technology teams and architects to define and develop cross-functional technology stack interactions.
  • Read, extract, transform, stage and load data to multiple targets, including Hadoop and Oracle.
  • Develop automation scripts around the Hadoop framework to automate processes and existing flows.
  • Modify existing code to meet new requirements.
  • Perform unit testing and debugging, including root cause analysis (RCA) for any failed processes.
  • Document existing processes and analyse them for potential automation and performance improvements.
  • Convert business requirements into technical design specifications and execute on them.
  • Participate in code reviews and keep the application code base in sync with version control.
  • Communicate effectively; be self-motivated and able to work independently while staying aligned within a team environment.
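The read, extract, transform, stage and load flow described above can be sketched end to end. Since a Hadoop cluster and an Oracle target may not be at hand, this minimal sketch uses only Python's standard library: an in-memory CSV string stands in for a file landed on HDFS, and an in-memory sqlite3 database stands in for the Hive/Oracle staging target. All names and values are illustrative.

```python
import csv
import io
import sqlite3

# Extract: a hypothetical raw feed, standing in for a file on HDFS.
raw = io.StringIO("user,amount\na,10\nb,5\na,7\n")
rows = list(csv.DictReader(raw))

# Transform: cast types and aggregate spend per user -- the kind of
# enrichment a PySpark job would express as groupBy("user").sum("amount").
totals = {}
for r in rows:
    totals[r["user"]] = totals.get(r["user"], 0) + int(r["amount"])

# Load: stage the results into a SQL target (sqlite3 stands in for
# Hive or Oracle here).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_spend (user TEXT PRIMARY KEY, total INTEGER)")
con.executemany("INSERT INTO user_spend VALUES (?, ?)", totals.items())
con.commit()

staged = dict(con.execute("SELECT user, total FROM user_spend"))
print(staged)  # {'a': 17, 'b': 5}
```

At terabyte scale the in-memory aggregation above is exactly what stops working, which is why the role centres on Spark's distributed DataFrames rather than single-machine code.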
Qualifications

Bachelor's degree in Computer Science (or equivalent) with
5 years of experience working with Hadoop/big data on ingestion, transformation and staging, using the following technologies, principles and methodologies.

Required Skills
  • 3-5 years of experience working with Hadoop distributed frameworks.
  • 2-5 years - Apache Spark
  • 2-5 years - Python
  • 2-5 years - HDFS/Hive
  • 2-5 years - SQL
  • 2-5 years - Linux/shell scripting
  • Experience designing, developing, and implementing ETL/ELT workflows.
  • Experience working with large datasets.
  • Software engineering processes (SDLC), Agile/Kanban.
  • Strong analytical and decision-making skills.
  • Self-starter as well as a team player.
Additional Skills that are a plus
  • Hadoop administration and DevOps.
  • Working knowledge of Oracle database and PL/SQL.
  • Data visualization and reporting.
  • Experience with performance tuning for large data sets.
  • Experience with JIRA for user-story/bug tracking.
  • Experience with GIT/Bitbucket.
  • Marketing business domain knowledge.




Job Detail

  • Job Id: JD2972661
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Bangalore, Karnataka, India
  • Education: Not mentioned
  • Experience: Not mentioned