Lead Data Engineer

Bengaluru, Karnataka, India

Job Description


Company Description
Informa is a leading academic publishing, business intelligence, knowledge and events business, creating unique content and connectivity for customers all over the world. It is listed on the London Stock Exchange and is a member of the FTSE 100.
Taylor & Francis Group produces high-quality, peer-reviewed books and journals. We produce unique, trusted content by expert authors, spreading knowledge and promoting discovery globally. We aim to broaden thinking and advance understanding, providing academics and professionals with a platform to share ideas and realise their individual potential.


We are currently looking to bring on a talented, self-motivated and conscientious Lead Data Engineer to work in our technology group within the Data, Insights and Analytics team, focused on gaining insights from analysing company data. You will provide technical leadership in solving complex analytics problems. The lead must be comfortable working with a wide range of data generated by applications, systems, services, users, websites, and partner sites and systems; this data flows in continuously from functions such as editorial, production, products, customers, sales, marketing and finance across academic publishing, and is made available through AWS cloud services, complex ERP and CRM systems, and events and logs across business functions. The right candidate must have strong experience with a variety of ETL/ELT techniques, analytics practices and data tools, and must be a passionate engineer who can uncover and answer the questions hidden in large data sets.

Closing date: May 31, 2021

What you’ll be doing:

  • Evaluate, implement and recommend a Big Data technology stack that aligns with the company's technology on AWS.
  • Drive significant technology initiatives end to end and across multiple layers of data and analytics engineering.
  • Evangelize next-generation infrastructure in the analytics space (batch, near-real-time and real-time technologies) on the AWS cloud.
  • Continuously learn, experiment with and apply open-source technologies and software paradigms.
  • Enrich and refine the backlog by working closely with engineering managers and delivery leads of the academic publishing business domains.
  • Manage dependencies across teams, set expectations and work towards various goals and OKRs.
  • Work closely with several data engineering scrum teams and manage day-to-day activities.
  • Assure quality by helping to develop strong validation and verification checks in data engineering solutions.
  • Provide strong technical expertise in adopting and contributing to open-source technologies related to big data and analytics.
  • Define and drive best practices.
  • Be a role model to data engineers pursuing a technical career path in engineering.


Qualifications
  • A minimum of 10 years of overall experience, including architecture, design and development using various database technologies.
  • At least 4 years of hands-on experience with AWS analytical services and the big data stack.
  • Ability to design and build high-availability architectures.
  • Ability to successfully lead a team of data engineers, data evangelists, stewards and business analysts through all phases of the data engineering life cycle.
  • Community involvement (articles, blogs, speaking engagements at conferences) is a big added advantage.
  • Experience presenting data using Power BI, Tableau or QuickSight is a big advantage.
  • Knowledge of a variety of machine learning techniques is an added advantage.
  • Excellent written and verbal communication skills for coordinating across teams.
  • The ideal candidate is someone with the following:
    • Experience using cloud technologies: AWS Athena, Glue, EMR, Lake Formation, RDS, Redshift, S3, Kinesis, Data Pipeline, etc.
    • Extensive exposure to dimensional modelling
    • Exposure to developing batch ETL and reverse ETL pipelines using the AWS stack
    • Hands-on experience developing real-time analytics pipelines
    • Experience working on data warehouse, MDM, data mart and EDW projects
    • Strong skills in writing complex SQL queries and developing a semantic layer on top of the DWH
    • Experience querying both SQL and NoSQL databases
    • Coding knowledge and experience with languages such as Python, Java or Scala
    • Experience analysing data from third-party providers: Google Analytics, Facebook Insights, etc.
    • Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, MySQL, etc.

Additional Information
  • 24 days annual leave
  • Volunteering days annually
  • Day off for your birthday
  • Pension contributions
  • Medical insurance for self and dependants; life cover and personal accident cover for self
  • Seasonal social and charitable events
  • Training and development
  • Blended working style and flexible working hours
  • Right tools for remote working
At Taylor & Francis we care about our colleagues, promoting work-life balance, wellbeing and flexible working. We believe that the skills and experience you bring to Taylor & Francis are invaluable. We want you to have the opportunity to develop your abilities, and to innovate and develop in areas which you are passionate about.
  • You must have the right to live and work in India
  • The role is based at our Bengaluru office or may be remote




Job Detail

  • Job Id
    JD2909831
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Bengaluru, Karnataka, India
  • Education
    Not mentioned
  • Experience
    Year