Commercial Banking IT is looking for a strong Big Data candidate with skills and experience in large-scale Hadoop-based data platforms who will be responsible for the design, development, and testing of a next-generation enterprise data hub and its reporting and analytics applications. This individual will work with an existing development team to create the new Hadoop-based platform, migrate the existing data platforms to the cloud, and provide production support. The current platform uses many tools, including Sqoop, Hive, Impala, Spark, Kafka, streaming frameworks, SQL, Python, and Java. The candidate will be accountable for design, development, implementation, and post-implementation maintenance and support, and will develop and test new interfaces, enhancements/changes to existing interfaces, new data structures, and new reporting capabilities.
Job responsibilities
Develop, enhance, and test new and existing interfaces. The candidate will be part of an existing agile team and will work on developing and enhancing ETL pipelines and designing solutions
Handle DevOps work, including CI/CD, code scanning, performance testing, and test coverage
Identify, analyze, and interpret trends or patterns in complex data sets
Transform existing ETL logic onto the Hadoop platform
Innovate new ways of managing, transforming and validating data
Establish and enforce guidelines to ensure consistency, quality and completeness of data assets
Provide 10-15% support for legacy jobs, which are primarily developed using UNIX shell scripts and Oracle
Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 3+ years of applied experience in data engineering pipelines
Experience in scripting, SQL, and writing complex SQL queries; Hadoop, Impala, Hive
Experience in Sqoop and Spark SQL
Minimum 7 years of experience in Big Data technologies (Hadoop and Spark architecture, performance tuning, Spark SQL, streaming, Hive, Sqoop, Kafka, Impala, HBase, entitlements, etc.)
Experience with real-time streaming data
4+ years of experience in Python is a must
Experience in writing SQL queries and UNIX shell scripting is a must; strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Preferred qualifications, capabilities, and skills
7+ years of experience in Big Data, data pipelines, and SQL services
Experience in Spark SQL, Impala, and Big Data technologies
3+ years of experience in Big Data technologies (Hadoop and Spark architecture, performance tuning, Spark SQL, streaming, Hive, Sqoop, Kafka, Impala, HBase, entitlements, etc.)
JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as any mental health or physical disability needs.