Are you passionate about cutting-edge technology and big data? At Prevalent AI, we're at the forefront of building
innovative solutions to handle petabyte-scale data. As a Data Engineer, you will play a pivotal role in supporting
the development of key data engineering components for our Data Fabric and Exposure Management products.
In this exciting role, you'll work with open-source big data technologies to help us collect, transport, transform,
and ingest massive amounts of data in a distributed architecture.
If you thrive in an environment that encourages continuous learning, embraces modern technology, and values
the power of data, this is the perfect opportunity for you.
Key Accountabilities
The ideal candidate is a self-motivated individual with strong technology skills, a commitment to quality, and a positive work ethic, who can:
* Design, develop, and deploy data management modules, including data ingestion, parsing, scheduling, and processing, using agile practices
* Conduct unit, system, and integration testing of developed modules, ensuring high-quality data solutions
* Select and integrate Big Data tools and frameworks to meet capability requirements
* Collaborate with client and partner teams to deliver effective data solutions and resolve operational issues promptly
* Stay updated on industry trends and best practices in Data Engineering and open-source Big Data technologies
* Contribute to agile teamwork and follow a personal education plan for the technology stack and solution architecture
* Build processes for data transformation, structures, metadata, and workload management
Skills and Experience
* Relevant experience in data engineering with open-source big data tools (Hadoop, Spark, Kafka) and relational or NoSQL databases (Postgres, MongoDB, Elasticsearch)
* Skilled in data pipeline management (e.g., Airflow), AWS services (EC2, EMR, RDS), and stream processing (e.g., Spark Streaming)
* Proficient in object-oriented and functional programming languages (Python, Java, Scala) and data lake concepts (ingestion, transformation)
* Strong understanding of the Hadoop and Spark frameworks, Agile methodology, and scalable big data architectures
* Excellent communication skills, with a self-motivated and analytical mindset suited to fast-paced environments
* Experience working with cross-functional teams to optimize big data pipelines and solutions
Education
* Master's/Bachelor's in Computer Science Engineering.