As a Junior Data Engineer at Prevalent, you will play a key role in supporting the development of our Data Fabric
and Exposure Management products. This entry-level position offers an exciting opportunity to work with
cutting-edge big data technologies and gain hands-on experience in building and maintaining data systems at
a petabyte scale.
You will be involved in the collection, transformation, transportation, and ingestion of large volumes of data
across distributed systems, helping to drive the efficiency and scalability of our data infrastructure. This role
provides the perfect environment for fresh graduates or entry-level professionals who are eager to learn, grow,
and contribute to building innovative solutions in the world of big data.
At Prevalent, we embrace modern technology, and as a Junior Data Engineer, you'll gain valuable experience
with open-source big data tools and frameworks, developing critical skills in the data engineering field. Your
work will support the development of high-impact data solutions while ensuring high-quality data management
across our systems.
Key accountabilities:
Conduct unit, system, and integration testing of developed modules and integrate relevant Big Data
tools and frameworks
Participate in the design, development, and deployment of our product modules both on the Cloud and
on-premises
Collaborate with client and partner teams to deliver high-quality data solutions and resolve operational
issues promptly while aligning with business priorities
Stay updated on industry trends and best practices in Data Engineering and Open-Source Big Data
technologies
Follow a personal education plan to enhance expertise in the technology stack and solution
architecture
Contribute to agile teamwork and build processes supporting data transformation, structures,
metadata, and workload management
Skills and Experience:
Fresher with a strong interest in Data Engineering and a solid analytical mindset for working with
unstructured datasets.
Proficient in SQL and relational databases, with experience in query authoring and familiarity with
various databases.
Hands-on experience with big data tools such as Hadoop, Spark, and Kafka, and with relational and
NoSQL databases (e.g., Postgres, MongoDB, Elasticsearch).
Strong understanding of Data Lake concepts, including data ingestion, transformation, and the
Hadoop framework (MapReduce, parallel processing).
Familiar with Spark (SparkSQL, PySpark) and Agile methodology.
Knowledge of message queuing, stream processing, and scalable big data stores.
Excellent communication skills, both verbal and written, with the ability to work autonomously and
collaboratively in a fast-paced environment.
Education:
Master's or Bachelor's degree in Computer Science or Engineering