to lead the development and deployment of deep learning models on IoT and edge computing platforms. You'll work on converting and optimizing computer vision models, such as CNNs, YOLO, RNNs, and OCR, for execution on high-efficiency hardware accelerators like Hailo, NVIDIA Jetson, Intel Movidius, or ARM-based NPUs.
This role is central to scaling the client's next-generation Edge AI platform, enabling real-time safety decisions and unlocking new capabilities for school bus fleets, student safety, and public transportation environments.
About the Client
The client is transforming how communities protect students and enforce traffic safety through intelligent, real-time video analytics and automated enforcement. As part of this mission, we are advancing the shift from cloud-dependent AI to intelligent edge computing.
Key Responsibilities
Develop and optimize embedded software and AI model inference pipelines for real-time object detection, tracking, and classification at the edge.
Deploy and maintain vision-based deep learning models (YOLO, CNN, RNN, OCR) on resource-constrained devices using AI accelerators (e.g., Hailo, Jetson, Coral, ARM).
Lead cloud-to-edge transformation of AI models, including quantization, pruning, conversion, and hardware-aware optimization.
Collaborate with AI/ML researchers to integrate trained models into embedded applications and validate edge inference accuracy.
Optimize for performance, latency, and power efficiency in real-world mobile or vehicle-mounted environments.
Work with hardware abstraction layers, toolchains, and SDKs for AI acceleration (e.g., HailoRT, TensorRT, OpenVINO, ONNX, TFLite).
Design and implement robust, fault-tolerant inference pipelines using tools like GStreamer, OpenCV, or FFmpeg.
Contribute to system architecture discussions focused on scalable AI at the edge, real-time event capture, and safety-critical decision support.
Preferred Qualifications
5+ years of experience in embedded systems software and edge AI model deployment.
Proficiency in C/C++ and Python for embedded development and AI integration.
Experience with quantization, pruning, and model conversion for edge inference (e.g., ONNX, TFLite, TensorRT, Hailo Dataflow Compiler).
Hands-on experience with at least one AI hardware platform (e.g., Hailo, NVIDIA Jetson, Intel Movidius, ARM Ethos-U, Google Coral).
Solid understanding of deep learning architectures and model behavior (CNN, YOLOv5/v8, OCR, ReID, RNNs).
Strong knowledge of video/image processing pipelines using GStreamer, OpenCV, or similar.
Experience working with Linux-based embedded systems, build systems (Yocto, CMake), and cross-compilation toolchains.
Understanding of IoT/vehicle edge environments, including system resource constraints, fault tolerance, and field deployment challenges.
Bachelor's or Master's degree in Electrical Engineering, Computer Engineering, AI, or a related technical field.
Nice to Have
Familiarity with multi-camera systems, object re-identification, or behavior/event detection at the edge.
Knowledge of real-time messaging and telemetry systems (MQTT, ZeroMQ, or gRPC).
Experience in public safety, ADAS, surveillance, or other vision-based safety-critical applications.
Familiarity with data privacy, security, and compliance concerns for edge AI systems.
Why Join Us
Be a core driver in bringing safety AI to the edge, where milliseconds matter.
Help scale an intelligent, distributed AI platform that makes real-world impact across thousands of vehicles.
Work at the intersection of AI, embedded systems, and public safety in a fast-paced, purpose-driven environment.
Collaborate with engineers, data scientists, product managers, and field ops to build AI that directly saves lives.
Job Types: Full-time, Permanent
Experience:
Embedded AI: 4 years (Preferred)
Deep learning: 4 years (Preferred)
Computer vision: 3 years (Preferred)
Embedded software: 4 years (Preferred)
Work Location: In person