✨ About The Role
- The role involves owning projects that enhance data replication, storage, enrichment, and reporting capabilities.
- The engineer will build and optimize streaming and batch data pipelines that support the core product and API.
- Designing scalable storage solutions for handling large volumes of IoT and time-series data is a key responsibility.
- The position requires developing and maintaining real-time data systems to manage growing data volumes.
- The engineer will implement distributed tracing and observability patterns to improve monitoring and troubleshooting.
⚡ Requirements
- The ideal candidate will have at least 4 years of experience in platform engineering or data engineering.
- A strong background in designing and optimizing data pipelines at terabyte to petabyte scale is essential.
- Familiarity with lakehouse architectures and real-time data processing tools is crucial for success in this role.
- Proficiency in Java and a commitment to writing clean, maintainable code are required.
- The candidate should possess strong problem-solving skills and a collaborative mindset for working effectively within a team.