We are looking for a highly skilled engineer with strong experience in Apache Flink, DevOps practices, and real-time streaming data pipelines. The ideal candidate will design, build, and maintain robust data streaming solutions, ensuring reliability, scalability, and performance in production environments.
Key Responsibilities:
• Design, implement, and optimize real-time data streaming pipelines using Apache Flink, Kafka, or similar technologies.
• Develop and maintain CI/CD pipelines for automated deployment and monitoring of streaming applications.
• Collaborate with data engineers, DevOps teams, and architects to ensure smooth data flow across systems.
• Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
• Manage and monitor streaming workloads on cloud platforms (AWS, Google Cloud Platform, or Azure).
• Build observability solutions using Prometheus, Grafana, or the ELK stack.
• Ensure high availability, low latency, and fault tolerance in production-grade streaming pipelines.
• Participate in performance tuning, debugging, and troubleshooting complex streaming systems.
Required Skills & Experience:
• Strong hands-on experience with Apache Flink (DataStream API, Table API, CEP).
• Proficiency with Kafka; experience with Spark Streaming or Pulsar is a plus.
• Experience with DevOps tools: Docker, Kubernetes, Jenkins, GitLab CI/CD, etc.
• Working knowledge of cloud services (AWS Kinesis, Google Cloud Platform Pub/Sub, or Azure Event Hubs).
• Strong programming skills in Java, Scala, or Python.
• Good understanding of data serialization formats like Avro, Parquet, or Protobuf.
• Experience with monitoring, alerting, and logging tools in distributed systems.
• Familiarity with microservices and RESTful API integration.
Nice-to-Have:
• Experience with Flink SQL and stateful stream processing.
• Exposure to machine learning pipelines or real-time analytics.
• Cloud certification such as AWS Certified DevOps Engineer or Google Cloud Professional Data Engineer.
Apply Now