Senior Data Engineer (Real-Time Analytics & OLAP) – Full Time
About Rezolve.ai
We’re an AI-first SaaS company leveraging the latest advancements in Generative AI. We are proud to build a world-class employee-support Agentic AI solution that is disrupting ITSM and HR operations. Rezolve.ai is recognized by Gartner and Forrester for its rapid adoption and end-user benefits. We are in an exciting growth phase and are looking for experienced, ambitious professionals who want to accelerate their careers alongside ours.
Job Title: Senior Data Engineer (Real-Time Analytics & OLAP) – Full Time
Location: Bangalore/Chennai (Onsite)
Experience Level: 5–10 years
About the Role
We are seeking a hands-on Senior Data Engineer to build and operate our real-time data platform that powers metrics, SLAs, and dashboards for our enterprise SaaS product. You will design pipelines that ingest flexible JSON data from transactional databases, compute state/queue/assignment metrics in real time, and serve low-latency, drillable analytics across hundreds of dimensions. The role requires expertise in CDC, streaming transformations, OLAP engines, and data lake architectures at millions of events/hour scale.
Key Responsibilities
- Implement CDC pipelines (Debezium/Kafka) to replicate Postgres JSON data into real-time systems.
- Build streaming transformations (Kafka Streams/Flink/ksqlDB) for SLA & metric calculations.
- Design and optimize ClickHouse schemas for fast slice/dice analytics across hundreds of dimensions.
- Ensure data accuracy and consistency between OLTP and OLAP/metrics systems.
- Develop ETL/ELT pipelines with dbt for batch transformations and long-term storage in Azure Data Lake.
- Implement data quality, freshness, and validation checks.
- Optimize for low-latency queries supporting dashboards and real-time monitoring.
- Define archival and retention strategies (Parquet/Delta Lake on ADLS).
- Collaborate with product/analytics teams to model metrics and KPIs.
- Automate observability and monitoring for pipelines (Kafka + ClickHouse health, SLA tracking).
Qualifications
- 5+ years in data engineering roles with real-time/streaming pipelines.
- Strong experience with:
  - Postgres + JSON (flexible schema handling).
  - Kafka/Redpanda and Debezium CDC.
  - ClickHouse (or Druid/Pinot) for OLAP.
  - Azure Data Lake Storage (ADLS) for archival.
  - dbt for ELT modeling.
- Hands-on with stream processing frameworks (Flink, Kafka Streams, or ksqlDB).
- Strong SQL, Python, or Go skills for transformations and tooling.
- Proven ability to deliver real-time dashboards at millions of events/hour scale.
- Deep understanding of SLA/metric computation from event streams.
Nice to Have
- Experience with Apache Iceberg/Hudi/Delta Lake for lakehouse architecture.
- Familiarity with Superset, Metabase, or Grafana for BI/visualization.
- Exposure to data governance and compliance frameworks.
- Prior work with multi-tenant SaaS analytics.
Why Join Us?
- Build the core real-time metrics engine powering enterprise SaaS.
- Work with a modern, open-source-first data stack deployed on Azure.
- Tackle challenges at high scale and low latency.
- Competitive salary and options.
