Data & AI Engineer – Cyber Risk Intelligence Platform – India
Location: India (Remote)
About Quantara AI & the Role
Quantara AI is a next-generation Cyber Risk Intelligence and Governance platform that helps CISOs, Boards, and executive teams quantify, prioritize, and communicate cyber risk in business terms. Our AI-powered solution combines Cyber Risk Quantification (CRQ) and Continuous Threat Exposure Management (CTEM) to automate compliance, identify the top 1% of exposures that truly matter, and deliver insights that drive measurable business resilience.
We are seeking a highly skilled Data & AI Engineer to help design and scale the data and AI backbone of our platform. This role involves developing large-scale data pipelines, building AI/LLM-powered systems, and implementing enterprise-grade backend and orchestration architectures that support data-driven decision-making.
You will work on end-to-end data and AI infrastructure, including ETL/ELT development, LLM orchestration, API engineering, and metric computation, helping to evolve a scalable, secure, and intelligent enterprise platform.
Key Responsibilities
1. Data Engineering & Architecture
- Design, build, and maintain enterprise-scale data pipelines for structured, semi-structured, and unstructured data (see the sketch after this list).
- Develop data acquisition and transformation workflows integrating multiple APIs and business data sources.
- Create and optimize relational and analytical data models for performance, scalability, and reliability.
- Establish data quality, validation, and governance standards across ingestion and analytics workflows.
- Enable real-time and batch processing pipelines supporting large-scale enterprise applications.
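
To give a concrete flavor of the pipeline work, here is a minimal sketch of a daily batch ingestion job, assuming Airflow (listed in the tooling below); the DAG name, sample rows, and severity bucketing are illustrative, not Quantara's actual pipeline.

```python
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def asset_exposure_etl():
    @task
    def extract() -> list[dict]:
        # A real job would page through vendor APIs; static rows keep the sketch runnable.
        return [{"asset_id": "a-1", "cvss": 9.8}, {"asset_id": "a-2", "cvss": 4.3}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Bucket raw CVSS scores into severity bands for downstream models.
        df = pd.DataFrame(rows)
        df["severity"] = pd.cut(
            df["cvss"], bins=[0, 4, 7, 9, 10],
            labels=["low", "medium", "high", "critical"],
        )
        return df.astype({"severity": str}).to_dict("records")

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for an upsert into the warehouse (e.g. Snowflake or BigQuery).
        print(f"would load {len(rows)} rows")

    load(transform(extract()))


asset_exposure_etl()
```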
 
2. AI/LLM Development & Orchestration
- Design, develop, and deploy LLM-driven and agentic AI applications for analytics, automation, and reasoning.
- Build Retrieval-Augmented Generation (RAG) pipelines and knowledge orchestration layers across enterprise data (see the sketch after this list).
- Fine-tune and train language models using modern open-source frameworks and libraries.
- Implement NLP and conversational AI components, including chatbots, summarization, and question-answering systems.
- Optimize model orchestration, embeddings, and context management for scalable AI inference.
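
As an illustration of the RAG work, the sketch below shows the retrieval step over a vector index, assuming FAISS (listed in the tooling below); the hash-based embedder is a toy stand-in for a real embedding model (e.g. one served via Hugging Face or the OpenAI API), and the documents are made up.

```python
import hashlib

import faiss
import numpy as np

DIM = 64


def embed(text: str) -> np.ndarray:
    # Toy embedding: deterministic bytes from a hash, normalized to unit length.
    # A production system would call a real embedding model instead.
    raw = hashlib.sha512(text.encode()).digest()[:DIM]
    vec = np.frombuffer(raw, dtype=np.uint8).astype("float32")
    return vec / np.linalg.norm(vec)


docs = [
    "Critical CVE affecting internet-facing web servers.",
    "Quarterly board report on cyber risk posture.",
    "Runbook for rotating expired TLS certificates.",
]

index = faiss.IndexFlatIP(DIM)  # inner product == cosine similarity on unit vectors
index.add(np.stack([embed(d) for d in docs]))

scores, ids = index.search(embed("board reporting")[None, :], 2)
context = [docs[i] for i in ids[0]]
print(context)  # top-2 documents to place in the LLM prompt context
```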
 
3. Backend Development & API Engineering
- Develop and manage RESTful APIs and backend services to support AI, analytics, and data operations (see the sketch after this list).
- Implement secure API access controls, error handling, and logging.
- Build microservices and event-driven architectures to deliver modular, reliable data and AI capabilities.
- Integrate backend components with data pipelines, analytics engines, and external systems.
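
For a flavor of the backend work, here is a minimal sketch of a REST endpoint with explicit error handling, assuming FastAPI; the route, model, and in-memory store are illustrative placeholders, not the platform's actual API.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="risk-api")

# In-memory stand-in for a real data store.
EXPOSURES = {"a-1": {"asset_id": "a-1", "cvss": 9.8}}


class Exposure(BaseModel):
    asset_id: str
    cvss: float


@app.get("/exposures/{asset_id}", response_model=Exposure)
def get_exposure(asset_id: str) -> Exposure:
    # Explicit error handling: unknown assets return a clean 404 rather than a 500.
    if asset_id not in EXPOSURES:
        raise HTTPException(status_code=404, detail="unknown asset")
    return Exposure(**EXPOSURES[asset_id])
```

Run locally with `uvicorn module:app` and query `/exposures/a-1`.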
 
4. Metrics Computation & Quantification
- Design automated engines for computing risk, ROI, RRI, maturity, and performance metrics (see the sketch after this list).
- Integrate quantification logic into business and risk data models to provide real-time visibility.
- Develop scalable data and AI computation frameworks that support executive reporting and analytics.
- Collaborate with product and data teams to ensure metric accuracy, transparency, and explainability.
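
As one example of quantification logic, the sketch below ranks made-up loss scenarios by the standard annualized loss expectancy formula, ALE = ARO × SLE (expected events per year times expected loss per event); the scenario names and figures are illustrative, not platform metrics.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    annual_rate_of_occurrence: float  # ARO: expected events per year
    single_loss_expectancy: float     # SLE: expected loss per event, in USD

    @property
    def ale(self) -> float:
        # Annualized loss expectancy: ALE = ARO * SLE
        return self.annual_rate_of_occurrence * self.single_loss_expectancy


scenarios = [
    Scenario("ransomware", 0.3, 2_500_000),
    Scenario("phishing-led fraud", 4.0, 40_000),
]

# Rank scenarios by annualized loss so reporting surfaces the biggest risks first.
for s in sorted(scenarios, key=lambda s: s.ale, reverse=True):
    print(f"{s.name}: ALE = ${s.ale:,.0f}/yr")
```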
 
5. CI/CD, Deployment & Cloud Operations
- Implement and manage CI/CD pipelines for testing, deployment, and environment management.
- Work with cloud-native technologies for infrastructure automation, monitoring, and scaling.
- Use containerization and orchestration tools for consistent, portable, and secure deployment.
- Establish performance monitoring, observability, and alerting across production systems (see the sketch after this list).
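
On the observability side, here is a minimal sketch of instrumenting a service with request counters and latency histograms, assuming the prometheus_client library; the metric and route names are illustrative.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["route"])
LATENCY = Histogram("api_request_seconds", "Request latency in seconds", ["route"])


def handle_request(route: str) -> None:
    REQUESTS.labels(route=route).inc()
    with LATENCY.labels(route=route).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real request work


if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for a Prometheus scraper to pull
    while True:
        handle_request("/exposures")
```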
 
Qualifications
- 6-10 years of experience in data engineering, backend development, or AI platform engineering.
- Proven success in product development environments and experience building enterprise-grade SaaS applications.
- Strong programming proficiency in Python or equivalent languages for backend and data systems.
- Deep understanding of SQL and relational databases, including schema design and performance tuning.
- Experience building ETL/ELT pipelines, API integrations, and data orchestration workflows.
- Hands-on experience with AI and LLM technologies (e.g., Transformers, RAG, embeddings, vector databases).
- Familiarity with MLOps and LLMOps concepts, including model deployment, scaling, and monitoring.
- Practical experience with technologies such as:
  - Data frameworks: Airflow, dbt, Spark, Pandas, Kafka, Kinesis
  - Cloud & DevOps: AWS, GCP, Azure, Terraform, Docker, Kubernetes
  - Databases: PostgreSQL, MySQL, Snowflake, BigQuery, DynamoDB
  - AI/LLM: LangChain, Hugging Face, OpenAI API, LlamaIndex, Weaviate, Pinecone, FAISS
  - CI/CD: Jenkins, GitHub Actions, GitLab CI, or similar tools
- Strong knowledge of data security, scalability, and performance optimization in production systems.
 
Preferred Skills
- Background in cybersecurity, risk analytics, or financial data systems is a plus.
- Experience with agentic AI systems, autonomous orchestration, or conversational analytics.
- Understanding of data governance, metadata management, and compliance automation.
- Exposure to streaming data systems and real-time analytics architectures.
- Ability to mentor junior engineers and contribute to design and architectural discussions.
 
Compensation
- Competitive India market base salary + performance-based incentives.
- Open to Contract-to-Hire (CTH) with potential for full-time conversion based on performance.
 
