As our AI Engineer, you will play a key role in developing our internal AI capabilities. Working closely with the AI Solution Architect, you will be responsible for designing, developing, and maintaining secure, scalable, and compliant AI systems, including on-premise LLMs, RAG pipelines, and MCP servers.

Your tasks include:
- Design, build, and operate on-prem LLM infrastructure and related services.
- Develop and maintain MCP servers to connect AI capabilities with enterprise systems.
- Build and optimize RAG pipelines, including data ingestion, embeddings, and retrieval logic.
- Ensure scalability, observability, and security of AI workloads running in Kubernetes environments.
- Collaborate with the AI Solution Architect on overall system design, integration patterns, and platform evolution.
- Establish practices for monitoring, evaluation, and continuous improvement of AI models and services.
- Support experimentation and pilot projects to validate new AI use cases and technologies across Aeven and towards customers.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong programming skills in Python and frameworks such as PyTorch, TensorFlow, or Scikit-learn, as well as experience with Kubernetes and containerized workloads.
- Experience in designing or operating AI/ML infrastructure (e.g., vector databases, embedding pipelines, API services).
- Understanding of Retrieval-Augmented Generation (RAG) architectures and principles.
- Experience with deploying or integrating large language models, preferably in on-premise or private cloud environments.
- Familiarity with Model Context Protocol (MCP) or similar AI integration frameworks is a strong plus.
- Knowledge of CI/CD pipelines, observability tools, and version control best practices.
- A proactive, structured mindset and a strong interest in building secure, enterprise-grade AI systems.
- Understanding of data governance, security, and compliance requirements.
- Professional-level English language skills.

Preferred Qualifications:
- Familiarity with edge AI and real-time inference systems.
- Background in enterprise architecture frameworks (TOGAF, Zachman).