FAQs

  • Our stateless inference API integrates with your existing system architecture, interprets system documentation, and factors in user input to deliver modernization recommendations, current- and future-state system documentation, and process optimization recommendations.

    Stateless → Each request is independent. The API does not remember what you sent before and does not store session data.

    Inference → Running a trained model to generate outputs (like predictions, classifications, embeddings, or text completions), as opposed to training or fine-tuning.

    API → An interface (usually HTTP/REST or gRPC) that lets you call the model from your code.
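    The practical consequence of "stateless" is that every request must carry all the context the model needs, since the server remembers nothing between calls. A minimal sketch (the `build_request` helper and its field names are illustrative, not Kavaya AI's actual schema):

```python
import json

def build_request(system_docs: str, user_input: str, history: list) -> str:
    """Assemble a complete, self-contained request payload.

    Because the API is stateless, any prior exchange (`history`)
    must be resent on every call -- the server keeps no session.
    """
    payload = {
        "documents": system_docs,   # illustrative field name
        "input": user_input,        # illustrative field name
        "context": history,         # full prior exchange, resent each time
    }
    return json.dumps(payload)

# Two independent requests: neither relies on server-side session state.
first = build_request("arch-overview.md", "Summarize the system", [])
second = build_request("arch-overview.md", "Suggest modernizations",
                       ["Summarize the system"])
```

    Note that the second request repeats the earlier prompt in its `context` field; dropping it would mean the model never sees it.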

  • Yes. Kavaya AI calls out to OpenAI, and by default OpenAI does not use inputs or outputs from the API to train or improve its models for business/API users.

    Additionally, all data sent to OpenAI via the API is encrypted in transit (TLS 1.2 or higher) and at rest (AES-256 or equivalent).

    Please note that even when OpenAI does not use data for training, some data is retained temporarily for security, abuse detection, and compliance; for example, some inputs and outputs may be stored for a limited time. If your use case requires that data never be used for training (or for anything beyond serving your request), opt-out controls can be enabled.

  • We can integrate with most legacy infrastructure. Our application is not confined to specific SaaS technologies or programming languages.

  • Our application can consume most types of system data and documentation (Word, Excel, PDF, CSV, etc.). We can also connect directly to your GitHub repository to analyze the legacy system's metadata.