Usage

Getting started with nilAI is straightforward and involves just two steps:

  1. Query the /v1/models endpoint using your preferred programming language or client.
  2. Select one of the available models and use its name to query the /v1/chat/completions endpoint.
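Step 1 can be sketched with a plain HTTP request. This is a minimal sketch, assuming a standard bearer-token `Authorization` header and the OpenAI-style `{"data": [...]}` response shape; `<node>` and `YOUR_API_KEY` are placeholders:

```python
import json
import urllib.request

# Replace <node> with the specific node identifier
BASE_URL = "https://nilai-<node>.nillion.network/v1"

def models_url(base_url: str = BASE_URL) -> str:
    """Build the full URL of the /v1/models endpoint."""
    return base_url.rstrip("/") + "/models"

def list_models(api_key: str, base_url: str = BASE_URL) -> list:
    """Query the models endpoint and return the list of available models."""
    req = urllib.request.Request(
        models_url(base_url),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

# Usage: for model in list_models("YOUR_API_KEY"): print(model["id"])
```

Each entry's `id` field is the model name you pass to the chat completions endpoint in step 2.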

Since nilAI offers OpenAI API compatibility, you can seamlessly use libraries like openai to interact with the service.


All of these endpoints are available for testing in our API section.

  • Chat Completion: Standard chat completion endpoint for generating a response from the AI model
  • Attestation: Generate a cryptographic attestation report.
  • Health: Check the operational status of the nilAI service
  • Models: Get the list of available models
  • Usage: Track your account's current token usage
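As an illustration, the Health endpoint can be polled in the same style. This is a sketch only: the `/v1/health` path is an assumption based on the endpoint names above, so confirm the exact route in the API section:

```python
import json
import urllib.request

# Replace <node> with the specific node identifier
BASE_URL = "https://nilai-<node>.nillion.network/v1"

def health_url(base_url: str = BASE_URL) -> str:
    # NOTE: "health" under the v1 prefix is an assumption;
    # check the API section for the exact path.
    return base_url.rstrip("/") + "/health"

def check_health(base_url: str = BASE_URL) -> dict:
    """GET the health endpoint and return the parsed JSON status."""
    with urllib.request.urlopen(health_url(base_url)) as resp:
        return json.load(resp)
```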

Here is a Python example for querying the Llama-3.1-8B model:

from openai import OpenAI

# Initialize the OpenAI client.
# Replace <node> with the specific node identifier.
client = OpenAI(
    base_url="https://nilai-<node>.nillion.network/v1/",
    api_key="YOUR_API_KEY",
)

# Send a chat completion request
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is your name?"},
    ],
    stream=False,
)

# Print the assistant's reply
print(response.choices[0].message.content)
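
With the SDK, the assistant's reply is available as `response.choices[0].message.content`. If you call the endpoint over raw HTTP instead, the JSON body follows the same OpenAI chat-completion schema; a minimal sketch of pulling the reply out of the decoded response (the `sample` body below is illustrative):

```python
def extract_reply(completion: dict) -> str:
    """Return the assistant's message from an OpenAI-style chat completion body."""
    return completion["choices"][0]["message"]["content"]

# Example body in the shape returned by /v1/chat/completions
sample = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "I'm an AI assistant."}}
    ]
}
print(extract_reply(sample))  # I'm an AI assistant.
```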