
OpenAI Chat Node

The OpenAI Chat node sends conversation messages to OpenAI's GPT models and returns the generated reply, supporting text generation, completion, and chat-style workflows.

Overview

This node provides a simple interface to OpenAI's Chat Completions API, supporting various GPT models including GPT-4 and GPT-3.5-turbo.

Configuration

Required Parameters

Parameter   Type     Description
apiKey      String   Your OpenAI API key
model       String   GPT model to use (default: gpt-3.5-turbo)
messages    Array    Array of message objects

Optional Parameters

Parameter          Type     Default   Description
temperature        Number   0.7       Controls randomness (0-2)
max_tokens         Number   150       Maximum number of tokens in the response
top_p              Number   1         Nucleus sampling parameter
frequency_penalty  Number   0         Frequency penalty (-2 to 2)
presence_penalty   Number   0         Presence penalty (-2 to 2)

Input Schema

{
  "apiKey": "sk-...",
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 150
}
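
The node's configuration maps directly onto OpenAI's Chat Completions endpoint: apiKey is sent as the Authorization header, while the remaining fields form the JSON request body. The sketch below shows the equivalent direct call, assuming Node 18+ with a global fetch; the node's internal implementation may differ.

// Sketch: the raw Chat Completions request the node's input corresponds to.
// Illustrative only -- the node's internals may differ.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface ChatNodeInput {
  apiKey: string;
  model: string;
  messages: ChatMessage[];
  temperature?: number;
  max_tokens?: number;
  top_p?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
}

async function runChatNode(input: ChatNodeInput) {
  const { apiKey, ...body } = input; // apiKey becomes a header, everything else is the body
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`OpenAI request failed: ${res.status} ${await res.text()}`);
  }
  return res.json(); // resolves to the structure shown under Output Schema
}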

Output Schema

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 18,
    "total_tokens": 38
  }
}
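
Downstream steps usually only need the assistant's reply and the token counts from this output. A minimal sketch, assuming the output above is bound to a variable named response; the per-token rates are placeholder values, not actual OpenAI pricing.

declare const response: any; // the node output shown above

// Extract the assistant reply and token usage from the node's output.
const reply = response.choices[0].message.content;
const { prompt_tokens, completion_tokens, total_tokens } = response.usage;

// Rough cost estimate -- the rates below are placeholders, not real pricing.
const PROMPT_RATE = 0.0000005;     // assumed $ per prompt token
const COMPLETION_RATE = 0.0000015; // assumed $ per completion token
const estimatedCost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE;

console.log(`Reply: ${reply}`);
console.log(`Used ${total_tokens} tokens (~$${estimatedCost.toFixed(6)})`);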

Examples

Basic Chat

// Input
{
  "apiKey": "sk-your-api-key",
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ]
}

// Output
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      }
    }
  ]
}

System Prompt with Context

// Input
{
  "apiKey": "sk-your-api-key",
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are an expert software developer specializing in JavaScript."
    },
    {
      "role": "user",
      "content": "How do I create a REST API in Node.js?"
    }
  ],
  "temperature": 0.3
}

Error Handling

Common error scenarios and how to handle them:

  • Invalid API Key: Returns 401 Unauthorized
  • Rate Limiting: Returns 429 Too Many Requests
  • Model Not Found: Returns 404 with model error
  • Token Limit Exceeded: The reply is cut off (finish_reason: "length") when max_tokens is reached; requests whose prompt exceeds the model's context window fail with a 400 error
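
One way to act on these cases is to branch on the HTTP status code surfaced by the node. This is a hedged sketch; the exact shape of the error your workflow receives depends on how the node reports failures.

// Sketch: classify common OpenAI error statuses into retry vs. fail-fast.
function classifyOpenAIError(status: number): "fatal" | "retry" | "unknown" {
  switch (status) {
    case 401: return "fatal"; // invalid API key -- fix credentials, retrying won't help
    case 404: return "fatal"; // model not found -- check the model name
    case 400: return "fatal"; // bad request, e.g. context length exceeded
    case 429: return "retry"; // rate limited -- back off and retry
    default:  return status >= 500 ? "retry" : "unknown"; // server errors are usually transient
  }
}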

Best Practices

  1. API Key Security: Store API keys securely, never in plaintext
  2. Temperature Settings: Use lower values (0.1-0.3) for factual responses, higher (0.7-1.0) for creative content
  3. Token Management: Monitor token usage to control costs
  4. Rate Limiting: Implement exponential backoff for rate limit handling
  5. Error Handling: Always handle API errors gracefully
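
The sketch below combines practices 1 and 4: the key is read from an environment variable (OPENAI_API_KEY is an assumed name, not something the node mandates) and rate-limited requests are retried with exponential backoff.

// Best practices 1 and 4: key from the environment, retries with exponential backoff.
const apiKey = process.env.OPENAI_API_KEY; // assumed variable name -- never hard-code the key
if (!apiKey) throw new Error("OPENAI_API_KEY is not set");

async function withBackoff(send: () => Promise<Response>, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await send();
    if (res.status !== 429) return res;  // only rate limits are retried here
    const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Rate limited: retries exhausted");
}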

Cost Optimization

  • Use gpt-3.5-turbo for basic tasks to reduce costs
  • Set appropriate max_tokens limits
  • Use system prompts to reduce repetitive context
  • Cache responses when appropriate
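
For the caching suggestion above, a minimal in-memory sketch keyed on the serialized request. This only makes sense when identical requests should return identical answers, i.e. at low temperature settings; the send callback is a hypothetical stand-in for whatever executes the node's request.

// Minimal in-memory cache: identical model + messages reuse the stored reply.
const chatCache = new Map<string, unknown>();

async function cachedChat(body: object, send: (body: object) => Promise<unknown>) {
  const key = JSON.stringify(body); // request payload as cache key
  if (chatCache.has(key)) return chatCache.get(key);
  const result = await send(body);
  chatCache.set(key, result);
  return result;
}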

Related Nodes

  • OpenAI Embeddings - Generate text embeddings
  • OpenAI Images - Generate images with DALL-E
  • Text Completion - Simple text completion

Troubleshooting

Common Issues

  1. Authentication Failed

    • Verify API key is correct and active
    • Check account billing status
  2. Model Access Denied

    • Ensure you have access to the requested model
    • Check model availability in your region
  3. High Latency

    • Consider using streaming for long responses (see the sketch after this list)
    • Optimize prompt length and complexity
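
For the streaming suggestion above, the sketch below consumes the Chat Completions stream directly with stream: true, printing tokens as they arrive. It assumes Node 18+ fetch; whether the node itself exposes streaming depends on your setup.

// Sketch: stream tokens as they arrive instead of waiting for the full reply.
async function streamChat(apiKey: string, model: string, messages: object[]) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      // Server-sent events: each data line is "data: {...}" or "data: [DONE]".
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const chunk = JSON.parse(line.slice("data: ".length));
      process.stdout.write(chunk.choices[0].delta.content ?? "");
    }
  }
}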

Need Help?