# OpenAI Chat Node
The OpenAI Chat node enables conversations with OpenAI's GPT models for text generation, completion, and intelligent responses.
## Overview
This node provides a simple interface to OpenAI's Chat Completion API, supporting various GPT models including GPT-4 and GPT-3.5-turbo.
## Configuration

### Required Parameters
| Parameter | Type | Description |
|---|---|---|
| `apiKey` | String | Your OpenAI API key |
| `model` | String | GPT model to use (default: `gpt-3.5-turbo`) |
| `messages` | Array | Array of message objects |
### Optional Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `temperature` | Number | 0.7 | Controls randomness (0 to 2) |
| `max_tokens` | Number | 150 | Maximum tokens in the response |
| `top_p` | Number | 1 | Nucleus sampling parameter |
| `frequency_penalty` | Number | 0 | Frequency penalty (-2 to 2) |
| `presence_penalty` | Number | 0 | Presence penalty (-2 to 2) |
## Input Schema

```json
{
  "apiKey": "sk-...",
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 150
}
```
## Output Schema

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 18,
    "total_tokens": 38
  }
}
```
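In a downstream step you typically only need the assistant's text and, for cost tracking, the token counts. Here is a minimal TypeScript sketch of pulling those out of the response shape shown above; the `extractReply` helper and its type definitions are illustrative, not part of the node:

```typescript
// Illustrative types matching the response shape documented above.
interface ChatChoice {
  index: number;
  message: { role: string; content: string };
  finish_reason: string;
}

interface ChatCompletion {
  id: string;
  object: string;
  created: number;
  choices: ChatChoice[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Pull out the first choice's text plus total token usage.
function extractReply(output: ChatCompletion): { text: string; totalTokens: number } {
  const choice = output.choices[0];
  return {
    text: choice?.message.content ?? "",
    totalTokens: output.usage.total_tokens,
  };
}
```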
## Examples

### Basic Chat

Input:

```json
{
  "apiKey": "sk-your-api-key",
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ]
}
```

Output:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      }
    }
  ]
}
```
### System Prompt with Context

Input:

```json
{
  "apiKey": "sk-your-api-key",
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are an expert software developer specializing in JavaScript."
    },
    {
      "role": "user",
      "content": "How do I create a REST API in Node.js?"
    }
  ],
  "temperature": 0.3
}
```
## Error Handling
Common error scenarios and how to handle them:
- Invalid API Key: Returns 401 Unauthorized
- Rate Limiting: Returns 429 Too Many Requests
- Model Not Found: Returns 404 with model error
- Token Limit Exceeded: The request is truncated or an error is returned when the prompt plus `max_tokens` exceeds the model's context window
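As a rough illustration of how a caller might branch on these cases, here is a small TypeScript sketch. The status-to-action mapping mirrors the list above; the handling strategies themselves are assumptions, not documented node behavior:

```typescript
// Map the HTTP status of a failed Chat Completion call to a coarse
// handling strategy. The strategy names are illustrative assumptions.
type ErrorAction = "fix-credentials" | "retry-later" | "fix-request" | "unknown";

function classifyApiError(status: number): ErrorAction {
  switch (status) {
    case 401: // invalid API key
      return "fix-credentials";
    case 429: // rate limited
      return "retry-later";
    case 404: // model not found
    case 400: // bad request, e.g. token limit exceeded
      return "fix-request";
    default:
      return "unknown";
  }
}
```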
## Best Practices
- API Key Security: Store API keys securely, never in plaintext
- Temperature Settings: Use lower values (0.1-0.3) for factual responses, higher (0.7-1.0) for creative content
- Token Management: Monitor token usage to control costs
- Rate Limiting: Implement exponential backoff for rate limit handling (see the sketch after this list)
- Error Handling: Always handle API errors gracefully
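To illustrate the exponential-backoff recommendation, here is a minimal retry wrapper. The `sendChatRequest` callback is a hypothetical stand-in for however your workflow invokes the node or the API:

```typescript
// Retry a request with exponential backoff while the API returns
// 429 (Too Many Requests). `sendChatRequest` is a hypothetical stand-in.
async function withBackoff(
  sendChatRequest: () => Promise<Response>,
  maxRetries = 5,
): Promise<Response> {
  let delayMs = 1_000;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await sendChatRequest();
    if (res.status !== 429) return res; // success or a non-retryable error
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs *= 2; // double the wait before the next attempt
  }
  throw new Error("Rate limit: retries exhausted");
}
```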
## Cost Optimization
- Use `gpt-3.5-turbo` for basic tasks to reduce costs
- Set appropriate `max_tokens` limits
- Use system prompts to reduce repetitive context
- Cache responses when appropriate
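The caching suggestion can be as simple as an in-memory map keyed by model and messages. This sketch assumes identical prompts should return identical answers, which is only reasonable for deterministic, low-temperature use; `callNode` is a hypothetical invoker for the Chat node:

```typescript
// In-memory response cache keyed by model + messages.
// Only appropriate when repeated identical prompts are expected.
type Message = { role: string; content: string };

const cache = new Map<string, string>();

async function cachedChat(
  model: string,
  messages: Message[],
  callNode: (model: string, messages: Message[]) => Promise<string>,
): Promise<string> {
  const key = JSON.stringify({ model, messages });
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // skip the paid API call on a cache hit
  const reply = await callNode(model, messages);
  cache.set(key, reply);
  return reply;
}
```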
## Related Nodes
- OpenAI Embeddings - Generate text embeddings
- OpenAI Images - Generate images with DALL-E
- Text Completion - Simple text completion
## Troubleshooting

### Common Issues
- **Authentication Failed**
  - Verify the API key is correct and active
  - Check account billing status
- **Model Access Denied**
  - Ensure you have access to the requested model
  - Check model availability in your region
- **High Latency**
  - Consider using streaming for long responses
  - Optimize prompt length and complexity
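For the high-latency case, streaming lets you start rendering output before the full completion is ready. If the node itself does not expose streaming, a simplified sketch of calling the Chat Completions endpoint directly with `stream: true` looks like this (assumes Node 18+ for global `fetch`; the line parsing is deliberately naive, and a production version should buffer partial lines):

```typescript
// Stream a chat completion and print tokens as they arrive.
// Parsing is simplified: it assumes each read() yields whole "data: ..." lines.
async function streamChat(apiKey: string, prompt: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
      stream: true, // request server-sent events instead of one final response
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value).split("\n")) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice("data: ".length)).choices[0]?.delta?.content;
      if (delta) process.stdout.write(delta); // emit partial content immediately
    }
  }
}
```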