Troubleshooting
Find solutions to common issues with the gintonic inference subchain. Learn debugging techniques and get answers to frequently encountered problems.
Even the smoothest systems hit a bump now and then. When that happens, we've got your back. This guide will help you diagnose and solve common issues you might encounter while using the inference subchain.
Common Issues and Solutions
1. Connection Problems
Symptom: Unable to establish a WebSocket connection.
Possible Causes and Solutions:
a) Invalid API Key
Double-check your API key.
Ensure you're using the correct key for the environment (development/production).
b) Network Issues
Check your internet connection.
If you're behind a firewall, ensure it's not blocking WebSocket connections.
Code to Test Connection:
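The following is a minimal sketch in Python using the `websockets` library; the endpoint URL, the Bearer-token header format, and the key placeholder are illustrative assumptions, so substitute the values from your API reference.

```python
# Minimal WebSocket connection test (sketch, not the official client).
# ASSUMPTIONS: the endpoint URL and Authorization header format below
# are placeholders; use the real values from the API reference.
import asyncio
import websockets

API_KEY = "your-api-key"               # never commit a real key
ENDPOINT = "wss://api.example.com/ws"  # hypothetical endpoint

async def test_connection():
    try:
        async with websockets.connect(
            ENDPOINT,
            # named `extra_headers` in websockets < 14
            additional_headers={"Authorization": f"Bearer {API_KEY}"},
            open_timeout=10,
        ) as ws:
            print("Connection established")
    except websockets.exceptions.WebSocketException as exc:
        # Handshake rejected: usually an authentication problem
        print("Handshake failed (check your API key):", exc)
    except OSError as exc:
        # DNS failure, refused connection, firewall, timeout, etc.
        print("Network error (check connectivity/firewall):", exc)

asyncio.run(test_connection())
```

If the handshake is rejected with a 4xx status, suspect authentication; if the socket never opens at all, suspect the network or a firewall.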
2. Rate Limiting
Symptom: Receiving 429 (Too Many Requests) errors.
Solution:
Implement exponential backoff in your requests.
If you consistently hit rate limits, consider upgrading your plan.
Example Backoff Implementation:
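A sketch in Python using the `requests` library; the URL and payload you pass in are placeholders to adapt to the real request format, and the server's `Retry-After` hint is honored when present.

```python
# Exponential backoff with jitter for 429 responses (sketch).
import random
import time

import requests

def post_with_backoff(url, payload, max_retries=5, base_delay=1.0):
    """POST, retrying on 429 with exponentially growing delays."""
    for attempt in range(max_retries):
        response = requests.post(url, json=payload, timeout=30)
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint when it is provided.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2 ** attempt
        time.sleep(delay + random.uniform(0, 1))  # jitter spreads out retries
    raise RuntimeError("Still rate limited after retries; consider upgrading your plan")
```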
3. Unexpected End of Chat Session
Symptom: Chat session ends unexpectedly or returns an error about maximum tokens.
Possible Causes and Solutions:
a) Exceeded Maximum Tokens
The conversation has reached the model's token limit.
Start a new chat session for continued interaction.
b) Inactivity Timeout
Chat sessions automatically end after 24 hours of inactivity.
Implement a keep-alive mechanism (sketched below) or start a new session if needed.
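A sketch of a keep-alive loop over an open WebSocket, assuming protocol-level pings count as activity; if the service only counts application messages, send a lightweight request instead.

```python
# Periodic keep-alive for a long-lived WebSocket session (sketch).
# ASSUMPTION: the server treats WebSocket pings as activity.
import asyncio

async def keep_alive(ws, interval_seconds=600):
    """Ping every 10 minutes so the session is not marked inactive."""
    while True:
        await asyncio.sleep(interval_seconds)
        pong_waiter = await ws.ping()
        await pong_waiter  # raises if the connection has dropped
```

Run this as a background task, e.g. `asyncio.create_task(keep_alive(ws))`, alongside your normal message handling.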
4. Unexpected Model Outputs
Symptom: The model's responses are irrelevant or seem to ignore context.
Possible Causes and Solutions:
a) Unclear or Ambiguous Prompts
Refine your prompts to be more specific and provide necessary context.
b) Inconsistent Chat History
Ensure you're sending the correct chat ID with each request (see the sketch below).
Consider summarizing long conversations to maintain context without exceeding token limits.
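A sketch of attaching a consistent chat ID to every message; the field names below are illustrative, not the confirmed wire format.

```python
# Attaching a consistent chat ID to every message (sketch).
# ASSUMPTION: the "chat_id" and "message" field names are placeholders;
# check the API reference for the real request schema.
import json

def build_message(chat_id: str, text: str) -> str:
    return json.dumps({"chat_id": chat_id, "message": text})

# Reuse the same chat_id for the whole conversation so the model
# always sees the history it expects.
payload = build_message("abc123", "Summarize our discussion so far.")
```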
5. High Latency
Symptom: Responses are taking longer than expected.
Possible Causes and Solutions:
a) Complex Queries
Break down complex tasks into smaller, more manageable requests.
b) Network Issues
Check your network connection and latency to our servers; a quick timing sketch follows below.
c) High System Load
If persistent, it might indicate high load on our systems. Check our status page.
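To tell network latency apart from model latency, time a lightweight request a few times; the URL below is a placeholder for whichever inexpensive endpoint you have access to.

```python
# Rough round-trip timing (sketch). A stable `min` with a high `avg`
# points at load; a uniformly high `min` points at the network path.
import time

import requests

def measure_latency(url: str, samples: int = 5):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)

best, avg = measure_latency("https://api.example.com/health")  # placeholder URL
print(f"best {best * 1000:.0f} ms, avg {avg * 1000:.0f} ms")
```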
Error Message Explanations
Here's a quick reference for common error messages you might encounter:
| Code | Message | Meaning | Solution |
| --- | --- | --- | --- |
| 401 | "Unauthorized" | Invalid or missing API key | Check your API key and ensure it's correctly included in the request header |
| 403 | "Forbidden" | Insufficient permissions or depleted balance | Check your account balance and permissions |
| 404 | "Chat Not Found" | The specified chat ID doesn't exist | Verify the chat ID or start a new chat session |
| 429 | "Too Many Requests" | You've exceeded the rate limit | Implement backoff and retry logic |
| 500 | "Internal Server Error" | An unexpected error occurred on our end | If persistent, contact our support team |
| — | "Max Token Limit Exceeded" | The model's response reached the maximum token limit | Start a new chat session; the current session will be lost |
| — | "Invalid Data" | The server-side script received incorrect data in the request | Check your request format and data |
Debugging Tools
To help diagnose issues, we provide several debugging tools:
Verbose Mode: Enable detailed logging in the client.
Request ID: Each request has a unique ID. Include this when contacting support.
Health Check Endpoint: Use this to verify the service status (see the sketch below).
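A sketch of the first and third tools in Python; the logger configuration is generic, and the `/health` path and `X-Request-ID` header name are assumptions rather than confirmed values.

```python
# Verbose logging plus a health check (sketch).
# ASSUMPTIONS: the /health path and X-Request-ID header are placeholders;
# use the names documented for the official client and API.
import logging

import requests

logging.basicConfig(level=logging.DEBUG)  # verbose mode: logs every HTTP exchange

response = requests.get("https://api.example.com/health", timeout=10)
print("status:", response.status_code)

# Quote the request ID when contacting support; it is often echoed
# back in a response header.
print("request id:", response.headers.get("X-Request-ID"))
```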
Still Stuck?
If you've tried these solutions and are still experiencing issues:
Check our FAQ page for more common questions and answers.
Visit our Community Forum to see if others have encountered and solved similar issues.
If all else fails, don't hesitate to contact our support team. We're here to help!
Remember to include as much relevant information as possible:
The last 4 characters of your API key (never share the full key)
The request ID (if applicable)
A description of the issue
Any error messages you're receiving
Steps to reproduce the problem
We're committed to providing a smooth experience with the inference subchain. Your feedback helps us improve, so don't be shy about reporting issues or suggesting improvements!