Troubleshooting
Connectivity and Model Configuration
Model Initialization Fails
If your application fails to connect to a specific LLM, ensure you have correctly configured the model-agnostic bridge.
- Check Model String: Verify that the model name matches the specific provider's naming convention (e.g., `gpt-4o`, `claude-3-opus`, or your specific open-source identifier).
- Environment Variables: Ensure your API keys are correctly exported in your environment. Deeptrain requires these to interface with private model providers.
```shell
# Example for an OpenAI-based model
export OPENAI_API_KEY='your-api-key-here'
```
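Before initializing a model, it can help to fail fast with an actionable message when the key is missing, rather than surfacing an opaque provider error later. This is a minimal sketch; `require_api_key` is an illustrative helper, not part of the Deeptrain API:

```python
import os

def require_api_key(var_name: str) -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it before initializing the model."
        )
    return key

# Only attempt initialization when the key is actually present.
if "OPENAI_API_KEY" in os.environ:
    key = require_api_key("OPENAI_API_KEY")
```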
Unsupported Model Errors
While Deeptrain supports 200+ models, certain cutting-edge or niche open-source models may require custom wrappers.
- Resolution: If a model is not recognized, ensure you are using the latest version of the Deeptrain library. If the issue persists, verify that the model follows the standard inference API format supported by the platform.
Video Processing and Transcribe API
Transcribe API Timeouts
Processing high-resolution or long-duration videos from YouTube or Vimeo can occasionally lead to timeouts.
- File Size/Duration: Very large files may exceed the default processing window. Try splitting large video files into smaller segments before passing them to the Transcribe API.
- URL Accessibility: Ensure the video URL is public. Deeptrain cannot access private videos or videos behind a paywall/authentication layer without specific credentials.
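One way to stay under the processing window is to pre-compute fixed-length segments and submit each one to the Transcribe API separately. The sketch below only computes the `(start, end)` timestamps; the 600-second window is an illustrative value, not a documented Deeptrain limit:

```python
def segment_bounds(duration_s: float, max_segment_s: float = 600.0):
    """Split a total duration into (start, end) pairs no longer than max_segment_s."""
    bounds = []
    start = 0.0
    while start < duration_s:
        end = min(start + max_segment_s, duration_s)
        bounds.append((start, end))
        start = end
    return bounds

# A 25-minute video becomes two full 10-minute chunks plus a 5-minute remainder.
print(segment_bounds(1500))  # → [(0.0, 600.0), (600.0, 1200.0), (1200.0, 1500.0)]
```

The actual cutting can then be done with any video tool (e.g., ffmpeg) using these timestamps before uploading each piece.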
Transcription Inaccuracy
If the text output from a video or audio file is incoherent:
- Audio Quality: Ensure the source audio is clear. Background noise can interfere with the transcription engine.
- Language Parameters: Ensure the input language matches the expected model configuration. While Deeptrain is multi-modal, explicitly defining the source language (if the model allows) improves accuracy.
Computer Vision and Graphic Understanding
AI Cannot "See" Images/Graphs
If you are using a non-vision model (like a standard GPT-3.5 or an older Llama variant) and it fails to interpret images or flowcharts:
- Wrapper Check: Ensure you have initialized the Deeptrain CV wrapper. Deeptrain enables vision for non-vision models by pre-processing images into interpretable context.
- Format Compatibility: Verify the image format. Supported formats include `.jpg`, `.png`, and `.svg` for graphs and diagrams.
- Resolution: Extremely low-resolution images may fail to be parsed into meaningful data for the model.
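The two checks above can be run client-side before submitting an image. This is a standalone sketch; the minimum-dimension threshold is an illustrative value, not a documented Deeptrain limit:

```python
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".png", ".svg"}
MIN_DIMENSION = 64  # illustrative threshold; very small images tend to parse poorly

def precheck_image(path: str, width: int, height: int) -> list:
    """Return a list of problems found before sending the image to the CV wrapper."""
    problems = []
    suffix = Path(path).suffix.lower()
    if suffix not in SUPPORTED_FORMATS:
        problems.append(f"unsupported format: {suffix}")
    if min(width, height) < MIN_DIMENSION:
        problems.append(f"resolution too low: {width}x{height}")
    return problems

print(precheck_image("chart.png", 800, 600))  # → []
print(precheck_image("chart.bmp", 32, 32))
```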
Embedding Database and Retrieval
Irrelevant Context Retrieval
If the AI is providing outdated or irrelevant information despite having the correct data source:
- Clear Local Cache: The localized embedding database might be retrieving stale vectors. Re-index your data source to ensure the embeddings reflect the most recent content.
- Context Window Limits: While Deeptrain helps bypass context window limitations, sending too many retrieved "chunks" can still dilute the model's focus. Adjust your retrieval parameters to limit the number of top-k results returned.
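Capping the number of retrieved chunks is a plain top-k selection over similarity scores. The sketch below uses cosine similarity over raw vectors and is not tied to Deeptrain's retrieval API; it only illustrates why a smaller `k` keeps the context focused:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, chunks, k=3):
    """Return the ids of the k chunks most similar to the query vector."""
    scored = sorted(chunks.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [chunk_id for chunk_id, _ in scored[:k]]

chunks = {
    "a": [1.0, 0.0],
    "b": [0.9, 0.1],
    "c": [0.0, 1.0],
}
print(top_k([1.0, 0.0], chunks, k=2))  # → ['a', 'b']
```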
Real-Time Data Sync Issues
If live data sources are not appearing in responses:
- Sync Interval: Check the polling interval of your data connector.
- Database Permissions: Ensure the localized database has write permissions to the directory where embeddings are stored.
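The permissions check can be scripted. This is a minimal sketch using the standard library; the `./embeddings` path is illustrative, so substitute your configured embeddings directory:

```python
import os

def check_embedding_dir(path: str) -> bool:
    """Verify the embeddings directory exists and is writable by this process."""
    if not os.path.isdir(path):
        print(f"{path} does not exist")
        return False
    if not os.access(path, os.W_OK):
        print(f"{path} is not writable; fix permissions (e.g., chmod u+w)")
        return False
    return True

# Illustrative path; point this at your configured embeddings directory.
check_embedding_dir("./embeddings")
```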
API Input/Output Reference
When troubleshooting programmatic integrations, verify your request structure against the following schema:
Transcribe API
| Parameter | Type | Description |
| :--- | :--- | :--- |
| source_url | String | Public URL of the video (YouTube, Vimeo, etc.) |
| file | Blob/Buffer | Local video/audio file input |
| model_preference | String | Optional: Specify a model for transcription |
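A payload can be assembled and validated against the table above before it is sent. This sketch is illustrative (the `build_transcribe_request` helper and the example URL are not part of the Deeptrain SDK); it only enforces that at least one input source is present, since omitting both typically produces a `400`:

```python
def build_transcribe_request(source_url=None, file=None, model_preference=None):
    """Assemble a Transcribe API payload matching the parameter table above."""
    if source_url is None and file is None:
        raise ValueError("provide either source_url or file")
    payload = {}
    if source_url is not None:
        payload["source_url"] = source_url
    if file is not None:
        payload["file"] = file
    if model_preference is not None:
        payload["model_preference"] = model_preference  # optional per the schema
    return payload

print(build_transcribe_request(source_url="https://example.com/video"))
```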
Common Error Codes:
- `400 Bad Request`: Invalid file format or missing URL.
- `401 Unauthorized`: API key missing or invalid.
- `413 Payload Too Large`: The video file exceeds the maximum allowed size.
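Client code can map these documented codes back to the remediation steps in this guide. A minimal sketch; the hint strings are drawn from the sections above, not from an official SDK:

```python
REMEDIES = {
    400: "Check the file format and ensure a source_url or file is included.",
    401: "Set a valid API key in your environment.",
    413: "Split the video into smaller segments and retry.",
}

def explain_error(status: int) -> str:
    """Translate a documented Transcribe API error code into a remediation hint."""
    return REMEDIES.get(status, f"Unhandled status {status}; consult the API reference.")

print(explain_error(413))  # → Split the video into smaller segments and retry.
```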