# Quick Start
## Installation
Get started by installing the Deeptrain SDK via pip. Deeptrain requires Python 3.8 or higher.

```bash
pip install deeptrain
```
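If you are unsure whether your interpreter meets the version requirement, a quick self-contained check (not part of the SDK) is:

```python
import sys

# Deeptrain requires Python 3.8 or higher; verify the running interpreter first.
assert sys.version_info >= (3, 8), "Deeptrain requires Python 3.8 or higher"
print(f"Python {sys.version_info[0]}.{sys.version_info[1]} detected")
```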
## Quick Start: Connect Your First Data Source
This five-minute guide walks through initializing the Deeptrain client and processing a video URL into searchable data for your AI agent.
### 1. Initialize the Client
First, import Deeptrain and configure your environment. Deeptrain is model-agnostic, so you can specify your preferred LLM provider.
```python
from deeptrain import Deeptrain

# Initialize the connector
dt = Deeptrain(api_key="YOUR_DEEPTRAIN_API_KEY")
```
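Avoid hardcoding API keys in source files. A common pattern, not specific to Deeptrain, is to read the key from an environment variable; the variable name `DEEPTRAIN_API_KEY` below is illustrative, not mandated by the SDK:

```python
import os

# Prefer reading the key from the environment over hardcoding it in source.
# The variable name DEEPTRAIN_API_KEY is illustrative, not mandated by the SDK.
api_key = os.environ.get("DEEPTRAIN_API_KEY", "YOUR_DEEPTRAIN_API_KEY")
```

You can then pass `api_key=api_key` to the constructor, keeping the secret out of version control.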
### 2. Connect a Multi-modal Source
Deeptrain handles the heavy lifting of converting non-textual data into a format your LLM can understand. Let's process a video from YouTube using the Transcribe API.
```python
# Process a video for your AI agent's knowledge base
video_data = dt.video.transcribe(
    source="https://www.youtube.com/watch?v=example",
    mode="high_precision"
)
print(f"Transcription complete: {video_data.text[:100]}...")
```
### 3. Query via Localized Embeddings
Once the data is processed, Deeptrain stores it in a localized embedding database. You can now query this data directly without worrying about the LLM's context window limits.
```python
# Retrieve relevant context based on a natural language query
context = dt.embeddings.query(
    query="What are the key takeaways from the video?",
    top_k=3
)

# Pass this context directly to your LLM
response = dt.chat.complete(
    model="gpt-4",
    prompt=f"Based on this context: {context}, answer the user query."
)
```
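Conceptually, a `top_k` embedding query ranks stored vectors by similarity to the query vector and returns the best matches. The standalone sketch below illustrates the idea with cosine similarity in plain Python; it is a mental model of the technique, not Deeptrain's actual implementation:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def query_top_k(store, query_vec, top_k=3):
    # Rank every stored (text, vector) pair by similarity to the query vector.
    ranked = sorted(store, key=lambda item: cosine(item[1], query_vec), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy store of pre-embedded transcript chunks (vectors are made up for illustration).
store = [
    ("intro section", [0.9, 0.1, 0.0]),
    ("key takeaways", [0.1, 0.9, 0.2]),
    ("closing remarks", [0.0, 0.2, 0.9]),
]
print(query_top_k(store, [0.1, 1.0, 0.1], top_k=2))  # most similar chunks first
```

Because only the `top_k` most relevant chunks are returned, the context passed to the LLM stays small regardless of how much source material was ingested, which is how this pattern sidesteps context window limits.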
## Supported Data Types
You can use the same pattern to integrate various data formats:
| Module | Description | Example Usage |
| :--- | :--- | :--- |
| `dt.text` | Manage live data streams and embeddings. | `dt.text.sync(url="https://docs.example.com")` |
| `dt.vision` | Process flowcharts, graphs, and images. | `dt.vision.analyze(image_path="./flowchart.png")` |
| `dt.audio` | Convert audio files into actionable training data. | `dt.audio.process(file_path="./interview.mp3")` |
| `dt.video` | Multi-dimensional processing for Vimeo/YouTube. | `dt.video.transcribe(source="vimeo_url")` |
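Since the modules share a common call pattern, a thin helper can route a local file to the appropriate module by extension. The mapping and the `route_source` helper below are hypothetical illustrations, not part of the SDK:

```python
from pathlib import Path

# Illustrative mapping from file extension to Deeptrain module name.
# Both the mapping and route_source are hypothetical, not part of the SDK.
MODULE_BY_EXTENSION = {
    ".png": "vision",
    ".jpg": "vision",
    ".mp3": "audio",
    ".wav": "audio",
    ".mp4": "video",
}

def route_source(path: str) -> str:
    ext = Path(path).suffix.lower()
    return MODULE_BY_EXTENSION.get(ext, "text")  # fall back to text handling

print(route_source("./flowchart.png"))   # routes to the vision module
print(route_source("./interview.mp3"))   # routes to the audio module
```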
## Next Steps
- Explore Models: Check the Model Support Guide to see how to configure any of the 200+ supported private and open-source models.
- Advanced Vision: Learn how to turn non-vision models into vision-enabled agents in the Computer Vision Tutorial.
- API Reference: View the full API Documentation for detailed parameter definitions.