Generative Nodes
Generative nodes are the core of Agent Docks. They use AI models to create text, images, and videos based on your prompts and inputs.
Overview
Generative nodes connect to AI models from leading providers (OpenAI, Google, Anthropic, Runway, and more) to create content. Each generative node:
- Accepts prompts and optional attachments as input
- Lets you select from multiple AI models and providers
- Shows estimated credit cost before running
- Outputs generated content to preview nodes
- Can trigger downstream nodes via trigger connections
Text Generate Node
Generate text content using large language models (LLMs). Perfect for writing, brainstorming, analysis, and content creation.
Supported Models
- GPT-4o (OpenAI): fast, versatile
- Claude 3.7 Sonnet (Anthropic): long context
- Gemini 2.0 Flash (Google): multimodal
- GPT-4o Mini (OpenAI): cost-effective
Key Features
- Prompt Editor: Rich text editor with syntax highlighting
- Attachments: Include images, documents, or context files
- System Prompt: Set model behavior and personality
- Temperature: Control creativity vs. consistency
- Max Tokens: Limit response length
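How these settings reach the underlying model depends on the provider, but the mapping can be sketched as an OpenAI-style chat completion payload. This is an illustrative sketch only; the function name and field layout are assumptions, not Agent Docks' actual schema:

```python
def build_text_request(prompt, system_prompt=None, temperature=0.7,
                       max_tokens=1024, model="gpt-4o"):
    """Assemble an OpenAI-style chat payload from text-node settings.

    Lower temperature (e.g. 0.2) favors consistent output; higher
    (e.g. 1.0) favors creative variation. max_tokens caps only the
    response length, not the prompt.
    """
    messages = []
    if system_prompt:
        # The system prompt sets model behavior before the user prompt.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

req = build_text_request("Summarize this report.", system_prompt="Be concise.")
```

A request built this way sends the system prompt first, so it shapes how the model handles every user prompt that follows.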
Ports
- Prompt (Text): main prompt input
- Attachments (Files): optional context files
- Trigger In: execution control
- Text Output: generated text
- Trigger Out: triggers downstream nodes
Image Generate Node
Create images from text prompts using state-of-the-art image generation models. Perfect for concept art, illustrations, marketing assets, and visual content.
Supported Models
- FLUX Pro 1.1 (Black Forest Labs): high quality
- DALL-E 3 (OpenAI): strong prompt adherence
- Stable Diffusion XL (Stability AI): versatile
- Imagen 3 (Google): photorealistic
Key Features
- Aspect Ratio: Square, portrait, landscape, or custom
- Resolution: Control output dimensions
- Style Presets: Photography, illustration, 3D, anime, etc.
- Negative Prompt: Specify what to avoid
- Seed: Reproducible generations
- Image Reference: Use reference images for style or composition
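The seed is what makes a generation reproducible: the same prompt, settings, and seed yield the same image, while omitting the seed lets the provider randomize each run. A minimal sketch, assuming a flat request payload (field names here are hypothetical, not Agent Docks' actual schema):

```python
def build_image_request(prompt, negative_prompt="", aspect_ratio="1:1",
                        seed=None, model="flux-pro-1.1"):
    """Assemble an illustrative image-generation payload.

    A fixed seed reproduces the same image for identical settings;
    seed=None leaves randomization to the provider on each run.
    """
    request = {
        "model": model,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
    }
    if negative_prompt:
        request["negative_prompt"] = negative_prompt
    if seed is not None:
        request["seed"] = seed  # reuse this value to regenerate the same image
    return request

a = build_image_request("a lighthouse at dusk", seed=42)
b = build_image_request("a lighthouse at dusk", seed=42)
assert a == b  # identical settings produce an identical request
```

In practice this means you can iterate on a negative prompt or style preset while pinning the seed, so only the change you made affects the result.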
Ports
- Prompt (Text): image description
- Attachments (Images): reference images
- Trigger In: execution control
- Image Output: generated image
- Trigger Out: triggers downstream nodes
Video Generate Node
Generate videos from text prompts or images. Create motion content, animations, and video clips using cutting-edge AI video models.
Supported Models
- Kling 1.6 (Kuaishou): high quality
- Runway Gen-3 (Runway): creative control
- Veo 2 (Google): cinematic
- Pika 1.5 (Pika): fast generation
Key Features
- Duration: 5–10 second clips (model dependent)
- Aspect Ratio: 16:9, 9:16, 1:1, or custom
- Quality Mode: Standard or high quality
- Sound: Generate with or without audio
- First Frame: Use an image as the starting frame
- Camera Movement: Control motion and perspective
Ports
- Prompt (Text): video description
- Attachments (Images): first-frame reference
- Trigger In: execution control
- Video Output: generated video
- Trigger Out: triggers downstream nodes
Run Modes
Generative nodes support two run modes that control how outputs are handled:
Run (Default)
Fills empty preview nodes or creates new ones. Never overwrites existing outputs. This is the safe default mode that prevents accidental data loss.
Overwrite
Replaces the first connected preview node's output. Use this when you want to regenerate and replace an existing result. Requires confirmation.
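The two modes differ only in which preview node receives the new output. A hypothetical model of that selection logic (not the actual implementation), assuming preview nodes carry an `output` field:

```python
def resolve_target(preview_nodes, mode="run"):
    """Pick which preview node should receive a newly generated output.

    'run' fills the first empty preview node and returns None when all
    are full (the caller then creates a new preview node), so existing
    outputs are never touched. 'overwrite' always targets the first
    connected preview node, replacing whatever it holds.
    """
    if mode == "overwrite":
        return preview_nodes[0] if preview_nodes else None
    for node in preview_nodes:
        if node.get("output") is None:
            return node  # first empty preview node gets the result
    return None  # no empty node: create a new preview node instead

previews = [{"id": "p1", "output": "old image"}, {"id": "p2", "output": None}]
assert resolve_target(previews, "run")["id"] == "p2"        # p1 is left intact
assert resolve_target(previews, "overwrite")["id"] == "p1"  # p1 is replaced
```

The sketch shows why Run is the safe default: it can only ever write into empty slots, while Overwrite deliberately targets an occupied one, which is why it asks for confirmation.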
Best Practices
- Be specific: Detailed prompts produce better results
- Use references: Attach reference images for style consistency
- Test models: Different models excel at different tasks
- Monitor costs: Check credit estimates before running
- Save workflows: Reuse successful prompts as templates