Storage Types

Storage types handle the persistence of messages and metadata.

In-Memory

The default storage type, used when no storage block is specified.

# No storage block needed; in-memory is the default
memory:
  id: my_memory
  vector:
    use: compozy/vector:pg-vector
    dimensions: 1536

Best Used For:

  • Development and testing
  • Quick prototyping
  • Temporary storage needs
  • Single-instance applications

PostgreSQL

Production-grade storage built on PostgreSQL.

storage:
  use: compozy/storage:postgres
  config:
    url: "{{ env.POSTGRES_URL }}"

Best Used For:

  • Production deployments
  • High-concurrency workloads
  • Complex querying needs
  • Data persistence requirements

Redis

In-memory data store with optional persistence.

storage:
  use: compozy/storage:redis
  config:
    url: "{{ env.REDIS_URL }}"

Best Used For:

  • High-performance needs
  • Caching requirements
  • Real-time applications
  • Temporary data with optional persistence

Upstash

Serverless Redis-compatible storage.

storage:
  use: compozy/storage:upstash
  config:
    url: "{{ env.UPSTASH_URL }}"
    token: "{{ env.UPSTASH_TOKEN }}"

Best Used For:

  • Serverless deployments
  • Edge computing
  • Pay-per-use scenarios
  • Low-latency requirements

SQLite

Lightweight file-based storage.

storage:
  use: compozy/storage:sqlite
  config:
    path: ./data/memory.db

Best Used For:

  • Local development
  • Single-file deployments
  • Simple applications
  • Embedded systems

LibSQL

Distributed SQLite-compatible storage.

storage:
  use: compozy/storage:libsql
  config:
    url: "{{ env.LIBSQL_URL }}"

Best Used For:

  • Edge deployments
  • Distributed systems
  • SQLite compatibility
  • Serverless applications

Vector Types

Vector types handle the storage and retrieval of embeddings. The dimensions setting must match the output dimensionality of the embedder you pair it with; for example, OpenAI's text-embedding-3-small produces 1536-dimensional vectors, which is why the examples below use dimensions: 1536.

PgVector

Vector storage built on PostgreSQL's pgvector extension.

vector:
  use: compozy/vector:pg-vector
  dimensions: 1536
  config:
    url: "{{ env.POSTGRES_URL }}"

Best Used For:

  • Production deployments
  • Integrated PostgreSQL setups
  • Cost-effective vector storage
  • Simple vector search needs

Pinecone

Managed vector database service.

vector:
  use: compozy/vector:pinecone
  dimensions: 1536
  config:
    apiKey: "{{ env.PINECONE_API_KEY }}"

Best Used For:

  • Large-scale deployments
  • High-performance vector search
  • Managed infrastructure
  • Production ML applications

Qdrant

Open-source vector database.

vector:
  use: compozy/vector:qdrant
  dimensions: 1536
  config:
    url: "{{ env.QDRANT_URL }}"

Best Used For:

  • Complex filtering needs
  • Self-hosted deployments
  • High-performance search
  • Flexible deployment options

Chroma

Open-source embedding database.

vector:
  use: compozy/vector:chroma
  dimensions: 1536
  config:
    path: ./data/embeddings

Best Used For:

  • Local development
  • Simple vector search
  • Quick prototyping
  • Small to medium datasets

Milvus

Distributed vector database.

vector:
  use: compozy/vector:milvus
  dimensions: 1536
  config:
    url: "{{ env.MILVUS_URL }}"

Best Used For:

  • Large-scale deployments
  • Distributed search
  • High availability needs
  • Complex vector operations

Embedder Types

Embedder types handle the conversion of text to vector embeddings.

OpenAI

OpenAI’s text embedding models.

embedder:
  use: compozy/embedder:openai
  config:
    model: text-embedding-3-small

Best Used For:

  • High-quality embeddings
  • Production use cases
  • Multi-lingual support
  • Latest embedding models

FastEmbed

Lightweight local embedding models.

embedder:
  use: compozy/embedder:fastembed
  config:
    model: bge-small-en

Best Used For:

  • Local deployment
  • Privacy requirements
  • Cost-effective solutions
  • Quick prototyping

Cohere

Cohere’s embedding service.

embedder:
  use: compozy/embedder:cohere
  config:
    model: embed-english-v3.0

Best Used For:

  • Enterprise use cases
  • Specialized embeddings
  • Multi-lingual needs
  • Alternative to OpenAI

HuggingFace

Open-source models from HuggingFace.

embedder:
  use: compozy/embedder:huggingface
  config:
    model: sentence-transformers/all-mpnet-base-v2

Best Used For:

  • Custom models
  • Self-hosted solutions
  • Research applications
  • Specialized domains

Vertex AI

Google Cloud’s embedding service.

embedder:
  use: compozy/embedder:vertex
  config:
    model: textembedding-gecko
    project: "{{ env.GOOGLE_CLOUD_PROJECT }}"

Best Used For:

  • Google Cloud integration
  • Enterprise deployments
  • Scalable solutions
  • Managed infrastructure

Best Practices

Choose the Right Storage

Select a storage type based on your persistence and performance needs (example below):

  • Use in-memory for development
  • Use PostgreSQL for traditional deployments
  • Use Redis for high-performance needs
  • Use Upstash for serverless deployments
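
For example, a development setup can simply omit the storage block and fall back to the in-memory default, while production adds an explicit PostgreSQL block. Both fragments below are taken from the examples above; adapt the id and URL to your project.

# Development: no storage block, in-memory default applies
memory:
  id: my_memory
  vector:
    use: compozy/vector:pg-vector
    dimensions: 1536

# Production: explicit PostgreSQL storage
storage:
  use: compozy/storage:postgres
  config:
    url: "{{ env.POSTGRES_URL }}"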

Vector Configuration

Choose vector storage based on scale and requirements:

  • Use PgVector for integrated PostgreSQL setups
  • Use Pinecone for managed vector search
  • Use Qdrant for self-hosted deployments
  • Use Chroma for local development

Embedder Selection

Select an embedder based on your use case (combined example below):

  • Use OpenAI for production quality
  • Use FastEmbed for local deployment
  • Use Cohere for enterprise needs
  • Use HuggingFace for custom models
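
As an illustration of how these selections fit together, here is a minimal sketch of a single memory definition pairing PostgreSQL storage, PgVector, and the OpenAI embedder. The nesting of the storage and embedder blocks under memory is an assumption extrapolated from the in-memory default example, which only shows vector nested; check the schema reference for the authoritative layout.

memory:
  id: my_memory
  # Assumed nesting: the individual examples above show these blocks standalone
  storage:
    use: compozy/storage:postgres
    config:
      url: "{{ env.POSTGRES_URL }}"
  vector:
    use: compozy/vector:pg-vector
    dimensions: 1536 # matches the output size of text-embedding-3-small
    config:
      url: "{{ env.POSTGRES_URL }}"
  embedder:
    use: compozy/embedder:openai
    config:
      model: text-embedding-3-small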

Security

Implement security best practices (example below):

  • Use environment variables for credentials
  • Enable SSL/TLS in production
  • Implement proper access controls
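
For instance, credentials can be kept out of configuration files by referencing environment variables, and TLS can be required on a PostgreSQL connection through the standard sslmode parameter in the connection URL. A minimal sketch; the example URL value is illustrative only.

storage:
  use: compozy/storage:postgres
  config:
    # Credentials stay in the environment, not in the file
    url: "{{ env.POSTGRES_URL }}"

# Illustrative environment value that enforces TLS on the connection:
# POSTGRES_URL=postgres://app_user:change-me@db.internal:5432/compozy?sslmode=require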