Files
Perplexica/docker-compose.yaml
haddadrm 5a603a7fd4 Implemented a configurable stream delay feature for
reasoning models using the ReasoningChatModel custom class.

1. Added the STREAM_DELAY parameter to the sample.config.toml file:

[MODELS.DEEPSEEK]
API_KEY = ""
STREAM_DELAY = 20  # Milliseconds between token emissions for reasoning models (higher = slower, 0 = no delay)

2. Updated the Config interface in src/config.ts to include the new parameter:

DEEPSEEK: {
  API_KEY: string;
  STREAM_DELAY: number;
};

3. Added a getter function in src/config.ts to retrieve the configured value:

export const getDeepseekStreamDelay = () =>
  loadConfig().MODELS.DEEPSEEK.STREAM_DELAY ?? 20; // Default to 20ms when unset; ?? (unlike ||) preserves an explicit 0

Updated the deepseek.ts provider to use the configured stream delay:

const streamDelay = getDeepseekStreamDelay();
logger.debug(`Using stream delay of ${streamDelay}ms for ${model.id}`);

// Then using it in the model configuration
model: new ReasoningChatModel({
  // ...other params
  streamDelay
}),

4. This implementation provides several benefits:

- User-configurable: users can adjust the stream delay without modifying code
- Descriptive naming: the parameter name "STREAM_DELAY" clearly indicates its purpose
- Documented: the comment in the config file explains what the parameter does
- Fallback default: if not specified, it defaults to 20ms
- Logging: debug logging shows the configured value when loading models

To adjust the stream delay, users can simply modify the STREAM_DELAY value in
their config.toml file. Higher values slow down token generation (making it
easier to read in real time), while lower values speed it up. Setting it to 0
disables the delay entirely.
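The delay loop itself is not shown in the commit, but the mechanics can be sketched as a throttled async generator. This is a hypothetical illustration: `throttleTokens` and `sleep` are illustrative names, not the actual ReasoningChatModel internals.

```typescript
// Hypothetical sketch of how a streamDelay setting might throttle token
// emission in a streaming chat model (names are illustrative only).
const sleep = (ms: number) => new Promise<void>((res) => setTimeout(() => res(), ms));

async function* throttleTokens(
  tokens: string[],
  streamDelay: number, // milliseconds between emissions; 0 disables the delay
): AsyncGenerator<string> {
  for (const token of tokens) {
    if (streamDelay > 0) {
      await sleep(streamDelay); // pause before emitting the next token
    }
    yield token;
  }
}

// Usage: emit three tokens with a 20 ms pause between each
(async () => {
  const out: string[] = [];
  for await (const token of throttleTokens(['Hello', ' ', 'world'], 20)) {
    out.push(token);
  }
  console.log(out.join('')); // prints "Hello world"
})();
```

Because the delay runs between yields rather than inside the model call, a value of 0 falls straight through to an unthrottled stream, which is why the getter must not coerce 0 back to the default.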
2025-02-26 00:03:36 +04:00


services:
  searxng:
    image: docker.io/searxng/searxng:latest
    volumes:
      - ./searxng:/etc/searxng:rw
    ports:
      - 4000:8080
    networks:
      - perplexica-network
    restart: unless-stopped

  perplexica-backend:
    build:
      context: .
      dockerfile: backend.dockerfile
    image: itzcrazykns1337/perplexica-backend:main
    environment:
      - SEARXNG_API_URL=http://searxng:8080
    depends_on:
      - searxng
    ports:
      - 3001:3001
    volumes:
      - backend-dbstore:/home/perplexica/data
      - uploads:/home/perplexica/uploads
      - ./config.toml:/home/perplexica/config.toml
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    networks:
      - perplexica-network
    restart: unless-stopped

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api
        - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001
    image: itzcrazykns1337/perplexica-frontend:main
    depends_on:
      - perplexica-backend
    ports:
      - 3000:3000
    networks:
      - perplexica-network
    restart: unless-stopped

networks:
  perplexica-network:

volumes:
  backend-dbstore:
  uploads:
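A note on the port mappings above: each entry uses Compose's `host:container` short syntax, so only the left-hand (host) side needs to change to avoid a clash on the host. A sketch, with host port 4001 chosen arbitrarily:

```yaml
# Publishing SearXNG on a different host port: only the host side of the
# mapping changes; the container still listens on 8080 internally.
services:
  searxng:
    ports:
      - 4001:8080
```

The backend is unaffected by such a change, because SEARXNG_API_URL points at http://searxng:8080 over the internal perplexica-network, not at the published host port.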