Commit Graph

16 Commits

Author SHA1 Message Date
haddadrm
c2df5e47c9 refactor: remove unused deepseekChat.ts in favor
of reasoningChatModel.ts, and messageProcessor.ts in favor of
alternatingMessageValidator.ts

- Removed src/lib/deepseekChat.ts as it was duplicative
- All functionality is now handled by reasoningChatModel.ts
- No imports or references to deepseekChat.ts found in codebase

- Removed src/utils/messageProcessor.ts as it was duplicative
- All functionality is now handled by alternatingMessageValidator.ts
- No imports or references to messageProcessor.ts found in codebase
2025-02-28 00:02:21 +04:00
haddadrm
5a603a7fd4 Implemented the configurable stream delay feature for
reasoning models using the custom ReasoningChatModel class.

1. Added the STREAM_DELAY parameter to the sample.config.toml file:

[MODELS.DEEPSEEK]
API_KEY = ""
STREAM_DELAY = 20  # Milliseconds between token emissions for reasoning models (higher = slower, 0 = no delay)

2. Updated the Config interface in src/config.ts to include the new parameter:

DEEPSEEK: {
  API_KEY: string;
  STREAM_DELAY: number;
};

3. Added a getter function in src/config.ts to retrieve the configured value:

export const getDeepseekStreamDelay = () =>
  loadConfig().MODELS.DEEPSEEK.STREAM_DELAY || 20; // Default to 20ms if not specified

4. Updated the deepseek.ts provider to use the configured stream delay:

const streamDelay = getDeepseekStreamDelay();
logger.debug(`Using stream delay of ${streamDelay}ms for ${model.id}`);

// Then using it in the model configuration
model: new ReasoningChatModel({
  // ...other params
  streamDelay
}),

5. This implementation provides several benefits:

- User-Configurable: Users can now adjust the stream delay without modifying code
- Descriptive Naming: The parameter name "STREAM_DELAY" clearly indicates its purpose
- Documented: The comment in the config file explains what the parameter does
- Fallback Default: If not specified, it defaults to 20ms
- Logging: Added debug logging to show the configured value when loading models

To adjust the stream delay, users can simply modify the STREAM_DELAY value in
their config.toml file. Higher values slow down token generation (making it
easier to read in real time), while lower values speed it up. Setting it to 0
disables the delay entirely.
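
For context, a minimal sketch of how such a per-token delay could be applied
inside a streaming loop. The sleep helper and generator shape here are
illustrative assumptions, not the actual ReasoningChatModel internals:

// Hypothetical illustration only; the real ReasoningChatModel may differ.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function* throttleTokens(
  tokens: AsyncIterable<string>,
  streamDelay: number,
): AsyncGenerator<string> {
  for await (const token of tokens) {
    yield token; // emit the token to the caller first
    if (streamDelay > 0) {
      await sleep(streamDelay); // STREAM_DELAY = 0 skips the pause entirely
    }
  }
}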
2025-02-26 00:03:36 +04:00
haddadrm
a6e4402616 Add DeepSeek and LMStudio providers
- Integrate DeepSeek and LMStudio AI providers (see the sketch after this list)
- Add message processing utilities for improved handling
- Implement reasoning panel for message actions
- Add logging functionality to UI
- Update configurations and dependencies
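
As a rough illustration of the provider wiring, new backends typically register
a loader in a provider map. All names below are assumptions for this sketch,
not the repository's actual identifiers:

// Hypothetical sketch; loader names and registry shape are illustrative.
type ModelLoader = () => Promise<Record<string, unknown>>;

const loadDeepseekChatModels: ModelLoader = async () => ({
  /* models built from the [MODELS.DEEPSEEK] config section */
});
const loadLMStudioChatModels: ModelLoader = async () => ({
  /* models discovered from LM Studio's local server */
});

const providers: Record<string, ModelLoader> = {
  deepseek: loadDeepseekChatModels,
  lmstudio: loadLMStudioChatModels,
};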
2025-02-25 08:53:53 +04:00
ItzCrazyKns
07dc7d7649 feat(config): update config & custom openai 2025-02-15 11:26:38 +05:30
ItzCrazyKns
177746235a feat(providers): add gemini 2024-11-28 20:47:18 +05:30
ItzCrazyKns
c650d1c3d9 feat(ollama): add keep_alive param 2024-11-20 19:11:47 +05:30
ItzCrazyKns
1fcd64ad42 feat(docker-file): use SearXNG URL from env 2024-09-05 18:40:07 +05:30
ItzCrazyKns
f02393dbe9 feat(providers): add anthropic 2024-07-15 21:20:16 +05:30
ItzCrazyKns
0993c5a760 feat(app): revert port & network changes 2024-05-13 19:58:17 +05:30
ItzCrazyKns
9816eb1d36 feat(server): add bind address 2024-05-12 12:15:25 +05:30
ItzCrazyKns
f618b713af feat(chatModels): load model from localstorage 2024-05-02 12:14:26 +05:30
ItzCrazyKns
edc40d8fe6 feat(providers): add Groq provider 2024-05-01 19:43:06 +05:30
ItzCrazyKns
7653eaf146 feat(config): avoid updating blank fields 2024-04-23 16:54:39 +05:30
ItzCrazyKns
a86378e726 feat(config): add updateConfig method 2024-04-23 16:45:14 +05:30
ItzCrazyKns
d37a1a8020 feat(agents): support local LLMs 2024-04-20 11:18:52 +05:30
ItzCrazyKns
c6a5790d33 feat(config): Use toml instead of env 2024-04-20 09:32:19 +05:30