Commit Graph

44 Commits

Author SHA1 Message Date
f473a581ce implemented a refactoring plan with the configurable delay feature.

1. Created AlternatingMessageValidator (renamed from MessageProcessor), sketched below:

- Focused on handling alternating message patterns
- Made it model-agnostic with a configuration-driven approach
- Kept the core validation logic intact
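
The validator's code is not included in this log; a minimal, hypothetical sketch of a configuration-driven alternating-message validator (only the class name is taken from the commit, the config shape and merge behaviour are assumptions) might look like this:

```typescript
// Hypothetical sketch: a configuration-driven validator for alternating
// message roles. Only the class name comes from the commit above.
export type Role = 'user' | 'assistant' | 'system';

export interface ChatMessage {
  role: Role;
  content: string;
}

export interface ValidatorConfig {
  // Roles that are expected to strictly alternate, regardless of model.
  alternatingRoles: Role[];
  // Whether to merge two consecutive same-role messages instead of failing.
  mergeConsecutive: boolean;
}

export class AlternatingMessageValidator {
  constructor(private config: ValidatorConfig) {}

  validate(messages: ChatMessage[]): ChatMessage[] {
    const result: ChatMessage[] = [];

    for (const message of messages) {
      const previous = result[result.length - 1];

      if (
        previous !== undefined &&
        previous.role === message.role &&
        this.config.alternatingRoles.includes(message.role)
      ) {
        if (this.config.mergeConsecutive) {
          // Fold back-to-back messages from the same role into a single turn.
          previous.content += `\n${message.content}`;
          continue;
        }
        throw new Error(
          `Messages must alternate; got two '${message.role}' turns in a row.`,
        );
      }

      result.push({ ...message });
    }

    return result;
  }
}
```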

2. Created ReasoningChatModel (renamed from DeepSeekChat), sketched below:

- Made it generic for any model with reasoning/thinking capabilities
- Added a configurable streaming delay parameter (streamDelay)
- Implemented the delay logic in the streaming process
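
The streaming code itself is not reproduced in this log; as a minimal sketch of the idea, assuming only that streamDelay is a per-model option in milliseconds, the delay could be applied between emitted chunks like this:

```typescript
// Illustrative only: pacing a token stream with a configurable streamDelay.
// The real ReasoningChatModel wraps an OpenAI-compatible API; this sketch
// keeps just the delay logic described in the commit.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

export interface ReasoningChatModelOptions {
  streamDelay?: number; // milliseconds to wait between streamed chunks
}

export async function* paceStream(
  chunks: AsyncIterable<string>,
  options: ReasoningChatModelOptions,
): AsyncGenerator<string> {
  const delay = options.streamDelay ?? 0;
  for await (const chunk of chunks) {
    yield chunk;
    if (delay > 0) {
      await sleep(delay); // throttle token emission to the configured pace
    }
  }
}
```

With this shape, a provider can slow a specific model down simply by passing a different streamDelay value.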

3. Updated the DeepSeek provider, sketched below:

- Now uses ReasoningChatModel for deepseek-reasoner with a 50ms delay
- Uses the standard ChatOpenAI for deepseek-chat
- Added a clear distinction between models that need reasoning capabilities and those that don't
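
For illustration only, the provider described above might be wired roughly as follows; the module paths, loader shape, and ReasoningChatModel options are assumptions, while ChatOpenAI is LangChain's OpenAI-compatible chat client:

```typescript
// Illustrative shape of the DeepSeek provider after the refactor: the reasoning
// model gets ReasoningChatModel with a streaming delay, while the plain chat
// model keeps LangChain's standard ChatOpenAI.
import { ChatOpenAI } from '@langchain/openai';

import { ReasoningChatModel } from '../reasoningChatModel'; // path is illustrative

export const loadDeepSeekChatModels = (apiKey: string) => ({
  'deepseek-reasoner': new ReasoningChatModel({
    apiKey,
    baseURL: 'https://api.deepseek.com',
    modelName: 'deepseek-reasoner',
    streamDelay: 50, // ms between streamed chunks, per the note above; tune as needed
  }),
  'deepseek-chat': new ChatOpenAI({
    openAIApiKey: apiKey,
    modelName: 'deepseek-chat',
    configuration: { baseURL: 'https://api.deepseek.com' },
  }),
});
```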

4. Updated references in metaSearchAgent.ts, as shown below:

- Changed the import from messageProcessor to alternatingMessageValidator
- Updated function calls to use the new validator
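
The call sites are not shown in this log; a representative sketch of the updated usage in metaSearchAgent.ts, with paths, options, and the wrapper function assumed to mirror the validator sketch above, could look like this:

```typescript
// Illustrative only: metaSearchAgent.ts now pulls the validator in place of the
// old messageProcessor helpers.
import {
  AlternatingMessageValidator,
  type ChatMessage,
} from '../utils/alternatingMessageValidator';

// Previously something like: import { processMessages } from '../utils/messageProcessor';

const validator = new AlternatingMessageValidator({
  alternatingRoles: ['user', 'assistant'],
  mergeConsecutive: true,
});

export const prepareHistory = (chatHistory: ChatMessage[]): ChatMessage[] =>
  validator.validate(chatHistory);
```
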
The configurable delay implementation allows you to control the speed of token generation, which can help with the issue you were seeing. The delay is set to 20ms by default for the deepseek-reasoner model, but you can adjust this value in the deepseek.ts provider file to find the optimal speed.

This refactoring maintains all the existing functionality while making the code more maintainable and future-proof. The separation of concerns between message validation and model implementation will make it easier to add support for other models with similar requirements in the future.
2025-02-25 10:13:54 +04:00
a6e4402616 Add DeepSeek and LMStudio providers
- Integrate DeepSeek and LMStudio AI providers (see the sketch after this list)
- Add message processing utilities for improved handling
- Implement reasoning panel for message actions
- Add logging functionality to UI
- Update configurations and dependencies
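
The provider wiring is not shown in the commit body; since LM Studio exposes an OpenAI-compatible server locally, a minimal sketch of such a provider (the URL is LM Studio's usual default, the loader shape is an assumption) could look like this:

```typescript
// Illustrative LM Studio provider: LM Studio serves an OpenAI-compatible API
// locally, so LangChain's ChatOpenAI client can simply point at it.
import { ChatOpenAI } from '@langchain/openai';

export const loadLMStudioChatModel = (modelName: string) =>
  new ChatOpenAI({
    modelName,
    openAIApiKey: 'lm-studio', // LM Studio ignores the key, but the client requires one
    configuration: { baseURL: 'http://localhost:1234/v1' },
  });
```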
2025-02-25 08:53:53 +04:00
811c0c6fe1 Merge branch 'master' of https://github.com/ItzCrazyKns/Perplexica 2025-02-15 11:31:20 +05:30
41d056e755 feat(handlers): use new custom openai 2025-02-15 11:29:08 +05:30
3582695054 feat: add Gemini 2.0 Flash Exp models
# Description
   Added two new Gemini models:
   - gemini-2.0-flash-exp
   - gemini-2.0-flash-thinking-exp-01-21

   # Changes Made
   - Updated src/lib/providers/gemini.ts to include new models (see the sketch after this list)
   - Maintained consistent configuration with existing models
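
   The diff is not reproduced here; a sketch of what registering the two models in src/lib/providers/gemini.ts might look like, assuming the provider exposes a simple record of model instances (the record shape and temperature are not the repo's actual code):

   ```typescript
   // Illustrative addition to the Gemini provider: the two new models are
   // registered with the same construction pattern as the existing ones.
   import { ChatGoogleGenerativeAI } from '@langchain/google-genai';

   export const loadGeminiChatModels = (apiKey: string) => ({
     'gemini-2.0-flash-exp': new ChatGoogleGenerativeAI({
       apiKey,
       modelName: 'gemini-2.0-flash-exp',
       temperature: 0.7,
     }),
     'gemini-2.0-flash-thinking-exp-01-21': new ChatGoogleGenerativeAI({
       apiKey,
       modelName: 'gemini-2.0-flash-thinking-exp-01-21',
       temperature: 0.7,
     }),
   });
   ```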

   # Testing
   - Tested locally using Docker
   - Verified models appear in UI and are selectable
   - Confirmed functionality with sample queries

   # Additional Notes
   These models expand the available options for users who want to use the latest Gemini capabilities.
2025-02-05 00:47:34 +01:00
f37686189e feat(output-parsers): add empty check 2025-01-31 17:51:16 +05:30
409c811a42 feat(ollama): use axios instead of fetch 2024-12-26 19:02:20 +05:30
960e34aa3d Add Llama 3.3 model from Groq
Signed-off-by: Bart Jaskulski <bjaskulski@protonmail.com>
2024-12-19 08:07:36 +01:00
4cb38148b3 Remove deprecated Groq models
Signed-off-by: Bart Jaskulski <bjaskulski@protonmail.com>
2024-12-19 08:07:14 +01:00
1c3c689039 feat(anthropic): update chat models to include Claude 3.5 Haiku and new version for Sonnet 2024-12-13 17:24:15 +08:00
177746235a feat(providers): add gemini 2024-11-28 20:47:18 +05:30
4b89008f3a feat(app): add file uploads 2024-11-23 15:04:19 +05:30
c650d1c3d9 feat(ollama): add keep_alive param 2024-11-20 19:11:47 +05:30
012dfa5a74 feat(listLineOutputParser): handle unclosed tags 2024-10-30 10:29:21 +05:30
54e0bb317a feat(groq): update deprecated models 2024-10-18 11:05:57 +05:30
425a08432b feat(groq): add Llama 3.2 2024-09-26 21:37:05 +05:30
1589f16d5a feat(providers): add displayName property 2024-09-24 22:34:43 +05:30
f620252406 feat(linkDocument): add error handling 2024-08-29 16:51:12 +05:30
c4932c659a feat(app): lint 2024-07-31 20:17:57 +05:30
8e4f0c6a6d feat(web-search): add URL & PDF searching capabilities 2024-07-30 10:09:05 +05:30
6f50e25bf3 feat(output-parsers): add line output parser 2024-07-30 10:08:29 +05:30
0a29237732 feat(listLineOutputParser): handle invalid keys 2024-07-30 10:06:52 +05:30
8a76f92e23 feat(groq): add Llama 3.1 2024-07-23 20:49:17 +05:30
9195cbcce0 feat(openai): add GPT-4 Omni mini 2024-07-20 09:26:46 +05:30
f02393dbe9 feat(providers): add anthropic 2024-07-15 21:20:16 +05:30
fac41d3812 add gemma2-9b-it 2024-07-13 20:20:23 -07:00
8539ce82ad feat(providers): fix loading issues 2024-07-08 15:39:27 +05:30
3b4b8a8b02 feat(providers): add custom_openai 2024-07-08 15:24:45 +05:30
25b5dbd63e feat(providers): separate each provider 2024-07-06 14:19:33 +05:30
3bfaf9be28 feat(app): add suggestion generation 2024-05-18 13:10:39 +05:30
180e204c2d feat(providers): add GPT-4 omni 2024-05-14 19:33:54 +05:30
9a7af945b0 lint 2024-05-09 20:43:04 +05:30
5e940914a3 feat(output-parsers): add list line output parser 2024-05-09 20:39:38 +05:30
321e60b993 feat(embedding-providers): load separately, add bert & bge 2024-05-07 12:33:44 +05:30
68837e06ee feat(embedding-providers): add local models 2024-05-07 11:52:53 +05:30
ba7b92ffde feat(providers): add Content-Type header 2024-05-05 10:53:27 +05:30
9f45ecb98d feat(providers): separate embedding providers, add custom-openai provider 2024-05-04 10:51:06 +05:30
ed9ff3c20f feat(providers): use correct model name 2024-05-02 12:09:25 +05:30
edc40d8fe6 feat(providers): add Groq provider 2024-05-01 19:43:06 +05:30
6e304e7051 feat(video-search): add video search 2024-04-30 14:31:32 +05:30
aae85cd767 feat(logging): add logger 2024-04-30 12:18:18 +05:30
639129848a feat(providers): add gpt-4-turbo provider 2024-04-29 10:49:15 +02:00
fd65af53c3 feat(providers): add error handling 2024-04-21 20:52:47 +05:30
d37a1a8020 feat(agents): support local LLMs 2024-04-20 11:18:52 +05:30