of deepseekChat.ts and messageProcessor.ts in favor of
reasoningChatModel.ts and alternatingMessageValidator.ts
- Removed src/lib/deepseekChat.ts as it was duplicative
- All functionality is now handled by reasoningChatModel.ts
- No imports or references to deepseekChat.ts found in codebase
- Removed src/utils/messageProcessor.ts as it was duplicative
- All functionality is now handled by alternatingMessageValidator.ts
- No imports or references to messageProcessor.ts found in codebase
Improved the formatting of provider names in the dropdown menus:
1. Created a formatProviderName utility function in ui/lib/utils.ts that:
- Contains a comprehensive mapping of provider keys to their properly formatted display names
- Handles current providers like "openai" → "OpenAI" and "lm_studio" → "LM Studio"
- Includes future-proofing for many additional providers like NVIDIA, OpenRouter, Mistral AI, etc.
- Provides a fallback formatting mechanism for any unknown providers, replacing underscores with spaces and capitalizing each word (a sketch of such a helper appears at the end of this entry)
2. Updated both dropdown menus in the settings page to use this function:
- The Chat Model Provider dropdown now displays properly formatted names
- The Embedding Model Provider dropdown also uses the same formatting
This is a purely aesthetic change that improves the UI by displaying
provider names with proper capitalization and spacing that matches
their official branding. The internal values and functionality remain
unchanged since only the display labels were modified.
The app will now show properly formatted provider names like "OpenAI",
"LM Studio", and "DeepSeek" instead of "Openai", "Lm_studio", and "Deepseek".
Made the stream delay configurable for the reasoning models using the ReasoningChatModel custom class:
1. Added the STREAM_DELAY parameter to the sample.config.toml file:
[MODELS.DEEPSEEK]
API_KEY = ""
STREAM_DELAY = 20 # Milliseconds between token emissions for reasoning models (higher = slower, 0 = no delay)
2. Updated the Config interface in src/config.ts to include the new parameter:
DEEPSEEK: {
  API_KEY: string;
  STREAM_DELAY: number;
};
3. Added a getter function in src/config.ts to retrieve the configured value:
export const getDeepseekStreamDelay = () =>
loadConfig().MODELS.DEEPSEEK.STREAM_DELAY || 20; // Default to 20ms if not specified
4. Updated the deepseek.ts provider to use the configured stream delay:
const streamDelay = getDeepseekStreamDelay();
logger.debug(`Using stream delay of ${streamDelay}ms for ${model.id}`);
// Then using it in the model configuration
model: new ReasoningChatModel({
  // ...other params
  streamDelay,
}),
5. This implementation provides several benefits:
- User-Configurable: Users can now adjust the stream delay without modifying code
- Descriptive Naming: The parameter name "STREAM_DELAY" clearly indicates its purpose
- Documented: The comment in the config file explains what the parameter does
- Fallback Default: If not specified, it defaults to 20ms
- Logging: Added debug logging to show the configured value when loading models
To adjust the stream delay, users can simply modify the STREAM_DELAY value in
their config.toml file. Higher values will slow down token generation
(making it easier to read in real-time), while lower values will speed it up.
Setting it to 0 will disable the delay entirely.
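As an illustration of how the delay is intended to behave, a simplified sketch of pausing between token emissions (not the exact ReasoningChatModel code):

// Simplified sketch: pause streamDelay milliseconds after each emitted token
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function* withStreamDelay(
  tokens: AsyncIterable<string>,
  streamDelay: number,
): AsyncGenerator<string> {
  for await (const token of tokens) {
    yield token;
    if (streamDelay > 0) {
      await sleep(streamDelay); // STREAM_DELAY = 0 disables the delay
    }
  }
}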
Restructured the Discover page to prevent the entire page from
refreshing when selecting categories or updating settings
1. Component Separation
- Split the page into three main components (a simplified sketch appears at the end of this entry):
- DiscoverHeader: Contains the title, settings button, and category navigation
- DiscoverContent: Contains the grid of articles with its own loading state
- PreferencesModal: Manages the settings modal with temporary state
2. Optimized Rendering
- Used React.memo for all components to prevent unnecessary re-renders
- Each component only receives the props it needs
- The header remains stable while only the content area updates
3. Improved Loading States
3.1. Added separate loading states:
- Initial loading for the first page load
- Content-only loading when changing categories or preferences
- Loading spinners now only appear in the content area when changing categories
3.2. Better State Management
- Main state is managed in the parent component
- The modal uses temporary state that only updates the main state after saving
- Clear separation of concerns between components
These changes create a more polished user experience where the header
and sidebar remain stable while only the content area refreshes when
needed. The page now feels more responsive and app-like, rather than
having the entire page refresh on every interaction.
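A simplified sketch of the component split (component names follow the description above; props and markup are illustrative, not the actual source):

import React, { memo } from 'react';

type Article = { title: string; url: string };

// Header re-renders only when the active category (or its handler) changes
const DiscoverHeader = memo(function DiscoverHeader(props: {
  activeCategory: string;
  onCategoryChange: (category: string) => void;
}) {
  // Title, settings button, and category navigation live here
  return <nav>{props.activeCategory}</nav>;
});

// Content area updates independently, with its own loading state
const DiscoverContent = memo(function DiscoverContent(props: {
  articles: Article[];
  loading: boolean;
}) {
  if (props.loading) return <p>Loading...</p>;
  return (
    <ul>
      {props.articles.map((a) => (
        <li key={a.url}>{a.title}</li>
      ))}
    </ul>
  );
});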
Enhanced the Discover section with personalization features and category navigation:
1. Backend Enhancements
1.1. Database Schema Updates
- Added a user preferences table to store user category preferences
- Set default preferences to AI and Technology
1.2. Category-Based Search
- Created a comprehensive category system with specialized search queries for each category
- Implemented 12 categories: AI, Technology, Current News, Sports, Money, Gaming, Weather, Entertainment, Art & Culture, Science, Health, and Travel
- Each category searches relevant websites with appropriate keywords
- Updated the search sources for each category with more reputable websites
1.3. New API Endpoints
- Enhanced the main /discover endpoint to support category filtering and preference-based content
- Added /discover/preferences endpoints for getting and saving user preferences (see the sketch at the end of this entry)
2. Frontend Improvements
2.1 Category Navigation Bar
- Added a horizontal scrollable category bar at the top of the Discover page
- The active category is highlighted with the primary color, with a smooth scrolling animation via right/left buttons
- The "For You" category shows personalized content based on saved preferences
2.2 Personalization Feature
- Added a Settings button in the top-right corner
- Implemented a personalization modal that allows users to select their preferred categories
- Implemented a grid of checkboxes for 12 major languages, allowing users to select multiple preferred languages for their results
- Updated the backend to filter search results by the selected language(s)
- Preferences are saved to the backend and persist between sessions
2.3 UI Enhancements
- Improved layout with better spacing and transitions
- Added hover effects for better interactivity
- Ensured the design is responsive across different screen sizes
How It Works
- Users can click on category tabs to view news specific to that category
- The "For You" tab shows a personalized feed based on the user's saved preferences
- Users can customize their preferences by clicking the Settings icon and selecting categories and preferred language(s)
- When preferences are saved, the "For You" feed automatically updates to reflect those preferences
These improvements make the Discover section more engaging and personalized, allowing users to easily find content that interests them across a wide range of categories.
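A rough sketch of the /discover/preferences endpoints (Express-style routing assumed; loadUserPreferences and saveUserPreferences are hypothetical stand-ins for the actual database access):

import express from 'express';

// Hypothetical helpers standing in for the real user preferences table access
declare function loadUserPreferences(): Promise<{
  categories: string[];
  languages: string[];
} | null>;
declare function saveUserPreferences(prefs: {
  categories: string[];
  languages: string[];
}): Promise<void>;

const router = express.Router();
const DEFAULT_CATEGORIES = ['AI', 'Technology']; // default preferences

router.get('/preferences', async (_req, res) => {
  const saved = await loadUserPreferences();
  res.json({
    categories: saved?.categories ?? DEFAULT_CATEGORIES,
    languages: saved?.languages ?? [],
  });
});

// Assumes express.json() body parsing is enabled on the app
router.post('/preferences', async (req, res) => {
  const { categories, languages } = req.body;
  await saveUserPreferences({ categories, languages }); // persists between sessions
  res.json({ message: 'Preferences updated' });
});

export default router;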
1. Search Functionality:
- Added a search box with a search icon and a "Search your threads..." placeholder
- Real-time filtering of threads as you type (see the sketch after this list)
- Clear button (X) when text is entered
2. Thread Count Display:
- Added "You have X threads in Perplexica" below the search box
- Only shown in normal mode (hidden during selection)
3. Multiple Delete Functionality:
- "Select" button in the top right, below the search box
- Checkboxes that appear on hover and when in selection mode
- Selection mode header showing count and actions
- When in selection mode, shows "X selected thread(s)" on the left
- Action buttons (Select all, Cancel, Delete Selected) on the right
- Delete Selected button is disabled when no threads are selected
- Confirmation dialog using the new BatchDeleteChats component
4. Terminology Update:
- Changed all instances of "chats" to "threads" throughout the interface
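The real-time filtering mentioned above is conceptually simple; a sketch (type and function names are illustrative):

type Thread = { id: string; title: string };

// Case-insensitive filtering of threads as the user types in the search box
const filterThreads = (threads: Thread[], query: string): Thread[] => {
  const q = query.trim().toLowerCase();
  if (!q) return threads;
  return threads.filter((thread) => thread.title.toLowerCase().includes(q));
};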
Refactored the message validation and reasoning model classes, including a configurable delay feature:
1. Created AlternatingMessageValidator (renamed from MessageProcessor):
- Focused on handling alternating message patterns (a simplified sketch appears at the end of this entry)
- Made it model-agnostic with a configuration-driven approach
- Kept the core validation logic intact
2. Created ReasoningChatModel (renamed from DeepSeekChat):
- Made it generic for any model with reasoning/thinking capabilities
- Added a configurable streaming delay parameter (streamDelay)
- Implemented delay logic in the streaming process
3. Updated the DeepSeek provider:
- Now uses ReasoningChatModel for deepseek-reasoner with a 50ms delay
- Uses the standard ChatOpenAI class for deepseek-chat
- Added a clear distinction between models that need reasoning capabilities
4. Updated references in metaSearchAgent.ts:
- Changed the import from messageProcessor to alternatingMessageValidator
- Updated function calls to use the new validator
The configurable delay implementation allows you to control the speed of token generation, which can help with the readability issue you were seeing. The delay is set to 20ms by default for the deepseek-reasoner model, but you can adjust this value in the deepseek.ts provider file to find the optimal speed.
This refactoring maintains all the existing functionality while making the code more maintainable and future-proof. The separation of concerns between message validation and model implementation will make it easier to add support for other models with similar requirements in the future.
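A simplified sketch of the alternating-pattern handling (the real AlternatingMessageValidator is configuration-driven; merging consecutive same-role messages is shown here as one assumed strategy):

type Role = 'user' | 'assistant';
type Message = { role: Role; content: string };

// Ensure strict user/assistant alternation by merging consecutive
// messages that share a role (one possible validation strategy)
const enforceAlternatingRoles = (messages: Message[]): Message[] => {
  const result: Message[] = [];
  for (const msg of messages) {
    const last = result[result.length - 1];
    if (last && last.role === msg.role) {
      last.content += `\n${msg.content}`;
    } else {
      result.push({ ...msg });
    }
  }
  return result;
};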
# Description
Added two new Gemini models:
- gemini-2.0-flash-exp
- gemini-2.0-flash-thinking-exp-01-21
# Changes Made
- Updated src/lib/providers/gemini.ts to include the new models (sketched below)
- Maintained consistent configuration with existing models
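The change amounts to new entries in the provider's model list, roughly as follows (only the model IDs come from this PR; the surrounding structure is assumed):

// src/lib/providers/gemini.ts (sketch; surrounding structure assumed)
const newGeminiModels = [
  { key: 'gemini-2.0-flash-exp', displayName: 'Gemini 2.0 Flash Exp' },
  {
    key: 'gemini-2.0-flash-thinking-exp-01-21',
    displayName: 'Gemini 2.0 Flash Thinking Exp 01-21',
  },
];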
# Testing
- Tested locally using Docker
- Verified models appear in UI and are selectable
- Confirmed functionality with sample queries
# Additional Notes
These models expand the available options for users who want to use the latest Gemini capabilities.