Compare commits


5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| ItzCrazyKns | df33229934 | feat(custom-openai): use apiKey instead of openAIApiKey | 2025-07-19 16:14:46 +05:30 |
| ItzCrazyKns | 49fafaa096 | feat(metaSearchAgent): implement structured outputs | 2025-07-19 16:10:04 +05:30 |
| ItzCrazyKns | ca9b32a23b | feat(ollama): use @langchain/ollama library | 2025-07-19 16:09:46 +05:30 |
| ItzCrazyKns | 76e3ff4e02 | feat(providers): switch to apiKey key | 2025-07-19 16:09:21 +05:30 |
| ItzCrazyKns | eabf3ca7d3 | feat(modules): update langchain packages | 2025-07-19 16:08:45 +05:30 |
38 changed files with 1097 additions and 1320 deletions

View File

@@ -53,7 +53,7 @@ Want to know more about its architecture and how it works? You can read it [here
 ## Features
 
-- **Local LLMs**: You can utilize local LLMs such as Qwen, DeepSeek, Llama, and Mistral.
+- **Local LLMs**: You can make use local LLMs such as Llama3 and Mixtral using Ollama.
 - **Two Main Modes:**
   - **Copilot Mode:** (In development) Boosts search by generating different queries to find more relevant internet sources. Like normal search instead of just using the context by SearxNG, it visits the top matches and tries to find relevant sources to the user's query directly from the page.
   - **Normal Mode:** Processes your query and performs a web search.
@@ -87,7 +87,6 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
 4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:
 
    - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
-   - `CUSTOM_OPENAI`: Your OpenAI-API-compliant local server URL, model name, and API key. You should run your local server with host set to `0.0.0.0`, take note of which port number it is running on, and then use that port number to set `API_URL = http://host.docker.internal:PORT_NUMBER`. You must specify the model name, such as `MODEL_NAME = "unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_XL"`. Finally, set `API_KEY` to the appropriate value. If you have not defined an API key, just put anything you want in-between the quotation marks: `API_KEY = "whatever-you-want-but-not-blank"` **You only need to configure these settings if you want to use a local OpenAI-compliant server, such as Llama.cpp's [`llama-server`](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md)**.
   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to fill this if you wish to use Ollama's models instead of OpenAI's**.
   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**.
   - `ANTHROPIC`: Your Anthropic API key. **You only need to fill this if you wish to use Anthropic models**.
@@ -121,17 +120,7 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
 See the [installation documentation](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/installation) for more information like updating, etc.
 
-### Troubleshooting
+### Ollama Connection Errors
 
-#### Local OpenAI-API-Compliant Servers
-
-If Perplexica tells you that you haven't configured any chat model providers, ensure that:
-
-1. Your server is running on `0.0.0.0` (not `127.0.0.1`) and on the same port you put in the API URL.
-2. You have specified the correct model name loaded by your local LLM server.
-3. You have specified the correct API key, or if one is not defined, you have put *something* in the API key field and not left it empty.
-
-#### Ollama Connection Errors
-
 If you're encountering an Ollama connection error, it is likely due to the backend being unable to connect to Ollama's API. To fix this issue you can:
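The `CUSTOM_OPENAI` fields discussed in this section can be illustrated with a small `config.toml` fragment. This is a sketch only: the values are placeholders, and the exact table name (`[MODELS.CUSTOM_OPENAI]` here) should be checked against the repo's `sample.config.toml`.

```toml
# Hypothetical excerpt from config.toml for a local llama-server instance.
[MODELS.CUSTOM_OPENAI]
API_URL = "http://host.docker.internal:8080"        # port your local server listens on
API_KEY = "whatever-you-want-but-not-blank"          # must not be empty
MODEL_NAME = "unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_XL"
```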

View File

@@ -19,7 +19,6 @@
     "@langchain/community": "^0.3.49",
     "@langchain/core": "^0.3.66",
     "@langchain/google-genai": "^0.2.15",
-    "@langchain/groq": "^0.2.3",
     "@langchain/ollama": "^0.2.3",
     "@langchain/openai": "^0.6.2",
     "@langchain/textsplitters": "^0.1.0",

View File

@@ -1,7 +1,11 @@
+import prompts from '@/lib/prompts';
+import MetaSearchAgent from '@/lib/search/metaSearchAgent';
 import crypto from 'crypto';
 import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
 import { EventEmitter } from 'stream';
 import {
+  chatModelProviders,
+  embeddingModelProviders,
   getAvailableChatModelProviders,
   getAvailableEmbeddingModelProviders,
 } from '@/lib/providers';
@@ -134,8 +138,6 @@ const handleHistorySave = async (
     where: eq(chats.id, message.chatId),
   });
 
-  const fileData = files.map(getFileDetails);
-
   if (!chat) {
     await db
       .insert(chats)
@@ -144,15 +146,9 @@ const handleHistorySave = async (
title: message.content, title: message.content,
createdAt: new Date().toString(), createdAt: new Date().toString(),
focusMode: focusMode, focusMode: focusMode,
files: fileData,
})
.execute();
} else if (JSON.stringify(chat.files ?? []) != JSON.stringify(fileData)) {
db.update(chats)
.set({
files: files.map(getFileDetails), files: files.map(getFileDetails),
}) })
.where(eq(chats.id, message.chatId)); .execute();
} }
const messageExists = await db.query.messages.findFirst({ const messageExists = await db.query.messages.findFirst({
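The two variants of the history-save logic visible in this hunk differ in one behavior: one only inserts the chat row when it is missing, while the other additionally re-syncs `files` on an existing chat when the metadata changed. A standalone sketch of both, with an in-memory `Map` standing in for the Drizzle `db` (all names here are illustrative, not the repo's API):

```typescript
type Chat = { id: string; files: string[] };

// In-memory stand-in for the chats table (illustrative only).
const chats = new Map<string, Chat>();

// Insert-only variant: do nothing if the chat row already exists.
function saveChatIfMissing(id: string, files: string[]): void {
  if (!chats.has(id)) {
    chats.set(id, { id, files });
  }
}

// Insert-or-sync variant: also update files when they differ,
// mirroring the JSON.stringify comparison in the diff.
function saveChatAndSyncFiles(id: string, files: string[]): void {
  const chat = chats.get(id);
  if (!chat) {
    chats.set(id, { id, files });
  } else if (JSON.stringify(chat.files) !== JSON.stringify(files)) {
    chat.files = files;
  }
}
```

The practical difference: with the insert-only variant, attaching new files to an existing chat never reaches the database row.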

View File

@@ -11,7 +11,6 @@ import {
   getAimlApiKey,
   getLMStudioApiEndpoint,
   updateConfig,
-  getOllamaApiKey,
 } from '@/lib/config';
 import {
   getAvailableChatModelProviders,
@@ -54,7 +53,6 @@ export const GET = async (req: Request) => {
     config['openaiApiKey'] = getOpenaiApiKey();
     config['ollamaApiUrl'] = getOllamaApiEndpoint();
-    config['ollamaApiKey'] = getOllamaApiKey();
     config['lmStudioApiUrl'] = getLMStudioApiEndpoint();
     config['anthropicApiKey'] = getAnthropicApiKey();
     config['groqApiKey'] = getGroqApiKey();
@@ -95,7 +93,6 @@ export const POST = async (req: Request) => {
       },
       OLLAMA: {
         API_URL: config.ollamaApiUrl,
-        API_KEY: config.ollamaApiKey,
       },
       DEEPSEEK: {
         API_KEY: config.deepseekApiKey,

View File

@@ -1,72 +1,55 @@
import { searchSearxng } from '@/lib/searxng'; import { searchSearxng } from '@/lib/searxng';
const websitesForTopic = { const articleWebsites = [
tech: { 'yahoo.com',
query: ['technology news', 'latest tech', 'AI', 'science and innovation'], 'www.exchangewire.com',
links: ['techcrunch.com', 'wired.com', 'theverge.com'], 'businessinsider.com',
}, /* 'wired.com',
finance: { 'mashable.com',
query: ['finance news', 'economy', 'stock market', 'investing'], 'theverge.com',
links: ['bloomberg.com', 'cnbc.com', 'marketwatch.com'], 'gizmodo.com',
}, 'cnet.com',
art: { 'venturebeat.com', */
query: ['art news', 'culture', 'modern art', 'cultural events'], ];
links: ['artnews.com', 'hyperallergic.com', 'theartnewspaper.com'],
},
sports: {
query: ['sports news', 'latest sports', 'cricket football tennis'],
links: ['espn.com', 'bbc.com/sport', 'skysports.com'],
},
entertainment: {
query: ['entertainment news', 'movies', 'TV shows', 'celebrities'],
links: ['hollywoodreporter.com', 'variety.com', 'deadline.com'],
},
};
type Topic = keyof typeof websitesForTopic; const topics = ['AI', 'tech']; /* TODO: Add UI to customize this */
export const GET = async (req: Request) => { export const GET = async (req: Request) => {
try { try {
const params = new URL(req.url).searchParams; const params = new URL(req.url).searchParams;
const mode: 'normal' | 'preview' = const mode: 'normal' | 'preview' =
(params.get('mode') as 'normal' | 'preview') || 'normal'; (params.get('mode') as 'normal' | 'preview') || 'normal';
const topic: Topic = (params.get('topic') as Topic) || 'tech';
const selectedTopic = websitesForTopic[topic];
let data = []; let data = [];
if (mode === 'normal') { if (mode === 'normal') {
const seenUrls = new Set();
data = ( data = (
await Promise.all( await Promise.all([
selectedTopic.links.flatMap((link) => ...new Array(articleWebsites.length * topics.length)
selectedTopic.query.map(async (query) => { .fill(0)
.map(async (_, i) => {
return ( return (
await searchSearxng(`site:${link} ${query}`, { await searchSearxng(
engines: ['bing news'], `site:${articleWebsites[i % articleWebsites.length]} ${
pageno: 1, topics[i % topics.length]
language: 'en', }`,
}) {
engines: ['bing news'],
pageno: 1,
language: 'en',
},
)
).results; ).results;
}), }),
), ])
)
) )
.map((result) => result)
.flat() .flat()
.filter((item) => {
const url = item.url?.toLowerCase().trim();
if (seenUrls.has(url)) return false;
seenUrls.add(url);
return true;
})
.sort(() => Math.random() - 0.5); .sort(() => Math.random() - 0.5);
} else { } else {
data = ( data = (
await searchSearxng( await searchSearxng(
`site:${selectedTopic.links[Math.floor(Math.random() * selectedTopic.links.length)]} ${selectedTopic.query[Math.floor(Math.random() * selectedTopic.query.length)]}`, `site:${articleWebsites[Math.floor(Math.random() * articleWebsites.length)]} ${topics[Math.floor(Math.random() * topics.length)]}`,
{ {
engines: ['bing news'], engines: ['bing news'],
pageno: 1, pageno: 1,
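One side of this discover-route hunk enumerates site/topic queries by filling a flat array of length `sites.length * topics.length` and indexing both lists with `i % length`. That trick only visits every (site, topic) combination when the two lengths are coprime; a nested `flatMap` always does. A standalone sketch of both (names illustrative):

```typescript
// Modulo enumeration, as in the flat-array version of the route.
function moduloPairs(sites: string[], topics: string[]): [string, string][] {
  return new Array(sites.length * topics.length)
    .fill(0)
    .map((_, i) => [sites[i % sites.length], topics[i % topics.length]]);
}

// Nested enumeration: every combination exactly once, regardless of lengths.
function nestedPairs(sites: string[], topics: string[]): [string, string][] {
  return sites.flatMap((s) => topics.map((t): [string, string] => [s, t]));
}

// Count distinct pairs to compare the two strategies.
const distinct = (pairs: [string, string][]): number =>
  new Set(pairs.map((p) => p.join('|'))).size;
```

With 3 sites and 2 topics (coprime) the modulo version covers all 6 pairs, but with 2 sites and 2 topics it cycles through only 2 of the 4 combinations.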

View File

@@ -81,7 +81,8 @@ export const POST = async (req: Request) => {
     if (body.chatModel?.provider === 'custom_openai') {
       llm = new ChatOpenAI({
         modelName: body.chatModel?.name || getCustomOpenaiModelName(),
-        apiKey: body.chatModel?.customOpenAIKey || getCustomOpenaiApiKey(),
+        apiKey:
+          body.chatModel?.customOpenAIKey || getCustomOpenaiApiKey(),
         temperature: 0.7,
         configuration: {
           baseURL:
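Both sides of this hunk use the same `||` fallback for `apiKey`; only the line wrapping changes. One detail worth knowing about that pattern: `||` treats an empty string the same as `undefined`, so a blank key sent in the request body still falls back to the configured key, whereas `??` would keep the empty string. A minimal sketch (function name is illustrative):

```typescript
// `||` falls through on any falsy value, including the empty string —
// usually the right behavior for API keys.
function pickApiKey(
  requestKey: string | undefined,
  configuredKey: string,
): string {
  return requestKey || configuredKey;
}
```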

View File

@@ -1,10 +1,7 @@
 export const POST = async (req: Request) => {
   try {
-    const body: {
-      lat: number;
-      lng: number;
-      measureUnit: 'Imperial' | 'Metric';
-    } = await req.json();
+    const body: { lat: number; lng: number; temperatureUnit: 'C' | 'F' } =
+      await req.json();
 
     if (!body.lat || !body.lng) {
       return Response.json(
@@ -16,9 +13,7 @@ export const POST = async (req: Request) => {
     }
 
     const res = await fetch(
-      `https://api.open-meteo.com/v1/forecast?latitude=${body.lat}&longitude=${body.lng}&current=weather_code,temperature_2m,is_day,relative_humidity_2m,wind_speed_10m&timezone=auto${
-        body.measureUnit === 'Metric' ? '' : '&temperature_unit=fahrenheit'
-      }${body.measureUnit === 'Metric' ? '' : '&wind_speed_unit=mph'}`,
+      `https://api.open-meteo.com/v1/forecast?latitude=${body.lat}&longitude=${body.lng}&current=weather_code,temperature_2m,is_day,relative_humidity_2m,wind_speed_10m&timezone=auto${body.temperatureUnit === 'C' ? '' : '&temperature_unit=fahrenheit'}`,
     );
 
     const data = await res.json();
@@ -40,15 +35,13 @@ export const POST = async (req: Request) => {
       windSpeed: number;
       icon: string;
       temperatureUnit: 'C' | 'F';
-      windSpeedUnit: 'm/s' | 'mph';
     } = {
       temperature: data.current.temperature_2m,
       condition: '',
       humidity: data.current.relative_humidity_2m,
       windSpeed: data.current.wind_speed_10m,
       icon: '',
-      temperatureUnit: body.measureUnit === 'Metric' ? 'C' : 'F',
-      windSpeedUnit: body.measureUnit === 'Metric' ? 'm/s' : 'mph',
+      temperatureUnit: body.temperatureUnit,
     };
 
     const code = data.current.weather_code;
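The Open-Meteo request in this route relies on the API's defaults (Celsius, m/s) and appends unit parameters only when non-default units are wanted. The single-parameter variant from this hunk can be isolated as a small URL builder (function name is illustrative; the query parameters match those in the diff):

```typescript
// Celsius is Open-Meteo's default, so the temperature_unit parameter is
// appended only for Fahrenheit; this variant no longer sends wind_speed_unit.
function forecastUrl(
  lat: number,
  lng: number,
  temperatureUnit: 'C' | 'F',
): string {
  return (
    `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lng}` +
    `&current=weather_code,temperature_2m,is_day,relative_humidity_2m,wind_speed_10m` +
    `&timezone=auto${temperatureUnit === 'C' ? '' : '&temperature_unit=fahrenheit'}`
  );
}
```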

View File

@@ -1,17 +1,9 @@
-'use client';
-
 import ChatWindow from '@/components/ChatWindow';
-import { useParams } from 'next/navigation';
 import React from 'react';
-import { ChatProvider } from '@/lib/hooks/useChat';
 
-const Page = () => {
-  const { chatId }: { chatId: string } = useParams();
-
-  return (
-    <ChatProvider id={chatId}>
-      <ChatWindow />
-    </ChatProvider>
-  );
+const Page = ({ params }: { params: Promise<{ chatId: string }> }) => {
+  const { chatId } = React.use(params);
+
+  return <ChatWindow id={chatId} />;
 };
 
 export default Page;

View File

@@ -4,7 +4,6 @@ import { Search } from 'lucide-react';
import { useEffect, useState } from 'react'; import { useEffect, useState } from 'react';
import Link from 'next/link'; import Link from 'next/link';
import { toast } from 'sonner'; import { toast } from 'sonner';
import { cn } from '@/lib/utils';
interface Discover { interface Discover {
title: string; title: string;
@@ -13,66 +12,60 @@ interface Discover {
thumbnail: string; thumbnail: string;
} }
const topics: { key: string; display: string }[] = [
{
display: 'Tech & Science',
key: 'tech',
},
{
display: 'Finance',
key: 'finance',
},
{
display: 'Art & Culture',
key: 'art',
},
{
display: 'Sports',
key: 'sports',
},
{
display: 'Entertainment',
key: 'entertainment',
},
];
const Page = () => { const Page = () => {
const [discover, setDiscover] = useState<Discover[] | null>(null); const [discover, setDiscover] = useState<Discover[] | null>(null);
const [loading, setLoading] = useState(true); const [loading, setLoading] = useState(true);
const [activeTopic, setActiveTopic] = useState<string>(topics[0].key);
const fetchArticles = async (topic: string) => {
setLoading(true);
try {
const res = await fetch(`/api/discover?topic=${topic}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
},
});
const data = await res.json();
if (!res.ok) {
throw new Error(data.message);
}
data.blogs = data.blogs.filter((blog: Discover) => blog.thumbnail);
setDiscover(data.blogs);
} catch (err: any) {
console.error('Error fetching data:', err.message);
toast.error('Error fetching data');
} finally {
setLoading(false);
}
};
useEffect(() => { useEffect(() => {
fetchArticles(activeTopic); const fetchData = async () => {
}, [activeTopic]); try {
const res = await fetch(`/api/discover`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
},
});
return ( const data = await res.json();
if (!res.ok) {
throw new Error(data.message);
}
data.blogs = data.blogs.filter((blog: Discover) => blog.thumbnail);
setDiscover(data.blogs);
} catch (err: any) {
console.error('Error fetching data:', err.message);
toast.error('Error fetching data');
} finally {
setLoading(false);
}
};
fetchData();
}, []);
return loading ? (
<div className="flex flex-row items-center justify-center min-h-screen">
<svg
aria-hidden="true"
className="w-8 h-8 text-light-200 fill-light-secondary dark:text-[#202020] animate-spin dark:fill-[#ffffff3b]"
viewBox="0 0 100 101"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M100 50.5908C100.003 78.2051 78.1951 100.003 50.5908 100C22.9765 99.9972 0.997224 78.018 1 50.4037C1.00281 22.7993 22.8108 0.997224 50.4251 1C78.0395 1.00281 100.018 22.8108 100 50.4251ZM9.08164 50.594C9.06312 73.3997 27.7909 92.1272 50.5966 92.1457C73.4023 92.1642 92.1298 73.4365 92.1483 50.6308C92.1669 27.8251 73.4392 9.0973 50.6335 9.07878C27.8278 9.06026 9.10003 27.787 9.08164 50.594Z"
fill="currentColor"
/>
<path
d="M93.9676 39.0409C96.393 38.4037 97.8624 35.9116 96.9801 33.5533C95.1945 28.8227 92.871 24.3692 90.0681 20.348C85.6237 14.1775 79.4473 9.36872 72.0454 6.45794C64.6435 3.54717 56.3134 2.65431 48.3133 3.89319C45.869 4.27179 44.3768 6.77534 45.014 9.20079C45.6512 11.6262 48.1343 13.0956 50.5786 12.717C56.5073 11.8281 62.5542 12.5399 68.0406 14.7911C73.527 17.0422 78.2187 20.7487 81.5841 25.4923C83.7976 28.5886 85.4467 32.059 86.4416 35.7474C87.1273 38.1189 89.5423 39.6781 91.9676 39.0409Z"
fill="currentFill"
/>
</svg>
</div>
) : (
<> <>
<div> <div>
<div className="flex flex-col pt-4"> <div className="flex flex-col pt-4">
@@ -83,73 +76,35 @@ const Page = () => {
<hr className="border-t border-[#2B2C2C] my-4 w-full" /> <hr className="border-t border-[#2B2C2C] my-4 w-full" />
</div> </div>
<div className="flex flex-row items-center space-x-2 overflow-x-auto"> <div className="grid lg:grid-cols-3 sm:grid-cols-2 grid-cols-1 gap-4 pb-28 lg:pb-8 w-full justify-items-center lg:justify-items-start">
{topics.map((t, i) => ( {discover &&
<div discover?.map((item, i) => (
key={i} <Link
className={cn( href={`/?q=Summary: ${item.url}`}
'border-[0.1px] rounded-full text-sm px-3 py-1 text-nowrap transition duration-200 cursor-pointer', key={i}
activeTopic === t.key className="max-w-sm rounded-lg overflow-hidden bg-light-secondary dark:bg-dark-secondary hover:-translate-y-[1px] transition duration-200"
? 'text-cyan-300 bg-cyan-300/30 border-cyan-300/60' target="_blank"
: 'border-black/30 dark:border-white/30 text-black/70 dark:text-white/70 hover:text-black dark:hover:text-white hover:border-black/40 dark:hover:border-white/40 hover:bg-black/5 dark:hover:bg-white/5', >
)} <img
onClick={() => setActiveTopic(t.key)} className="object-cover w-full aspect-video"
> src={
<span>{t.display}</span> new URL(item.thumbnail).origin +
</div> new URL(item.thumbnail).pathname +
))} `?id=${new URL(item.thumbnail).searchParams.get('id')}`
</div> }
alt={item.title}
{loading ? ( />
<div className="flex flex-row items-center justify-center min-h-screen"> <div className="px-6 py-4">
<svg <div className="font-bold text-lg mb-2">
aria-hidden="true" {item.title.slice(0, 100)}...
className="w-8 h-8 text-light-200 fill-light-secondary dark:text-[#202020] animate-spin dark:fill-[#ffffff3b]"
viewBox="0 0 100 101"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M100 50.5908C100.003 78.2051 78.1951 100.003 50.5908 100C22.9765 99.9972 0.997224 78.018 1 50.4037C1.00281 22.7993 22.8108 0.997224 50.4251 1C78.0395 1.00281 100.018 22.8108 100 50.4251ZM9.08164 50.594C9.06312 73.3997 27.7909 92.1272 50.5966 92.1457C73.4023 92.1642 92.1298 73.4365 92.1483 50.6308C92.1669 27.8251 73.4392 9.0973 50.6335 9.07878C27.8278 9.06026 9.10003 27.787 9.08164 50.594Z"
fill="currentColor"
/>
<path
d="M93.9676 39.0409C96.393 38.4037 97.8624 35.9116 96.9801 33.5533C95.1945 28.8227 92.871 24.3692 90.0681 20.348C85.6237 14.1775 79.4473 9.36872 72.0454 6.45794C64.6435 3.54717 56.3134 2.65431 48.3133 3.89319C45.869 4.27179 44.3768 6.77534 45.014 9.20079C45.6512 11.6262 48.1343 13.0956 50.5786 12.717C56.5073 11.8281 62.5542 12.5399 68.0406 14.7911C73.527 17.0422 78.2187 20.7487 81.5841 25.4923C83.7976 28.5886 85.4467 32.059 86.4416 35.7474C87.1273 38.1189 89.5423 39.6781 91.9676 39.0409Z"
fill="currentFill"
/>
</svg>
</div>
) : (
<div className="grid lg:grid-cols-3 sm:grid-cols-2 grid-cols-1 gap-4 pb-28 pt-5 lg:pb-8 w-full justify-items-center lg:justify-items-start">
{discover &&
discover?.map((item, i) => (
<Link
href={`/?q=Summary: ${item.url}`}
key={i}
className="max-w-sm rounded-lg overflow-hidden bg-light-secondary dark:bg-dark-secondary hover:-translate-y-[1px] transition duration-200"
target="_blank"
>
<img
className="object-cover w-full aspect-video"
src={
new URL(item.thumbnail).origin +
new URL(item.thumbnail).pathname +
`?id=${new URL(item.thumbnail).searchParams.get('id')}`
}
alt={item.title}
/>
<div className="px-6 py-4">
<div className="font-bold text-lg mb-2">
{item.title.slice(0, 100)}...
</div>
<p className="text-black-70 dark:text-white/70 text-sm">
{item.content.slice(0, 100)}...
</p>
</div> </div>
</Link> <p className="text-black-70 dark:text-white/70 text-sm">
))} {item.content.slice(0, 100)}...
</div> </p>
)} </div>
</Link>
))}
</div>
</div> </div>
</> </>
); );
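The Discover grid rebuilds each thumbnail URL from its parts, keeping only the origin, path, and the `id` query parameter and dropping everything else. That transformation can be isolated with the standard `URL` API (helper name is illustrative):

```typescript
// Keep origin + path + the `id` query parameter, discarding any other
// query parameters, exactly as the grid's <img src> expression does.
function normalizeThumbnail(raw: string): string {
  const u = new URL(raw);
  return u.origin + u.pathname + `?id=${u.searchParams.get('id')}`;
}
```

Note that when the source URL has no `id` parameter, `searchParams.get('id')` returns `null`, which stringifies to a literal `?id=null` — a caveat the component shares.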

View File

@@ -1,5 +1,4 @@
 import ChatWindow from '@/components/ChatWindow';
-import { ChatProvider } from '@/lib/hooks/useChat';
 import { Metadata } from 'next';
 import { Suspense } from 'react';
@@ -12,9 +11,7 @@ const Home = () => {
   return (
     <div>
       <Suspense>
-        <ChatProvider>
-          <ChatWindow />
-        </ChatProvider>
+        <ChatWindow />
       </Suspense>
     </div>
   );

View File

@@ -21,7 +21,6 @@ interface SettingsType {
anthropicApiKey: string; anthropicApiKey: string;
geminiApiKey: string; geminiApiKey: string;
ollamaApiUrl: string; ollamaApiUrl: string;
ollamaApiKey: string;
lmStudioApiUrl: string; lmStudioApiUrl: string;
deepseekApiKey: string; deepseekApiKey: string;
aimlApiKey: string; aimlApiKey: string;
@@ -149,9 +148,7 @@ const Page = () => {
const [automaticImageSearch, setAutomaticImageSearch] = useState(false); const [automaticImageSearch, setAutomaticImageSearch] = useState(false);
const [automaticVideoSearch, setAutomaticVideoSearch] = useState(false); const [automaticVideoSearch, setAutomaticVideoSearch] = useState(false);
const [systemInstructions, setSystemInstructions] = useState<string>(''); const [systemInstructions, setSystemInstructions] = useState<string>('');
const [measureUnit, setMeasureUnit] = useState<'Imperial' | 'Metric'>( const [temperatureUnit, setTemperatureUnit] = useState<'C' | 'F'>('C');
'Metric',
);
const [savingStates, setSavingStates] = useState<Record<string, boolean>>({}); const [savingStates, setSavingStates] = useState<Record<string, boolean>>({});
useEffect(() => { useEffect(() => {
@@ -214,9 +211,7 @@ const Page = () => {
setSystemInstructions(localStorage.getItem('systemInstructions')!); setSystemInstructions(localStorage.getItem('systemInstructions')!);
setMeasureUnit( setTemperatureUnit(localStorage.getItem('temperatureUnit')! as 'C' | 'F');
localStorage.getItem('measureUnit')! as 'Imperial' | 'Metric',
);
setIsLoading(false); setIsLoading(false);
}; };
@@ -376,8 +371,8 @@ const Page = () => {
localStorage.setItem('embeddingModel', value); localStorage.setItem('embeddingModel', value);
} else if (key === 'systemInstructions') { } else if (key === 'systemInstructions') {
localStorage.setItem('systemInstructions', value); localStorage.setItem('systemInstructions', value);
} else if (key === 'measureUnit') { } else if (key === 'temperatureUnit') {
localStorage.setItem('measureUnit', value.toString()); localStorage.setItem('temperatureUnit', value.toString());
} }
} catch (err) { } catch (err) {
console.error('Failed to save:', err); console.error('Failed to save:', err);
@@ -435,22 +430,22 @@ const Page = () => {
</div> </div>
<div className="flex flex-col space-y-1"> <div className="flex flex-col space-y-1">
<p className="text-black/70 dark:text-white/70 text-sm"> <p className="text-black/70 dark:text-white/70 text-sm">
Measurement Units Temperature Unit
</p> </p>
<Select <Select
value={measureUnit ?? undefined} value={temperatureUnit ?? undefined}
onChange={(e) => { onChange={(e) => {
setMeasureUnit(e.target.value as 'Imperial' | 'Metric'); setTemperatureUnit(e.target.value as 'C' | 'F');
saveConfig('measureUnit', e.target.value); saveConfig('temperatureUnit', e.target.value);
}} }}
options={[ options={[
{ {
label: 'Metric', label: 'Celsius',
value: 'Metric', value: 'C',
}, },
{ {
label: 'Imperial', label: 'Fahrenheit',
value: 'Imperial', value: 'F',
}, },
]} ]}
/> />
@@ -819,25 +814,6 @@ const Page = () => {
/> />
</div> </div>
<div className="flex flex-col space-y-1">
<p className="text-black/70 dark:text-white/70 text-sm">
Ollama API Key (Can be left blank)
</p>
<Input
type="text"
placeholder="Ollama API Key"
value={config.ollamaApiKey}
isSaving={savingStates['ollamaApiKey']}
onChange={(e) => {
setConfig((prev) => ({
...prev!,
ollamaApiKey: e.target.value,
}));
}}
onSave={(value) => saveConfig('ollamaApiKey', value)}
/>
</div>
<div className="flex flex-col space-y-1"> <div className="flex flex-col space-y-1">
<p className="text-black/70 dark:text-white/70 text-sm"> <p className="text-black/70 dark:text-white/70 text-sm">
GROQ API Key GROQ API Key

View File

@@ -5,11 +5,28 @@ import MessageInput from './MessageInput';
import { File, Message } from './ChatWindow'; import { File, Message } from './ChatWindow';
import MessageBox from './MessageBox'; import MessageBox from './MessageBox';
import MessageBoxLoading from './MessageBoxLoading'; import MessageBoxLoading from './MessageBoxLoading';
import { useChat } from '@/lib/hooks/useChat';
const Chat = () => {
const { messages, loading, messageAppeared } = useChat();
const Chat = ({
loading,
messages,
sendMessage,
messageAppeared,
rewrite,
fileIds,
setFileIds,
files,
setFiles,
}: {
messages: Message[];
sendMessage: (message: string) => void;
loading: boolean;
messageAppeared: boolean;
rewrite: (messageId: string) => void;
fileIds: string[];
setFileIds: (fileIds: string[]) => void;
files: File[];
setFiles: (files: File[]) => void;
}) => {
const [dividerWidth, setDividerWidth] = useState(0); const [dividerWidth, setDividerWidth] = useState(0);
const dividerRef = useRef<HTMLDivElement | null>(null); const dividerRef = useRef<HTMLDivElement | null>(null);
const messageEnd = useRef<HTMLDivElement | null>(null); const messageEnd = useRef<HTMLDivElement | null>(null);
@@ -55,8 +72,12 @@ const Chat = () => {
key={i} key={i}
message={msg} message={msg}
messageIndex={i} messageIndex={i}
history={messages}
loading={loading}
dividerRef={isLast ? dividerRef : undefined} dividerRef={isLast ? dividerRef : undefined}
isLast={isLast} isLast={isLast}
rewrite={rewrite}
sendMessage={sendMessage}
/> />
{!isLast && msg.role === 'assistant' && ( {!isLast && msg.role === 'assistant' && (
<div className="h-px w-full bg-light-secondary dark:bg-dark-secondary" /> <div className="h-px w-full bg-light-secondary dark:bg-dark-secondary" />
@@ -71,7 +92,14 @@ const Chat = () => {
className="bottom-24 lg:bottom-10 fixed z-40" className="bottom-24 lg:bottom-10 fixed z-40"
style={{ width: dividerWidth }} style={{ width: dividerWidth }}
> >
<MessageInput /> <MessageInput
loading={loading}
sendMessage={sendMessage}
fileIds={fileIds}
setFileIds={setFileIds}
files={files}
setFiles={setFiles}
/>
</div> </div>
)} )}
</div> </div>

View File

@@ -1,13 +1,17 @@
'use client'; 'use client';
import { useEffect, useRef, useState } from 'react';
import { Document } from '@langchain/core/documents'; import { Document } from '@langchain/core/documents';
import Navbar from './Navbar'; import Navbar from './Navbar';
import Chat from './Chat'; import Chat from './Chat';
import EmptyChat from './EmptyChat'; import EmptyChat from './EmptyChat';
import crypto from 'crypto';
import { toast } from 'sonner';
import { useSearchParams } from 'next/navigation';
import { getSuggestions } from '@/lib/actions';
import { Settings } from 'lucide-react'; import { Settings } from 'lucide-react';
import Link from 'next/link'; import Link from 'next/link';
import NextError from 'next/error'; import NextError from 'next/error';
import { useChat } from '@/lib/hooks/useChat';
export type Message = { export type Message = {
messageId: string; messageId: string;
@@ -25,8 +29,539 @@ export interface File {
fileId: string; fileId: string;
} }
const ChatWindow = () => { interface ChatModelProvider {
const { hasError, isReady, notFound, messages } = useChat(); name: string;
provider: string;
}
interface EmbeddingModelProvider {
name: string;
provider: string;
}
const checkConfig = async (
setChatModelProvider: (provider: ChatModelProvider) => void,
setEmbeddingModelProvider: (provider: EmbeddingModelProvider) => void,
setIsConfigReady: (ready: boolean) => void,
setHasError: (hasError: boolean) => void,
) => {
try {
let chatModel = localStorage.getItem('chatModel');
let chatModelProvider = localStorage.getItem('chatModelProvider');
let embeddingModel = localStorage.getItem('embeddingModel');
let embeddingModelProvider = localStorage.getItem('embeddingModelProvider');
const autoImageSearch = localStorage.getItem('autoImageSearch');
const autoVideoSearch = localStorage.getItem('autoVideoSearch');
if (!autoImageSearch) {
localStorage.setItem('autoImageSearch', 'true');
}
if (!autoVideoSearch) {
localStorage.setItem('autoVideoSearch', 'false');
}
const providers = await fetch(`/api/models`, {
headers: {
'Content-Type': 'application/json',
},
}).then(async (res) => {
if (!res.ok)
throw new Error(
`Failed to fetch models: ${res.status} ${res.statusText}`,
);
return res.json();
});
if (
!chatModel ||
!chatModelProvider ||
!embeddingModel ||
!embeddingModelProvider
) {
if (!chatModel || !chatModelProvider) {
const chatModelProviders = providers.chatModelProviders;
const chatModelProvidersKeys = Object.keys(chatModelProviders);
if (!chatModelProviders || chatModelProvidersKeys.length === 0) {
return toast.error('No chat models available');
} else {
chatModelProvider =
chatModelProvidersKeys.find(
(provider) =>
Object.keys(chatModelProviders[provider]).length > 0,
) || chatModelProvidersKeys[0];
}
if (
chatModelProvider === 'custom_openai' &&
Object.keys(chatModelProviders[chatModelProvider]).length === 0
) {
toast.error(
"Looks like you haven't configured any chat model providers. Please configure them from the settings page or the config file.",
);
return setHasError(true);
}
chatModel = Object.keys(chatModelProviders[chatModelProvider])[0];
}
if (!embeddingModel || !embeddingModelProvider) {
const embeddingModelProviders = providers.embeddingModelProviders;
if (
!embeddingModelProviders ||
Object.keys(embeddingModelProviders).length === 0
)
return toast.error('No embedding models available');
embeddingModelProvider = Object.keys(embeddingModelProviders)[0];
embeddingModel = Object.keys(
embeddingModelProviders[embeddingModelProvider],
)[0];
}
localStorage.setItem('chatModel', chatModel!);
localStorage.setItem('chatModelProvider', chatModelProvider);
localStorage.setItem('embeddingModel', embeddingModel!);
localStorage.setItem('embeddingModelProvider', embeddingModelProvider);
} else {
const chatModelProviders = providers.chatModelProviders;
const embeddingModelProviders = providers.embeddingModelProviders;
if (
Object.keys(chatModelProviders).length > 0 &&
(!chatModelProviders[chatModelProvider] ||
Object.keys(chatModelProviders[chatModelProvider]).length === 0)
) {
const chatModelProvidersKeys = Object.keys(chatModelProviders);
chatModelProvider =
chatModelProvidersKeys.find(
(key) => Object.keys(chatModelProviders[key]).length > 0,
) || chatModelProvidersKeys[0];
localStorage.setItem('chatModelProvider', chatModelProvider);
}
if (
chatModelProvider &&
!chatModelProviders[chatModelProvider][chatModel]
) {
if (
chatModelProvider === 'custom_openai' &&
Object.keys(chatModelProviders[chatModelProvider]).length === 0
) {
toast.error(
"Looks like you haven't configured any chat model providers. Please configure them from the settings page or the config file.",
);
return setHasError(true);
}
chatModel = Object.keys(
chatModelProviders[
Object.keys(chatModelProviders[chatModelProvider]).length > 0
? chatModelProvider
: Object.keys(chatModelProviders)[0]
],
)[0];
localStorage.setItem('chatModel', chatModel);
}
if (
Object.keys(embeddingModelProviders).length > 0 &&
!embeddingModelProviders[embeddingModelProvider]
) {
embeddingModelProvider = Object.keys(embeddingModelProviders)[0];
localStorage.setItem('embeddingModelProvider', embeddingModelProvider);
}
if (
embeddingModelProvider &&
!embeddingModelProviders[embeddingModelProvider][embeddingModel]
) {
embeddingModel = Object.keys(
embeddingModelProviders[embeddingModelProvider],
)[0];
localStorage.setItem('embeddingModel', embeddingModel);
}
}
setChatModelProvider({
name: chatModel!,
provider: chatModelProvider,
});
setEmbeddingModelProvider({
name: embeddingModel!,
provider: embeddingModelProvider,
});
setIsConfigReady(true);
} catch (err) {
console.error('An error occurred while checking the configuration:', err);
setIsConfigReady(false);
setHasError(true);
}
};
const loadMessages = async (
chatId: string,
setMessages: (messages: Message[]) => void,
setIsMessagesLoaded: (loaded: boolean) => void,
setChatHistory: (history: [string, string][]) => void,
setFocusMode: (mode: string) => void,
setNotFound: (notFound: boolean) => void,
setFiles: (files: File[]) => void,
setFileIds: (fileIds: string[]) => void,
) => {
const res = await fetch(`/api/chats/${chatId}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
},
});
if (res.status === 404) {
setNotFound(true);
setIsMessagesLoaded(true);
return;
}
const data = await res.json();
const messages = data.messages.map((msg: any) => {
return {
...msg,
...JSON.parse(msg.metadata),
};
}) as Message[];
setMessages(messages);
const history = messages.map((msg) => {
return [msg.role, msg.content];
}) as [string, string][];
console.debug(new Date(), 'app:messages_loaded');
if (messages.length > 0) {
document.title = messages[0].content;
}
const files = data.chat.files.map((file: any) => {
return {
fileName: file.name,
fileExtension: file.name.split('.').pop(),
fileId: file.fileId,
};
});
setFiles(files);
setFileIds(files.map((file: File) => file.fileId));
setChatHistory(history);
setFocusMode(data.chat.focusMode);
setIsMessagesLoaded(true);
};
const ChatWindow = ({ id }: { id?: string }) => {
const searchParams = useSearchParams();
const initialMessage = searchParams.get('q');
const [chatId, setChatId] = useState<string | undefined>(id);
const [newChatCreated, setNewChatCreated] = useState(false);
const [chatModelProvider, setChatModelProvider] = useState<ChatModelProvider>(
{
name: '',
provider: '',
},
);
const [embeddingModelProvider, setEmbeddingModelProvider] =
useState<EmbeddingModelProvider>({
name: '',
provider: '',
});
const [isConfigReady, setIsConfigReady] = useState(false);
const [hasError, setHasError] = useState(false);
const [isReady, setIsReady] = useState(false);
useEffect(() => {
checkConfig(
setChatModelProvider,
setEmbeddingModelProvider,
setIsConfigReady,
setHasError,
);
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
const [loading, setLoading] = useState(false);
const [messageAppeared, setMessageAppeared] = useState(false);
const [chatHistory, setChatHistory] = useState<[string, string][]>([]);
const [messages, setMessages] = useState<Message[]>([]);
const [files, setFiles] = useState<File[]>([]);
const [fileIds, setFileIds] = useState<string[]>([]);
const [focusMode, setFocusMode] = useState('webSearch');
const [optimizationMode, setOptimizationMode] = useState('speed');
const [isMessagesLoaded, setIsMessagesLoaded] = useState(false);
const [notFound, setNotFound] = useState(false);
useEffect(() => {
if (
chatId &&
!newChatCreated &&
!isMessagesLoaded &&
messages.length === 0
) {
loadMessages(
chatId,
setMessages,
setIsMessagesLoaded,
setChatHistory,
setFocusMode,
setNotFound,
setFiles,
setFileIds,
);
} else if (!chatId) {
setNewChatCreated(true);
setIsMessagesLoaded(true);
setChatId(crypto.randomBytes(20).toString('hex'));
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
const messagesRef = useRef<Message[]>([]);
useEffect(() => {
messagesRef.current = messages;
}, [messages]);
useEffect(() => {
if (isMessagesLoaded && isConfigReady) {
setIsReady(true);
console.debug(new Date(), 'app:ready');
} else {
setIsReady(false);
}
}, [isMessagesLoaded, isConfigReady]);
const sendMessage = async (message: string, messageId?: string) => {
if (loading) return;
if (!isConfigReady) {
toast.error('Cannot send message before the configuration is ready');
return;
}
setLoading(true);
setMessageAppeared(false);
let sources: Document[] | undefined = undefined;
let recievedMessage = '';
let added = false;
messageId = messageId ?? crypto.randomBytes(7).toString('hex');
setMessages((prevMessages) => [
...prevMessages,
{
content: message,
messageId: messageId,
chatId: chatId!,
role: 'user',
createdAt: new Date(),
},
]);
const messageHandler = async (data: any) => {
if (data.type === 'error') {
toast.error(data.data);
setLoading(false);
return;
}
if (data.type === 'sources') {
sources = data.data;
if (!added) {
setMessages((prevMessages) => [
...prevMessages,
{
content: '',
messageId: data.messageId,
chatId: chatId!,
role: 'assistant',
sources: sources,
createdAt: new Date(),
},
]);
added = true;
}
setMessageAppeared(true);
}
if (data.type === 'message') {
if (!added) {
setMessages((prevMessages) => [
...prevMessages,
{
content: data.data,
messageId: data.messageId,
chatId: chatId!,
role: 'assistant',
sources: sources,
createdAt: new Date(),
},
]);
added = true;
}
setMessages((prev) =>
prev.map((message) => {
if (message.messageId === data.messageId) {
return { ...message, content: message.content + data.data };
}
return message;
}),
);
recievedMessage += data.data;
setMessageAppeared(true);
}
if (data.type === 'messageEnd') {
setChatHistory((prevHistory) => [
...prevHistory,
['human', message],
['assistant', recievedMessage],
]);
setLoading(false);
const lastMsg = messagesRef.current[messagesRef.current.length - 1];
const autoImageSearch = localStorage.getItem('autoImageSearch');
const autoVideoSearch = localStorage.getItem('autoVideoSearch');
if (autoImageSearch === 'true') {
document
.getElementById(`search-images-${lastMsg.messageId}`)
?.click();
}
if (autoVideoSearch === 'true') {
document
.getElementById(`search-videos-${lastMsg.messageId}`)
?.click();
}
if (
lastMsg.role === 'assistant' &&
lastMsg.sources &&
lastMsg.sources.length > 0 &&
!lastMsg.suggestions
) {
const suggestions = await getSuggestions(messagesRef.current);
setMessages((prev) =>
prev.map((msg) => {
if (msg.messageId === lastMsg.messageId) {
return { ...msg, suggestions: suggestions };
}
return msg;
}),
);
}
}
};
const res = await fetch('/api/chat', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
content: message,
message: {
messageId: messageId,
chatId: chatId!,
content: message,
},
chatId: chatId!,
files: fileIds,
focusMode: focusMode,
optimizationMode: optimizationMode,
history: chatHistory,
chatModel: {
name: chatModelProvider.name,
provider: chatModelProvider.provider,
},
embeddingModel: {
name: embeddingModelProvider.name,
provider: embeddingModelProvider.provider,
},
systemInstructions: localStorage.getItem('systemInstructions'),
}),
});
if (!res.body) throw new Error('No response body');
const reader = res.body.getReader();
const decoder = new TextDecoder('utf-8');
let partialChunk = '';
while (true) {
const { value, done } = await reader.read();
if (done) break;
partialChunk += decoder.decode(value, { stream: true });
// Split on newlines and keep the trailing (possibly incomplete) fragment
// for the next chunk. Parsing each complete line individually avoids
// re-handling messages that were already processed when a later line in
// the same chunk is still incomplete.
const lines = partialChunk.split('\n');
partialChunk = lines.pop() ?? '';
for (const line of lines) {
if (!line.trim()) continue;
try {
messageHandler(JSON.parse(line));
} catch (error) {
console.warn('Skipping malformed stream message:', error);
}
}
}
};
const rewrite = (messageId: string) => {
const index = messages.findIndex((msg) => msg.messageId === messageId);
if (index === -1) return;
const message = messages[index - 1];
setMessages((prev) => {
return [...prev.slice(0, messages.length > 2 ? index - 1 : 0)];
});
setChatHistory((prev) => {
return [...prev.slice(0, messages.length > 2 ? index - 1 : 0)];
});
sendMessage(message.content, message.messageId);
};
useEffect(() => {
if (isReady && initialMessage && isConfigReady) {
sendMessage(initialMessage);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [isConfigReady, isReady, initialMessage]);
if (hasError) {
return (
<div className="relative">
@@ -51,11 +586,31 @@ const ChatWindow = () => {
<div>
{messages.length > 0 ? (
<>
- <Navbar />
- <Chat />
+ <Navbar chatId={chatId!} messages={messages} />
+ <Chat
+ loading={loading}
+ messages={messages}
+ sendMessage={sendMessage}
+ messageAppeared={messageAppeared}
+ rewrite={rewrite}
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ />
</>
) : (
- <EmptyChat />
+ <EmptyChat
+ sendMessage={sendMessage}
+ focusMode={focusMode}
+ setFocusMode={setFocusMode}
+ optimizationMode={optimizationMode}
+ setOptimizationMode={setOptimizationMode}
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ />
)}
</div>
)


@@ -5,7 +5,27 @@ import Link from 'next/link';
import WeatherWidget from './WeatherWidget';
import NewsArticleWidget from './NewsArticleWidget';
- const EmptyChat = () => {
+ const EmptyChat = ({
+ sendMessage,
+ focusMode,
+ setFocusMode,
+ optimizationMode,
+ setOptimizationMode,
+ fileIds,
+ setFileIds,
+ files,
+ setFiles,
+ }: {
+ sendMessage: (message: string) => void;
+ focusMode: string;
+ setFocusMode: (mode: string) => void;
+ optimizationMode: string;
+ setOptimizationMode: (mode: string) => void;
+ fileIds: string[];
+ setFileIds: (fileIds: string[]) => void;
+ files: File[];
+ setFiles: (files: File[]) => void;
+ }) => {
return (
<div className="relative">
<div className="absolute w-full flex flex-row items-center justify-end mr-5 mt-5">
@@ -18,7 +38,17 @@ const EmptyChat = () => {
<h2 className="text-black/70 dark:text-white/70 text-3xl font-medium -mt-8">
Research begins here.
</h2>
- <EmptyChatMessageInput />
+ <EmptyChatMessageInput
+ sendMessage={sendMessage}
+ focusMode={focusMode}
+ setFocusMode={setFocusMode}
+ optimizationMode={optimizationMode}
+ setOptimizationMode={setOptimizationMode}
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ />
</div>
<div className="flex flex-col w-full gap-4 mt-2 sm:flex-row sm:justify-center">
<div className="flex-1 w-full">


@@ -1,15 +1,34 @@
import { ArrowRight } from 'lucide-react';
import { useEffect, useRef, useState } from 'react';
import TextareaAutosize from 'react-textarea-autosize';
- import CopilotToggle from './MessageInputActions/Copilot';
import Focus from './MessageInputActions/Focus';
import Optimization from './MessageInputActions/Optimization';
import Attach from './MessageInputActions/Attach';
- import { useChat } from '@/lib/hooks/useChat';
+ import { File } from './ChatWindow';
- const EmptyChatMessageInput = () => {
- const { sendMessage } = useChat();
- /* const [copilotEnabled, setCopilotEnabled] = useState(false); */
+ const EmptyChatMessageInput = ({
+ sendMessage,
+ focusMode,
+ setFocusMode,
+ optimizationMode,
+ setOptimizationMode,
+ fileIds,
+ setFileIds,
+ files,
+ setFiles,
+ }: {
+ sendMessage: (message: string) => void;
+ focusMode: string;
+ setFocusMode: (mode: string) => void;
+ optimizationMode: string;
+ setOptimizationMode: (mode: string) => void;
+ fileIds: string[];
+ setFileIds: (fileIds: string[]) => void;
+ files: File[];
+ setFiles: (files: File[]) => void;
+ }) => {
+ const [copilotEnabled, setCopilotEnabled] = useState(false);
const [message, setMessage] = useState('');
const inputRef = useRef<HTMLTextAreaElement | null>(null);
@@ -65,11 +84,20 @@ const EmptyChatMessageInput = () => {
/>
<div className="flex flex-row items-center justify-between mt-4">
<div className="flex flex-row items-center space-x-2 lg:space-x-4">
- <Focus />
- <Attach showText />
+ <Focus focusMode={focusMode} setFocusMode={setFocusMode} />
+ <Attach
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ showText
+ />
</div>
<div className="flex flex-row items-center space-x-1 sm:space-x-4">
- <Optimization />
+ <Optimization
+ optimizationMode={optimizationMode}
+ setOptimizationMode={setOptimizationMode}
+ />
<button
disabled={message.trim().length === 0}
className="bg-[#24A0ED] text-white disabled:text-black/50 dark:disabled:text-white/50 disabled:bg-[#e0e0dc] dark:disabled:bg-[#ececec21] hover:bg-opacity-85 transition duration-100 rounded-full p-2"


@@ -20,36 +20,32 @@ import SearchImages from './SearchImages';
import SearchVideos from './SearchVideos';
import { useSpeech } from 'react-text-to-speech';
import ThinkBox from './ThinkBox';
- import { useChat } from '@/lib/hooks/useChat';
- const ThinkTagProcessor = ({
- children,
- thinkingEnded,
- }: {
- children: React.ReactNode;
- thinkingEnded: boolean;
- }) => {
- return (
- <ThinkBox content={children as string} thinkingEnded={thinkingEnded} />
- );
+ const ThinkTagProcessor = ({ children }: { children: React.ReactNode }) => {
+ return <ThinkBox content={children as string} />;
};
const MessageBox = ({
message,
messageIndex,
+ history,
+ loading,
dividerRef,
isLast,
+ rewrite,
+ sendMessage,
}: {
message: Message;
messageIndex: number;
+ history: Message[];
+ loading: boolean;
dividerRef?: MutableRefObject<HTMLDivElement | null>;
isLast: boolean;
+ rewrite: (messageId: string) => void;
+ sendMessage: (message: string) => void;
}) => {
- const { loading, messages: history, sendMessage, rewrite } = useChat();
const [parsedMessage, setParsedMessage] = useState(message.content);
const [speechMessage, setSpeechMessage] = useState(message.content);
- const [thinkingEnded, setThinkingEnded] = useState(false);
useEffect(() => {
const citationRegex = /\[([^\]]+)\]/g;
@@ -65,10 +61,6 @@ const MessageBox = ({
}
}
- if (message.role === 'assistant' && message.content.includes('</think>')) {
- setThinkingEnded(true);
- }
if (
message.role === 'assistant' &&
message?.sources &&
@@ -96,7 +88,7 @@ const MessageBox = ({
if (url) {
return `<a href="${url}" target="_blank" className="bg-light-secondary dark:bg-dark-secondary px-1 rounded ml-1 no-underline text-xs text-black/70 dark:text-white/70 relative">${numStr}</a>`;
} else {
- return ``;
+ return `[${numStr}]`;
}
})
.join('');
@@ -107,14 +99,6 @@ const MessageBox = ({
);
setSpeechMessage(message.content.replace(regex, ''));
return;
- } else if (
- message.role === 'assistant' &&
- message?.sources &&
- message.sources.length === 0
- ) {
- setParsedMessage(processedMessage.replace(regex, ''));
- setSpeechMessage(message.content.replace(regex, ''));
- return;
}
setSpeechMessage(message.content.replace(regex, ''));
@@ -127,9 +111,6 @@ const MessageBox = ({
overrides: {
think: {
component: ThinkTagProcessor,
- props: {
- thinkingEnded: thinkingEnded,
- },
},
},
};


@@ -6,11 +6,22 @@ import Attach from './MessageInputActions/Attach';
import CopilotToggle from './MessageInputActions/Copilot';
import { File } from './ChatWindow';
import AttachSmall from './MessageInputActions/AttachSmall';
- import { useChat } from '@/lib/hooks/useChat';
- const MessageInput = () => {
- const { loading, sendMessage } = useChat();
+ const MessageInput = ({
+ sendMessage,
+ loading,
+ fileIds,
+ setFileIds,
+ files,
+ setFiles,
+ }: {
+ sendMessage: (message: string) => void;
+ loading: boolean;
+ fileIds: string[];
+ setFileIds: (fileIds: string[]) => void;
+ files: File[];
+ setFiles: (files: File[]) => void;
+ }) => {
const [copilotEnabled, setCopilotEnabled] = useState(false);
const [message, setMessage] = useState('');
const [textareaRows, setTextareaRows] = useState(1);
@@ -68,7 +79,14 @@ const MessageInput = () => {
mode === 'multi' ? 'flex-col rounded-lg' : 'flex-row rounded-full',
)}
>
- {mode === 'single' && <AttachSmall />}
+ {mode === 'single' && (
+ <AttachSmall
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ />
+ )}
<TextareaAutosize
ref={inputRef}
value={message}
@@ -95,7 +113,12 @@ const MessageInput = () => {
)}
{mode === 'multi' && (
<div className="flex flex-row items-center justify-between w-full pt-2">
- <AttachSmall />
+ <AttachSmall
+ fileIds={fileIds}
+ setFileIds={setFileIds}
+ files={files}
+ setFiles={setFiles}
+ />
<div className="flex flex-row items-center space-x-4">
<CopilotToggle
copilotEnabled={copilotEnabled}


@@ -7,11 +7,21 @@ import {
} from '@headlessui/react';
import { CopyPlus, File, LoaderCircle, Plus, Trash } from 'lucide-react';
import { Fragment, useRef, useState } from 'react';
- import { useChat } from '@/lib/hooks/useChat';
+ import { File as FileType } from '../ChatWindow';
- const Attach = ({ showText }: { showText?: boolean }) => {
- const { files, setFiles, setFileIds, fileIds } = useChat();
+ const Attach = ({
+ fileIds,
+ setFileIds,
+ showText,
+ files,
+ setFiles,
+ }: {
+ fileIds: string[];
+ setFileIds: (fileIds: string[]) => void;
+ showText?: boolean;
+ files: FileType[];
+ setFiles: (files: FileType[]) => void;
+ }) => {
const [loading, setLoading] = useState(false);
const fileInputRef = useRef<any>();
@@ -132,8 +142,8 @@ const Attach = ({ showText }: { showText?: boolean }) => {
key={i}
className="flex flex-row items-center justify-start w-full space-x-3 p-3"
>
- <div className="bg-light-100 dark:bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
- <File size={16} className="text-black/70 dark:text-white/70" />
+ <div className="bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
+ <File size={16} className="text-white/70" />
</div>
<p className="text-black/70 dark:text-white/70 text-sm">
{file.fileName.length > 25


@@ -8,11 +8,18 @@ import {
import { CopyPlus, File, LoaderCircle, Plus, Trash } from 'lucide-react';
import { Fragment, useRef, useState } from 'react';
import { File as FileType } from '../ChatWindow';
- import { useChat } from '@/lib/hooks/useChat';
- const AttachSmall = () => {
- const { files, setFiles, setFileIds, fileIds } = useChat();
+ const AttachSmall = ({
+ fileIds,
+ setFileIds,
+ files,
+ setFiles,
+ }: {
+ fileIds: string[];
+ setFileIds: (fileIds: string[]) => void;
+ files: FileType[];
+ setFiles: (files: FileType[]) => void;
+ }) => {
const [loading, setLoading] = useState(false);
const fileInputRef = useRef<any>();
@@ -107,8 +114,8 @@ const AttachSmall = () => {
key={i}
className="flex flex-row items-center justify-start w-full space-x-3 p-3"
>
- <div className="bg-light-100 dark:bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
- <File size={16} className="text-black/70 dark:text-white/70" />
+ <div className="bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
+ <File size={16} className="text-white/70" />
</div>
<p className="text-black/70 dark:text-white/70 text-sm">
{file.fileName.length > 25


@@ -15,7 +15,6 @@ import {
} from '@headlessui/react';
import { SiReddit, SiYoutube } from '@icons-pack/react-simple-icons';
import { Fragment } from 'react';
- import { useChat } from '@/lib/hooks/useChat';
const focusModes = [
{
@@ -56,9 +55,13 @@ const focusModes = [
},
];
- const Focus = () => {
- const { focusMode, setFocusMode } = useChat();
+ const Focus = ({
+ focusMode,
+ setFocusMode,
+ }: {
+ focusMode: string;
+ setFocusMode: (mode: string) => void;
+ }) => {
return (
<Popover className="relative w-full max-w-[15rem] md:max-w-md lg:max-w-lg mt-[6.5px]">
<PopoverButton


@@ -7,7 +7,6 @@ import {
Transition,
} from '@headlessui/react';
import { Fragment } from 'react';
- import { useChat } from '@/lib/hooks/useChat';
const OptimizationModes = [
{
@@ -35,9 +34,13 @@ const OptimizationModes = [
},
];
- const Optimization = () => {
- const { optimizationMode, setOptimizationMode } = useChat();
+ const Optimization = ({
+ optimizationMode,
+ setOptimizationMode,
+ }: {
+ optimizationMode: string;
+ setOptimizationMode: (mode: string) => void;
+ }) => {
return (
<Popover className="relative w-full max-w-[15rem] md:max-w-md lg:max-w-lg">
<PopoverButton


@@ -10,7 +10,6 @@ import {
Transition,
} from '@headlessui/react';
import jsPDF from 'jspdf';
- import { useChat } from '@/lib/hooks/useChat';
const downloadFile = (filename: string, content: string, type: string) => {
const blob = new Blob([content], { type });
@@ -119,12 +118,16 @@ const exportAsPDF = (messages: Message[], title: string) => {
doc.save(`${title || 'chat'}.pdf`);
};
- const Navbar = () => {
+ const Navbar = ({
+ chatId,
+ messages,
+ }: {
+ messages: Message[];
+ chatId: string;
+ }) => {
const [title, setTitle] = useState<string>('');
const [timeAgo, setTimeAgo] = useState<string>('');
- const { messages, chatId } = useChat();
useEffect(() => {
if (messages.length > 0) {
const newTitle =
@@ -203,7 +206,7 @@ const Navbar = () => {
</PopoverPanel>
</Transition>
</Popover>
- <DeleteChat redirect chatId={chatId!} chats={[]} setChats={() => {}} />
+ <DeleteChat redirect chatId={chatId} chats={[]} setChats={() => {}} />
</div>
</div>
);


@@ -1,23 +1,15 @@
'use client';
- import { useEffect, useState } from 'react';
- import { cn } from '@/lib/utils';
+ import { useState } from 'react';
import { ChevronDown, ChevronUp, BrainCircuit } from 'lucide-react';
interface ThinkBoxProps {
content: string;
- thinkingEnded: boolean;
}
- const ThinkBox = ({ content, thinkingEnded }: ThinkBoxProps) => {
- const [isExpanded, setIsExpanded] = useState(true);
- useEffect(() => {
- if (thinkingEnded) {
- setIsExpanded(false);
- } else {
- setIsExpanded(true);
- }
- }, [thinkingEnded]);
+ const ThinkBox = ({ content }: ThinkBoxProps) => {
+ const [isExpanded, setIsExpanded] = useState(false);
return (
<div className="my-4 bg-light-secondary/50 dark:bg-dark-secondary/50 rounded-xl border border-light-200 dark:border-dark-200 overflow-hidden">


@@ -10,7 +10,6 @@ const WeatherWidget = () => {
windSpeed: 0,
icon: '',
temperatureUnit: 'C',
- windSpeedUnit: 'm/s',
});
const [loading, setLoading] = useState(true);
@@ -76,7 +75,7 @@ const WeatherWidget = () => {
body: JSON.stringify({
lat: location.latitude,
lng: location.longitude,
- measureUnit: localStorage.getItem('measureUnit') ?? 'Metric',
+ temperatureUnit: localStorage.getItem('temperatureUnit') ?? 'C',
}),
});
@@ -96,7 +95,6 @@ const WeatherWidget = () => {
windSpeed: data.windSpeed,
icon: data.icon,
temperatureUnit: data.temperatureUnit,
- windSpeedUnit: data.windSpeedUnit,
});
setLoading(false);
});
@@ -141,7 +139,7 @@ const WeatherWidget = () => {
</span>
<span className="flex items-center text-xs text-black/60 dark:text-white/60">
<Wind className="w-3 h-3 mr-1" />
- {data.windSpeed} {data.windSpeedUnit}
+ {data.windSpeed} km/h
</span>
</div>
<span className="text-xs text-black/60 dark:text-white/60 mt-1">


@@ -3,18 +3,32 @@ import {
RunnableMap, RunnableMap,
RunnableLambda, RunnableLambda,
} from '@langchain/core/runnables'; } from '@langchain/core/runnables';
import { ChatPromptTemplate } from '@langchain/core/prompts'; import { PromptTemplate } from '@langchain/core/prompts';
import formatChatHistoryAsString from '../utils/formatHistory'; import formatChatHistoryAsString from '../utils/formatHistory';
import { BaseMessage } from '@langchain/core/messages'; import { BaseMessage } from '@langchain/core/messages';
import { StringOutputParser } from '@langchain/core/output_parsers'; import { StringOutputParser } from '@langchain/core/output_parsers';
import { searchSearxng } from '../searxng'; import { searchSearxng } from '../searxng';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models'; import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import LineOutputParser from '../outputParsers/lineOutputParser';
const imageSearchChainPrompt = ` const imageSearchChainPrompt = `
You will be given a conversation below and a follow up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search the web for images. You will be given a conversation below and a follow up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search the web for images.
You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation. You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
Output only the rephrased query wrapped in an XML <query> element. Do not include any explanation or additional text.
Example:
1. Follow up question: What is a cat?
Rephrased: A cat
2. Follow up question: What is a car? How does it works?
Rephrased: Car working
3. Follow up question: How does an AC work?
Rephrased: AC working
Conversation:
{chat_history}
Follow up question: {query}
Rephrased question:
`; `;
type ImageSearchChainInput = { type ImageSearchChainInput = {
@@ -40,39 +54,12 @@ const createImageSearchChain = (llm: BaseChatModel) => {
           return input.query;
         },
       }),
-      ChatPromptTemplate.fromMessages([
-        ['system', imageSearchChainPrompt],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nWhat is a cat?\n</follow_up>',
-        ],
-        ['assistant', '<query>A cat</query>'],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nWhat is a car? How does it work?\n</follow_up>',
-        ],
-        ['assistant', '<query>Car working</query>'],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nHow does an AC work?\n</follow_up>',
-        ],
-        ['assistant', '<query>AC working</query>'],
-        [
-          'user',
-          '<conversation>{chat_history}</conversation>\n<follow_up>\n{query}\n</follow_up>',
-        ],
-      ]),
+      PromptTemplate.fromTemplate(imageSearchChainPrompt),
       llm,
       strParser,
       RunnableLambda.from(async (input: string) => {
-        const queryParser = new LineOutputParser({
-          key: 'query',
-        });
-        return await queryParser.parse(input);
-      }),
-      RunnableLambda.from(async (input: string) => {
+        input = input.replace(/<think>.*?<\/think>/g, '');
+
         const res = await searchSearxng(input, {
           engines: ['bing images', 'google images'],
         });
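The replacement lambda strips `<think>…</think>` reasoning blocks before handing the text to SearxNG. A standalone sketch of that cleanup (the helper name is hypothetical; the real code inlines the regex, and its `.*?` does not cross newlines, so this sketch uses `[\s\S]*?` to cover multi-line blocks):

```typescript
// Remove <think>...</think> reasoning blocks emitted by some models
// before the remaining text is used as a search query.
const stripThinkTags = (output: string): string =>
  output.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
```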


@@ -3,19 +3,33 @@ import {
   RunnableMap,
   RunnableLambda,
 } from '@langchain/core/runnables';
-import { ChatPromptTemplate } from '@langchain/core/prompts';
+import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
 import { BaseMessage } from '@langchain/core/messages';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { searchSearxng } from '../searxng';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import LineOutputParser from '../outputParsers/lineOutputParser';

-const videoSearchChainPrompt = `
+const VideoSearchChainPrompt = `
 You will be given a conversation below and a follow up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search Youtube for videos.
 You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
-Output only the rephrased query wrapped in an XML <query> element. Do not include any explanation or additional text.
-`;
+Example:
+1. Follow up question: How does a car work?
+Rephrased: How does a car work?
+2. Follow up question: What is the theory of relativity?
+Rephrased: What is theory of relativity
+3. Follow up question: How does an AC work?
+Rephrased: How does an AC work
+
+Conversation:
+{chat_history}
+
+Follow up question: {query}
+Rephrased question:
+`;

 type VideoSearchChainInput = {
   chat_history: BaseMessage[];
@@ -41,37 +55,12 @@ const createVideoSearchChain = (llm: BaseChatModel) => {
           return input.query;
         },
       }),
-      ChatPromptTemplate.fromMessages([
-        ['system', videoSearchChainPrompt],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nHow does a car work?\n</follow_up>',
-        ],
-        ['assistant', '<query>How does a car work?</query>'],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nWhat is the theory of relativity?\n</follow_up>',
-        ],
-        ['assistant', '<query>Theory of relativity</query>'],
-        [
-          'user',
-          '<conversation>\n</conversation>\n<follow_up>\nHow does an AC work?\n</follow_up>',
-        ],
-        ['assistant', '<query>AC working</query>'],
-        [
-          'user',
-          '<conversation>{chat_history}</conversation>\n<follow_up>\n{query}\n</follow_up>',
-        ],
-      ]),
+      PromptTemplate.fromTemplate(VideoSearchChainPrompt),
       llm,
       strParser,
       RunnableLambda.from(async (input: string) => {
-        const queryParser = new LineOutputParser({
-          key: 'query',
-        });
-        return await queryParser.parse(input);
-      }),
-      RunnableLambda.from(async (input: string) => {
+        input = input.replace(/<think>.*?<\/think>/g, '');
+
         const res = await searchSearxng(input, {
           engines: ['youtube'],
         });
@@ -103,8 +92,8 @@ const handleVideoSearch = (
   input: VideoSearchChainInput,
   llm: BaseChatModel,
 ) => {
-  const videoSearchChain = createVideoSearchChain(llm);
-  return videoSearchChain.invoke(input);
+  const VideoSearchChain = createVideoSearchChain(llm);
+  return VideoSearchChain.invoke(input);
 };

 export default handleVideoSearch;


@@ -31,7 +31,6 @@ interface Config {
   };
   OLLAMA: {
     API_URL: string;
-    API_KEY: string;
   };
   DEEPSEEK: {
     API_KEY: string;
@@ -87,8 +86,6 @@ export const getSearxngApiEndpoint = () =>
 export const getOllamaApiEndpoint = () => loadConfig().MODELS.OLLAMA.API_URL;

-export const getOllamaApiKey = () => loadConfig().MODELS.OLLAMA.API_KEY;
-
 export const getDeepseekApiKey = () => loadConfig().MODELS.DEEPSEEK.API_KEY;
 export const getAimlApiKey = () => loadConfig().MODELS.AIMLAPI.API_KEY;


@@ -1,643 +0,0 @@
'use client';
import { Message } from '@/components/ChatWindow';
import { createContext, useContext, useEffect, useRef, useState } from 'react';
import crypto from 'crypto';
import { useSearchParams } from 'next/navigation';
import { toast } from 'sonner';
import { Document } from '@langchain/core/documents';
import { getSuggestions } from '../actions';
type ChatContext = {
messages: Message[];
chatHistory: [string, string][];
files: File[];
fileIds: string[];
focusMode: string;
chatId: string | undefined;
optimizationMode: string;
isMessagesLoaded: boolean;
loading: boolean;
notFound: boolean;
messageAppeared: boolean;
isReady: boolean;
hasError: boolean;
setOptimizationMode: (mode: string) => void;
setFocusMode: (mode: string) => void;
setFiles: (files: File[]) => void;
setFileIds: (fileIds: string[]) => void;
sendMessage: (
message: string,
messageId?: string,
rewrite?: boolean,
) => Promise<void>;
rewrite: (messageId: string) => void;
};
export interface File {
fileName: string;
fileExtension: string;
fileId: string;
}
interface ChatModelProvider {
name: string;
provider: string;
}
interface EmbeddingModelProvider {
name: string;
provider: string;
}
const checkConfig = async (
setChatModelProvider: (provider: ChatModelProvider) => void,
setEmbeddingModelProvider: (provider: EmbeddingModelProvider) => void,
setIsConfigReady: (ready: boolean) => void,
setHasError: (hasError: boolean) => void,
) => {
try {
let chatModel = localStorage.getItem('chatModel');
let chatModelProvider = localStorage.getItem('chatModelProvider');
let embeddingModel = localStorage.getItem('embeddingModel');
let embeddingModelProvider = localStorage.getItem('embeddingModelProvider');
const autoImageSearch = localStorage.getItem('autoImageSearch');
const autoVideoSearch = localStorage.getItem('autoVideoSearch');
if (!autoImageSearch) {
localStorage.setItem('autoImageSearch', 'true');
}
if (!autoVideoSearch) {
localStorage.setItem('autoVideoSearch', 'false');
}
const providers = await fetch(`/api/models`, {
headers: {
'Content-Type': 'application/json',
},
}).then(async (res) => {
if (!res.ok)
throw new Error(
`Failed to fetch models: ${res.status} ${res.statusText}`,
);
return res.json();
});
if (
!chatModel ||
!chatModelProvider ||
!embeddingModel ||
!embeddingModelProvider
) {
if (!chatModel || !chatModelProvider) {
const chatModelProviders = providers.chatModelProviders;
const chatModelProvidersKeys = Object.keys(chatModelProviders);
if (!chatModelProviders || chatModelProvidersKeys.length === 0) {
return toast.error('No chat models available');
} else {
chatModelProvider =
chatModelProvidersKeys.find(
(provider) =>
Object.keys(chatModelProviders[provider]).length > 0,
) || chatModelProvidersKeys[0];
}
if (
chatModelProvider === 'custom_openai' &&
Object.keys(chatModelProviders[chatModelProvider]).length === 0
) {
toast.error(
"Looks like you haven't configured any chat model providers. Please configure them from the settings page or the config file.",
);
return setHasError(true);
}
chatModel = Object.keys(chatModelProviders[chatModelProvider])[0];
}
if (!embeddingModel || !embeddingModelProvider) {
const embeddingModelProviders = providers.embeddingModelProviders;
if (
!embeddingModelProviders ||
Object.keys(embeddingModelProviders).length === 0
)
return toast.error('No embedding models available');
embeddingModelProvider = Object.keys(embeddingModelProviders)[0];
embeddingModel = Object.keys(
embeddingModelProviders[embeddingModelProvider],
)[0];
}
localStorage.setItem('chatModel', chatModel!);
localStorage.setItem('chatModelProvider', chatModelProvider);
localStorage.setItem('embeddingModel', embeddingModel!);
localStorage.setItem('embeddingModelProvider', embeddingModelProvider);
} else {
const chatModelProviders = providers.chatModelProviders;
const embeddingModelProviders = providers.embeddingModelProviders;
if (
Object.keys(chatModelProviders).length > 0 &&
(!chatModelProviders[chatModelProvider] ||
Object.keys(chatModelProviders[chatModelProvider]).length === 0)
) {
const chatModelProvidersKeys = Object.keys(chatModelProviders);
chatModelProvider =
chatModelProvidersKeys.find(
(key) => Object.keys(chatModelProviders[key]).length > 0,
) || chatModelProvidersKeys[0];
localStorage.setItem('chatModelProvider', chatModelProvider);
}
if (
chatModelProvider &&
!chatModelProviders[chatModelProvider][chatModel]
) {
if (
chatModelProvider === 'custom_openai' &&
Object.keys(chatModelProviders[chatModelProvider]).length === 0
) {
toast.error(
"Looks like you haven't configured any chat model providers. Please configure them from the settings page or the config file.",
);
return setHasError(true);
}
chatModel = Object.keys(
chatModelProviders[
Object.keys(chatModelProviders[chatModelProvider]).length > 0
? chatModelProvider
: Object.keys(chatModelProviders)[0]
],
)[0];
localStorage.setItem('chatModel', chatModel);
}
if (
Object.keys(embeddingModelProviders).length > 0 &&
!embeddingModelProviders[embeddingModelProvider]
) {
embeddingModelProvider = Object.keys(embeddingModelProviders)[0];
localStorage.setItem('embeddingModelProvider', embeddingModelProvider);
}
if (
embeddingModelProvider &&
!embeddingModelProviders[embeddingModelProvider][embeddingModel]
) {
embeddingModel = Object.keys(
embeddingModelProviders[embeddingModelProvider],
)[0];
localStorage.setItem('embeddingModel', embeddingModel);
}
}
setChatModelProvider({
name: chatModel!,
provider: chatModelProvider,
});
setEmbeddingModelProvider({
name: embeddingModel!,
provider: embeddingModelProvider,
});
setIsConfigReady(true);
} catch (err) {
console.error('An error occurred while checking the configuration:', err);
setIsConfigReady(false);
setHasError(true);
}
};
const loadMessages = async (
chatId: string,
setMessages: (messages: Message[]) => void,
setIsMessagesLoaded: (loaded: boolean) => void,
setChatHistory: (history: [string, string][]) => void,
setFocusMode: (mode: string) => void,
setNotFound: (notFound: boolean) => void,
setFiles: (files: File[]) => void,
setFileIds: (fileIds: string[]) => void,
) => {
const res = await fetch(`/api/chats/${chatId}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
},
});
if (res.status === 404) {
setNotFound(true);
setIsMessagesLoaded(true);
return;
}
const data = await res.json();
const messages = data.messages.map((msg: any) => {
return {
...msg,
...JSON.parse(msg.metadata),
};
}) as Message[];
setMessages(messages);
const history = messages.map((msg) => {
return [msg.role, msg.content];
}) as [string, string][];
console.debug(new Date(), 'app:messages_loaded');
document.title = messages[0].content;
const files = data.chat.files.map((file: any) => {
return {
fileName: file.name,
fileExtension: file.name.split('.').pop(),
fileId: file.fileId,
};
});
setFiles(files);
setFileIds(files.map((file: File) => file.fileId));
setChatHistory(history);
setFocusMode(data.chat.focusMode);
setIsMessagesLoaded(true);
};
export const chatContext = createContext<ChatContext>({
chatHistory: [],
chatId: '',
fileIds: [],
files: [],
focusMode: '',
hasError: false,
isMessagesLoaded: false,
isReady: false,
loading: false,
messageAppeared: false,
messages: [],
notFound: false,
optimizationMode: '',
rewrite: () => {},
sendMessage: async () => {},
setFileIds: () => {},
setFiles: () => {},
setFocusMode: () => {},
setOptimizationMode: () => {},
});
export const ChatProvider = ({
children,
id,
}: {
children: React.ReactNode;
id?: string;
}) => {
const searchParams = useSearchParams();
const initialMessage = searchParams.get('q');
const [chatId, setChatId] = useState<string | undefined>(id);
const [newChatCreated, setNewChatCreated] = useState(false);
const [loading, setLoading] = useState(false);
const [messageAppeared, setMessageAppeared] = useState(false);
const [chatHistory, setChatHistory] = useState<[string, string][]>([]);
const [messages, setMessages] = useState<Message[]>([]);
const [files, setFiles] = useState<File[]>([]);
const [fileIds, setFileIds] = useState<string[]>([]);
const [focusMode, setFocusMode] = useState('webSearch');
const [optimizationMode, setOptimizationMode] = useState('speed');
const [isMessagesLoaded, setIsMessagesLoaded] = useState(false);
const [notFound, setNotFound] = useState(false);
const [chatModelProvider, setChatModelProvider] = useState<ChatModelProvider>(
{
name: '',
provider: '',
},
);
const [embeddingModelProvider, setEmbeddingModelProvider] =
useState<EmbeddingModelProvider>({
name: '',
provider: '',
});
const [isConfigReady, setIsConfigReady] = useState(false);
const [hasError, setHasError] = useState(false);
const [isReady, setIsReady] = useState(false);
const messagesRef = useRef<Message[]>([]);
useEffect(() => {
checkConfig(
setChatModelProvider,
setEmbeddingModelProvider,
setIsConfigReady,
setHasError,
);
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
useEffect(() => {
if (
chatId &&
!newChatCreated &&
!isMessagesLoaded &&
messages.length === 0
) {
loadMessages(
chatId,
setMessages,
setIsMessagesLoaded,
setChatHistory,
setFocusMode,
setNotFound,
setFiles,
setFileIds,
);
} else if (!chatId) {
setNewChatCreated(true);
setIsMessagesLoaded(true);
setChatId(crypto.randomBytes(20).toString('hex'));
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
useEffect(() => {
messagesRef.current = messages;
}, [messages]);
useEffect(() => {
if (isMessagesLoaded && isConfigReady) {
setIsReady(true);
console.debug(new Date(), 'app:ready');
} else {
setIsReady(false);
}
}, [isMessagesLoaded, isConfigReady]);
const rewrite = (messageId: string) => {
const index = messages.findIndex((msg) => msg.messageId === messageId);
if (index === -1) return;
const message = messages[index - 1];
setMessages((prev) => {
return [...prev.slice(0, messages.length > 2 ? index - 1 : 0)];
});
setChatHistory((prev) => {
return [...prev.slice(0, messages.length > 2 ? index - 1 : 0)];
});
sendMessage(message.content, message.messageId, true);
};
useEffect(() => {
if (isReady && initialMessage && isConfigReady) {
if (!isConfigReady) {
toast.error('Cannot send message before the configuration is ready');
return;
}
sendMessage(initialMessage);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [isConfigReady, isReady, initialMessage]);
const sendMessage: ChatContext['sendMessage'] = async (
message,
messageId,
rewrite = false,
) => {
if (loading) return;
setLoading(true);
setMessageAppeared(false);
let sources: Document[] | undefined = undefined;
let recievedMessage = '';
let added = false;
messageId = messageId ?? crypto.randomBytes(7).toString('hex');
setMessages((prevMessages) => [
...prevMessages,
{
content: message,
messageId: messageId,
chatId: chatId!,
role: 'user',
createdAt: new Date(),
},
]);
const messageHandler = async (data: any) => {
if (data.type === 'error') {
toast.error(data.data);
setLoading(false);
return;
}
if (data.type === 'sources') {
sources = data.data;
if (!added) {
setMessages((prevMessages) => [
...prevMessages,
{
content: '',
messageId: data.messageId,
chatId: chatId!,
role: 'assistant',
sources: sources,
createdAt: new Date(),
},
]);
added = true;
}
setMessageAppeared(true);
}
if (data.type === 'message') {
if (!added) {
setMessages((prevMessages) => [
...prevMessages,
{
content: data.data,
messageId: data.messageId,
chatId: chatId!,
role: 'assistant',
sources: sources,
createdAt: new Date(),
},
]);
added = true;
}
setMessages((prev) =>
prev.map((message) => {
if (message.messageId === data.messageId) {
return { ...message, content: message.content + data.data };
}
return message;
}),
);
recievedMessage += data.data;
setMessageAppeared(true);
}
if (data.type === 'messageEnd') {
setChatHistory((prevHistory) => [
...prevHistory,
['human', message],
['assistant', recievedMessage],
]);
setLoading(false);
const lastMsg = messagesRef.current[messagesRef.current.length - 1];
const autoImageSearch = localStorage.getItem('autoImageSearch');
const autoVideoSearch = localStorage.getItem('autoVideoSearch');
if (autoImageSearch === 'true') {
document
.getElementById(`search-images-${lastMsg.messageId}`)
?.click();
}
if (autoVideoSearch === 'true') {
document
.getElementById(`search-videos-${lastMsg.messageId}`)
?.click();
}
if (
lastMsg.role === 'assistant' &&
lastMsg.sources &&
lastMsg.sources.length > 0 &&
!lastMsg.suggestions
) {
const suggestions = await getSuggestions(messagesRef.current);
setMessages((prev) =>
prev.map((msg) => {
if (msg.messageId === lastMsg.messageId) {
return { ...msg, suggestions: suggestions };
}
return msg;
}),
);
}
}
};
const messageIndex = messages.findIndex((m) => m.messageId === messageId);
const res = await fetch('/api/chat', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
content: message,
message: {
messageId: messageId,
chatId: chatId!,
content: message,
},
chatId: chatId!,
files: fileIds,
focusMode: focusMode,
optimizationMode: optimizationMode,
history: rewrite
? chatHistory.slice(0, messageIndex === -1 ? undefined : messageIndex)
: chatHistory,
chatModel: {
name: chatModelProvider.name,
provider: chatModelProvider.provider,
},
embeddingModel: {
name: embeddingModelProvider.name,
provider: embeddingModelProvider.provider,
},
systemInstructions: localStorage.getItem('systemInstructions'),
}),
});
if (!res.body) throw new Error('No response body');
const reader = res.body?.getReader();
const decoder = new TextDecoder('utf-8');
let partialChunk = '';
while (true) {
const { value, done } = await reader.read();
if (done) break;
partialChunk += decoder.decode(value, { stream: true });
try {
const messages = partialChunk.split('\n');
for (const msg of messages) {
if (!msg.trim()) continue;
const json = JSON.parse(msg);
messageHandler(json);
}
partialChunk = '';
} catch (error) {
console.warn('Incomplete JSON, waiting for next chunk...');
}
}
};
return (
<chatContext.Provider
value={{
messages,
chatHistory,
files,
fileIds,
focusMode,
chatId,
hasError,
isMessagesLoaded,
isReady,
loading,
messageAppeared,
notFound,
optimizationMode,
setFileIds,
setFiles,
setFocusMode,
setOptimizationMode,
rewrite,
sendMessage,
}}
>
{children}
</chatContext.Provider>
);
};
export const useChat = () => {
const ctx = useContext(chatContext);
return ctx;
};
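The `sendMessage` loop in the deleted file above splits the streamed response body on newlines and JSON-parses each line, buffering incomplete JSON until the next chunk arrives. A minimal standalone sketch of that framing logic (the `createNdjsonFeeder` name is hypothetical, and this variant keeps only the trailing partial line buffered rather than retrying the whole buffer):

```typescript
// Incremental parser for newline-delimited JSON (NDJSON) streams.
// Buffers an incomplete trailing line until more bytes arrive,
// mirroring the partialChunk handling in sendMessage.
type Handler = (msg: unknown) => void;

const createNdjsonFeeder = (onMessage: Handler) => {
  let buffer = '';
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    // The last element may be an incomplete line; keep it buffered.
    buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (!line.trim()) continue;
      onMessage(JSON.parse(line));
    }
  };
};
```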


@@ -1,63 +1,41 @@
 export const webSearchRetrieverPrompt = `
-You are an AI question rephraser. You will be given a conversation and a follow-up question, you will have to rephrase the follow up question so it is a standalone question and can be used by another LLM to search the web for information to answer it.
-If it is a simple writing task or a greeting (unless the greeting contains a question after it) like Hi, Hello, How are you, etc. than a question then you need to return \`not_needed\` as the response (This is because the LLM won't need to search the web for finding information on this topic).
-If the user asks some question from some URL or wants you to summarize a PDF or a webpage (via URL) you need to return the links inside the \`links\` XML block and the question inside the \`question\` XML block. If the user wants to you to summarize the webpage or the PDF you need to return \`summarize\` inside the \`question\` XML block in place of a question and the link to summarize in the \`links\` XML block.
-You must always return the rephrased question inside the \`question\` XML block, if there are no links in the follow-up question then don't insert a \`links\` XML block in your response.
-There are several examples attached for your reference inside the below \`examples\` XML block
+You are an AI question rephraser. You will be given a conversation and a follow-up question; rephrase it into a standalone question that another LLM can use to search the web.
+
+Return ONLY a JSON object that matches this schema:
+query: string // the standalone question (or "summarize")
+links: string[] // URLs extracted from the user query (empty if none)
+searchRequired: boolean // true if web search is needed, false for greetings/simple writing tasks
+searchMode: "" | "normal" | "news" // "" when searchRequired is false; "news" if the user asks for news/articles, otherwise "normal"

-<examples>
-1. Follow up question: What is the capital of France
-Rephrased question:\`
-<question>
-Capital of france
-</question>
-\`
+Rules
+- Greetings / simple writing tasks → query:"", links:[], searchRequired:false, searchMode:""
+- Summarizing a URL → query:"summarize", links:[url...], searchRequired:true, searchMode:"normal"
+- Asking for news/articles → searchMode:"news"
+
+Examples
+1. Follow-up: What is the capital of France?
+"query":"capital of France","links":[],"searchRequired":true,"searchMode":"normal"

 2. Hi, how are you?
-Rephrased question\`
-<question>
-not_needed
-</question>
-\`
+"query":"","links":[],"searchRequired":false,"searchMode":""

-3. Follow up question: What is Docker?
-Rephrased question: \`
-<question>
-What is Docker
-</question>
-\`
+3. Follow-up: What is Docker?
+"query":"what is Docker","links":[],"searchRequired":true,"searchMode":"normal"

-4. Follow up question: Can you tell me what is X from https://example.com
-Rephrased question: \`
-<question>
-Can you tell me what is X?
-</question>
-<links>
-https://example.com
-</links>
-\`
+4. Follow-up: Can you tell me what is X from https://example.com?
+"query":"what is X","links":["https://example.com"],"searchRequired":true,"searchMode":"normal"

-5. Follow up question: Summarize the content from https://example.com
-Rephrased question: \`
-<question>
-summarize
-</question>
-<links>
-https://example.com
-</links>
-\`
-</examples>
+5. Follow-up: Summarize the content from https://example.com
+"query":"summarize","links":["https://example.com"],"searchRequired":true,"searchMode":"normal"
+
+6. Follow-up: Latest news about AI
+"query":"latest news about AI","links":[],"searchRequired":true,"searchMode":"news"

-Anything below is the part of the actual conversation and you need to use conversation and the follow-up question to rephrase the follow-up question as a standalone question based on the guidelines shared above.
 <conversation>
 {chat_history}
 </conversation>

-Follow up question: {query}
+Follow-up question: {query}
 Rephrased question:
 `;
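Since the rewritten prompt asks the model for a bare JSON object, downstream code has to verify the shape before trusting it. A sketch of that validation in plain TypeScript (the repo itself uses a zod schema for this; the type-guard name here is illustrative):

```typescript
interface RetrieverOutput {
  query: string;
  links: string[];
  searchRequired: boolean;
  searchMode: '' | 'normal' | 'news';
}

// Narrow an unknown parsed JSON value to the retriever output shape.
const isRetrieverOutput = (v: unknown): v is RetrieverOutput => {
  if (typeof v !== 'object' || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.query === 'string' &&
    Array.isArray(o.links) &&
    o.links.every((l) => typeof l === 'string') &&
    typeof o.searchRequired === 'boolean' &&
    ['', 'normal', 'news'].includes(o.searchMode as string)
  );
};
```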


@@ -9,18 +9,6 @@ export const PROVIDER_INFO = {
 import { BaseChatModel } from '@langchain/core/language_models/chat_models';

 const anthropicChatModels: Record<string, string>[] = [
-  {
-    displayName: 'Claude 4.1 Opus',
-    key: 'claude-opus-4-1-20250805',
-  },
-  {
-    displayName: 'Claude 4 Opus',
-    key: 'claude-opus-4-20250514',
-  },
-  {
-    displayName: 'Claude 4 Sonnet',
-    key: 'claude-sonnet-4-20250514',
-  },
   {
     displayName: 'Claude 3.7 Sonnet',
     key: 'claude-3-7-sonnet-20250219',


@@ -14,16 +14,16 @@ import { Embeddings } from '@langchain/core/embeddings';
 const geminiChatModels: Record<string, string>[] = [
   {
-    displayName: 'Gemini 2.5 Flash',
-    key: 'gemini-2.5-flash',
+    displayName: 'Gemini 2.5 Flash Preview 05-20',
+    key: 'gemini-2.5-flash-preview-05-20',
   },
   {
-    displayName: 'Gemini 2.5 Flash-Lite',
-    key: 'gemini-2.5-flash-lite',
+    displayName: 'Gemini 2.5 Pro Preview',
+    key: 'gemini-2.5-pro-preview-05-06',
   },
   {
-    displayName: 'Gemini 2.5 Pro',
-    key: 'gemini-2.5-pro',
+    displayName: 'Gemini 2.5 Pro Experimental',
+    key: 'gemini-2.5-pro-preview-05-06',
   },
   {
     displayName: 'Gemini 2.0 Flash',
@@ -75,7 +75,7 @@ export const loadGeminiChatModels = async () => {
       displayName: model.displayName,
       model: new ChatGoogleGenerativeAI({
         apiKey: geminiApiKey,
-        model: model.key,
+        modelName: model.key,
         temperature: 0.7,
       }) as unknown as BaseChatModel,
     };
@@ -108,7 +108,7 @@ export const loadGeminiEmbeddingModels = async () => {
     return embeddingModels;
   } catch (err) {
-    console.error(`Error loading Gemini embeddings models: ${err}`);
+    console.error(`Error loading OpenAI embeddings models: ${err}`);
     return {};
   }
 };


@@ -1,4 +1,4 @@
-import { ChatGroq } from '@langchain/groq';
+import { ChatOpenAI } from '@langchain/openai';
 import { getGroqApiKey } from '../config';
 import { ChatModel } from '.';
@@ -28,10 +28,16 @@ export const loadGroqChatModels = async () => {
     groqChatModels.forEach((model: any) => {
       chatModels[model.id] = {
         displayName: model.id,
-        model: new ChatGroq({
+        model: new ChatOpenAI({
           apiKey: groqApiKey,
-          model: model.id,
+          modelName: model.id,
           temperature: 0.7,
+          configuration: {
+            baseURL: 'https://api.groq.com/openai/v1',
+          },
+          metadata: {
+            'model-type': 'groq',
+          },
         }) as unknown as BaseChatModel,
       };
     });


@@ -120,11 +120,7 @@ export const getAvailableChatModelProviders = async () => {
           model: new ChatOpenAI({
             apiKey: customOpenAiApiKey,
             modelName: customOpenAiModelName,
-            ...((() => {
-              const temperatureRestrictedModels = ['gpt-5-nano', 'gpt-5', 'gpt-5-mini', 'o1', 'o3', 'o3-mini', 'o4-mini'];
-              const isTemperatureRestricted = temperatureRestrictedModels.some(restrictedModel => customOpenAiModelName.includes(restrictedModel));
-              return isTemperatureRestricted ? {} : { temperature: 0.7 };
-            })()),
+            temperature: 0.7,
             configuration: {
               baseURL: customOpenAiApiUrl,
             },


@@ -1,5 +1,5 @@
 import axios from 'axios';
-import { getKeepAlive, getOllamaApiEndpoint, getOllamaApiKey } from '../config';
+import { getKeepAlive, getOllamaApiEndpoint } from '../config';
 import { ChatModel, EmbeddingModel } from '.';

 export const PROVIDER_INFO = {
@@ -11,7 +11,6 @@ import { OllamaEmbeddings } from '@langchain/ollama';
 export const loadOllamaChatModels = async () => {
   const ollamaApiEndpoint = getOllamaApiEndpoint();
-  const ollamaApiKey = getOllamaApiKey();

   if (!ollamaApiEndpoint) return {};
@@ -34,9 +33,6 @@ export const loadOllamaChatModels = async () => {
           model: model.model,
           temperature: 0.7,
           keepAlive: getKeepAlive(),
-          ...(ollamaApiKey
-            ? { headers: { Authorization: `Bearer ${ollamaApiKey}` } }
-            : {}),
         }),
       };
     });
@@ -50,7 +46,6 @@ export const loadOllamaChatModels = async () => {
 export const loadOllamaEmbeddingModels = async () => {
   const ollamaApiEndpoint = getOllamaApiEndpoint();
-  const ollamaApiKey = getOllamaApiKey();

   if (!ollamaApiEndpoint) return {};
@@ -71,9 +66,6 @@ export const loadOllamaEmbeddingModels = async () => {
         model: new OllamaEmbeddings({
           baseUrl: ollamaApiEndpoint,
           model: model.model,
-          ...(ollamaApiKey
-            ? { headers: { Authorization: `Bearer ${ollamaApiKey}` } }
-            : {}),
         }),
       };
     });
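The removed lines rely on TypeScript's conditional object spread to attach an Authorization header only when a key is configured; spreading an empty object contributes no properties, so `headers` stays absent rather than becoming `undefined`. A standalone sketch of the pattern (the `buildOptions` helper is hypothetical):

```typescript
// Build client options, adding auth headers only when apiKey is set.
const buildOptions = (baseUrl: string, apiKey?: string) => ({
  baseUrl,
  ...(apiKey
    ? { headers: { Authorization: `Bearer ${apiKey}` } }
    : {}),
});
```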


@@ -26,10 +26,6 @@ const openaiChatModels: Record<string, string>[] = [
     displayName: 'GPT-4 omni',
     key: 'gpt-4o',
   },
-  {
-    displayName: 'GPT-4o (2024-05-13)',
-    key: 'gpt-4o-2024-05-13',
-  },
   {
     displayName: 'GPT-4 omni mini',
     key: 'gpt-4o-mini',
@@ -46,34 +42,6 @@ const openaiChatModels: Record<string, string>[] = [
     displayName: 'GPT 4.1',
     key: 'gpt-4.1',
   },
-  {
-    displayName: 'GPT 5 nano',
-    key: 'gpt-5-nano',
-  },
-  {
-    displayName: 'GPT 5',
-    key: 'gpt-5',
-  },
-  {
-    displayName: 'GPT 5 Mini',
-    key: 'gpt-5-mini',
-  },
-  {
-    displayName: 'o1',
-    key: 'o1',
-  },
-  {
-    displayName: 'o3',
-    key: 'o3',
-  },
-  {
-    displayName: 'o3 Mini',
-    key: 'o3-mini',
-  },
-  {
-    displayName: 'o4 Mini',
-    key: 'o4-mini',
-  },
 ];

 const openaiEmbeddingModels: Record<string, string>[] = [
@@ -96,23 +64,13 @@ export const loadOpenAIChatModels = async () => {
   const chatModels: Record<string, ChatModel> = {};

   openaiChatModels.forEach((model) => {
-    // Models that only support temperature = 1
-    const temperatureRestrictedModels = ['gpt-5-nano', 'gpt-5', 'gpt-5-mini', 'o1', 'o3', 'o3-mini', 'o4-mini'];
-    const isTemperatureRestricted = temperatureRestrictedModels.some(restrictedModel => model.key.includes(restrictedModel));
-
-    const modelConfig: any = {
-      apiKey: openaiApiKey,
-      modelName: model.key,
-    };
-
-    // Only add temperature if the model supports it
-    if (!isTemperatureRestricted) {
-      modelConfig.temperature = 0.7;
-    }
-
     chatModels[model.key] = {
       displayName: model.displayName,
-      model: new ChatOpenAI(modelConfig) as unknown as BaseChatModel,
+      model: new ChatOpenAI({
+        apiKey: openaiApiKey,
+        modelName: model.key,
+        temperature: 0.7,
+      }) as unknown as BaseChatModel,
     };
   });
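The deleted branch gates the `temperature` option on the model name, since some OpenAI reasoning-style models reject custom temperatures. The check reduces to a substring match plus a conditional spread (sketch only; the model list mirrors the removed code, and `buildModelConfig` is an illustrative name):

```typescript
// Models that only accept the default temperature.
const temperatureRestrictedModels = [
  'gpt-5-nano', 'gpt-5', 'gpt-5-mini', 'o1', 'o3', 'o3-mini', 'o4-mini',
];

// Returns a config object that includes temperature only for
// models that support overriding it.
const buildModelConfig = (apiKey: string, modelName: string) => {
  const restricted = temperatureRestrictedModels.some((m) =>
    modelName.includes(m),
  );
  return {
    apiKey,
    modelName,
    ...(restricted ? {} : { temperature: 0.7 }),
  };
};
```

Note the `includes` check is deliberately loose, so dated variants like `o3-mini-2025-01-31` are also caught.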


@@ -24,6 +24,7 @@ import computeSimilarity from '../utils/computeSimilarity';
 import formatChatHistoryAsString from '../utils/formatHistory';
 import eventEmitter from 'events';
 import { StreamEvent } from '@langchain/core/tracers/log_stream';
+import { z } from 'zod';

 export interface MetaSearchAgentType {
   searchAndAnswer: (
@@ -52,6 +53,17 @@ type BasicChainInput = {
   query: string;
 };

+const retrieverLLMOutputSchema = z.object({
+  query: z.string().describe('The query to search the web for.'),
+  links: z
+    .array(z.string())
+    .describe('The links to search/summarize if present'),
+  searchRequired: z
+    .boolean()
+    .describe('Wether there is a need to search the web'),
+  searchMode: z.enum(['', 'normal', 'news']).describe('The search mode.'),
+});
+
 class MetaSearchAgent implements MetaSearchAgentType {
   private config: Config;
   private strParser = new StringOutputParser();
@@ -62,73 +74,71 @@ class MetaSearchAgent implements MetaSearchAgentType {
   private async createSearchRetrieverChain(llm: BaseChatModel) {
     (llm as unknown as ChatOpenAI).temperature = 0;

     return RunnableSequence.from([
       PromptTemplate.fromTemplate(this.config.queryGeneratorPrompt),
-      llm,
-      this.strParser,
-      RunnableLambda.from(async (input: string) => {
-        const linksOutputParser = new LineListOutputParser({
-          key: 'links',
-        });
-
-        const questionOutputParser = new LineOutputParser({
-          key: 'question',
-        });
-
-        const links = await linksOutputParser.parse(input);
-        let question = this.config.summarizer
-          ? await questionOutputParser.parse(input)
-          : input;
-
-        if (question === 'not_needed') {
-          return { query: '', docs: [] };
-        }
-
-        if (links.length > 0) {
-          if (question.length === 0) {
-            question = 'summarize';
-          }
+      Object.assign(
+        Object.create(Object.getPrototypeOf(llm)),
+        llm,
+      ).withStructuredOutput(retrieverLLMOutputSchema, {
+        ...(llm.metadata?.['model-type'] === 'groq'
+          ? {
+              method: 'json-object',
+            }
+          : {}),
+      }),
+      RunnableLambda.from(
+        async (input: z.infer<typeof retrieverLLMOutputSchema>) => {
+          let question = input.query;
+          const links = input.links;
+
+          if (!input.searchRequired) {
+            return { query: '', docs: [] };
+          }
+
+          if (links.length > 0) {
+            if (question.length === 0) {
+              question = 'summarize';
+            }

           let docs: Document[] = [];

           const linkDocs = await getDocumentsFromLinks({ links });

           const docGroups: Document[] = [];

           linkDocs.map((doc) => {
             const URLDocExists = docGroups.find(
               (d) =>
                 d.metadata.url === doc.metadata.url &&
                 d.metadata.totalDocs < 10,
             );

             if (!URLDocExists) {
               docGroups.push({
                 ...doc,
                 metadata: {
                   ...doc.metadata,
                   totalDocs: 1,
                 },
               });
             }

             const docIndex = docGroups.findIndex(
               (d) =>
                 d.metadata.url === doc.metadata.url &&
                 d.metadata.totalDocs < 10,
             );

             if (docIndex !== -1) {
               docGroups[docIndex].pageContent =
                 docGroups[docIndex].pageContent + `\n\n` + doc.pageContent;
               docGroups[docIndex].metadata.totalDocs += 1;
             }
           });

           await Promise.all(
docGroups.map(async (doc) => {
const res = await llm.invoke(` linkDocs.map((doc) => {
const URLDocExists = docGroups.find(
(d) =>
d.metadata.url === doc.metadata.url &&
d.metadata.totalDocs < 10,
);
if (!URLDocExists) {
docGroups.push({
...doc,
metadata: {
...doc.metadata,
totalDocs: 1,
},
});
}
const docIndex = docGroups.findIndex(
(d) =>
d.metadata.url === doc.metadata.url &&
d.metadata.totalDocs < 10,
);
if (docIndex !== -1) {
docGroups[docIndex].pageContent =
docGroups[docIndex].pageContent + `\n\n` + doc.pageContent;
docGroups[docIndex].metadata.totalDocs += 1;
}
});
await Promise.all(
docGroups.map(async (doc) => {
const res = await llm.invoke(`
You are a web search summarizer, tasked with summarizing a piece of text retrieved from a web search. Your job is to summarize the You are a web search summarizer, tasked with summarizing a piece of text retrieved from a web search. Your job is to summarize the
text into a detailed, 2-4 paragraph explanation that captures the main ideas and provides a comprehensive answer to the query. text into a detailed, 2-4 paragraph explanation that captures the main ideas and provides a comprehensive answer to the query.
If the query is \"summarize\", you should provide a detailed summary of the text. If the query is a specific question, you should answer it in the summary. If the query is \"summarize\", you should provide a detailed summary of the text. If the query is a specific question, you should answer it in the summary.
@@ -189,46 +199,50 @@ class MetaSearchAgent implements MetaSearchAgentType {
Make sure to answer the query in the summary. Make sure to answer the query in the summary.
`); `);
const document = new Document({ const document = new Document({
pageContent: res.content as string, pageContent: res.content as string,
metadata: { metadata: {
title: doc.metadata.title, title: doc.metadata.title,
url: doc.metadata.url, url: doc.metadata.url,
}, },
}); });
docs.push(document); docs.push(document);
}),
);
return { query: question, docs: docs };
} else {
question = question.replace(/<think>.*?<\/think>/g, '');
const res = await searchSearxng(question, {
language: 'en',
engines: this.config.activeEngines,
});
const documents = res.results.map(
(result) =>
new Document({
pageContent:
result.content ||
(this.config.activeEngines.includes('youtube')
? result.title
: '') /* Todo: Implement transcript grabbing using Youtubei (source: https://www.npmjs.com/package/youtubei) */,
metadata: {
title: result.title,
url: result.url,
...(result.img_src && { img_src: result.img_src }),
},
}), }),
); );
return { query: question, docs: documents }; return { query: question, docs: docs };
} } else {
}), question = question.replace(/<think>.*?<\/think>/g, '');
const res = await searchSearxng(question, {
language: 'en',
engines:
input.searchMode === 'normal'
? this.config.activeEngines
: ['bing news'],
});
const documents = res.results.map(
(result) =>
new Document({
pageContent:
result.content ||
(this.config.activeEngines.includes('youtube')
? result.title
: '') /* Todo: Implement transcript grabbing using Youtubei (source: https://www.npmjs.com/package/youtubei) */,
metadata: {
title: result.title,
url: result.url,
...(result.img_src && { img_src: result.img_src }),
},
}),
);
return { query: question, docs: documents };
}
},
),
]); ]);
} }
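The `Object.assign(Object.create(Object.getPrototypeOf(llm)), llm)` expression in the new chain shallow-clones the model instance while preserving its prototype chain, so `withStructuredOutput` is applied to a copy rather than mutating the shared `llm` object. A standalone sketch of the idiom, using a stand-in `Model` class rather than the LangChain API:

```typescript
// Stand-in class; the real code applies this idiom to a LangChain chat model.
class Model {
  constructor(public temperature: number) {}
  describe(): string {
    return `temp=${this.temperature}`;
  }
}

const original = new Model(0.7);

// Copy own properties onto a fresh object that shares the prototype,
// so methods still resolve, but property writes no longer touch `original`.
const clone: Model = Object.assign(
  Object.create(Object.getPrototypeOf(original)),
  original,
);

clone.temperature = 0;
```

Because the copy is shallow, nested objects are still shared with the original; the trick is only safe when, as here, the clone merely gains configuration rather than deeply mutating state.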

View File

@@ -1,11 +1,8 @@
-import { BaseMessage, isAIMessage } from '@langchain/core/messages';
+import { BaseMessage } from '@langchain/core/messages';

 const formatChatHistoryAsString = (history: BaseMessage[]) => {
   return history
-    .map(
-      (message) =>
-        `${isAIMessage(message) ? 'AI' : 'User'}: ${message.content}`,
-    )
+    .map((message) => `${message._getType()}: ${message.content}`)
     .join('\n');
 };
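The new formatter lets each message report its own role tag (LangChain's `_getType()` returns strings such as `'human'` and `'ai'`) instead of the binary AI/User check. A dependency-free sketch of the same shape, with a minimal message interface standing in for `BaseMessage`:

```typescript
// Minimal message type mirroring only the members the formatter uses;
// the real code operates on LangChain BaseMessage instances.
interface ChatMessage {
  _getType(): string;
  content: string;
}

const formatChatHistoryAsString = (history: ChatMessage[]): string =>
  history.map((m) => `${m._getType()}: ${m.content}`).join('\n');

const history: ChatMessage[] = [
  { _getType: () => 'human', content: 'Hi' },
  { _getType: () => 'ai', content: 'Hello!' },
];
```

This also means system and tool messages now render with their own tags rather than being lumped under "User".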

View File

@@ -653,14 +653,6 @@
"@google/generative-ai" "^0.24.0" "@google/generative-ai" "^0.24.0"
uuid "^11.1.0" uuid "^11.1.0"
"@langchain/groq@^0.2.3":
version "0.2.3"
resolved "https://registry.yarnpkg.com/@langchain/groq/-/groq-0.2.3.tgz#3bfcbfc827cf469df3a1b5bb9799f4b0212b4625"
integrity sha512-r+yjysG36a0IZxTlCMr655Feumfb4IrOyA0jLLq4l7gEhVyMpYXMwyE6evseyU2LRP+7qOPbGRVpGqAIK0MsUA==
dependencies:
groq-sdk "^0.19.0"
zod "^3.22.4"
"@langchain/ollama@^0.2.3": "@langchain/ollama@^0.2.3":
version "0.2.3" version "0.2.3"
resolved "https://registry.yarnpkg.com/@langchain/ollama/-/ollama-0.2.3.tgz#4868e66db4fc480f08c42fc652274abbab0416f0" resolved "https://registry.yarnpkg.com/@langchain/ollama/-/ollama-0.2.3.tgz#4868e66db4fc480f08c42fc652274abbab0416f0"
@@ -2740,19 +2732,6 @@ graphql@^16.11.0:
resolved "https://registry.yarnpkg.com/graphql/-/graphql-16.11.0.tgz#96d17f66370678027fdf59b2d4c20b4efaa8a633" resolved "https://registry.yarnpkg.com/graphql/-/graphql-16.11.0.tgz#96d17f66370678027fdf59b2d4c20b4efaa8a633"
integrity sha512-mS1lbMsxgQj6hge1XZ6p7GPhbrtFwUFYi3wRzXAC/FmYnyXMTvvI3td3rjmQ2u8ewXueaSvRPWaEcgVVOT9Jnw== integrity sha512-mS1lbMsxgQj6hge1XZ6p7GPhbrtFwUFYi3wRzXAC/FmYnyXMTvvI3td3rjmQ2u8ewXueaSvRPWaEcgVVOT9Jnw==
groq-sdk@^0.19.0:
version "0.19.0"
resolved "https://registry.yarnpkg.com/groq-sdk/-/groq-sdk-0.19.0.tgz#564ce018172dc3e2e2793398e0227a035a357d09"
integrity sha512-vdh5h7ORvwvOvutA80dKF81b0gPWHxu6K/GOJBOM0n6p6CSqAVLhFfeS79Ef0j/yCycDR09jqY7jkYz9dLiS6w==
dependencies:
"@types/node" "^18.11.18"
"@types/node-fetch" "^2.6.4"
abort-controller "^3.0.0"
agentkeepalive "^4.2.1"
form-data-encoder "1.7.2"
formdata-node "^4.3.2"
node-fetch "^2.6.7"
guid-typescript@^1.0.9: guid-typescript@^1.0.9:
version "1.0.9" version "1.0.9"
resolved "https://registry.yarnpkg.com/guid-typescript/-/guid-typescript-1.0.9.tgz#e35f77003535b0297ea08548f5ace6adb1480ddc" resolved "https://registry.yarnpkg.com/guid-typescript/-/guid-typescript-1.0.9.tgz#e35f77003535b0297ea08548f5ace6adb1480ddc"