Compare commits

...

33 Commits

Author SHA1 Message Date
ee6e197ec0 feat(app): lint & beautify 2025-03-18 11:29:04 +05:30
32f26bb4e8 feat(app): add groq, gemini & anthropic provider 2025-03-18 11:28:47 +05:30
4cb20542a5 feat(config): update file path, add post endpoint 2025-03-18 10:33:32 +05:30
97f6196d9b feat(app): add GET config route 2025-03-18 10:25:09 +05:30
6c227cab6f feat(providers): move providers to UI 2025-03-18 10:24:51 +05:30
e9e34ddff9 feat(ui): add meta search agent 2025-03-18 10:24:33 +05:30
e29a08dc46 feat(ui): add necessary utils 2025-03-18 10:24:16 +05:30
5c313e9bed feat(ui): update packages, add config, add searxng 2025-03-18 10:23:59 +05:30
6b5bd9d79b feat(prompts): move to UI 2025-03-18 10:23:21 +05:30
64d2a467b0 Merge pull request #672 from sjiampojamarn/scrolling
Only set scrollIntoView for user msg.
2025-03-17 12:03:05 +05:30
9a2c4fe3b6 Only set scrollIntoView for user msg. 2025-03-16 22:15:58 -07:00
060c68a900 feat(message-box): lint & beautify 2025-03-14 22:05:07 +05:30
e6b87f89ec feat(sample-config): add custom openai model name 2025-03-08 20:08:27 +05:30
89b5229ce9 Merge pull request #663 from ericdachen/master
Update Readme
2025-03-05 11:11:07 +05:30
7756340dd9 Update README.md 2025-03-05 11:09:19 +05:30
bbd2e9c359 feat(readme): update warp banner 2025-03-05 11:05:25 +05:30
a32eb1dda3 feat(readme): lint & beautify, update anchor URL 2025-03-05 10:55:02 +05:30
aa834f7f04 Update README.md 2025-03-04 14:45:10 -05:00
064c0fbe42 Update README.md 2025-03-04 12:16:10 -05:00
bf4cf8eaeb Update README.md 2025-03-04 12:14:17 -05:00
a24992a3db Merge pull request #655 from ShortCipher5/patch-1
chore: Add Sealos 1-click deployment
2025-03-01 21:56:01 +05:30
d584067bb1 Update README.md 2025-02-27 23:26:45 -08:00
df4350f966 Merge branch 'master' of https://github.com/ItzCrazyKns/Perplexica 2025-02-26 10:40:34 +05:30
652ca2fdf4 Merge pull request #649 from QuietlyChan/fix/light-theme-ui-bug
fix(ui): improve dark mode text color for attachment buttons
2025-02-26 10:36:41 +05:30
216576128d fix(ui): update attachment text color for light and dark modes 2025-02-25 19:26:58 +08:00
bb3f180583 fix(ui): improve dark mode text color for attachment buttons 2025-02-25 17:26:33 +08:00
4d24d73161 Merge pull request #631 from user1007017/patch-1
Update README.md grammatical error
2025-02-20 10:37:33 +05:30
2e166c217b fix(MessageBox): break too long message title 2025-02-19 10:34:51 +08:00
4c73caadf6 feat(custom-openai): save live changes 2025-02-17 16:24:41 +05:30
5f0b87f4a9 Update README.md 2025-02-15 19:06:46 +01:00
115e6b2a71 Merge branch 'master' of https://github.com/ItzCrazyKns/Perplexica 2025-02-15 12:52:30 +05:30
a5c79c92ed feat(settings): add embedding provider settings 2025-02-15 12:52:27 +05:30
db3cea446e Update UPDATING.md 2025-02-15 12:33:43 +05:30
37 changed files with 2825 additions and 143 deletions

@@ -1,7 +1,22 @@
# 🚀 Perplexica - An AI-powered search engine 🔎 <!-- omit in toc -->
[![Discord](https://dcbadge.vercel.app/api/server/26aArMy8tT?style=flat&compact=true)](https://discord.gg/26aArMy8tT)
<div align="center" markdown="1">
<sup>Special thanks to:</sup>
<br>
<br>
<a href="https://www.warp.dev/perplexica">
<img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/775dd593-9b5f-40f1-bf48-479faff4c27b">
</a>
### [Warp, the AI Devtool that lives in your terminal](https://www.warp.dev/perplexica)
[Available for MacOS, Linux, & Windows](https://www.warp.dev/perplexica)
</div>
<hr/>
[![Discord](https://dcbadge.vercel.app/api/server/26aArMy8tT?style=flat&compact=true)](https://discord.gg/26aArMy8tT)
![preview](.assets/perplexica-screenshot.png?)
@@ -44,7 +59,7 @@ Want to know more about its architecture and how it works? You can read it [here
- **Normal Mode:** Processes your query and performs a web search.
- **Focus Modes:** Special modes to better answer specific types of questions. Perplexica currently has 6 focus modes:
- **All Mode:** Searches the entire web to find the best results.
- **Writing Assistant Mode:** Helpful for writing tasks that does not require searching the web.
- **Writing Assistant Mode:** Helpful for writing tasks that do not require searching the web.
- **Academic Search Mode:** Finds articles and papers, ideal for academic research.
- **YouTube Search Mode:** Finds YouTube videos based on the search query.
- **Wolfram Alpha Search Mode:** Answers queries that need calculations or data analysis using Wolfram Alpha.
@@ -143,6 +158,7 @@ You can access Perplexica over your home network by following our networking gui
## One-Click Deployment
[![Deploy to Sealos](https://raw.githubusercontent.com/labring-actions/templates/main/Deploy-on-Sealos.svg)](https://usw.sealos.io/?openapp=system-template%3FtemplateName%3Dperplexica)
[![Deploy to RepoCloud](https://d16t0pc4846x52.cloudfront.net/deploylobe.svg)](https://repocloud.io/details/?app_id=267)
## Upcoming Features

@@ -7,34 +7,43 @@ To update Perplexica to the latest version, follow these steps:
1. Clone the latest version of Perplexica from GitHub:
```bash
git clone https://github.com/ItzCrazyKns/Perplexica.git
git clone https://github.com/ItzCrazyKns/Perplexica.git
```
2. Navigate to the Project Directory.
2. Navigate to the project directory.
3. Pull latest images from registry.
3. Check for changes in the configuration files. If the `sample.config.toml` file contains new fields, delete your existing `config.toml` file, rename `sample.config.toml` to `config.toml`, and update the configuration accordingly.
4. Pull the latest images from the registry.
```bash
docker compose pull
```
4. Update and Recreate containers.
5. Update and recreate the containers.
```bash
docker compose up -d
```
5. Once the command completes running go to http://localhost:3000 and verify the latest changes.
6. Once the command completes, go to http://localhost:3000 and verify the latest changes.
## For non Docker users
## For non-Docker users
1. Clone the latest version of Perplexica from GitHub:
```bash
git clone https://github.com/ItzCrazyKns/Perplexica.git
git clone https://github.com/ItzCrazyKns/Perplexica.git
```
2. Navigate to the Project Directory
3. Execute `npm i` in both the `ui` folder and the root directory.
4. Once packages are updated, execute `npm run build` in both the `ui` folder and the root directory.
5. Finally, start both the frontend and the backend by running `npm run start` in both the `ui` folder and the root directory.
2. Navigate to the project directory.
3. Check for changes in the configuration files. If the `sample.config.toml` file contains new fields, delete your existing `config.toml` file, rename `sample.config.toml` to `config.toml`, and update the configuration accordingly.
4. Execute `npm i` in both the `ui` folder and the root directory.
5. Once the packages are updated, execute `npm run build` in both the `ui` folder and the root directory.
6. Finally, start both the frontend and the backend by running `npm run start` in both the `ui` folder and the root directory.
---

@@ -18,6 +18,7 @@ API_KEY = ""
[MODELS.CUSTOM_OPENAI]
API_KEY = ""
API_URL = ""
MODEL_NAME = ""
[MODELS.OLLAMA]
API_URL = "" # Ollama API URL - http://host.docker.internal:11434

@@ -85,10 +85,12 @@ router.post('/', async (req, res) => {
if (body.chatModel?.provider === 'custom_openai') {
llm = new ChatOpenAI({
modelName: body.chatModel?.model || getCustomOpenaiModelName(),
openAIApiKey: body.chatModel?.customOpenAIKey || getCustomOpenaiApiKey(),
openAIApiKey:
body.chatModel?.customOpenAIKey || getCustomOpenaiApiKey(),
temperature: 0.7,
configuration: {
baseURL: body.chatModel?.customOpenAIBaseURL || getCustomOpenaiApiUrl(),
baseURL:
body.chatModel?.customOpenAIBaseURL || getCustomOpenaiApiUrl(),
},
}) as unknown as BaseChatModel;
} else if (

ui/app/api/config/route.ts (new file, 109 lines)

@@ -0,0 +1,109 @@
import {
getAnthropicApiKey,
getCustomOpenaiApiKey,
getCustomOpenaiApiUrl,
getCustomOpenaiModelName,
getGeminiApiKey,
getGroqApiKey,
getOllamaApiEndpoint,
getOpenaiApiKey,
updateConfig,
} from '@/lib/config';
import {
getAvailableChatModelProviders,
getAvailableEmbeddingModelProviders,
} from '@/lib/providers';
export const GET = async (req: Request) => {
try {
const config: Record<string, any> = {};
const [chatModelProviders, embeddingModelProviders] = await Promise.all([
getAvailableChatModelProviders(),
getAvailableEmbeddingModelProviders(),
]);
config['chatModelProviders'] = {};
config['embeddingModelProviders'] = {};
for (const provider in chatModelProviders) {
config['chatModelProviders'][provider] = Object.keys(
chatModelProviders[provider],
).map((model) => {
return {
name: model,
displayName: chatModelProviders[provider][model].displayName,
};
});
}
for (const provider in embeddingModelProviders) {
config['embeddingModelProviders'][provider] = Object.keys(
embeddingModelProviders[provider],
).map((model) => {
return {
name: model,
displayName: embeddingModelProviders[provider][model].displayName,
};
});
}
config['openaiApiKey'] = getOpenaiApiKey();
config['ollamaApiUrl'] = getOllamaApiEndpoint();
config['anthropicApiKey'] = getAnthropicApiKey();
config['groqApiKey'] = getGroqApiKey();
config['geminiApiKey'] = getGeminiApiKey();
config['customOpenaiApiUrl'] = getCustomOpenaiApiUrl();
config['customOpenaiApiKey'] = getCustomOpenaiApiKey();
config['customOpenaiModelName'] = getCustomOpenaiModelName();
return Response.json({ ...config }, { status: 200 });
} catch (err) {
console.error('An error occurred while getting config:', err);
return Response.json(
{ message: 'An error occurred while getting config' },
{ status: 500 },
);
}
};
export const POST = async (req: Request) => {
try {
const config = await req.json();
const updatedConfig = {
MODELS: {
OPENAI: {
API_KEY: config.openaiApiKey,
},
GROQ: {
API_KEY: config.groqApiKey,
},
ANTHROPIC: {
API_KEY: config.anthropicApiKey,
},
GEMINI: {
API_KEY: config.geminiApiKey,
},
OLLAMA: {
API_URL: config.ollamaApiUrl,
},
CUSTOM_OPENAI: {
API_URL: config.customOpenaiApiUrl,
API_KEY: config.customOpenaiApiKey,
MODEL_NAME: config.customOpenaiModelName,
},
},
};
updateConfig(updatedConfig);
return Response.json({ message: 'Config updated' }, { status: 200 });
} catch (err) {
console.error('An error occurred while updating config:', err);
return Response.json(
{ message: 'An error occurred while updating config' },
{ status: 500 },
);
}
};
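
For orientation, a minimal sketch of how a client might exercise this route (a sketch only; `demo`, the example payload value, and the provider list in the comment are assumptions, while the field names mirror what the handlers above return and read):

```ts
// Sketch only: exercising the GET/POST handlers above from client code.
async function demo() {
  const res = await fetch('/api/config', {
    headers: { 'Content-Type': 'application/json' },
  });
  const config = await res.json();
  console.log(Object.keys(config.chatModelProviders)); // e.g. ['openai', 'ollama']

  // POST merges the supplied keys into config.toml; keys left undefined are
  // skipped by the server-side merge, so a subset updates only those values.
  await fetch('/api/config', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ customOpenaiModelName: 'my-model' }), // placeholder
  });
}
```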

@@ -116,7 +116,7 @@ const Page = () => {
useEffect(() => {
const fetchConfig = async () => {
setIsLoading(true);
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/config`, {
const res = await fetch(`/api/config`, {
headers: {
'Content-Type': 'application/json',
},
@@ -187,16 +187,13 @@ const Page = () => {
[key]: value,
} as SettingsType;
const response = await fetch(
`${process.env.NEXT_PUBLIC_API_URL}/config`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(updatedConfig),
const response = await fetch(`/api/config`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
);
body: JSON.stringify(updatedConfig),
});
if (!response.ok) {
throw new Error('Failed to update config');
@@ -208,7 +205,7 @@ const Page = () => {
key.toLowerCase().includes('api') ||
key.toLowerCase().includes('url')
) {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/config`, {
const res = await fetch(`/api/config`, {
headers: {
'Content-Type': 'application/json',
},
@@ -223,11 +220,11 @@ const Page = () => {
setChatModels(data.chatModelProviders || {});
setEmbeddingModels(data.embeddingModelProviders || {});
const currentProvider = selectedChatModelProvider;
const newProviders = Object.keys(data.chatModelProviders || {});
const currentChatProvider = selectedChatModelProvider;
const newChatProviders = Object.keys(data.chatModelProviders || {});
if (!currentProvider && newProviders.length > 0) {
const firstProvider = newProviders[0];
if (!currentChatProvider && newChatProviders.length > 0) {
const firstProvider = newChatProviders[0];
const firstModel = data.chatModelProviders[firstProvider]?.[0]?.name;
if (firstModel) {
@@ -237,11 +234,11 @@ const Page = () => {
localStorage.setItem('chatModel', firstModel);
}
} else if (
currentProvider &&
currentChatProvider &&
(!data.chatModelProviders ||
!data.chatModelProviders[currentProvider] ||
!Array.isArray(data.chatModelProviders[currentProvider]) ||
data.chatModelProviders[currentProvider].length === 0)
!data.chatModelProviders[currentChatProvider] ||
!Array.isArray(data.chatModelProviders[currentChatProvider]) ||
data.chatModelProviders[currentChatProvider].length === 0)
) {
const firstValidProvider = Object.entries(
data.chatModelProviders || {},
@@ -267,6 +264,55 @@ const Page = () => {
}
}
const currentEmbeddingProvider = selectedEmbeddingModelProvider;
const newEmbeddingProviders = Object.keys(
data.embeddingModelProviders || {},
);
if (!currentEmbeddingProvider && newEmbeddingProviders.length > 0) {
const firstProvider = newEmbeddingProviders[0];
const firstModel =
data.embeddingModelProviders[firstProvider]?.[0]?.name;
if (firstModel) {
setSelectedEmbeddingModelProvider(firstProvider);
setSelectedEmbeddingModel(firstModel);
localStorage.setItem('embeddingModelProvider', firstProvider);
localStorage.setItem('embeddingModel', firstModel);
}
} else if (
currentEmbeddingProvider &&
(!data.embeddingModelProviders ||
!data.embeddingModelProviders[currentEmbeddingProvider] ||
!Array.isArray(
data.embeddingModelProviders[currentEmbeddingProvider],
) ||
data.embeddingModelProviders[currentEmbeddingProvider].length === 0)
) {
const firstValidProvider = Object.entries(
data.embeddingModelProviders || {},
).find(
([_, models]) => Array.isArray(models) && models.length > 0,
)?.[0];
if (firstValidProvider) {
setSelectedEmbeddingModelProvider(firstValidProvider);
setSelectedEmbeddingModel(
data.embeddingModelProviders[firstValidProvider][0].name,
);
localStorage.setItem('embeddingModelProvider', firstValidProvider);
localStorage.setItem(
'embeddingModel',
data.embeddingModelProviders[firstValidProvider][0].name,
);
} else {
setSelectedEmbeddingModelProvider(null);
setSelectedEmbeddingModel(null);
localStorage.removeItem('embeddingModelProvider');
localStorage.removeItem('embeddingModel');
}
}
setConfig(data);
}
@@ -278,6 +324,10 @@ const Page = () => {
localStorage.setItem('chatModelProvider', value);
} else if (key === 'chatModel') {
localStorage.setItem('chatModel', value);
} else if (key === 'embeddingModelProvider') {
localStorage.setItem('embeddingModelProvider', value);
} else if (key === 'embeddingModel') {
localStorage.setItem('embeddingModel', value);
}
} catch (err) {
console.error('Failed to save:', err);
@@ -436,7 +486,6 @@ const Page = () => {
const value = e.target.value;
setSelectedChatModelProvider(value);
saveConfig('chatModelProvider', value);
// Auto-select first model of new provider
const firstModel =
config.chatModelProviders[value]?.[0]?.name;
if (firstModel) {
@@ -511,12 +560,16 @@ const Page = () => {
<Input
type="text"
placeholder="Model name"
defaultValue={config.customOpenaiModelName}
onChange={(e) =>
setConfig({
...config,
value={config.customOpenaiModelName}
isSaving={savingStates['customOpenaiModelName']}
onChange={(e: React.ChangeEvent<HTMLInputElement>) => {
setConfig((prev) => ({
...prev!,
customOpenaiModelName: e.target.value,
})
}));
}}
onSave={(value) =>
saveConfig('customOpenaiModelName', value)
}
/>
</div>
@@ -527,12 +580,16 @@ const Page = () => {
<Input
type="text"
placeholder="Custom OpenAI API Key"
defaultValue={config.customOpenaiApiKey}
onChange={(e) =>
setConfig({
...config,
value={config.customOpenaiApiKey}
isSaving={savingStates['customOpenaiApiKey']}
onChange={(e: React.ChangeEvent<HTMLInputElement>) => {
setConfig((prev) => ({
...prev!,
customOpenaiApiKey: e.target.value,
})
}));
}}
onSave={(value) =>
saveConfig('customOpenaiApiKey', value)
}
/>
</div>
@@ -543,17 +600,96 @@ const Page = () => {
<Input
type="text"
placeholder="Custom OpenAI Base URL"
defaultValue={config.customOpenaiApiUrl}
onChange={(e) =>
setConfig({
...config,
value={config.customOpenaiApiUrl}
isSaving={savingStates['customOpenaiApiUrl']}
onChange={(e: React.ChangeEvent<HTMLInputElement>) => {
setConfig((prev) => ({
...prev!,
customOpenaiApiUrl: e.target.value,
})
}));
}}
onSave={(value) =>
saveConfig('customOpenaiApiUrl', value)
}
/>
</div>
</div>
)}
{config.embeddingModelProviders && (
<div className="flex flex-col space-y-4 mt-4 pt-4 border-t border-light-200 dark:border-dark-200">
<div className="flex flex-col space-y-1">
<p className="text-black/70 dark:text-white/70 text-sm">
Embedding Model Provider
</p>
<Select
value={selectedEmbeddingModelProvider ?? undefined}
onChange={(e) => {
const value = e.target.value;
setSelectedEmbeddingModelProvider(value);
saveConfig('embeddingModelProvider', value);
const firstModel =
config.embeddingModelProviders[value]?.[0]?.name;
if (firstModel) {
setSelectedEmbeddingModel(firstModel);
saveConfig('embeddingModel', firstModel);
}
}}
options={Object.keys(config.embeddingModelProviders).map(
(provider) => ({
value: provider,
label:
provider.charAt(0).toUpperCase() +
provider.slice(1),
}),
)}
/>
</div>
{selectedEmbeddingModelProvider && (
<div className="flex flex-col space-y-1">
<p className="text-black/70 dark:text-white/70 text-sm">
Embedding Model
</p>
<Select
value={selectedEmbeddingModel ?? undefined}
onChange={(e) => {
const value = e.target.value;
setSelectedEmbeddingModel(value);
saveConfig('embeddingModel', value);
}}
options={(() => {
const embeddingModelProvider =
config.embeddingModelProviders[
selectedEmbeddingModelProvider
];
return embeddingModelProvider
? embeddingModelProvider.length > 0
? embeddingModelProvider.map((model) => ({
value: model.name,
label: model.displayName,
}))
: [
{
value: '',
label: 'No models available',
disabled: true,
},
]
: [
{
value: '',
label:
'Invalid provider, please check backend logs',
disabled: true,
},
];
})()}
/>
</div>
)}
</div>
)}
</SettingsSection>
<SettingsSection title="API Keys">

@@ -48,11 +48,17 @@ const Chat = ({
});
useEffect(() => {
messageEnd.current?.scrollIntoView({ behavior: 'smooth' });
const scroll = () => {
messageEnd.current?.scrollIntoView({ behavior: 'smooth' });
};
if (messages.length === 1) {
document.title = `${messages[0].content.substring(0, 30)} - Perplexica`;
}
if (messages[messages.length - 1]?.role == 'user') {
scroll();
}
}, [messages]);
return (

@@ -68,7 +68,13 @@ const MessageBox = ({
return (
<div>
{message.role === 'user' && (
<div className={cn('w-full', messageIndex === 0 ? 'pt-16' : 'pt-8')}>
<div
className={cn(
'w-full',
messageIndex === 0 ? 'pt-16' : 'pt-8',
'break-words',
)}
>
<h2 className="text-black dark:text-white font-medium text-3xl lg:w-9/12">
{message.content}
</h2>

@@ -110,7 +110,7 @@ const Attach = ({
<button
type="button"
onClick={() => fileInputRef.current.click()}
className="flex flex-row items-center space-x-1 text-white/70 hover:text-white transition duration-200"
className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
>
<input
type="file"
@@ -128,7 +128,7 @@ const Attach = ({
setFiles([]);
setFileIds([]);
}}
className="flex flex-row items-center space-x-1 text-white/70 hover:text-white transition duration-200"
className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
>
<Trash size={14} />
<p className="text-xs">Clear</p>
@@ -145,7 +145,7 @@ const Attach = ({
<div className="bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
<File size={16} className="text-white/70" />
</div>
<p className="text-white/70 text-sm">
<p className="text-black/70 dark:text-white/70 text-sm">
{file.fileName.length > 25
? file.fileName.replace(/\.\w+$/, '').substring(0, 25) +
'...' +

@@ -82,7 +82,7 @@ const AttachSmall = ({
<button
type="button"
onClick={() => fileInputRef.current.click()}
className="flex flex-row items-center space-x-1 text-white/70 hover:text-white transition duration-200"
className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
>
<input
type="file"
@@ -100,7 +100,7 @@ const AttachSmall = ({
setFiles([]);
setFileIds([]);
}}
className="flex flex-row items-center space-x-1 text-white/70 hover:text-white transition duration-200"
className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
>
<Trash size={14} />
<p className="text-xs">Clear</p>
@@ -117,7 +117,7 @@ const AttachSmall = ({
<div className="bg-dark-100 flex items-center justify-center w-10 h-10 rounded-md">
<File size={16} className="text-white/70" />
</div>
<p className="text-white/70 text-sm">
<p className="text-black/70 dark:text-white/70 text-sm">
{file.fileName.length > 25
? file.fileName.replace(/\.\w+$/, '').substring(0, 25) +
'...' +

ui/lib/config.ts (new file, 117 lines)

@@ -0,0 +1,117 @@
import fs from 'fs';
import path from 'path';
import toml from '@iarna/toml';
const configFileName = 'config.toml';
interface Config {
GENERAL: {
PORT: number;
SIMILARITY_MEASURE: string;
KEEP_ALIVE: string;
};
MODELS: {
OPENAI: {
API_KEY: string;
};
GROQ: {
API_KEY: string;
};
ANTHROPIC: {
API_KEY: string;
};
GEMINI: {
API_KEY: string;
};
OLLAMA: {
API_URL: string;
};
CUSTOM_OPENAI: {
API_URL: string;
API_KEY: string;
MODEL_NAME: string;
};
};
API_ENDPOINTS: {
SEARXNG: string;
};
}
type RecursivePartial<T> = {
[P in keyof T]?: RecursivePartial<T[P]>;
};
const loadConfig = () =>
toml.parse(
fs.readFileSync(path.join(process.cwd(), `${configFileName}`), 'utf-8'),
) as any as Config;
export const getPort = () => loadConfig().GENERAL.PORT;
export const getSimilarityMeasure = () =>
loadConfig().GENERAL.SIMILARITY_MEASURE;
export const getKeepAlive = () => loadConfig().GENERAL.KEEP_ALIVE;
export const getOpenaiApiKey = () => loadConfig().MODELS.OPENAI.API_KEY;
export const getGroqApiKey = () => loadConfig().MODELS.GROQ.API_KEY;
export const getAnthropicApiKey = () => loadConfig().MODELS.ANTHROPIC.API_KEY;
export const getGeminiApiKey = () => loadConfig().MODELS.GEMINI.API_KEY;
export const getSearxngApiEndpoint = () =>
process.env.SEARXNG_API_URL || loadConfig().API_ENDPOINTS.SEARXNG;
export const getOllamaApiEndpoint = () => loadConfig().MODELS.OLLAMA.API_URL;
export const getCustomOpenaiApiKey = () =>
loadConfig().MODELS.CUSTOM_OPENAI.API_KEY;
export const getCustomOpenaiApiUrl = () =>
loadConfig().MODELS.CUSTOM_OPENAI.API_URL;
export const getCustomOpenaiModelName = () =>
loadConfig().MODELS.CUSTOM_OPENAI.MODEL_NAME;
const mergeConfigs = (current: any, update: any): any => {
if (update === null || update === undefined) {
return current;
}
if (typeof current !== 'object' || current === null) {
return update;
}
const result = { ...current };
for (const key in update) {
if (Object.prototype.hasOwnProperty.call(update, key)) {
const updateValue = update[key];
if (
typeof updateValue === 'object' &&
updateValue !== null &&
typeof result[key] === 'object' &&
result[key] !== null
) {
result[key] = mergeConfigs(result[key], updateValue);
} else if (updateValue !== undefined) {
result[key] = updateValue;
}
}
}
return result;
};
export const updateConfig = (config: RecursivePartial<Config>) => {
const currentConfig = loadConfig();
const mergedConfig = mergeConfigs(currentConfig, config);
console.log(mergedConfig);
fs.writeFileSync(
path.join(path.join(process.cwd(), `${configFileName}`)),
toml.stringify(mergedConfig),
);
};
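
A short usage sketch for the helpers above (the model name is a placeholder): because `updateConfig` accepts a `RecursivePartial<Config>` and `mergeConfigs` recurses into matching objects while skipping `undefined` leaves, a caller can update one nested field without disturbing the rest of `config.toml`.

```ts
import { updateConfig, getCustomOpenaiModelName } from '@/lib/config';

// Only MODELS.CUSTOM_OPENAI.MODEL_NAME changes; API_URL, API_KEY, and the
// other sections are preserved by the recursive merge.
updateConfig({
  MODELS: {
    CUSTOM_OPENAI: {
      MODEL_NAME: 'my-model', // placeholder
    },
  },
});

console.log(getCustomOpenaiModelName()); // 'my-model'
```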

@@ -0,0 +1,48 @@
import { BaseOutputParser } from '@langchain/core/output_parsers';
interface LineOutputParserArgs {
key?: string;
}
class LineOutputParser extends BaseOutputParser<string> {
private key = 'questions';
constructor(args?: LineOutputParserArgs) {
super();
this.key = args?.key ?? this.key;
}
static lc_name() {
return 'LineOutputParser';
}
lc_namespace = ['langchain', 'output_parsers', 'line_output_parser'];
async parse(text: string): Promise<string> {
text = text.trim() || '';
const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
const startKeyIndex = text.indexOf(`<${this.key}>`);
const endKeyIndex = text.indexOf(`</${this.key}>`);
if (startKeyIndex === -1 || endKeyIndex === -1) {
return '';
}
const questionsStartIndex =
startKeyIndex === -1 ? 0 : startKeyIndex + `<${this.key}>`.length;
const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex;
const line = text
.slice(questionsStartIndex, questionsEndIndex)
.trim()
.replace(regex, '');
return line;
}
getFormatInstructions(): string {
throw new Error('Not implemented.');
}
}
export default LineOutputParser;
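
A minimal usage sketch for the class above (the input text is assumed): the parser extracts whatever sits between the configured opening and closing tags and strips any leading list marker.

```ts
// Uses the LineOutputParser class defined above.
const parser = new LineOutputParser({ key: 'question' });

const question = await parser.parse(`
Some model preamble...
<question>
- What is Docker?
</question>
`);

console.log(question); // 'What is Docker?'
```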

@@ -0,0 +1,50 @@
import { BaseOutputParser } from '@langchain/core/output_parsers';
interface LineListOutputParserArgs {
key?: string;
}
class LineListOutputParser extends BaseOutputParser<string[]> {
private key = 'questions';
constructor(args?: LineListOutputParserArgs) {
super();
this.key = args?.key ?? this.key;
}
static lc_name() {
return 'LineListOutputParser';
}
lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
async parse(text: string): Promise<string[]> {
text = text.trim() || '';
const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
const startKeyIndex = text.indexOf(`<${this.key}>`);
const endKeyIndex = text.indexOf(`</${this.key}>`);
if (startKeyIndex === -1 || endKeyIndex === -1) {
return [];
}
const questionsStartIndex =
startKeyIndex === -1 ? 0 : startKeyIndex + `<${this.key}>`.length;
const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex;
const lines = text
.slice(questionsStartIndex, questionsEndIndex)
.trim()
.split('\n')
.filter((line) => line.trim() !== '')
.map((line) => line.replace(regex, ''));
return lines;
}
getFormatInstructions(): string {
throw new Error('Not implemented.');
}
}
export default LineListOutputParser;
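
And a matching sketch for the list variant (assumed input): each non-empty line between the tags becomes one array entry, with numbering or bullets stripped.

```ts
// Uses the LineListOutputParser class defined above.
const listParser = new LineListOutputParser({ key: 'questions' });

const questions = await listParser.parse(`
<questions>
1. How does Perplexica work?
2. What is SearXNG?
</questions>
`);

console.log(questions); // ['How does Perplexica work?', 'What is SearXNG?']
```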

@@ -0,0 +1,65 @@
export const academicSearchRetrieverPrompt = `
You will be given a conversation below and a follow up question. You need to rephrase the follow-up question if needed so it is a standalone question that can be used by the LLM to search the web for information.
If it is a writing task or a simple greeting such as hi or hello rather than a question, you need to return \`not_needed\` as the response.
Example:
1. Follow up question: How does stable diffusion work?
Rephrased: Stable diffusion working
2. Follow up question: What is linear algebra?
Rephrased: Linear algebra
3. Follow up question: What is the third law of thermodynamics?
Rephrased: Third law of thermodynamics
Conversation:
{chat_history}
Follow up question: {query}
Rephrased question:
`;
export const academicSearchResponsePrompt = `
You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
- **Well-structured**: Include clear headings and subheadings, and use a professional tone to present information concisely and logically.
- **Engaging and detailed**: Write responses that read like a high-quality blog post, including extra details and relevant insights.
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included.
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable.
### Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate.
- **Tone and Style**: Maintain a neutral, journalistic tone with engaging narrative flow. Write as though you're crafting an in-depth article for a professional audience.
- **Markdown Usage**: Format your response with Markdown for clarity. Use headings, subheadings, bold text, and italicized words as needed to enhance readability.
- **Length and Depth**: Provide comprehensive coverage of the topic. Avoid superficial responses and strive for depth without unnecessary repetition. Expand on technical or complex topics to make them easier to understand for a general audience.
- **No main heading/title**: Start your response directly with the introduction unless asked to provide a specific title.
- **Conclusion or Summary**: Include a concluding paragraph that synthesizes the provided information or suggests potential next steps, where appropriate.
### Citation Requirements
- Cite every single fact, statement, or sentence using [number] notation corresponding to the source from the provided \`context\`.
- Integrate citations naturally at the end of sentences or clauses as appropriate. For example, "The Eiffel Tower is one of the most visited landmarks in the world[1]."
- Ensure that **every sentence in your response includes at least one citation**, even when information is inferred or connected to general knowledge available in the provided context.
- Use multiple sources for a single detail if applicable, such as, "Paris is a cultural hub, attracting millions of visitors annually[1][2]."
- Always prioritize credibility and accuracy by linking all statements back to their respective context sources.
- Avoid citing unsupported assumptions or personal interpretations; if no source supports a statement, clearly indicate the limitation.
### Special Instructions
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
- You are set on focus mode 'Academic'; this means you will be searching for academic papers and articles on the web.
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
- Follow with detailed sections under clear headings, covering all aspects of the query if possible.
- Provide explanations or historical context as needed to enhance understanding.
- End with a conclusion or overall perspective if relevant.
<context>
{context}
</context>
Current date & time in ISO format (UTC timezone) is: {date}.
`;

ui/lib/prompts/index.ts (new file, 32 lines)

@@ -0,0 +1,32 @@
import {
academicSearchResponsePrompt,
academicSearchRetrieverPrompt,
} from './academicSearch';
import {
redditSearchResponsePrompt,
redditSearchRetrieverPrompt,
} from './redditSearch';
import { webSearchResponsePrompt, webSearchRetrieverPrompt } from './webSearch';
import {
wolframAlphaSearchResponsePrompt,
wolframAlphaSearchRetrieverPrompt,
} from './wolframAlpha';
import { writingAssistantPrompt } from './writingAssistant';
import {
youtubeSearchResponsePrompt,
youtubeSearchRetrieverPrompt,
} from './youtubeSearch';
export default {
webSearchResponsePrompt,
webSearchRetrieverPrompt,
academicSearchResponsePrompt,
academicSearchRetrieverPrompt,
redditSearchResponsePrompt,
redditSearchRetrieverPrompt,
wolframAlphaSearchResponsePrompt,
wolframAlphaSearchRetrieverPrompt,
writingAssistantPrompt,
youtubeSearchResponsePrompt,
youtubeSearchRetrieverPrompt,
};

@@ -0,0 +1,65 @@
export const redditSearchRetrieverPrompt = `
You will be given a conversation below and a follow up question. You need to rephrase the follow-up question if needed so it is a standalone question that can be used by the LLM to search the web for information.
If it is a writing task or a simple greeting such as hi or hello rather than a question, you need to return \`not_needed\` as the response.
Example:
1. Follow up question: Which company is most likely to create an AGI
Rephrased: Which company is most likely to create an AGI
2. Follow up question: Is Earth flat?
Rephrased: Is Earth flat?
3. Follow up question: Is there life on Mars?
Rephrased: Is there life on Mars?
Conversation:
{chat_history}
Follow up question: {query}
Rephrased question:
`;
export const redditSearchResponsePrompt = `
You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
- **Well-structured**: Include clear headings and subheadings, and use a professional tone to present information concisely and logically.
- **Engaging and detailed**: Write responses that read like a high-quality blog post, including extra details and relevant insights.
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included.
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable.
### Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate.
- **Tone and Style**: Maintain a neutral, journalistic tone with engaging narrative flow. Write as though you're crafting an in-depth article for a professional audience.
- **Markdown Usage**: Format your response with Markdown for clarity. Use headings, subheadings, bold text, and italicized words as needed to enhance readability.
- **Length and Depth**: Provide comprehensive coverage of the topic. Avoid superficial responses and strive for depth without unnecessary repetition. Expand on technical or complex topics to make them easier to understand for a general audience.
- **No main heading/title**: Start your response directly with the introduction unless asked to provide a specific title.
- **Conclusion or Summary**: Include a concluding paragraph that synthesizes the provided information or suggests potential next steps, where appropriate.
### Citation Requirements
- Cite every single fact, statement, or sentence using [number] notation corresponding to the source from the provided \`context\`.
- Integrate citations naturally at the end of sentences or clauses as appropriate. For example, "The Eiffel Tower is one of the most visited landmarks in the world[1]."
- Ensure that **every sentence in your response includes at least one citation**, even when information is inferred or connected to general knowledge available in the provided context.
- Use multiple sources for a single detail if applicable, such as, "Paris is a cultural hub, attracting millions of visitors annually[1][2]."
- Always prioritize credibility and accuracy by linking all statements back to their respective context sources.
- Avoid citing unsupported assumptions or personal interpretations; if no source supports a statement, clearly indicate the limitation.
### Special Instructions
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
- You are set on focus mode 'Reddit'; this means you will be searching for information, opinions, and discussions on the web using Reddit.
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
- Follow with detailed sections under clear headings, covering all aspects of the query if possible.
- Provide explanations or historical context as needed to enhance understanding.
- End with a conclusion or overall perspective if relevant.
<context>
{context}
</context>
Current date & time in ISO format (UTC timezone) is: {date}.
`;

ui/lib/prompts/webSearch.ts (new file, 106 lines)

@@ -0,0 +1,106 @@
export const webSearchRetrieverPrompt = `
You are an AI question rephraser. You will be given a conversation and a follow-up question; you will have to rephrase the follow-up question so it is a standalone question that can be used by another LLM to search the web for information to answer it.
If it is a simple writing task or a greeting (unless the greeting contains a question after it) like Hi, Hello, How are you, etc., then you need to return \`not_needed\` as the response (this is because the LLM won't need to search the web for information on this topic).
If the user asks some question about a URL or wants you to summarize a PDF or a webpage (via URL), you need to return the links inside the \`links\` XML block and the question inside the \`question\` XML block. If the user wants you to summarize the webpage or the PDF, you need to return \`summarize\` inside the \`question\` XML block in place of a question and the link to summarize in the \`links\` XML block.
You must always return the rephrased question inside the \`question\` XML block; if there are no links in the follow-up question, then don't insert a \`links\` XML block in your response.
There are several examples attached for your reference inside the below \`examples\` XML block.
<examples>
1. Follow up question: What is the capital of France
Rephrased question:\`
<question>
Capital of france
</question>
\`
2. Hi, how are you?
Rephrased question\`
<question>
not_needed
</question>
\`
3. Follow up question: What is Docker?
Rephrased question: \`
<question>
What is Docker
</question>
\`
4. Follow up question: Can you tell me what is X from https://example.com
Rephrased question: \`
<question>
Can you tell me what is X?
</question>
<links>
https://example.com
</links>
\`
5. Follow up question: Summarize the content from https://example.com
Rephrased question: \`
<question>
summarize
</question>
<links>
https://example.com
</links>
\`
</examples>
Anything below is part of the actual conversation. Use the conversation and the follow-up question to rephrase the follow-up question as a standalone question, following the guidelines shared above.
<conversation>
{chat_history}
</conversation>
Follow up question: {query}
Rephrased question:
`;
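
The \`question\`/\`links\` XML blocks this prompt demands are exactly what the line parsers earlier in this diff are built to consume. A hypothetical piece of glue code (the response text and the wiring are assumptions) might look like:

```ts
// Assumes LineOutputParser and LineListOutputParser from the files above
// are imported into scope.
const questionParser = new LineOutputParser({ key: 'question' });
const linksParser = new LineListOutputParser({ key: 'links' });

// A response following the contract above, e.g. for a summarization request.
const modelResponse = `
<question>
summarize
</question>
<links>
https://example.com
</links>
`;

const question = await questionParser.parse(modelResponse); // 'summarize'
const links = await linksParser.parse(modelResponse); // ['https://example.com']
```
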
export const webSearchResponsePrompt = `
You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
- **Well-structured**: Include clear headings and subheadings, and use a professional tone to present information concisely and logically.
- **Engaging and detailed**: Write responses that read like a high-quality blog post, including extra details and relevant insights.
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included.
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable.
### Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate.
- **Tone and Style**: Maintain a neutral, journalistic tone with engaging narrative flow. Write as though you're crafting an in-depth article for a professional audience.
- **Markdown Usage**: Format your response with Markdown for clarity. Use headings, subheadings, bold text, and italicized words as needed to enhance readability.
- **Length and Depth**: Provide comprehensive coverage of the topic. Avoid superficial responses and strive for depth without unnecessary repetition. Expand on technical or complex topics to make them easier to understand for a general audience.
- **No main heading/title**: Start your response directly with the introduction unless asked to provide a specific title.
- **Conclusion or Summary**: Include a concluding paragraph that synthesizes the provided information or suggests potential next steps, where appropriate.
### Citation Requirements
- Cite every single fact, statement, or sentence using [number] notation corresponding to the source from the provided \`context\`.
- Integrate citations naturally at the end of sentences or clauses as appropriate. For example, "The Eiffel Tower is one of the most visited landmarks in the world[1]."
- Ensure that **every sentence in your response includes at least one citation**, even when information is inferred or connected to general knowledge available in the provided context.
- Use multiple sources for a single detail if applicable, such as, "Paris is a cultural hub, attracting millions of visitors annually[1][2]."
- Always prioritize credibility and accuracy by linking all statements back to their respective context sources.
- Avoid citing unsupported assumptions or personal interpretations; if no source supports a statement, clearly indicate the limitation.
### Special Instructions
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
- Follow with detailed sections under clear headings, covering all aspects of the query if possible.
- Provide explanations or historical context as needed to enhance understanding.
- End with a conclusion or overall perspective if relevant.
<context>
{context}
</context>
Current date & time in ISO format (UTC timezone) is: {date}.
`;

@@ -0,0 +1,65 @@
export const wolframAlphaSearchRetrieverPrompt = `
You will be given a conversation below and a follow up question. You need to rephrase the follow-up question if needed so it is a standalone question that can be used by the LLM to search the web for information.
If it is a writing task or a simple greeting such as hi or hello rather than a question, you need to return \`not_needed\` as the response.
Example:
1. Follow up question: What is the atomic radius of S?
Rephrased: Atomic radius of S
2. Follow up question: What is linear algebra?
Rephrased: Linear algebra
3. Follow up question: What is the third law of thermodynamics?
Rephrased: Third law of thermodynamics
Conversation:
{chat_history}
Follow up question: {query}
Rephrased question:
`;
export const wolframAlphaSearchResponsePrompt = `
You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
- **Well-structured**: Include clear headings and subheadings, and use a professional tone to present information concisely and logically.
- **Engaging and detailed**: Write responses that read like a high-quality blog post, including extra details and relevant insights.
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included.
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable.
### Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate.
- **Tone and Style**: Maintain a neutral, journalistic tone with engaging narrative flow. Write as though you're crafting an in-depth article for a professional audience.
- **Markdown Usage**: Format your response with Markdown for clarity. Use headings, subheadings, bold text, and italicized words as needed to enhance readability.
- **Length and Depth**: Provide comprehensive coverage of the topic. Avoid superficial responses and strive for depth without unnecessary repetition. Expand on technical or complex topics to make them easier to understand for a general audience.
- **No main heading/title**: Start your response directly with the introduction unless asked to provide a specific title.
- **Conclusion or Summary**: Include a concluding paragraph that synthesizes the provided information or suggests potential next steps, where appropriate.
### Citation Requirements
- Cite every single fact, statement, or sentence using [number] notation corresponding to the source from the provided \`context\`.
- Integrate citations naturally at the end of sentences or clauses as appropriate. For example, "The Eiffel Tower is one of the most visited landmarks in the world[1]."
- Ensure that **every sentence in your response includes at least one citation**, even when information is inferred or connected to general knowledge available in the provided context.
- Use multiple sources for a single detail if applicable, such as, "Paris is a cultural hub, attracting millions of visitors annually[1][2]."
- Always prioritize credibility and accuracy by linking all statements back to their respective context sources.
- Avoid citing unsupported assumptions or personal interpretations; if no source supports a statement, clearly indicate the limitation.
### Special Instructions
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
- You are set on focus mode 'Wolfram Alpha'; this means you will be searching for information on the web using Wolfram Alpha. It is a computational knowledge engine that can answer factual queries and perform computations.
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
- Follow with detailed sections under clear headings, covering all aspects of the query if possible.
- Provide explanations or historical context as needed to enhance understanding.
- End with a conclusion or overall perspective if relevant.
<context>
{context}
</context>
Current date & time in ISO format (UTC timezone) is: {date}.
`;

@@ -0,0 +1,13 @@
export const writingAssistantPrompt = `
You are Perplexica, an AI model who is an expert at searching the web and answering users' queries. You are currently set on focus mode 'Writing Assistant'; this means you will be helping the user write a response to a given query.
Since you are a writing assistant, you do not perform web searches. If you think you lack information to answer the query, you can ask the user for more information or suggest that they switch to a different focus mode.
You will be shared a context that can contain information from files the user has uploaded. You will have to generate answers based on that context.
You have to cite the answer using [number] notation. You must cite the sentences with their relevant context number. You must cite each and every part of the answer so the user can know where the information is coming from.
Place these citations at the end of that particular sentence. You can cite the same sentence multiple times if it is relevant to the user's query, like [number1][number2].
However, you do not need to cite it using the same number. You can use different numbers to cite the same sentence multiple times. The number refers to the number of the search result (passed in the context) used to generate that part of the answer.
<context>
{context}
</context>
`;

@@ -0,0 +1,65 @@
export const youtubeSearchRetrieverPrompt = `
You will be given a conversation below and a follow up question. You need to rephrase the follow-up question if needed so it is a standalone question that can be used by the LLM to search the web for information.
If it is a writing task or a simple greeting such as hi or hello rather than a question, you need to return \`not_needed\` as the response.
Example:
1. Follow up question: How does an A.C work?
Rephrased: A.C working
2. Follow up question: Linear algebra explanation video
Rephrased: What is linear algebra?
3. Follow up question: What is theory of relativity?
Rephrased: What is theory of relativity?
Conversation:
{chat_history}
Follow up question: {query}
Rephrased question:
`;
export const youtubeSearchResponsePrompt = `
You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
- **Well-structured**: Include clear headings and subheadings, and use a professional tone to present information concisely and logically.
- **Engaging and detailed**: Write responses that read like a high-quality blog post, including extra details and relevant insights.
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included.
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable.
### Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate.
- **Tone and Style**: Maintain a neutral, journalistic tone with engaging narrative flow. Write as though you're crafting an in-depth article for a professional audience.
- **Markdown Usage**: Format your response with Markdown for clarity. Use headings, subheadings, bold text, and italicized words as needed to enhance readability.
- **Length and Depth**: Provide comprehensive coverage of the topic. Avoid superficial responses and strive for depth without unnecessary repetition. Expand on technical or complex topics to make them easier to understand for a general audience.
- **No main heading/title**: Start your response directly with the introduction unless asked to provide a specific title.
- **Conclusion or Summary**: Include a concluding paragraph that synthesizes the provided information or suggests potential next steps, where appropriate.
### Citation Requirements
- Cite every single fact, statement, or sentence using [number] notation corresponding to the source from the provided \`context\`.
- Integrate citations naturally at the end of sentences or clauses as appropriate. For example, "The Eiffel Tower is one of the most visited landmarks in the world[1]."
- Ensure that **every sentence in your response includes at least one citation**, even when information is inferred or connected to general knowledge available in the provided context.
- Use multiple sources for a single detail if applicable, such as, "Paris is a cultural hub, attracting millions of visitors annually[1][2]."
- Always prioritize credibility and accuracy by linking all statements back to their respective context sources.
- Avoid citing unsupported assumptions or personal interpretations; if no source supports a statement, clearly indicate the limitation.
### Special Instructions
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
- You are set on focus mode 'YouTube'; this means you will be searching for videos on the web using YouTube and providing information based on the video's transcript.
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
- Follow with detailed sections under clear headings, covering all aspects of the query if possible.
- Provide explanations or historical context as needed to enhance understanding.
- End with a conclusion or overall perspective if relevant.
<context>
{context}
</context>
Current date & time in ISO format (UTC timezone) is: {date}.
`;

@@ -0,0 +1,63 @@
import { ChatOpenAI } from '@langchain/openai';
import { ChatModel } from '.';
import { getAnthropicApiKey } from '../config';
const anthropicChatModels: Record<string, string>[] = [
{
displayName: 'Claude 3.7 Sonnet',
key: 'claude-3-7-sonnet-20250219',
},
{
displayName: 'Claude 3.5 Haiku',
key: 'claude-3-5-haiku-20241022',
},
{
displayName: 'Claude 3.5 Sonnet v2',
key: 'claude-3-5-sonnet-20241022',
},
{
displayName: 'Claude 3.5 Sonnet',
key: 'claude-3-5-sonnet-20240620',
},
{
displayName: 'Claude 3 Opus',
key: 'claude-3-opus-20240229',
},
{
displayName: 'Claude 3 Sonnet',
key: 'claude-3-sonnet-20240229',
},
{
displayName: 'Claude 3 Haiku',
key: 'claude-3-haiku-20240307',
},
];
export const loadAnthropicChatModels = async () => {
const anthropicApiKey = getAnthropicApiKey();
if (!anthropicApiKey) return {};
try {
const chatModels: Record<string, ChatModel> = {};
anthropicChatModels.forEach((model) => {
chatModels[model.key] = {
displayName: model.displayName,
model: new ChatOpenAI({
openAIApiKey: anthropicApiKey,
modelName: model.key,
temperature: 0.7,
configuration: {
baseURL: 'https://api.anthropic.com/v1/',
},
}),
};
});
return chatModels;
} catch (err) {
console.error(`Error loading Anthropic models: ${err}`);
return {};
}
};

@@ -0,0 +1,94 @@
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { getGeminiApiKey } from '../config';
import { ChatModel, EmbeddingModel } from '.';
const geminiChatModels: Record<string, string>[] = [
{
displayName: 'Gemini 2.0 Flash',
key: 'gemini-2.0-flash',
},
{
displayName: 'Gemini 2.0 Flash-Lite',
key: 'gemini-2.0-flash-lite',
},
{
displayName: 'Gemini 2.0 Pro Experimental',
key: 'gemini-2.0-pro-exp-02-05',
},
{
displayName: 'Gemini 1.5 Flash',
key: 'gemini-1.5-flash',
},
{
displayName: 'Gemini 1.5 Flash-8B',
key: 'gemini-1.5-flash-8b',
},
{
displayName: 'Gemini 1.5 Pro',
key: 'gemini-1.5-pro',
},
];
const geminiEmbeddingModels: Record<string, string>[] = [
{
displayName: 'Gemini Embedding',
key: 'gemini-embedding-exp',
},
];
export const loadGeminiChatModels = async () => {
const geminiApiKey = getGeminiApiKey();
if (!geminiApiKey) return {};
try {
const chatModels: Record<string, ChatModel> = {};
geminiChatModels.forEach((model) => {
chatModels[model.key] = {
displayName: model.displayName,
model: new ChatOpenAI({
openAIApiKey: geminiApiKey,
modelName: model.key,
temperature: 0.7,
configuration: {
baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
},
}),
};
});
return chatModels;
} catch (err) {
console.error(`Error loading Gemini models: ${err}`);
return {};
}
};
export const loadGeminiEmbeddingModels = async () => {
const geminiApiKey = getGeminiApiKey();
if (!geminiApiKey) return {};
try {
const embeddingModels: Record<string, EmbeddingModel> = {};
geminiEmbeddingModels.forEach((model) => {
embeddingModels[model.key] = {
displayName: model.displayName,
model: new OpenAIEmbeddings({
openAIApiKey: geminiApiKey,
modelName: model.key,
configuration: {
baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
},
}),
};
});
return embeddingModels;
} catch (err) {
console.error(`Error loading Gemini embedding models: ${err}`);
return {};
}
};
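The Gemini loaders use the same OpenAI-compatibility layer for embeddings as for chat. A small usage sketch for the embedding loader (assumes a Gemini API key is configured):

```ts
import { loadGeminiEmbeddingModels } from './gemini';

const embeddingModels = await loadGeminiEmbeddingModels();
const embedder = embeddingModels['gemini-embedding-exp'];

if (embedder) {
  // Returns one numeric vector; its dimensionality depends on the model.
  const vector = await embedder.model.embedQuery('Perplexica search engine');
  console.log(vector.length);
}
```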

107
ui/lib/providers/groq.ts Normal file
View File

@ -0,0 +1,107 @@
import { ChatOpenAI } from '@langchain/openai';
import { getGroqApiKey } from '../config';
import { ChatModel } from '.';
const groqChatModels: Record<string, string>[] = [
{
displayName: 'Gemma2 9B IT',
key: 'gemma2-9b-it',
},
{
displayName: 'Llama 3.3 70B Versatile',
key: 'llama-3.3-70b-versatile',
},
{
displayName: 'Llama 3.1 8B Instant',
key: 'llama-3.1-8b-instant',
},
{
displayName: 'Llama3 70B 8192',
key: 'llama3-70b-8192',
},
{
displayName: 'Llama3 8B 8192',
key: 'llama3-8b-8192',
},
{
displayName: 'Mixtral 8x7B 32768',
key: 'mixtral-8x7b-32768',
},
{
displayName: 'Qwen QWQ 32B (Preview)',
key: 'qwen-qwq-32b',
},
{
displayName: 'Mistral Saba 24B (Preview)',
key: 'mistral-saba-24b',
},
{
displayName: 'Qwen 2.5 Coder 32B (Preview)',
key: 'qwen-2.5-coder-32b',
},
{
displayName: 'Qwen 2.5 32B (Preview)',
key: 'qwen-2.5-32b',
},
{
displayName: 'DeepSeek R1 Distill Qwen 32B (Preview)',
key: 'deepseek-r1-distill-qwen-32b',
},
{
displayName: 'DeepSeek R1 Distill Llama 70B SpecDec (Preview)',
key: 'deepseek-r1-distill-llama-70b-specdec',
},
{
displayName: 'DeepSeek R1 Distill Llama 70B (Preview)',
key: 'deepseek-r1-distill-llama-70b',
},
{
displayName: 'Llama 3.3 70B SpecDec (Preview)',
key: 'llama-3.3-70b-specdec',
},
{
displayName: 'Llama 3.2 1B Preview (Preview)',
key: 'llama-3.2-1b-preview',
},
{
displayName: 'Llama 3.2 3B Preview (Preview)',
key: 'llama-3.2-3b-preview',
},
{
displayName: 'Llama 3.2 11B Vision Preview (Preview)',
key: 'llama-3.2-11b-vision-preview',
},
{
displayName: 'Llama 3.2 90B Vision Preview (Preview)',
key: 'llama-3.2-90b-vision-preview',
},
];
export const loadGroqChatModels = async () => {
const groqApiKey = getGroqApiKey();
if (!groqApiKey) return {};
try {
const chatModels: Record<string, ChatModel> = {};
groqChatModels.forEach((model) => {
chatModels[model.key] = {
displayName: model.displayName,
model: new ChatOpenAI({
openAIApiKey: groqApiKey,
modelName: model.key,
temperature: 0.7,
configuration: {
baseURL: 'https://api.groq.com/openai/v1',
},
}),
};
});
return chatModels;
} catch (err) {
console.error(`Error loading Groq models: ${err}`);
return {};
}
};

91
ui/lib/providers/index.ts Normal file
View File

@ -0,0 +1,91 @@
import { Embeddings } from '@langchain/core/embeddings';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { loadOpenAIChatModels, loadOpenAIEmbeddingModels } from './openai';
import {
getCustomOpenaiApiKey,
getCustomOpenaiApiUrl,
getCustomOpenaiModelName,
} from '../config';
import { ChatOpenAI } from '@langchain/openai';
import { loadOllamaChatModels, loadOllamaEmbeddingModels } from './ollama';
import { loadGroqChatModels } from './groq';
import { loadAnthropicChatModels } from './anthropic';
import { loadGeminiChatModels, loadGeminiEmbeddingModels } from './gemini';
export interface ChatModel {
displayName: string;
model: BaseChatModel;
}
export interface EmbeddingModel {
displayName: string;
model: Embeddings;
}
const chatModelProviders: Record<
string,
() => Promise<Record<string, ChatModel>>
> = {
openai: loadOpenAIChatModels,
ollama: loadOllamaChatModels,
groq: loadGroqChatModels,
anthropic: loadAnthropicChatModels,
gemini: loadGeminiChatModels,
};
const embeddingModelProviders: Record<
string,
() => Promise<Record<string, EmbeddingModel>>
> = {
openai: loadOpenAIEmbeddingModels,
ollama: loadOllamaEmbeddingModels,
gemini: loadGeminiEmbeddingModels,
};
export const getAvailableChatModelProviders = async () => {
const models: Record<string, Record<string, ChatModel>> = {};
for (const provider in chatModelProviders) {
const providerModels = await chatModelProviders[provider]();
if (Object.keys(providerModels).length > 0) {
models[provider] = providerModels;
}
}
const customOpenAiApiKey = getCustomOpenaiApiKey();
const customOpenAiApiUrl = getCustomOpenaiApiUrl();
const customOpenAiModelName = getCustomOpenaiModelName();
models['custom_openai'] = {
...(customOpenAiApiKey && customOpenAiApiUrl && customOpenAiModelName
? {
[customOpenAiModelName]: {
displayName: customOpenAiModelName,
model: new ChatOpenAI({
openAIApiKey: customOpenAiApiKey,
modelName: customOpenAiModelName,
temperature: 0.7,
configuration: {
baseURL: customOpenAiApiUrl,
},
}),
},
}
: {}),
};
return models;
};
export const getAvailableEmbeddingModelProviders = async () => {
const models: Record<string, Record<string, EmbeddingModel>> = {};
for (const provider in embeddingModelProviders) {
const providerModels = await embeddingModelProviders[provider]();
if (Object.keys(providerModels).length > 0) {
models[provider] = providerModels;
}
}
return models;
};
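A minimal sketch of how this registry might be consumed downstream, e.g. by a settings route that lists providers and models; the `@/lib/providers` import path assumes the `@/*` alias from the tsconfig below:

```ts
import { getAvailableChatModelProviders } from '@/lib/providers';

const providers = await getAvailableChatModelProviders();

// Shape: { openai: { 'gpt-4o': { displayName, model } }, ollama: { ... }, ... }
for (const [provider, models] of Object.entries(providers)) {
  for (const [key, { displayName }] of Object.entries(models)) {
    console.log(`${provider}/${key} -> ${displayName}`);
  }
}
```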

73
ui/lib/providers/ollama.ts Normal file
View File

@ -0,0 +1,73 @@
import axios from 'axios';
import { getKeepAlive, getOllamaApiEndpoint } from '../config';
import { ChatModel, EmbeddingModel } from '.';
import { ChatOllama } from '@langchain/community/chat_models/ollama';
import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
export const loadOllamaChatModels = async () => {
const ollamaApiEndpoint = getOllamaApiEndpoint();
if (!ollamaApiEndpoint) return {};
try {
const res = await axios.get(`${ollamaApiEndpoint}/api/tags`, {
headers: {
'Content-Type': 'application/json',
},
});
const { models } = res.data;
const chatModels: Record<string, ChatModel> = {};
models.forEach((model: any) => {
chatModels[model.model] = {
displayName: model.name,
model: new ChatOllama({
baseUrl: ollamaApiEndpoint,
model: model.model,
temperature: 0.7,
keepAlive: getKeepAlive(),
}),
};
});
return chatModels;
} catch (err) {
console.error(`Error loading Ollama models: ${err}`);
return {};
}
};
export const loadOllamaEmbeddingModels = async () => {
const ollamaApiEndpoint = getOllamaApiEndpoint();
if (!ollamaApiEndpoint) return {};
try {
const res = await axios.get(`${ollamaApiEndpoint}/api/tags`, {
headers: {
'Content-Type': 'application/json',
},
});
const { models } = res.data;
const embeddingModels: Record<string, EmbeddingModel> = {};
models.forEach((model: any) => {
embeddingModels[model.model] = {
displayName: model.name,
model: new OllamaEmbeddings({
baseUrl: ollamaApiEndpoint,
model: model.model,
}),
};
});
return embeddingModels;
} catch (err) {
console.error(`Error loading Ollama embeddings models: ${err}`);
return {};
}
};
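Unlike the hosted providers above, the Ollama loaders discover models dynamically from the local server's `/api/tags` endpoint, so whatever has been pulled locally shows up automatically. A quick sketch (assumes an Ollama server is reachable at the configured endpoint):

```ts
import { loadOllamaChatModels } from './ollama';

// Keys mirror whatever tags the local Ollama server reports, e.g. 'llama3.1:8b'.
const models = await loadOllamaChatModels();
console.log(Object.keys(models));
```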

88
ui/lib/providers/openai.ts Normal file
View File

@ -0,0 +1,88 @@
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { getOpenaiApiKey } from '../config';
import { ChatModel, EmbeddingModel } from '.';
const openaiChatModels: Record<string, string>[] = [
{
displayName: 'GPT-3.5 Turbo',
key: 'gpt-3.5-turbo',
},
{
displayName: 'GPT-4',
key: 'gpt-4',
},
{
displayName: 'GPT-4 turbo',
key: 'gpt-4-turbo',
},
{
displayName: 'GPT-4 omni',
key: 'gpt-4o',
},
{
displayName: 'GPT-4 omni mini',
key: 'gpt-4o-mini',
},
];
const openaiEmbeddingModels: Record<string, string>[] = [
{
displayName: 'Text Embedding 3 Small',
key: 'text-embedding-3-small',
},
{
displayName: 'Text Embedding 3 Large',
key: 'text-embedding-3-large',
},
];
export const loadOpenAIChatModels = async () => {
const openaiApiKey = getOpenaiApiKey();
if (!openaiApiKey) return {};
try {
const chatModels: Record<string, ChatModel> = {};
openaiChatModels.forEach((model) => {
chatModels[model.key] = {
displayName: model.displayName,
model: new ChatOpenAI({
openAIApiKey: openaiApiKey,
modelName: model.key,
temperature: 0.7,
}),
};
});
return chatModels;
} catch (err) {
console.error(`Error loading OpenAI models: ${err}`);
return {};
}
};
export const loadOpenAIEmbeddingModels = async () => {
const openaiApiKey = getOpenaiApiKey();
if (!openaiApiKey) return {};
try {
const embeddingModels: Record<string, EmbeddingModel> = {};
openaiEmbeddingModels.forEach((model) => {
embeddingModels[model.key] = {
displayName: model.displayName,
model: new OpenAIEmbeddings({
openAIApiKey: openaiApiKey,
modelName: model.key,
}),
};
});
return embeddingModels;
} catch (err) {
console.error(`Error loading OpenAI embeddings models: ${err}`);
return {};
}
};

495
ui/lib/search/metaSearchAgent.ts Normal file
View File

@ -0,0 +1,495 @@
import { ChatOpenAI } from '@langchain/openai';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { Embeddings } from '@langchain/core/embeddings';
import {
ChatPromptTemplate,
MessagesPlaceholder,
PromptTemplate,
} from '@langchain/core/prompts';
import {
RunnableLambda,
RunnableMap,
RunnableSequence,
} from '@langchain/core/runnables';
import { BaseMessage } from '@langchain/core/messages';
import { StringOutputParser } from '@langchain/core/output_parsers';
import LineListOutputParser from '../outputParsers/listLineOutputParser';
import LineOutputParser from '../outputParsers/lineOutputParser';
import { getDocumentsFromLinks } from '../utils/documents';
import { Document } from 'langchain/document';
import { searchSearxng } from '../searxng';
import path from 'node:path';
import fs from 'node:fs';
import computeSimilarity from '../utils/computeSimilarity';
import formatChatHistoryAsString from '../utils/formatHistory';
import eventEmitter from 'events';
import { StreamEvent } from '@langchain/core/tracers/log_stream';
export interface MetaSearchAgentType {
searchAndAnswer: (
message: string,
history: BaseMessage[],
llm: BaseChatModel,
embeddings: Embeddings,
optimizationMode: 'speed' | 'balanced' | 'quality',
fileIds: string[],
) => Promise<eventEmitter>;
}
interface Config {
searchWeb: boolean;
rerank: boolean;
summarizer: boolean;
rerankThreshold: number;
queryGeneratorPrompt: string;
responsePrompt: string;
activeEngines: string[];
}
type BasicChainInput = {
chat_history: BaseMessage[];
query: string;
};
class MetaSearchAgent implements MetaSearchAgentType {
private config: Config;
private strParser = new StringOutputParser();
constructor(config: Config) {
this.config = config;
}
private async createSearchRetrieverChain(llm: BaseChatModel) {
(llm as unknown as ChatOpenAI).temperature = 0;
return RunnableSequence.from([
PromptTemplate.fromTemplate(this.config.queryGeneratorPrompt),
llm,
this.strParser,
RunnableLambda.from(async (input: string) => {
const linksOutputParser = new LineListOutputParser({
key: 'links',
});
const questionOutputParser = new LineOutputParser({
key: 'question',
});
const links = await linksOutputParser.parse(input);
let question = this.config.summarizer
? await questionOutputParser.parse(input)
: input;
if (question === 'not_needed') {
return { query: '', docs: [] };
}
if (links.length > 0) {
if (question.length === 0) {
question = 'summarize';
}
let docs: Document[] = [];
const linkDocs = await getDocumentsFromLinks({ links });
const docGroups: Document[] = [];
linkDocs.forEach((doc) => {
const URLDocExists = docGroups.find(
(d) =>
d.metadata.url === doc.metadata.url &&
d.metadata.totalDocs < 10,
);
if (!URLDocExists) {
docGroups.push({
...doc,
metadata: {
...doc.metadata,
totalDocs: 1,
},
});
}
const docIndex = docGroups.findIndex(
(d) =>
d.metadata.url === doc.metadata.url &&
d.metadata.totalDocs < 10,
);
if (docIndex !== -1) {
docGroups[docIndex].pageContent =
docGroups[docIndex].pageContent + `\n\n` + doc.pageContent;
docGroups[docIndex].metadata.totalDocs += 1;
}
});
await Promise.all(
docGroups.map(async (doc) => {
const res = await llm.invoke(`
You are a web search summarizer, tasked with summarizing a piece of text retrieved from a web search. Your job is to summarize the
text into a detailed, 2-4 paragraph explanation that captures the main ideas and provides a comprehensive answer to the query.
If the query is \"summarize\", you should provide a detailed summary of the text. If the query is a specific question, you should answer it in the summary.
- **Journalistic tone**: The summary should sound professional and journalistic, not too casual or vague.
- **Thorough and detailed**: Ensure that every key point from the text is captured and that the summary directly answers the query.
- **Not too lengthy, but detailed**: The summary should be informative but not excessively long. Focus on providing detailed information in a concise format.
The text will be shared inside the \`text\` XML tag, and the query inside the \`query\` XML tag.
<example>
1. \`<text>
Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers.
It was first released in 2013 and is developed by Docker, Inc. Docker is designed to make it easier to create, deploy, and run applications
by using containers.
</text>
<query>
What is Docker and how does it work?
</query>
Response:
Docker is a revolutionary platform-as-a-service product developed by Docker, Inc., that uses container technology to make application
deployment more efficient. It allows developers to package their software with all necessary dependencies, making it easier to run in
any environment. Released in 2013, Docker has transformed the way applications are built, deployed, and managed.
\`
2. \`<text>
The theory of relativity, or simply relativity, encompasses two interrelated theories of Albert Einstein: special relativity and general
relativity. However, the word "relativity" is sometimes used in reference to Galilean invariance. The term "theory of relativity" was based
on the expression "relative theory" used by Max Planck in 1906. The theory of relativity usually encompasses two interrelated theories by
Albert Einstein: special relativity and general relativity. Special relativity applies to all physical phenomena in the absence of gravity.
General relativity explains the law of gravitation and its relation to other forces of nature. It applies to the cosmological and astrophysical
realm, including astronomy.
</text>
<query>
summarize
</query>
Response:
The theory of relativity, developed by Albert Einstein, encompasses two main theories: special relativity and general relativity. Special
relativity applies to all physical phenomena in the absence of gravity, while general relativity explains the law of gravitation and its
relation to other forces of nature. The theory of relativity is based on the concept of "relative theory," as introduced by Max Planck in
1906. It is a fundamental theory in physics that has revolutionized our understanding of the universe.
\`
</example>
Everything below is the actual data you will be working with. Good luck!
<query>
${question}
</query>
<text>
${doc.pageContent}
</text>
Make sure to answer the query in the summary.
`);
const document = new Document({
pageContent: res.content as string,
metadata: {
title: doc.metadata.title,
url: doc.metadata.url,
},
});
docs.push(document);
}),
);
return { query: question, docs: docs };
} else {
const res = await searchSearxng(question, {
language: 'en',
engines: this.config.activeEngines,
});
const documents = res.results.map(
(result) =>
new Document({
pageContent:
result.content ||
(this.config.activeEngines.includes('youtube')
? result.title
: '') /* Todo: Implement transcript grabbing using Youtubei (source: https://www.npmjs.com/package/youtubei) */,
metadata: {
title: result.title,
url: result.url,
...(result.img_src && { img_src: result.img_src }),
},
}),
);
return { query: question, docs: documents };
}
}),
]);
}
private async createAnsweringChain(
llm: BaseChatModel,
fileIds: string[],
embeddings: Embeddings,
optimizationMode: 'speed' | 'balanced' | 'quality',
) {
return RunnableSequence.from([
RunnableMap.from({
query: (input: BasicChainInput) => input.query,
chat_history: (input: BasicChainInput) => input.chat_history,
date: () => new Date().toISOString(),
context: RunnableLambda.from(async (input: BasicChainInput) => {
const processedHistory = formatChatHistoryAsString(
input.chat_history,
);
let docs: Document[] | null = null;
let query = input.query;
if (this.config.searchWeb) {
const searchRetrieverChain =
await this.createSearchRetrieverChain(llm);
const searchRetrieverResult = await searchRetrieverChain.invoke({
chat_history: processedHistory,
query,
});
query = searchRetrieverResult.query;
docs = searchRetrieverResult.docs;
}
const sortedDocs = await this.rerankDocs(
query,
docs ?? [],
fileIds,
embeddings,
optimizationMode,
);
return sortedDocs;
})
.withConfig({
runName: 'FinalSourceRetriever',
})
.pipe(this.processDocs),
}),
ChatPromptTemplate.fromMessages([
['system', this.config.responsePrompt],
new MessagesPlaceholder('chat_history'),
['user', '{query}'],
]),
llm,
this.strParser,
]).withConfig({
runName: 'FinalResponseGenerator',
});
}
private async rerankDocs(
query: string,
docs: Document[],
fileIds: string[],
embeddings: Embeddings,
optimizationMode: 'speed' | 'balanced' | 'quality',
) {
if (docs.length === 0 && fileIds.length === 0) {
return docs;
}
const filesData = fileIds
.map((file) => {
const filePath = path.join(process.cwd(), 'uploads', file);
const contentPath = filePath + '-extracted.json';
const embeddingsPath = filePath + '-embeddings.json';
const content = JSON.parse(fs.readFileSync(contentPath, 'utf8'));
const fileEmbeddings = JSON.parse(fs.readFileSync(embeddingsPath, 'utf8'));
const fileSimilaritySearchObject = content.contents.map(
(c: string, i: number) => {
return {
fileName: content.title,
content: c,
embeddings: fileEmbeddings.embeddings[i],
};
},
);
return fileSimilaritySearchObject;
})
.flat();
if (query.toLocaleLowerCase() === 'summarize') {
return docs.slice(0, 15);
}
const docsWithContent = docs.filter(
(doc) => doc.pageContent && doc.pageContent.length > 0,
);
if (optimizationMode === 'speed' || this.config.rerank === false) {
if (filesData.length > 0) {
const queryEmbedding = await embeddings.embedQuery(query);
const fileDocs = filesData.map((fileData) => {
return new Document({
pageContent: fileData.content,
metadata: {
title: fileData.fileName,
url: `File`,
},
});
});
const similarity = filesData.map((fileData, i) => {
const sim = computeSimilarity(queryEmbedding, fileData.embeddings);
return {
index: i,
similarity: sim,
};
});
let sortedDocs = similarity
.filter(
(sim) => sim.similarity > (this.config.rerankThreshold ?? 0.3),
)
.sort((a, b) => b.similarity - a.similarity)
.slice(0, 15)
.map((sim) => fileDocs[sim.index]);
sortedDocs =
docsWithContent.length > 0 ? sortedDocs.slice(0, 8) : sortedDocs;
return [
...sortedDocs,
...docsWithContent.slice(0, 15 - sortedDocs.length),
];
} else {
return docsWithContent.slice(0, 15);
}
} else if (optimizationMode === 'balanced') {
const [docEmbeddings, queryEmbedding] = await Promise.all([
embeddings.embedDocuments(
docsWithContent.map((doc) => doc.pageContent),
),
embeddings.embedQuery(query),
]);
docsWithContent.push(
...filesData.map((fileData) => {
return new Document({
pageContent: fileData.content,
metadata: {
title: fileData.fileName,
url: `File`,
},
});
}),
);
docEmbeddings.push(...filesData.map((fileData) => fileData.embeddings));
const similarity = docEmbeddings.map((docEmbedding, i) => {
const sim = computeSimilarity(queryEmbedding, docEmbedding);
return {
index: i,
similarity: sim,
};
});
const sortedDocs = similarity
.filter((sim) => sim.similarity > (this.config.rerankThreshold ?? 0.3))
.sort((a, b) => b.similarity - a.similarity)
.slice(0, 15)
.map((sim) => docsWithContent[sim.index]);
return sortedDocs;
}
return [];
}
private processDocs(docs: Document[]) {
return docs
.map(
(doc, index) =>
`${index + 1}. ${doc.metadata.title} ${doc.pageContent}`,
)
.join('\n');
}
private async handleStream(
stream: AsyncGenerator<StreamEvent, any, any>,
emitter: eventEmitter,
) {
for await (const event of stream) {
if (
event.event === 'on_chain_end' &&
event.name === 'FinalSourceRetriever'
) {
emitter.emit(
'data',
JSON.stringify({ type: 'sources', data: event.data.output }),
);
}
if (
event.event === 'on_chain_stream' &&
event.name === 'FinalResponseGenerator'
) {
emitter.emit(
'data',
JSON.stringify({ type: 'response', data: event.data.chunk }),
);
}
if (
event.event === 'on_chain_end' &&
event.name === 'FinalResponseGenerator'
) {
emitter.emit('end');
}
}
}
async searchAndAnswer(
message: string,
history: BaseMessage[],
llm: BaseChatModel,
embeddings: Embeddings,
optimizationMode: 'speed' | 'balanced' | 'quality',
fileIds: string[],
) {
const emitter = new eventEmitter();
const answeringChain = await this.createAnsweringChain(
llm,
fileIds,
embeddings,
optimizationMode,
);
const stream = answeringChain.streamEvents(
{
chat_history: history,
query: message,
},
{
version: 'v1',
},
);
this.handleStream(stream, emitter);
return emitter;
}
}
export default MetaSearchAgent;
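A minimal sketch of how a caller might wire the agent up end to end. The prompt strings, model names, and config values here are illustrative stand-ins, not the repository's actual focus-mode definitions, and an OpenAI API key is assumed to be available:

```ts
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import MetaSearchAgent from './metaSearchAgent';

const llm = new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0.7 });
const embeddings = new OpenAIEmbeddings({ modelName: 'text-embedding-3-small' });

const agent = new MetaSearchAgent({
  searchWeb: true,
  rerank: true,
  summarizer: false,
  rerankThreshold: 0.3,
  queryGeneratorPrompt: '...', // the app passes a retriever prompt here
  responsePrompt: '...', // and a response prompt like the one shown earlier
  activeEngines: [],
});

const emitter = await agent.searchAndAnswer(
  'What is Perplexica?',
  [], // chat history
  llm,
  embeddings,
  'balanced',
  [], // no uploaded files
);

// Sources and response tokens arrive as JSON-encoded 'data' events.
emitter.on('data', (chunk: string) => {
  const event = JSON.parse(chunk);
  if (event.type === 'response') process.stdout.write(event.data);
});
emitter.on('end', () => console.log());
```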

48
ui/lib/searxng.ts Normal file
View File

@ -0,0 +1,48 @@
import axios from 'axios';
import { getSearxngApiEndpoint } from './config';
interface SearxngSearchOptions {
categories?: string[];
engines?: string[];
language?: string;
pageno?: number;
}
interface SearxngSearchResult {
title: string;
url: string;
img_src?: string;
thumbnail_src?: string;
thumbnail?: string;
content?: string;
author?: string;
iframe_src?: string;
}
export const searchSearxng = async (
query: string,
opts?: SearxngSearchOptions,
) => {
const searxngURL = getSearxngApiEndpoint();
const url = new URL(`${searxngURL}/search?format=json`);
url.searchParams.append('q', query);
if (opts) {
Object.keys(opts).forEach((key) => {
const value = opts[key as keyof SearxngSearchOptions];
if (Array.isArray(value)) {
url.searchParams.append(key, value.join(','));
return;
}
url.searchParams.append(key, value as string);
});
}
const res = await axios.get(url.toString());
const results: SearxngSearchResult[] = res.data.results;
const suggestions: string[] = res.data.suggestions;
return { results, suggestions };
};
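A small usage sketch, assuming a SearXNG instance is configured and serves the JSON output format (the query and engine names are illustrative):

```ts
import { searchSearxng } from './searxng';

const { results, suggestions } = await searchSearxng('langchain streaming', {
  language: 'en',
  engines: ['google', 'bing'],
  pageno: 1,
});

results.slice(0, 3).forEach((r) => console.log(`${r.title} -> ${r.url}`));
console.log('Suggestions:', suggestions);
```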

5
ui/lib/types/compute-dot.d.ts vendored Normal file
View File

@ -0,0 +1,5 @@
declare function computeDot(vectorA: number[], vectorB: number[]): number;
declare module 'compute-dot' {
export default computeDot;
}

17
ui/lib/utils/computeSimilarity.ts Normal file
View File

@ -0,0 +1,17 @@
import dot from 'compute-dot';
import cosineSimilarity from 'compute-cosine-similarity';
import { getSimilarityMeasure } from '../config';
const computeSimilarity = (x: number[], y: number[]): number => {
const similarityMeasure = getSimilarityMeasure();
if (similarityMeasure === 'cosine') {
return cosineSimilarity(x, y) as number;
} else if (similarityMeasure === 'dot') {
return dot(x, y);
}
throw new Error('Invalid similarity measure');
};
export default computeSimilarity;
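The measure comes from config. Cosine similarity normalizes by vector length while the dot product does not, so for unit-length embeddings (as OpenAI's are) the two rank documents identically. A tiny illustration:

```ts
import computeSimilarity from './computeSimilarity';

// Parallel vectors: cosine yields 1 regardless of magnitude, dot yields 2.
console.log(computeSimilarity([1, 0], [2, 0]));
```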

100
ui/lib/utils/documents.ts Normal file
View File

@ -0,0 +1,100 @@
import axios from 'axios';
import { htmlToText } from 'html-to-text';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { Document } from '@langchain/core/documents';
import pdfParse from 'pdf-parse';
import logger from './logger';
export const getDocumentsFromLinks = async ({ links }: { links: string[] }) => {
const splitter = new RecursiveCharacterTextSplitter();
let docs: Document[] = [];
await Promise.all(
links.map(async (link) => {
link =
link.startsWith('http://') || link.startsWith('https://')
? link
: `https://${link}`;
try {
const res = await axios.get(link, {
responseType: 'arraybuffer',
});
const isPdf = res.headers['content-type'] === 'application/pdf';
if (isPdf) {
const pdfText = await pdfParse(res.data);
const parsedText = pdfText.text
.replace(/(\r\n|\n|\r)/gm, ' ')
.replace(/\s+/g, ' ')
.trim();
const splittedText = await splitter.splitText(parsedText);
const title = 'PDF Document';
const linkDocs = splittedText.map((text) => {
return new Document({
pageContent: text,
metadata: {
title: title,
url: link,
},
});
});
docs.push(...linkDocs);
return;
}
const parsedText = htmlToText(res.data.toString('utf8'), {
selectors: [
{
selector: 'a',
options: {
ignoreHref: true,
},
},
],
})
.replace(/(\r\n|\n|\r)/gm, ' ')
.replace(/\s+/g, ' ')
.trim();
const splittedText = await splitter.splitText(parsedText);
const title = res.data
.toString('utf8')
.match(/<title>(.*?)<\/title>/)?.[1];
const linkDocs = splittedText.map((text) => {
return new Document({
pageContent: text,
metadata: {
title: title || link,
url: link,
},
});
});
docs.push(...linkDocs);
} catch (err) {
logger.error(
`An error occurred while getting documents from links: ${err}`,
);
docs.push(
new Document({
pageContent: `Failed to retrieve content from the link: ${err}`,
metadata: {
title: 'Failed to retrieve content',
url: link,
},
}),
);
}
}),
);
return docs;
};
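A minimal sketch of fetching and chunking a page. PDFs are detected by Content-Type and parsed separately; HTML is stripped to text before splitting (the URL here is illustrative):

```ts
import { getDocumentsFromLinks } from './documents';

// A missing scheme is prefixed with https:// automatically.
const docs = await getDocumentsFromLinks({ links: ['example.com'] });

docs.slice(0, 2).forEach((d) => {
  console.log(d.metadata.title, '-', d.pageContent.slice(0, 80));
});
```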

9
ui/lib/utils/formatHistory.ts Normal file
View File

@ -0,0 +1,9 @@
import { BaseMessage } from '@langchain/core/messages';
const formatChatHistoryAsString = (history: BaseMessage[]) => {
return history
.map((message) => `${message._getType()}: ${message.content}`)
.join('\n');
};
export default formatChatHistoryAsString;

22
ui/lib/utils/logger.ts Normal file
View File

@ -0,0 +1,22 @@
import winston from 'winston';
const logger = winston.createLogger({
level: 'info',
transports: [
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple(),
),
}),
new winston.transports.File({
filename: 'app.log',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json(),
),
}),
],
});
export default logger;

ui/next.config.mjs
View File

@ -7,6 +7,7 @@ const nextConfig = {
},
],
},
serverExternalPackages: ['pdf-parse'],
};
export default nextConfig;

ui/package.json
View File

@ -12,26 +12,35 @@
},
"dependencies": {
"@headlessui/react": "^2.2.0",
"@iarna/toml": "^2.2.5",
"@icons-pack/react-simple-icons": "^9.4.0",
"@langchain/openai": "^0.0.25",
"@tailwindcss/typography": "^0.5.12",
"axios": "^1.8.3",
"clsx": "^2.1.0",
"compute-cosine-similarity": "^1.1.0",
"compute-dot": "^1.1.0",
"html-to-text": "^9.0.5",
"langchain": "^0.1.30",
"lucide-react": "^0.363.0",
"markdown-to-jsx": "^7.7.2",
"next": "14.1.4",
"next": "^15.2.2",
"next-themes": "^0.3.0",
"pdf-parse": "^1.1.1",
"react": "^18",
"react-dom": "^18",
"react-text-to-speech": "^0.14.5",
"react-textarea-autosize": "^8.5.3",
"sonner": "^1.4.41",
"tailwind-merge": "^2.2.2",
"winston": "^3.17.0",
"yet-another-react-lightbox": "^3.17.2",
"zod": "^3.22.4"
},
"devDependencies": {
"@types/html-to-text": "^9.0.4",
"@types/node": "^20",
"@types/pdf-parse": "^1.1.4",
"@types/react": "^18",
"@types/react-dom": "^18",
"autoprefixer": "^10.0.1",

ui/tsconfig.json
View File

@ -1,6 +1,10 @@
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "esnext"],
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
@ -18,9 +22,19 @@
}
],
"paths": {
"@/*": ["./*"]
}
"@/*": [
"./*"
]
},
"target": "ES2017"
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
"exclude": ["node_modules"]
"include": [
"next-env.d.ts",
"**/*.ts",
"**/*.tsx",
".next/types/**/*.ts"
],
"exclude": [
"node_modules"
]
}

File diff suppressed because it is too large