* feat: add AWS Bedrock API Key option to settings panel
* feat: Bedrock API key auth method
* fix: hide IAM note when using bedrock api key
* move to camelCase identifier for Bedrock API key use
linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* auto model context limit detection for ollama llm provider
* auto model context limit detection for lmstudio llm provider
* Patch Ollama to function and sync context windows like Foundry
* normalize how model context windows are cached from the endpoint service (see the caching sketch below)
TODO: move this into a global utility class with MODEL_MAP
eager load models on boot to pre-cache them
add performance model improvements into the ollama agent as well as apply n_ctx
* remove debug log
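A minimal sketch of the caching described above, assuming Ollama's `/api/show` endpoint; the Map cache, helper name, fallback value, and key lookup are illustrative, not the actual utility class:
```js
// Illustrative sketch only: cache context windows reported by the Ollama
// endpoint so each model is looked up once. Fallback of 4096 is an assumption.
const contextWindowCache = new Map();

async function getContextWindow(baseUrl, modelName) {
  if (contextWindowCache.has(modelName)) return contextWindowCache.get(modelName);
  const info = await fetch(`${baseUrl}/api/show`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: modelName }),
  }).then((res) => res.json());
  // Ollama reports the context length in model_info under a key such as
  // "llama.context_length"; the exact key depends on the architecture.
  const key = Object.keys(info.model_info ?? {}).find((k) =>
    k.endsWith(".context_length")
  );
  const window = key ? info.model_info[key] : 4096; // assumed fallback
  contextWindowCache.set(modelName, window);
  return window;
}
```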
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* add microsoft foundry local llm and agent providers
* minor change to fix early stop token + overloading of context window
always use the user-defined window _unless_ it is larger than the model's real context window (see the clamping sketch below)
cache the context windows when we can from the API (0.7.*+)
Unload model forcefully on model change to prevent resource hogging
* add back token preference since some models have very large windows and can crash a machine
normalize cases
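A one-function sketch of that clamping rule, with illustrative names:
```js
// Use the user-defined context window unless it exceeds the model's real
// window reported by the API; names here are illustrative.
function effectiveContextWindow(userDefined, modelReported = null) {
  if (!modelReported) return userDefined; // API did not report one (pre-0.7.*)
  return Math.min(userDefined, modelReported);
}
```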
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* feat: Implement CometAPI integration for chat completions and model management
- Added CometApiLLM class for handling chat completions using CometAPI.
- Implemented model synchronization and caching mechanisms.
- Introduced streaming support for chat responses with timeout handling (see the sketch below).
- Created CometApiProvider class for agent interactions with CometAPI.
- Enhanced error handling and logging throughout the integration.
- Established a structure for managing function calls and completions.
* linting
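A rough sketch of stall-timeout handling for a streamed chat response, assuming an OpenAI-compatible SDK stream object; the timeout value, callback, and error message are illustrative:
```js
// Illustrative: abort a chat stream if no chunk arrives within timeoutMs.
async function streamWithTimeout(stream, onToken, timeoutMs = 30_000) {
  let timer = null;
  let timedOut = false;
  const arm = () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      timedOut = true;
      stream.controller?.abort?.(); // OpenAI-style streams expose an AbortController
    }, timeoutMs);
  };
  arm();
  try {
    for await (const chunk of stream) {
      arm(); // each received chunk resets the stall timer
      onToken(chunk.choices?.[0]?.delta?.content ?? "");
    }
  } catch (err) {
    if (!timedOut) throw err; // real errors still propagate
  } finally {
    clearTimeout(timer);
  }
  if (timedOut) throw new Error("Stream stalled and was aborted.");
}
```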
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* add chroma cloud as new vector db provider
* update docker example env
* extend chroma class to chroma cloud
* update readme
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Added exa-search case to the search provider switch in web-browsing.js (see the sketch below)
* Added ExaSearchOptions component for API key input
* update
* Patch missing image crashing UI
Fix issue where the ENV key did not exist or was not saved on click
Update copy for provider
Add docs for ENV keys for manual placement
update systemSettings to return the saved key to the UI
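A sketch of the shape such a provider case might take; the surrounding function, `_exaSearch` helper, and ENV key name are assumptions:
```js
// Illustrative shape of the provider switch in web-browsing.js; the
// function name, helper, and ENV key are assumptions.
async function search(query, provider = "unknown") {
  switch (provider) {
    case "exa-search":
      if (!process.env.AGENT_EXA_API_KEY)
        throw new Error("Exa Search API key is not set.");
      return await _exaSearch(query);
    default:
      throw new Error(`Unknown search provider: ${provider}`);
  }
}

// Hypothetical helper hitting Exa's public search API.
async function _exaSearch(query) {
  const response = await fetch("https://api.exa.ai/search", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.AGENT_EXA_API_KEY,
    },
    body: JSON.stringify({ query, numResults: 5 }),
  });
  return await response.json();
}
```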
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* WIP on embedder selection
TODO: apply splitting and query prefixes (if applicable)
* wip on upsert
* Support base model
support nomic-embed-text-v1
support multilingual-e5-small
Add prefixing for both embedding and query for RAG tasks (see the prefix sketch after this list)
Add chunking prefix to all vector dbs to apply the prefix when possible
Show dropdown and auto-pull on new selection
* norm translations
* move supported models to constants
handle null or invalid selection on dropdown
update comments
* dev
* patch text splitter maximums for now
* normalize translations
* add tests for splitter functionality
* normalize
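For reference, both models expect task prefixes. A sketch of how the prefixing might be applied; the constant and helper names are illustrative, while the prefix strings themselves are the documented ones for each model:
```js
// Documented task prefixes for the supported models; the constant and
// helper names are illustrative, not the actual constants file.
const MODEL_PREFIXES = {
  "nomic-embed-text-v1": { document: "search_document: ", query: "search_query: " },
  "multilingual-e5-small": { document: "passage: ", query: "query: " },
};

function prefixFor(model, task /* "document" | "query" */) {
  return MODEL_PREFIXES[model]?.[task] ?? "";
}

// e.g. embed(prefixFor(model, "document") + chunkText) when upserting,
// and embed(prefixFor(model, "query") + userQuery) at retrieval time.
```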
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Enable UI/UX for model swapping in chat window
* forgot component
* patch useGetProviders hook to set loading on change of provider (see the sketch below)
* dev build
* normalize translations
* patch how model default is provided
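A sketch of the loading fix, assuming a hook shaped roughly like this; the endpoint path and hook internals are illustrative:
```js
// Illustrative: re-enter the loading state whenever the selected provider
// changes so stale models are not shown; the endpoint path is an assumption.
import { useEffect, useState } from "react";

export default function useGetProviders(provider = null) {
  const [models, setModels] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    if (!provider) return;
    setLoading(true); // reset on every provider change
    fetch(`/api/providers/${provider}/models`) // hypothetical endpoint
      .then((res) => res.json())
      .then(({ models = [] }) => setModels(models))
      .catch(() => setModels([]))
      .finally(() => setLoading(false));
  }, [provider]);

  return { models, loading };
}
```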
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* PGVector support for vector db storage
* forgot files
* comments
* dev build
* Add ENV connection and table schema validations for the vector table (see the validation sketch below)
add .reset call to drop the embedding table when changing the AnythingLLM embedder
update instructions
Add preCheck error reporting in UpdateENV
add timeout to pg connection
* update setup
* update README
* update doc
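A sketch of what the connection and schema validation with a timeout might look like; the `pg` client API is real, but the function name, timeout value, and table check are assumptions:
```js
// Illustrative: validate the PGVector connection and expected table before
// use, with a connection timeout so bad hosts fail fast.
const { Client } = require("pg");

async function validatePgVector(connectionString, tableName) {
  const client = new Client({ connectionString, connectionTimeoutMillis: 5_000 });
  await client.connect();
  try {
    // Confirm the pgvector extension and the expected table exist.
    const ext = await client.query(
      "SELECT 1 FROM pg_extension WHERE extname = 'vector';"
    );
    if (ext.rowCount === 0) throw new Error("pgvector extension not installed.");
    const table = await client.query(
      "SELECT 1 FROM information_schema.tables WHERE table_name = $1;",
      [tableName]
    );
    if (table.rowCount === 0) throw new Error(`Table ${tableName} not found.`);
  } finally {
    await client.end();
  }
}
```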
* Fixed two primary issues discovered while using AWS Bedrock with Anthropic Claude Sonnet models:
- Context window defaulted to an 8192 maximum, which isn't correct
- Multimodal stopped working when removing langchain, which had been transparently converting image_url content into the format Sonnet expects (see the conversion sketch below)
* Ran `yarn lint`
* Updated .env.example to have aws bedrock examples too
* Refactor for readability
move utils for AWS-specific functionality to a subfile
add token output max to ENV so the setting persists
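A sketch of the image_url conversion that replaces what langchain used to do; the helper name and data-URL parsing are illustrative, while the target shape is Anthropic's documented base64 image content block:
```js
// Illustrative: convert OpenAI-style image_url content parts into the
// base64 image content blocks Anthropic models on Bedrock expect.
function toAnthropicContent(parts) {
  return parts.map((part) => {
    if (part.type === "image_url") {
      // expects a data URL such as "data:image/png;base64,iVBOR..."
      const [meta, data] = part.image_url.url.split(",");
      const mediaType = meta.replace("data:", "").replace(";base64", "");
      return {
        type: "image",
        source: { type: "base64", media_type: mediaType, data },
      };
    }
    return { type: "text", text: part.text };
  });
}
```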
---------
Co-authored-by: Tristan Stahnke <tristan.stahnke+gpsec@guidepointsecurity.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* feat: add new model provider PPIO
* fix: fix ppio model fetching
* fix: code lint
* reorder LLM
update interface for streaming and chats to use valid keys
linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Add support for Google Generative AI (Gemini) embedder
* Add missing example in docker
Fix UI key elements in options
Add Gemini to data handling section
Patch issues with chunk handling during embedding
* remove dupe in env
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* wip hub connection page fe + backend
* lint
* implement backend for local hub items + placeholder endpoints to fetch hub app data
* fix hebrew translations
* revamp community integration flow
* change sidebar
* Auto import if id in URL param
remove preview in card screen and instead go to import flow
* get user's items + team items from hub + ui improvements to hub settings
* lint
* fix merge conflict
* refresh hook for community items
* add fallback for user items
* Disable bundle items by default on all instances
* remove translations (will complete later)
* loading skeleton
* Make community hub endpoints admin only (see the middleware sketch at the end of this list)
show visibility on items
combine import/apply for items so they are event logged for review
* improve middleware and import flow
* community hub ui updates
* Adjust importing process
* community hub to dev
* Add webscraper preload into imported plugins
* add runtime property to plugins
* Fix button status on imported skill change
show alert on skill change
Update markdown type and theme on import of agent skill
* update documentation paths
* remove unused import
* linting
* review loading state
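A sketch of what admin-gating a hub endpoint might look like in Express; the middleware name, role value, locals shape, and response body are all assumptions:
```js
// Illustrative Express-style middleware gating hub endpoints to admins only.
function communityHubAdminOnly(request, response, next) {
  const user = response.locals?.user;
  if (!user || user.role !== "admin") {
    return response.status(401).json({ error: "Admin access required." });
  }
  next();
}

// e.g. app.post("/community-hub/import", communityHubAdminOnly, handler);
```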
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* exposes `maxConcurrentChunks` parameter for the generic openai embedder through configuration. This allows setting a batch size for endpoints which don't support the default of 500 (see the batching sketch below)
* Update new field to new UI
add getter to ensure proper type and format
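A sketch of how a configurable batch size might replace the hardcoded 500; the ENV variable name and helper are illustrative:
```js
// Illustrative: split texts into batches sized by configuration instead of
// the hardcoded 500; the ENV name and default are assumptions.
function toBatches(
  texts,
  maxConcurrentChunks = Number(process.env.EMBEDDING_MAX_CONCURRENT_CHUNKS ?? 500)
) {
  const batches = [];
  for (let i = 0; i < texts.length; i += maxConcurrentChunks)
    batches.push(texts.slice(i, i + maxConcurrentChunks));
  return batches;
}
```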
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feat: add new model provider: Novita AI
* feat: finished novita AI
* fix: code lint
* remove unneeded logging
* add back log for novita stream not self closing
* Clarify ENV vars for LLM/embedder separation for future
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Update OpenAI TTS config to allow a custom BaseURL (see the sketch below)
* uncheck config file
* break openai generic TTS into its own provider
* add space
* hide TTS on user msg
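A sketch of pointing the OpenAI SDK at a custom, OpenAI-compatible TTS endpoint; the `baseURL` option is real in the openai package, while the ENV names are assumptions:
```js
// Illustrative: generic OpenAI-compatible TTS via a custom base URL.
const OpenAI = require("openai");

const client = new OpenAI({
  apiKey: process.env.TTS_OPEN_AI_COMPATIBLE_KEY, // assumed ENV name
  baseURL: process.env.TTS_OPEN_AI_COMPATIBLE_ENDPOINT, // e.g. http://localhost:8080/v1
});

async function speak(text) {
  const response = await client.audio.speech.create({
    model: "tts-1",
    voice: "alloy",
    input: text,
  });
  return Buffer.from(await response.arrayBuffer());
}
```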
---------
Co-authored-by: Adam <phazei@gmail.com>
* set message limit per user
* remove old user message limit + unused admin page
* fix daily message validation
* refactor message limit input
refactor canSendChat on user to a method on the User model (see the sketch below)
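A sketch of the daily limit check as a model method; the Prisma table and field names are assumptions:
```js
// Illustrative: count the user's chats in the trailing 24 hours and compare
// against their configured daily limit.
const { PrismaClient } = require("@prisma/client");
const prisma = new PrismaClient();

async function canSendChat(user) {
  if (!user?.dailyMessageLimit) return true; // no limit configured
  const oneDayAgo = new Date(Date.now() - 24 * 60 * 60 * 1000);
  const sent = await prisma.workspace_chats.count({
    where: { user_id: user.id, createdAt: { gte: oneDayAgo } },
  });
  return sent < user.dailyMessageLimit;
}
```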
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Issue #1943: Add support for LLM provider - Fireworks AI
* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI
* class only return
---------
Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>