* Migrate Gemini agents away from `Untooled`
* Disable agents for Gemma models as they do not support tool calling
* Dev build
resolves #4452 via a function name prefix that is then stripped within the provider
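A minimal sketch of the prefix-and-strip idea; the helper names and prefix value are hypothetical, not the actual identifiers in the patch:

```js
// Hypothetical sketch: prefix tool/function names before sending them to the
// provider, then strip the prefix when the model calls the tool back.
const FN_PREFIX = "allm_"; // assumed prefix value

const toProviderName = (name) => `${FN_PREFIX}${name}`;
const fromProviderName = (name) =>
  name.startsWith(FN_PREFIX) ? name.slice(FN_PREFIX.length) : name;
```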
* auto model context limit detection for the Ollama LLM provider
* auto model context limit detection for the LM Studio LLM provider
* Patch Ollama to function and sync context windows like Foundry
* normalize how model context windows are cached from the endpoint service
TODO: move this into a global utility class with MODEL_MAP
eager-load models on boot to pre-cache them
add model performance improvements to the Ollama agent as well and apply n_ctx
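A rough sketch of the detect-and-cache flow for Ollama; the cache shape and helper name are assumptions, while `/api/show` and its architecture-keyed `context_length` field are part of the real Ollama API:

```js
const contextWindowCache = new Map(); // modelName -> window size (assumed cache shape)

// Hypothetical helper: ask Ollama for the model's real context length and
// memoize it so chats do not re-hit the endpoint on every request.
async function getOllamaContextWindow(basePath, modelName, fallback = 4096) {
  if (contextWindowCache.has(modelName)) return contextWindowCache.get(modelName);
  try {
    const res = await fetch(`${basePath}/api/show`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: modelName }),
    });
    const { model_info = {} } = await res.json();
    // The key is architecture-specific, e.g. `llama.context_length`.
    const key = Object.keys(model_info).find((k) => k.endsWith(".context_length"));
    const limit = model_info[key] ?? fallback;
    contextWindowCache.set(modelName, limit);
    return limit;
  } catch {
    return fallback; // older server without the field; use a safe default
  }
}
```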
* remove debug log
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* add Microsoft Foundry Local LLM and agent providers
* minor change to fix early stop token + overloading of context window
always use the user-defined window _unless_ it is larger than the model's real context window
cache the context windows when we can from the API (0.7.* and above)
unload the model forcefully on model change to prevent resource hogging
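The clamp is the key invariant here; a sketch with assumed variable names. The forced unload uses Ollama's real `keep_alive: 0` behavior, which evicts the model immediately:

```js
// Never exceed the model's real window, even if the user asked for more.
const effectiveWindow = Math.min(userDefinedWindow ?? modelMaxWindow, modelMaxWindow);

// Force-unload the previous model on model change (keep_alive: 0 evicts it).
async function unloadModel(basePath, modelName) {
  await fetch(`${basePath}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: modelName, keep_alive: 0 }),
  });
}
```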
* add back token preference since some models have very large windows and can crash a machine
normalize cases
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* WIP agentic tool call streaming
- OpenAI
- Anthropic
- Azure OpenAI
* WIP rest of providers EXCLUDES Bedrock and GenericOpenAI
* patch Untooled complete/streaming to use the chatCallback from the provider class rather than assuming an OpenAI client struct
example: Ollama
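Roughly the shape of the indirection, with hypothetical names: Untooled asks the provider for a chat function instead of reaching into an OpenAI-shaped client:

```js
// Hypothetical sketch of the provider-supplied callback pattern.
class OllamaProvider {
  // Untooled calls this instead of `client.chat.completions.create`, so each
  // provider can translate messages into its own API shape.
  get chatCallback() {
    return async ({ messages }) => {
      const res = await this.client.chat({ model: this.model, messages });
      return res?.message?.content ?? null; // normalized string output
    };
  }
}
```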
* modify Ollama to function with its own overrides
normalize completion/stream outputs across providers/Untooled
* dev build
* fix message sanitization for Anthropic agent streaming
* wip: fix Anthropic agentic streaming sanitization
* patch gemini, webgenui, and generic aibitat providers + disable providers we are unable to test
* refactor the Anthropic aibitat provider for empty message and tool call formatting
* Add frontend check for a missing prop
update Azure for streaming support
update Gemini to streaming support on gemini-* models
disable streaming for Generic OpenAI
verify LocalAI support
verify NVIDIA NIM support
* DPAIS: remove temp from call, support streaming
* remove 0 temp to eliminate the possibility of bad-temp errors/500s/400s
* Patch the condition where a model is non-streamable and no tools are present or called, resulting in the provider's `handleFunctionCallChat` being called - which returns a string.
This would then fail in `Untooled.complete` since the response would be a string and not the expected `response.choices?.[0]?.message`.
Modified this line to handle both conditions: stream/non-streaming and tool presence or lack thereof.
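The fix amounts to accepting both shapes at that call site; a sketch with assumed local names:

```js
// `response` may be a plain string (from handleFunctionCallChat on the
// non-streaming, no-tools path) or an OpenAI-style completion object.
const text =
  typeof response === "string"
    ? response
    : response?.choices?.[0]?.message?.content ?? null;
```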
* Allow Generic OpenAI to be streamable since, using Untooled, it should work fine
honor disabled streaming for providers where that concern may apply to regular chats
* rename function and move the gemini-specific function to the Gemini provider
* add comments for readability
`.complete` on Azure should be non-streaming as this is the sync response
* migrate CometAPI, but disable as we cannot test
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Add className property to various LLM and embedder classes to fix logging bug after minification
* Fix bug with the `this.log` method by applying the missing private field symbol (`#`)
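A sketch of the pattern with a hypothetical class: minifiers mangle `this.constructor.name`, so an explicit string property survives the build, and the log method is a private (`#`) field:

```js
class OpenAiLLM {
  // Explicit name survives minification; `this.constructor.name` would be
  // mangled to something like `t` in the production bundle.
  className = "OpenAiLLM";

  #log(...args) {
    console.log(`[${this.className}]`, ...args);
  }

  connect() {
    this.#log("connecting"); // prints "[OpenAiLLM] connecting" even when minified
  }
}
```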
* Add User-Agent header on the requests sent by Generic OpenAI providers.
* Moved getAnythingLLMUserAgent helper fn to server/endpoints/utils.js and changed fallback version string to "unknown"
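Plausibly, the helper looks something like the following; the exact version lookup is an assumption, while the name and the "unknown" fallback come from the commit above:

```js
// server/endpoints/utils.js (sketch; the version source is assumed)
function getAnythingLLMUserAgent() {
  const version = process.env.ANYTHING_LLM_VERSION ?? "unknown"; // hypothetical env var
  return `AnythingLLM/${version}`;
}

// Attached to outbound Generic OpenAI requests, e.g.:
// headers: { "User-Agent": getAnythingLLMUserAgent() }
```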
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* feat: Implement CometAPI integration for chat completions and model management
- Added CometApiLLM class for handling chat completions using CometAPI.
- Implemented model synchronization and caching mechanisms.
- Introduced streaming support for chat responses with timeout handling.
- Created CometApiProvider class for agent interactions with CometAPI.
- Enhanced error handling and logging throughout the integration.
- Established a structure for managing function calls and completions.
* linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Added exa-search case to the search provider switch in web-browsing.js
* Added ExaSearchOptions component for API key input
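Roughly the shape of the new case in `web-browsing.js`; the surrounding names are assumed:

```js
// Hypothetical sketch of the search provider switch in web-browsing.js.
function selectSearchEngine(provider) {
  switch (provider) {
    // ...existing cases (other providers omitted)
    case "exa-search":
      return "_exaSearch"; // dispatches to the Exa-backed search handler
    default:
      return "_googleSearchEngine"; // assumed default
  }
}
```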
* update
* Patch missing image crashing the UI
Fix issue where the ENV key did not exist or was saved on click
Update copy for the provider
Add docs for ENV keys for manual placements
update SystemSettings to return the saved key to the UI
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* wip: create direct output switch on the last block and send response to the UI
* lint
* Return flow on direct output enabled
prevent new blocks below a direct output block
Update executor/aibitat to handle skipping of handler outputs
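A sketch of the executor's short-circuit, with hypothetical block and helper names:

```js
// Hypothetical sketch: a block flagged directOutput returns its result to
// the UI verbatim and skips all remaining handler/formatting steps.
async function executeFlow(flow, context) {
  let last = null;
  for (const block of flow.blocks) {
    last = await runBlock(block, context); // runBlock is an assumed helper
    if (block.config?.directOutput) {
      return { directOutput: true, content: last }; // short-circuit the flow
    }
    context = { ...context, last };
  }
  return { directOutput: false, content: last };
}
```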
* dev build
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* feat: implement IAM role auth for Bedrock
* fix: make the client refresh properly when switching between iam_user and iam_role
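A sketch of the credential selection using `fromTemporaryCredentials` from `@aws-sdk/credential-providers` (a real AWS SDK helper); the settings fields are assumed names:

```js
const { BedrockRuntimeClient } = require("@aws-sdk/client-bedrock-runtime");
const { fromTemporaryCredentials } = require("@aws-sdk/credential-providers");

// Rebuild the client whenever the auth method changes so stale credentials
// are never reused. Field names on `settings` are hypothetical.
function buildBedrockClient(settings) {
  const credentials =
    settings.authMethod === "iam_role"
      ? fromTemporaryCredentials({
          params: { RoleArn: settings.roleArn, RoleSessionName: "anythingllm" },
        })
      : {
          accessKeyId: settings.accessKeyId,
          secretAccessKey: settings.secretAccessKey,
        };
  return new BedrockRuntimeClient({ region: settings.region, credentials });
}
```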
* checkout agent flow
* fix aiProvider for Bedrock in agent use
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feat: add new model provider PPIO
* fix: PPIO model fetching
* fix: code lint
* reorder LLM
update interface for streaming and chats to use valid keys
linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Reranker WIP
* add caching and singleton loading
* Add field to workspaces for vectorSearchMode
Add UI for LanceDB to change mode
update all search endpoints to pass in the reranker prop if the provider can use it
* update hint text
* When reranking, swap score to rerank score
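The swap is a simple remap; a sketch assuming a hypothetical result shape:

```js
// After reranking, surface the reranker's relevance score as the result's
// `score` so downstream sorting and display use it instead of vector distance.
const reranked = results.map((doc, i) => ({
  ...doc,
  score: rerankScores[i] ?? doc.score, // names here are assumed
}));
```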
* update optional chaining
* feat: add new model provider: Novita AI
* feat: finished Novita AI
* fix: code lint
* remove unneeded logging
* add back log for Novita stream not self-closing
* Clarify ENV vars for LLM/embedder separation for the future
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Fix incorrect JSON API description.
* small edits and validity checks
* remove console.logs
* unset and recheck changes
---------
Co-authored-by: Adam <phazei@gmail.com>