* Refactor `LLMPerformanceMonitor` to use an options object for `measureStream` parameters
* Refactor invocations of `measureStream` to use the options argument
* Change the `measureStream` invocation in the Anthropic provider to use the options argument (sketched below)
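A minimal sketch of the new call shape; the option names, destructured result, and import path are assumptions for illustration, not the exact code from this change:

```js
// Hypothetical import path for illustration.
const { LLMPerformanceMonitor } = require("./helpers/chat/LLMPerformanceMonitor");

// Before: positional parameters, order-dependent and easy to misuse:
//   await LLMPerformanceMonitor.measureStream(stream, messages, false);

// After: a single options object.
const measuredStream = await LLMPerformanceMonitor.measureStream({
  stream,                           // token stream returned by the provider SDK
  messages,                         // chat history, used for prompt token estimation
  runPromptTokenCalculation: false, // skip local counting when the provider reports usage
});
```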
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* add model tag to `chatCompletion`
* add model tag `model` to async streaming (see the sketch below);
keeps default arguments for prompt token calculation where applied via an explicit arg
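A sketch of what the metrics payload could look like with the new tag, as built inside a provider's `chatCompletion` method; the field names and shape here are assumptions:

```js
// Hypothetical shape of the metrics object attached to a chat result.
const metrics = {
  prompt_tokens: usage.prompt_tokens,
  completion_tokens: usage.completion_tokens,
  total_tokens: usage.total_tokens,
  outputTps: usage.completion_tokens / duration, // output tokens per second
  duration,                                      // seconds spent generating
  model: this.model,                             // new: tag the measurement with the model name
};
```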
* fix HuggingFace default arg
* render all performance metrics when available, for backward compatibility
add `timestamp` to both sync/async chat methods
* extract the metrics string construction into a helper function (sketched below)
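A sketch of the extracted helper under assumed names; it renders only the fields present on a record, so older chats saved before newer metrics existed still display:

```js
// Hypothetical helper: builds the human-readable metrics string from whatever
// fields exist, keeping backward compatibility with older saved records.
function generateMetricsString(metrics = {}) {
  const parts = [];
  if (metrics.duration != null) parts.push(`${Number(metrics.duration).toFixed(3)} s`);
  if (metrics.outputTps != null) parts.push(`${Number(metrics.outputTps).toFixed(2)} tok/s`);
  if (metrics.total_tokens != null) parts.push(`${metrics.total_tokens} tokens`);
  return parts.join(" | ");
}

// The timestamp could be stamped when the chat resolves, e.g.:
// metrics.timestamp = Date.now();
```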
* Add `className` property to various LLM and embedder classes to fix a logging bug after minification
* Fix bug in the `this.log` method call by adding the missing private field symbol (pattern sketched below)
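A sketch of the pattern, with a hypothetical class; minifiers rename class identifiers, so `this.constructor.name` becomes unreliable in production builds and an explicit string is used instead:

```js
class AnthropicLLM {
  // Explicit name survives minification, unlike this.constructor.name.
  className = "AnthropicLLM";

  #log(text, ...args) {
    console.log(`\x1b[36m[${this.className}]\x1b[0m ${text}`, ...args);
  }

  connect() {
    // The bug: calling this.log(...) on a method declared as #log throws a
    // TypeError; the fix is to include the private field symbol: this.#log(...).
    this.#log("initialized");
  }
}
```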
* feat: Implement CometAPI integration for chat completions and model management
- Added CometApiLLM class for handling chat completions using CometAPI.
- Implemented model synchronization and caching mechanisms.
- Introduced streaming support for chat responses with timeout handling (sketched after this list).
- Created CometApiProvider class for agent interactions with CometAPI.
- Enhanced error handling and logging throughout the integration.
- Established a structure for managing function calls and completions.
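A rough sketch of the per-chunk timeout idea for the streaming support mentioned above; every name here is hypothetical, and it assumes an OpenAI-style SDK stream that exposes an `AbortController`:

```js
// Hypothetical sketch: abort the stream if no chunk arrives within the window.
async function streamWithTimeout(stream, onChunk, timeoutMs = 30_000) {
  let timer = null;
  let timedOut = false;

  const armTimer = () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      timedOut = true;
      stream.controller?.abort(); // assumption: stream exposes an AbortController
    }, timeoutMs);
  };

  armTimer();
  try {
    for await (const chunk of stream) {
      armTimer(); // each chunk pushes the deadline forward
      onChunk(chunk);
    }
  } catch (err) {
    if (timedOut) throw new Error(`CometAPI stream timed out after ${timeoutMs}ms`);
    throw err;
  } finally {
    clearTimeout(timer);
  }
}
```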
* linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>