Add LM Studio provider #5
base: prompt-cache
Conversation
Pull request overview
This PR adds support for local model testing via LM Studio, enabling developers to test fine-tuned models that aren't available through the AI SDK. The changes refactor the existing gateway-based model selection into a provider abstraction and add LM Studio as a second provider option.
Key changes:
- Added provider abstraction layer with support for Vercel AI Gateway and LM Studio
- Refactored model selection and pricing logic into separate provider modules
- Enhanced result metadata to track provider type and configuration
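To make the shape of such an abstraction concrete, here is a minimal sketch only; the interface name, method names, and pricing shape are illustrative assumptions, not the PR's actual code:

```ts
// Illustrative sketch of a provider abstraction; the PR's real interface may differ.
import type { LanguageModel } from "ai";

interface Provider {
  // Prompt the user to pick a model and return it ready for use.
  selectModel(): Promise<LanguageModel>;
  // Per-token pricing, if the provider exposes it (local models may not).
  getPricing?(modelId: string): { input: number; output: number } | undefined;
}
```

With this shape, the gateway and LM Studio modules can each export a `Provider`, and `index.ts` only needs to pick one before running.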
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| lib/providers/lmstudio.ts | New provider module for LM Studio integration with local model discovery and selection |
| lib/providers/ai-gateway.ts | Extracted and refactored Vercel AI Gateway logic from index.ts into a dedicated provider module |
| index.ts | Refactored to use provider abstraction, added provider selection UI, and updated result metadata to include provider information |
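For context on the "local model discovery" mentioned above: LM Studio serves an OpenAI-compatible API, so discovery can be done against its `/models` endpoint. A hedged sketch (the function name and error handling are illustrative, not the PR's actual code):

```ts
// Sketch: list models loaded in a local LM Studio server via its
// OpenAI-compatible GET /models endpoint.
async function listLocalModels(
  baseURL = "http://localhost:1234/v1",
): Promise<string[]> {
  const res = await fetch(`${baseURL}/models`);
  if (!res.ok) throw new Error(`LM Studio server returned ${res.status}`);
  const data = (await res.json()) as { data: { id: string }[] };
  return data.data.map((m) => m.id);
}
```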
paoloricciuti left a comment
Overall looks good, a minor comment but approving already. I agree we need something like this.
```ts
const customUrl = await confirm({
  message: "Use custom LM Studio URL? (default: http://localhost:1234/v1)",
  initialValue: false,
});

if (isCancel(customUrl)) {
  cancel("Operation cancelled.");
  process.exit(0);
}

let baseURL = "http://localhost:1234/v1";

if (customUrl) {
  const urlInput = await text({
    message: "Enter LM Studio server URL",
    placeholder: "http://localhost:1234/v1",
  });

  if (isCancel(urlInput)) {
    cancel("Operation cancelled.");
    process.exit(0);
  }

  baseURL = urlInput || "http://localhost:1234/v1";
}
```
Instead of asking two questions, we could ask for the LM Studio URL directly and prefill it with the default, no?
I know we said we could just use the AI SDK, but if we want to test fine-tunes we can't use that, so I still think it's good to have a way to do this, since LM Studio can run anything from HF.
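A sketch of the suggested single-prompt flow, assuming `@clack/prompts` (whose `text` options include `initialValue` for prefilling the input):

```ts
// One prompt instead of confirm + text: the default URL is prefilled,
// so accepting it or typing a custom one is the same single question.
import { text, isCancel, cancel } from "@clack/prompts";

const urlInput = await text({
  message: "LM Studio server URL",
  initialValue: "http://localhost:1234/v1",
});

if (isCancel(urlInput)) {
  cancel("Operation cancelled.");
  process.exit(0);
}

const baseURL = urlInput;
```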