The LaraUtilX package provides a unified interface for interacting with multiple Large Language Model (LLM) providers. Choose between OpenAI's GPT models or Google's Gemini models through a consistent API.
Both providers support:
generateResponse(
    string $modelName,
    array $messages,
    ?float $temperature = null,
    ?int $maxTokens = null,
    ?array $stop = null,
    ?float $topP = null,
    ?float $frequencyPenalty = null,
    ?float $presencePenalty = null,
    ?array $logitBias = null,
    ?string $user = null,
    ?bool $jsonMode = false,
    bool $fullResponse = false
): Response
The provider is auto-bound to the LLMProviderInterface, so you can inject it anywhere in your Laravel app:
use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    // Method injection: Laravel resolves the bound provider for you.
    public function ask(LLMProviderInterface $llm)
    {
        $response = $llm->generateResponse(
            'gpt-3.5-turbo', // or 'gemini-2.0-flash'
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
Alternatively, inject the provider through the constructor:

use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    private LLMProviderInterface $llm;

    public function __construct(LLMProviderInterface $llm)
    {
        $this->llm = $llm;
    }

    public function ask()
    {
        $response = $this->llm->generateResponse(
            'gpt-3.5-turbo',
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
Here is a fuller controller action that reads request parameters, resolves the configured provider and model, and returns the full response as JSON:

public function testLLM(Request $request)
{
    $provider = config('lara-util-x.llm.default_provider', 'openai');
    $model = $provider === 'gemini'
        ? config('lara-util-x.gemini.default_model', 'gemini-2.0-flash')
        : config('lara-util-x.openai.default_model', 'gpt-3.5-turbo');

    $prompt = $request->input('prompt', 'Say hello in one short sentence.');
    $temperature = $request->input('temperature');
    $maxTokens = $request->input('max_tokens');
    $jsonMode = filter_var($request->input('json_mode', false), FILTER_VALIDATE_BOOL);

    $messages = [
        ['role' => 'user', 'content' => $prompt]
    ];

    $response = $this->llm->generateResponse(
        modelName: $model,
        messages: $messages,
        temperature: $temperature !== null ? (float) $temperature : null,
        maxTokens: $maxTokens !== null ? (int) $maxTokens : null,
        jsonMode: $jsonMode,
        fullResponse: true
    );

    return response()->json([
        'provider' => $provider,
        'model' => $response->getModel() ?? $model,
        'content' => $response->getContent(),
        'usage' => $response->getUsage(),
    ]);
}
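To try the action out, you can register a route for it (the path and controller name below are just examples):

// routes/api.php
use App\Http\Controllers\MyController;
use Illuminate\Support\Facades\Route;

Route::post('/llm/test', [MyController::class, 'testLLM']);

You can then POST prompt, temperature, max_tokens, and json_mode to the endpoint.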
You can use all supported chat parameters:
$response = $llm->generateResponse(
    'gpt-3.5-turbo', // or 'gemini-2.0-flash'
    [
        ['role' => 'user', 'content' => 'Tell me a joke.']
    ],
    temperature: 0.8,
    maxTokens: 100,
    stop: ["\n"],
    topP: 1.0,
    frequencyPenalty: 0,
    presencePenalty: 0,
    logitBias: null,
    user: 'user-123',
    jsonMode: false,
    fullResponse: true
);
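For structured output, you can enable jsonMode and decode the content yourself. A minimal sketch, assuming jsonMode makes the model return a JSON string as the response content:

$response = $llm->generateResponse(
    'gpt-3.5-turbo',
    [
        ['role' => 'user', 'content' => 'Describe Laravel as a JSON object with "name" and "language" keys.']
    ],
    jsonMode: true
);

// Decode the JSON string returned in the content.
$data = json_decode($response->getContent(), true);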
All configuration lives in config/lara-util-x.php:
'llm' => [
    'default_provider' => env('LLM_DEFAULT_PROVIDER', 'openai'),
],

'openai' => [
    'api_key' => env('OPENAI_API_KEY'),
    'max_retries' => env('OPENAI_MAX_RETRIES', 3),
    'retry_delay' => env('OPENAI_RETRY_DELAY', 2),
    'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-3.5-turbo'),
],

'gemini' => [
    'api_key' => env('GEMINI_API_KEY'),
    'max_retries' => env('GEMINI_MAX_RETRIES', 3),
    'retry_delay' => env('GEMINI_RETRY_DELAY', 2),
    'default_model' => env('GEMINI_DEFAULT_MODEL', 'gemini-2.0-flash'),
],
Add your API keys to your .env file:
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
LLM_DEFAULT_PROVIDER=openai
Switch between providers via configuration:
# Use OpenAI
LLM_DEFAULT_PROVIDER=openai

# Use Gemini
LLM_DEFAULT_PROVIDER=gemini
The method returns a response instance (either OpenAIResponse or GeminiResponse), which provides:

getContent(): The generated text from the model.
getModel(): The model used for the response.
getUsage(): Token usage information.
getRawResponse(): The full raw response object from the provider.

Example:
$content = $response->getContent();
$model = $response->getModel();
$usage = $response->getUsage();
$raw = $response->getRawResponse();
Success Result: the generated content and, when fullResponse is true, metadata such as the model and token usage.
Failure Result: failed requests are retried up to max_retries.
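A minimal sketch of handling the failure case, assuming the provider surfaces an exception once its retries are exhausted:

try {
    $response = $llm->generateResponse(
        'gpt-3.5-turbo',
        [
            ['role' => 'user', 'content' => 'What is Laravel?']
        ]
    );
    $content = $response->getContent();
} catch (\Throwable $e) {
    // All retries failed; report the error and fall back as appropriate.
    report($e);
    $content = null;
}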
To add support for other LLM providers, implement LLMProviderInterface in your own provider class, as sketched below.
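A minimal sketch of wiring up a custom provider, assuming you have written a class (App\LLM\MyCustomProvider is an illustrative name) that implements the generateResponse() signature shown earlier; bind it to the interface in a service provider:

// app/Providers/AppServiceProvider.php
use App\LLM\MyCustomProvider; // your own implementation of the interface
use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

public function register(): void
{
    $this->app->bind(LLMProviderInterface::class, MyCustomProvider::class);
}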
This unified system simplifies interacting with multiple LLM providers and offers a convenient, robust, and extensible way to generate AI-powered completions in your Laravel application.