The OpenAIProvider in the LaraUtilX package provides a standardized way to interact with OpenAI's GPT models for chat completions.
Method Signature:
generateResponse(
    string $modelName,
    array $messages,
    ?float $temperature = null,
    ?int $maxTokens = null,
    ?array $stop = null,
    ?float $topP = null,
    ?float $frequencyPenalty = null,
    ?float $presencePenalty = null,
    ?array $logitBias = null,
    ?string $user = null,
    ?bool $jsonMode = false,
    bool $fullResponse = false
): OpenAIResponse
Injecting the Provider: The provider is auto-bound to the LLMProviderInterface, so you can inject it anywhere in your Laravel app:
use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    public function ask(LLMProviderInterface $llm)
    {
        $response = $llm->generateResponse(
            'gpt-3.5-turbo',
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
Alternatively, inject the provider through the constructor:

use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    private LLMProviderInterface $llm;

    // Inject the provider in the constructor
    public function __construct(LLMProviderInterface $llm)
    {
        $this->llm = $llm;
    }

    public function ask()
    {
        $response = $this->llm->generateResponse(
            'gpt-3.5-turbo',
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
Customizing Parameters: You can pass all of OpenAI's chat completion parameters:
$response = $llm->generateResponse(
    'gpt-3.5-turbo',
    [
        ['role' => 'user', 'content' => 'Tell me a joke.']
    ],
    temperature: 0.8,
    maxTokens: 100,
    stop: ["\n"], // double quotes so PHP interprets the newline escape
    topP: 1.0,
    frequencyPenalty: 0,
    presencePenalty: 0,
    logitBias: null,
    user: 'user-123',
    jsonMode: false,
    fullResponse: true
);
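When jsonMode is true, the request asks the model to return valid JSON (OpenAI's JSON mode), which you can decode directly. A minimal sketch; the prompt and decoding below are illustrative, not part of the package:

$response = $llm->generateResponse(
    'gpt-3.5-turbo',
    [
        ['role' => 'user', 'content' => 'List three PHP frameworks as a JSON object with a "frameworks" array.']
    ],
    jsonMode: true
);

// Decode the JSON string the model returned.
$data = json_decode($response->getContent(), true);
$frameworks = $data['frameworks'] ?? [];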
The method returns an OpenAIResponse instance, which provides:

- getContent(): The generated text from the model.
- getModel(): The model used for the response.
- getUsage(): Token usage information.
- getRawResponse(): The full raw response object from OpenAI.

Example:
$content = $response->getContent();     // generated text
$model   = $response->getModel();       // model used
$usage   = $response->getUsage();       // token usage
$raw     = $response->getRawResponse(); // full raw OpenAI response
Success Result: The response exposes the generated content; when fullResponse is true, metadata such as the model and token usage is also included.
Failure Result: Failed requests are retried automatically according to the max_retries and retry_delay settings shown in the configuration below.
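If a request still fails after all retries, you can guard the call yourself. A minimal sketch; the exact exception type the provider throws is an assumption here, so it catches \Throwable to stay general:

use Illuminate\Support\Facades\Log;

try {
    $response = $llm->generateResponse('gpt-3.5-turbo', [
        ['role' => 'user', 'content' => 'What is Laravel?'],
    ]);
    $content = $response->getContent();
} catch (\Throwable $e) {
    // Log and degrade gracefully if the API call ultimately fails.
    Log::error('LLM request failed: ' . $e->getMessage());
    $content = null;
}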
All configuration for the provider lives in config/lara-util-x.php:
'openai' => [
    'api_key' => env('OPENAI_API_KEY'),
    'max_retries' => env('OPENAI_MAX_RETRIES', 3),
    'retry_delay' => env('OPENAI_RETRY_DELAY', 2),
    'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-3.5-turbo'),
    // ...other defaults
],
Add your API key to your .env file:
OPENAI_API_KEY=sk-...
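At runtime you can read these values through Laravel's config helper. A small sketch, assuming the package config is published under the lara-util-x key (matching the file name above):

// Fall back to gpt-3.5-turbo if no default model is configured.
$model = config('lara-util-x.openai.default_model', 'gpt-3.5-turbo');

$response = $llm->generateResponse($model, [
    ['role' => 'user', 'content' => 'Hello!'],
]);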
Failed API calls are retried automatically; the number of attempts is controlled by max_retries.

This utility simplifies the process of interacting with OpenAI's chat models, providing a convenient, robust, and extensible way to generate completions in your Laravel application.

Extending to Other Providers: To add support for other LLM providers, implement the LLMProviderInterface in your own provider class.
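A skeletal sketch of a custom provider; it assumes the interface requires the generateResponse() signature shown earlier, including the OpenAIResponse return type, so adjust it to match the actual contract:

use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

// Also import the package's OpenAIResponse class (namespace per the package).
class MyCustomProvider implements LLMProviderInterface
{
    public function generateResponse(
        string $modelName,
        array $messages,
        ?float $temperature = null,
        ?int $maxTokens = null,
        ?array $stop = null,
        ?float $topP = null,
        ?float $frequencyPenalty = null,
        ?float $presencePenalty = null,
        ?array $logitBias = null,
        ?string $user = null,
        ?bool $jsonMode = false,
        bool $fullResponse = false
    ): OpenAIResponse {
        // Call your provider's API here and wrap the result in the
        // response object the interface expects.
        throw new \RuntimeException('Not implemented yet.');
    }
}

Then rebind the interface in a service provider so your implementation is injected instead:

// In AppServiceProvider::register(), for example:
$this->app->bind(LLMProviderInterface::class, MyCustomProvider::class);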