OpenAI Provider

The OpenAIProvider in the LaraUtilX package provides a standardized way to interact with OpenAI's GPT models for chat completions.

Methods

  1. generateResponse(
         string $modelName,
         array $messages,
         ?float $temperature = null,
         ?int $maxTokens = null,
         ?array $stop = null,
         ?float $topP = null,
         ?float $frequencyPenalty = null,
         ?float $presencePenalty = null,
         ?array $logitBias = null,
         ?string $user = null,
         ?bool $jsonMode = false,
         bool $fullResponse = false
     ): OpenAIResponse
    • Generates a chat completion using the specified model and parameters.

Usage

  1. Injecting the Provider: The provider is auto-bound to the LLMProviderInterface, so you can inject it anywhere in your Laravel app:

    use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;
    
     class MyController extends Controller
     {
         public function ask(LLMProviderInterface $llm)
         {
             $response = $llm->generateResponse(
                 'gpt-3.5-turbo',
                 [
                     ['role' => 'user', 'content' => 'What is Laravel?']
                 ]
             );

             return $response->getContent();
         }
     }

     Alternatively, you can inject it via the constructor:

    use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;
    
    class MyController extends Controller
    {
        private LLMProviderInterface $llm;
    
        // Inject the provider in the constructor
        public function __construct(LLMProviderInterface $llm)
        {
            $this->llm = $llm;
        }
    
        public function ask()
        {
            $response = $this->llm->generateResponse(
                'gpt-3.5-turbo',
                [
                    ['role' => 'user', 'content' => 'What is Laravel?']
                ]
            );
    
            return $response->getContent();
        }
    }
  2. Customizing Parameters: The common OpenAI chat completion parameters are exposed as named arguments:

     $response = $llm->generateResponse(
         'gpt-3.5-turbo',
         [
             ['role' => 'user', 'content' => 'Tell me a joke.']
         ],
         temperature: 0.8,
         maxTokens: 100,
         stop: ["\n"], // double quotes so the stop sequence is a real newline, not a literal \n
         topP: 1.0,
         frequencyPenalty: 0.0,
         presencePenalty: 0.0,
         logitBias: null,
         user: 'user-123',
         jsonMode: false,
         fullResponse: true
     );
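
     Assuming jsonMode switches the request to OpenAI's JSON response format (which requires the word "JSON" to appear somewhere in your messages), the returned content can be decoded directly:

     // jsonMode: true is assumed here to map to OpenAI's JSON response format.
     $response = $llm->generateResponse(
         'gpt-3.5-turbo',
         [
             ['role' => 'user', 'content' => 'List three PHP frameworks as a JSON array of names.']
         ],
         jsonMode: true
     );

     // The content should now be a parseable JSON string.
     $frameworks = json_decode($response->getContent(), true);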

Result

The method returns an OpenAIResponse instance, which provides:

  • getContent(): The generated text from the model.
  • getModel(): The model used for the response.
  • getUsage(): Token usage information.
  • getRawResponse(): The full raw response object from OpenAI.

Example:

$content = $response->getContent();
$model = $response->getModel();
$usage = $response->getUsage();
$raw = $response->getRawResponse();


Success Result:

  • The response contains the generated content and, if fullResponse is true, metadata such as model and usage.

Failure Result:

  • The provider retries a failed request up to the configured number of times; if every attempt fails, an exception is thrown.

Configuration

All configuration for the provider lives in config/lara-util-x.php:

'openai' => [
    'api_key' => env('OPENAI_API_KEY'),
    'max_retries' => env('OPENAI_MAX_RETRIES', 3),
    'retry_delay' => env('OPENAI_RETRY_DELAY', 2),
    'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-3.5-turbo'),
    // ...other defaults
],

Add your API key to your .env file:

OPENAI_API_KEY=sk-...
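
You can override the retry behavior and default model the same way, using the environment variables shown in the config above:

OPENAI_MAX_RETRIES=3
OPENAI_RETRY_DELAY=2
OPENAI_DEFAULT_MODEL=gpt-3.5-turbo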

Error Handling

  • The provider automatically retries failed requests up to max_retries.
  • If all retries fail, an exception is thrown. Catch it in your controller or service, as sketched below.
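
For example, a minimal sketch of handling a failed request (the exact exception class depends on the underlying client, so the generic \Exception is caught here):

use Illuminate\Support\Facades\Log;

try {
    $response = $llm->generateResponse(
        'gpt-3.5-turbo',
        [
            ['role' => 'user', 'content' => 'What is Laravel?']
        ]
    );

    return $response->getContent();
} catch (\Exception $e) {
    // All retries were exhausted; log and degrade gracefully.
    Log::error('OpenAI request failed: ' . $e->getMessage());

    return response()->json(['error' => 'LLM temporarily unavailable'], 503);
}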

Extending

To add support for other LLM providers:

  • Implement the LLMProviderInterface in your own provider class.
  • Bind your provider in a service provider if you want to swap implementations, as in the sketch below.
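
As a minimal sketch, a hypothetical MyCustomProvider that implements the interface can be swapped in from any service provider:

use App\LLM\MyCustomProvider; // hypothetical custom implementation of the interface
use Illuminate\Support\ServiceProvider;
use omarchouman\LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class AppServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Resolve the interface to your provider instead of the default OpenAI one.
        $this->app->bind(LLMProviderInterface::class, MyCustomProvider::class);
    }
}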

This utility simplifies interacting with OpenAI’s chat models, providing a convenient, robust, and extensible way to generate completions in your Laravel application.