Introducing MockGPT: Mock the OpenAI API when Developing and Testing LLM-Powered Apps

Uri Maoz
Co-Founder and CEO
August 2, 2023

The release of ChatGPT in late 2022 piqued the curiosity of thousands of developers, who immediately asked: what can I build with this? Within weeks, searches for the OpenAI API soared, far surpassing some of the most commonly used APIs, such as Stripe and Salesforce:

Image source: Google Trends

Everyone is trying to build the next great generative AI app - and OpenAI’s simple and powerful APIs are a natural starting point. However, developing against these APIs is not without challenges - including high costs, unpredictable responses, and long wait times. These can make development and testing finicky and prohibitively expensive, and slow down time to market for new GPT-powered applications.

To help tackle these challenges, today we are introducing MockGPT: A new, pre-built mock module for developing and testing OpenAI-powered applications.

What is MockGPT?

MockGPT is a WireMock-powered mock module you can use to simulate OpenAI APIs such as ChatGPT and GPT-3. This allows you to write and test your generative AI app without the high fees and frustrating wait times of working with the live API. 

You can set the mock API to return canned responses, which allows you to develop the rest of your application around code that’s identical to what you’ll run in production (rather than relying on workarounds such as using dummy variables). 
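In practice, the switch between the mock and the live endpoint can be a single configuration value, so no application code changes between environments. A minimal sketch, assuming an environment-variable convention of our own choosing (the mock URL below is a placeholder - use the base URL shown in your MockGPT dashboard, not this one):

```python
import os

# Placeholder mock endpoint - substitute the base URL from your own
# MockGPT / WireMock Cloud dashboard.
MOCK_BASE_URL = "https://example.wiremockapi.cloud/v1"
LIVE_BASE_URL = "https://api.openai.com/v1"


def openai_base_url() -> str:
    """Pick the API base URL from the environment.

    Call this once at startup; every other line of your app stays
    identical whether it talks to the mock or to the live API.
    """
    if os.environ.get("USE_MOCKGPT") == "1":
        return MOCK_BASE_URL
    return LIVE_BASE_URL
```

With the `openai` Python library, the chosen URL would then be passed as the client's API base, and the rest of the codebase never knows the difference.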

By opening a free WireMock Cloud account, you can create more detailed testing scenarios such as returning different responses to different prompts, adding delays, or introducing controlled unpredictability (chaos engineering).
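As an illustration, a WireMock stub mapping like the one below returns a specific canned completion only when the prompt matches, after a two-second delay. The field names (`bodyPatterns`, `matchesJsonPath`, `fixedDelayMilliseconds`) are standard WireMock stub-mapping JSON; the URL path, prompt text, and reply are made up for this sketch:

```json
{
  "request": {
    "method": "POST",
    "urlPath": "/v1/chat/completions",
    "bodyPatterns": [
      { "matchesJsonPath": "$.messages[?(@.content == 'Hello!')]" }
    ]
  },
  "response": {
    "status": 200,
    "fixedDelayMilliseconds": 2000,
    "jsonBody": {
      "object": "chat.completion",
      "choices": [
        {
          "index": 0,
          "message": { "role": "assistant", "content": "Hi! (canned reply)" },
          "finish_reason": "stop"
        }
      ]
    }
  }
}
```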

Which problems does MockGPT solve?

OpenAI’s GPT-4 is probably the most advanced LLM on the market today, and offers powerful capabilities for chatbots, virtual assistants, and coding tasks. But if you’ve tried building on the OpenAI API, you’ve probably run into a few major hurdles:

  • It’s expensive! The cost of an OpenAI API call is typically measured in cents - but with longer prompts and more advanced models (such as GPT-4), a single call can approach $1. These costs rack up quickly when you’re running the same code hundreds of times in unit and integration tests.
  • Long wait times for responses: OpenAI can take a while to return results - and this too gets worse when you’re working with longer prompts and newer models. A wait time of 15-45 seconds is not unusual.
  • Unpredictable responses: The non-deterministic nature of LLM responses is part of the ‘magic’ that makes each conversation feel fresh. But it also makes development very difficult, since the response might break your downstream code.
  • Rate limits: OpenAI rate limits both requests and tokens, and these limits are very easy to hit when making asynchronous requests.

For all of these reasons, it’s very hard to implement normal CI/CD processes (such as continuous testing) when developing against the OpenAI API. MockGPT makes this possible by providing predictable, fast, and free API responses that work with your existing codebase.
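Predictable responses mean your unit tests can assert against a known payload on every CI run. A sketch of what that looks like - the `reply_text` helper is our own illustration, and the canned payload simply follows the shape of an OpenAI chat-completion response:

```python
def reply_text(completion: dict) -> str:
    """Extract the assistant's message from a chat-completion response."""
    return completion["choices"][0]["message"]["content"]


# A canned response, as the mock might return it (the shape follows the
# OpenAI chat-completions format; the content here is made up).
CANNED = {
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello from the mock!"},
            "finish_reason": "stop",
        }
    ],
}


def test_reply_text():
    # Deterministic: the same canned payload yields the same reply on
    # every run - no API fees, no latency, no rate limits.
    assert reply_text(CANNED) == "Hello from the mock!"
```

Because the payload never changes, this test passes identically on a laptop and in a CI pipeline, which is exactly what the live API cannot guarantee.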

How to get started:

