Superpower Your Applications with AI APIs

by Janet Wagner on May 30, 2023 · 10 min read

In a recent blog post, we talked about using large language models (LLMs) to supercharge your API program. Today, we’re delving into different ways you could use AI in applications — specifically, what you could build with LLMs. We’ll also cover key factors you should consider when deciding whether to build your own AI API or use a third-party AI vendor API.

What Could You Build With AI APIs?

The possibilities AI brings to application development are endless. Let’s start by examining how some companies have created real-world applications driven by LLMs. The examples below use LLMs developed by OpenAI, which provides its AI models through the OpenAI API (a minimal sketch of calling that API follows the list).

  • Conversational customer service — Instacart has leveraged OpenAI’s platform to develop a new feature called “Ask Instacart”, set to be released later this year. Customers can ask the chatbot relevant questions when creating grocery lists. Similarly, Shopify has launched a ChatGPT-powered assistant that users can chat with on a wide range of topics.
  • Auto-generated programming code — At the recent Hannover Messe Industrial Fair, Siemens announced a partnership with Microsoft that includes work involving ChatGPT and Azure AI services to augment automation engineering. The company demonstrated how engineers can use natural language inputs to generate PLC code. This code automation helps save time and reduce errors.
  • Healthcare paperwork app — Doximity has created an application with ChatGPT that lets healthcare providers generate documents based on prompts from a library or by creating new prompts. For example, a doctor could use the tool to quickly create a letter of necessity for a specific medical condition, and then use the same tool to digitally fax it to the appropriate insurer. This app helps healthcare professionals save valuable time by streamlining administrative tasks.
  • Auto-generated personalized emails — Salesforce announced Einstein GPT, integrated with ChatGPT, which can automatically generate personalized emails for marketing and sales. Meanwhile, Microsoft announced the launch of Microsoft 365 Copilot — powered by multiple LLMs, including GPT-4 — which can draft email replies and summarize email threads.
  • AI-powered virtual tutor — Khan Academy has developed Khanmigo, an AI-powered guide that serves as a one-on-one tutoring coach for students and as an assistant for teachers. Khanmigo is powered by GPT-4 and is currently in a pilot phase with a waiting list.
  • Enhanced language lessons — Duolingo has integrated GPT-4 into its Duolingo Max offering, introducing a specialized conversation practice feature and a new “Explain My Answer” feature, which walks users through the relevant rules when they make mistakes.
  • AI video script writing — Waymark has integrated GPT-3 with its product to provide users with an easy way to create personalized video scripts. The AI generates custom, relevant video advertising scripts in seconds, allowing users to edit the generated scripts rather than creating them from scratch.
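
Under the hood, each of these products comes down to sending a prompt to a model endpoint and working with the response. As a rough illustration, here’s a minimal Python sketch of calling OpenAI’s chat completions API with the requests library; the model name and the prompt are placeholders, and you’d need to supply your own API key.

```python
import os
import requests

# Minimal sketch of calling OpenAI's chat completions endpoint.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# and prompt are illustrative placeholders.
API_URL = "https://api.openai.com/v1/chat/completions"

def ask_llm(prompt: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_llm("Suggest three sides for a taco night grocery list."))
```

In a real application, you’d wrap a call like this with error handling, retries, and prompt templates, but the core request-and-response loop stays this simple.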

While all of these examples use OpenAI LLMs, you don’t need to use an LLM to integrate AI capabilities into your application. You can find platforms and products that provide access to smaller AI models, each designed for a specific domain or task. Additionally, you have the option to build your own LLM or multiple smaller AI models.

By applying AI to specific tasks and use cases, you can create innovative, valuable applications that solve real-world problems and that users will enjoy!

Adding AI Capabilities to Your Application

If you want to add AI capabilities to an application, platform, or system, you’ll typically use an API. There are two parts to adding AI capabilities to your application:

  • The AI — An AI model or set of AI models that perform specific tasks or functions.
  • An API — To incorporate the AI functionality into your application.

You can either build an AI API in-house or buy an existing one from a vendor. Alternatively, you could adopt a hybrid approach by creating some AI capabilities in-house and purchasing others from a third party.
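
Whichever route you take, it can help to keep the AI behind a thin abstraction in your application so you can swap an in-house model for a vendor API, or mix the two, without rewriting application code. The sketch below is illustrative only; the class names, the vendor endpoint, and the response shape are assumptions, not any particular vendor’s API.

```python
from typing import Protocol
import requests

class TextGenerator(Protocol):
    """Anything that can turn a prompt into generated text."""
    def generate(self, prompt: str) -> str: ...

class VendorLLM:
    """Buy: call a third-party AI API (endpoint and payload are hypothetical)."""
    def __init__(self, api_url: str, api_key: str):
        self.api_url = api_url
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        resp = requests.post(
            self.api_url,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["text"]

class InHouseModel:
    """Build: wrap a model you host yourself (stubbed here for illustration)."""
    def generate(self, prompt: str) -> str:
        return f"[in-house model output for: {prompt}]"

def summarize(generator: TextGenerator, document: str) -> str:
    # Application code depends on the interface, not on build vs. buy.
    return generator.generate(f"Summarize this document:\n{document}")
```

With a seam like this in place, a hybrid approach becomes a configuration decision rather than a rewrite.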

Building Your Own AI Solution

Whether you’re deciding to build or buy an AI solution, there are many factors to consider — far too many to cover in this blog post. However, if you are considering developing your own AI solution, here are a few things to keep in mind:

  • The initial and ongoing costs of developing AI — Building an AI solution in-house requires a substantial investment in terms of time and finances. In addition to the proper infrastructure and computing power, you need AI expertise, domain knowledge, and training data. You also need to train and retrain the models, among other things. An LLM like GPT-3 can cost more than $4 million to train.
  • The talent required to build AI — Building AI solutions requires AI talent, and we’ve seen a substantial shortage of available AI talent for quite a few years. With the skyrocketing popularity of LLMs, companies looking to build them will need to find people with specialized AI skills, such as deep learning and natural language processing (NLP). You also shouldn’t forget about a “prompt engineer,” a role that tackles the challenge of creating effective prompts for LLMs!
  • Model training data — AI models require a LOT of high-quality training data; LLMs in particular are typically trained on corpora measured in hundreds of gigabytes or more. Where are you going to get the training data for your model? If you get your data from an open repository like Common Crawl — where the datasets are noisy and contain unwanted content — you’ll also need someone capable of cleaning and preparing that data for model training (a small cleanup sketch follows this list).
  • Ensuring the quality of the AI model — Ensuring quality control is crucial when developing AI models, especially LLMs, as they can sometimes produce unpredictable results! You need to ensure you can adequately validate all your AI models so that they perform reliably.
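
To make the training-data point above concrete, here’s a very rough Python sketch of the kind of cleanup a raw web corpus needs before it’s usable. Real pipelines for sources like Common Crawl do far more (language detection, large-scale deduplication, toxicity filtering, PII removal), so treat this as a starting point, not a recipe.

```python
import re

def clean_corpus(raw_documents, min_words: int = 20):
    """Very rough cleanup pass over raw web text before model training."""
    seen = set()
    cleaned = []
    for doc in raw_documents:
        text = re.sub(r"<[^>]+>", " ", doc)        # strip leftover HTML tags
        text = re.sub(r"\s+", " ", text).strip()   # normalize whitespace
        if len(text.split()) < min_words:          # drop very short fragments
            continue
        fingerprint = text.lower()
        if fingerprint in seen:                    # drop exact duplicates
            continue
        seen.add(fingerprint)
        cleaned.append(text)
    return cleaned
```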

This list is just the tip of the iceberg! There are so many things to consider when setting out to create your own AI models. If you do decide to build the AI for your application, you’ll also need to create an API to integrate it.

Building Your Own AI API

Once you’ve figured out how to build your AI solution, you’ll need to tackle how to build the API for it. When building your own API, there are several factors to take into account (a minimal serving sketch follows the list), such as:

  • API design approach — We recommend an API design-first approach as it can help you produce consistent and reliable APIs in a cost-effective and time-saving way. Consider using an API specification like OpenAPI to work out the details of your API before writing the code for it.
  • Developer experience — Developer experience always matters, whether the API is for internal use only or external consumers. You need to ensure developers have the tools they need to successfully get to ‘Hello World.’ Tools could include API documentation, SDKs in multiple languages, and developer guides.
  • API governance — Some industries require certain levels of standardization, and an API governance program can help you meet those requirements. Governance also helps ensure consistency across all the APIs your organization builds.
  • Collaboration tools — You need to collaborate effectively with all API stakeholders and get critical feedback on design decisions promptly. Tools like Stoplight’s Discussions can help you improve communication among stakeholders and enable asynchronous collaboration.
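
To make this concrete, here’s a minimal sketch of what serving an in-house model behind an HTTP API might look like. FastAPI and a Hugging Face pipeline are used as stand-ins (assumptions, not requirements); in a design-first workflow, you’d write and review the OpenAPI document for this endpoint before writing any of this code.

```python
# A minimal sketch of an in-house AI API. FastAPI and the transformers
# pipeline are illustrative choices, not the only way to do this.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Sentiment API", version="0.1.0")
classifier = pipeline("sentiment-analysis")  # downloads a small default model

class SentimentRequest(BaseModel):
    text: str

class SentimentResponse(BaseModel):
    label: str
    score: float

@app.post("/v1/sentiment", response_model=SentimentResponse)
def classify(request: SentimentRequest) -> SentimentResponse:
    result = classifier(request.text)[0]
    return SentimentResponse(label=result["label"], score=result["score"])

# FastAPI also serves a generated OpenAPI document at /openapi.json,
# which you can compare against the spec you designed up front.
```

You’d run this with an ASGI server such as uvicorn, then layer documentation, SDKs, and governance checks on top of the published OpenAPI document.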

If building your own AI API seems a bit daunting, you can add AI capabilities to your application with a third-party AI solution. However, developing your own AI API provides some key benefits, including:

  • Control — When you build your own AI API, you control every aspect of its design and development. You control the data used to train your AI model. You don’t have to adhere to an AI vendor’s terms of service or restrictions. It’s your AI and API, so you set the rules and standards for both.
  • Visibility — Most AI vendors won’t give you many details on the data used to train their models or offer any insights on data pre-processing. When you build the model yourself, you know where the data came from and how it was processed. You can ensure your model gets diverse training data, which helps reduce bias in the model’s output.

If you want to experiment with AI or want a faster way to add AI to your application, consider buying an AI solution with an API.

Should You Use an AI Vendor?

Using a third-party API to add AI capabilities to your application is generally more straightforward and faster than building your own AI API. You don’t have to develop the AI solution or API from scratch because the vendor has already done that for you. You also don’t have to worry about maintaining the AI and API infrastructure.

With that said, there are some drawbacks to using an AI vendor, such as:

  • You’re subject to their terms of service, which can change at any time. You may also run into API rate-limiting issues (see the retry sketch after this list).
  • Many LLMs are black boxes. You have no insights into where the training data came from or how the model was trained, which means the model could have biases or lack representation in its output.
  • Pricing is often confusing or based on multiple usage metrics, which can lead to unexpectedly high costs. For example, you could pay usage-based prices for LLM prompts, completions, or tokens, plus a separate subscription cost for API access.
  • No platform is entirely immune to outages. An outage at your AI vendor could temporarily slow down or break your application.
  • The AI vendor could eventually go out of business or get acquired by a larger enterprise. Building your own AI solution and API ensures greater control and stability in the long run.
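
You can soften some of these operational risks in code. The sketch below shows one common pattern: retrying a vendor call with exponential backoff when the API returns a rate-limit or server error. The endpoint and payload shape are placeholders, not any specific vendor’s API.

```python
import time
import requests

def call_vendor_api(url: str, payload: dict, api_key: str,
                    max_retries: int = 5) -> dict:
    """Call a third-party AI API, retrying on rate limits and outages."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.post(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
            timeout=30,
        )
        # 429 = rate limited, 5xx = vendor-side trouble; both are worth retrying.
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(delay)
            delay *= 2  # exponential backoff before the next attempt
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"Vendor API still failing after {max_retries} attempts")
```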

While using a third-party AI API does have some drawbacks, this option may work well for your situation. If you want to go big and create an application powered by an LLM, you have a growing list of commercial and open-source options. For example, you can find commercial LLM API options from OpenAI, AI21, and Cohere. All three companies offer a free trial.

Hugging Face is an AI community and repository for open-source models and datasets, where you can find a wide range of open-source AI models, including LLMs such as BLOOM, an open-source autoregressive LLM. You can integrate these open-source models through the Hugging Face Hosted Inference API, which has both free and paid subscription tiers.
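
As a rough illustration, calling a hosted open-source model through the Hugging Face Inference API can be as simple as the sketch below; the model ID, token variable, and response handling are assumptions to adapt for your own case.

```python
import os
import requests

# Illustrative call to the Hugging Face Inference API. The model ID is an
# example, and HF_API_TOKEN is assumed to hold your access token; the exact
# response shape varies by task and model.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def generate(prompt: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"},
        json={"inputs": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()[0]["generated_text"]

print(generate("APIs are useful because"))
```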

Building an AI API? Get Better Results With Stoplight!

Nowadays, you have a wealth of tools available to help you build your own AI solution and an API to go with it. With Stoplight’s collaborative design and documentation platform for APIs, you can go beyond creating an internal AI API to power your application. You can use Stoplight’s tools to treat your API as a Product, offering your AI solution as an API to developers outside your organization. Stoplight tools can help you build a consistent, reliable API — regardless of the API type or use case.
