
How to use OpenAI to verify the content
Interesting directions for bloggers or online shopping stores to explore with OpenAI:
- Automatically rewrite content (fix misspellings, rephrase, add more information and SEO links automatically, versus the original article written by a human)
- Categorize content (prioritize, group the content by common tags, filter/delete/hide low-value content)
- Check whether the content provides value to your target users or is junk
- Cost of AI vs. human work
- Tuning parameters and other factors
- Conclusions
I. How you can use OpenAI models
You can call the OpenAI API from Java, Python, curl, or any other programming language you want:
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;
import org.json.JSONException;
import org.json.JSONObject;

public class OpenAIClient {

    public static JSONObject makeHttpRequest(String url, String paramsJSON) {
        HttpURLConnection con = null;
        StringBuilder result = new StringBuilder();
        JSONObject jObj = null;
        try {
            URL urlObj = new URL(url);
            con = (HttpURLConnection) urlObj.openConnection();
            // Authenticate with your OpenAI API key
            con.setRequestProperty("Authorization", "Bearer " + "your-authorization-key");
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "application/json; utf-8");
            con.setRequestProperty("Accept", "application/json");
            con.setDoOutput(true);
            con.setReadTimeout(60000);
            con.setConnectTimeout(60000);
            // Send the JSON request body
            try (OutputStream os = con.getOutputStream()) {
                byte[] input = paramsJSON.getBytes(StandardCharsets.UTF_8);
                os.write(input, 0, input.length);
            }
            int code = con.getResponseCode();
            //System.out.println("HTTP CODE " + code);

            // Receive the response from the server
            InputStream in = new BufferedInputStream(con.getInputStream());
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
            String line;
            while ((line = reader.readLine()) != null) {
                result.append(line);
            }
            // System.out.println("JSON Parser result: " + result);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.disconnect();
            }
        }
        // Try to parse the response string into a JSON object
        try {
            jObj = new JSONObject(result.toString());
        } catch (JSONException e) {
            System.out.println("JSON Parser: error parsing data " + e);
        }
        return jObj;
    }

    public static void main(String[] args) {
        JSONObject json = makeHttpRequest("https://api.openai.com/v1/chat/completions",
                "{\n" +
                "  \"model\": \"gpt-3.5-turbo\",\n" +
                "  \"messages\": [{\"role\": \"user\", \"content\": \"Your Question Here\"}]\n" +
                "}"
        );
        System.out.println("json: " + json.toString());
    }
}
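The helper above returns the raw response as a JSONObject. If you only need the model's answer, you can extract it from the chat completion response, where the reply is placed under choices[0].message.content. A small helper you could add to the class above (a sketch, using the org.json types already imported):
// Extract the assistant's reply from a chat completion response.
// Assumes the standard response shape: { "choices": [ { "message": { "content": "..." } } ] }
public static String extractChatAnswer(JSONObject response) {
    if (response == null || !response.has("choices")) {
        return null; // request failed or returned an unexpected payload
    }
    return response.getJSONArray("choices")
            .getJSONObject(0)
            .getJSONObject("message")
            .getString("content");
}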
There are two endpoints you can connect to with an HTTP request:
For GPT-3.5-turbo:
https://api.openai.com/v1/chat/completions
The JSON needed as input is something like:
"{\n" +
" \"model\": \"gpt-3.5-turbo\",\n" +
" \"messages\": [{\"role\": \"user\", \"content\": \"Your Question Here\"}] \n" +
" }"
For GPT-3.5 (text-davinci):
https://api.openai.com/v1/completions
The JSON for input is something like:
"{\n" +
" \"model\": \"text-davinci-003\",\n" +
" \"prompt\": \"Your question here\",\n" +
" \"temperature\": 0,\n" +
" \"max_tokens\": 128,\n" +
" \"top_p\": 1,\n" +
" \"frequency_penalty\": 0,\n" +
" \"presence_penalty\": 0\n" +
"}"
In Playground text mode (https://platform.openai.com/playground?mode=complete), the default is GPT-3.5 (the text-davinci-003 model, but it can be changed to other, less expensive models):

In Playground chat mode (https://platform.openai.com/playground?mode=chat), the default is GPT-3.5-turbo (at present this cannot be changed; for GPT-4 there is another page: https://chat.openai.com/chat). Here you can experiment:

II. Factors and Tuning Parameters that will change the output
The problem with OpenAI for production usage is predictability: it does not guarantee the same output in the format you expect. This creates problems when you want to scale (10k, 100k, 1M cases), because it adds the need for error handling beyond the well-known exceptions triggered by the server (read this: https://gptforwork.com/troubleshooting/errors).
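One simple mitigation (a sketch of my own, not something prescribed by OpenAI) is to wrap the call in a retry loop and treat a missing or unparsable response as a transient failure. A helper you could add next to makeHttpRequest:
// Minimal retry sketch: repeat the request a few times when the response is missing
// or does not contain the expected "choices" field, instead of failing the whole batch.
public static JSONObject makeHttpRequestWithRetry(String url, String paramsJSON, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        JSONObject response = makeHttpRequest(url, paramsJSON);
        if (response != null && response.has("choices")) {
            return response; // looks like a usable completion
        }
        try {
            Thread.sleep(1000L * attempt); // simple backoff between attempts
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    return null; // give up after maxAttempts; the caller must handle this case
}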
Temperature = 0
forces the model to produce more predictable (repeatable) output, so by default it should be 0.
End Sequence
is sometimes important because you can use it to force the model to stop generating useless output. By default it should be empty.
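Both settings go straight into the request body; the End Sequence corresponds to the "stop" field of the API. A minimal sketch for the chat endpoint (the "###" stop string is only an example):
// Request body with deterministic settings: temperature 0 and an explicit stop sequence.
// The "###" stop string is just an illustration; pick one that matches your prompt format.
String body =
        "{\n" +
        "  \"model\": \"gpt-3.5-turbo\",\n" +
        "  \"messages\": [{\"role\": \"user\", \"content\": \"Your Question Here\"}],\n" +
        "  \"temperature\": 0,\n" +
        "  \"stop\": [\"###\"]\n" +
        "}";
JSONObject json = makeHttpRequest("https://api.openai.com/v1/chat/completions", body);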
III. Verify whether the content is valid or not
The notion of validity is itself debatable: what is the definition of "valid"?
I could point you to dozens of articles on the internet, but let's not waste your time; I will give you one possible solution directly. Imagine, for a generic offer, which attributes bring value to users. Here are a few examples of attributes (you can define your own criteria/attributes; a short code sketch follows the list):
Coupon Code
Is Sitewide
Is Limited
Percent Discount
Has Minimum Spend value
Has Free Shipping
Has Free Gift
Has Restrictions
etc.
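As a rough, hypothetical sketch (my own illustration, not the article's prompt, which follows below), these attributes could also be requested programmatically as JSON through the same makeHttpRequest helper; the field names and the example offer text are assumptions:
// Hypothetical illustration: ask the model to describe an offer using the attributes above
// and return the answer as JSON. The prompt wording and field names are examples only.
String offerText = "20% off sitewide with code SAVE20, minimum spend $50, ends Sunday";
String prompt = "For the following offer, answer in JSON with the fields "
        + "couponCode, isSitewide, isLimited, percentDiscount, hasMinimumSpend, "
        + "hasFreeShipping, hasFreeGift, hasRestrictions. Offer: " + offerText;
JSONObject verdict = makeHttpRequest("https://api.openai.com/v1/chat/completions",
        new JSONObject()
                .put("model", "gpt-3.5-turbo")
                .put("temperature", 0)
                .put("messages", new org.json.JSONArray()
                        .put(new JSONObject().put("role", "user").put("content", prompt)))
                .toString());
System.out.println(verdict);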
Copy the following question (a prompt for OpenAI) and paste it into Playground Chat (GPT-3.5-turbo), or you can call this prompt through the API (as shown in another section of this article):