Usage

Image Generation

The image generations endpoint allows you to create an original image given a text prompt.

Generate an image
from openai import OpenAI

# Set the API key, base URL, and model
be_api_key = "BE_API_KEY"
base_url = "https://api.blockentropy.ai/v1"
model = "be-stable-diffusion-xl-base-1.0"

client = OpenAI(
    base_url=base_url,
    api_key=be_api_key
)

# We repurpose "n" as the seed for SDXL
response = client.images.generate(
  model=model,
  prompt="a photorealistic(1.5) image of a cat playing with a (ball)0.5 in a (forest)1.0",
  size="512x512",
  quality="standard",
  n=2,
  response_format="url" # This field is optional, the default value is url
)

image_url = response.data[0].url

Example Stable Diffusion XL generations

Prompt: "A photorealistic image of a cat playing with a ball." (generated image not shown)

Each image can be returned either as a URL or as Base64-encoded data (b64_json), controlled by the response_format parameter. URLs expire after an hour.
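
To work with Base64 data instead of URLs, request b64_json and decode the payload yourself. The sketch below assumes the client, model, and prompt from the example above; the output filename is arbitrary.

Decode a Base64 response
import base64

# Request Base64-encoded image data instead of a URL
response = client.images.generate(
  model=model,
  prompt="a photorealistic image of a cat playing with a ball in a forest",
  size="512x512",
  n=1,
  response_format="b64_json"
)

# Decode the payload and write it to disk
image_bytes = base64.b64decode(response.data[0].b64_json)
with open("cat.png", "wb") as f:
    f.write(image_bytes)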

Image Edits

Also known as "inpainting", the image edits endpoint allows you to edit or extend an image by uploading an image, and optionally a mask, indicating which areas should be replaced.

Image edits
from openai import OpenAI
from pathlib import Path

# Set the API key, base URL, and model
be_api_key = "BE_API_KEY"
base_url = "https://api.blockentropy.ai/v1"
model = "be-stable-diffusion-xl-base-1.0"

client = OpenAI(
    base_url=base_url,
    api_key=be_api_key
)

# We repurpose "n" as the seed for SDXL
response = client.images.edit(
  model=model,
  prompt="remove dog",
  size="512x512",
  image=Path("img.png"),
  n=1,
  response_format="url" # This field is optional, the default value is url
)

image_url = response.data[0].url

Example image edit (input image, mask, and output not shown)

Prompt: "a sunlit indoor lounge area with a pool containing a flamingo"

The uploaded image and mask must both be square PNG images less than 4MB in size, and also must have the same dimensions as each other. The non-transparent areas of the mask are not used when generating the output, so they don’t necessarily need to match the original image like the example above.
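
If you supply a mask explicitly, pass it alongside the image. The sketch below assumes the same client and model as above; img.png and mask.png are placeholder filenames, and the transparent areas of the mask mark the region to repaint.

Image edit with a mask
from pathlib import Path

response = client.images.edit(
  model=model,
  prompt="a sunlit indoor lounge area with a pool containing a flamingo",
  image=Path("img.png"),      # placeholder filename
  mask=Path("mask.png"),      # placeholder filename; transparent areas are repainted
  size="512x512",
  n=1,
  response_format="url"
)

image_url = response.data[0].url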

ControlNet with OpenPose

Generate a new image from a prompt, reusing the pose detected in a provided reference image.

ControlNet with OpenPose
from openai import OpenAI
from pathlib import Path

# Set the API key, base URL, and model
be_api_key = "BE_API_KEY"
base_url = "https://api.blockentropy.ai/v1"
model = "be-stable-diffusion-xl-base-1.0"

client = OpenAI(
    base_url=base_url,
    api_key=be_api_key
)

# We repurpose "n" as the seed for SDXL
response = client.images.edit(
  model=model,
  prompt="Darth vader dancing in the street",
  size="512x512",
  image=Path("person.png"),
  n=1,
  response_format="url", # This field is optional, the default value is url
  user="cn" # This parameter is required to be set to "cn" to use controlnet
)

image_url = response.data[0].url_image
pose_url = response.data[0].url_pose
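
The response exposes both the generated image and the detected pose as URLs. As a minimal sketch (assuming the URLs stay reachable within their one-hour lifetime), both can be saved locally with the standard library:

Download the outputs
import urllib.request

# Save the generated image and the extracted pose map
urllib.request.urlretrieve(image_url, "output.png")
urllib.request.urlretrieve(pose_url, "pose.png")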

Example ControlNet generation (reference image, output, and pose image not shown)

Prompt: "A ballerina standing in the street, high quality and photorealistic"
