Working with Images
Image Operations in Pixeltable
In this tutorial we're going to be working with a subset of the COCO dataset (10 samples each for the train, test, and validation splits). To avoid further installs, the tutorial comes pre-packaged with a data file (of JSON records) and a set of images, which we're going to download into a temp directory now:
import json
import urllib.request
import tempfile
import tqdm

download_prefix = 'https://raw.github.com/pixeltable/pixeltable/master/docs/source/data'
json_data_url = f'{download_prefix}/coco-records.json'
records = json.loads(urllib.request.urlopen(json_data_url).read().decode('utf-8'))
image_dir = tempfile.mkdtemp()
for r in tqdm.tqdm(records):
    filename = r['filepath'].split('/')[1]
    out_filepath = f'{image_dir}/{filename}'
    url = f'{download_prefix}/{r["filepath"]}'
    r['img'] = out_filepath
    del r['filepath']
    img_data = urllib.request.urlopen(url).read()
    with open(out_filepath, 'wb') as img_file:
        img_file.write(img_data)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:12<00:00, 2.48it/s]
Each data record is a dictionary with top-level fields img, tag, metadata, and ground_truth.
records[0]
{'metadata': {'width': 640, 'height': 480}, 'ground_truth': {'detections': [{'attributes': {}, 'tags': [], 'label': 'bowl', 'bounding_box': [0.0016875000000000002, 0.3910208333333333, 0.9556093750000001, 0.5954999999999999], 'supercategory': 'kitchen', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'bowl', 'bounding_box': [0.48707812500000003, 0.008979166666666667, 0.49887499999999996, 0.47641666666666665], 'supercategory': 'kitchen', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'broccoli', 'bounding_box': [0.39, 0.4776458333333334, 0.49412500000000004, 0.5105833333333334], 'supercategory': 'food', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'bowl', 'bounding_box': [0.0, 0.02814583333333333, 0.678875, 0.7815], 'supercategory': 'kitchen', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'orange', 'bounding_box': [0.5878125, 0.08408333333333333, 0.118046875, 0.0969375], 'supercategory': 'food', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'orange', 'bounding_box': [0.7277812499999999, 0.0811875, 0.090734375, 0.09722916666666667], 'supercategory': 'food', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'orange', 'bounding_box': [0.60265625, 0.15345833333333334, 0.13128125, 0.14689583333333334], 'supercategory': 'food', 'iscrowd': 0}, {'attributes': {}, 'tags': [], 'label': 'orange', 'bounding_box': [0.568828125, 0.0051875, 0.1480625, 0.14806249999999999], 'supercategory': 'food', 'iscrowd': 0}]}, 'tag': 'train', 'img': '/tmp/tmp6gr0t1dv/000000000009.jpg'}
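Judging from the record above, each `bounding_box` appears to hold normalized `[x, y, width, height]` fractions of the image size (an assumption based on the values, not stated explicitly here). A small helper (hypothetical, not part of Pixeltable) converts such a box to pixel coordinates using the `metadata` fields:

```python
def to_pixel_box(box, img_width, img_height):
    """Scale a normalized [x, y, w, h] box to pixel coordinates."""
    x, y, w, h = box
    return [x * img_width, y * img_height, w * img_width, h * img_height]

# e.g. for a 640x480 image like the one above
to_pixel_box([0.5, 0.25, 0.25, 0.5], 640, 480)  # → [320.0, 120.0, 160.0, 240.0]
```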
Data Ingest
In Pixeltable, all data resides in tables, which themselves can be organized into a directory structure.
Let's start by creating a client and an `overview` directory (the `ignore_errors` argument tells it not to raise an exception if the directory already exists):
import pixeltable as pxt
cl = pxt.Client()
cl.create_dir('overview', ignore_errors=True)
Created directory `overview`.
Creating a table
A table for our sample data requires a column for each top-level field: `img`, `tag`, `metadata`, and `ground_truth`. Instead of a file path per image we are going to store the image directly. The table schema is as follows:
schema = {
    'img': {'type': pxt.ImageType(nullable=False), 'indexed': True},
    'tag': pxt.StringType(nullable=False),
    'metadata': pxt.JsonType(nullable=False),
    'ground_truth': pxt.JsonType(nullable=True),
}
`nullable=False` means the values in this column can't be `None`, which Pixeltable will check at data insertion time (`nullable=False` is the default, so we'll leave it out from now on). `'indexed': True` tells Pixeltable to create a vector index of CLIP embeddings for the images in this column, which enables text and image similarity search. More on that later.
The available data types in Pixeltable are:
Pixeltable type | Python type
---|---
`pxt.StringType()` | `str`
`pxt.IntType()` | `int`
`pxt.FloatType()` | `float`
`pxt.BoolType()` | `bool`
`pxt.TimestampType()` | `datetime.datetime`
`pxt.JsonType()` | lists and dicts that can be converted to JSON
`pxt.ArrayType()` | `numpy.ndarray`
`pxt.ImageType()` | `PIL.Image.Image`
`pxt.VideoType()` | `str` (the file path)
We then create a table `data`:
cl.drop_table('overview.data', ignore_errors=True)
data = cl.create_table('overview.data', schema)
Created table `data`.
At this point, table `data` contains no data:
data.count()
0
Inserting data

In order to populate `data` with what's in `records`, we call the `insert()` function, which requires a list of rows, each of which is a dictionary mapping column names to column values.
data.insert(records)
Computing cells: 0%| | 0/30 [00:00<?, ?cells/s]
Inserting rows into `data`: 0 rows [00:00, ? rows/s]
Inserted 30 rows with 0 errors.
UpdateStatus(num_rows=30, num_computed_values=30, num_excs=0, updated_cols=[], cols_with_excs=[])
In Pixeltable, images are 'inserted' as file paths, and Pixeltable only stores these paths and not the images themselves, so there is no duplication of storage.
Let's look at the first 3 rows:
data.head(3)
img | tag | metadata | ground_truth |
---|---|---|---|
train | {'width': 640, 'height': 480} | {'detections': [{'tags': [], 'label': 'bowl', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.0016875000000000002, 0.3910208333333333, 0.9556093750000001, 0.5954999999999999], 'supercategory': 'kitchen'}, {'tags': [], 'label': 'bowl', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.48707812500000003, 0.008979166666666667, 0.49887499999999996, 0.47641666666666665], 'supercategory': 'kitchen'}, {'tags': [], 'label': 'broccoli', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.39, 0.4776458333333334, 0.49412500000000004, 0.5105833333333334], 'supercategory': 'food'}, {'tags': [], 'label': 'bowl', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.0, 0.02814583333333333, 0.678875, 0.7815], 'supercategory': 'kitchen'}, {'tags': [], 'label': 'orange', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.5878125, 0.08408333333333333, 0.118046875, 0.0969375], 'supercategory': 'food'}, {'tags': [], 'label': 'orange', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.7277812499999999, 0.0811875, 0.090734375, 0.09722916666666667], 'supercategory': 'food'}, {'tags': [], 'label': 'orange', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.60265625, 0.15345833333333334, 0.13128125, 0.14689583333333334], 'supercategory': 'food'}, {'tags': [], 'label': 'orange', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.568828125, 0.0051875, 0.1480625, 0.14806249999999999], 'supercategory': 'food'}]} | |
train | {'width': 640, 'height': 426} | {'detections': [{'tags': [], 'label': 'giraffe', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.602390625, 0.14091549295774647, 0.335890625, 0.6975586854460094], 'supercategory': 'animal'}, {'tags': [], 'label': 'giraffe', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.082828125, 0.836830985915493, 0.206296875, 0.12955399061032863], 'supercategory': 'animal'}]} | |
train | {'width': 640, 'height': 428} | {'detections': [{'tags': [], 'label': 'potted plant', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.32009375, 0.07247663551401869, 0.39825, 0.7572897196261682], 'supercategory': 'furniture'}, {'tags': [], 'label': 'vase', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.3711875, 0.36404205607476636, 0.26, 0.45619158878504673], 'supercategory': 'indoor'}]} |
Let's do this again, so we can demonstrate `revert()`:
data.insert(records)
Computing cells: 0%| | 0/30 [00:00<?, ?cells/s]
Inserting rows into `data`: 0 rows [00:00, ? rows/s]
Inserted 30 rows with 0 errors.
UpdateStatus(num_rows=30, num_computed_values=30, num_excs=0, updated_cols=[], cols_with_excs=[])
We have now loaded our data twice:
data.count()
60
data.revert()
data.count()
30
Data persistence

Unlike "computational containers" such as Pandas or Dask DataFrames, tables in Pixeltable are persistent. To illustrate that, let's create a new Pixeltable client and a new handle to the `data` table:
cl = pxt.Client()
The new client connects to the same Pixeltable environment, in which the `overview` directory and its table already exist. We therefore call `get_table()` to get a handle to the already present `data` table:
data = cl.get_table('overview.data')
data.count()
30
Retrieving data

Pixeltable allows filtering with the `where()` function. For example, to only look at test data:
data.where(data.tag == 'test').show(2)
img | tag | metadata | ground_truth |
---|---|---|---|
test | {'width': 640, 'height': 480} | None | |
test | {'width': 480, 'height': 640} | None |
Or at data for images that are less than 640 pixels wide:
data.where(data.metadata.width < 640).show(2)
img | tag | metadata | ground_truth |
---|---|---|---|
train | {'width': 481, 'height': 640} | {'detections': [{'tags': [], 'label': 'umbrella', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.0, 0.0783125, 0.9515176715176715, 0.6724218750000001], 'supercategory': 'accessory'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.3483991683991684, 0.25451562499999997, 0.6457588357588357, 0.726859375], 'supercategory': 'person'}]} | |
train | {'width': 381, 'height': 500} | {'detections': [{'tags': [], 'label': 'horse', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.42669291338582677, 0.45312, 0.34228346456692915, 0.36886], 'supercategory': 'animal'}, {'tags': [], 'label': 'horse', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.21443569553805775, 0.48988, 0.21971128608923882, 0.31639999999999996], 'supercategory': 'animal'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.5338320209973753, 0.52086, 0.17241469816272964, 0.14608000000000002], 'supercategory': 'person'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.31083989501312337, 0.52264, 0.14937007874015748, 0.12586], 'supercategory': 'person'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.3132283464566929, 0.66842, 0.031338582677165355, 0.06714], 'supercategory': 'person'}, {'tags': [], 'label': 'potted plant', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.5295800524934383, 0.85238, 0.18593175853018373, 0.09445999999999999], 'supercategory': 'furniture'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.7462992125984251, 0.6668, 0.028556430446194228, 0.05486], 'supercategory': 'person'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.5013123359580053, 0.66874, 0.018792650918635172, 0.04682], 'supercategory': 'person'}, {'tags': [], 'label': 'person', 'iscrowd': 0, 'attributes': {}, 'bounding_box': [0.9101312335958005, 0.6668, 0.03884514435695538, 0.01844], 'supercategory': 'person'}]} |
Pixeltable supports the standard comparison operators (`<`, `<=`, `>`, `>=`, `==`) and logical operators (`&` for and, `|` for or, `~` for not). Like in Pandas, the comparisons being combined with logical operators need to be wrapped in parentheses:
data.where((data.tag == 'test') & (data.metadata.width < 640)).show(2)
img | tag | metadata | ground_truth |
---|---|---|---|
test | {'width': 480, 'height': 640} | None |
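The parentheses matter because Python's `&` binds more tightly than the comparison operators; a quick sketch with plain integers shows how the grouping changes:

```python
# Without parentheses, `1 == 1 & 0` parses as `1 == (1 & 0)`,
# i.e. 1 == 0, which is False.
unparenthesized = (1 == 1 & 0)

# With parentheses, each comparison is evaluated first,
# then the results are combined with &.
parenthesized = (1 == 1) & (0 == 0)
```

The same grouping rules apply to Pixeltable expressions, which is why each comparison in a `where()` clause needs its own parentheses.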
Selecting output

By default, Pixeltable will retrieve all columns of a table. In order to select what you want to retrieve, use the `select()` function. Let's retrieve columns `tag` and `metadata`:
data.select(data.tag, data.metadata).show(2)
tag | metadata |
---|---|
train | {'width': 640, 'height': 480} |
train | {'width': 640, 'height': 426} |
In general, each element in `select()` needs to be a Pixeltable expression. In the previous example, the expressions were simple column references, but Pixeltable also supports most standard arithmetic operators as well as a set of type-specific functions (more on those in a bit). For example, to retrieve the total number of pixels per image:
data.select(data.tag, data.metadata.width * data.metadata.height).show(2)
tag | col_1 |
---|---|
train | 307200 |
train | 272640 |
Operations on JSON data

The previous example illustrates the use of path expressions against JSON-typed data: `width` is a field in the `metadata` column, which we can simply access as `data.metadata.width`. Another example: retrieve only the bounding boxes from the `ground_truth` column. This will come in handy later when we need to pass those bounding boxes (and not the surrounding dictionary) into a function.
data.select(data.ground_truth.detections['*'].bounding_box).show(2)
groundtruth_detectionsstar_boundingbox |
---|
[[0.0016875000000000002, 0.3910208333333333, 0.9556093750000001, 0.5954999999999999], [0.48707812500000003, 0.008979166666666667, 0.49887499999999996, 0.47641666666666665], [0.39, 0.4776458333333334, 0.49412500000000004, 0.5105833333333334], [0.0, 0.02814583333333333, 0.678875, 0.7815], [0.5878125, 0.08408333333333333, 0.118046875, 0.0969375], [0.7277812499999999, 0.0811875, 0.090734375, 0.09722916666666667], [0.60265625, 0.15345833333333334, 0.13128125, 0.14689583333333334], [0.568828125, 0.0051875, 0.1480625, 0.14806249999999999]] |
[[0.602390625, 0.14091549295774647, 0.335890625, 0.6975586854460094], [0.082828125, 0.836830985915493, 0.206296875, 0.12955399061032863]] |
The field `detections` contains a list, and the `'*'` index indicates that you want all elements in that list. You can also use standard Python list indexing and slicing operations, such as
data.select(data.ground_truth.detections[0].bounding_box).show(2)
groundtruth_detections0_boundingbox |
---|
[0.0016875000000000002, 0.3910208333333333, 0.9556093750000001, 0.5954999999999999] |
[0.602390625, 0.14091549295774647, 0.335890625, 0.6975586854460094] |
to select only the first bounding box, or
data.select(data.ground_truth.detections[::-1].bounding_box).show(2)
groundtruth_detections1_boundingbox |
---|
[[0.568828125, 0.0051875, 0.1480625, 0.14806249999999999], [0.60265625, 0.15345833333333334, 0.13128125, 0.14689583333333334], [0.7277812499999999, 0.0811875, 0.090734375, 0.09722916666666667], [0.5878125, 0.08408333333333333, 0.118046875, 0.0969375], [0.0, 0.02814583333333333, 0.678875, 0.7815], [0.39, 0.4776458333333334, 0.49412500000000004, 0.5105833333333334], [0.48707812500000003, 0.008979166666666667, 0.49887499999999996, 0.47641666666666665], [0.0016875000000000002, 0.3910208333333333, 0.9556093750000001, 0.5954999999999999]] |
[[0.082828125, 0.836830985915493, 0.206296875, 0.12955399061032863], [0.602390625, 0.14091549295774647, 0.335890625, 0.6975586854460094]] |
to select the bounding boxes in reverse.
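These path expressions follow the same indexing and slicing semantics as plain Python lists. A Pixeltable-free sketch over a hand-made `detections` list (hypothetical values) illustrates what each expression selects:

```python
detections = [
    {'label': 'bowl',   'bounding_box': [0.0, 0.03, 0.68, 0.78]},
    {'label': 'orange', 'bounding_box': [0.59, 0.08, 0.12, 0.10]},
]

# like detections['*'].bounding_box: one box per detection
all_boxes = [d['bounding_box'] for d in detections]
# like detections[0].bounding_box: the first box only
first_box = detections[0]['bounding_box']
# like detections[::-1].bounding_box: boxes in reverse order
rev_boxes = [d['bounding_box'] for d in detections[::-1]]
```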
Operations on image data

Image data has properties `width`, `height`, and `mode`:
data.select(data.img.width, data.img.height, data.img.mode).show(2)
width | height | mode |
---|---|---|
640 | 480 | RGB |
640 | 426 | RGB |
Pixeltable also has a number of built-in functions for images (these are a subset of what is available for `PIL.Image.Image`):
Image function | Description
---|---
`convert()` | Returns a converted copy of this image
`crop()` | Returns a rectangular region from this image
`effect_spread()` | Randomly spread pixels in an image
`entropy()` | Calculates and returns the entropy for the image
`filter()` | Filters this image using the given filter
`getbands()` | Returns a tuple containing the name of each band in this image
`getbbox()` | Calculates the bounding box of the non-zero regions in the image
`getchannel()` | Returns an image containing a single channel of the source image
`getcolors()` | Returns a list of colors used in this image
`getextrema()` | Gets the minimum and maximum pixel values for each band in the image
`getpalette()` | Returns the image palette as a list
`getpixel()` | Returns the pixel value at a given position
`getprojection()` | Get projection to x and y axes
`histogram()` | Returns a histogram for the image
`point()` | Maps this image through a lookup table or function
`quantize()` | Convert the image to 'P' mode with the specified number of colors
`reduce()` | Returns a copy of the image reduced factor times
`remap_palette()` | Rewrites the image to reorder the palette
`resize()` | Returns a resized copy of this image
`rotate()` | Returns a rotated copy of this image
`transform()` | Transforms this image
`transpose()` | Transpose image (flip or rotate in 90 degree steps)
These functions are invoked in the style of method calls and can be chained, as in this example, which rotates the image by 30 degrees and converts it to grayscale:
data.select(data.img.rotate(30).convert('L')).show(2)
col_0 |
---|
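Under the hood these map to the corresponding `PIL.Image.Image` methods, so the same chain can be sketched directly with Pillow on a synthetic image (assuming Pillow is installed):

```python
from PIL import Image

# Synthetic 640x480 RGB test image
img = Image.new('RGB', (640, 480), color=(200, 120, 40))

# Rotate by 30 degrees (size preserved by default), then convert to grayscale
out = img.rotate(30).convert('L')

print(out.mode, out.size)  # → L (640, 480)
```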
Image similarity search

When we created the `img` column we specified `'indexed': True`, which creates a vector index of CLIP embeddings for the images in that column. We can take advantage of that with the search function `nearest()`. First, let's get a sample image from `data`:
sample_img = data.select(data.img).show(1)[0, 0]
sample_img
`show()` returns a result set, which is a two-dimensional structure you can access with standard Python indexing operations (i.e., `[<row-idx>, <column-idx>]`). In this case, we're selecting the first column value of the first row, which is a `PIL.Image.Image`:
type(sample_img)
PIL.JpegImagePlugin.JpegImageFile
To look for images like this one, use `nearest()`:
data.where(data.img.nearest(sample_img)).select(data.img).show(2)
img |
---|
We can use the same function to look for images based on text:
data.where(data.img.nearest('car')).select(data.img).show(2)
img |
---|
User-defined functions

User-defined functions let you customize Pixeltable's functionality for your own data. In this example, we're going to use a `torchvision` object detection model (Faster R-CNN) against the images in `data` with a user-defined function:
import torch, torchvision
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(weights="DEFAULT")
_ = model.eval() # switch to inference mode
Our function converts the image to PyTorch format and obtains a prediction from the model, which is a list of dictionaries with fields `boxes`, `labels`, and `scores` (one per input image). The fields themselves are PyTorch tensors, and we convert them to standard Python lists (so they become JSON-serializable data):
@pxt.udf(return_type=pxt.JsonType(), param_types=[pxt.ImageType()])
def detect(img):
    t = torchvision.transforms.ToTensor()(img)
    t = torchvision.transforms.ConvertImageDtype(torch.float)(t)
    result = model([t])[0]
    return {
        'boxes': result['boxes'].tolist(),
        'labels': result['labels'].tolist(),
        'scores': result['scores'].tolist(),
    }
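Because the tensors are converted with `tolist()`, the returned dictionary contains only plain Python lists and numbers, so it round-trips through JSON cleanly (hypothetical values shown):

```python
import json

# Shape of detect()'s return value after the tolist() conversions
result = {
    'boxes': [[310.55, 0.0, 624.84, 234.53]],
    'labels': [51],
    'scores': [0.989],
}
# Plain lists and numbers survive a JSON round trip unchanged
assert json.loads(json.dumps(result)) == result
```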
The `pxt.udf` decorator creates a wrapper that tells Pixeltable what arguments the function takes and what it returns. In this case, it takes an image argument and returns a dictionary.
We can now use `detect` in Pixeltable expressions using standard Python function call syntax:
data.select(data.img, detect(data.img)).show(1)
img | col_1 |
---|---|
{'boxes': [[310.55047607421875, 0.0, 624.843017578125, 234.52691650390625], [237.1993408203125, 224.08872985839844, 587.0565795898438, 470.91851806640625], [0.0, 29.385292053222656, 426.712158203125, 425.4981994628906], [5.013685703277588, 196.5373992919922, 588.5419921875, 476.083984375], [37.35256576538086, 20.59006118774414, 388.15606689453125, 273.5964660644531], [0.7686876654624939, 22.125137329101562, 633.8622436523438, 476.8524169921875], [150.43922424316406, 1.7851982116699219, 521.2220458984375, 259.53662109375], [327.5017395019531, 415.15777587890625, 424.87408447265625, 480.0], [381.46246337890625, 3.4117469787597656, 461.89776611328125, 80.68553161621094], [384.0293273925781, 231.81298828125, 602.3222045898438, 449.110107421875], [0.0, 112.5062484741211, 239.01593017578125, 304.9220275878906], [329.5346984863281, 0.0, 424.4857482910156, 74.70343017578125], [106.70413970947266, 25.554290771484375, 375.65087890625, 248.9399871826172], [476.35162353515625, 400.2861328125, 637.7496948242188, 478.90008544921875]], 'labels': [51, 56, 51, 51, 51, 67, 51, 56, 55, 56, 54, 53, 54, 51], 'scores': [0.9891068339347839, 0.953942060470581, 0.9483123421669006, 0.9117993712425232, 0.5643739104270935, 0.19624418020248413, 0.1885174959897995, 0.14653879404067993, 0.138069748878479, 0.11395400762557983, 0.10923201590776443, 0.07621481269598007, 0.06613019853830338, 0.05693724378943443]} |
`detect` returns JSON data, and we can use Pixeltable's JSON functionality to access that as well. For example, if we're only interested in the first detected bounding box and the first label:
data.select(detect(data.img).boxes[0], detect(data.img).labels[0]).show(1)
None_boxes0 | None_labels0 |
---|---|
[310.55047607421875, 0.0, 624.843017578125, 234.52691650390625] | 51 |
When running this query, Pixeltable evaluates `detect(data.img)` only once per row.
Computed columns
Being able to run models against any image stored in Pixeltable is very useful, but the runtime cost of model inference makes it impractical to run it every time we want to do something with the model output. In Pixeltable, we can use computed columns to precompute and cache the output of a function:
data.add_column(detections=detect(data.img))
Computing cells: 0%| | 0/30 [00:00<?, ?cells/s]
added 30 column values with 0 errors
UpdateStatus(num_rows=30, num_computed_values=30, num_excs=0, updated_cols=[], cols_with_excs=[])
`detections` is now a column in `data` which holds the model prediction for the `img` column. Like any other column, it is persistent. Pixeltable runs the computation automatically whenever new data is added to the table. Let's see what `data` looks like now:
data.describe()
Column Name | Type | Computed With |
---|---|---|
img | image | |
tag | string | |
metadata | json | |
ground_truth | json | |
detections | json | detect(img) |
In general, the expression a computed column is computed with can be any Pixeltable expression. In this example, we're making the first label recorded in `detections` available as a separate column:
data.add_column(first_label=data.detections.labels[0])
data.select(data.detections.labels, data.first_label).show(1)
Computing cells: 0%| | 0/30 [00:00<?, ?cells/s]
added 30 column values with 0 errors
detections_labels | first_label |
---|---|
[1, 1, 1, 1, 67, 46, 46, 46, 44, 46, 67, 73, 46, 72, 44, 1, 44, 73, 47, 47, 44, 1, 67, 46, 46, 47, 46, 44, 31, 1, 59, 67] | 1 |
Whether a computed image column is stored is controlled by the `stored` keyword argument of the `Column` constructor:

- when set to `True`, the value is stored explicitly
- when set to `False`, the value is always recomputed during a query (and never stored)
- the default is `None`, which means that Pixeltable decides (currently that means that the image won't be stored, but in the future it could take resource consumption into account)