In EvaDB, AI models are simple function calls, similar to traditional SQL functions.
This page details how you can use AI models in different ways to construct AI queries in EvaDB. EvaDB automatically optimizes AI queries to save money and time, as detailed in the optimizations page.
EvaDB ships with a wide range of built-in functions listed in the Models page. If your desired AI model is not available, you can also bring your own AI function by referring to the Bring Your Own AI Function page.
AI queries often invoke the AI function(s) in the SELECT clause (projection list).
For example, the following query calls the MnistImageClassifier function to identify digits in the frames of mnist_video.
SELECT MnistImageClassifier(data).label FROM mnist_video;
Another common position for model inference in an AI query is the WHERE clause (selection).
For example, the following query uses the TextSummarizer and TextClassifier functions from the HuggingFace AI engine to summarize the food reviews and identify those expressing a negative sentiment, in the SELECT and WHERE clauses, respectively.
SELECT TextSummarizer(data) FROM food_reviews WHERE TextClassifier(data).label = 'NEGATIVE';
EvaDB supports specialized array operators.
For example, the following query applies the CONTAIN operator (@>) to the output of an object detection function:
SELECT id FROM camera_videos WHERE ObjectDetector(data).labels @> ['person', 'car'];
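To make the containment check concrete, here is a minimal Python sketch of the @> operator's semantics (the data and the contains helper are made up for illustration; this is not EvaDB's implementation): the left array "contains" the right array when every element on the right appears on the left.

```python
# Sketch of @> (CONTAIN) semantics: right array is a subset of left.
def contains(left, right):
    return all(item in left for item in right)

# Hypothetical per-frame labels from an object detector.
frames = {
    1: ["person", "car", "tree"],
    2: ["person"],
    3: ["car", "person", "dog"],
}

# Analogous to: SELECT id ... WHERE labels @> ['person', 'car']
matching_ids = [fid for fid, labels in frames.items()
                if contains(labels, ["person", "car"])]
# matching_ids == [1, 3]
```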
Here is another query with the UNNEST function, which flattens the output of a one-input-to-many-outputs AI function.
SELECT UNNEST(FaceDetector(data)) AS Face(bbox, conf) FROM movie;
The face detector model returns multiple outputs per detection (e.g., a bounding box and a confidence score) as an array. The UNNEST function unrolls the elements of the array into multiple rows.
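The unrolling behavior can be sketched in a few lines of Python (the stand-in face_detector and its outputs are assumptions for illustration, not EvaDB internals):

```python
# Sketch of UNNEST: a function returns an array per input tuple,
# and UNNEST turns each array element into its own output row.
def face_detector(frame):
    # Hypothetical output: a list of (bbox, confidence) pairs.
    return [((10, 10, 50, 50), 0.98), ((70, 20, 110, 60), 0.91)]

def unnest(frames):
    for frame in frames:
        for bbox, conf in face_detector(frame):
            yield {"bbox": bbox, "conf": conf}

# One input frame with two detected faces yields two rows.
rows = list(unnest(["frame0"]))
```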
For more challenging AI apps, EvaDB supports lateral joins.
The following AI query uses both a LATERAL JOIN and the UNNEST function to detect emotions from faces in a movie, where a single scene may contain multiple faces. The output of the face detector is used to crop the bounding box from the frame, and the cropped image is then sent to an emotion detector that classifies the emotion of the face inside the bounding box.
SELECT EmotionDetector(Crop(data, Face.bbox)) FROM movie LATERAL JOIN UNNEST(FaceDetector(data)) AS Face(bbox, conf);
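As a rough Python sketch of this pipeline (the Crop, FaceDetector, and EmotionDetector stand-ins below are simplified placeholders, not EvaDB's built-ins), the lateral join pairs each frame with every face detected in that frame:

```python
# Stand-in models for illustration only.
def face_detector(frame):
    return [((0, 0, 32, 32), 0.99)]          # one (bbox, conf) per face

def crop(frame, bbox):
    return ("cropped", frame, bbox)          # pretend-cropped region

def emotion_detector(region):
    return "happy"                           # constant stand-in label

# LATERAL JOIN + UNNEST: for each frame, iterate over its faces,
# crop each bounding box, and classify the cropped region.
emotions = [
    emotion_detector(crop(frame, bbox))
    for frame in ["frame0", "frame1"]
    for bbox, conf in face_detector(frame)
]
# emotions == ["happy", "happy"]
```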
AI models may also be used in the ORDER BY clause to enable use cases like similarity search.
For example, in the following query, the output of the SentenceFeatureExtractor function is used to find relevant context for answering the user's question ('When was the NATO created?') from a collection of PDFs.
SELECT data FROM MyPDFs ORDER BY Similarity( SentenceFeatureExtractor('When was the NATO created?'), SentenceFeatureExtractor(data) );
Similarity search maps to ordering by the distance, computed by the Similarity function, between the features extracted from the query and those extracted from the paragraphs loaded from the documents. EvaDB automatically accelerates such queries using vector databases.
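A minimal Python sketch of this ordering (the feature vectors and Euclidean distance below are illustrative assumptions; EvaDB's Similarity function and feature extractors work on real embeddings):

```python
import math

# Rank stored paragraphs by distance between their feature
# vectors and the query's feature vector (smaller = more similar).
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

query_vec = [1.0, 0.0]                       # made-up query embedding
paragraphs = {
    "NATO was created in 1949.": [0.9, 0.1],
    "The weather is sunny.":     [0.1, 0.9],
}

# Analogous to ORDER BY Similarity(query_features, row_features).
ranked = sorted(paragraphs, key=lambda p: distance(query_vec, paragraphs[p]))
# ranked[0] is the paragraph closest to the query.
```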
Go over the PrivateGPT notebook for more details.
Given a query image, we can use a different feature extractor (the SiftFeatureExtractor function) to find the most similar image in an existing collection of images (reddit_dataset).
SELECT name FROM reddit_dataset ORDER BY Similarity( SiftFeatureExtractor(Open('reddit-images/cat.jpg')), SiftFeatureExtractor(data) );
Go over the Image Search page for more details.
AI models can be applied to a sequence of tuples using the GROUP BY clause and the SEGMENT function.
The following query concatenates consecutive frames in a movie into a single segment and applies an action recognition model on the segment:
SELECT ASLActionRecognition(SEGMENT(data)) FROM ASL_ACTIONS SAMPLE 5 GROUP BY '16 frames';
Here is another illustrative query that groups together paragraphs from a PDF document:
SELECT SEGMENT(data) FROM MyPDFs GROUP BY '10 paragraphs';
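The segmenting behavior in both queries can be sketched as simple fixed-size chunking (a simplified illustration, not EvaDB's implementation):

```python
# Sketch of GROUP BY '16 frames': consecutive tuples are chunked
# into fixed-size segments, and SEGMENT concatenates each chunk
# into a single value that the model consumes.
def segment(rows, size):
    return [rows[i:i + size] for i in range(0, len(rows), size)]

frames = list(range(40))          # stand-in for decoded video frames
segments = segment(frames, 16)    # segments of 16, 16, and 8 frames
```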
The use cases illustrate more ways of using AI queries to build AI apps.