
Real-time/offline inference framework for video and streaming media


Stream Infer

English | 简体中文

Stream Infer is a Python library designed for stream inference in video processing applications. It includes modular components for video frame generation, inference algorithms, and result export.

Installation

pip install stream-infer

Quick Start

Below is a simple example to help you get started and understand what Stream Infer does.

This example uses an open-source, domain-specific detection model from ModelScope to detect heads.

You may need to install additional packages via pip to use this example: pip install modelscope matplotlib thop timm easydict

from stream_infer import Inference, Dispatcher, DispatcherManager, Player
from stream_infer.algo import BaseAlgo
from stream_infer.producer import PyAVProducer, OpenCVProducer
from stream_infer.log import logger

import cv2
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

INFER_FRAME_WIDTH = 1920
INFER_FRAME_HEIGHT = 1080
PLAY_FPS = 30
OFFLINE = True


class HeadDetectionAlgo(BaseAlgo):
    def init(self):
        self.model_id = "damo/cv_tinynas_head-detection_damoyolo"
        self.head_detection = pipeline(
            Tasks.domain_specific_object_detection, model=self.model_id
        )

    def run(self, frames):
        logger.debug(f"{self.name} starts running with {len(frames)} frames")
        try:
            result = self.head_detection(frames[0])
            logger.debug(f"{self.name} inference finished: {result}")
            return result
        except Exception as e:
            logger.error(e)
            return None


class SelfDispatcher(Dispatcher):
    def get_result(self, name):
        if self.collect_results.get(name):
            return self.collect_results[name]
        return None

    def get_last_result(self, name):
        algo_results = self.get_result(name)
        if algo_results is not None and len(algo_results.keys()) > 0:
            return algo_results[(str(max([int(k) for k in algo_results.keys()])))]
        return None


def draw_boxes(frame, data):
    for box, label in zip(data["boxes"], data["labels"]):
        start_point = (int(box[0]), int(box[1]))
        end_point = (int(box[2]), int(box[3]))
        color = (255, 0, 0)
        thickness = 2
        cv2.rectangle(frame, start_point, end_point, color, thickness)

        font = cv2.FONT_HERSHEY_SIMPLEX
        org = (start_point[0], start_point[1] - 10)
        font_scale = 0.5
        font_color = (0, 255, 0)
        line_type = 2
        cv2.putText(frame, label, org, font, font_scale, font_color, line_type)


if __name__ == "__main__":
    producer = OpenCVProducer(INFER_FRAME_WIDTH, INFER_FRAME_HEIGHT)
    video_path = "/path/to/your/video.mp4"
    max_size = 150
    dispatcher = (
        SelfDispatcher(max_size)
        if OFFLINE
        else DispatcherManager(SelfDispatcher).create(max_size)
    )

    inference = Inference(dispatcher)
    inference.load_algo(
        HeadDetectionAlgo(), frame_count=1, frame_step=PLAY_FPS, interval=1
    )

    player = Player(dispatcher, producer, path=video_path)
    if OFFLINE:
        for frame, current_frame in player.play(PLAY_FPS):
            current_algo_name = inference.auto_run_specific(
                player.play_fps, current_frame
            )
            # Other operations, such as drawing the result window
            data = dispatcher.get_last_result(HeadDetectionAlgo.__name__)
            if data is None:
                continue

            draw_boxes(frame, data)
            cv2.namedWindow("Inference", cv2.WINDOW_NORMAL)
            cv2.imshow("Inference", frame)
            cv2.waitKey(1)
        cv2.destroyAllWindows()
    else:
        player.play_async(PLAY_FPS)
        inference.run_async()
        while player.is_active():
            pass
        inference.stop()
        player.stop()

Features and Concepts

Real-time Inference

(Sequence diagram)

Real-time inference refers to inputting a video or stream that is played at normal real-time speed while frames are added to the processing track. Playback and inference run independently. Because inference takes time, results arrive with a varying delay, but with a reasonable frame rate set this will not cause memory leaks or frame accumulation.

Real-time inference is more commonly applied in scenarios such as:

  • Various live broadcast scenarios
  • Real-time monitoring
  • Real-time meetings
  • Clinical surgeries
  • ...

Offline Inference

(Sequence diagrams: good processing performance vs. poor processing performance)

Offline inference refers to inputting a video (streams are not applicable here) and fetching frames and running inference at whatever speed the machine can handle. Depending on machine performance, the total runtime may be longer or shorter than the video's duration.

Offline inference is used for any video structure analysis that does not need to happen in real time, such as:

  • Post-meeting video analysis
  • Surgical video replay
  • ...

Also, since frame reading and algorithm inference run sequentially in offline mode, it is well suited to testing an algorithm's performance and results (as in the Quick Start, displaying the video and inference data with cv2), whereas real-time inference is not suitable during the algorithm development stage.

Modules

(Flowchart)

BaseAlgo

BaseAlgo simply encapsulates every algorithm as a class with two functions: init() and run().

Stream Infer provides the framework for stream inference, but the actual algorithm still needs to be written by you. Once written, inherit from the BaseAlgo class so it can be encapsulated and called in a unified way.

For example, you have completed a head detection algorithm, and the inference call is:

# https://modelscope.cn/models/damo/cv_tinynas_head-detection_damoyolo/summary
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

model_id = 'damo/cv_tinynas_head-detection_damoyolo'
input_location = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_detection.jpg'

head_detection = pipeline(Tasks.domain_specific_object_detection, model=model_id)
result = head_detection(input_location)
print("result is : ", result)

Then, to perform stream inference of this algorithm in videos and streaming media, encapsulate it like this:

from stream_infer.algo import BaseAlgo

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks


class HeadDetectionAlgo(BaseAlgo):
    def init(self):
        self.model_id = 'damo/cv_tinynas_head-detection_damoyolo'
        self.head_detection = pipeline(Tasks.domain_specific_object_detection, model=self.model_id)

    def run(self, frames):
        return self.head_detection(frames)

In this way, you have completed the encapsulation and will be able to call it normally in the future.

Dispatcher

Dispatcher serves as the central service linking playback and inference: it caches inference frames, distributes them to algorithms, and collects inference times and result data.

Dispatcher provides functions for adding/getting frames and adding/getting inference results and times. You don't need to worry about most of them, but if you want to fetch results conveniently, print them, or store them elsewhere, focus on the collect_result() function.

Here is its source code implementation:

def collect_result(self, inference_result):
    if inference_result is not None:
        time = str(inference_result[0])
        name = inference_result[1]
        data = inference_result[2]
        if self.collect_results.get(name) is None:
            self.collect_results[name] = {}
        self.collect_results[name][time] = data

The format of the collected collect_results is roughly as follows:

{
  "HeadDetectionAlgo": {
    "1": { "scores": [], "boxes": [] },
    "2": { "scores": [], "boxes": [] }
  },
  "other": {
    "60": { "a": 1 },
    "120": { "a": 2 }
  }
}
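
From this structure, the most recent result for a given algorithm can be read by taking the entry with the largest time key, which is exactly what the SelfDispatcher.get_last_result helper in the Quick Start does. A minimal standalone sketch:

def get_last_result(collect_results, name):
    # Results are keyed first by algorithm name, then by time (stored as strings),
    # so the latest entry is the one with the largest numeric key.
    algo_results = collect_results.get(name)
    if not algo_results:
        return None
    latest = max(int(t) for t in algo_results.keys())
    return algo_results[str(latest)]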

On this basis, if you want to send results to a REST service, or do other processing on the data before sending, you can inherit the Dispatcher class and override the function:

from stream_infer import Dispatcher, DispatcherManager
import requests
...
class SelfDispatcher(Dispatcher):
    def __init__(self, max_size: int = 120):
        super().__init__(max_size)
        self.sess = requests.Session()
        ...

    def collect_result(self, inference_result):
        super().collect_result(inference_result)
        req_data = {
            "time": inference_result[0],
            "name": inference_result[1],
            "data": inference_result[2],
        }
        self.sess.post("http://xxx.com/result/", json=req_data)
...

# In an offline environment
dispatcher = SelfDispatcher()

# In a real-time environment
dispatcher = DispatcherManager(SelfDispatcher).create(max_size=150)

You may have noticed that the dispatcher is instantiated differently in offline and real-time environments. This is because in real-time environments playback and inference do not run in the same process, yet both need to share the same dispatcher, so a DispatcherManager proxy is used.

Producer

Producer loads videos or streaming media in different ways, such as PyAV, OpenCV, ImageIO (only applicable offline), etc., and adjusts or transforms the width, height, and color space of the frames, finally returning each frame as a numpy array.

Instantiating a Producer often requires inputting the frame width and height needed for inference and the color order. The default color order is the same as the BGR order returned by cv2.imread().

from stream_infer.producer import PyAVProducer, OpenCVProducer

producer = PyAVProducer(1920, 1080)
producer = OpenCVProducer(1920, 1080)
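
Either producer is then handed to a Player together with a file path or stream address, as in the Quick Start; the RTSP URL below is only a placeholder:

producer = PyAVProducer(1920, 1080)
# A stream address works the same way as a local video path.
player = Player(dispatcher, producer, path="rtsp://example.com/live/stream")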

Inference

Inference is the core of the framework, implementing functions such as loading algorithms and running inference.

An Inference object requires a Dispatcher object, which it uses to fetch frames and send back inference results.

from stream_infer import Inference

inference = Inference(dispatcher)

When you need to load an algorithm, here is an example using the BaseAlgo above:

from anywhere_algo import HeadDetectionAlgo, AnyOtherAlgo

...

inference = Inference(dispatcher)
inference.load_algo(HeadDetectionAlgo("head"), frame_count=1, frame_step=fps, interval=1)
inference.load_algo(AnyOtherAlgo("other"), 5, 6, 60)

The parameters of load_algo are the core feature of the framework, letting you freely define the frame-fetching logic (see the sketch after this list):

  • frame_count: The number of frames the algorithm fetches, i.e. the number of frames finally received by the run() function.
  • frame_step: Fetch 1 frame every frame_step frames, for a total of frame_count frames. If this parameter is set to fps, it means fetching the last frame_count frames of each second.
  • interval: In seconds, the frequency of algorithm calls; for example, AnyOtherAlgo above will only be called once a minute, saving resources when more frequent calls are unnecessary.
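
To make the sampling concrete, here is a rough illustration of how the three parameters relate for AnyOtherAlgo above, assuming the semantics described in this list rather than the library's actual scheduling code:

PLAY_FPS = 30
frame_count, frame_step, interval = 5, 6, 60

# The algorithm is invoked once every `interval` seconds of playback,
# i.e. every PLAY_FPS * interval frames (every 1800th frame at 30 fps).
invocation_period_frames = PLAY_FPS * interval

# At each invocation it receives `frame_count` frames spaced `frame_step`
# frames apart, counting back from the current frame index:
current_frame = invocation_period_frames
received = [current_frame - i * frame_step for i in range(frame_count)]
print(received)  # [1800, 1794, 1788, 1782, 1776] -> roughly the last second of video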

Player

Player takes the dispatcher, the producer, and the video/streaming-media address, and drives playback and inference.

from stream_infer import Player

player = Player(dispatcher, producer, video_path)

Player has two functions to execute in offline and real-time inference modes, respectively:

player.play(fps=None)
player.play_async(fps=None)

Both functions accept an fps parameter, which here means the playback frame rate. If the frame rate of the video source is higher than this number, frames are skipped to force playback at the specified rate; for example, a 60 fps source played with fps=30 drops roughly every other frame. This also saves some processing.

Play & Run

Offline Running

Player's play() returns an iterable object, and calling inference.auto_run_specific() in the loop will automatically determine which algorithm to run based on the current frame index:

if __name__ == "__main__":
    ...
    for frame, current_frame in player.play(PLAY_FPS):
        current_algo_name = inference.auto_run_specific(
            player.play_fps, current_frame
        )
        # Other operations, such as drawing the display window
        cv2.namedWindow("Inference", cv2.WINDOW_NORMAL)
        cv2.imshow("Inference", frame)
        cv2.waitKey(1)
    cv2.destroyAllWindows()

As described in Offline Inference, everything above runs synchronously in a single process and thread, so you can take your time with whatever operations you need, such as verifying algorithm results (as in the Quick Start, fetching the inference result and drawing boxes in the window). Even if playback stutters because of the synchronous operation, everything remains accurate.
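
For instance, once the offline loop finishes, everything the dispatcher has collected remains available in dispatcher.collect_results (see the Dispatcher section), so a simple way to persist it for later inspection is the following sketch, assuming the stored results are JSON-serializable:

import json

# Dump all collected results (algorithm name -> time -> data) to a file;
# "results.json" is just an example path.
with open("results.json", "w") as f:
    json.dump(dispatcher.collect_results, f, ensure_ascii=False, indent=2)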

Real-time Running

Just run Player's play_async() and Inference's run_async():

It is particularly important to note that we recommend not exceeding 30 frames per second for the playback frame rate when running in real-time. Firstly, a high frame rate does not help much with the accuracy of analysis results. Secondly, it will lead to memory leaks and frame accumulation.

if __name__ == "__main__":
    ...
    player.play_async(PLAY_FPS)
    inference.run_async()
    while player.is_active():
        pass
        # Other operations
    inference.stop()
    player.stop()

Monitor the playback status with player.is_active(), and manually end the inference thread and playback process after playback is complete.
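
Inside the while loop you are free to do other work, for example polling the newest result once per second instead of spinning idly. A minimal sketch, assuming the SelfDispatcher.get_last_result helper from the Quick Start is reachable through the DispatcherManager proxy:

import time

player.play_async(PLAY_FPS)
inference.run_async()
while player.is_active():
    # Poll the latest head-detection result (get_last_result is the custom
    # helper defined on SelfDispatcher in the Quick Start).
    data = dispatcher.get_last_result(HeadDetectionAlgo.__name__)
    if data is not None:
        logger.info(f"Latest result: {data}")
    time.sleep(1)
inference.stop()
player.stop()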

License

Stream Infer is licensed under the Apache License.

