259 points by psychip 4 days ago | 12 comments
  • 01100011 4 days ago
    If you're interested in DIY security+AI, check out Frigate NVR(https://frigate.video/), Scrypted(https://www.scrypted.app/) and Viseron(https://viseron.netlify.app/).
    • gh02t4 days ago
      I've been using Frigate for a long time and it's a really cool project that has been quite reliable. The configuration can be a little bit of a headache to learn, but it gets better with every release.

      Viseron is new to me though, that looks really cool.

    • taikon3 days ago
      I've been running Frigate for a while now and I find its object detection has a higher false-positive rate than I'd like.

      For instance, it kept thinking the tree in my back yard was a person. I find it hilarious that it often assigns the tree a higher likelihood of being a person than it assigns me! I ended up putting a mask over the tree as a last resort.

      • acidburnNSA3 days ago
        Assuming the tree is big, you can set a max object area for the person class and it will never happen again. I had to do this for some areas where shadows looked like people in the afternoons.
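
        The idea in plain Python rather than Frigate's config syntax (the names, cap value, and detection format here are made up for illustration):

            # Illustrative sketch: drop "person" detections whose box covers an
            # implausibly large fraction of the frame (a tree-sized "person").
            MAX_PERSON_AREA_FRAC = 0.15   # assumed cap; tune for your scene

            def plausible_person(xyxy, frame_w, frame_h):
                x1, y1, x2, y2 = xyxy
                area = (x2 - x1) * (y2 - y1)
                return area / (frame_w * frame_h) <= MAX_PERSON_AREA_FRAC

            # Example: (label, (x1, y1, x2, y2)) boxes from a detector on a 1920x1080 frame.
            detections = [("person", (100, 200, 180, 420)), ("person", (0, 0, 900, 1080))]
            kept = [(label, box) for label, box in detections
                    if label != "person" or plausible_person(box, 1920, 1080)]
            print(kept)   # the frame-filling second "person" (the tree) is dropped
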
    • dfc4 days ago
      I just recently got frigate up and running. How do the other two compare?
      • 01100011 3 days ago
        Beats me, I'm just getting into this now. I started with a Reolink NVR, but it's a piece of crap, so I'm looking for a better alternative.

        It looks like either Frigate or Viseron will do what I want. I started setting up Frigate, but realized I should downgrade my Reolink Duo 3 to a Duo 2 before I go too far. The Duo 3 really doesn't offer much better image quality but forces you to use h265 and consumes a lot more bandwidth. Once I stabilize my camera setup I'll get back to setting up both Frigate and Viseron and see what performs better. I like that the pro upgrade of Frigate allows you to customize the model, and I may make use of that.

  • yu3zhou4 4 days ago
    Congrats! What hardware do you use to run the inference 24/7? I built a simpler version for low-end hardware [0] that recognizes whether there's a person on my parcel, so I know someone has trespassed and I can trigger a siren, lights, etc.

    [0] https://github.com/jmaczan/yolov3-tiny-openvino

  • pmontra4 days ago
    This runs on a GeForce GTX 1060, which a quick search says draws 120 W. Maybe that's only the peak power consumption, but it's still a lot. Do commercial products, if there are any, consume that much power?
    • moandcompany3 days ago
      There's a wide range of inference accelerators in commercial use.

      For "edge" or embedded applications, an accelerator such as the Google Coral Edge TPU is a useful reference point where it is capable of up to 4 Trillion Operations per Second (4 TOPS), with up to 2 Watts of power consumption (2 TOPS/W), however the accelerator is limited to INT8 operations. It also has around 8 MB of memory for model storage.

      Meanwhile, a general-purpose or gaming GPU supports a wider range of operations (single-precision and double-precision floating point, integer, etc.).

      Geforce GTX 1060 for example: 4.375 TFLOPS (FP32) @ 120W (https://www.techpowerup.com/gpu-specs/geforce-gtx-1060-6-gb....)

      There are commercial-oriented products that are optimized for particular operations and precision.

      Here's a blog post discussing Google's 1st-generation ASIC TPU used in its datacenters: https://cloud.google.com/blog/products/ai-machine-learning/a...

      (92 TOPS @ 700 MHz, 40 W)

      https://arxiv.org/abs/1704.04760

    • hcfman4 days ago
      I have something similar. It's not tracking, though. Drawing around 10 W on a Pi, around 7 W on a Jetson.
    • formerly_proven4 days ago
      YOLO is quick enough that you can just run it on a CPU, assuming you don't want to run it at full resolution (no point) or full frame rate (ditto) for multiple streams. When you run it scaled down at 2-3 fps you'll get several streams per CPU core, no problem. Energy use can be minimized by running a quick motion-detection pass first, but that would obviously make the system miss things creeping through the frame pixel by pixel (very unlikely, if you ask me).
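
      Roughly the kind of motion gate meant here, sketched with OpenCV frame differencing (the stream URL, threshold, and run_yolo() are placeholders):

          import cv2

          def run_yolo(img):
              """Placeholder for the actual detector call."""
              pass

          cap = cv2.VideoCapture("rtsp://camera/stream")   # placeholder URL
          prev = None
          MOTION_THRESHOLD = 5000   # arbitrary count of changed pixels; tune per camera

          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              small = cv2.resize(frame, (640, 360))         # downscaled is plenty
              gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
              if prev is not None:
                  diff = cv2.absdiff(gray, prev)
                  _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
                  if cv2.countNonZero(mask) > MOTION_THRESHOLD:
                      run_yolo(small)                       # only pay for inference when pixels moved
              prev = gray
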
    • phito4 days ago
      You can use a Coral USB Accelerator; it doesn't use more than 10 W.
      • Eisenstein4 days ago
        You can see here:

            res = rest(ollama, {
                "model": "llava",
                "prompt": genprompt(box.name),
                "images": [box.export()],
                "stream": False
            })
        
        
        They are calling the Ollama API to run Llava. Llava is a combination of an LLM base model and a vision projector (CLIP or a ViT), and is usually around 4-8 GB. Since every token generated needs access to all of the model weights, you would have to send 4-8 GB through USB with the Coral. Even at a generous 10 Gbit/s (1.25 GB/s), that is 8 GB / 1.25 GB/s = 6.4 seconds per token. A 150-token (short paragraph) generation would take 16 minutes.
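
        Spelling out the back-of-the-envelope math:

            # Rough estimate: every generated token streams the full weights over the bus.
            model_bytes = 8e9          # ~8 GB of Llava weights (upper end of the 4-8 GB range)
            usb_bytes_per_s = 1.25e9   # 10 Gbit/s ~= 1.25 GB/s, a generous USB figure
            tokens = 150               # a short paragraph

            seconds_per_token = model_bytes / usb_bytes_per_s   # 6.4 s
            total_minutes = tokens * seconds_per_token / 60     # ~16 minutes
            print(seconds_per_token, total_minutes)
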
        • phito3 days ago
          Hm yeah sure, I didn't think of the llm part. I don't think it's really useful tbh.
      • nicholasjarnold4 days ago
        Can confirm. The Coral inference accelerator is quite performant with very low power draw. Once I figured out some passthrough and config issues I was able to run Frigate in an LXC container on Proxmox using Coral USB for inference. It's been working reliably 24/7 for months now.
      • hcfman3 days ago
        Yeah, but it's likely an 8-bit quantised, very small model with a small number of parameters, which translates into poor recall and lots of false positives.

        How many parameters does the model you are using with the Hailo have? And what's the quantisation, and which model is it actually?

        • phito3 days ago
          Honestly I have no idea what you are asking about. It's just dedicated hardware for running a YOLO-like object detection model.
          • Eisenstein3 days ago
            They are asking about LLMs. There seems to be some confusion -- you are thinking of the object detection model (YOLO), which runs perfectly fine in (near) real time on a Coral or other NPU. The parent is referring to the Llava part, which is a full-fledged language model with a vision projector glued onto it for vision capability. Large language models are generally quantized (converted from full-precision floats to less precise floats or ints, for instance F16, Q8, Q4) because they would otherwise be extremely large and slow and require a ton of RAM (the model has to access the entire weights for every token generated, so if you don't have a gigantic amount of VRAM you would be pushing many tens of gigabytes of model weights through the system bus, slowly).
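
            For a sense of scale, a rough size estimate for a 7B-parameter model at a few common quantizations (the effective bits per weight are approximate; real files add some overhead):

                params = 7e9   # e.g. a 7B-parameter model
                bits_per_weight = {"F16": 16, "Q8": 8.5, "Q4": 4.5}   # rough effective bits incl. scales

                for name, bits in bits_per_weight.items():
                    gb = params * bits / 8 / 1e9
                    print(f"{name}: ~{gb:.1f} GB")   # F16 ~14.0, Q8 ~7.4, Q4 ~3.9
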
            • janalsncm3 days ago
              Recall and false positives are classification metrics, which relate to the YOLO part.
  • rocauc4 days ago
    A suggestion: I'd swap llava for Florence-2 for your open set text description. Florence-2 seems uniformly more descriptive in its outputs.
    • jerpint4 days ago
      I found Grounding DINO better than Florence, and faster.
      • netdur4 days ago
        I found YOLOS to be faster and better; not real time, but 22k objects in under half a second.
    • Eisenstein3 days ago
      They are using Ollama, which is based on llama.cpp; Florence is not supported on that backend.
  • matrik4 days ago
    MobileNetV3 and EfficientDet are other possible alternatives to YOLO. I was able to get higher than 1.5 FPS on a Raspberry Pi Zero 2 W, which draws 1 W on average. With an efficient queuing approach, one can eliminate all bottlenecks.
  • xrd4 days ago
    I'm confused about why you need yolo and llava. Can't you simply use yolo without a multimodal LLM? What does that add? You can use yolo to detect and grab screen coordinates on its own, right?
    • andblac4 days ago
      Skimming through the source, it seems to run 'car' and 'person' objects through llava with the following prompts:

      - "person": "get gender and age of this person in 5 words or less",

      - "car": "get body type and color of this car in 5 words or less".

      So YOLO gives the bounding box and rough category, while llava describes the object in more detail.
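
      A minimal sketch of that flow against Ollama's /api/generate endpoint (the detection objects and their export() helper are assumptions modeled on the snippet quoted elsewhere in this thread, not the repo's exact code):

          import requests

          PROMPTS = {
              "person": "get gender and age of this person in 5 words or less",
              "car": "get body type and color of this car in 5 words or less",
          }

          def describe(detections, ollama_url="http://localhost:11434/api/generate"):
              out = []
              for det in detections:               # detections from a YOLO pass
                  if det.name not in PROMPTS:
                      continue
                  resp = requests.post(ollama_url, json={
                      "model": "llava",
                      "prompt": PROMPTS[det.name],
                      "images": [det.export()],    # assumed: base64-encoded crop of the box
                      "stream": False,
                  })
                  out.append((det.name, resp.json().get("response")))
              return out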

    • michaelt4 days ago
      Almost certainly using YOLO to detect the cars, then llava for the more detailed "silver sedan" description.
  • vaylian4 days ago
    Hello from the privacy crowd! Please use this responsibly. Tech can be a lot of fun and I encourage you to play around with things and I appreciate it when you push the boundaries of what is technically feasible. But please be mindful that surveillance tech can also be used to oppress people and infringe on their freedoms. Use tech for good!
  • ferar4 days ago
    Can you specify ideal hardware (camera, computer) to deploy the solution? Thanks
    • skirmish4 days ago
      Here are hardware recommendations from another similar (and well established) project: [1] [2]. Even though they don't recommend Reolink cameras, I have both Amcrest and Reolink cameras working well with Frigate for more than a year now.

      [1] https://docs.frigate.video/frigate/hardware

      [2] https://github.com/blakeblackshear/frigate

      • jamesbfb4 days ago
        +1 for Frigate and Reolink. I have it running in a Proxmox VM on an old Dell R710 (yes, it sucks watts and needs replacing), but all said, Frigate is amazing! The ease of integration with Home Assistant is equally great.
      • moandcompany3 days ago
        Many Amcrest IP Cameras are manufactured by Dahua and use localized versions of Dahua firmware. The same applies to the Lorex brand in the United States.

        Some things that matter when it comes to configuring your IP cameras (beyond security, etc.):

        - Support for RTSP

        - Configurable encoding settings (e.g. h264 codec, bitrate, i-frame intervals, framerate)

        - Support for substreams (i.e. a full-resolution main stream for recording, and at least one lower-resolution substream for preview/detection/etc.)

        Make sure the hardware you select is capable of the above.

        Configurability will matter because Identification is not the same as Detection (Reference: "DORI" - Detection, Observation, Recognition, and Identification from IEC EN62676-4). If you want to be able to successfully identify objects or entities using your cameras, it will require more care than basic Observation or Detection.
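
        For example, the usual split is to record the full-resolution main stream and run detection on a substream. A quick OpenCV sanity check might look like this (the URL paths are Dahua-style placeholders; check your camera's documentation):

            import cv2

            # Placeholder Dahua/Amcrest-style URLs; exact paths vary by vendor.
            MAIN = "rtsp://user:pass@192.168.1.10:554/cam/realmonitor?channel=1&subtype=0"
            SUB = "rtsp://user:pass@192.168.1.10:554/cam/realmonitor?channel=1&subtype=1"

            cap = cv2.VideoCapture(SUB)   # the low-resolution substream is enough for detection
            ok, frame = cap.read()
            print(ok, frame.shape if ok else None)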

        • hcfman3 days ago
          Isn't it illegal now to import Hikvision and Dahua to the States?
          • moandcompany3 days ago
            AFAIK, the FCC ban pertains to particular applications (or marketing of products for such applications). It did not apply to consumer applications.

            "On November 25, 2022, the Federal Communications Commission (FCC) released new rules restricting equipment that poses national security risks from being imported to or sold in the United States. Under the new rules, the FCC will not issue new authorizations for telecommunications equipment produced by Huawei Technologies Company (Huawei) and ZTE Corporation (ZTE), the two largest telecommunications equipment manufacturers in the People’s Republic of China (PRC).

            The FCC also will not authorize equipment produced by three PRC-based surveillance camera manufacturers—Hytera Communications (Hytera), Hangzhou Hikvision Digital Technology (Hikvision), and Dahua Technology (Dahua)—until the FCC approves these entities’ plans to ensure that their equipment is not marketed or sold for public safety purposes, government facilities, critical infrastructure, or other national security purposes. The FCC did not, however, revoke any of its prior authorizations for these companies’ equipment, although it sought comments on whether it should do so in the future."

            https://crsreports.congress.gov/product/pdf/LSB/LSB10895/1

    • moandcompany4 days ago
      You'll want to find an IP camera that supports the RTSP protocol, which most of them do.

      If your budget supports commercial style or commercial grade cameras, looking at Dahua or Hikvision manufactured cameras would be a good starting point to get an idea of specs, features, and cost.

      • meow_catrix4 days ago
        Maybe don’t buy surveillance hardware from those brands
        • sinuhe69 4 days ago
          Not OP, but the reason may be:

          US FCC ban: The US Federal Communications Commission (FCC) banned Dahua and Hikvision from new equipment authorizations in November 2022. Most products that use electricity require FCC equipment authorizations; otherwise, they are illegal to import, sell, market, or use, even for private individuals. (Jul 5, 2024)

          • hcfman4 days ago
            Shame, they are the best cameras available.
            • formerly_proven4 days ago
              Also it’s not like you stop supporting these OEMs if you buy other made in china cameras. They’re essentially all designed and manufactured by very few of these large OEMs, all of which are implicated in CCP state surveillance.

              You’d have to buy from actual Western companies like Axis or Dallmeier.

        • moandcompany4 days ago
          A lot of the commercial-style or commercial-grade IP Cameras sold are rebadged Dahua or Hikvision products.

          Compromised firmware or other backdoors are a concern for a wide range of products. With IP Cameras, a commonly recommended practice includes putting them on a non-internet accessible network, disabling any remote access, UPnP type features, etc. You can run IP cameras in an air-gapped configuration as well.

          Home/consumer-grade cameras have plenty of shortcomings too.

          • hcfman4 days ago
            If they are rebadged, that's fine :)
        • avh02 4 days ago
          You're going to have to explain the reasoning here
          • meow_catrix4 days ago
            ”Analysts noticed that CCTV cameras in Taiwan and South Korea were digitally talking to crucial parts of the Indian power grid – for no apparent reason. On closer investigation, the strange conversation was the deliberately indirect route by which Chinese spies were interacting with malware they had previously buried deep inside the Indian power grid.”
            • 2Gkashmiri 4 days ago
              Link? I am close to CCTV retailers, and Dahua and Hikvision are the only CCTV brands widely available, with two exceptions, "CP Plus" and "Hawkvision", which are in all likelihood rebranded or made-in-China products.

              https://www.amazon.in/s?k=cctv+system+4+channel

              So what are your options? I have been contemplating getting a door phone + CCTV for my home for many years now, but problems like these prevent me from investing in an ecosystem.

              Edit: oh, looks like the pager attacks have their attention now.

              https://trak.in/stories/pager-bombs-govt-can-ban-chinese-cct...

              I guess time will tell, and then there is lobbying, so yeah.

              • formerly_proven4 days ago
                > are in all likelihood rebranded or made-in-China products

                IPVM did all the legwork on this a while ago and uncovered that, not that surprisingly, two and a half OEMs (including Dahua and Hikvision) manufacture essentially every not-completely-garbage CCTV camera coming out of China, and a bunch that very explicitly claimed not to come out of China.

        • nativeit4 days ago
          Could you elaborate? What’s up with those brands?
    • npteljes4 days ago
      I can recommend the Axis brand. Very user friendly, while being power-user friendly as well, with true local offerings. I personally bought mine used, it's an older model, and even then it holds up really well.
    • llm_trw4 days ago
      Default YOLO models are stuck at 640x640, so literally any camera capable of at least that resolution. Llava, I believe, is about the same. You'd need Ubuntu and something that can run a llava model in vaguely real time, so a 4090/4080.
  • doctorhandshake4 days ago
    >> It calculates the center of every detection box, pinpoint on screen and gives 16px tolerance on all directions. Script tries to find closest object as fallback and creates a new object in memory in last resort. You can observe persistent objects in /elements folder

    I’ve never implemented this kind of object persistence algo - is this a good approach? Seems naive but maybe that’s just because it’s simple.
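
    For reference, that description is essentially nearest-centroid matching, which is a common baseline (trackers such as SORT add motion prediction and global assignment to handle crossings and occlusions). A stripped-down sketch (the 16 px tolerance is from the quote above, the fallback radius is a made-up illustration):

        import math
        from itertools import count

        TOLERANCE = 16            # px, per the quoted description
        FALLBACK_RADIUS = 64      # hypothetical limit for the "closest object" fallback
        _new_id = count()
        tracked = {}              # object_id -> last-seen (cx, cy) centroid

        def match(cx, cy):
            """Associate one detection centroid with a tracked object id."""
            best, best_d = None, float("inf")
            for oid, (px, py) in tracked.items():
                d = math.hypot(cx - px, cy - py)
                if d < best_d:
                    best, best_d = oid, d
            if best is not None and best_d <= TOLERANCE:
                oid = best                  # within tolerance: same object
            elif best is not None and best_d <= FALLBACK_RADIUS:
                oid = best                  # fallback: closest existing object
            else:
                oid = next(_new_id)         # last resort: new object
            tracked[oid] = (cx, cy)
            return oid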

  • nikolayasdf123 4 days ago
    How about Llama 3.2 Vision? Should it give better performance?
  • _giorgio_4 days ago
    All I see, usually, is some AI YOLO algorithm applied to an offline video.

    This is the first time I've seen a "complete" setup. Any pointers for learning more about applying YOLO and similar models to real-time streams (whatever the format)?

    • yeldarb4 days ago
      We’ve got an open source pipeline as part of inference[1] that handles the nuances (multithreading, batching, syncing, reconnecting) of running multiple real time streams (pass in an array of RTSP urls) for CV models like YOLO: https://blog.roboflow.com/vision-models-multiple-streams/

      [1] https://github.com/roboflow/inference
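
      Minimal usage looks roughly like this (paraphrased from the docs linked above; parameter names may differ between versions, so treat it as a sketch):

          from inference import InferencePipeline
          from inference.core.interfaces.stream.sinks import render_boxes

          # Single stream shown; per the linked post, an array of RTSP URLs can be
          # passed to fan out over multiple cameras.
          pipeline = InferencePipeline.init(
              model_id="yolov8n-640",                  # example model id
              video_reference="rtsp://camera/stream",  # placeholder URL; a file path or webcam index also works
              on_prediction=render_boxes,              # callback invoked with each frame's predictions
          )
          pipeline.start()
          pipeline.join()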

    • llm_trw4 days ago
      Just stream it one frame at a time to the model and eat the latency: https://www.youtube.com/watch?v=IHbJcOex6dk if you need more hand holding.

      There's a reason why there's a whole family of models from tiny to huge.

      • yeldarb4 days ago
        If you do it naively, your video frames will buffer while waiting to be consumed, causing a memory leak and an eventual crash (or a quick crash if you're running on a device with constrained resources).

        You really need to have a thread consuming the frames and feeding them to a worker that can run on its own clock.
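
        For example, a one-slot queue that always holds only the newest frame, so the grabber never blocks the decoder and the model never works through a stale backlog (the stream URL and run_model() are placeholders):

            import queue
            import threading

            import cv2

            def run_model(frame):
                """Placeholder for the actual inference call."""
                pass

            latest = queue.Queue(maxsize=1)      # one slot: only the newest frame survives

            def grab(url):
                cap = cv2.VideoCapture(url)
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    if latest.full():
                        try:
                            latest.get_nowait()  # drop the stale frame instead of buffering
                        except queue.Empty:
                            pass
                    latest.put(frame)

            threading.Thread(target=grab, args=("rtsp://camera/stream",), daemon=True).start()

            while True:
                run_model(latest.get())          # the worker runs on its own clock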

        • llm_trw4 days ago
          That's not how loop devices work on Linux.
    • hug4 days ago
      This repository seems to be exactly what you are asking for. It's YOLO analysis of video frames passed in over the Real Time Streaming Protocol (RTSP).
  • anshumankmr4 days ago
    You could try Florence by Microsoft instead of YOLO and Llava, though the results are not going to be as great. Florence will run inference on a CPU. This is just for fun.