FreeNAS as DVR for IP cameras?

Joined
Nov 11, 2014
Messages
1,174
more cores typically means the clock rate is going to be substantially less, which can be a problem for single-threaded things like Samba, especially if you're trying to reach 10Gbps speeds.

I agree here and I love 10Gb speeds.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
I have no problems utilizing my 10Gb links between my R610 and the FreeNAS machine. My CPU is not the bottleneck; my hard drives are.
 
Joined
Nov 11, 2014
Messages
1,174
I have no problems utilizing my 10Gb links between my R610 and the FreeNAS machine. My CPU is not the bottleneck; my hard drives are.

You said you are utilizing your 10Gb link. What protocol do you use to connect, and do you use jumbo frames?
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
SMB, and yes, jumbo frames on the fiber link. I get bursts to 5 Gbps (on the SSD boot mirror) with an average of 2; my HDDs are the bottleneck here, as is the fact that they are SATA drives. :)

Here's the source R610 machine:
Dell R610: 1U, 2x L5640 2.27 GHz hex-core + HT, 48 GB DDR3 ECC, 4x 1GbE interfaces, 2x 10GbE fiber interfaces, DRAC (dedicated interface), 6x 2.5" hot-swap trays, H200 controller, 2x 250 GB SSD RAID 1 (boot), 2x 1 TB 7.2k RPM HDD RAID 1 (VM array), 2x 2 TB 7.2k RPM HDD (data array, ReFS formatted). Server 2012 R2.
 
Joined
Nov 11, 2014
Messages
1,174
SMB, and yes, jumbo frames on the fiber link. I get bursts to 5 Gbps (on the SSD boot mirror) with an average of 2; my HDDs are the bottleneck here, as is the fact that they are SATA drives. :)

Here's the source R610 machine:
Dell R610: 1U, 2x L5640 2.27 GHz hex-core + HT, 48 GB DDR3 ECC, 4x 1GbE interfaces, 2x 10GbE fiber interfaces, DRAC (dedicated interface), 6x 2.5" hot-swap trays, H200 controller, 2x 250 GB SSD RAID 1 (boot), 2x 1 TB 7.2k RPM HDD RAID 1 (VM array), 2x 2 TB 7.2k RPM HDD (data array, ReFS formatted). Server 2012 R2.

So how do you saturate this 10Gb link if you just said it bursts to 5Gb and sustains 2Gb?! Are you speaking theoretically, based on iperf testing?


P.S. I don't see a problem with them being SATA; it's more a question of quantity. Mine are SATA, slow 5700rpm drives, but there are 16 of them and I am sure they read/write a little over 1 GB/s combined.
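For a rough sanity check, here's the back-of-envelope arithmetic (the ~70 MB/s per-drive figure is an assumed ballpark for slow 5700rpm disks, not a measurement, and the real pool number depends on the vdev layout):

```python
# Back-of-envelope combined sequential throughput for 16 slow SATA drives.
# The per-drive rate is an assumed ballpark, not a benchmark result.
DRIVES = 16
MB_PER_DRIVE = 70                      # assumed sustained MB/s for a 5700rpm disk

total_mb_s = DRIVES * MB_PER_DRIVE
total_gbit_s = total_mb_s * 8 / 1000   # MB/s -> Gbit/s

print(f"~{total_mb_s} MB/s combined, roughly {total_gbit_s:.1f} Gbit/s")
# -> ~1120 MB/s combined, roughly 9.0 Gbit/s (before RAIDZ/mirror overhead)
```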
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
So how do you saturate this 10Gb link if you just said it bursts to 5Gb and sustains 2Gb?! Are you speaking theoretically, based on iperf testing?


P.S. I don't see a problem with them being SATA; it's more a question of quantity. Mine are SATA, slow 5700rpm drives, but there are 16 of them and I am sure they read/write a little over 1 GB/s combined.
I never said saturate; I said utilize. There is a difference between the terms. SATA is limited to 6Gbps, so it isn't going to go any faster in real-world data transfers. The H200 is also a 6Gbps SAS card, so whether SAS or SATA, it is going to be limited to 6Gbps on data transfers. The SSDs get bursts to 5Gbps, which is pretty good. The other drives (I included the RPM specifications above) are all spinning rust. The data array is lucky to sustain gigabit speeds because ReFS is an I/O hog.

The speeds I quoted are real-world numbers (I never said they were network tests). If I run iperf, of course I get the full 10 gigabits, but there are other bottlenecks in the chain that I am aware of and am satisfied with. If you want more in-depth information, read the specs of the R610, the R310 machine in my sig, and the remote machine. This is a small portion of the farm that I maintain, and it all performs its duties very well.
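For anyone who wants to reproduce that kind of network-only number, a memory-to-memory TCP test is all it takes; the sketch below just illustrates what iperf measures (arbitrary port and buffer sizes, no disks involved) and is not a replacement for iperf itself.

```python
# Minimal memory-to-memory TCP throughput test, to show what a "network only"
# number like iperf reports (nothing touches the disks).
# Run with:  python3 tput.py server        on the NAS
#            python3 tput.py client HOST   on the desktop
import socket, sys, time

PORT = 5201          # arbitrary port, same idea as iperf3's default
CHUNK = 1024 * 1024  # 1 MiB send/receive buffer
DURATION = 10        # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print(f"received {total / 1e9:.2f} GB in {secs:.1f}s "
              f"= {total * 8 / secs / 1e9:.2f} Gbit/s")

def client(host):
    buf = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(buf)

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()
```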
 
Joined
Nov 11, 2014
Messages
1,174
I never said saturate; I said utilize.

Yes, you are correct. You did say utilize. I kind of misinterpreted it, because even if "utilize" is used I would still assume you mean fully utilized. Usually when someone says "I want to utilize my new device," they don't mean "I want to utilize 20% of it." But anyway, I stand corrected.

SATA is limited to 6Gbps, so it isn't going to go any faster in real-world data transfers. The H200 is also a 6Gbps SAS card, so whether SAS or SATA, it is going to be limited to 6Gbps on data transfers.

I don't see the problem with that; you only need 2x SSDs in RAID 0 and you'll exceed 10Gb speeds, and the H200 is 6Gb per channel. It has 8 of them.


The speeds I quoted are real-world numbers (I never said they were network tests). If I run iperf, of course I get the full 10 gigabits, but there are other bottlenecks in the chain that I am aware of and am satisfied with.

Yeah, I am interested in real speeds, because right now I get around 5Gb from FreeNAS to my desktop over SMB, but I am not happy with that and am trying to get it close to 10Gb. My desktop had a single SATA SSD, but I upgraded to a Samsung 960 Pro NVMe, so there is no longer a bottleneck in the desktop. The desktop should be able to read/write 20Gbps, but I don't have jumbo frames enabled and perhaps that's what is holding me back.

I am trying to see what the maximum transfer speed is that I can achieve on a 10Gb NIC with SMB and without jumbo frames enabled.
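For what it's worth, here's a rough sketch of the wire-level ceiling at standard vs jumbo MTU (the header sizes are the usual defaults and are assumptions; SMB adds its own overhead on top of this):

```python
# Rough goodput ceiling on a 10GbE link from Ethernet/IP/TCP overhead alone.
# Header sizes are common defaults (IPv4, TCP with timestamps, no VLAN tag);
# SMB adds its own per-request overhead on top of this.
LINK_GBPS = 10
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP = 20 + 20 + 12            # IPv4 + TCP + timestamp option

def goodput(mtu):
    payload = mtu - IP_TCP
    frame = mtu + ETH_OVERHEAD
    return LINK_GBPS * payload / frame

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput(mtu):.2f} Gbit/s TCP payload ceiling")
# MTU 1500: ~9.41 Gbit/s, MTU 9000: ~9.90 Gbit/s (before SMB overhead)
```

So on wire efficiency alone, jumbo frames only buy a few percent; the bigger practical win is far fewer packets per second for the NIC, CPU, and Samba to handle.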
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Yes, you are correct. You did say utilize. I kind of misinterpreted it, because even if "utilize" is used I would still assume you mean fully utilized. Usually when someone says "I want to utilize my new device," they don't mean "I want to utilize 20% of it." But anyway, I stand corrected.



I don't see the problem with that; you only need 2x SSDs in RAID 0 and you'll exceed 10Gb speeds, and the H200 is 6Gb per channel. It has 8 of them.




Yeah, I am interested in real speeds, because right now I get around 5Gb from FreeNAS to my desktop over SMB, but I am not happy with that and am trying to get it close to 10Gb. My desktop had a single SATA SSD, but I upgraded to a Samsung 960 Pro NVMe, so there is no longer a bottleneck in the desktop. The desktop should be able to read/write 20Gbps, but I don't have jumbo frames enabled and perhaps that's what is holding me back.

I am trying to see what the maximum transfer speed is that I can achieve on a 10Gb NIC with SMB and without jumbo frames enabled.

I run RAID 1 for my disks, as in the description of my R610. As for getting more speed on SMB transfers, you will need jumbo frames at the least. Keep in mind that SMB has overhead and TCP has overhead, so you won't get the full 10Gbps of real data transfer no matter what. As far as per-channel speed goes, it may be able to negotiate that per channel, but it cannot move data at 12Gbps; if it could, I would saturate it with the SSDs, and it doesn't. The H200 is not a high-performance controller.
 
Joined
Nov 11, 2014
Messages
1,174
As for getting more speed on SMB transfers, you will need jumbo frames at the least.

I am afraid you are right. I just didn't want to create another network and re-wire every machine with a 10Gb NIC to add a second 1Gb connection, so that it could run jumbo frames on the 10Gb side and no jumbo frames on 1Gb. Right now the 1Gb and 10Gb networks are interconnected, so a machine with a 10Gb NIC has only one connection to the network.


As far as per-channel speed goes, it may be able to negotiate that per channel, but it cannot move data at 12Gbps; if it could, I would saturate it with the SSDs, and it doesn't. The H200 is not a high-performance controller.

I know the H200 is Dell's version of the LSI 9211-8i, the same cards I have in mine, serving as HBAs in IT mode.
That's interesting; I'll have to dig up some benchmarks from the internet to find out more about this. You are saying that a single H200, with 8 ports x 6Gb, which equals 48Gbps theoretical maximum, can't even pull data at 12Gbps? Hmmm... Well, in my case I have 2x LSI 9211s (H200s), so even if they pull only 6Gb per card I can still do over 10Gb combined, but I am curious whether it can really be that low. I doubt it, but I have not benchmarked 8x SSDs connected to an H200 to test it, so I can't speak from experience.
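Until I find proper benchmarks, here's the back-of-envelope view (assumptions: the 9211-8i/H200 is a PCIe 2.0 x8 card, and 8b/10b coding makes 6Gb SAS about 600 MB/s and a PCIe 2.0 lane about 500 MB/s usable):

```python
# Rough ceiling for an LSI 9211-8i / Dell H200 HBA (assumed PCIe 2.0 x8 card).
# 8b/10b line coding means 6 Gbit/s SAS ~= 600 MB/s usable per port, and
# PCIe 2.0 ~= 500 MB/s usable per lane. Firmware/driver overhead pushes real
# numbers below these figures.
SAS_MB_PER_PORT = 600
PORTS = 8
PCIE_MB_PER_LANE = 500
LANES = 8

sas_side = SAS_MB_PER_PORT * PORTS      # 4800 MB/s across all 8 ports
pcie_side = PCIE_MB_PER_LANE * LANES    # 4000 MB/s host interface

ceiling_mb = min(sas_side, pcie_side)
print(f"theoretical ceiling ~{ceiling_mb} MB/s (~{ceiling_mb * 8 / 1000:.0f} Gbit/s)")
# -> ~4000 MB/s (~32 Gbit/s), so well above 12 Gbit/s on paper; the practical
#    limit is the drives behind it plus controller overhead.
```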
 
Joined
Nov 11, 2014
Messages
1,174
file:///C:/Users/william/Desktop/LSISAS9211-8i_UG_v1-1.pdf

I would have to combine all of my drives into one giant RAID 0 to approach 10Gbps or higher... which is not my configuration.

Let me do some research before I respond. :smile:
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
It does. You are not going to get the theoretical maximums in any way unless you go to extremes, and even then it is iffy. Keep in mind the numbers you see everywhere are theoretical maximums. They do not take into account any kind of transport or protocol overhead that will be in play. To keep it simple, enable jumbo frames and see what you get. I would run jumbo frames over everything; then you do not have to rewire. If you get faster than 5Gbps, great; if not, you have a bottleneck somewhere. I know where the bottlenecks are in my system, and I am not willing to go to extremes just to get another 30 minutes or maybe an hour off my backup times.

Also, you cannot do your calculations based on the interfaces on the drives; you need to do it based on the actual performance of the drives themselves. A hard drive is never going to reach 6Gbps, regardless of what interface speed it negotiates. You need to look up the transfer rates for your devices, under which workload type, and how many IOPS they can do at each workload type. That is what will tell you what your performance will be, not the interface speeds on the drives.
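In other words, the arithmetic should look something like this (the per-drive figures here are placeholders, not anyone's real numbers; plug in rates you have actually measured or looked up):

```python
# Estimate array throughput from measured per-drive sustained rates for each
# workload, not from the negotiated interface speed. Figures are placeholders.
drives = {
    "7.2k SATA HDD, sequential": {"count": 2, "mb_s": 150},
    "7.2k SATA HDD, 4K random":  {"count": 2, "mb_s": 1},    # ~250 IOPS * 4 KiB
    "SATA SSD, sequential":      {"count": 2, "mb_s": 450},
}

for name, spec in drives.items():
    total = spec["count"] * spec["mb_s"]
    print(f"{name}: ~{total} MB/s aggregate (~{total * 8 / 1000:.2f} Gbit/s)")
# A drive that negotiates 6 Gbit/s on the wire still only moves what its
# platters or flash can actually deliver for that workload.
```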
 
Joined
Nov 11, 2014
Messages
1,174
I know where the bottlenecks are in my system, and I am not willing to go to extremes just to get another 30 minutes or maybe an hour off my backup times.

I like to constantly improve efficiency. A minute saved on one thing could be spent on another! :smile:

Also, you cannot do your calculations based on the interfaces on the drives; you need to do it based on the actual performance of the drives themselves. A hard drive is never going to reach 6Gbps, regardless of what interface speed it negotiates.

I totally agree with you. I would use SSDs that can do 500 MB/s+ reads and 500 MB/s+ writes to push the controller to the limit. I just don't agree with your estimate of a 10Gbps maximum for the LSI 9211 (on account of its age), assuming you fill all 8 channels with Samsung 850 Pros.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
10Gbps on one SFP+ port is the maximum; if you want more, you are going to need faster fiber interfaces. I provided the specifications for the card; it also depends on which PCIe slot you have the card plugged into.

At a minimum, you need to go to jumbo frames. You are also still going by the theoretical maximums of the 850 Pro; they do not always hit 500 MB/s. As I mentioned before:
You need to look up the transfer rates for your devices, under which workload type, and how many IOPS they can do at each workload type. That is what will tell you what your performance will be, not the interface speeds on the drives.
There are a ton of parameters that will affect your performance. You are not taking all of the factors into account.
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
Mine have SMB or FTP (and local SD card) but not NFS. That's cool to have as an option, no doubt. See, that's what I meant: SMB, FTP, and NFS are protocols used to connect, but NAS is not. Yet they put FTP/SMB/NFS/NAS together, so what is "NAS" supposed to be, right?

I apologize, I know this is a fairly old thread and I'm not sure if this has already been answered, but...

No experience with Amcrest cameras, but other cameras I've used have used 'NAS' and 'NFS' interchangeably - it's entirely possible that NFS is what they meant to say ;)
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
When I start thinking about it... I don't actually know what "NAS" means on my cameras. I only assume it's SMB because, you know, the most-used consumer OS is Windows/SMB, but it might actually be NFS. It could even be iSCSI for all I know; who knows what the stupid vendor had in mind when they put the "NAS" option there.

I used a Hikvision DVR once that supported iSCSI but they called it "NetHDD".

I agree, these Chinese vendors use painfully ambiguous language. They probably have a specific character they use that can be understood perfectly, but it gets lost in translation...
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
From what I've heard (but never experienced), when you start running out of RAM in FreeNAS, things get much uglier than on other OSes, and your performance can drop like 10x, to a crawl.

It can slow down quite a bit, yes. FreeNAS uses quite a bit more RAM than plain FreeBSD, Linux, etc., because of the web GUI overhead, lots of pre-loaded/installed services, and so on. You're looking at a minimum of 8GB just for it to run, which is a lot for a file server.

The real issue is ZFS or any other COW filesystem (BTRFS has this problem, too): if you're using it for your OS and run much past 80% disk usage, you can brick your system due to lack of slop space.

I've heard OpenSUSE users sometimes have this issue because it uses BTRFS by default and they don't tell users not to fill it up all the way. Should come with a warning sticker :P

For a storage volume, obviously it won't kill your OS, but it will kill your storage volume's performance. Partition out some slop that can't be filled up with videos (what do they call it in SSD-speak, er, over-provisioning?).
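Rough arithmetic for sizing that slop, as a sketch (the 80% figure is the usual rule of thumb rather than a hard ZFS limit, and the pool size and dataset are made up):

```python
# Work out a quota for a recording dataset so the pool never fills past a
# chosen utilization target. The 80% target is the usual rule of thumb for
# COW filesystems; the pool size and dataset name here are hypothetical.
pool_capacity_tib = 16        # assumed usable pool size in TiB
target_utilization = 0.80     # keep ~20% free as slop space
other_data_tib = 2            # space set aside for non-camera data

quota_tib = pool_capacity_tib * target_utilization - other_data_tib
print(f"set a quota of about {quota_tib:.1f} TiB on the (hypothetical) cameras dataset")
# -> 10.8 TiB; applied as a dataset quota, recordings can never push the
#    pool past the 80% mark.
```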
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
You have one CPU with a proper passive server CPU cooler, and the second CPU has what, a modified laptop cooler?

It's an active 1U cooler. Looks like an older Dynatron, but I can't tell for sure.
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
Just sharing my experience.

I've got FN on a server, using iSCSI (2x 10Gb links) to provide access to a Xen/VMware virtualized server. Installed Ubuntu and Shinobi CCTV on it.
  • got about 10 D-Link DCS-4602EV cameras
  • got 4 Hikvision DS-2CD3145F-I -> important: they support H.265!! That greatly reduces the space required to save video, which has an impact on FN.
All of them are configured in Shinobi, so Shinobi captures the video data from the cameras. All working well so far; I can post more details if anyone is interested.

How's Shinobi working out for you?

I've tried it a few times. I talked to the developer, who said he writes it on Ubuntu 18.04, so I tried it most recently with a fresh VM, but I couldn't get the text fields to work in the initial setup of the web GUI.

I got it working once before that, also on 18.04, but it seemed temperamental. I was just trying different programs at the time, so I deleted the VM thinking I could just install it again, but then the NPM package versions broke it or something.

Definitely the most granular settings I've ever seen on NVR software.

So, what's your secret? I'm running Xprotect Essentials on 2019 LTSC right now secretly hoping Shinobi development churns out something more usable before I run out of Xprotect's free camera licenses...
 