Small footprint ... what's the buzz?

Status
Not open for further replies.

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
I'm running a couple on mine without a problem--but again, only four cores, with no hyperthreading. I'm considering a CPU swap for mine at some point.
That tells me you're not happy with the performance ... which CPU do you have in yours?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That tells me you're not happy with the performance ... which CPU do you have in yours?
I'm satisfied with the performance, but I'm also limiting what I do with it. And I'm not too concerned for now; I have three other nodes, each with dual E5-2680v2, for when I need the grunt. But the system will take more CPU than the stock E-2224, so if I can find an upgraded CPU, I'll consider swapping it in.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
... first one is ... no hardware raid? [...] But I'm wondering if hardware RAID is even something that home NAS boxes typically even do? Is it pretty much software raid these days?

RAID itself is a thing that is on an outbound trajectory in the industry.

ZFS is what you dismissively (maybe?) refer to as "software RAID", because Sun figured out many years ago that RAID silicon was stupid-expensive compared to CPU. And, unlike standard RAID, ZFS is able to do some really amazing things to protect your data that no RAID controller at a similar price point can do. This is why there are numerous petabyte ZFS systems out there; they are generally affordable (at least if you can afford the disks and the chassis).

RAID has historically protected both running systems (such as your typical Windows Server) and the data being archived, stored, and processed. A lot of "hardware RAID" was oriented towards idiot operating systems like Windows, which could barely lay blocks down on disk without a catastrophic error happening. Trying to integrate something like software RAID into the DOS/Windows ecosystem was basically a nonstarter, and, at the time, there wasn't CPU-level support for the rapid bitwise operations we have today.

Modern UNIX-derived operating systems like FreeBSD have no problem doing complicated things with disk I/O, and because modern CPUs do these operations fast, and you can do things like RAIDZ3 parity, they outshine "hardware" controllers in many ways. Even Dell's high-end H740p RAID controller, which lists for north of $1K, has a "whopping" 8GB of cache and a 32-device limit. ZFS will use all of your system's RAM, up into the terabytes, and hundreds of drives, without blinking.
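To make the "CPUs do parity math trivially" point concrete, here's a toy single-parity sketch in Python (illustrative only; real RAIDZ3 keeps three independent parity blocks and handles variable stripe widths, so this is nothing like the actual implementation):

```python
# Toy single-parity demo: XOR parity lets any one lost data block be rebuilt.
# (RAIDZ3 goes further: three independent parity blocks, any three failures.)

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute a parity block as the byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing block: XOR of parity and all survivors."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)
# Pretend the second disk died; rebuild its contents from the rest plus parity.
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
```

This is exactly the kind of bulk bitwise work that was painful on a 90s-era CPU and is a rounding error on a modern one.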

Current application development has largely veered away from Windows, though, and towards microservices and other things that do not really require large RAID-based servers, and can instead be hosted in the cloud on dirt-cheap, easily-lit, easily-burned instances.

This leaves large-scale data storage as an issue, and conventional RAID really isn't that good at that either ... consider RAID6 vs ZFS RAIDZ3, and the ability to tolerate failures.
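The failure-tolerance gap can be put in rough numbers. This is a hypothetical back-of-the-envelope calculation assuming independent per-disk failures (real arrays see correlated failures, so treat the absolute numbers as illustration only):

```python
from math import comb

def loss_probability(n: int, parity: int, p: float) -> float:
    """P(data loss) = P(more than `parity` of n disks fail together),
    assuming each disk fails independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(parity + 1, n + 1))

# 10-disk array, with a made-up 2% chance each disk dies during a
# rebuild window:
raid6  = loss_probability(10, 2, 0.02)   # more than 2 failures is fatal
raidz3 = loss_probability(10, 3, 0.02)   # more than 3 failures is fatal
print(f"RAID6-style: {raid6:.2e}, RAIDZ3-style: {raidz3:.2e}")
```

With these assumed numbers the RAIDZ3-style layout comes out well over an order of magnitude less likely to lose data, simply because it survives any three simultaneous failures instead of any two.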

ZFS wasn't designed for home NAS boxes. When it was designed in the mid-2000s, it was aimed at high-end Sun servers. We're now enough years on that the hardware to run it plausibly is within reach of home users, but don't equate ZFS's "software RAID" with "home NAS boxes" -- most home NAS boxes still do not run anything like ZFS. But you can have it; it's within practical reach.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
RAID itself is a thing that is on an outbound trajectory in the industry.

Even Dell's high end H740p RAID controller that lists for north of $1K

News to me... RAID is still very much a thing, just not in ZFS circles. Those cloud block & object stores are built from the same parts as everything else. It's all just hidden from view.

FWIW - Depending on your NVMe requirements... The H755N is the current "high end" shipping controller... :wink:
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
We abandoned RAID controllers more than 15 years ago, even before we switched to ZFS. A software-based solution is superior in every respect, most importantly in resilience to hardware failure. All "hardware RAIDs" rely on a specific brand, sometimes even a specific model of controller. What if the one in a five-year-old machine breaks and you cannot get the correct replacement? We used GEOM mirror throughout. Server fails? Pull the disks, put them into *any* server with matching (SATA/SAS/SCSI) ports, boot, and be back in business.
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
RAID itself is a thing that is on an outbound trajectory in the industry.

...
ZFS wasn't designed for home NAS boxes. When it was designed in the mid-2000s, it was aimed at high-end Sun servers. We're now enough years on that the hardware to run it plausibly is within reach of home users, but don't equate ZFS's "software RAID" with "home NAS boxes" -- most home NAS boxes still do not run anything like ZFS. But you can have it; it's within practical reach.
So I honestly had never heard of RAID-Z before ... I've been focusing most of my time over the last eight-ish years on software development, mainly Java but also some C++ and microcontrollers. But halfway through your post I googled it and read a summary, including a brief history, and once again ... I'm impressed with how innovative people are in this industry. We've all heard the promises of so-called "self-healing" drive formats, and we've all been disappointed. My current 8TB external is an APFS volume, but one day I noticed that some of my data was just ... GONE! No errors ... no nothing ... Fortunately, I was able to get the data back using the free version of DiskDrill, but nothing I've tried has been able to fix the APFS errors that show up on a health scan.

I suppose I could say that had it been any other format, like FAT or NTFS, I wouldn't have recovered the data so easily, nor would I still have a volume I can mount to get at the data that's still on it ... but talk about being disillusioned about the robustness of APFS ... it goes without saying.

Conceptually, I like what they're saying about the ZFS file system and the way it uses checksums to verify data blocks, which lets it repair damaged data ... I assume it either keeps a record of checksums for specific blocks or uses some kind of algorithm to just know whether a block of data has any errors ... I haven't dug into it that far yet. And petabyte RAIDs? That is very impressive!
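For what it's worth, the detect-then-repair idea can be sketched as a toy mock-up. This is hypothetical code, not ZFS internals (ZFS actually stores each block's checksum in its parent block pointer, Merkle-tree style, rather than alongside the data):

```python
import hashlib

class MirroredBlock:
    """Toy model of checksummed, mirrored storage: the checksum is kept
    separately from the data, so a corrupt copy can be detected and
    silently repaired from the good mirror."""

    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]  # two-way mirror
        self.checksum = hashlib.sha256(data).digest()      # stored elsewhere

    def read(self) -> bytes:
        for i, copy in enumerate(self.copies):
            if hashlib.sha256(copy).digest() == self.checksum:
                # Self-heal: rewrite every other copy from the verified one.
                for j in range(len(self.copies)):
                    if j != i:
                        self.copies[j][:] = copy
                return bytes(copy)
        raise IOError("all copies corrupt: unrecoverable")

blk = MirroredBlock(b"important data")
blk.copies[0][0] ^= 0xFF   # simulate silent bit rot on the first disk
good = blk.read()          # bad copy detected, skipped, and repaired
```

The key point the sketch shows: because verification happens on every read against an independently stored checksum, corruption never gets returned to the application, and the bad copy gets fixed as a side effect.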

Of course, it also makes sense to use the power we now have in CPUs to handle the calculations for RAID volumes ... it's not like it was in the '90s and 2000s, and most computers rarely if ever max out their CPU time, so why not dedicate some of it to the file system? I wonder if it uses any kind of virtual machine technology to slice out hardware resources for its functionality ... lots of questions ...

Thanks for the info, much appreciated.
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
"Equipped with TWO - DUAL CORE HIGH PERFORMANCE AMD Opteron processors" - LOL, with all the drives that thing could hold, I'm amazed it worked at all ... but hey, at least it was 32-bit compatible ... so ...
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
News to me... RAID is still very much a thing, just not in ZFS circles. Those cloud block & object stores are built from the same parts as everything else. It's all just hidden from view.
We used to laugh at the idea that people would allow their sensitive data to be stored "in a cloud" ... businesses not maintaining absolute control over their data by keeping it in house? ABSURD! Look at the trends now ... I once said I would NEVER have a laptop as my main computer ... now I know I will NEVER own a desktop again. But it is hard to see RAID controllers going the way of the dodo ... if anything, they serve a function that will ALWAYS be needed when you need to build a server that processes at near light speed ... having a piece of hardware that off-loads that function will always be appealing to an engineer. GPUs are a great example of doing just that ... look at how that trend has gone since cryptocurrency hit the planet ...
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
We abandoned RAID controllers more than 15 years ago, even before we switched to ZFS. A software-based solution is superior in every respect, most importantly in resilience to hardware failure. All "hardware RAIDs" rely on a specific brand, sometimes even a specific model of controller. What if the one in a five-year-old machine breaks and you cannot get the correct replacement? We used GEOM mirror throughout. Server fails? Pull the disks, put them into *any* server with matching (SATA/SAS/SCSI) ports, boot, and be back in business.
I would argue that you could squeeze more performance out of a server by off-loading ... well ... just about any function, really. We've moved the process of encryption over to dedicated silicon, which was a great thing to do ... I think an operating system should really be more of a resource steward as much as possible ... because once you start taxing a CPU with overhead, the entire system starts feeling sluggish, so the more you can keep that from happening, the better ... and of course, keeping Windows 100 feet away from the building at all times (even if it takes a restraining order) is just good common sense. Though I have to say I am impressed with Windows 11 ... but Windows 11 is what Windows 2000 should have been ... they aren't just a day late and a dollar short, they still have a LONG way to go ... but at least they are working in that direction, which gives me hope that someday they will finally "get it".

And hardware failures are no more likely - on average - with an added controller than without one. Probabilities of hardware failure are fairly consistent, with the numbers of course differing depending on who manufactures the hardware, what materials are used, engineering prowess, etc. I'd say that software RAID has its place, absolutely, but hardware RAID - too - has its place, and the determining factor between those options is strictly bound to the purpose of the server.

You used the magic words when discussing hardware failures ... "what if" ... those are words that can justify anything and rarely do they take root in the evidence of reality...
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
We used to laugh at the idea that people would allow their sensitive data to be stored "in a cloud" ... businesses not maintaining absolute control over their data by keeping it in house? ABSURD! Look at the trends now ... I once said I would NEVER have a laptop as my main computer ... now I know I will NEVER own a desktop again.

But in this case, the problem hasn't shifted like the laptop/desktop split, and it certainly hasn't gone away. When you move to cloud, you're just paying to make it someone else's problem, so you don't have to manage it. They're still building their equipment from the same pieces of technology. They just get to economize on scale, split their risk among fault domains, standardize on middle-cut "binned" CPUs that don't even get price-listed for purchase by you and me, etc... A modern server with 2TB of RAM and a couple racks of JBODs attached to multiple HW RAID controllers is a FRU to these outfits. But at the end of it, your "object storage" sits on some kind of disk in some kind of arrangement of redundant array. You expect 100% redundancy with 4-nines uptime, and they bill accordingly. :eek:

For the really critical corporate stuff... You can even buy their entire management schema & codebase, rent their staff, and hire them to run their cloud in your datacenter. I know at least one of the major players will sell this. You get all the same APIs, all the same features, and even get to define change rules and other critical requirements tailored to business needs. All the company provides is floor space, power & air-conditioning. But only governments & huge companies like telcos can afford it so far...
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
But in this case, the problem hasn't shifted like the laptop/desktop split, and it certainly hasn't gone away. When you move to cloud, you're just paying to make it someone else's problem, so you don't have to manage it. They're still building their equipment from the same pieces of technology. They just get to economize on scale, split their risk among fault domains, standardize on middle-cut "binned" CPUs that don't even get price-listed for purchase by you and me, etc... A modern server with 2TB of RAM and a couple racks of JBODs attached to multiple HW RAID controllers is a FRU to these outfits. But at the end of it, your "object storage" sits on some kind of disk in some kind of arrangement of redundant array. You expect 100% redundancy with 4-nines uptime, and they bill accordingly. :eek:

For the really critical corporate stuff... You can even buy their entire management schema & codebase, rent their staff, and hire them to run their cloud in your datacenter. I know at least one of the major players will sell this. You get all the same APIs, all the same features, and even get to define change rules and other critical requirements tailored to business needs. All the company provides is floor space, power & air-conditioning. But only governments & huge companies like telcos can afford it so far...
That's interesting ... I always saw the practice of outsourcing daily-needed skills to an outside organization as akin to mixing water and oil, usually done only because the company lacks the internal skills to do it themselves. Certainly, it is usually less expensive to do something in-house when it's a function that must be performed daily. Of course, it makes more sense for a company to outsource a function that only needs to be done once in a while, but outsourcing skills that must be utilized daily usually means paying double or more for those same skills compared to keeping them in-house ... I never understood that practice, especially in the private sector. Government outsourcing I don't have much problem with, since it's basically a practice of re-investing tax money into the economy, although it's still technically a waste of money - and I'm not referring to government projects like new building construction, or when the military wants to find out if there is a better way to design camouflage clothing ...

I've always wanted to see what the inside of a data center like Google's or Amazon's looks like, but they keep that stuff under tight wraps. I do remember once seeing an aerial photo of a new Apple data center being built in the middle of nowhere, and the building was massive ... putting the Wal-Mart distribution centers to shame in terms of size. I would think it would be a good measure to build those data centers underground ... it turns out that when properly designed and built, underground structures are more resilient to earthquakes, and certainly they would be less expensive to air-condition.

But I certainly share your view that hardware RAID isn't trending out. I think it will always have its place, if for no other reason than to offload work from the main CPU in those applications where that is a benefit, and even a technology like RAID-Z wouldn't be enough, I don't think, to mitigate the benefits of a hardware RAID layer in every situation. It would have to be some magic-pixie-dust software RAID before that ever happens, and even then ... if that could be dedicated to silicon, all the better ... certainly dedicating processors to just that one function carries less risk of problems, since putting that layer into software on the operating system runs a risk of contamination by other software running on the operating system. And though I'm sure that risk is mitigated substantially, it remains a risk nonetheless.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
News to me... RAID is still very much a thing, just not in ZFS circles. Those cloud block & object stores are build from the same parts as everything else. It's all just hidden from view.

FWIW - Depending on your NVMe requirements... The H755N is the current "high end" shipping controller... :wink:

RAID has been on its way out for a long time, and it has nothing to do with "ZFS circles."

If you want the obvious example, it would be VMware vSAN, where they discourage the use of RAID controllers, using individual JBOD disks and turning off write caching if you have a certified RAID controller (which is basically killing most of the RAID card's functionality).

Most of the storage vendors these days seem to be using proprietary software stacks for their storage, and not using hardware RAID controllers in there. This allows for a lot of flexibility in the way these are designed and implemented, including allowing for different kinds of data protection that aren't conventional RAID5/6. Don't think you'll find a RAID controller in your typical Nutanix, Pure Storage, etc. devices. (with some luck I'll even have remembered correctly).

Now of course if you happen to work for a Windows shop, I'm sure you still see lots of hardware RAID, but the cloud vendors are definitely looking to cut costs where they can, and eliminating hardware RAID has been a large component of that. AWS EBS is definitely not on hardware RAID, and their availability policy should make that clear ... if you want RAID-like protection, you need to do it on your virtual machine at the OS level.

Over here at SOL, this has been a thing basically forever, preferring multiple inexpensive servers ("RAIS" - redundant array of inexpensive servers) over a single, pricey, and only somewhat more reliable server. Sometimes it felt like we were a bit of an outlier, but eventually the world caught up thanks in part to AWS and a generation of developers who learned new abstractions that allowed them to build stuff like microservices.

This doesn't mean that they don't make hardware RAID controllers anymore, or that Windows Server is dead. These things will be at least somewhat relevant for a long time, but their criticality to core infrastructure design is definitely in sharp decline.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"Equipped with TWO - DUAL CORE HIGH PERFORMANCE AMD Opteron processors" - LOL, with all the drives that thing could hold, I'm amazed it worked at all ... but hey, at least it was 32-bit compatible ... so ...

AMD Opteron kicked the crap out of contemporary Intel offerings. Not clear what your source of amazement is, here. Intel regained the lead with Sandy/Ivy Bridge, and held on for a good while, but it's flipped again with EPYC/Ryzen.
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
Heck even something like this would be FINE as a boot drive: https://www.amazon.com/Samsung-MUF-256AB-AM-Plus-256GB/dp/B07D7Q41PM

Flash disks are not recommended as boot drives anymore, probably due to quick failure and because small SSDs are really cheap now.

But in my HP Microserver I use 2 × 32 GB USB flash drives as mirrored boot devices, and the reason was the smallest possible footprint, as I also live in an apartment. I know the risks, I keep backups of the config, and I'll see how long they last; if needed, I'll change to SSDs.
In this case your system dataset should be in another pool (not the boot pool).
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
AMD Opteron kicked the crap out of contemporary Intel offerings. Not clear what your source of amazement is, here. Intel regained the lead with Sandy/Ivy Bridge, and held on for a good while, but it's flipped again with EPYC/Ryzen.
My source of amazement was that you posted a link to a server ad from about 15 years ago...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I did - because this was the system ZFS was developed for. Or the system that was developed for ZFS. Whichever way round, this was an incredible feat of engineering at Sun Microsystems that revolutionized how we view, use and manage storage today.
 

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
RAID has been on its way out for a long time, and it has nothing to do with "ZFS circles."

If you want the obvious example, it would be VMware vSAN, where they discourage the use of RAID controllers, using individual JBOD disks and turning off write caching if you have a certified RAID controller (which is basically killing most of the RAID card's functionality).

Most of the storage vendors these days seem to be using proprietary software stacks for their storage, and not using hardware RAID controllers in there. This allows for a lot of flexibility in the way these are designed and implemented, including allowing for different kinds of data protection that aren't conventional RAID5/6. Don't think you'll find a RAID controller in your typical Nutanix, Pure Storage, etc. devices. (with some luck I'll even have remembered correctly).

Now of course if you happen to work for a Windows shop, I'm sure you still see lots of hardware RAID, but the cloud vendors are definitely looking to cut costs where they can, and eliminating hardware RAID has been a large component of that. AWS EBS is definitely not on hardware RAID, and their availability policy should make that clear ... if you want RAID-like protection, you need to do it on your virtual machine at the OS level.

Over here at SOL, this has been a thing basically forever, preferring multiple inexpensive servers ("RAIS" - redundant array of inexpensive servers) over a single, pricey, and only somewhat more reliable server. Sometimes it felt like we were a bit of an outlier, but eventually the world caught up thanks in part to AWS and a generation of developers who learned new abstractions that allowed them to build stuff like microservices.

This doesn't mean that they don't make hardware RAID controllers anymore, or that Windows Server is dead. These things will be at least somewhat relevant for a long time, but their criticality to core infrastructure design is definitely in sharp decline.
"RAID has been on its way out for a long time" - I don't know about you, but when I have a house guest and they are "on their way out" ... they don't linger around for another decade ... I guess RAID controllers are the last remaining "rude" house guests that just won't take a hint? And VMware officially recommending JBODs for virtual machines is only evidence that what I said is correct ... which is that there will always be a use case for software RAID and there will always be a use case for hardware RAID ... and that leaves hardware RAID anywhere BUT "on its way out" ... I seem to recall similar predictions in the late '90s concerning serial ports when USB was busting onto the scene ... yet over 20 years later ... I think Cisco still uses serial as the primary means of accessing their headless hardware...

Hardware RAID in Windows servers ... I mean, does this even need to be mentioned? Microsoft is still trying to figure out how to engineer a real operating system ... and by sheer luck (and crafty marketing campaigns) they managed to take the worst operating system on the planet and make it the most popular ... the perception of being user-friendly was mostly to blame for that - Novell had a directory service which couldn't even be compared to Active Directory ... it was so lean and so simple and so FUNCTIONAL and SCALABLE ... Active Directory was never even in the same ballpark as NDS, and it still isn't ... yet there again ... marketing won, as did groupthink ... but their operating system still has a LONG way to go ... but Windows 11 shows me that they seem to understand now which direction they need to be headed in, or suffer the fate of obsolescence as *nix operating systems become not only more user-friendly but leaner and meaner every year ... and the pseudo-fear from "corporate" (pick any corporate, it doesn't matter) - that open source is a risk because no one takes ownership of it - is a generalized fear that falls under the "what if" category I mentioned earlier ... but it's finally sinking in that those fears were massively unwarranted ... so switching away from the corporate playground - complete with fancy lights and music - toward a more robust, capable, and STABLE operating system is becoming a real threat for Microsoft, and they seem to be somewhat acknowledging that reality with Windows 11. They really need to bite the bullet and redesign Windows from the ground up, and they need to drop all of their arrogance and all of their pride and develop a REAL operating system that can be a true contender in this space ... they need to take risks like Apple used to do almost every three years until we lost Steve Jobs...

But back to raid ...

There is BY NO MEANS anything WRONG with hardware RAID ... to the contrary, it is a value-add function for NICHE servers, depending on what the server does ... they don't all just serve files, you know ... and as long as it can add value to a system, it will NEVER be gone ... EVER! That's just how it works ... when hardware can be applied to a situation and make that situation better than it would be without the hardware ... it's not going anywhere. (Cloud file servers being more cost-effective does not indicate a trend away from hardware RAID - it only indicates a trend away from it for that use case, because massive storage devices cost almost nothing now and CPUs are blazing fast and still following Moore's law, which makes software RAID economically PROPER, because just serving files is a brainless task requiring no CPU overhead, so why not use software RAID FOR THAT FUNCTION?)

Let's talk about a web server that has to process hundreds of thousands of transactions per second ... I'd fire you if you implemented software RAID on that box ... taking away valuable CPU time from my customers. Or how about mission-critical gaming servers like the kind that host World of Warcraft or Call of Duty ... if you implement software RAID on those kinds of boxes ... you need to have your IT card revoked and go become a greeter at Wal-Mart, where you will be of more use to a company ... and I'm sorry that doesn't fit your wishes ... but it is a simple fact of engineering and economics, and so you must simply accept that fact ... or don't ... you're free to live in denial all you want ... and I certainly respect your right to do so, and will fight for you to maintain it.

But the only thing that will replace the current hardware RAID ... is a BETTER hardware RAID ... we went from MFM to IDE to SATA ... and we ADDED USB to serial ... we STILL have DVD and CD and Blu-ray ... we STILL have tape backups ... about the only hardware technology I can think of that actually did go extinct ... was the Zip drive ... (I was shocked to learn the DROBO still exists) ... the floppy drive (though you can still buy them today) ... tape drives as a mainstream standard method for backing up data, since massive hard drives are damn near free compared to what similar storage would have cost 20 years ago ... but tape drives still exist ... SCSI has mostly surrendered to SAS, but it's not gone and it won't be gone ... it was simply replaced for the most part ... Token Ring - replaced by Ethernet ... but as a hardware technology ... that function is still alive and well ... as a hardware solution.

Can you think of any other hardware technology since the birth of IT that has actually gone extinct and was not replaced by a newer and more efficient hardware technology? It just doesn't happen.


(I'm still shocked that Microsoft AD killed Novell NDS ... Novell Directory Services AS IT EXISTED in the late 1990s WOULD STILL make the modern Active Directory look like the blubbering fool that it is and always has been ... it just proves that people are, for the most part ... clueless, and they would rather purchase shiny objects because they look pretty under bright lights ... than the not-so-pretty yet rock-solid tractor that can actually help them make a profit ... it really is a shame ... NDS was exactly what a network resource management platform is supposed to be ... Novell actually understood what was happening out in the IT space. Microsoft managed to cause dead cells to come to life, and then they took Frankenstein and put gold chains on him ... a backward baseball cap ... some new Nikes and sunglasses, then taught him how to moonwalk, and suddenly Frankenstein was the most amazing thing anyone had ever seen, and they just threw money at him like a high-dollar "lady of the evening" (apparently we have word filters on this server :smile:))
Mike
 
Last edited:

EasyGoing1

Dabbler
Joined
Nov 5, 2021
Messages
42
You know ... when Novell created NDS ... that's what I cut my network engineering teeth on ... and there was so much poetry in that software that I didn't appreciate until I started working for a local government that drank the Microsoft Kool-Aid like everyone else ... but this was the main culture shock for me ...

In Novell NDS, as a network admin, I could create an object in the directory ... say a printer object ... or a file share object ... then I could assign the location of that object TO THE OBJECT ... so if it was a printer share called HPLJ4, for example ... I would assign to that object the location of the actual share, which might have been something like \\SERVER-ENGINEERING-01\HPLJ4 ...

... and let's say that our domain name for our directory is Company

BUT WHAT THE WORKSTATIONS mapped their printers to was \\Company\HPLJ4 ... nowhere in the mapping would the client EVER need to include which server the resource actually lived on ... you map to the tree, then the resource, and the location of the resource didn't matter. This meant that as an admin, I was free to move things around, whether they were folders with data, or printers, or scanners, or actual servers running functions like mail ... it didn't matter ... I could move anything I wanted anywhere I wanted, and I would NEVER have to go out to the end-users' workstations and change anything ... I could move the Engineering share of CAD files from SERVERA to SERVERB, and because Engineering mapped their drives to \\Company\CadFiles (a genuine directory object pretending to be a file share for the workstation, seamlessly redirecting that resource to the client in the background) ... it didn't affect them at all.
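That location-transparency trick is, at heart, one layer of indirection between a logical name and a physical location. A minimal sketch (hypothetical names and paths, nothing NDS-specific):

```python
# Toy directory service: clients bind to logical names; the directory
# maps them to physical locations, so resources can move without
# touching any client. (Illustrative sketch, not NDS internals.)

class Directory:
    def __init__(self):
        self._map: dict[str, str] = {}

    def register(self, logical: str, physical: str) -> None:
        """Point a logical name at a physical share (admin operation)."""
        self._map[logical] = physical

    def resolve(self, logical: str) -> str:
        """What a client's mapping layer would call behind the scenes."""
        return self._map[logical]

dirsvc = Directory()
dirsvc.register(r"\\Company\CadFiles", r"\\SERVERA\cad")

# A client only ever knows the logical name:
before = dirsvc.resolve(r"\\Company\CadFiles")

# Admin migrates the share to another server; clients are untouched:
dirsvc.register(r"\\Company\CadFiles", r"\\SERVERB\cad")
after = dirsvc.resolve(r"\\Company\CadFiles")
```

The client-side mapping never changes; only the directory's answer does, which is exactly the property described above.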

But that is what a REAL directory service is supposed to be ... it's just this cloud of resources that remains constant regardless of back-end changes in the network, and thus the users keep on working like any other day, no matter what's going on in the network.

To my knowledge, Active Directory STILL does not do this ... at all ... it's never actually been a directory service ... at best, it's been a steward of user accounts, security policies, and the assignment of permissions based on organizational units or however the company wants things structured, but the only thing it's ever done is centrally store policies that get applied to servers and workstations ... it's always been centered on the edge devices, and NEVER has it been an actual central resource that provides services on the network for the user in a genuinely user-friendly manner.

That's what I mean when I say that Microsoft just never got it ... they never really understood what it takes to manage IT properly, and they still don't. They even attempted at one time to implement folder mirroring through Active Directory, and man, what a colossal failure that was ... I do wonder if they ever got it right, or if they just abandoned it under the realization that their AD code simply doesn't scale, and so it was better to pretend those features aren't needed than to actually create them - and re-think their entire approach to directory services.
 