IIS isn't "bad", but it's definitely way more complicated than these newer hosting models.
Controlling 100% of the hosting environment from code is a really nice shift in responsibility. It takes all the mess out of your tooling and processes, and most of the scary parts are resolved at code review time.
You can see some benchmarks (from 2017) here: https://robertoprevato.github.io/Comparing-Linux-hosted-to-W... where Linux beats Windows in async web workloads. That gap has only widened in the years since.
In my personal experience with .NET 6+ (mostly 8 now), Linux destroys Windows on performance for .NET workloads, hands down, 95% of the time. This lines up with the Microsoft .NET team's advice: every time the "Windows or Linux?" question comes up, they say "Linux - it's easier to deal with and performance is better."
I'm really not sure why you are arguing in favour of using Windows Server. This is legitimately bad advice.
The ASP.NET Core team maintains an extensive suite of benchmarks. If you select plaintext, fortunes, or json for the Intel Gold Linux and Windows hosts (both Kestrel and IIS), you will see that pure Kestrel on Linux has the best throughput and latency across the board.
https://msit.powerbi.com/view?r=eyJrIjoiYTZjMTk3YjEtMzQ3Yi00...
I would say that Kestrel is so damn fast it probably doesn't matter much either way. I've only ever used it on Windows and it hasn't disappointed.
On that topic, please stop putting Nginx in front of Kestrel just for the sake of it (i.e. when you have a single node, or it's an internal network without a load balancer, etc.) - you're literally wasting resources on an additional point of failure and extra network hops.
For certificates, you can retrieve them from whatever vault solution you have on hand, or you can indeed access a volume. I don't understand the difficulties with volume mounting, however - it's standard practice.
Configuring Kestrel programmatically is really straightforward too: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/s...
I'm not saying just using Nginx is a bad choice, but, much like using Redis for things that have no business using Redis, I've seen a fair share of the same misuse with Nginx.
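To illustrate the two points above - terminating TLS in Kestrel itself and reading the certificate from a mounted volume - here's a minimal sketch. The paths, ports, and the "CertPassword" configuration key are made up for illustration:

    var builder = WebApplication.CreateBuilder(args);

    builder.WebHost.ConfigureKestrel(kestrel =>
    {
        kestrel.ListenAnyIP(8080); // plain HTTP, e.g. for internal health checks

        kestrel.ListenAnyIP(8443, listen =>
        {
            // /certs is assumed to be a volume mount; the password could come
            // from whatever vault/secret store you already use.
            listen.UseHttps("/certs/site.pfx", builder.Configuration["CertPassword"]);
        });
    });

    var app = builder.Build();
    app.MapGet("/", () => "Hello over Kestrel-terminated TLS");
    app.Run();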
> Configuring Kestrel programmatically is really straightforward too
Sure, but developers tend not to want to futz with it. My use case: our application lives in a container; most of the time we host it, but occasionally a customer demands they host it, so we give them the container. Some host it on Windows (we do build a Windows container of our application :vomit:) and some on Linux. In any case, we require them to use a reverse proxy, because HTTPS with Kestrel was becoming too high a support burden, so we require they provide an HTTPS proxy. We recommend IIS ARR for Windows users and CaddyServer for Linux, but Linux users tend not to need our advice.
EDIT: Our hosting platform is Azure Kubernetes Service, 100% Linux Nodes.
Might as well have shipped a NativeAOT-compiled (or single-file trimmed JIT) build of YARP. Faster than Envoy, which is faster than Caddy. Only Nginx and HAProxy are faster than YARP.
(also bumped https://github.com/dotnet/yarp/issues/261)
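For reference, a rough sketch of what that proxy looks like - this assumes the Yarp.ReverseProxy NuGet package and its in-memory config extension; the route, cluster, and backend address are made up:

    using Yarp.ReverseProxy.Configuration;

    var builder = WebApplication.CreateBuilder(args);

    // Forward everything to a single backend; in practice you'd likely load this from config.
    builder.Services.AddReverseProxy().LoadFromMemory(
        new[]
        {
            new RouteConfig
            {
                RouteId = "app",
                ClusterId = "app-cluster",
                Match = new RouteMatch { Path = "{**catch-all}" }
            }
        },
        new[]
        {
            new ClusterConfig
            {
                ClusterId = "app-cluster",
                Destinations = new Dictionary<string, DestinationConfig>
                {
                    ["kestrel"] = new() { Address = "http://localhost:5000/" }
                }
            }
        });

    var app = builder.Build();
    app.MapReverseProxy();
    app.Run();

Publishing it NativeAOT or as a trimmed single file is then just a publish-time switch.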
I've built pipelines where the deployed code is responsible for cloning, building & redeploying itself based upon commands sent via an administration interface. The .NET SDK is relatively lightweight and can be included as a management component in a broader B2B/SaaS product.
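Not the parent's actual implementation, but the general shape of such a thing might look like this - a hypothetical sketch assuming git and the .NET SDK are on the box; the endpoint, repo URL, paths, and swap script are all made up:

    using System.Diagnostics;

    var app = WebApplication.CreateBuilder(args).Build();

    // Hypothetical admin command: the running app clones, rebuilds and stages itself.
    app.MapPost("/admin/redeploy", () =>
    {
        var src   = Path.Combine(Path.GetTempPath(), $"src-{Guid.NewGuid():N}");
        var stage = "/srv/app-next";

        Run("git", $"clone --depth 1 https://example.com/our/app.git {src}");
        Run("dotnet", $"publish {src} -c Release -o {stage}");

        // An external script/supervisor swaps the folder and restarts the process,
        // since the app can't safely overwrite its own binaries while running.
        Run("bash", $"/srv/swap-and-restart.sh {stage}");
        return Results.Accepted();
    });

    app.Run();

    static void Run(string file, string arguments)
    {
        using var p = Process.Start(new ProcessStartInfo(file, arguments))!;
        p.WaitForExit();
        if (p.ExitCode != 0) throw new Exception($"'{file} {arguments}' exited with {p.ExitCode}");
    }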
There's probably a cleaner way, but it's just an exe, so replace it like an exe?
Kestrel is fast, though.
Can't find the documentation for it now, but in some version of msdeploy they also added a way to automatically take the site offline during deployment, so the deployment isn't blocked by files in use.
UPD: https://learn.microsoft.com/en-us/aspnet/web-forms/overview/...
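The mechanism underneath (the AppOffline rule in msdeploy, if I remember right) is the app_offline.htm file, which IIS/ASP.NET honour by stopping the app and releasing file locks while it exists. You can do the same by hand in a deploy step - a rough sketch in C# (you'd more likely script it in PowerShell), with made-up paths:

    // Drop app_offline.htm so IIS/ASP.NET stops the app and releases file locks,
    // copy the new build in, then remove it to bring the site back.
    var site    = @"C:\inetpub\wwwroot\myapp";   // hypothetical site path
    var build   = @"C:\deploy\myapp\latest";     // hypothetical build output
    var offline = Path.Combine(site, "app_offline.htm");

    File.WriteAllText(offline, "<html><body>Deploying, back shortly.</body></html>");
    try
    {
        Thread.Sleep(TimeSpan.FromSeconds(5)); // give in-flight requests a moment to drain

        foreach (var file in Directory.EnumerateFiles(build, "*", SearchOption.AllDirectories))
        {
            var target = Path.Combine(site, Path.GetRelativePath(build, file));
            Directory.CreateDirectory(Path.GetDirectoryName(target)!);
            File.Copy(file, target, overwrite: true);
        }
    }
    finally
    {
        File.Delete(offline); // bring the site back online
    }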
It does indeed look simpler! I'll have to dig in to find out how exactly the remoting works and what user permissions are needed, but this could be the next step. Thanks a bunch!
I'm very unfamiliar with IIS hosting though, does it support any kind of containerisation/deployment immutability at all?
So while this is absolutely ancient tech and process by most measures, it has been a huge step up from copying files over to machines by hand through RDP connections.
And also, our apps have no problems having even a few minutes' downtime, so cold restarts are absolutely fine for us.
While I would absolutely like to get into containerized zero-downtime deployments, from where we're standing now, they don't seem to offer much in the way of ROI.
And please, if anyone has a more suitable process in mind, please let me know!
Otherwise, on our fleet of hundreds of IIS servers, we've had success just pointing IIS at a hardlink, deploying the new version to a new folder, and updating the hardlink - from memory this does trigger an app pool restart, but it's super fast and lets you go back and forth very easily.
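(For what it's worth, NTFS hard links only apply to files, so the link here is presumably a directory junction or symlink - but the swap itself is as simple as it sounds. A sketch with made-up paths, using .NET 6+'s Directory.CreateSymbolicLink; creating links on Windows needs admin rights or developer mode:)

    // IIS's physical path points at C:\sites\myapp\current, which is a link
    // that gets re-pointed at each release folder.
    var current = @"C:\sites\myapp\current";
    var release = @"C:\sites\myapp\releases\2024-06-01_1530"; // new build already copied here

    if (Directory.Exists(current))
        Directory.Delete(current);                  // deletes the old link, not its target

    Directory.CreateSymbolicLink(current, release); // point "current" at the new release
    // IIS picks up the change (app pool recycles); rolling back is just re-pointing the link.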
Some random IIS capabilities include:
- You can edit any part of the IIS configuration live, and this won't require a service restart. You can even edit .NET application code and configuration files live, and changes can take effect non-disruptively even if a process restart is required for the virtual app. IIS will buffer requests and overlap the new process with the old one. https://learn.microsoft.com/en-us/previous-versions/iis/6.0-...
- Web Deploy is basically just a zip file of a web application that can be produced by Visual Studio, vaguely similar to packaging up a site and its config with Docker: https://www.iis.net/downloads/microsoft/web-deploy
- Visual Studio integrates directly with IIS using "publish" settings that can target an IIS server and be used to deploy a complete web app with a button-press from the IDE: https://learn.microsoft.com/en-us/visualstudio/deployment/tu...
- The volume shadow service can be utilised by IIS to create backups of its entire configuration including all files on a schedule: https://usavps.com/blog/13597/
- Shared configuration allows scale-out web farms to be created. The key IIS config files and web content are moved to a UNC file share and then the workers just "point" at this. With an external load balancer that supports traffic draining this allows for seamless OS reboots, upgrades, etc... There's even a trick where SMB Caching is used to automatically and transparently create a local disk cache of the shared content, allowing IOPS to scale with web farm server instances without any manual deploy or sync operations. https://learn.microsoft.com/en-us/iis/web-hosting/configurin...
- The above goes hand-in-hand with Centralized SSL Certificate Support: https://learn.microsoft.com/en-us/iis/get-started/whats-new-...
If you want to use container technology on Windows for web apps, you can do that too. Windows Server supports Windows Containers, and ASP.NET Core has excellent support with a whole bunch of pre-prepared base container images for both Windows and Linux hosts.
If you have many such sites on a Windows web host, you would use the IIS Application Request Routing reverse proxy to add HTTPS to back-end containers running on HTTP. That, or just use Azure App Service, or YARP, or...
Personally, if I had to run a large-scale IIS web farm again, I would keep things simple and just use the Azure DevOps agent with the IISWebAppDeploymentOnMachineGroup task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tas...
The last time I hosted a web app on it was... sucks breath.. wow... before NGINX existed!
In any case, it doesn't matter now: Windows Server is in maintenance mode and Microsoft is clearly done except for cashing licensing checks. Linux is the (web) server future.
Unix got it right: text config, SIGHUP the processes to pick up any config changes, and away you go. It requires more work from admins, but it's much cleaner in the long run.
For those wondering how anyone is dealing with such an ancient process, I've written a piece about the history of automation in our org that might shed some light: https://rewiring.bearblog.dev/automation-journey-of-a-legacy...
(Not to mention Octopus Deploy is prohibitively expensive with a large server estate. What used to be $3K per year, or even a reasonable $30K per year, would now be $100K+ per year to renew the license. Which is down from $300K a few years ago, but we'd already decided to move away by that point.)
You absolutely should not remote into the web server box from the agent box! This goes entirely against the grain of how modern Azure DevOps pipeline deployments are designed to work... hence the security issue that the hapless blogger is trying to unnecessarily solve.
The correct approach is to install the DevOps Agent directly onto the IIS web hosts, linking them to a named Environment such as "Production Web App Farm A" or whatever. See: https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...
In your pipelines, you can now utilise Deployment Jobs linked to that named environment: https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...
Deployment jobs have all sorts of fancy built-in capabilities such as pre-deployment tasks, rolling and canary strategies, post-deployment tasks, health checks, etc...
They're designed to dynamically pick up the current "pool" of VMs linked to the environment through the agents, so you don't need to inject machine names via pipeline parameters. Especially when you have many apps on a shared pool of servers, this cuts down on meaningless boilerplate.
All of the above works even with annoying requirements such as third-party applications where active-passive mode must be used for licensing reasons. (I'm looking at you, ESRI, and your overpriced software.) The trick is to 'tag' the agents during setup; the tags can then be used later in pipelines to filter "active" versus "passive" nodes.
Running the agent on the app server seemed a bit risky since it a) drains resources from the apps and b) needs a route open to Azure (DevOps).
Apparently you have had good experiences with this? I'd be interested to learn more
Agent permissions to Azure are restricted based on the pipeline configuration, allowing only things that are actually used in the pipeline. So if your pipeline doesn't involve cloning some private git repo, the agent can't do that. And even then it only gets access to that particular resource. So you normally have a build pipeline that generates a package from your application, and then a deployment pipeline that only has access to that generated package, which is distributed to the agents configured for a particular deployment environment.
I don't really have much direct experience with deployment side of things so someone else can probably provide extra info.
The resource usage is negligible.
If you really need such a level of efficiency that you can't spare the tiny amount of memory the agent uses, then you'd be better off looking into something like Packer to build a Server Core image in your pipeline and deploying that to an Azure Virtual Machine Scale Set. This also allows "zero outbound communications" deployments, because the image build step is out-of-band and isn't done anywhere near the live infrastructure network. https://learn.microsoft.com/en-us/azure/devops/pipelines/tas...
The same concept also works with any other imaging-based workflow, such as Windows Docker containers running on App Service or Kubernetes. https://azure.github.io/AppService/windows-containers/
The packer + VMSS approach however allows the IIS workers to join an Active Directory domain "properly". Sure, Kubernetes has some limited support for this, but I wouldn't try it in production unless forced to at gunpoint.
> needs to have a route open to Azure.
If you're only deploying instead of running arbitrary pipeline tasks, then I believe just four firewall rules to Azure DevOps (not "Azure"!) are sufficient, but you may need more depending on your requirements: https://learn.microsoft.com/en-us/azure/devops/pipelines/age...
> Apparently you have had good experiences with this?
It's pretty much the "only way" to do things in the current-gen YAML Azure DevOps pipelines.
The alternatives are Kubernetes, App Service, and the like, but those generally aren't 100% compatible with traditional domain-joined IIS web servers.
If you actually need the traditional Windows box "infrastructure" instead of a PaaS, then directly installed DevOps Agents would be one of the best ways to manage it.
The best alternative I know of is probably Nomad, but that's only good if your org is already invested into Hashicorp tooling.