31 points by Mossy9 5 days ago | 7 comments
  • bob1029 a day ago
    If you have the choice, I'd strongly consider using Kestrel and self-contained deployments.

    IIS isn't "bad", but it's definitely way more complicated than these newer hosting models.

    Controlling 100% of the hosting environment from code is a really nice shift in responsibility. It takes all the mess out of your tooling and processes. Most of the scary stuff is resolved at code review time.
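
    For example, the whole hosting setup can live in a minimal Program.cs along these lines (a sketch; the port and limit are illustrative), shipped as a self-contained publish:

        var builder = WebApplication.CreateBuilder(args);

        // All hosting concerns live in code instead of IIS configuration.
        builder.WebHost.ConfigureKestrel(options =>
        {
            options.ListenAnyIP(8080);                       // illustrative port
            options.Limits.MaxRequestBodySize = 10_000_000;  // illustrative limit
        });

        var app = builder.Build();
        app.MapGet("/", () => "Hello from Kestrel");
        app.Run();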

    • stackskipton a day ago
      SRE here who deals with .NET (Core). If you have .NET and aren't running on Linux, you are just subjecting yourself to pain for no reason. .NET's recommended runtime environment is Linux, preferably in a container; otherwise, I'd recommend Ubuntu LTS.
      • p_ing a day ago
        This depends on your requirements. If you want high-performance non-blocking async networking, Windows is the better bet with .NET.
        • stackskipton 21 hours ago
          Dude, just wrong. At my last job, we moved a .NET RabbitMQ job processor that made a bunch of async web calls nightly and handled 50k+ messages from Windows to Linux, and our job processing time was cut by 2/3. In fact, the first few mornings, no one believed the execution time; they thought it had silently failed, and the Dev team manually checked its work. Nope, it did its job properly with less RAM/CPU.

          You can see some benchmarks here FROM 2017: https://robertoprevato.github.io/Comparing-Linux-hosted-to-W... where Linux is beating out Windows in async web workloads. That's only gotten better as the years have gone on.

          In my personal experience dealing with .NET 6+ (mostly 8 now), Linux destroys Windows in performance on .NET workloads, hands down, 95% of the time. This lines up with the Microsoft .NET team's advice: every time the topic of "Windows or Linux?" comes up, they say "Linux, it's easier to deal with and performance is better."

        • neonsunset 20 hours ago
          .NET has really good integration between async and networking via its epoll/kqueue-based Socket implementation (SocketAsyncEngine). Given that Linux is the main target for server workloads written in C# or F# nowadays, it has better or more uniform performance more often than not. My experience aligns with the sibling comment by stackskipton.
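
          For a feel of the shape of it, here's a toy echo loop over those Socket APIs (just a sketch; each await parks on epoll/kqueue readiness instead of blocking a thread):

              using System;
              using System.Net;
              using System.Net.Sockets;
              using System.Threading.Tasks;

              var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
              listener.Bind(new IPEndPoint(IPAddress.Any, 9000));  // illustrative port
              listener.Listen(512);

              while (true)
              {
                  Socket client = await listener.AcceptAsync();
                  _ = Task.Run(async () =>
                  {
                      var buffer = new byte[4096];
                      int read;
                      // ReceiveAsync/SendAsync are serviced by SocketAsyncEngine under the hood.
                      while ((read = await client.ReceiveAsync(buffer.AsMemory(), SocketFlags.None)) > 0)
                          await client.SendAsync(buffer.AsMemory(0, read), SocketFlags.None);
                      client.Dispose();
                  });
              }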
          • p_ing 18 hours ago
            epoll and kqueue are readiness-based mechanisms for async; they aren't completion-based non-blocking async I/O in the kernel, the way IOCP is.
            • neonsunset 18 hours ago
              What do you think the kernel is doing? :)

              I'm really not sure why you are arguing in favour of using Windows Server. This is legitimately bad advice.

              The ASP.NET Core team maintains an extensive suite of benchmarks. If you select plaintext or fortunes or json for the Intel Gold Linux and Windows (both Kestrel and IIS) hosts, you will see that pure Kestrel on Linux has the best throughput and latency across the board.

              https://msit.powerbi.com/view?r=eyJrIjoiYTZjMTk3YjEtMzQ3Yi00...

              • bob1029 18 hours ago
                The difference is actually quite shocking to me. I wasn't expecting it to be that extreme.

                I would say that Kestrel is so damn fast it probably doesn't matter much either way. I've only ever used it on Windows and it hasn't disappointed.

      • neonsunset 20 hours ago
        To be fair, ASP.NET Core has pretty nice http.sys integration for IIS. I'm an advocate of running such workloads on Linux whenever possible too, and while I wouldn't say that IIS is that bad, it is definitely worse.

        On the topic, please stop putting Nginx in front of Kestrel just for the sake of it (i.e. when you have a single node, or it's an internal network without a load-balancer, etc.) - you're literally wasting resources on an additional point of failure and more network hops.

        • stackskipton 18 hours ago
          We do it on a single node just because doing HTTPS with Kestrel in a container is painful. Volume-mounting the config file + certificates along with Kestrel's appsettings.json is annoying compared to a proxy server, which makes it much easier.
          • neonsunset 18 hours ago
            You can use a ConfigMap instead of a volume for providing the config (it is also nice since you can merge the application settings with the deployment manifest).

            For certificates, you can retrieve them from whatever vault solution you have on hand, or you can, indeed, access a volume. I do not understand the difficulty with volume mounting, however; it is standard practice.

            Configuring Kestrel programmatically is really straightforward too: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/s...
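
            Something along these lines, for example (the cert path and password key are illustrative; in practice they'd come from a mounted volume or a vault):

                var builder = WebApplication.CreateBuilder(args);

                builder.WebHost.ConfigureKestrel(options =>
                {
                    options.ListenAnyIP(8443, listenOptions =>
                    {
                        // Illustrative: certificate file plus a password pulled from configuration.
                        listenOptions.UseHttps("/certs/site.pfx",
                            builder.Configuration["CertPassword"]);
                    });
                });

                var app = builder.Build();
                app.Run();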

            I'm not saying just using Nginx is a bad choice, but, much like using Redis for things that have no business using Redis, I've seen a fair share of the same misuse with Nginx.

            • stackskipton 17 hours ago
              If you have ConfigMap, you have Kubernetes so Ingress is solved for you. ;)

              > Configuring Kestrel programmatically is really straightforward too

              Sure, but developers tend not to want to futz with it. My use case: our application lives in a container; most of the time we host it, but occasionally a customer demands to host it themselves, so we give them the container. Some host it on Windows (we do build a Windows container of our application :vomit:) and some on Linux. In any case, we require them to provide an HTTPS reverse proxy, because supporting HTTPS with Kestrel directly was becoming too costly. We recommend IIS ARR for Windows users and CaddyServer for Linux, but Linux users tend not to need our advice.

              EDIT: Our hosting platform is Azure Kubernetes Service, 100% Linux Nodes.

              • neonsunset 17 hours ago
                Caddy :(

                Might as well have shipped a NativeAOT-compiled (or single-file trimmed JIT) build of YARP. It's faster than Envoy, which is faster than Caddy. Only Nginx and HAProxy are faster than YARP.
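
                For reference, a bare-bones YARP host is only a few lines (a sketch assuming the Yarp.ReverseProxy package; routes and clusters live in a "ReverseProxy" config section):

                    var builder = WebApplication.CreateBuilder(args);

                    // Config-driven routes/clusters from appsettings.json.
                    builder.Services.AddReverseProxy()
                        .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

                    var app = builder.Build();
                    app.MapReverseProxy();
                    app.Run();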

                (also bumped https://github.com/dotnet/yarp/issues/261)

    • bragh a day ago
      How do you deploy your code with Kestrel, though? If you already have Windows Server licensed, then you get IIS and msdeploy without needing any additional tooling or vendor.
      • bob1029 a day ago
        The self-contained deployment piece helps with that.

        I've built pipelines where the deployed code is responsible for cloning, building & redeploying itself based upon commands sent via an administration interface. The .NET SDK is relatively lightweight and can be included as a management component in a broader B2B/SaaS product.

      • easton a day ago
        Add the executable as a Windows service, stop the service, replace the executable (SMB copy or rsync or whatever), start the service?

        There's probably a cleaner way, but it's just an exe, so replace it like an exe?
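
        (Making the exe service-friendly is one package and one line; a sketch, assuming the Microsoft.Extensions.Hosting.WindowsServices package:)

            var builder = WebApplication.CreateBuilder(args);

            // Lets the same exe run under the Service Control Manager
            // or directly from a console during development.
            builder.Host.UseWindowsService();

            var app = builder.Build();
            app.MapGet("/", () => "Hello from a Windows service");
            app.Run();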

        • p_ing a day ago
          While you can do that, you lose the flexibility of IIS, and regaining it takes a lot of infrastructure development if you want more than a single .NET-only website per server.

          Kestrel is fast, though.

        • bragh a day ago
          Yes, that is obvious, but the problem is that this requires some account to have permissions to start and stop services and to execute commands on the target host. Corporate IT departments are not too happy with that kind of approach nowadays.
      • neonsunset 20 hours ago
        It's better to use the same deployment process you would use for Node.js, Go, Rust, etc. Especially since .NET CLI tooling is excellent and lets you build fairly small, completely self-contained binaries (with trimming). Or you could easily have your hosts install the runtime (which takes little space) and then just copy very small .dll files (you can do runtime-less single-file deployment, so it's going to be literally 1-5 files taking less than 1-5 MB in total).
    • Mossy9 a day ago
      That's the next step - hopefully. Moving to Linux, possibly with containers. Maybe in 5 years...
  • bragh a day ago
    If you are forced to use IIS for hosting anyway, then why not use msdeploy.exe for deployment? I have recently used this guide with great success: https://dennistretyakov.com/setting-up-msdeploy-for-ci-cd-de...

    Can't find the documentation for it now, but in some version of msdeploy they also added a way to automatically take the site offline while the deployment runs, so that the deployment is not blocked by files in use.

    • Mossy9 a day ago
      Thanks for the link, I'll be sure to check it out! Although the biggest pains at the moment are the connections and accounts, anything that smooths out the process is of great interest.

      UPD: https://learn.microsoft.com/en-us/aspnet/web-forms/overview/...

      It does indeed look simpler! I'll have to dig in to find out how exactly the remoting works and what user permissions are needed, but this could be the next step. Thanks a bunch!

      • Kwpolska a day ago
        The 'web-forms' in the link does not look very promising; I wouldn't be surprised if it only supported legacy ASP.NET correctly.
  • egamirorrim a day ago
    It's mind-blowing to me that people still ship software by copying a file to a machine and restarting a service.

    I'm very unfamiliar with IIS hosting though: does it support any kind of containerisation/deployment immutability at all?

    • Mossy9 a day ago
      Author here - isn't it just. I believe there are a lot of organizations whose IT infrastructure was set up a couple of decades ago and then barely maintained due to non-existent resources.

      So while this is absolutely ancient tech and process by most measures, it has been a huge step up from copying files over to machines by hand through RDP connections.

      And also, our apps have no problems having even a few minutes' downtime, so cold restarts are absolutely fine for us.

      While I would absolutely like to get into containerized zero-downtime deployments, from where we're actually standing now, those don't seem to offer much in the way of ROI.

      And if anyone has a more suitable process in mind, please let me know!

      • Probably the only improvement I can think of is running the runner as a gMSA and applying the relevant ACLs to that, so you can avoid needing to supply creds.

        Otherwise, on our fleet of hundreds of IIS servers, we've had success just pointing IIS at a hardlink, deploying the new version to a new folder, and updating the hardlink - from memory this does trigger an app pool restart, but it's super fast and lets you go back and forth very easily.
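
        A rough sketch of that swap in code (shown with a directory symlink, since NTFS hardlinks only apply to files; paths are hypothetical):

            // Deploy the new build side by side, then repoint the link IIS serves from.
            string release = @"D:\apps\myapp\releases\2024-06-01";
            string live    = @"D:\apps\myapp\current";   // the IIS site's physical path

            if (Directory.Exists(live))
                Directory.Delete(live);                  // removes the link itself, not its target

            Directory.CreateSymbolicLink(live, release); // .NET 6+; needs symlink rights on Windows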

    • elvis19 a day ago
      To be honest, it's quite refreshing if your company is full-on Microsoft anyway. It has some limitations, and there are things you will miss compared to a modern container environment, but for the usual in-house/B2B business app in dotnet it's totally fine. With PowerShell DSC you can do declarative setup of Windows Server and IIS to limit the "click ops".
    • emoII a day ago
      Isn't this basically just push-based deployment? If your build agent opens a connection to the server, this is essentially what happens.
    • stackskipton a day ago
      There are Windows Containers; we run a few at work. They are a nightmare and large, but if you want them, they exist. A Windows 2022 container is 5.56GB for a .NET app; the Linux equivalent is 336MB.
    • jiggawatts a day ago
      Arguably, IIS is more advanced than typical Linux hosting services, where the standard for decades has been "stop and restart the service" for configuration changes, causing a temporary outage. I'm sure that's improved over time (has it?), but IIS has had continuous-operation modes since at least the year 2000. Historically my experience with Linux web deployment has also been "copy in a bunch of files", so I don't see how it's any better or worse! (Docker support is available for both as well.)

      Some random IIS capabilities include:

      - You can edit any part of the IIS configuration live, and this won't require a service restart. You can even edit .NET application code and configuration files live, and changes can take effect non-disruptively even if a process restart is required for the virtual app. IIS will buffer requests and overlap the new process with the old one. https://learn.microsoft.com/en-us/previous-versions/iis/6.0-...

      - Web Deploy is basically just a zip file of a web application that can be produced by Visual Studio, vaguely similar to packaging up a site and its config with Docker: https://www.iis.net/downloads/microsoft/web-deploy

      - Visual Studio integrates directly with IIS using "publish" settings that can target an IIS server and be used to deploy a complete web app with a button-press from the IDE: https://learn.microsoft.com/en-us/visualstudio/deployment/tu...

      - The Volume Shadow Copy Service can be utilised by IIS to create backups of its entire configuration, including all files, on a schedule: https://usavps.com/blog/13597/

      - Shared configuration allows scale-out web farms to be created. The key IIS config files and web content are moved to a UNC file share and then the workers just "point" at this. With an external load balancer that supports traffic draining this allows for seamless OS reboots, upgrades, etc... There's even a trick where SMB Caching is used to automatically and transparently create a local disk cache of the shared content, allowing IOPS to scale with web farm server instances without any manual deploy or sync operations. https://learn.microsoft.com/en-us/iis/web-hosting/configurin...

      - The above goes hand-in-hand with Centralized SSL Certificate Support: https://learn.microsoft.com/en-us/iis/get-started/whats-new-...

      If you want to use container technology on Windows for web apps, you can do that too. Windows Server supports Windows Containers, and ASP.NET Core has excellent support with a whole bunch of pre-prepared base container images for both Windows and Linux hosts.

      If you have many such sites on a Windows web host, you would use the IIS Application Request Routing reverse proxy to add HTTPS to back-end containers running on HTTP. That, or just use Azure App Service, or YARP, or...

      Personally, if I had to run a large-scale IIS web farm again, I would keep things simple and just use the Azure DevOps agent with the IISWebAppDeploymentOnMachineGroup task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tas...

      • Xiol32 a day ago
        All major Linux web servers have had zero-downtime config reloading for as long as I can remember, at least 15 years.
        • jiggawatts a day ago
          That just goes to show how dated my Linux experience is!

          The last time I hosted a web app on it was... sucks breath.. wow... before NGINX existed!

      • stackskipton a day ago
        Also, with all this power came a ton of interfaces, poorly defined and at times very hard to automate. Not to mention, they didn't always play nicely with each other.

        In any case, it doesn't matter now; Windows Server is in maintenance mode and Microsoft is clearly done except for cashing licensing checks. Linux is the (web) server future.

        Unix got it right: text config, SIGHUP the processes to pick up any config changes, and away you go. It requires more work from admins, but it's much cleaner in the long run.

    • delusional a day ago
      What do you think Kubernetes does? A container is literally just a tar.gz containing an executable file (among a great number of other files) that is executed with some kernel configuration and then later killed to make room for a new one.
    • chairmansteve a day ago
      OTOH, a child could do it.
  • Mossy9 a day ago
    Author here - very surprised to see this on the front page after posting it a few days ago. Thanks for the resurrect!

    For those wondering how anyone is dealing with such an ancient process, I've written a piece about the history of automation in our org that might shed some light: https://rewiring.bearblog.dev/automation-journey-of-a-legacy...

  • Kwpolska a day ago
    If you’re doing ASP.NET Core, you should be able to get away without restarting the IIS app pool. You can just create an `app_offline.htm` file, wait some time until the process fully shuts down, deploy the new code, and finally remove the .htm file.
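
    The whole dance is easy to script, e.g. (a sketch; the path, the delay, and the copy helper are illustrative):

        // Dropping the marker file makes the ASP.NET Core Module stop routing
        // requests and shut the app down.
        string site = @"C:\inetpub\myapp";
        string marker = Path.Combine(site, "app_offline.htm");
        File.WriteAllText(marker, "<h1>Back soon</h1>");

        // Give the worker process time to exit and release file locks.
        await Task.Delay(TimeSpan.FromSeconds(10));

        CopyNewBuild(site);   // hypothetical helper: robocopy/xcopy equivalent

        // Removing the file brings the site back up on the next request.
        File.Delete(marker);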
    • Mossy9 a day ago
      Interesting, I did not know that. Thanks for the tip! That would require using configuration snapshots, which we have now forgone in favour of a simpler "load configs once on startup" approach (again, a small downtime now and then is no problem for us), but it sounds worth checking out.
      • Kwpolska 17 hours ago
        This app_offline.htm trick does a full restart of the app. You just don’t need to talk to IIS to do it.
  • junto a day ago
    If you’re still forced to deal with IIS and Windows Service deployments, then I’d highly suggest moving to Octopus Deploy for this. It saves so much headache. A Starter edition license is just $360 per year.
    • Uvix a day ago
      If you’re using Octopus Deploy, then Azure DevOps has equivalent functionality for free. The complication in this post comes from avoiding the local agent install; an equivalent Octopus Deploy process without a Tentacle would require similar custom PowerShell.

      (Not to mention Octopus Deploy is prohibitively expensive with large server estates. What used to be $3K per year, or even a reasonable $30K per year, would now be $100K+ per year to renew the license. Which is down from $300K a few years ago, but we’d already decided to move away by that point.)

    • jsbroks a day ago
      You can also check out ctrlplane.dev, which is free and open source.
  • jiggawatts a day ago
    This article is NOT good advice and should be completely disregarded by any serious sysadmins.

    You absolutely should not remote into the web server box from the agent box! This goes entirely against the grain of how modern Azure DevOps pipeline deployments are designed to work... hence the security issue that the hapless blogger is trying to unnecessarily solve.

    The correct approach is to install the DevOps Agent directly onto the IIS web hosts, linking them to a named Environment such as "Production Web App Farm A" or whatever. See: https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...

    In your pipelines, you can now utilise Deployment Jobs linked to that named environment: https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...

    Deployment jobs have all sorts of fancy built-in capabilities such as pre-deployment tasks, rolling and canary strategies, post-deployment tasks, health checks, etc...

    They're designed to dynamically pick up the current "pool" of VMs linked to the environment through the agents, so you don't need to inject machine names via pipeline parameters. Especially when you have many apps on a shared pool of servers, this cuts down on meaningless boilerplate.

    All of the above works even with annoying requirements such as third-party applications where active-passive mode must be used for licensing reasons. (I'm looking at you ESRI and your overpriced software). The trick is to 'tag' the agents during setup, which can then be used later in pipelines to filter "active" versus "passive" nodes.

    • Mossy9 a day ago
      Thanks for the counterpoint - absolutely what I was looking for.

      Running the agent on the app server seemed a bit risky since it will a) drain resources from the apps and b) need to have a route open to Azure (DevOps).

      Apparently you have had good experiences with this? I'd be interested to learn more.

      • kukkamario a day ago
        a) Resource use is minimal when a deployment isn't in progress; the agent just idles, waiting for commands. b) The agent needs to be able to connect to Azure DevOps servers, but the connection goes from the agent to the Azure DevOps servers, so there's no need to open any extra inbound ports or anything like that. The documentation lists the domains that need to be accessible from agents.

        The agent's permissions to Azure are restricted, based on the pipeline configuration, to only the things that are used in the pipeline. So if your pipeline does not involve cloning some private git repo, the agent cannot do that. And even that gives access only to that particular resource. So you normally have a build pipeline that generates a package from your application, and then a deployment pipeline that only has access to that generated package, which is then distributed to agents configured for some particular deployment environment.

        I don't really have much direct experience with the deployment side of things, so someone else can probably provide extra info.

        • Mossy9 a day ago
          Yeah, we probably initially overestimated these issues, causing us to take a longer, more indirect route. Thanks!
      • jiggawatts a day ago
        > drain resources from the apps

        The resource usage is negligible.

        If you really need that level of efficiency, such that you can't spare the tiny amount of memory the agent uses, then you'd be better off looking into something like Packer to build a Server Core image in your pipeline and deploying that to an Azure Virtual Machine Scale Set. This also allows "zero outbound communications" deployments, because the image-build step is out-of-band and isn't done anywhere near the live infrastructure network. https://learn.microsoft.com/en-us/azure/devops/pipelines/tas...

        The above approach also works with any other imaging-based workflow, such as Windows Docker containers running on App Service or Kubernetes. https://azure.github.io/AppService/windows-containers/

        The Packer + VMSS approach, however, allows the IIS workers to join an Active Directory domain "properly". Sure, Kubernetes has some limited support for this, but I wouldn't try it in production unless forced to at gunpoint.

        > needs to have a route open to Azure.

        If you're only deploying, instead of running arbitrary pipeline tasks, then I believe just four firewall rules to Azure DevOps (not "Azure"!) are sufficient, but you may need more depending on your requirements: https://learn.microsoft.com/en-us/azure/devops/pipelines/age...

        > Apparently you have had good experiences with this?

        It's pretty much the "only way" to do things in the current-gen YAML Azure DevOps pipelines.

        The alternatives are Kubernetes, App Service, and the like, but those generally aren't 100% compatible with traditional domain-joined IIS web servers.

        If you actually need the traditional Windows box "infrastructure" instead of a PaaS, then directly installed DevOps Agents would be one of the best ways to manage it.

        The best alternative I know of is probably Nomad, but that's only good if your org is already invested into Hashicorp tooling.

        • Mossy9 a day ago
          Thanks a lot! I'll study these further and discuss them with the IT team.