166 points by kristian1109 | 5 days ago | 38 comments
  • Animats 5 days ago
    > To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio.

    Listening on multiple channels might help at busier airports. Ground, ramp, approach, departure, and enroute are all on different channels. Military aircraft have their own system. (That may have contributed to the DCA accident.)

    Something like this was proposed in the late 1940s. The FAA was trying to figure out how to do air traffic control, and put out a request for proposals. General Railroad Signal responded.[1] They proposed to adapt railroad block signalling to the sky. Their scheme involved people listening to ATC communications and setting information about plane locations into an interlocking machine. The listeners were not the controllers; they just did data entry. The controllers then had a big board with lights showing which blocks of airspace were occupied, and could then give permission to aircraft to enter another block.
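    The block scheme described above maps naturally onto a tiny occupancy model; a sketch (class and method names are invented for illustration):

```python
class AirspaceInterlocking:
    """Toy model of the 1940s proposal: airspace divided into blocks,
    and an aircraft may enter a block only if it is unoccupied."""

    def __init__(self, blocks):
        # Map block name -> occupying aircraft (None if free)
        self.occupancy = {b: None for b in blocks}

    def request_entry(self, aircraft, block):
        """Grant entry iff the block is free; the 'board light' stays
        lit until the aircraft releases the block."""
        if self.occupancy[block] is None:
            self.occupancy[block] = aircraft
            return True
        return False

    def release(self, aircraft, block):
        if self.occupancy[block] == aircraft:
            self.occupancy[block] = None

board = AirspaceInterlocking(["A1", "A2"])
assert board.request_entry("N123SP", "A1")      # block free: cleared in
assert not board.request_entry("N456TW", "A1")  # occupied: hold
board.release("N123SP", "A1")
assert board.request_entry("N456TW", "A1")      # freed: now cleared
```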

    Then came radar for ATC, which was a much better idea.

    [1] https://searchworks.stanford.edu/view/1308783

    • kristian1109 5 days ago
      It's been interesting to see that a product as simple as combining data from multiple frequencies at once has been really compelling to folks. Can't tell you the number of times we've heard "wait, can you compile ground, tower, and approach in one place?"... "... yes, of course."

      Military aircraft are typically equipped with UHF radios (in addition to civilian VHF). Many of the same systems apply, just a different RF band. And we're in the process of adding UHF capabilities to our product as a lot of these military aircraft land at civilian airports for training exercises.

      I can't imagine what would've happened if we adopted block signaling for ATC ...

      • VBprogrammer 4 days ago
        > I can't imagine what would've happened if we adopted block signaling for ATC ...

        You don't have to imagine. We already do in many places. The North Atlantic Tracks are essentially exactly that. Aircraft give position reports and estimates, and those position reports are used to decide whether an aircraft can climb through which levels, etc.

        It's also used extensively in IFR non-radar environments. It's exactly why aircraft have to cancel IFR at uncontrolled airfields in the US or under a procedural ATC service in the UK. You hear it a lot around the Caribbean and Bahamas too.
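        The procedural logic described above can be sketched as a simple level-availability check (a deliberately simplified model; real oceanic separation also involves time and distance criteria):

```python
def climb_approved(track_reports, own_level, target_level, sep_ft=1000):
    """Procedural (non-radar) check: approve a climb only if no other
    reported aircraft on the same track occupies a level within sep_ft
    of any level we'd pass through. track_reports: altitudes in feet."""
    lo, hi = sorted((own_level, target_level))
    for level in track_reports:
        if lo - sep_ft < level < hi + sep_ft:
            return False
    return True

# FL330 -> FL370 with traffic at FL390: clear. With traffic at FL350: hold.
assert climb_approved([39000], own_level=33000, target_level=37000)
assert not climb_approved([35000], own_level=33000, target_level=37000)
```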

  • dharmab 5 days ago
    I'm the developer of a speech-to-speech tool for tactical radar control for a combat flight simulator (https://github.com/dharmab/skyeye). My users have often asked to expand it to ATC as well, usually under the impression that it could be done trivially with ChatGPT. I love that I can now link to your post to explain how difficult this problem is! :)
    • mh- 5 days ago
      Being unfamiliar with DCS' architecture, I expected this repo to be in Lua or something. I was surprised to find a very polished, neatly structured, well-documented Go service, haha. Very cool!
    • kristian1109 5 days ago
      Post product market fit :D
  • ryandrake 5 days ago
    > The latest system transcribes the VHF control audio at about ~1.1% WER (Word Error Rate), down from a previous record of ~9%.

    I'd be curious about what happens when the ASR fails. This is not the place to guess or AI-hallucinate. As a pilot, I can always ask "Say Again" over the radio if I didn't understand. ASR can't do that. Also, it would be pretty annoying if my readback was correct, but the system misunderstood either the ATC clearance or my readback and said NO.
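    For reference, the WER figure quoted above is word-level edit distance divided by the number of reference words; a minimal implementation:

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + insertions + deletions) /
    number of reference words, via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

assert wer("cleared to land runway two seven",
           "cleared to land runway two seven") == 0.0
# One substituted word out of four: 25% WER, and a dangerous one.
assert wer("climb maintain five thousand",
           "climb maintain niner thousand") == 0.25
```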

    • kristian1109 5 days ago
      Good and fair questions.

      In the very short term, we're deploying this tech more in a post-operation/training role. Imagine being a student pilot, getting in from your solo cross country, and pulling up the debrief with all your comms laid out and transcribed. In this setting, it's helpful for the student to have immediate feedback such as "your readback here missed this detail...", etc. Controllers also have phraseology and QA reviews every 30 days where this is helpful. This will make human pilots and controllers better.

      Next, we'll step up to active advisory (mapping to low assurance levels in the certification requirements). There's always a human in the loop that can respond to rare errors and override the system with their own judgement. We're designing with observability as a first-class consideration.

      Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

      Of course, that's a long ways from now. And there will always be a human in the loop to make a final judgement as needed.

      • btown 5 days ago
        One of the challenges I imagine you'll face as you move towards active advisory is that the more an alerting tool is relied upon, the more an absence of a flag from it is considered a positive signal that things are fine. "I didn't hear from Enhanced Radar, so we don't need to worry about ___" is a situation where a hallucinated silence of the alerting tool could contribute to danger, even if it's billed as an "extra" safety net.

        I imagine that aviation regulatory bodies have high standards for this - a tool being fully additive to existing tools does not necessarily mean that it's cleared for use in a cockpit or in an ATC tower, right? Do you have thoughts about how you'll approach this? Also curious from a broader perspective - how do you sell any alerting tool into a niche that's highly conscious of distractions, and of not just false positive alerts but false negatives as well?

        • kristian1109 5 days ago
          Yes, fair points. In talking to controllers, this has already come up. There are a few systems that do advisory alerting, and controllers have expressed some frustration because each alert triggers a bunch of paperwork and the alerts are not 100% relevant.

          There are lots of small steps on this ladder.

          The first is post-operational. You trigger an alert async and someone reviews it after the fact. Tools like this help bring awareness to hot spots or patterns of error that can be applied later in real time by the human controller.

          A step up from that is real-time alerting, but not to the main station controller. There's always a manager in the tower who's looking over everyone's shoulder and triaging anything that comes up. That person is not as focused on any single area as the main controllers. There's precedent for tools surfacing alerts to the manager, who then decides whether it's worth stepping in. This will probably be where our product sits for a while.

          The bar to get in front of an active station controller is extremely high. But it's also not necessary for a safety net product like this to be helpful in real time.

      • ryandrake 5 days ago
        Thanks for that. It must be exciting to be applying software skills to aviation. Life goals!

        To me, speech to text and back seems like an incremental solution, but the holy grail would be the ability to symbolically encode the meaning of the words and translate to and from that meaning. People's phraseology varies wildly (even though it often shouldn't). For example, if I'm requesting VFR flight following, I can do it many different ways, and give the information ATC needs in any order. A system that can convert my words to "NorCal Approach Skyhawk one two three sierra papa is a Cessna one seventy two slant golf, ten north-east of Stockton, four thousand three hundred climbing six thousand five hundred requesting flight following to Palo Alto at six thousand five hundred," is nice, but wouldn't it be amazing if it could translate that audio into structured data:

            {
              "atc": "NORCAL",
              "requester": "N123SP",
              "request": "VFR",
              "type": "CESSNA_172",
              "equipment": ["G"],
              "location": "<approx. lat/lon>",
              "altitude": 4300,
              "cruise_altitude": 6500,
              "destination": "KPAO"
            }
        
        ...for ingestion into potentially other digital-only analysis systems. You could structure all sorts of routine and non-routine requests like this, and check them for completeness, use it for training later, and so on. Maybe one day, display it in real time on ATC's terminal and in the pilot's EFIS. With structured data, you could associate people's spoken tail numbers with info broadcast over ADS-B and match them up in real time, too. I don't know, maybe this already exists and I just re-invented something that's already 20 years old, no idea. IMO there's lots of innovation possible bringing VHF transmissions into the digital world!
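        A rough sketch of this structured-extraction idea (the regexes and field names are purely illustrative and assume digits rather than spoken numbers; a real system would use a trained model, as discussed elsewhere in the thread):

```python
import re

def parse_flight_following(transcript):
    """Very rough sketch of turning a VFR flight-following call into
    structured fields. Real phraseology varies far too much for regex;
    this only illustrates the shape of the output."""
    out = {}
    m = re.search(r"(\w+) approach", transcript, re.I)
    if m:
        out["atc"] = m.group(1).upper()
    m = re.search(r"requesting flight following to (.+?) at", transcript, re.I)
    if m:
        out["request"] = "VFR"
        out["destination"] = m.group(1)
    m = re.search(r"at ([\d,]+) feet", transcript, re.I)
    if m:
        out["cruise_altitude"] = int(m.group(1).replace(",", ""))
    return out

parsed = parse_flight_following(
    "NorCal approach, Skyhawk one two three sierra papa, "
    "requesting flight following to Palo Alto at 6,500 feet")
assert parsed == {"atc": "NORCAL", "request": "VFR",
                  "destination": "Palo Alto", "cruise_altitude": 6500}
```
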
        • kristian1109 5 days ago
          Who gave you our event schema!? ;)

          Kidding aside, yes, you're exactly right. We're already doing this to a large degree and getting better. Lots of our own data labeling and model training to make this good.

          • ryandrake 5 days ago
            Best of luck to you. Finally a Launch HN that's important, potentially life-saving work.
      • threeseed 5 days ago
        > Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

        This is effectively AGI.

        And I've not seen anyone reputable suggest that our current LLM track will get us to that point. In fact, there is no clear path to AGI. It requires another breakthrough in pure research, in an environment where money is being pulled out of universities.

        • fartfeatures 5 days ago
          It isn't AGI, it is domain specific intelligence.
        • kristian1109 5 days ago
          AGI is a moving target, but agreed, lots more research to be done.
    • Mikhail_K 4 days ago
      > I'd be curious about what happens when the ASR fails.

      When, not if. The "artificial intelligence" as it is presently understood is statistical in nature. To rely on it for air traffic control seems quite irresponsible.

    • ibejoeb 5 days ago
      I think it would be handy to have it as a check. If I get an alert about a potentially incorrect readback, then I can call back for clarification.
  • raphting 4 days ago
    I worked a few years for German air traffic control and I own a PPL.

    From a non-commercial viewpoint, I like to see when people get enthusiastic to make airspace and flying safer. From a commercial perspective, I agree with others writing here that going into a highly regulated market such as air traffic, is very hard, and I can tell you why I think so.

    For example, German air traffic control (DFS) publishes tools which are not directly meant for ATCOs (https://stanlytrack3.dfs.de/st3/STANLY_Track3.html), so they are already covering part of this market. Then there are companies already specialised in tapping into the open data of the skies (https://www.skysquitter.com/en/home-2/), or check https://droniq.de, which is specialised in integrating drones into airspace. They are all either governmental, or subsidiaries, or not directly involved in air traffic control itself.

    I once built a 3D airspace app which I thought could become a commercial product, but I found it is too hard to compete with companies like DFS or Boeing (ForeFlight) and others. (I published the app for free to play around with: https://raphting.dev/confident/)

    That said, I have thought a lot about commercialisation of airspace products, and my conclusion is that most countries have good reasons to leave air traffic control governmentally owned and to continue gatekeeping new entries. These gates are very well protected, if only by the high fees you need to pay just to gain access to data (like when I purchased airspace data from Eurocontrol for the 3D app).

    Focusing on training or "post-ops", what I think you plan to do, is probably the more viable direction.

  • RockyMcNuts 5 days ago
    Maybe the future is structured electronic messaging with the humans in the loop.

    Like, check in with the controller but most messages are sent electronically and acknowledged manually.

    "I have your clearance, advise when ready to copy", then you write everything down on a kneeboard with a pencil and then manually put it into the navigation system, is a little archaic.

    Certainly speech to text is a useful transition, but in the long run the controller could click on an aircraft and issue the next clearance with a keyboard shortcut. Then the pilot would get a visual and auditory alert in the cockpit and click to acknowledge.

    I would hope someone at NASA or DARPA or somewhere is working on it. And then of course the system can detect conflicts, an aircraft not following the clearance etc.
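    The send/acknowledge flow described above can be sketched as follows (message fields, timeout, and escalation rule are all hypothetical, not the CPDLC wire format):

```python
import time
import uuid

class ClearanceLink:
    """Toy model of digital clearance delivery: the controller issues a
    structured clearance, the pilot must explicitly acknowledge, and
    anything unacknowledged past a deadline is escalated to voice."""

    def __init__(self, ack_deadline_s=15.0):
        self.ack_deadline_s = ack_deadline_s
        self.pending = {}  # msg_id -> (callsign, clearance, sent_at)

    def send(self, callsign, clearance, now=None):
        if now is None:
            now = time.time()
        msg_id = str(uuid.uuid4())
        self.pending[msg_id] = (callsign, clearance, now)
        return msg_id

    def acknowledge(self, msg_id):
        self.pending.pop(msg_id, None)

    def overdue(self, now=None):
        """Callsigns the controller should chase by voice."""
        if now is None:
            now = time.time()
        return [cs for cs, _, sent in self.pending.values()
                if now - sent > self.ack_deadline_s]

link = ClearanceLink(ack_deadline_s=15.0)
m1 = link.send("N123SP", {"type": "climb", "altitude": 5000}, now=0.0)
m2 = link.send("N456TW", {"type": "heading", "degrees": 270}, now=0.0)
link.acknowledge(m1)
assert link.overdue(now=20.0) == ["N456TW"]  # never acknowledged
```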

    • kristian1109 5 days ago
      The problem with datalink systems is they are poor substitutes for immediate control & confirmation. My co-founder Eric wrote a short piece about this: https://www.ericbutton.co/p/speech. This is why they are mainly relegated to low-urgency en-route & clearance delivery.
      • someguydave 4 days ago
        He's right about the bandwidth and latency of voice, but the problem is that you can't immediately know who should react to instructions. "GO AROUND IMMEDIATE!" - now all the pilots on frequency are wondering who the addressee is.

        Also, AM voice on VHF is not full duplex, and the blocking problem is very real; it could potentially be addressed.

      • RockyMcNuts 4 days ago
        interesting! have PP but haven't flown really last couple of decades.

        I feel like, with proper UX in the cockpit and on the controller console, making it easy to send/acknowledge the clearance, and intrusively demanding immediate acknowledgment for important messages, with the controller able to talk to the pilot if it isn't immediately acknowledged, structured messages would save time, be more accurate, allow automated checks, i.e. be a superior substitute.

        UX needs a ton of work and human factors validation, and would take 20 years to implement. But if you were starting from a blank slate it seems like the way to go!

    • EMM_386 5 days ago
      > Maybe the future is structured electronic messaging with the humans in the loop.

      There already is: Controller Pilot Data Link Communications (CPDLC).

      Get an instruction, press to confirm.

      At the moment, this is only used for certain types of things (clearances, frequency changes, speed assignments, etc.) along with voice ATC.

      https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

    • imadethis 5 days ago
      Look up CPDLC - https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

      This is how big operations handle clearances today, complete with integration into the FMS. The pilot simply reviews the clearance and accepts it.

    • noahnoahnoah 5 days ago
      This already exists and is used in much of the US and extensively in Europe for airlines. Look up Controller Pilot Data Link Communications (CPDLC).
    • fallingmeat 4 days ago
      also requires fairly expensive equipment (FMS with FANS support)
  • RachelF 5 days ago
    I have two questions:

    1. Good overview of the technologies you are using, but what product are you planning on building or have built? I understand what you are doing and it's "extra safety over existing systems" but how does it work for the end user? Is the end user an ATC or a pilot?

    2. You will find that introducing new systems into this very conservative field is hard. I've built avionics and ATC electronics. The problem isn't normally technology, it's the paperwork. How do you plan on handling this?

    • kristian1109 5 days ago
      1. Our first product is post-op review at airports. We're selling that to airport managers who use our system for training and incident review. Today, when a ground ops vehicle (for example) makes a mistake, the airport manager has to note the incident, call the tower, wait a week for them to burn a CD of the audio, scrub through to find the relevant comms, go to a separate source to pull the ADS-B track (if available), fuse all that together, and review with the offending employee. Our product just delivers all that data at their fingertips. For training, we also flag clips where the phraseology isn't quite right, etc. Obviously this isn't the long term product, but it gets us to revenue quickly and side-steps regulation for now.
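      The data-fusion step described above amounts to aligning two time-stamped streams; a sketch (the data shapes are illustrative, not Enhanced Radar's actual schema):

```python
from bisect import bisect_left

def attach_positions(transcript_events, adsb_track):
    """Pair each transcribed radio call with the nearest ADS-B point in
    time, so a reviewer can jump from a clip straight to where the
    aircraft (or ground vehicle) was. Both inputs sorted by timestamp
    in seconds."""
    times = [p["t"] for p in adsb_track]
    fused = []
    for ev in transcript_events:
        i = bisect_left(times, ev["t"])
        # nearest of the two neighboring track points
        candidates = [j for j in (i - 1, i) if 0 <= j < len(adsb_track)]
        j = min(candidates, key=lambda j: abs(times[j] - ev["t"]))
        fused.append({**ev, "position": adsb_track[j]})
    return fused

track = [{"t": 0, "lat": 37.51, "lon": -122.25},
         {"t": 10, "lat": 37.52, "lon": -122.24}]
events = [{"t": 9, "text": "cleared to land runway three zero"}]
assert attach_positions(events, track)[0]["position"]["t"] == 10
```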

      2. Agree

      [edit] (oops, sorry, seeing your edit)

      2. The regulation allows for escalating assurance levels. We'll start with low assurance (advisory) and climb that ladder. We're definitely not naive about it; this will be hard and annoying. But it's inconceivable that someone won't do this in the next 10 years. Too important.

      • RachelF 4 days ago
        Thank you for the detailed reply. Your first product sounds like something that is needed. I wish your startup very good luck and will be watching your progress.

        Do ground vehicles also have GPS trackers with a radio transmitter, or do they just use normal ADS-B?

  • vednig 5 days ago
    I've watched a lot of aircraft investigation stories with incidents like this happening. Sadly, there are rarely people interested in the intersection of both technologies who can find a properly functioning solution, so I think this is pretty interesting stuff you guys are doing. If you'd been working on re-inventing the wheel with new software to automate flight trajectory management, I'd not be as amazed. I think you guys have really taken the time to understand the problem and worked on a potential solution that could have a major impact.

    > This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.

    The part where you explain that it is integrable into the existing chain of command at airports is proof enough.

    Wishing you all the best for your venture.

    • kristian1109 5 days ago
      Thank you — yes, it's super important to us to find the wedge where we can actually ship quickly. Nothing beats feedback from real products in the real world. As much experience as we have being pilots, there's so much to learn on the control side.
  • lxe 5 days ago
    I was thinking along the same exact lines:

    Why do we still rely on analog narrowband AM voice over VHF to do vital comms like air traffic control? Same way as we did in the 1940s!

    We should be transcribing the messages and relaying them digitally with signatures and receipt acknowledgement.

    • DF1PAW 5 days ago
      AM modulation is perfectly justified in this context: if two (or more) stations accidentally transmit at the same time, this will be noticed. Using FM, only the stronger signal wins and the other signal remains undetected. The advantage over digital transmission is the lack of coding overhead - the voice reaches the receiver without any time delay.
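      The doubling behavior described above can be simulated: two AM carriers summed in the receiver produce an envelope beating at the difference frequency, which is the audible heterodyne that tells everyone two stations transmitted at once (idealized sketch: equal amplitudes, unmodulated carriers, perfect envelope detector):

```python
import math

def mixed_am_envelope(f1_hz, f2_hz, duration_s=0.01, rate=48000):
    """Envelope of two simultaneous AM carriers as seen by an ideal
    envelope detector; it beats at |f1 - f2|, the 'heterodyne squeal'
    that reveals two stations doubled on frequency."""
    n = int(duration_s * rate)
    env = []
    for k in range(n):
        t = k / rate
        s = (math.cos(2 * math.pi * f1_hz * t)
             + math.cos(2 * math.pi * f2_hz * t))
        env.append(abs(s))
    return env

# Carriers 1 kHz apart: envelope swings between ~2 (in phase) and ~0 (nulls)
env = mixed_am_envelope(10_000, 11_000)
assert max(env) > 1.9 and min(env) < 0.1
```

With FM's capture effect, by contrast, the weaker station would simply vanish, which is exactly the failure mode the comment above warns about.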
      • jlewallen 5 days ago
        This is true, for anybody curious about this: https://en.wikipedia.org/wiki/Capture_effect
      • lxe 5 days ago
        The justification still holds, but better tech with the same benefits exists nowadays.

        As far as digital decoding delay is concerned, this is a negligible number if implemented correctly.

    • jimnotgym 5 days ago
      Isn't it because AM audio is still understandable under very suboptimal conditions where digital might not get through? Digital narrowband data modes tend to pass very small amounts of data
      • lxe 5 days ago
        Quite the opposite. For short messages, digital modes can employ layers of redundancy, automatic carrier recovery, and error correction at every layer, all while yielding lower power requirements and longer range.
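        The redundancy claim can be illustrated with the simplest possible forward error correction, a repetition code (real digital voice modes use far stronger codes such as convolutional or LDPC; this only shows the principle):

```python
def encode_repetition(bits, n=3):
    """Simplest possible FEC: repeat each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(coded, n=3):
    """Majority vote per group recovers each bit despite up to
    (n-1)//2 flipped copies: redundancy buys robustness at low SNR."""
    out = []
    for i in range(0, len(coded), n):
        group = coded[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

msg = [1, 0, 1, 1]
coded = encode_repetition(msg)
coded[1] ^= 1  # channel flips one copy of the first bit
coded[9] ^= 1  # ...and one copy of the last
assert decode_repetition(coded) == msg  # both errors corrected
```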
        • EricButton 5 days ago
          FAA actually tried moving to digital voice (has benefits wrt airband congestion) but it didn't go anywhere. I believe a lot came down to the minimal benefit over current solutions, plus the coordination and safety implications of actually making the switch. Tough for an FAA official to pull the trigger on a rollout that has even 0.1% chance of an aircraft crashing.
      • dehrmann 5 days ago
        A lot of ATC seems to use lowest-common-denominator tech so that you can fly a Cessna into JFK.
        • kristian1109 4 days ago
          But only after 1am when you're not fighting a 15 knot headwind with an A320 cleared number two.
  • qdotme 5 days ago
    I'm loving it. As a private pilot working (slowly) towards my CFI rating - this is also such an opportunity to integrate it into training devices.

    The bulk of instrument flight training is "mind games" anyway - you see nothing other than the instruments, and your "seat of the pants" is likely to cheat you.

    Possibly going a step further: the state of teaching aids available to CFIs is pretty sad, with online quizzes and pre-recorded videos being the pinnacle of what I have experienced... this would be an awesome opportunity to try to build an "automatic CFI" - not counting for AATD time under current rules, but better than chair-flying (the process of imagining a flight and one's reactions).

    • zeroc8 3 days ago
      The teaching tools available to CFIs nowadays are fantastic. Get yourself X-Plane 12, a Honeycomb yoke, throttles and rudders, and a decent aircraft like the Challenger 650 and you are good to go. All of this can be had for around 1000 bucks and is the best investment anyone can make to stay current. I used to be a CFII a long time ago (1995), when sims were expensive and only the best flight schools had them. Back then, instrument training was flying long and boring hours under the hood and, if you were lucky, one or two hours of actual IFR. Most people were afraid to fly real IFR after the rating, since they were aware that the training was so bad, myself included. A couple of hours flying a PC sim fixed this problem once and for all and made a huge difference in real-world flying.
    • kristian1109 5 days ago
      I have a bit of an ax to grind with the state of pilot training tools. There's some interesting new work being done here, but agreed lots to do in this realm. What does a compelling "automatic CFI" look like to you?
    • subhro 5 days ago
      > Bulk of the instrument flight training is "mindgames" anyways - you see nothing other than instruments, your "seat in the pants" is likely to cheat you..

      Eh, I guess I can flex a little. Living in the Pacific North West, I do not have to play mind games. I can almost get IMC delivered on demand. :P

  • someguydave 4 days ago
    - I think the ultimate safety improvement would be to move the human out of the realtime control loops and getting him focused more on the big picture.

    - There is opportunity for permissionless passive RF sensors like this startup shows. Imagine the pilots in the CRJ had received an immediate notification that an intercepting aircraft was getting blocked with (transmitted over) ATC comms on their UHF frequency. I think this could be done without decoding the voices.

    - Passive radar combined with direction-finding of the VHF/UHF voice transmissions could also be integrated as another source of high-resolution tracking data

  • tjlahr 5 days ago
    I was thinking about this the other day. To me, the future is decreasing the amount of coordination that happens verbally over VHF.

    Ignoring takeoff clearances for a moment, my limited understanding is that most traffic in and out of an airport follows a prescribed pattern: You take off, turn to some particular bearing, climb to some particular altitude, contact center on some particular frequency... etc. Listening to VASAviation, it seems like this accounts for > 80% of pilot-controller communication.

    It's strange to me that, given the amount of automation in a modern airliner, these instructions aren't transmitted digitally directly to the autopilot. Instead of the controller verbally telling the pilot where to go, it seems feasible that the controller (or some coordinating software) could just digitally tell the plane where to go.

    I feel like that's how you dramatically decrease workload on both ends, and then maybe there's more bandwidth to focus on those takeoff clearances (and eventually automate those as well?).

    So many other aspects of flight safety have been handed over to the computer to solve, it's curious to me why a critical function like air traffic control still happens verbally over VHF.

    • hugh-avherald 5 days ago
      There are STAR and SID procedures that are 90% of what you're proposing. A pilot at the top of descent is told "Descend via the STAR", which takes them to final approach.

      As a pilot, I am surprised by how important audio communication is for retention and awareness. Given that my visual senses are (nearly) overwhelmed with information, I think there is a risk that moving ATC from audio to visual would simply saturate the "visual channel" of pilots.

      In terms of automating coordination, it's obviously possible but it would take decades to prove its relative safety. (Aviation is extremely safe.) The system would be very fragile, unless you had 24/7 fully staffed backup human ATC, which rather defeats the purpose. Practically speaking too, planes take a long time to build, and the current system allows planes built 80 years ago to fly alongside brand new ones. The cost of abolishing the 'legacy fleet' (i.e. all current passenger aircraft) is pretty high!

    • kristian1109 4 days ago
      Speaking as a pilot for a moment, I think your instincts are correct in theory but hard to actually implement.

      In a critical function like control, you don't want to split a pilot's attention. You wouldn't want them to sometimes be monitoring a datalink system, but then also sometimes be listening to the radio for deviations. Even if it's less efficient 70% of the time, you reduce cognitive load by training a pilot to ALWAYS go to the radio for clearance and command.

      Of course, there are edge cases these days where pilots use datalink for some clearance delivery before taxi and enroute, but you can see how these phases of flight (before you push back and after the auto pilot is on) are selected for very low competing load. In a live terminal environment, you want a pilot focused in one place for instructions.

      Furthermore, you're correct that most pilot-controller communication falls largely within a tight set of procedures, but deviations are common enough (weather, traffic, emergencies, hold patterns, taxi routes, etc.) that you find yourself on the radio regularly to sort it out.

      Last thing: pilots are allowed to say "unable" if they deem an instruction unsafe. I've personally had to do that many times (most common case for me is trying to comply with a vector instruction under VFR with a cloud in my way). VFR may seem like an edge case that commercial planes don't deal with, but again that's not always true in a terminal environment. Plenty of these planes fly visual approaches all the time. And if ATC is talking directly to the computer and not through the pilot, you lose the opportunity for the pilot to quickly and clearly decline an instruction.

    • jcrites 5 days ago
      I think this is ultimately true, in a sense, but the challenge is correctly handling all of the edge-cases. It's a challenging problem tantamount to the self-driving car problem.

      It happens by humans over VHF because a lot of unpredictable things happen in busy airspace, and it would require a massive investment for machines to automate all of it.

      I'm also not sure that people would accept the safety risk of airplanes' autopilots being given automated instructions by ATC over the air. There's a large potential vulnerability and safety risk there. I think there's some potential for automation to replace the role of ATC currently, but I suspect it would still be by transmitting instructions to human pilots, not directly to the autopilot.

      Lastly, for such a system to ever be bootstrapped, it would still need to handle all of the planes that didn't have this automation yet; it would still need to support communicating with pilots verbally over VHF. An entirely AI ATC system, that autonomously listens to and responds by voice over VHF seems like a plausible first step though.

    • stevage 5 days ago
      >Instead of the controller verbally telling the pilot where to go, it seems feasible that the controller (or some coordinating software) could just digitally tell the plane where to go.

      An intermediate step would at least be transmitting those instructions digitally and showing it on a map that the pilot can follow. There have been a number of incidents where pilots misunderstood where they were, and incorrectly followed instructions.

      • kristian1109 4 days ago
        A lot of glass panels these days will do this in the PFD. You get your clearance via datalink from ATC, it loads everything right up, and you just keep the plane in the box or turn the autopilot on once you're in the air.

        Of course, this still keeps the pilot in the loop and ideally they will notice if something seems weird.

  • nitin_j11 5 days ago
    Your system fuses ATC speech recognition, NLP, and ADS-B signals to detect and mitigate human error in air traffic control. Given the rapid advancements in multimodal AI, have you explored integrating visual data sources (e.g., satellite imagery, radar feeds, or airport surveillance cameras) to further improve situational awareness and error detection? What challenges do you foresee in making Yeager more contextually aware using additional modalities?
    • kristian1109 5 days ago
      Yes, this is an excellent prompt and we're working on it. One problem is a lot of these visual sources require permission, integration, and regulation. That's going to move slower than something we can proceed directly with (VHF antennas).

      I believe scaling laws will hold as we start to feed all of this context data into an integrated model. You could imagine a deep-q style reinforcement learning model that ingests layers of structured and visual data and outputs alerts and eventually commands. The main challenge I foresee here will be observability... it's easy enough to shove a ton of data into a black box and get a good answer 98% of the time. But regulation is likely to require such a system to be highly observable/explainable so the human can keep up with what's going on and step in as needed.

      Looking further into the future, it's plausible the concrete structures of today with humans looking out windows will be replaced with sensor packages atop a long flagpole that stream high-res optical/ir camera data, surface radar, weather information, etc into a control room with VR layers that help controllers stay on top of busier and busier airspace.

  • mclau156 5 days ago
    Why the focus on speech-to-text and not purely on radar and predicting 3D movement, or at least closing off in 3D space a runway that is occupied?
    • kristian1109 5 days ago
      Great question - the fundamental entry point of the air traffic system is this VHF control audio. Everything is downstream of that. Trajectory planning misses the intention signal on the frontend.

      The example in the original post is actually a good case study for why trajectory planning alone breaks down. By the time the aircraft are on a predictable collision course with each other, you've lost 10+ seconds of potential remediation time that you would've had if you detected the error when it was spoken. Those 10 seconds really matter in our airspace.
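      To put rough numbers on it (everything below is hypothetical, just to show the scale of the time budget):

```python
# All numbers hypothetical -- a departing jet rolling toward an intersection
# that a crossing aircraft has just been cleared through.
def seconds_to_point(distance_m: float, speed_mps: float) -> float:
    """Time for a target at constant speed to reach a point."""
    return distance_m / speed_mps

# Say the departure averages ~75 m/s over a ~1,500 m roll to the intersection.
time_to_intersection = seconds_to_point(1500, 75)  # 20.0 s

# A trajectory-only system can't flag anything until both targets are moving
# toward the same point; call that 10 s into the roll.
trajectory_budget = time_to_intersection - 10  # 10.0 s left to react

# Catching the conflicting clearance as it's spoken, before anyone moves,
# preserves the full window.
speech_budget = time_to_intersection  # 20.0 s
```

      Those extra seconds are the difference between a calm "cancel takeoff clearance" and controllers yelling at both aircraft to stop.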

    • TylerE5 days ago
      One major issue is that radar doesn’t work very well at low altitudes. The lower you aim the beam, the more ground clutter you pick up (trees, buildings, clouds, etc.).
      • kristian11095 days ago
        Yes, this is a challenging issue in ATC. Also, some safety systems are turned off or diminished at critical phases of flight (near the ground) because of the noise problem (TCAS, for example).

        ADS-B helps with this as it's self-reporting. And systems like ASDE-X are useful to track objects once they hit the ground. But low-altitude deconfliction is a big problem.

        • TylerE5 days ago
          My understanding is that TCAS is disabled not due to radar limitations, but because it isn't altitude-aware and will happily generate an avoidance command that results in CFIT.
          • timoth5 days ago
            It's been a long time since I worked in flight simulation (full flight simulators and simpler pilot training devices, including simulating TCAS), but I believe at that time TCAS would be switched to a mode in which it only issues "Traffic" advisories instead of avoidance instructions precisely _when_ entering busier airspace -- e.g. airport proximity. In that environment it was undesirable for TCAS to be giving instructions. That seems like the environment in which Enhanced Radar's (future) product(s) could be of most interest.

            (By the way, I believe EGPWS would take priority over TCAS anyway.)

          • kristian11095 days ago
            The systems I've flown have definitely been altitude aware. They won't alert if the plane is sufficiently deconflicted above or below.

            One problem for sure is that when you're close to the ground you have to be careful about buildings, cell towers, etc. Terrain is one thing, but when you're a few hundred feet AGL, you could quickly be in the way of tall structures if a TCAS alert goes off.

  • ammar25 days ago
    Any plans on open-sourcing your ATC speech models? I've long wanted a system to take ATIS broadcasts and transcribe them into a sort of advisory D-ATIS, since that system is only available at big commercial airports. (And, according to my very busy local tower, it's apparently nearly impossible to get the FAA to give it to you.)

    Existing models I've tried just do a really terrible job at it.

    • kristian11095 days ago
      I've thought about the same thing; transparently, we were trying to get a reliable source of ATIS to inject into our model context and had the same issue with D-ATIS. What airport are you at? Maybe we whip up a little ATIS page as a tool for GA folks.
      • ammar25 days ago
        That would be awesome! My airport is KPDK (sadly it doesn't have a good liveatc stream for its ATIS frequency).

        I did collect a bunch of ATIS recordings and hand-transcribed ground-truth data for it a while ago. I can put it up if that might be handy for y'all.

        • kristian11095 days ago
          If you're willing, that'd be great. I think our model will do well out of the box, but more data is more better as they say.

          I spent a lot of time out at PDK when I worked briefly in aircraft sales. Nice airport!

          Let me work on this and come back! I think we can ship you an API for ATIS there...
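          Once there's a reliable transcript, the structured-parsing side is the easy part. A rough sketch of what I mean (the phrasing, field names, and regexes here are invented for illustration, not what we run in production):

```python
import re

def parse_atis(transcript: str) -> dict:
    """Pull a few structured fields out of a hypothetical ATIS transcript.
    Sketch only -- real ATIS phrasing varies a lot by facility."""
    fields = {}
    m = re.search(r"information (\w+)", transcript, re.I)
    if m:
        fields["information"] = m.group(1).upper()
    m = re.search(r"wind (\d{3}) at (\d+)", transcript, re.I)
    if m:
        fields["wind_dir"], fields["wind_kt"] = int(m.group(1)), int(m.group(2))
    m = re.search(r"altimeter (\d{4})", transcript, re.I)
    if m:
        fields["altimeter"] = int(m.group(1)) / 100  # e.g. 2992 -> 29.92
    return fields

sample = ("Peachtree DeKalb information Bravo, wind 210 at 8, "
          "visibility 10, altimeter 2992")
print(parse_atis(sample))
```

          The hard part, as you've seen, is getting a transcript accurate enough that a parser like this has anything trustworthy to chew on.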

  • flight-plan-fly4 days ago
    Neat! I have been thinking of trying to build something like this as a side project, but knew it would be difficult to get good results without having a team of aviation folks labeling data.

    Do you think your model's word error rate would decrease if it had access to raw feeds from towers or was trained on data from even more airports?

    Do you have any plans to make the data searchable/discoverable in real time? Something like being able to search for any audio transmissions regarding a specific flight by number, etc., available on ADS-B exchange, Flightradar, FlightAware, etc., both in text and audio form.

    • kristian11094 days ago
      > Do you think your model's word error rate would decrease if it had access to raw feeds from towers or was trained on data from even more airports?

      Yes, especially for local things like navaids, cities, etc. Our expectation is that we'll need to fine-tune for different deployments.

      > Do you have any plans to make the data searchable/discoverable in real time? Something like being able to search for any audio transmissions regarding a specific flight by number, etc., available on ADS-B exchange, Flightradar, FlightAware, etc., both in text and audio form.

      Definitely! This actually already exists and we're selling it to airports today.

  • superchris5 days ago
    This sounds awesome. As a GA pilot and software developer, I can tell you that voice comms and the need to work with them will be around for a long time. Just look at all the feet dragging just to get ADSB adopted. I'd love to find out more and possibly get involved. Are you looking for help?
    • kristian11095 days ago
      Appreciate it; we're not quite hiring, but I'll come back to this when we start.
  • YZF3 days ago
    I like the idea. I used to be a military air traffic controller (for about 3 years) and have seen a few cases of issues between ground traffic, either vehicles, or helicopters or other airplanes and airplanes landing or taking off. We had a somewhat imperfect system of tracking whether runways were free or in use.

    Relying solely on speech recognition seems a little tricky though. Either way you'll probably want to involve someone from the ATC side sooner rather than later.

  • abhinuvpitale5 days ago
    Pretty cool use of NLP. I wonder if training schools can use this to train their students to improve their readback detection. IMO, one of the hardest things as a pilot in training in the first few months is getting feedback on my communication with the tower.
    • kristian11095 days ago
      Exactly, this is one of our early beachheads. If we can train better pilots, the airspace system gets safer immediately.
  • cdfuller5 days ago
    This is fascinating! Are you still looking for labelers? I'm a SWE with about 30 hours of flight time towards a PPL, lots of hours watching VAS Aviation, and have worked on ingesting ADS-B data. So I may be qualified for that type of work.
    • kristian11094 days ago
      Sure, feel free to email founders @ ...

      One of the best parts of this job is how awesome the labeling community is. So many super qualified folks like yourself. Thank you!

  • hiroprot5 days ago
    I love the concept, and it's super timely of course (or about time, depending on your perspective).

    I was trying to find your website to get a better understanding of it all, is this it? https://www.enhancedradar.com

    If so, for most people it would be super helpful if you could put together something that explains this a bit better, with visuals, videos, testimonials, etc. I work on similar challenges in a completely different problem domain, and it takes quite a bit of material / explaining to really show what we do / what's possible, and make it feel less abstract.

    • kristian11094 days ago
      Good callout — we're being a little vague to the public while we build out these first use cases (HN doesn't count as the "public"). Expect more detail soon!
  • catwhatcat5 days ago
    Hey this seems like a great idea and has a lot of promise! If you're looking for a frontend / UI / UX engineer, I'd love to get involved. Either way, best of luck! Let me know - hello (at) joshmleslie.com
    • kristian11095 days ago
      Thank you -- saving for future reference
  • westurner5 days ago
    Can any aircraft navigation system plot drone Remote ID beacons on a map?

    How sensitive of a sensor array is necessary to trilaterate Remote ID signals and birds for aircraft collision avoidance?

    A multispectral sensor array (standard) would probably be most robust.

    From https://news.ycombinator.com/item?id=40276191 :

    > Are there autopilot systems that do any sort of drone, bird, or other aerial object avoidance?

    • kristian11094 days ago
      A lot of drones these days will have ADS-B. The ones that don't probably have geo-fencing to keep them away from airports. There's also all kinds of drone detection systems based on RF emittance.

      The bird problem is a whole other issue. Mostly handled by PIREP today if birds are hanging out around an approach/departure path.

      Computer vision here is definitely going to be useful long term.

      • westurner4 days ago
        FWIU geo-fencing was recently removed from one brand of drones.

        Thermal: motors, chips, heatsinks, and batteries are warm but the air is colder around propellers; RF: motor RF, circuit RF, battery RF, control channel RF, video channel RF, RF from federally required Remote ID or ADS-B beacons, gravitational waves

        Aircraft have less time to recover from e.g. engine and windshield failure at takeoff and landing at airports; so all drones at airports must be authorized by ATC Air Traffic Control: it is criminally illegal to fly a drone at the airport without authorization because it endangers others.

        Tagging a bird on the 'dashcam'+radar+sensors feed could create a PIREP:

        PIREP: Pilot's Report: https://en.wikipedia.org/wiki/Pilot_report

          Looks like "birds" could be coded as /SK sky cover, /WX weather and visibility, or /RM remarks with the existing system described on Wikipedia.
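          A bird report in the standard UA PIREP layout might look something like this (all values hypothetical):

```python
def bird_pirep(station: str, time_z: str, altitude_ft: int, remark: str) -> str:
    """Assemble a minimal PIREP with the bird activity in the /RM field.
    Follows the standard UA /OV /TM /FL /TP /RM layout; values hypothetical."""
    fl = f"{altitude_ft // 100:03d}"  # /FL is given in hundreds of feet
    return f"UA /OV {station} /TM {time_z} /FL{fl} /TP UNKN /RM {remark}"

print(bird_pirep("KPDK", "1455", 500, "LARGE FLOCK OF BIRDS FINAL RWY 21L"))
# UA /OV KPDK /TM 1455 /FL005 /TP UNKN /RM LARGE FLOCK OF BIRDS FINAL RWY 21L
```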

        Prometheus (originally developed by SoundCloud) does pull-style metrics: each monitored server hosts over HTTP(S) a document in binary prometheus format that the centralized monitoring service pulls from whenever they get around to it. This avoids swamping (or DOS'ing) the centralized monitoring service which must scale to the number of incoming reports in a push-style monitoring system.

        All metrics for the service are included in the one (1) prometheus document, which prevents requests for monitoring data from exhausting the resources of the monitored server. It is up to the implementation to determine whether to fill with nulls if sensor data is unavailable, or to for example fill forward with the previous value if sensor data is unavailable for one metric.
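          The exposition format itself is just plain text, one metric per line, so even a tiny sensor node can serve it without a library (the metric names below are invented for illustration; a real exporter would also emit # HELP and # TYPE lines):

```python
def render_prometheus(metrics: dict) -> str:
    """Render a dict of gauge values in the Prometheus text exposition
    format: one `name value` pair per line."""
    return "".join(f"{name} {value}\n" for name, value in metrics.items())

doc = render_prometheus({
    "sensor_birds_detected_total": 3,  # hypothetical metric names
    "sensor_feed_up": 1,
})
print(doc)
```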

        Solutions for birds around runways and in flight paths and around wind turbines:

        - Lights

        - Sounds: human audible, ultrasonic

        - Thuds: birds take flight when the ground shakes

        - Eyes: Paint large eyes on signs by the runways

        • westurner4 days ago
          > Sounds and Thuds [that scare birds away]

          In "Glass Antenna Turns windows into 5G Base Stations" https://news.ycombinator.com/item?id=41592848 or a post linked thereunder, I mentioned ancient stone lingams on stone pedestals which apparently scare birds away from temples when they're turned.

          /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...

          Are some ancient stone lingams also piezoelectric voice transducer transmitters, given water over copper or gold between the lingam and pedestal and given the original shape of the stones? Also, stories of crystals mounted on pyramids and towers.

          Could rotating large stones against stone scare birds away from runways?

    • westurner5 days ago
      Remote ID: https://en.wikipedia.org/wiki/Remote_ID

      Airborne collision avoidance system: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...

      "Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665458

    • westurner4 days ago
      Are there already "bird / not a bird" datasets?

      Procedures for creating "bird on Multispectral plane radar and video" dataset(s):

      Tag birds on the dashcam video with timecoded sensor data and a segmentation and annotation tool.

      Pinch to zoom, auto-edge detect, classification probability, sensor status

      voxel51/fiftyone does segmentation and annotation with video and possibly Multispectral data: https://github.com/voxel51/fiftyone

    • westurner5 days ago
      Oh, weather radar would help pilots too:

      From https://news.ycombinator.com/item?id=43260690 :

      > "The National Weather Service operates 160 weather radars across the U.S. and its territories. Radar detects the size and motion of particles in rain, snow, hail and dust, which helps meteorologists track where precipitation is falling. Radar can even indicate the presence of a tornado [...]"

    • westurner5 days ago
      From today, just now; fish my wish!

      "Integrated sensing and communication based on space-time-coding metasurfaces" (2025-03) https://news.ycombinator.com/item?id=43261825

  • aj_icracked5 days ago
    Love this. I sent this to a few of my military and commercial pilot friends to see if they are interested in talking / helping out. Good luck solving this problem!
    • kristian11095 days ago
      Appreciate your help! Always keen to discuss with aviation folks
  • beebaween5 days ago
    IMO one of the only interesting things "block chain" tech ever produced that had real world value and potential to save lives.

    https://aviationsystems.arc.nasa.gov/publications/2019/SciTe...

    Curious if OP has seen this paper / project before?

    • kristian11095 days ago
      Interesting paper, thanks for sharing here. Encrypting ADS-B is a whole discussion... in my opinion, it's a good thing to have largely public location data for folks to consume and study.

      What were your main takeaways from this paper?

  • ryry4 days ago
    this is amazing!

    I'm actually working on a similar ATC transcription project, but more along the lines of education / entertainment for non-pilots. I'm gonna be posting a YT video about it soon, but would love to chat if you guys are open to it!

  • singlepaynews3 days ago
    Are you hiring? I recently did a very similar coding challenge for vencover.com, and if there’s an opportunity I’d love to contribute
  • Axsuul5 days ago
    Great use of AI models, since language is likely enough to infer the situation.
    • kristian11095 days ago
      Language is critical input data. Appreciate it!
  • 1zael5 days ago
    I just want to say this is such a pivotal problem to solve in this era of aviation. Looking forward to the progress you all make here. If you need any help on the product or sales side, let me know!
  • btreecat4 days ago
    Do you believe the FAA should mandate ADSB for all GA regardless of airspace, and unify on a single global frequency rather than dual freq UAT+ADSB?
  • notahacker5 days ago
    What's your intended model for this? Pitch to air traffic management as their additional safety net or sell direct to operators/pilots as a warning service?
    • kristian11095 days ago
      Short answer: both.

      Pitching air traffic management is going to be a years long process.

      Getting certified for on-board avionics is similarly challenging.

      In the meantime, we'll get better and better at monitoring the airspace system and deploy that technology into unregulated applications like post-operational roles.

  • dehrmann5 days ago
    The demo I'd love-hate to see is backtesting on recent data to see if it can predict pilot deviations or TCAS alerts before there's a problem.
    • kristian11094 days ago
      Not to spoil the reveal here, but ... yes. We're working on compiling more of these cases and will probably publish a website or paper when it's solid.
  • csours5 days ago
    The current rash of airline incidents reminds me of the assembly instruction: Torque fastener until you hear expensive sounds and then back off a quarter turn.

    We've accelerated past our capabilities and need to slow down. ATC has incentive to slot takeoffs and landings as close as possible, but that is in tension with the goal of safety.

    > Air traffic control from first principles looks significantly more automated.

    We have a system 'designed' by history, not by intention. The ATC environment is implemented in the physical world; everyone has to work around physical limitations.

    Automation works best in controlled environments with limited scope. The more workarounds you have to add, the noisier things get, and that's why we use humans to filter the noise by picking the important things to say. Humans can physically experience changes in the environment, and our filters are built from our experiences in the physical world.

    Anyway, sorry that isn't a question.

    • kristian11095 days ago
      No stress, appreciate you chiming in.

      > We've accelerated past our capabilities and need to slow down.

      This is a super interesting meditation. As much work as there is to be done now, the demand for air traffic is growing and power laws are concentrating it into tight airspace bubbles. It would behoove us to figure out how to make airspace more dense without compromising safety. There's lots of good economic incentive for this.

      • csours4 days ago
        > how to make airspace more dense without compromising safety.

        My suggestion would be to make things as repeatable and consistent as possible. This would mean forcing some airports to change their practices to be consistent with everyone else, and forcing physical layout changes and construction. Unfortunately an app can't do that =( ... and the benefits are on the other side of a paradigm shift, so it's hard to make it happen naturally.

        >> more dense

        Large, high passenger capacity airliners have gone out of style but that would have been the best way to get passenger density up.

  • pj_mukh5 days ago
    "The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare "

    As a regular passenger and for someone with a little bit of drone autonomy experience, the fact that this is possible sounds insane to me. I just assumed there was something blocking the controllers from putting two flights on the same pathway (like little planes on a board?).

    My main question is, how does the "noticing" work now? What did the Controllers see that alerted them to what was unfolding?

    • kristian11094 days ago
      At an ASDE-X equipped airport like DCA, there is surface radar that the tower manager, for example, might be keeping an eye on. This is just some screen with a high-res view of everything on the ground. Then you "notice" the mistake, jump out of your seat, run over to the radio, and yell at both of them to stop in place. That's basically how it works today.

      On a clear day though, it's probably just people looking out the window of the tower (maybe with binoculars) and realizing in horror that there's been a mistake. The "noticing" is just intuiting the trajectory of the two moving objects...

      • pj_mukh4 days ago
        Jesus.

        On top of your analysis of commands that might flag conflicting commands, shouldn't we also be analyzing the radar to make sure there aren't any obviously conflicting paths?

        I saw your comment about pilot intentionality being the major time-saver but in a situation like the Potomac River crash, the intentionality and knowledge may have been a unintentional headfake for a system like yours i.e "Yes I see the CRJ".

        Maybe it's 80-20 for you, with the Potomac situation being especially rare.

        • kristian11094 days ago
          There's lots to discuss about the most recent Potomac River crash, but I'll let that rest for now.

          Your general point is 100% correct. It's not enough to only do the command parsing; you have to also compare that to radar + trajectory planning to get the full picture. That's exactly what we're working on now. Fortunately, the radar piece is much more deterministic.
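          The deterministic piece can be as simple as a closest-point-of-approach check on the surface tracks. A toy sketch (constant velocities, made-up geometry, nothing like a production implementation):

```python
def time_and_distance_at_cpa(p1, v1, p2, v2):
    """Closest point of approach for two constant-velocity surface tracks.
    Positions in meters, velocities in m/s, all 2-D tuples."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    vmag2 = vx * vx + vy * vy
    if vmag2 == 0:                          # same velocity: gap never changes
        t = 0.0
    else:
        t = max(0.0, -(rx * vx + ry * vy) / vmag2)
    dx, dy = rx + vx * t, ry + vy * t
    return t, (dx * dx + dy * dy) ** 0.5

# Hypothetical geometry: departure rolling east at 70 m/s, 1,400 m from the
# intersection; crossing jet 200 m south of it, taxiing north at 10 m/s.
t, d = time_and_distance_at_cpa((0, 0), (70, 0), (1400, -200), (0, 10))
print(t, d)  # both tracks reach the intersection at t = 20 s, separation 0 m
```

          The probabilistic command parsing and the deterministic geometry check each catch cases the other misses, which is why you want both.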

  • 1oooqooq5 days ago
    laudable. I'm hoping the radar UX will one day move away from the 1930s 2D oscilloscope sweep, instead of AI hallucinating speech.
    • kristian11095 days ago
      Yes, this speech technology is day one, but we've been talking a lot about what a future tower should look like. Augmented radar is complex and very interesting.

      Little known fact: some towers don't even have radar at all. They're just up in the cab with binoculars.

  • kats5 days ago
    Great!