278 points | by ingve | 11 days ago | 21 comments
  • exabrial10 days ago
    Laugh, but this probably does have some real world applications for Live Audio.

    Digital Live audio mixing is taking over, but it suffers one flaw compared to analog: Latency. Humans can adjust pretty easily to performing an action and hearing a delayed response (that's pretty natural in our daily lives, basically think of it as echolocation). This is sort of like standing farther from a guitar amplifier (sound travels roughly 1 ms per foot). However, singers have it the worst: there is 0 latency from their voice to the ear canal, so monitor systems try to use analog as much as possible.

    For digital audio links, every time you join them end-to-end or decode them, you get a bit of latency added.

    There are a few audio interconnects that run on Ethernet's physical layer (OSI Layer 1):

    * AES50 is standardized; basically you can think of it as the 100Base-T of digital live audio. It's synchronously clocked with a predictable latency of roughly 62us per link. Pretty nice. Cat5e cables are dirt cheap and musicians are destructive as feral cats, so it's a pretty good solution. Max length is 100 meters.

    * Dante is also popular but actually relies on IP (OSI Layer 3), so latency is variable. Typical values are 1ms - 10ms. Max length is pretty much unlimited, with a lot of asterisks.

    FTA: 11us is _unbelievably good_ digital latency, and combined with near-unlimited length it's actually a pretty good value proposition for Live Audio. There may be a niche demand for a product like this: slap in some SFP adapters, transmit a channel of digital audio over whatever medium you like.
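    The latency arithmetic in this comment can be sketched quickly (a toy Python calculation; the 62us-per-AES50-hop and ~1 ms-per-foot-of-air figures are the assumptions from above):

```python
# Rough latency budget for a live-audio signal chain, using the figures
# from the comment above (assumptions: 62 us per AES50 hop, sound in air
# at ~343 m/s, i.e. roughly 1 ms per foot).

SPEED_OF_SOUND_M_PER_S = 343.0

def air_delay_ms(distance_m: float) -> float:
    """Acoustic propagation delay through air, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

def aes50_delay_ms(hops: int, per_hop_us: float = 62.0) -> float:
    """Accumulated AES50 link latency for a chain of hops."""
    return hops * per_hop_us / 1000.0

# A singer 2 m from a wedge monitor already hears ~5.8 ms of acoustic
# delay, so a 3-hop AES50 chain (~0.19 ms) is negligible by comparison.
print(f"air, 2 m:      {air_delay_ms(2.0):.2f} ms")
print(f"AES50, 3 hops: {aes50_delay_ms(3):.2f} ms")
```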

    • lflux10 days ago
      Things have probably changed since I last talked to my friends at a large state radio/TV broadcaster, but for long haul they used either MADI over fibre, or AES50 into boxes from NetInsight along with SDI for the video feeds. This works so well that you can put the input/output converters in a venue hosting live music and do the program audio mix in a control room at broadcast HQ 100s of kilometers away.
      • amluto10 days ago
        At 100s of km, you’d be pushing the limits for actual live sound, though. 100km is about a light-millisecond, and ordinary fiber is rather slower than light, so that’s maybe 3ms round trip per 100km. If a musician can hear themselves through monitors at too much more latency than that, it could start to get distracting.
        • kijiki10 days ago
          If the monitors are 3ft away from the musician, they're already looking at 3ms of latency just in the air between the monitor and their ear.
          • _factor10 days ago
            This is why you see headphones used in recording studios I’m sure.
            • InitialLastName10 days ago
              You see headphones used in recording studios because ambient sound (i.e. from a loudspeaker) has a habit of getting picked up by microphones.
        • lflux10 days ago
          As I understand it, the sound for the audience in the venue and the monitors for artists was run locally by a separate mixer. The audio backhauled to HQ was for the live broadcast.
        • mrb10 days ago
          Latency is 1ms for a round-trip through 100km of fiber (200km total).
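          A quick sanity check of that figure, assuming a typical group index of ~1.47 for standard single-mode fiber:

```python
# Round-trip propagation delay through 100 km of fiber (200 km total path).
C = 299_792_458.0  # speed of light in vacuum, m/s
n = 1.47           # approximate group index of standard single-mode fiber
v = C / n          # ~2.04e8 m/s in glass

one_way_s = 100_000 / v  # 100 km one way
print(f"round trip: {2 * one_way_s * 1000:.2f} ms")  # ~0.98 ms
```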
    • philjohn10 days ago
      Although when designing audio solutions for large venues, the further back a speaker stack is, the more you'll likely want to add a delay to it so that the sound hits at the same time as the sound from speakers closer to the stage - otherwise it can sound awful (like a strange echo): https://www.prosoundweb.com/why-wait-the-where-how-why-of-de...

      So yes, for monitoring, or linking two far away places with near zero latency audio, but not for connecting speaker stacks in a venue :)
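      The delay computation for an alignment delay like this is just distance over the speed of sound; a toy sketch (assuming 343 m/s):

```python
# Delay needed for a rear speaker stack so its output lines up with the
# sound arriving acoustically from the main stacks near the stage.
SPEED_OF_SOUND = 343.0  # m/s

def delay_ms(distance_from_mains_m: float) -> float:
    return distance_from_mains_m / SPEED_OF_SOUND * 1000.0

# A stack 50 m back from the mains needs ~146 ms of added delay.
print(f"{delay_ms(50):.0f} ms")
```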

    • miki12321110 days ago
      I've recently been reading about T1 and E1 cables, which were used to transmit most calls inside and between telecom companies back in the day, and I was astonished that they transmitted data one sample at a time.

      Unlike IP, those were synchronous, circuit-switched systems. You'd first use a signaling protocol (mostly SS7) to establish a call, reserving a particular timeslot on a particular link for it, and you'd then have an opportunity to transmit 8 bits of data on that timeslot 8000 times a second. There was no need for packet headers, as the timeslot you were transmitting on was enough to identify which call the byte belonged to.

      Because all data from a single call always took the same path, and everything was very tightly synchronized, there was also no variability in latency.

      This basically eliminated any need for buffers, which are the main cause of latency in digital systems.
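      The timeslot arithmetic above works out neatly (standard T1/E1 framing figures):

```python
# One frame is transmitted per audio sample, 8000 times a second.
FRAME_RATE = 8000  # frames per second, matching the 8 kHz sample rate

# T1: 24 voice timeslots of 8 bits plus 1 framing bit per frame
t1_bits_per_frame = 24 * 8 + 1
print(t1_bits_per_frame * FRAME_RATE)  # 1_544_000 -> the familiar 1.544 Mbps

# E1: 32 timeslots of 8 bits (30 voice + 2 for framing/signaling)
e1_bits_per_frame = 32 * 8
print(e1_bits_per_frame * FRAME_RATE)  # 2_048_000 -> 2.048 Mbps

# A single call: one 8-bit slot, 8000 times a second = 64 kbps,
# one sample at a time, no headers.
print(8 * FRAME_RATE)                  # 64_000
```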

      • toast09 days ago
        > This basically eliminated any need for buffers, which are the main cause of latency in digital systems.

        You still need a buffer at each switching point, because the timeslots on each cable aren't likely to line up. But the buffer for each channel only needs to be 2 samples wide in the worst case where the timeslots overlap and you need to send from the buffer while receiving into the buffer.

        Given the timeframe when T1/E1 were developed, a more accurate perspective is not that buffers were eliminated, it's that they were never created.

      • rasz10 days ago
        Didn't GSM (2G) work the same way, with dedicated regular timeslots per call? I don't know about 3G, but 4G finally introduced packetized voice data with VoLTE, and 5G cemented it.
        • joha427010 days ago
          The interesting point wasn't the timeslots, but their size.

          Yes, 2G has fixed time slots, but a slot is used for a lot longer than a single (half?) sample.

          • miki1232119 days ago
            2G (and all other standards after it) uses 20-millisecond frames.

            It needs to send 8 kHz audio at much lower bitrates (~14 kbps instead of 64 kbps), and you can't do that with raw PCM if you want decent quality. This means you need lossy compression and a codec, and those need far more than a single sample to work well.

            CDMA was similar, not sure what their frame size was exactly, but it was somewhere in the vicinity.
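            The buffering difference is easy to quantify (a sketch using the public GSM full-rate figures: 20 ms frames, 13 kbps codec output):

```python
# How much audio a 2G codec buffers per frame, vs. the single sample
# an E1 timeslot carries.
SAMPLE_RATE = 8000  # Hz
FRAME_MS = 20

samples_per_frame = SAMPLE_RATE * FRAME_MS // 1000
print(samples_per_frame)     # 160 samples buffered per frame...

raw_pcm_bits = samples_per_frame * 8      # ...vs 1 sample (8 bits) per E1 slot
gsm_fr_bits = 13_000 * FRAME_MS // 1000   # GSM full-rate: 260 bits per frame
print(raw_pcm_bits, gsm_fr_bits)          # 1280 vs 260
```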

    • toast010 days ago
      > FTA: 11us is _unbelievably good_ digital latency, but with near unlimited length is actually a pretty good value proposition for Live Audio. There may be a niche demand for a product like this: slap in some SFP adapters, transmit a channel of digital audio over whatever medium you like.

      Used to be you could get a PRI (ISDN/T1) phone line for this kind of work, but I think it's pretty doubtful that you can keep an end-to-end low-latency PRI with modern telephony. You'd have to be OK with single channel 8-bit, 8k uLaw, but that's not that bad; you could probably orchestrate multiple calls for multiple channels. Someone is going to convert it to SIP with 20ms packets and there goes your latency.

    • lukeh10 days ago
      Dante network latency can go as low as 125us.
      • exabrial9 days ago
        Is there a mode I'm unaware of? I've never had Dante latency that low, let alone that predictable. 1ms-2ms is average with occasional spikes in my experience, and the more complex the network setup the worse it gets.
      • chgs10 days ago
        Is that in AES67 mode?

        I don’t dabble much in low latency audio but from what I remember Dante tended to be about 1ms?

        • lukeh10 days ago
          AES67 mode is unfortunately limited to 1ms or higher.
  • vluft11 days ago
    On a related note, the excellent DIY Perks youtube channel recently replaced toslink leds with lasers to do a wireless surround system https://www.youtube.com/watch?v=1H4FuNAByUs
    • dylan60411 days ago
      What happens when your sub starts kicking so hard that your walls start to vibrate causing the line of sight to go intermittent?
      • ragebol10 days ago
        Then the audio drops out, so it's a self-correcting problem!

        Also, the beam is a bit divergent, even if it vibrates the beam could still cover the sensor.

        • dylan60410 days ago
          Not necessarily. The sub is not usually attached to a wall, so it wouldn't self correct like you're suggesting
          • chowells10 days ago
            I think you missed a joke there.

            Loss of signal -> silence -> no vibrations -> signal resumption.

            • dylan60410 days ago
              No, you're missing the point. The subwoofer is not connected to a wall that vibrates, so it wouldn't miss the signal. The surround speakers, and possibly the front speakers, tend to be attached to a wall. The point is that the floor doesn't shake enough for the sub to lose alignment.
              • simoncion10 days ago
                Well, (to treat this seriously, rather than the joke it was) where's your transmitter? And are there vibration-sensitive components inside of either the transmitter or receiver? Several times a month, cars idle outside my apartment with bass loud enough to severely shake my windows, and somewhat shake my walls and floor. I imagine a receiver that's physically attached (or merely very near) to a subwoofer that loud would have trouble maintaining a steady optical link.
              • ragebol10 days ago
                I was making a joke though.

                Also, if you bounce the signal off a mirror on the wall like DIY Perks did, then walls vibrating even a little bit will be an issue if the beam is narrow enough.

        • kridsdale39 days ago
          What an excellent natural interpretation of "DROP THE BASS"
    • actionfromafar11 days ago
      Next step, point a TOSLINK laser at the Moon Retroreflectors!
      • dylan60411 days ago
        There was something posted not too long ago that bounced radio signals off of the moon that they then turned into an audio filter based on their testing on what it would do to the signal.
        • jrockway10 days ago
          https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%E2%80%93Ear...

          I like the example audio file they have for the article, because the QSO ends with "73, bye bye" and that bounces off the moon and is received by the sender a little bit later. The moon is far away!

          (I also really enjoy the distortion to SSB signals that you get by tuning the "carrier" frequency slightly wrong; more likely in this case because the moon changes the frequency of the reflected signal due to the doppler effect. Also happens with satellite comms, though you might not notice if you're using FM and not SSB.)

      • mey10 days ago
        The dark side of the moon on continuous loop would be an interesting project.
    • gorkish10 days ago
      The problem with DIY Perks' solution is that the Manchester clock+data encoding is an amplitude-modulated thing and isn't really very robust for use in free space. LED bulbs, sunlight, and all manner of other stuff can and will mess with it. This is probably why he ended up having to go with lasers instead of just a big IR blaster against the ceiling. If he modulated the OOK signal onto some kind of carrier, the entire thing would be a lot more reliable and as a bonus could probably ditch the lasers. This is more or less how the infrared wireless speakers and headphones of yore (80's and 90's) did the job.
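      A toy sketch of the carrier idea (bit rate and sample rate chosen purely for illustration; 38 kHz is the usual IR-remote carrier frequency):

```python
# On-off keying modulated onto a carrier: carrier bursts for 1 bits,
# silence for 0 bits. A receiver band-pass filtered at the carrier
# frequency ignores sunlight and LED flicker, which sit at DC or low
# frequencies.
import math

def ook_on_carrier(bits, carrier_hz=38_000, bit_rate=1_000, sample_rate=1_000_000):
    """Return samples of the modulated signal."""
    samples_per_bit = sample_rate // bit_rate
    out = []
    for i, b in enumerate(bits):
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            out.append(b * math.sin(2 * math.pi * carrier_hz * t))
    return out

sig = ook_on_carrier([1, 0, 1])
print(len(sig))   # 3000 samples; the middle 1000 (the 0 bit) are silence
```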
      • Neywiny10 days ago
        So the problem with his solution is that he needed a solution to solve a problem?
        • gorkish5 days ago
          It works but it's not going to be very robust without a carrier.
      • amluto10 days ago
        If you mean a literal “IR blaster”, those generally modulate onto a 38kHz carrier. (I built an IR blasting device out of a 555 timer and an LED once, and it worked great, and no, I did not use precision resistors or capacitors. I admit I’m not actually sure whether a standard IR blaster contains a modulator or whether the device supplying the signal is expected to pre-modulate it.) You’re not going to get anything resembling acceptable audio quality over consumer IR tech.
        • gorkish5 days ago
          > anything resembling acceptable audio quality over consumer IR tech

          I don't know what you are on about. You can go to your local walmart and get IR headphones off the shelf that work exactly this way.

    • pseudosavant10 days ago
      Such a great video. There is a really good chance I use that technique for a remote subwoofer at some point. Really elegant solution.
    • skerit9 days ago
      A part of me wants to use his idea to set up some kind of wireless data connection just for fun.
  • pclmulqdq10 days ago
    Large-scale audio systems will often use synchronous Ethernet or other similar protocols instead of things like TOSLINK at this point.

    Also, a general solution to "send low-bandwidth over an SFP" is to use FM or phase modulation to carry the signal on top of a carrier wave that is fast enough for the retimers in question. Buffer and retimer chips will not respect amplitude in a modulation system, but they will largely preserve frequency and phase.
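    A toy sketch of the FSK variant of this idea (all frequencies and rates here are arbitrary illustrations, not values from any real product):

```python
# Frequency-shift keying: each slow bit becomes one of two frequencies,
# both fast enough to pass an AC-coupled retimer, which preserves
# frequency even if it doesn't respect amplitude. Bits are recovered by
# counting zero crossings per bit period.
import math

def fsk_encode(bits, f0=100e6, f1=200e6, bit_time=1e-6, sample_rate=2e9):
    """Hard-limited (1-bit) samples of the FSK waveform."""
    out = []
    n = int(bit_time * sample_rate)
    for i, b in enumerate(bits):
        f = f1 if b else f0
        for k in range(n):
            t = (i * n + k) / sample_rate
            out.append(1 if math.sin(2 * math.pi * f * t) >= 0 else 0)
    return out

def fsk_decode(samples, n_per_bit=2000, threshold=300):
    """Classify each bit period by its zero-crossing count."""
    bits = []
    for i in range(0, len(samples), n_per_bit):
        chunk = samples[i:i + n_per_bit]
        crossings = sum(a != b for a, b in zip(chunk, chunk[1:]))
        bits.append(1 if crossings > threshold else 0)
    return bits

data = [1, 0, 1, 1, 0]
assert fsk_decode(fsk_encode(data)) == data
```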

    • rdtsc10 days ago
      Indeed. I worked with CobraNet for some years. I kind of liked their isochronous protocol. But being a layer 2 protocol, I believe it's outdated at this point.

      Also greetings, again (I believe?) from a fellow assembly username HNer!

    • iancmceachern10 days ago
      Yeah, there is a whole standard for it

      https://en.m.wikipedia.org/wiki/Audio_over_Ethernet

      This is what most professional places have

      • dekhn10 days ago
        I had a dream many years ago where I could connect all my house devices; all the TVs, stereos, etc, all to one ether network (ideally the same physical network as my switched Internet ports) and send AV from any source to any dest without having to worry that much about formats or bandwidth limits.

        It never really happened and each company came up with their own bespoke solution, seemingly with "mobile phone-first" philosophy.

        • iancmceachern10 days ago
          They have this too; it's how those fancy systems in rich people's mansions and fancy board rooms work. A famous company in that world is Crestron. They make stuff that lets you do this, controlling everything from one central system.

          The protocol for the video is GigE Vision. It's how many fancy broadcast, CCD security, and fancy home theater/office setups work.

  • crote11 days ago
    I'm surprised it works this well!

    A while ago I looked into this for a similar-ish hobby project, and the main dealbreaker seemed to be the mandatory AC coupling capacitors: they are intended to block DC currents, so a signal which is substantially slower than intended is essentially fighting a high-pass filter. This is also why there are special AV SFP transceivers: unlike Ethernet, SDI suffers from "pathological patterns" consisting of extremely long runs of 1s or 0s, which can cause "DC wander" [0]. SDI transceivers need to take this (albeit extremely unlikely) possibility into account, or risk losing signal lock.

    For this reason I pretty much gave up on the idea of reliably going sub-100Mbps on cheap and easily available 1G / 10G SFP modules. Seeing it (mostly) work for TOSLINK at 3Mbps is beyond my wildest expectations - I bet the LVDS driver's high slew rate is doing quite a bit of work here too.

    [0]: https://www.ti.com/lit/an/snaa417/snaa417.pdf

    • MrRadar11 days ago
      The article mentions that S/PDIF (which TOSLINK is an optical version of) uses Manchester code [1], which eliminates the DC component by ensuring every bit has at least one transition of the signal between high and low.

      [1] https://en.wikipedia.org/wiki/Manchester_code
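      A minimal sketch of Manchester encoding (illustrative; using the IEEE 802.3 bit convention):

```python
# Each data bit becomes two half-bit symbols, so every bit period
# contains a transition and the line spends equal time high and low
# (no DC component).
def manchester_encode(bits):
    out = []
    for b in bits:
        # IEEE 802.3 convention: 0 -> high-then-low, 1 -> low-then-high
        out += [1, 0] if b == 0 else [0, 1]
    return out

encoded = manchester_encode([1, 1, 0, 1])
print(encoded)                           # [0, 1, 0, 1, 1, 0, 0, 1]
assert sum(encoded) * 2 == len(encoded)  # always DC-balanced
```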

      • crote10 days ago
        The problem is the speed. S/PDIF doesn't have a DC component at the S/PDIF bit rate, but to an SFP+ transceiver that S/PDIF signal is a lot closer to DC than to its expected signal. A single S/PDIF bit viewed as if it were a 10Gbps signal looks like thousands of 1s followed by thousands of 0s. Yes, they all balance out in the end, but you can still develop quite a large drift within a single sub-S/PDIF-bit sequence.

        A thought experiment to clarify it: let's say you are hoisting a bucket with a DC motor. You're feeding it with a 50Hz AC power source. It's obviously not going anywhere, because it's just oscillating rapidly. You'd need for the motor to run in a single direction for a few minutes to actually lift the bucket. Now drive it with a 0.0000001Hz AC power source (which starts at peak voltage). The motor is going to reverse after 58 days, but does that actually matter? For any practical purposes, how is it different from a DC power source?
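        Put in numbers (assuming the ~3 Mbaud TOSLINK line rate mentioned in the article):

```python
# How "DC-like" an S/PDIF signal looks to a 10G transceiver.
SFP_RATE = 10_000_000_000  # 10 Gbps
SPDIF_RATE = 3_000_000     # ~3 Mbaud after Manchester coding

samples_per_symbol = SFP_RATE // SPDIF_RATE
print(samples_per_symbol)  # 3333 -> each S/PDIF symbol looks like
                           # ~3333 identical bits in a row at the 10G rate
```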

        • crest10 days ago
          That's why you get problems around 10Gbps, but simple 10Gbps optics and afaik all 1Gbps or slower optics don't use the "fancy" kind of signal processing because it wasn't needed. Their lower cut-off frequency should be around 100kHz.
        • nomel10 days ago
          Does SFP+ not have a scrambler/descrambler to make this a non issue, like almost all other phy?

          https://en.m.wikipedia.org/wiki/Scrambler

          • jrockway10 days ago
            This is done before the SFP+ module sees the signal, but the module makes the design assumption that it is being done. It is right for 10G ethernet, it is wrong (at a certain time scale) for SPDIF.

            I also think that https://en.wikipedia.org/wiki/Line_code is the term you're looking for.

        • MrRadar10 days ago
          Thanks for the explanation!
      • teraflop11 days ago
        Yup, but that only works if those transitions happen frequently enough compared to the time constant of the high-pass filter. Presumably, that's why the author found that the optics only worked with signals above about 150kHz.
    • michaelt10 days ago
      I can understand DC wander being a problem on copper ethernet, where the signal goes through an isolation transformer - which is there specifically to block DC; you don't want to accidentally make a ground loop between buildings after all.

      But presumably an optical SFP doesn't need to block DC, because you can't make a ground loop over optical fibre?

  • glitchc11 days ago
    Once you replace the TOSLINK transmitter with an SFP module, it's not the TOSLINK tx/rx that's being tested but rather the low-bandwidth S/PDIF protocol operating over a high bandwidth SFP link. So it's not really TOSLINK that's being extended but rather S/PDIF over optical fibre. Maybe I'm missing something....
    • toast010 days ago
      TOSLINK is S/PDIF over (usually plastic) optical fiber. S/PDIF over SFP is S/PDIF over optical fiber too, unless you're using SFP DACs.
  • myself24811 days ago
    Fiber techs have "talk sets" which are just little voice intercoms that you plug into an unused fiber in the bundle, so you can yammer back and forth between manholes/closets/whatever. I'm not sure whether they're even digital; it's been a while since I played with a pair.
    • mrguyorama10 days ago
      How do you non-destructively jack into a glass fiber? Or are they limited to hooking into transceivers on the ends?
      • myself24810 days ago
        You're correct that the talk-sets have to plug into the ends.

        However, there's directional indicators that just clamp onto the middle of a fiber. They bend it a little and sample the light that leaks out of the bend, without interrupting payload traffic. The first one I used back in the day was an Exfo but there are tons of 'em now.

        As far as I know, these are receive-only, though physics doesn't seem to prohibit launching light into the fiber this way, it would just be an extremely inefficient process.

        There isn't enough light leaking out to reconstruct the whole high-bit-rate signal (as far as I know), but there's enough to tell whether the light is flowing one way or the other, or both. And there's enough to tell whether it's modulated with a low frequency signal -- most optical test sets can generate a simple "tone", typically 270 Hz, 330 Hz, 1 kHz, or 2 kHz, and the clamp testers can tell if and which tone is present.
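        One way such a tester could pick out which tone is present is the Goertzel algorithm, which measures signal energy at a single target frequency. (The tone frequencies are the ones from this comment; the detector itself is my illustration, not necessarily what the instruments actually use.)

```python
# Goertzel algorithm: relative power at one target frequency.
import math

def goertzel_power(samples, sample_rate, target_hz):
    k = round(len(samples) * target_hz / sample_rate)
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# Simulate a received 1 kHz tone and test the standard tone frequencies.
rate = 8000
signal = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(800)]
for tone in (270, 330, 1000, 2000):
    print(tone, round(goertzel_power(signal, rate, tone), 1))
# The 1000 Hz bin dominates the others by orders of magnitude.
```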

      • 0_____010 days ago
        My guess is it's already-terminated dark fiber with an FC connector (no transceiver)

        Found an example here. https://www.fiberinstrumentsales.com/fis-singlemode-multimod...

        You can't really "get into" an optical fiber mid-run without splicing. Splicing isn't really that hard (I've done it! Fusion splicers are little robotic wonders. Most of the work is in the prep, not the splice itself.)

      • toast010 days ago
        You're probably in the manhole to work on a fiber break anyway...
        • chgs10 days ago
          And hopefully not break all the other fibres while doing it.

          Of course that’s why we get so concerned about pinch points with dual fibres

  • theandrewbailey11 days ago
    > TOSLINK/SPDIF turns this into a manchester coded serial signal, at around 1.5Mbps that is much more resiliant to analog interference

    When I was connecting my surround sound receiver to my PC, I was bummed that the S/PDIF standard was never improved to support 5.1 or 7.1 uncompressed surround sound. 5.1 DTS compression is the best it can do (due to the 1.5 Mbps bandwidth), but PC support is rather limited. I gave up, and I've been using it with HDMI for 10 years. Running it through my video card/drivers has introduced (bearable) complexity, but I wonder why receivers to this day can't connect to PCs over USB instead. (Yes, most receivers have USB ports, but those are for playing MP3s off a flash drive. A PC isn't a flash drive.)

    • toast010 days ago
      I think the root of the problem is that the lack of bidirectional signalling means you have to manually configure capabilities on both sides (which actually already happens for DTS/Dolby over SPDIF, so it wouldn't have been the end of the world...). Lack of bidirectional signalling also precludes content protection that's more effective than setting a "don't pirate" flag, which might be the real reason.
    • jiehong10 days ago
      Part of the answer is that TOSLINK does not support DRM, while HDMI does.

      So there never was any compelling reason to improve it like that. They've even removed TOSLINK output from many devices nowadays, even when they didn't have to.

    • dylan60411 days ago
      > A PC isn't a flash drive

      That could be a kind of cool app that would allow you to present a folder on your PC as a media device. However that would then require a dreaded USB-A to USB-A type of cable <shudder>

      • akovaski11 days ago
        You can do this (in Linux, at least. Mobile devices like Android as well.) if the USB port of the peripheral side is a USB OTG port. I've only seen USB OTG ports as USB-B (standard and micro) or USB-C.

        Edit: I didn't notice before, but USB OTG is on the front page right now https://news.ycombinator.com/item?id=42585167

        • nsteel10 days ago
          I think there was a RPi Zero project doing the rounds some years back that made use of this.
      • EvanAnderson11 days ago
        Target disk mode on a lot of older Mac machines did that over Firewire. You could boot the machine into target disk mode and it would present its mass storage over Firewire. It was pretty cool.
        • UniverseHacker10 days ago
          I loved that feature- I could take my shitty old laptop into a university computer lab and boot a powerful brand new mac with fast internet from my hard drive- and use all of my software as if it was my own computer.
        • dylan60411 days ago
          But you couldn't use the machine at the same time. This would be like a SAMBA share, but over USB
          • zokier10 days ago
            You can connect two computers with usb and setup network between them, so you can just use smb/cifs. Microsoft has even handy tutorial for that: https://learn.microsoft.com/en-us/windows-hardware/design/co...
            • dylan60410 days ago
              Again, this is not the same thing as allowing a USB cable to connect from a PC to another device that is expecting something that presents itself as a mass storage device.
      • ianburrell10 days ago
        It could be done with USB-C. The computers would need to figure out which one is the USB host, and which one acts as the USB device (the drive).

        This is called gadget mode. I don't know what PCs can do it, but Raspberry Pi can do it.

    • bar000n10 days ago
      I doubt the 1.5 Mbps limit, as many DACs spec TOSLINK as capable of 24-bit/96 kHz stereo PCM, which adds up to almost 5 Mbps.
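      The arithmetic, for reference (S/PDIF framing carries each sample in a 32-bit subframe, and biphase-mark coding doubles the line rate; the ~1.5 Mbps figure is, as far as I know, the payload cap for compressed bitstreams in a 48 kHz frame, not the raw link rate):

```python
# S/PDIF data rate and line rate for stereo PCM at a given sample rate.
def spdif_rates(sample_rate, channels=2, subframe_bits=32):
    data_rate = channels * subframe_bits * sample_rate
    return data_rate, data_rate * 2  # (data bps, line baud after BMC)

print(spdif_rates(48_000))  # (3072000, 6144000)  -> 48 kHz stereo
print(spdif_rates(96_000))  # (6144000, 12288000) -> 24-bit/96 kHz fits
```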
  • brudgers10 days ago
    Recently, I described Toslink in an internet conversation...the other person expected it to be like USB. It is pretty amazing how old this technology is and how little anyone complains about it.

    There just aren't Toslink horror stories floating around the popular internet (SPDIF is another WTF-a-75Ω-RCA-cable? story). Toslink is a technology that just works (and the normal limit is a generous 10m)

  • fru65411 days ago
    I wonder if something like this is possible with HDMI? Separate 10G SFP+ for each color channel, one more for i2c, create a similar style breakout PCB, maybe add an MPO or CWDM mux… Could be a fun project. Optical HDMI cables are expensive and most of the time come with a preexisting cable which is hard to route (in conduits) due to HDMI connector size.
    • crote11 days ago
      Such products are already commercially available [0][1]!

      DIYing it is probably too painful to be doable. You won't be able to source any kind of protocol translation chip, so you'll have to send it essentially raw into quad SFP+ transceivers. Running 4+ fibers instead of the required 2 (or even 1) is very expensive, and any kind of WDM immediately blows up your budget. Unless you're getting the stuff for free from a DC renovation or something, it's just not worth it.

      On top of that you also have to deal with designing a board for extremely fast signals, which is pretty much impossible to debug without spending "very nice car" amounts of money on tooling. People have done it before, but I definitely don't envy them.

      [0]: https://www.startech.com/en-us/audio-video-products/st121hd2...

      [1]: https://www.blackmagicdesign.com/products/miniconverters/tec...

      • toast010 days ago
        If you need 4x channels, it sounds like a job for QSFP? HDMI is already differential signalling, so you don't need to do that, but you might still need level shifting.

        Probably a box on the source end to manage DDC and strip HDCP.

      • raron10 days ago
        > You won't be able to source any kind of protocol translation chip

        I think many of those chips are simple off-the-shelf parts. Probably you would need special licenses only to decode HDCP.

        If you have an FPGA, you could even create valid Ethernet frames and send the data / video stream over any standard switch / media converter as long as you have enough bandwidth and no packet loss. (10G would be enough for FullHD and 25G for 4K if you make it a bit smarter and can strip the blanking interval.)

      • Doohickey-d10 days ago
        There's even cheaper versions of this now, "fiber" HDMI cables with the electronics in the HDMI plugs themselves, no additional power required. They go up to 100m length. I do wonder how these work, since I've never seen a good teardown of one.
      • 1515510 days ago
        > You won't be able to source any kind of protocol translation chip

        This is called an FPGA.

    • somat10 days ago
      My plan, if I ever need long-haul (>3 meter) video or audio links, is to get the signal into Ethernet (or even better, IP) and use common network equipment to transport it.

      The theory being ethernet is such a well developed, easy to source common jelly-bean part that this would trump any gains that specialized transports might otherwise have.

      But this is probably just my inner network engineer being disdainful over unfamiliar transport layers.

      • myself24810 days ago
        Nah, this is totally the reasonable way to do it, iff you can tolerate the compression loss or whatever. Because 4k60 is like 12Gbps uncompressed, and even more after you cram ethernet headers onto everything. So most such devices include some compression, and the really expensive ones let you configure how much.

        Failing that, you're probably doing SDI over your own lambda.
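        Checking the 12 Gbps figure (assuming 8 bits per channel RGB and ignoring blanking):

```python
# Raw bandwidth of uncompressed 4k60 video.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 24
raw_bps = width * height * fps * bits_per_pixel
print(f"{raw_bps / 1e9:.1f} Gbps")  # ~11.9 Gbps before blanking or headers
```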

      • gh02t10 days ago
        It's much cheaper to just buy an optical HDMI cable if you need a long point to point run, it's like 50 bucks for 100 ft. The cool stuff you can do with HDMI over IP lies in switching the signal to different endpoints on demand and things like multicast to multiple receivers, both of which are things you can do with off the shelf HDMI over IP gear.
      • zokier10 days ago
        That is happening in the pro world, check out e.g. SMPTE ST 2110.
    • wolrah10 days ago
      I have wondered about the same (and/or DisplayPort) but with QSFP optics to simplify dealing with the four channels of data.

      "Classic" DVI-derived HDMI would probably be trickier because of variable clock speeds and additional data but modern HDMI 2.1 is pretty similar to DisplayPort in that it uses four lanes at fixed rates and sends data as packets over those.

      I would love to be able to use standard widely available fiber patch cables for long distance video runs rather than needing proprietary cables only offered in fixed lengths and equipped with enormous connectors that are not friendly to conduit.

      Also these days data rates are getting high enough that even normal lengths are problematic, DisplayPort just recently announced that 3 meter cables will need active components for the full 80 gigabit per second mode, which means that a computer on the floor connecting to a monitor on a standing desk will not be guaranteed to work with passive cables. HDMI also recently announced version 2.2 with a bump from 48 to 96 gigabits per second so they'll presumably be in the same boat.

    • psophis11 days ago
      Not HDMI, but SDI over fiber is basically this. It can be muxed and is used in the broadcast industry for long haul camera feeds.
      • chgs10 days ago
        SDI over fibre with a cheap converter if you need to push multiple hundred metres. Then people moved to 2022-6, which packetised the SDI over IP, and now 2110, which breaks out the SDI into its components.

        For most long haul links people still compress, good old h264 or 265 with latencies in the 200-500ms range (plus network propagation), or J2k/JXS and NDI which are more like 50-150ms. Ultimately 200mbit of h265 is far cheaper to transmit than 10ish gbit of 2110, and in many cases the extra 500ms doesn’t matter.

  • mrb9 days ago
    You can buy a spool of 100km of "OTDR launch fiber" which is quite compact as it's just raw fiber terminated by two connectors. 100km sells for only $1400: https://aliexpress.com/item/1005006978999338.html which is well within his budget as it seems he spent over $1000 on this hobby project.

    He could have bought two spools for 200km, laid them on his bench, and called it a day, instead of driving around data centers only to achieve 160km :-) But that's just the lazy side of me talking. Heck, he could even return the spools for a refund when he's done :-)

  • ahofmann11 days ago
    This is wonderfully useless, what a great delight to read!
  • jrockway10 days ago
    I fear that I'm about to be nerd sniped because I really want to try to make this work as a "proper" 10G signal.

    I feel that if you over sample the SPDIF signal and line code it to not have a DC bias, and do the opposite on the receive end, it would work. That is maybe too much transformation to be interesting, however. So I wonder what happens if you sample the signal at the 10G ethernet sample rate like a 1 bit ADC does, transmit that, and smooth the too-high-frequency result with a capacitor?

    I am very worried that I may end up trying this ;)
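    A toy simulation of the 1-bit-ADC idea (Python; the oversampling factor is scaled far below the real ~1678:1 ratio between 10.3125 GHz and S/PDIF's ~6 MHz biphase rate, and it ignores 10GBASE-R's 64b/66b coding and DC-balance requirements, so treat it as a sketch, not a design):

```python
import random

# Toy model of the 1-bit-ADC idea: sample the slow S/PDIF waveform at the fast
# line rate, transmit the raw samples, and "smooth" on the far end.
# OVERSAMPLE is scaled far below the real ~1678:1 ratio to keep this runnable.
OVERSAMPLE = 64

def transmit(bits):
    """Sample the slow waveform at the fast rate: each slow bit becomes
    OVERSAMPLE identical fast samples."""
    return [b for b in bits for _ in range(OVERSAMPLE)]

def corrupt(samples, ber=0.01):
    """Flip a small fraction of fast samples to mimic line noise."""
    return [s ^ 1 if random.random() < ber else s for s in samples]

def receive(samples):
    """Recover each slow bit by majority vote over its symbol period --
    the digital stand-in for the smoothing capacitor."""
    bits = []
    for i in range(0, len(samples), OVERSAMPLE):
        chunk = samples[i:i + OVERSAMPLE]
        bits.append(1 if sum(chunk) * 2 > len(chunk) else 0)
    return bits

if __name__ == "__main__":
    random.seed(0)
    spdif = [random.randint(0, 1) for _ in range(256)]  # stand-in bitstream
    print("recovered cleanly:", receive(corrupt(transmit(spdif))) == spdif)
```

    The majority vote plays the role of the smoothing capacitor: as long as fewer than half the fast samples in a symbol period get corrupted, the slow bit comes back intact.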

  • PaulHoule10 days ago
    I think it's amusing that optical fiber connectors have had so little success in the market, though I have a few TOSLINK and the coaxial equivalent in my upstairs home theater (I have a Sony 300-disc CD changer packed with DTS 5.1 Music Discs, so I'm living the surround music dream) and downstairs (computer to stereo, computer to MiniDisc recorder, etc.)

    I recently got a cable to hook up a Meta Quest 3 to a PC for PCVR. My understanding is that works like a high-spec USB 3 cable but has an optic fiber in it for the data so it can be really long.

    • synchrone10 days ago
      I tore down oculus link cable - it's just copper internally.

      Also oculus works fine over the "charging" type c cable + type-c to type-a + a classic copper usb3.0 extender of another 1.8 meters.

      • zokier10 days ago
        You can see the fibers clearly in this teardown video: https://www.youtube.com/watch?v=Spa_pAn871c
      • PaulHoule10 days ago
        I use it for other things and it performs admirably. (In particularly my Sony camera has trouble with cheap cables) It is one of two "elite" USB-C cables I keep near my computer, the other one is the shorter cable that came with the Looking Glass Go.
    • TacticalCoder10 days ago
      > I think it's amusing that optic fiber connectors have had so little success in the market though I have a few TOSLINK and the coaxial equivalent in my upstairs home theater ...

      Is TOSLINK that unsuccessful? I was already using TOSLINK a very long time ago (in the nineties) and I'm still using TOSLINK today. Enter any audio store and they have TOSLINK cables.

      It's very old by now though and I take it there's better stuff but TOSLINK still does the job.

      My "music" path doesn't use TOSLINK:

          source, eg QOBUZ for (lossless) streaming -> ethernet -> integrated amp (which has a DAC) -> speakers
      
      But my "movie" (no home theater anymore atm) path uses TOSLINK:

          TV -> TOSLINK -> same integrated amp -> speakers
      
      For whatever reason that amp is quite new and has all the bells and whistles (ARC and network streaming, for example) yet that amp still comes with not one but two (!) TOSLINK inputs.

      I'd say that's quite a successful life for a standard that came out in the mid eighties.

      • PaulHoule9 days ago
        TOSLINK is completely successful, but ethernet over fiber is a datacenter thing, not a home or office thing.
        • rasz9 days ago
          TOSLINK was historically heavily suppressed and gimped https://www.realhd-audio.com/?p=1990

          Same reason Sony pretty much killed MDs in the crib by not allowing digital write access to first two gens.

  • martinmunk10 days ago
    I did basically this exact same thing at work a few years ago.

    For time-correlating audio measurements around the office buildings, I needed an analog reference signal in sync.

    So I drew up a PCB design with a TOSLINK in/out connector, a connector for an SFP module, and just an LVDS driver in between. It worked straight away (more luck than skill). I could then re-use network fibers already run around the basement, convert the signal back to analog in the MDF rooms of each building, and run the analog signal up to the 3rd floor through existing RJ45 cables.

  • khaki5410 days ago
    I love when people do random stuff like this. I couldn't even suss out his reasoning for taking this project on. Normally there is at least a notional but absurd use case. Cool project though, and I'm sure he had fun.
  • Aloha6 days ago
    I never would have considered using an SFP to transmit arbitrary signals over a mux; this is fun and kinda crazy.
  • m46310 days ago
    I thought I read somewhere that someone had somehow jammed a fiber cable into a toslink/spdif port (doing all this optically, without an SFP)

    can't seem to find the article.

    (janky in comparison to this article, which is amazing!)

  • rcarmo10 days ago
    “Knowing this stuff means that it is possible to build bigger, better, more horrifying solutions/workarounds to problems.”

    Hear hear. Great read!

  • blt10 days ago
    TOSLink was kind of a silly idea because digital electrical signals would also prevent ground loops. The key is digital vs. analog, not optical vs. electrical.
    • ielillo10 days ago
      Ground loops come from a ground mismatch between two electrically connected devices. When you use an optical link, you isolate those two devices: there is no common ground, so the hum goes away. Same if you connect a battery-powered device to a grounded device.
  • omer911 days ago
    Light travels 300.000km/h, not 200.000km/h. Or did I overlook something?
    • halestock11 days ago
      It’s about 200,000km/h when traveling through fiber optic cable.
      • rayhaanj10 days ago
        I think you meant kilometres per second, not per hour.
    • crote10 days ago
      Very simplified: the speed of light isn't constant. The well-known 299.792.458 m/s constant is the speed of light in vacuum - and glass isn't a vacuum. Light goes significantly slower in a lot of mediums, including glass, and it's why things like lenses are possible.
      • somat10 days ago
        It is also why high-speed trading firms invest in microwave radio links: the speed of light through air is enough faster than the speed of light through glass that they feel this gives them a trading edge.

        Honestly, gaming the system this hard really worries me. A lot of our economic activity is tied up in these trading systems (the stock market), and I can see something going wrong far faster than our ability to fix it.

        • Vecr9 days ago
          Flash crash. It's really hard to put in fair minimum times on things. A basic delay isn't really any better.
    • formerly_proven10 days ago
      Speed of light in a medium is c/index of refraction, which is about 1.5 for every glass and highly transparent plastic.
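      Plugging that in reproduces the ~200,000 km/s figure from the thread (a quick back-of-the-envelope check; n = 1.5 is the rough value quoted above):

```python
# Quick check of the thread's numbers: speed of light in fiber and the
# resulting latency. n = 1.5 is the rough refractive index quoted above.
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s

n_glass = 1.5
v = C_VACUUM_KM_S / n_glass          # ~199,862 km/s: the "200.000 km/s" figure
delay_us_per_km = 1e6 / v            # ~5 microseconds of latency per km

print(f"speed in fiber: {v:,.0f} km/s")
print(f"one-way delay:  {delay_us_per_km:.2f} us/km "
      f"({delay_us_per_km * 160:.0f} us over a 160 km run)")
```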
  • no_identd10 days ago
    …now complete the circle, and run a 56k V.92(*) link over it. 8)

    (* important, cuz despite claims to the contrary V.90 ain't at the Shannon limit, but V.92 is — kind of. See https://news.ycombinator.com/item?id=4344349 )

    • no_identd10 days ago
      Follow up, quoting from the article:

      >It is tempting to attach a “dialup” modem to both sides, this would probably create the greatest modern day waste of a 100 GHz optical channel, given that it gives a final output bandwidth of ~40 kbit/s, and I assume this would probably confuse an intelligence agency if they were tapping the line.

      Regardless of the fact that 48 kbps seems more likely, I'd really like to know the noise floor & SNR of that link
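      Without knowing the link's real SNR you can still bracket the scale of the waste with the Shannon formula C = B·log2(1 + SNR). A sketch over a few assumed SNR values (all illustrative placeholders):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

if __name__ == "__main__":
    modem_bps = 48_000           # optimistic V.92 downstream rate
    channel_hz = 100e9           # the ~100 GHz optical channel from the article
    for snr_db in (10, 20, 30):  # assumed values; the real SNR is unknown
        cap = shannon_capacity_bps(channel_hz, snr_db)
        print(f"SNR {snr_db:2d} dB: capacity ~ {cap / 1e12:.2f} Tbit/s, "
              f"modem uses a fraction {modem_bps / cap:.1e} of it")
```

      Even at a modest 10 dB SNR the channel supports hundreds of gigabits per second, so the modem uses well under a millionth of the available capacity.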