Digital live audio mixing is taking over, but it has one flaw compared to analog: latency. Humans adjust pretty easily to performing an action and hearing a delayed response; that's natural in our daily lives (think of it almost like echolocation). It's a bit like standing farther from a guitar amplifier (sound takes roughly 1 ms to travel a foot). Singers have it the worst, though: there is essentially zero latency from their own voice to their ear canal, so monitor systems try to stay analog as much as possible.
For digital audio links, every time you join them end-to-end or decode them, a bit of latency gets added.
There are a few audio interconnects that run on Ethernet's physical layer (OSI Layer 1):
* AES50 is standardized; think of it as the 100Base-T of digital live audio. It's synchronously clocked with predictable latency, roughly 62 µs per link. Pretty nice. Cat5e cables are dirt cheap and musicians are as destructive as feral cats, so it's a pretty good solution. Max length is 100 meters.
* Dante is also popular, but it actually relies on IP (Layer 3), so latency is variable. Typical values are 1-10 ms. Max length is pretty much unlimited, with a lot of asterisks.
FTA: 11 µs is _unbelievably good_ digital latency, and combined with near-unlimited length it's actually a pretty good value proposition for live audio. There may be a niche demand for a product like this: slap in some SFP adapters and transmit a channel of digital audio over whatever medium you like.
So yes: for monitoring, or for linking two far-away places with near-zero-latency audio, but not for connecting speaker stacks in a venue :)
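To put the rough numbers above side by side, here's a back-of-the-envelope sketch (these are the approximate figures quoted here, not vendor specs):

```python
# Rough latency budget: acoustic distance vs. digital hops.
# Figures are the approximate ones quoted above (~62 us per AES50 hop,
# ~1-10 ms for a Dante network), not vendor-guaranteed numbers.

SPEED_OF_SOUND_M_PER_S = 343.0          # in air at ~20 degrees C

def acoustic_ms(distance_m: float) -> float:
    """Time for sound to travel distance_m through air, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

def aes50_ms(hops: int, per_hop_us: float = 62.0) -> float:
    """Cumulative latency for a chain of synchronously clocked AES50 hops."""
    return hops * per_hop_us / 1000.0

print(f"Guitarist 5 m from the amp:                     {acoustic_ms(5):.2f} ms")
print(f"3 AES50 hops (stage box -> desk -> stage box):  {aes50_ms(3):.3f} ms")
print("Typical Dante path:                             1-10 ms (network-dependent)")
```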
Unlike IP, those were synchronous, circuit-switched systems. You'd first use a signaling protocol (mostly SS7) to establish a call, reserving a particular timeslot on a particular link for it, and you'd then have an opportunity to transmit 8 bits of data on that timeslot 8000 times a second. There was no need for packet headers, as the timeslot you were transmitting on was enough to identify which call the byte belonged to.
Because all data from a single call always took the same path, and everything was very tightly synchronized, there was also no variability in latency.
This basically eliminated any need for buffers, which are the main cause of latency in digital systems.
You still need a buffer at each switching point, because the timeslots on each cable aren't likely to line up. But the buffer for each channel only needs to be two samples deep in the worst case, where the timeslots overlap and you have to send from the buffer while still receiving into it.
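A toy sketch of why no headers are needed (a hypothetical 4-slot frame, not real T1/E1 framing): the byte's position in the frame is the only addressing there is.

```python
# Toy circuit-switched TDM mux/demux: 4 channels, one byte per channel
# per frame, 8000 frames per second. The slot index *is* the address;
# no packet headers needed. (Hypothetical 4-slot frame, not real T1/E1.)

NUM_SLOTS = 4

def mux(frame_samples: list[int]) -> bytes:
    """Interleave one 8-bit sample per channel into a single frame."""
    assert len(frame_samples) == NUM_SLOTS
    return bytes(frame_samples)

def demux(frame: bytes, slot: int) -> int:
    """Recover the sample for a given call from its reserved slot."""
    return frame[slot]

frame = mux([0x12, 0x34, 0x56, 0x78])   # one frame = 125 us of audio
print(hex(demux(frame, 2)))              # call on slot 2 -> 0x56
```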
Given the timeframe when T1/E1 were developed, a more accurate perspective is not that buffers were eliminated, it's that they were never created.
Yes, 2G has fixed time slots, but a slot is used for a lot longer than a single (half?) sample.
It needs to send 8 kHz audio at much lower bitrates (~13 kbit/s instead of 64 kbit/s), and you can't do that with raw PCM if you want decent quality. That means you need lossy compression and a codec, and those need far more than a single sample to work well.
CDMA was similar; I'm not sure exactly what its frame size was, but it was somewhere in the same vicinity.
It used to be that you could get a PRI (ISDN/T1) phone line for this kind of work, but I think it's pretty doubtful you could keep it low-latency PRI end-to-end with modern telephony. You'd have to be OK with a single channel of 8-bit, 8 kHz µ-law, but that's not that bad; you could probably orchestrate multiple calls for multiple channels. Someone along the way is going to convert it to SIP with 20 ms packets, and there goes your latency.
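(For reference on the "not that bad" part: 8-bit µ-law is just logarithmic companding of the linear samples. A sketch of the continuous µ-law curve follows; the real G.711 codec uses a segmented 8-bit approximation of it.)

```python
import math

MU = 255.0  # mu-law constant used by G.711

def mulaw_compress(sample: float) -> float:
    """Continuous mu-law curve: map a linear sample in [-1, 1] to [-1, 1].
    G.711 quantizes this to 8 bits; the real codec approximates the curve
    with piecewise-linear segments."""
    sign = -1.0 if sample < 0 else 1.0
    return sign * math.log1p(MU * abs(sample)) / math.log1p(MU)

# Quiet signals keep far more resolution than straight 8-bit linear PCM would give:
for x in (0.01, 0.1, 0.5, 1.0):
    print(f"{x:4.2f} -> {mulaw_compress(x):.3f}")
```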
Also, the beam is a bit divergent; even if it vibrates, it could still cover the sensor.
Loss of signal -> silence -> no vibrations -> signal resumption.
Also, if you bounce the signal off a mirror on the wall like DIY Perks did, then walls vibrating even a little bit will be an issue if the beam is narrow enough.
I like the example audio file they have for the article, because the QSO ends with "73, bye bye" and that bounces off the moon and is received by the sender a little bit later. The moon is far away!
(I also really enjoy the distortion you get on SSB signals by tuning the "carrier" frequency slightly wrong; more likely in this case because the moon changes the frequency of the reflected signal due to the Doppler effect. It also happens with satellite comms, though you might not notice if you're using FM rather than SSB.)
I don't know what you're on about. You can go to your local Walmart and get IR headphones off the shelf that work exactly this way.
Also, a general solution to "send a low-bandwidth signal over an SFP" is to use FM or phase modulation to carry the signal on top of a carrier that is fast enough for the retimers in question. Buffer and retimer chips will not respect amplitude in a modulation scheme, but they will largely preserve frequency and phase.
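A sketch of the idea with binary FSK: the data lives entirely in how often the line toggles, so an amplitude-agnostic retimer that just regenerates edges should leave it intact. (The symbol length and tone periods below are made-up illustrative numbers; whether a given retimer actually passes them depends on its CDR bandwidth.)

```python
# Sketch: binary FSK expressed as a fast bit pattern for an SFP retimer.
# Information is carried in the toggle rate (frequency), not amplitude.
# Line rate and tone periods are made-up, for illustration only.

def fsk_bits(data_bits: str, period0: int = 8, period1: int = 4,
             symbol_len: int = 64) -> str:
    """Emit a high-rate square wave whose half-period encodes each data bit."""
    out = []
    for bit in data_bits:
        half = period1 if bit == "1" else period0
        level, count = "1", 0
        for _ in range(symbol_len):
            out.append(level)
            count += 1
            if count == half:            # toggle every half-period
                level = "0" if level == "1" else "1"
                count = 0
    return "".join(out)

line = fsk_bits("10")
print(line[:64])   # fast toggling   -> data bit 1
print(line[64:])   # slower toggling -> data bit 0
```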
Also greetings, again (I believe?) from a fellow assembly username HNer!
https://en.m.wikipedia.org/wiki/Audio_over_Ethernet
This is what most professional places have
It never really happened, and each company came up with its own bespoke solution, seemingly with a "mobile phone-first" philosophy.
The protocol for the video is GigE Vision. It's how many fancy broadcast, CCD security, and home theater/office setups work.
A while ago I looked into this for a similar-ish hobby project, and the main dealbreaker seemed to be the mandatory AC coupling capacitors: they are intended to block DC, so a signal that is substantially slower than intended is essentially fighting a high-pass filter. This is also why there are special AV SFP transceivers: unlike Ethernet, SDI suffers from "pathological patterns" consisting of extremely long runs of 1s or 0s, which can cause "DC wander" [0]. SDI transceivers need to take this (albeit extremely unlikely) possibility into account, or risk losing signal lock.
For this reason I pretty much gave up on the idea of reliably going sub-100Mbps on cheap and easily available 1G / 10G SFP modules. Seeing it (mostly) work for TOSLINK at 3Mbps is beyond my wildest expectations - I bet the LVDS driver's high slew rate is doing quite a bit of work here too.
A thought experiment to clarify it: let's say you are hoisting a bucket with a DC motor. You're feeding it with a 50 Hz AC power source. It's obviously not going anywhere, because it's just oscillating rapidly. You'd need the motor to run in a single direction for a few minutes to actually lift the bucket. Now drive it with a 0.0000001 Hz AC power source (which starts at peak voltage). The motor is going to reverse after about 29 days, but does that actually matter? For any practical purpose, how is it different from a DC power source?
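The same intuition in a few lines of code, back in coupling-capacitor terms: a first-order high-pass (corner frequency made up for illustration) passes a fast signal almost untouched and crushes a very slow one, which is the DC-wander problem in miniature.

```python
import math

# Single-pole high-pass, like an AC coupling network (illustrative corner
# frequency): a signal far below the corner "droops" toward zero (DC wander),
# while a fast signal passes almost untouched.

def highpass_response(freq_hz: float, cutoff_hz: float) -> float:
    """Magnitude response of a first-order high-pass at freq_hz."""
    ratio = freq_hz / cutoff_hz
    return ratio / math.sqrt(1.0 + ratio * ratio)

CUTOFF = 1e5   # 100 kHz coupling corner, made up for illustration
for f in (1e3, 1e5, 3e6, 1e9):
    print(f"{f:>12.0f} Hz -> passes {highpass_response(f, CUTOFF) * 100:5.1f}% of amplitude")
```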
I also think that https://en.wikipedia.org/wiki/Line_code is the term you're looking for.
> I also think that https://en.wikipedia.org/wiki/Line_code is the term you're looking for.
In 10G Ethernet PHYs it's a multiplicative (self-synchronizing) scrambler [1] rather than a traditional line code. From what I remember, it's statistically fine and plays more easily into LDPC correction.
[1] https://www.iol.unh.edu/sites/default/files/knowledgebase/10...
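For the curious, here's a toy per-bit version of a multiplicative scrambler, using the x^58 + x^39 + 1 polynomial that 10GBASE-R's 64b/66b coding uses (if I remember the clause right); it's just to show the self-synchronizing property, not to be fast.

```python
# Toy multiplicative (self-synchronizing) scrambler, one bit at a time.
# The scrambler feeds back its own output; the descrambler feeds in the
# received bits, so it locks on by itself after 58 line bits.

MASK = (1 << 58) - 1

class Scrambler:
    def __init__(self, state: int = MASK):
        self.state = state                      # 58-bit shift register

    def scramble_bit(self, bit: int) -> int:
        out = bit ^ ((self.state >> 38) & 1) ^ ((self.state >> 57) & 1)
        self.state = ((self.state << 1) | out) & MASK   # shift in own output
        return out

class Descrambler:
    def __init__(self, state: int = 0):
        self.state = state                      # self-synchronizes from the line

    def descramble_bit(self, bit: int) -> int:
        out = bit ^ ((self.state >> 38) & 1) ^ ((self.state >> 57) & 1)
        self.state = ((self.state << 1) | bit) & MASK   # shift in received bit
        return out

tx, rx = Scrambler(), Descrambler()
data = [1, 0, 1, 1, 0, 0, 1, 0] * 16
line = [tx.scramble_bit(b) for b in data]
recovered = [rx.descramble_bit(b) for b in line]
print(recovered[58:] == data[58:])   # True once the descrambler has synced
```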
But presumably an optical SFP doesn't need to block DC, because you can't make a ground loop over optical fibre?
However, there are directional indicators that just clamp onto the middle of a fiber. They bend it a little and sample the light that leaks out of the bend, without interrupting payload traffic. The first one I used back in the day was an Exfo, but there are tons of 'em now.
As far as I know, these are receive-only, though physics doesn't seem to prohibit launching light into the fiber this way; it would just be an extremely inefficient process.
There isn't enough light leaking out to reconstruct the whole high-bit-rate signal (as far as I know), but there's enough to tell whether the light is flowing one way or the other, or both. And there's enough to tell whether it's modulated with a low frequency signal -- most optical test sets can generate a simple "tone", typically 270 Hz, 330 Hz, 1 kHz, or 2 kHz, and the clamp testers can tell if and which tone is present.
Found an example here. https://www.fiberinstrumentsales.com/fis-singlemode-multimod...
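Telling which test tone is riding on the leaked light is basically a single-bin DFT; here's a rough sketch using the Goertzel algorithm on a synthetic "leaked light" signal (sample rate and amplitudes are made up).

```python
import math

# Goertzel single-bin detector: enough to tell which low-frequency test tone
# (270 Hz, 330 Hz, 1 kHz, 2 kHz) modulates the leaked light. The "photodiode
# samples" below are synthetic, for illustration only.

def goertzel_power(samples: list[float], fs: float, freq: float) -> float:
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

FS = 8000.0
t = [n / FS for n in range(800)]                                  # 100 ms of samples
signal = [0.5 * math.sin(2 * math.pi * 1000.0 * x) for x in t]    # 1 kHz tone present

for tone in (270.0, 330.0, 1000.0, 2000.0):
    print(f"{tone:6.0f} Hz: {goertzel_power(signal, FS, tone):12.1f}")
```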
You can't really "get into" an optical fiber mid-run without splicing. Splicing isn't really that hard (I've done it! Fusion splicers are little robotic wonders. Most of the work is in the prep, not the splice itself.)
When I was connecting my surround-sound receiver to my PC, I was bummed that the S/PDIF standard was never improved to support 5.1 or 7.1 uncompressed surround sound. 5.1 DTS compression is the best it can do (due to the ~1.5 Mbit/s bandwidth), but PC support is rather limited. I gave up, and I've been using it over HDMI for 10 years. Running it through my video card/drivers has introduced (bearable) complexity, but I wonder why receivers to this day can't connect to PCs over USB instead. (Yes, most receivers have USB ports, but those are for playing MP3s off a flash drive. A PC isn't a flash drive.)
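Just to put numbers on why uncompressed 5.1 doesn't fit (assuming 48 kHz sampling; the ~1.5 Mbit/s is the compressed-bitstream budget usually quoted for the interface):

```python
# Why uncompressed 5.1 doesn't fit through S/PDIF: payload math.
# Assumes 48 kHz sampling; ~1.5 Mbit/s is the usual compressed (DTS/AC-3)
# bitstream budget quoted for the interface.

SPDIF_BITSTREAM_BUDGET_MBPS = 1.5

def pcm_mbps(channels: int, sample_rate_hz: int, bits: int) -> float:
    return channels * sample_rate_hz * bits / 1e6

for bits in (16, 24):
    need = pcm_mbps(6, 48_000, bits)
    print(f"5.1 @ 48 kHz / {bits}-bit PCM needs {need:.1f} Mbit/s "
          f"(budget ~{SPDIF_BITSTREAM_BUDGET_MBPS} Mbit/s)")
```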
So there never was any compelling reason to improve it like that. They even removed the TOSLINK output from many devices nowadays, even when they didn't have to.
That could be a kind of cool app that would allow you to present a folder on your PC as a media device. However that would then require a dreaded USB-A to USB-A type of cable <shudder>
Edit: I didn't notice before, but USB OTG is on the front page right now https://news.ycombinator.com/item?id=42585167
This is called gadget mode. I don't know which PCs can do it, but the Raspberry Pi can.
There just aren't Toslink horror stories floating around the popular internet (SPDIF is another WTF-a-75Ω-RCA-cable? story). Toslink is a technology that just works (and the normal limit is a generous 10 m).
DIYing it is probably too painful to be doable. You won't be able to source any kind of protocol translation chip, so you'll have to send it essentially raw into quad SFP+ transceivers. Running 4+ fibers instead of the required 2 (or even 1) is very expensive, and any kind of WDM immediately blows up your budget. Unless you're getting the stuff for free from a DC renovation or something, it's just not worth it.
On top of that, you also have to deal with designing a board for extremely fast signals, which is pretty much impossible to debug without spending "very nice car" amounts of money on tooling. People have done it before, but I definitely don't envy them.
[0]: https://www.startech.com/en-us/audio-video-products/st121hd2...
[1]: https://www.blackmagicdesign.com/products/miniconverters/tec...
Probably a box on the source end to manage DDC and strip HDCP.
I think many of those chips are simple off-the-shelf parts. You would probably only need special licenses to decode HDCP.
If you have an FPGA, you could even create valid Ethernet frames and send the data / video stream over any standard switch / media converter as long as you have enough bandwidth and no packet loss. (10G would be enough for FullHD and 25G for 4K if you make it a bit smarter and can strip the blanking interval.)
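Back-of-the-envelope numbers behind that parenthetical, assuming 4:4:4 RGB and the common CTA timings (Ethernet framing and FEC overhead ignored):

```python
# Rough uncompressed video bandwidth, with and without the blanking interval.
# Assumes 4:4:4 RGB; Ethernet/IP framing overhead and FEC are ignored.

def gbps(h_total: int, v_total: int, fps: int, bits_per_pixel: int) -> float:
    return h_total * v_total * fps * bits_per_pixel / 1e9

# (h_active, v_active, h_total, v_total) per common CTA timings
MODES = {
    "1080p60": (1920, 1080, 2200, 1125),
    "2160p60": (3840, 2160, 4400, 2250),
}

for name, (ha, va, ht, vt) in MODES.items():
    for bpc in (8, 10):
        bpp = bpc * 3
        full = gbps(ht, vt, 60, bpp)
        active = gbps(ha, va, 60, bpp)
        print(f"{name} {bpc}bpc: {full:5.2f} Gbit/s with blanking, "
              f"{active:5.2f} Gbit/s active pixels only")
```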
This is called an FPGA.
The theory being that Ethernet is such a well-developed, easy-to-source, jelly-bean-common part that this would trump any gains specialized transports might otherwise have.
But this is probably just my inner network engineer being disdainful of unfamiliar transport layers.
Failing that, you're probably doing SDI over your own lambda.
"Classic" DVI-derived HDMI would probably be trickier because of variable clock speeds and additional data but modern HDMI 2.1 is pretty similar to DisplayPort in that it uses four lanes at fixed rates and sends data as packets over those.
I would love to be able to use standard widely available fiber patch cables for long distance video runs rather than needing proprietary cables only offered in fixed lengths and equipped with enormous connectors that are not friendly to conduit.
Also, these days data rates are getting high enough that even normal lengths are problematic: DisplayPort just recently announced that 3-meter cables will need active components for the full 80 Gbit/s mode, which means that a computer on the floor connecting to a monitor on a standing desk is not guaranteed to work with passive cables. HDMI also recently announced version 2.2, with a bump from 48 to 96 Gbit/s, so they'll presumably be in the same boat.
For most long-haul links people still compress: good old H.264 or H.265 with latencies in the 200-500 ms range (plus network propagation), or J2K/JXS and NDI, which are more like 50-150 ms. Ultimately 200 Mbit/s of H.265 is far cheaper to transmit than 10-ish Gbit/s of SMPTE 2110, and in many cases the extra 500 ms doesn't matter.
He could have bought two spools totaling 200 km, laid them on his bench, and called it a day, instead of driving around data centers only to achieve 160 km :-) But that's just the lazy side of me talking. Heck, he could even return the spools for a refund when he's done :-)
I feel that if you oversampled the S/PDIF signal and line-coded it to remove the DC bias, then did the opposite on the receive end, it would work. That is maybe too much transformation to be interesting, however. So I wonder what happens if you sample the signal at the 10G Ethernet line rate like a 1-bit ADC does, transmit that, and smooth the too-high-frequency result with a capacitor?
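A sketch of the first idea, using a simple Manchester code (DC-free by construction) on an oversampled stream; the oversampling factor and rates are illustrative and ignore real S/PDIF framing and 10G timing:

```python
# Sketch: oversample a slow bitstream, then Manchester-encode it so the
# line has no DC bias (every bit becomes a high-low or low-high pair).
# Oversampling factor is made up; real S/PDIF framing is ignored.

OVERSAMPLE = 4   # each source bit repeated 4x before line coding

def manchester_encode(bits: list[int]) -> list[int]:
    out = []
    for b in bits:
        for _ in range(OVERSAMPLE):
            out += [1, 0] if b else [0, 1]   # polarity convention is arbitrary here
    return out

def manchester_decode(line: list[int]) -> list[int]:
    bits = []
    for i in range(0, len(line), 2 * OVERSAMPLE):
        pair = line[i:i + 2]
        bits.append(1 if pair == [1, 0] else 0)
    return bits

src = [1, 0, 0, 1, 1, 1, 0]
line = manchester_encode(src)
assert manchester_decode(line) == src
print(f"{len(line)} line bits, mean level = {sum(line) / len(line)}")  # 0.5 -> no DC bias
```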
I am very worried that I may end up trying this ;)
I recently got a cable to hook up a Meta Quest 3 to a PC for PCVR. My understanding is that it works like a high-spec USB 3 cable but has an optical fiber in it for the data, so it can be really long.
Also, the Oculus works fine over the "charging" USB-C cable + a USB-C to USB-A adapter + a classic copper USB 3.0 extender adding another 1.8 meters.
Is TOSLINK that unsuccessful? I was already using TOSLINK a very long time ago (in the nineties) and I'm still using TOSLINK today. Enter any audio store and they have TOSLINK cables.
It's very old by now, though, and I take it there's better stuff, but TOSLINK still does the job.
My "music" path doesn't use TOSLINK:
source, eg QOBUZ for (lossless) streaming -> ethernet -> integrated amp (which has a DAC) -> speakers
But my "movie" (no home theater anymore atm) path uses TOSLINK: TV -> TOSLINK -> same integrated amp -> speakers
For whatever reason, that amp is quite new and has all the bells and whistles (ARC and network streaming, for example), yet it still comes with not one but two (!) TOSLINK inputs. I'd say that's quite a successful life for a standard that came out in the mid eighties.
Same reason Sony pretty much killed MDs in the crib by not allowing digital write access on the first two gens.
For time-correlating audio measurements around the office buildings, I needed an analog reference signal in sync.
So I drew up a PCB design with a TOSLINK in/out connector, a connector for an SFP module, and just an LVDS driver in between. It worked straight away (more luck than skill). I could then re-use network fibers already run around the basement, convert to analog in the MDF room of each building, and run the analog signal up to the 3rd floor through existing RJ45 cables.
Can't seem to find the article.
(janky in comparison to this article, which is amazing!)
Hear hear. Great read!
Honestly, gaming the system this hard really worries me. A lot of our economic activity is tied up in these trading systems (the stock market), and I can see something going wrong far faster than our ability to fix it.
(* important, cuz despite claims to the contrary V.90 ain't at the Shannon limit, but V.92 is — kind of. See https://news.ycombinator.com/item?id=4344349 )
>It is tempting to attach a “dialup” modem to both sides, this would probably create the greatest modern day waste of a 100 GHz optical channel, given that it gives a final output bandwidth of ~40 kbit/s, and I assume this would probably confuse an intelligence agency if they were tapping the line.
Leaving aside the fact that 48 kbit/s seems more likely, I'd really like to know the noise floor and SNR of that link.
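For scale, Shannon's C = B * log2(1 + SNR) says 48 kbit/s needs a quite modest SNR in a plain voice-bandwidth channel, and an absurdly negative one across 100 GHz of optical bandwidth. A quick sketch (the bandwidths are just the nominal figures mentioned in the thread):

```python
import math

# Shannon: C = B * log2(1 + SNR)  ->  required SNR = 2**(C/B) - 1

def snr_db_needed(bitrate_bps: float, bandwidth_hz: float) -> float:
    snr = 2.0 ** (bitrate_bps / bandwidth_hz) - 1.0
    return 10.0 * math.log10(snr)

print(f"48 kbit/s in a 3.1 kHz voice channel: {snr_db_needed(48e3, 3.1e3):6.1f} dB SNR")
print(f"48 kbit/s across 100 GHz of optics:   {snr_db_needed(48e3, 100e9):6.1f} dB SNR")
```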