1. resonators and device-to-device variance: in general it's pretty hard to get these resonant effects to line up with each other from a production POV, especially with large arrays. Silicon photonics has come far, but I don't think it has approached the level of uniformity of electronics. They have demonstrated some degree of electro-optic tunability, which is the traditional solution, but they still need to leverage that for their nonlinear effects too.
2. area and space: the 'minimum' trace size of these planar photonic circuits is still quite large (typically ~200 nm minimum feature size for these waveguides). This is essentially down to the minimum size needed to confine light within a waveguide, which depends on the waveguide's refractive index and the target wavelength. These are all currently integrated in a planar manner, so each channel becomes quite large, especially once you also need a relatively large ring resonator, which in this case is at least ~100 micrometers or so in diameter (see the back-of-envelope sketch after this list).
3. the combination of 1 and 2: high device-to-device variation plus a large planar footprint means these things are expensive and difficult to manufacture, without the miniaturization benefit you would typically get with electronics (at least not yet). The effect appears to be more than the sum of 1 + 2.
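For a rough sense of the scale gap in point 2, here's a back-of-envelope comparison. The ring diameter is from above; the transistor area is my own order-of-magnitude assumption, not a datasheet number:

    # Back-of-envelope: planar footprint of one photonic channel vs. transistors.
    # Assumptions: ~100 um ring diameter (from above); a modern logic
    # transistor occupies on the order of ~0.01 um^2 (order-of-magnitude guess).
    ring_diameter_um = 100.0
    photonic_channel_area_um2 = ring_diameter_um ** 2   # ~1e4 um^2 per channel
    transistor_area_um2 = 0.01

    # roughly how many transistors fit in one ring resonator's footprint
    print(photonic_channel_area_um2 / transistor_area_um2)   # ~1e6

So a single ring resonator occupies the footprint of very roughly a million transistors, before you even account for the waveguides routing into it.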
https://www.sony-semicon.com/en/technology/industry/evs.html
Deep physical neural networks trained with backpropagation (2022)
> This process works by sending a tiny bit of the optical signal to a photodiode that measures how much optical power is there.
It seems that the benefit of the approach in general is to keep compute in optics, because crossing the optical-to-electrical boundary takes too long. But right in the middle of the process they describe is exactly that boundary transition.
How is this so different from the CMOS/CCD boundary? Is a photodiode that much quicker to activate that it doesn't matter?
Edit: Turbo encabulator description from the paper linked at the bottom:
>To realize a programmable coherent optical activation function, we developed a resonant electro-optical nonlinearity (Fig. 1(iii)). This device directs a fraction of the incident optical power |b|² into a photodiode by programming the phase shift θ in an MZI. The photodiode is electrically connected to a p–n-doped resonant microring modulator, and the resultant photocurrent (or photovoltage) detunes the resonance by either injecting (or depleting) carriers from the waveguide.
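To unpack that: the MZI taps a programmable fraction of the input power off to the photodiode, the photocurrent shifts the ring's resonance, and the shifted resonance gates how much of the remaining light passes. A toy numeric model of that loop (my own sketch; the sin² tap convention, unit responsivity, critical coupling, and Lorentzian lineshape are all assumptions, only the 75 μA-per-linewidth figure comes from the paper, quoted below):

    import numpy as np

    def nofu_intensity_out(P_in_W, theta, responsivity_A_per_W=1.0,
                           uA_per_linewidth=75.0):
        tap = np.sin(theta / 2.0) ** 2          # fraction tapped to the photodiode (assumed MZI convention)
        I_ph_uA = responsivity_A_per_W * tap * P_in_W * 1e6
        detune = I_ph_uA / uA_per_linewidth     # detuning in units of the ring linewidth
        # all-pass ring near critical coupling: Lorentzian dip in transmission
        T_ring = 1.0 - 1.0 / (1.0 + (2.0 * detune) ** 2)
        return (1.0 - tap) * P_in_W * T_ring    # optical power left in the signal path

    # Low power sits in the resonance dip; high power self-detunes the ring
    # and transmits -- a saturating, ReLU-ish activation curve.
    for P_mW in [0.01, 0.1, 1.0, 10.0]:
        print(P_mW, "mW in ->", nofu_intensity_out(P_mW * 1e-3, theta=np.pi / 4) * 1e3, "mW out")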
A couple of notes on the observed latency appear later in the paper:
>We experimentally characterized the computational latency of the NOFU in this mode, finding that the response time for carrier injection was shorter than 100 ps and that 75 μA of photocurrent was sufficient to detune the resonator by a linewidth, corresponding to a static power dissipation of 60 μW.
>As our architecture computes entirely in the optical domain and is integrated onto a single photonic circuit, inference latency is limited only by the optical time of flight through the chip
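For scale, "optical time of flight through the chip" works out to tens of picoseconds, the same ballpark as the <100 ps NOFU response above. A rough estimate, assuming a ~5 mm on-chip path and a group index of ~4 (typical for silicon waveguides; both numbers are my assumptions):

    # Rough optical time of flight through a photonic chip.
    c = 3.0e8          # speed of light in vacuum, m/s
    path_m = 5e-3      # assumed ~5 mm on-chip optical path
    n_group = 4.0      # assumed group index for a Si waveguide
    print(path_m * n_group / c * 1e12, 'ps')   # ~67 ps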
Are we still in the "glass era"?
Is it correct then to say that modulating the other properties of light increases the overall information density/capacity of light as a medium? Whereas with electricity we only have two dimensions: amplitude & frequency?
the actual optical "packet" of information is traveling slower than the electric "packet". The key here is that the electric packet can only carry a few bits, while the photonic packet in theory has a much larger bandwidth.
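As a rough illustration of that bandwidth gap (the channel count, symbol rate, and electrical baseline are all illustrative assumptions, just to show the arithmetic):

    # Rough aggregate capacity of one waveguide/fiber vs. one electrical trace.
    wdm_channels = 80          # dense WDM channels sharing one waveguide
    baud_per_channel = 50e9    # symbols/s per channel
    bits_per_symbol = 4        # e.g. 16-QAM using amplitude + phase
    polarizations = 2          # polarization multiplexing

    optical_bps = wdm_channels * baud_per_channel * bits_per_symbol * polarizations
    electrical_bps = 10e9      # one fast serial electrical lane, ~10 Gb/s

    print(optical_bps / 1e12, 'Tb/s optical vs', electrical_bps / 1e9, 'Gb/s electrical')
    # ~32 Tb/s vs 10 Gb/s on a single physical channel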