If we ever get all-optical chips, I'd expect a roughly uniform (O(1)) cost per die feature, which could mean enormous memory capacities (TB range) at better-than-SRAM speeds. Then the determining factor for memory access latency would not be memory architecture, but distance from the register (or register analog).
This would create a cool new optimization problem for chip design (hopefully mostly automated): optimize the 2D/3D layout of the circuit digraph to minimize mean waveguide length, weighted by how often data passes through each waveguide.
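A minimal sketch of that objective, assuming straight-line distance as a proxy for routed waveguide length; `positions` and `traffic` are hypothetical inputs, not part of any real tool:

```python
import numpy as np

def weighted_mean_waveguide_length(positions, traffic):
    """positions: (n, d) array of component coordinates (d = 2 or 3).
    traffic: (n, n) array; traffic[i, j] = how often data flows i -> j."""
    # Pairwise Euclidean distances between all placed components.
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Traffic-weighted mean link length: sum(w * d) / sum(w).
    return (traffic * dists).sum() / traffic.sum()

# Toy usage: 4 components placed on a ~1 mm die, with one hot path (0 <-> 3).
rng = np.random.default_rng(0)
positions = rng.uniform(0, 1e-3, size=(4, 2))   # coordinates in meters
traffic = np.ones((4, 4))
traffic[0, 3] = traffic[3, 0] = 100.0
print(weighted_mean_waveguide_length(positions, traffic))
```

An optimizer (simulated annealing, force-directed placement, etc.) would then search over `positions` to minimize this value.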
Hopefully in the future all our caches will use the same fast memory architecture, just physically closer to where computation is occurring. Assuming a 1mm round trip (including internal waveguides/wavegates) for the fastest cache, memory accesses could occur at ~300GHz.
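Back-of-envelope check of that figure, assuming propagation at vacuum c and ignoring gate/modulator delays (a real waveguide has a group index > 1, which would slow this down):

```python
c = 3.0e8           # speed of light in vacuum, m/s
round_trip = 1e-3   # 1 mm round trip, m
latency = round_trip / c      # ~3.3e-12 s
access_rate = 1 / latency     # ~3e11 Hz = 300 GHz
print(f"{latency * 1e12:.2f} ps per access, ~{access_rate / 1e9:.0f} GHz")
```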
Maybe this optimization will encourage interesting patterns in die design, like radial or fractal dies which minimize mean distance to important memory. One might also consider a 3D approach to fill a volume around the core with memory, allowing more of the memory to be close.
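Rough illustration of why 3D helps: memory reachable within a straight-line distance r of the core grows as r² on a plane but r³ in a volume. The densities below are made-up placeholders, not real figures:

```python
import math

CELLS_PER_MM2 = 5e7   # hypothetical planar memory density
CELLS_PER_MM3 = 1e11  # hypothetical volumetric memory density

for r in (0.1, 0.5, 1.0):  # radius around the core, mm
    in_2d = CELLS_PER_MM2 * math.pi * r**2            # disc around the core
    in_3d = CELLS_PER_MM3 * (4 / 3) * math.pi * r**3  # ball around the core
    print(f"r = {r} mm: ~{in_2d:.1e} cells in 2D, ~{in_3d:.1e} cells in 3D")
```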
Chip design could move away from a centralized approach, in which we have local registers & a pipeline, toward a decentralized approach, in which computation can occur in parallel all over the chip semi-independently, with ultrafast access to memory by virtue of physical locality.
Another interesting thought: what if a lot of computations could occur in-flight, bit-flips and the like? Also, light gives you a lot of neat tricks, like encoding information in both the polarization and frequency of a single photon. Of course, you're ultimately limited by uncertainty.
Design of these devices should not be left to humans. Automated, optimized synthesis of OICs (optical integrated circuits) would be such a fun problem to work on. A modular approach could work well: start with optimized building blocks, compose them naively, then eliminate redundancy and lay out the whole circuit with a global perspective.
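A toy sketch of that compose-then-eliminate flow, using a hypothetical representation (a circuit is just a set of components plus a set of waveguide edges); all names here are illustrative:

```python
def compose_naively(blocks):
    """Union pre-optimized blocks without any cross-block optimization."""
    components, waveguides = set(), set()
    for comps, edges in blocks:
        components |= comps
        waveguides |= edges
    return components, waveguides

def eliminate(components, waveguides):
    """Drop components that no waveguide ever touches (dead logic)."""
    used = {c for edge in waveguides for c in edge}
    return components & used, waveguides

# Toy usage: two "blocks" sharing a component, plus one orphan.
adder = ({"split1", "mzi1", "det1"}, {("split1", "mzi1"), ("mzi1", "det1")})
mux   = ({"mzi1", "ring1", "orphan"}, {("mzi1", "ring1")})
print(eliminate(*compose_naively([adder, mux])))
# A final global layout pass would then place the surviving components,
# e.g. by minimizing the traffic-weighted waveguide length sketched earlier.
```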
I can't think of a way for classical computers to improve beyond this point: computation (on a single core) will be limited by the speed of light.
I might be missing something big (other than the basic materials/CMF issues) that precludes this kind of technology. I'd love to hear from @bofh453 @Plinz @whitequark @difluorine for corrections or predictions 😄
Well actually, one seemingly-infeasible way they could get faster is if they dynamically reassembled, changing their layout to process optical data passively; I think this constrains the problems you can solve (e.g., nothing requiring long-term state), putting them in a lower computational class.