Optical computing startup Lightelligence has demonstrated a silicon photonics accelerator running the Ising problem more than 100 times faster than a typical GPU setup.
Lightelligence’s photonic arithmetic computing engine, known as Pace, is an integrated optical computing system consisting of about 12,000 photonic devices running at 1 GHz. That represents about a 1 million-fold speedup versus Lightelligence’s 100-device prototype, Comet, unveiled in 2019. The latest demonstration also marks the first time Lightelligence showed use cases beyond AI acceleration on its hardware.
Pace can run algorithms from the NP-Complete class of problems, which are computationally extremely difficult, many times faster than existing accelerators. While the demonstration does not establish optical superiority for all applications, Pace executed the Ising problem 100 times faster than a typical GPU, and beat a system purpose-built for the Ising problem, Toshiba’s simulated bifurcation machine running on FPGAs, by a factor of 25.
NP-Complete problems have a very large state space and require huge computing resources to solve; no polynomial-time algorithm is known, and the time to solution is believed to grow exponentially with problem size. The class includes the Ising problem, graph max-cut and the traveling salesman problem. In practice, NP-Complete problems arise in bioinformatics, scheduling, circuit design, materials discovery, cryptography and power-grid optimization.
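To make the scale of the difficulty concrete, here is a minimal, illustrative sketch of the Ising problem: given pairwise couplings, find the spin assignment that minimizes the energy. The brute-force search below is not Lightelligence’s algorithm, and the random couplings are invented for the example; it simply shows why the state space explodes, since it doubles with every added spin.

```python
import itertools
import numpy as np

# Toy Ising instance: find spins s_i in {-1, +1} minimizing
# H(s) = -sum_{i<j} J[i][j] * s_i * s_j.
# Brute force works at this size; each added spin doubles the
# 2**n search space, which is what makes large instances so hard.
rng = np.random.default_rng(0)
n = 10
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)                      # keep couplings for i < j only

def energy(spins):
    s = np.array(spins)
    return -float(s @ J @ s)

# Exhaustively check all 2**n spin configurations
best = min(itertools.product([-1, 1], repeat=n), key=energy)
print(best, energy(best))
```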
CEO Yichen Shen told EE Times that Lightelligence decided to demonstrate NP-Complete acceleration since it illustrates the advantages of optical computing.
“The core of our optical compute engine is that it can finish matrix multiplication in a much shorter time period” than a GPU, Shen asserted. A GPU might take many hundreds of clock cycles to complete a 64 × 64 matrix multiplication; Lightelligence claims it can do so in fewer than 10, or about 5 ns. “NP-Complete problems do iterative matrix multiplication many, many times, which enlarges our advantage. With the new technology, we wanted to find a problem that shows the best photonic superiority.”
NP-Complete algorithms are iterative: each matrix multiplication depends on the result of the previous one. Because successive results can stay on-chip, data does not need to shuttle to and from memory between multiplies, which minimizes the bottlenecks imposed by the system’s electronic components.
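The access pattern Shen describes can be sketched with power iteration, a simple stand-in (not Lightelligence’s solver) for any algorithm whose next matrix product consumes the previous one’s output. The 2 × 2 matrix below is chosen so the result is easy to verify; on hardware like Pace, the product on each loop iteration would be the step performed in the optical domain.

```python
import numpy as np

# Each pass depends on the previous result, so intermediate
# vectors never need to round-trip through external memory.
W = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # eigenvalues 3 and 1

x = np.array([1.0, 0.0])
for _ in range(50):            # dependent chain of matrix products
    x = W @ x                  # the linear step an optical engine accelerates
    x /= np.linalg.norm(x)     # normalization keeps the iterate bounded

lam = x @ W @ x                # Rayleigh quotient -> dominant eigenvalue
print(lam)                     # ≈ 3.0
```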
“For bigger commercial use cases, digital electronics and memory read and write will certainly mean the total computing system drags its feet,” Shen said. “We think even with that drag, we will still be able to demonstrate a good enough advantage down the road… maybe not as big as 100x, but at least a few times [faster].”
Lightelligence is also working on photonic technologies for data broadcasts and data interconnect to ease the bottleneck.
Asked whether Lightelligence would pursue NP-Complete acceleration commercially, Shen replied: “With this hardware, we can try to enter this market, but the technology will be used for our product… which will address a broader market, including AI acceleration.”
Optical computing based on silicon photonics promises orders-of-magnitude improvements in computing speed and power efficiency. The technique directs modulated infrared light into silicon “wires” called waveguides, which can be produced using standard CMOS processes. It is a form of analog computing: merging two waveguides effectively adds two signals, while on-chip modulators, which modulate the light’s brightness, effectively multiply a signal by a weight. Together, these elements form optical MAC units. (Read our primer on optical computing here). However, while optical computing is ideal for accelerating linear operations like matrix multiplication, conventional digital electronics are still required for nonlinear operations, memory and control.
Like competitor Lightmatter, Lightelligence uses a silicon photonics version of the Mach-Zehnder interferometer (MZI) as its computing element. However, where Lightmatter uses MEMS to change the physical shape of the waveguide in its MZI, Lightelligence injects charge carriers into the waveguide to change its refractive index, modulating the optical signal passing through it.
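The textbook transfer function of an idealized MZI shows how a phase shift becomes a tunable weight: light is split into two arms, one arm is phase-shifted (via carrier injection in Lightelligence’s case, MEMS in Lightmatter’s), and the arms recombine so that the relative phase sets the output intensity. This is a standard simplified model, not either company’s exact device.

```python
import numpy as np

def mzi_transmission(dphi):
    """Fraction of input power at one output port of an ideal,
    lossless Mach-Zehnder interferometer with relative phase dphi."""
    return np.cos(dphi / 2.0) ** 2

print(mzi_transmission(0.0))       # 1.0 -> fully constructive, weight of 1
print(mzi_transmission(np.pi))     # ~0.0 -> fully destructive, weight of 0
```

Sweeping `dphi` continuously between 0 and pi yields any weight in between, which is what makes the MZI usable as an analog multiplier.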
As with other optical designs, Shen said Lightelligence’s technology has the potential to process multiple inputs simultaneously using different wavelengths or polarizations of light, such as running a pair of AI inferences at the same time on different colors.
Electronics plus photonics
The chip at the center of Lightelligence’s Pace demonstrator includes an ASIC control die flip-chip bonded to a photonic die. The assembly is mounted on a substrate and PCB, with a fiber array connecting it to a laser source. The mixed-signal ASIC houses a digital block with control logic that regulates data flow and I/O, as well as SRAM for data storage. The analog portion of the ASIC bridges the digital block and the photonic devices.
Maurice Steinman, Lightelligence’s vice president of engineering, said the individual chips are hard to engineer, and integrating them is even tougher. “With photonic computing, it’s really a class of analog computing. So a high-fidelity result requires a tremendous amount of circuit design, simulation, iteration and test chips,” he said. Moreover, at 1 GHz the system works with light pulses shorter than a nanosecond; compared with a megahertz-class system, noise and electronic crosstalk loom proportionally much larger.
“The other [challenge] is the packaging architecture,” Steinman said. “We are taking two chips built on different fabrication processes and stacking them up directly with thousands of connections between them.
“One is powered by light, so we need to get a light source in there. The other needs electric current to power it… and heat removal. There are tremendous challenges we have to systematically attack to get all of that to come together,” he added.
Lightelligence has taped out its first commercial product, an AI accelerator based on Pace technology, and plans to begin shipping devices in 2022. Spun out of MIT in 2017, the startup has raised more than $100 million in funding and employs 150 people worldwide.