The IC industry is struggling with blurring lines between different disciplines as chips are more tightly integrated with software in packages and systems.
Complexity in hardware design is spilling over into other disciplines, including software, manufacturing, and new materials, raising questions about how to model growing volumes of data at multiple abstraction levels.
Challenges are growing around which abstraction level to use for a particular stage of the design, when to use it, and which data to include. Those decisions are becoming more difficult at each new node, as more devices are integrated into larger systems, and as more steps in the design flow shift both left and right.
“In electronic design, SPICE is the simulation environment best suited to transistor-level design, perhaps extending to the complexity of a basic logic building block,” said Simon Davidmann, CEO of Imperas Software. “But for SoC design, with possibly billions of transistors, different jumps in abstraction are required. For example, there is the gate-level boundary that forms the basis of the RTL design of the more complex structures of processors and beyond.”
Creating an executable instruction set model that works for both the hardware and software teams is critical for a design to hit a market window. By using a golden reference model with a step-and-compare methodology, verification can be done at the instruction boundary and configured in a UVM SystemVerilog testbench for asynchronous events and debug. But this also is becoming harder to achieve.
“It becomes the dynamic reference, and can become the heart of a simulation of the whole system as a virtual platform or virtual prototype,” Davidmann said. “As multicore designs become mainstream, with hundreds or thousands of cores, the same challenge can be seen in finding the ideal balance in abstraction in accuracy over capacity. It is the key requirement for simulation. But what level of modeling provides the most useful reference for designers? With simulation technology, it is possible to simulate the complete design with instruction-accurate models for a programmer’s view of the total system. In the case of AI or ML applications that have been developed in cloud-based environments, a large amount of analysis and tuning has already been completed. Today, hardware/software co-design is becoming more of a software-driven analysis of hardware structures, with complete simulation of large data sets and real-world situations.”
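The step-and-compare methodology described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea, comparing a DUT's recorded architectural state against a golden instruction-accurate reference at each instruction boundary; the `RefModel` class, its toy instruction format, and the trace layout are assumptions for demonstration, not any vendor's actual API.

```python
# Minimal sketch of a step-and-compare flow: after each retired
# instruction, the DUT's architectural state is checked against a
# golden reference model. All names and formats here are illustrative.

class RefModel:
    """Toy instruction-accurate reference: a program counter and registers."""
    def __init__(self):
        self.pc = 0
        self.regs = [0] * 4

    def step(self, instr):
        # Execute one instruction: ("addi", rd, imm) bumps a register.
        op, rd, imm = instr
        if op == "addi":
            self.regs[rd] += imm
        self.pc += 4
        return self.pc, tuple(self.regs)

def step_and_compare(program, dut_trace):
    """Step the reference model one instruction at a time and report the
    index of the first divergence from the DUT's (pc, regs) trace."""
    ref = RefModel()
    for i, (instr, dut_state) in enumerate(zip(program, dut_trace)):
        ref_state = ref.step(instr)
        if ref_state != dut_state:
            return i  # first mismatching instruction boundary
    return None  # traces agree at every instruction boundary
```

In a real flow, the comparison component would typically live inside a UVM SystemVerilog scoreboard, with the reference model stepped in lock-step with the RTL simulation rather than against a pre-recorded trace.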
Partitioning of the design can help, breaking it into more manageable parts. The key is knowing what to split and when. “Configuring an array of processors as an AI/ML hardware accelerator can be split between the top-level design and the key subsystems,” he said. “Often the key algorithms will be partitioned to a processing element with, for example, two to five CPU cores plus hardware engines, such as RISC-V Vector Extensions or Arm SVE2. In turn, this processing element will be replicated in the design tens to thousands of times. A virtual platform provides the abstraction necessary for the first level of tradeoff analysis and development. This leads to a verification reference model for the individual RISC-V cores, as well as a complete model of the SoC for software development. As in the case of the gate-level boundary in previous abstractions, the instruction-accurate boundary unites the hardware and software teams, and is the natural basis for the next levels of abstraction in this post-Moore’s Law era of heterogeneous multicore compute platforms for AI and ML.”
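The partitioning structure described above can be sketched abstractly: a processing element groups a few scalar cores with a vector-style engine, and the accelerator replicates that element many times, splitting the workload across the replicas. The class names, core counts, and toy workload below are hypothetical illustrations, not drawn from any real design.

```python
# Hypothetical sketch of the partitioning described above: a processing
# element (PE) bundles a few CPU cores with a vector engine, and the
# accelerator replicates that PE and partitions work across the copies.
from dataclasses import dataclass, field

@dataclass
class ProcessingElement:
    cores: int = 4          # e.g. two to five CPU cores per PE
    vector_lanes: int = 8   # stand-in for an RVV/SVE2-style engine

    def run(self, chunk):
        # Toy workload: each PE reduces its chunk of the data.
        return sum(chunk)

@dataclass
class Accelerator:
    num_pes: int = 16       # real designs replicate PEs tens to thousands of times
    pes: list = field(default_factory=list)

    def __post_init__(self):
        self.pes = [ProcessingElement() for _ in range(self.num_pes)]

    def run(self, data):
        # Partition the data across PEs, then combine partial results.
        chunks = [data[i::self.num_pes] for i in range(self.num_pes)]
        return sum(pe.run(c) for pe, c in zip(self.pes, chunks))
```

A virtual platform plays an analogous role at the modeling level: the PE is modeled once, instantiated many times, and the same top-level model serves both software development and per-core verification.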
To read the full Semiconductor Engineering article by Ann Steffora Mutschler, click here.