Cache coherency is expensive, and for some tasks it provides little benefit, or even a negative one. So why is it still used so frequently?
Cache coherency, a common technique for improving performance in chips, is becoming less useful as general-purpose processors are supplemented with, and sometimes supplanted by, highly specialized accelerators and other processing elements.
While cache coherency won’t disappear anytime…
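As a rough, generic sketch of where that cost shows up (not an example taken from the article), the C++ snippet below times two threads incrementing adjacent counters that share a cache line versus counters padded onto separate lines. In the shared-line case the coherency protocol must bounce the line between cores on every write, even though the threads never touch the same variable.

```cpp
// Illustrative only: false sharing makes the coherency protocol ping-pong a
// single cache line between cores. Build with: g++ -O2 -pthread false_sharing.cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

struct SharedLine {                       // both counters land on one 64-byte line
    std::atomic<uint64_t> a{0};
    std::atomic<uint64_t> b{0};
};

struct PaddedLines {                      // each counter gets its own line
    alignas(64) std::atomic<uint64_t> a{0};
    alignas(64) std::atomic<uint64_t> b{0};
};

template <typename Counters>
double time_increments(Counters& c, uint64_t iters = 100'000'000) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (uint64_t i = 0; i < iters; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (uint64_t i = 0; i < iters; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    SharedLine  shared;
    PaddedLines padded;
    std::cout << "same cache line:      " << time_increments(shared) << " s\n";
    std::cout << "separate cache lines: " << time_increments(padded) << " s\n";
}
```

On typical multicore hardware the padded version runs several times faster, which is the kind of coherency overhead the teaser alludes to.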
Interview with Sanjay Gangal following the announcement of ImperasDV for RISC-V processor verification at the co-located DAC and RISC-V Summit 2021.
The open ISA of RISC-V is generating a lot of interest in the new design freedoms for processor hardware. In this interview, Sanjay explores the implications for software development and the growing demand for processor verification solutions. Highlighting the recent announcements on ImperasDV, the latest…
With its new ImperasDV solution, the company aims to enable all RISC-V developers to accomplish the complex task of processor IP verification more efficiently.
“The greatest migration in verification responsibility in the history of EDA,” from processor IP vendors to SoC designers: this, according to Imperas Software, is the challenge facing SoC development teams as they take advantage of RISC-V customization capabilities…
RISC-V is known as an open-standard instruction set architecture (ISA) whose base instructions have been frozen to minimize complexity. But more recently it has added a wide range of custom extensions and enhancements that are making it increasingly popular amongst SoC designers building application-specific systems.
The custom functionality adopted in these architectures is often enhanced…
The application of old techniques to new problems only gets you so far. To remove limitations in AI processors, new thinking is required.
Software and hardware both place limits on how fast an application can run, but finding and eliminating the limitations is becoming more important in this age of multicore heterogeneous processing.
The problem is certainly not new. Gene Amdahl (1922-2015) recognized the issue and published a paper about…
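For context (this is the standard statement of Amdahl's Law, not text from the truncated article): if a fraction p of a workload can be parallelized across n processing elements and the rest stays serial, the achievable speedup is bounded by

\[ S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}. \]

With p = 0.95, for example, no number of cores or accelerators can deliver more than a 20x speedup; the serial fraction, not the hardware, sets the ceiling.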
Verification and debug of AI is a multi-level problem with several stakeholders, each with different tools and responsibilities.
When an AI algorithm is deployed in the field and gives an unexpected result, it’s often not clear whether that result is correct.
So what happened? Was it wrong? And if so, what caused the error? These are often not simple questions to answer. Moreover, as with all verification problems, the only way…
The C-Suite wants the chip industry to use PLM, but are the industry's issues different enough that a more specialized black-box approach would be better?
Product lifecycle management (PLM) and the semiconductor industry have always existed in separate worlds, but pressure is growing to bring them together. Automotive, IIoT, medical, and other industries see that as the only way to manage many aspects of their business, and as it stands,…
Software and hardware interdependencies complicate debug in embedded designs. New approaches are maturing to help reduce debug time.
Debugging embedded designs is becoming increasingly difficult as the number of observed and possible interactions between hardware and software continues to grow, and as more features are crammed into chips, packages, and systems. But there also appear to be some advances on this front, involving a mix of techniques, including…
Technologies must evolve to keep up with changing demands, and emulation is no exception.
Emulation is now the cornerstone of verification for advanced chip designs, but how emulation will evolve to meet future demands involving increasingly dense, complex, and heterogeneous architectures isn’t entirely clear.
EDA companies have been investing heavily in emulation, increasing capacity, boosting performance, and adding new…