Engineering teams are facing new challenges, and those challenges are crossing traditional boundaries.
Chip reliability is coming under much tighter scrutiny as IC-driven systems take on increasingly critical and complex roles. Whether it's a stray alpha particle that flips a memory bit, or a long-dormant software bug or latent hardware defect that suddenly causes problems, it's now up to the chip industry to prevent these failures in the first place and to solve them when they do arise. By the time these systems reach manufacturing — or worse, when they malfunction in the field — the ability to fix issues is both limited and costly. As a result, systems vendors and foundries have shifted the problem left in the design-through-manufacturing flow, all the way back to the initial architecture and layout, followed by much more intensive verification and debug.
Reliability depends on fixing issues that may crop up at every step of the flow. The challenge at the chip level is ensuring that increasingly complex chips are also capable of functioning throughout their lifetimes in deeply nuanced applications and use cases. “We’ve gone from the traditional semiconductor concepts of reliability to engineering teams wanting to analyze more on the system side of things, to interactions with things like soft errors, as well as software,” said Simon Davidmann, CEO of Imperas Software. “For example, in automotive ISO 26262 qualification, one of the things that’s really worrying for developers is that due to the small geometries of the silicon, there is the potential for random bit flips in memory caches from cosmic rays, and they want to know if the software is resilient enough. Will the system survive if certain errors occur? With a certain level of randomness, how does the software survive? Will the car keep steering? Will the brakes keep working if the caches get damaged?”
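The kind of resilience testing Davidmann describes — injecting random bit flips and checking whether software survives — can be illustrated with a toy fault-injection sketch. The sketch below is not Imperas's tooling or any ISO 26262 flow; it is a minimal Python illustration, assuming a simple software mitigation (triple modular redundancy with majority voting) and a single-event upset model where one random bit in one copy is flipped.

```python
import random

def flip_random_bit(word, width=32, rng=random):
    """Simulate a single-event upset: flip one random bit in a word."""
    return word ^ (1 << rng.randrange(width))

def majority_vote(a, b, c):
    """Bitwise majority of three redundant copies (simple TMR)."""
    return (a & b) | (a & c) | (b & c)

# Store a value in three redundant copies, a crude software
# mitigation against random bit flips in memory.
rng = random.Random(0)
value = 0xDEADBEEF
copies = [value, value, value]

# Inject a fault into one randomly chosen copy, as a cosmic-ray
# bit flip might.
idx = rng.randrange(3)
copies[idx] = flip_random_bit(copies[idx], rng=rng)

# With only one corrupted copy, bitwise majority voting masks
# the upset and recovers the original value.
recovered = majority_vote(*copies)
assert recovered == value
```

A real qualification campaign would instead run the actual software stack on a fault-injecting simulator and sweep many random fault sites, but the pass/fail question is the same: does the system still produce correct outputs after the upset?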
To read the full Semiconductor Engineering article by Ann Steffora Mutschler, click here.