Thursday, December 27, 2012

Evolution of the test bench - Part 1

Nothing is permanent except change, and need constantly guides innovation. Taking a holistic view of a theme throws light on the evolution of the subject. In the pursuit of periodically doubling transistor counts, the design representation has shifted from transistors → gates → RTL and now to synthesizable models. As a by-product of this evolution, ensuring functional correctness became an ever-growing, unbounded problem. The test bench is the core of verification and has witnessed radical changes to address this issue.
While traces of the very first test benches are hard to find, formal test benches entered the scene with the advent of HDLs. These test benches were directed in nature, developed using Verilog/VHDL and mostly dumb: the simulations were completely steered by the tests, i.e. the stimuli and checkers mostly resided inside the tests. Soon variants of this scheme appeared wherein the HDLs were coupled with other programming languages like C/C++ or scripting languages like Perl & Tcl. Typical characteristics of that period were stretched process-node lifetimes, multiple re-spins as the norm, elongated product life cycles and relatively little TTM pressure. For most of the 90s these test benches facilitated verification and were developed and maintained by the designers themselves. As design complexity increased, there was a need for an independent pair of eyes to verify the code. This need launched the verification team into the ASIC group! The directed verification test suite soon became a formal deliverable, with the number of tests defining progress and coverage. After sustaining for almost a decade, this approach struggled to keep pace with the ramifications in the design space. The prime challenges teams started facing included –
- Progress was mostly linear and directly proportional to the complexity, i.e. the number of tests to be developed
- The tests were tightly coupled to the clock period, so even a slight change in the design would lead to a lot of rework
- Maintaining the test suite across changes in the architecture/protocol was manual and quite time-consuming
- Poor portability & reusability across projects or different versions of the same architecture
- High dependency on the engineers developing the tests, in the absence of a standard flow/methodology
- Corner case scenarios limited by the experience and imagination of the test plan developer
- Absence of feedback to confirm that a test written for a specific feature really exercised that feature correctly
Though burdened with this list of issues, remnants of the approach still survive in legacy code.
In the last decade, SoC designs picked up and directed verification again came to the forefront. With processor(s) on chip, integration verification of the system is achieved mainly with directed tests developed in C/C++/ASM targeting the processor. Such an approach is required at SoC level because –
- The scenario should run on the SoC using the processor(s) on chip
- Debugging a directed test case is easier, given the prolonged simulation times at SoC level
- The focus is on integration verification, where the test plan is relatively straightforward
- Constraints required for CRV need to be stricter at SoC level than at block level, and fine-tuning them involves many debugging iterations, which are costly at SoC level
While the above points justify the need for a directed approach to verify an SoC, the challenges start unfolding once the team surpasses the basic connectivity and integration tests. Developing scenarios that mimic use-cases, concurrent behaviour, and performance and power monitoring is an intricate task. Answers to this problem are evolving, and once widely accepted they will complement the SoC directed verification approach and further evolve into a defined methodology.
The directed verification approach isn't dead! It has lived since the start of verification, and the Keep It Simple, Silly principle will continue to drive its existence for years to come. In the next part, we check out the evolution of CRV and its related HVLs and methodologies.
Wish you a Happy and Prosperous 2013!!!


  1. Comment from LinkedIn - Semiconductor professionals group:

    Mitch Alsup • I want to disagree with the first sentence of the second paragraph. Formal test benches were used to debug the very first 360s (circa 1962), far before TTL or any kind of HDL languages. These test benches were the size of a large room, where the computer would sit in the middle surrounded by oscilloscopes and other test measurement and signal generation equipment.

    I strongly suspect that test benches were built around earlier computers but have no actual details thereto.

    Gaurav Jalan • Hi Mitch,

    Yes, real (physical) test benches have been in use for a long time. My reference to a formal test bench here is to a 'simulation-based logical test bench with reference to the design representation in RTL'. Thanks much for pointing out the fine detail.

  2. Hi Gaurav Jalan,

    How are today's SoCs verified for concurrent behaviour (directed/random or constrained random) and for power/performance? Are you proposing a better solution when you said answers are evolving? What do you think is the best solution as of today?


  3. Hi Madhu,

    That is a long topic to discuss. Concurrency tests are manually developed and take a long time to stabilize, monitor and close. For processor-based verification tests, the evolving solutions are graph based. For random ones, a structured approach is required right from IP to sub-system and SoC level to maximize reuse.