Saturday, December 31, 2011

2011 : Bidding Adieu

The year 2011 had its own set of twists and turns. A plethora of applications riding on an interesting set of electronic gadgets became part of our lives. Consumption of electronic devices continued to rise and market dynamics were more unpredictable than ever. New process nodes stabilized, promising to shrink designs and reduce power consumption further. The tsunami added to the woes, but Japan defied the odds and showcased its ability to bounce back quickly. The sad demise of Steve Jobs was a big loss to the electronics world. Amongst all of this we experienced that the market window for products is constantly tightening, which means the turnaround time for designs needs to reduce incessantly. Since verification claims most of the cycles in ASIC design, the community took steps forward in multiple areas to bridge this gap. Some of the highlights include -

- UVM : The official release arrived and saw quick adoption.
- Steps were taken to reconcile the differences between UPF & CPF.
- Tools for automatic assertion generation gained wide adoption.
- Cloud computing discussions moved from closed rooms to open forums.
- Hardware acceleration moved into the mainstream.
 
Some interesting facts also came up. While we keep talking about the next generation of verification technologies, the primitive techniques continue to dominate in parallel. For example, at ‘Siddhakarana’ the most popular post was “Gate Level Simulations”, with more than a thousand hits and still getting visitors on a daily basis. This essentially means there are areas that need attention before they become a bottleneck in getting designs shipped faster.

On this note, I thank you for your valuable suggestions and readership throughout 2011. I look forward to the same in 2012.

Wish you and your family "HAPPY & PROSPEROUS 2012".

:)

Sunday, December 4, 2011

Constrained Random Verification : A case study

Continuing the discussion on CRV, here is a case study where a performance-intensive core was verified from scratch.
GIVEN
- Architecture documents for each block and integrated core.
- Reference model for each block developed for architecture validation and early software development.
- Periodic deliveries of core tests from architecture validation team, reusable at block and core level.
- Enough compute cycles, tool licenses and access to a hardware accelerator to speed up verification at core level.
- Timeline for Architecture to RTL freeze = 13 months.

APPROACH
Develop block level test benches targeting CRV and capable of reusing system level tests, to ensure staged bring-up of functionalities in parallel. The top level test bench would reuse monitors and assertions from block level (a bind sketch follows) and simulate system level tests (no CRV) only. Sign-off criteria would be 100% functional coverage, code coverage (line, expression & FSM) and assertion coverage at block level, and toggle coverage at core level.
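To illustrate the monitor/assertion reuse mentioned above, here is a minimal sketch; the checker module, instance paths and signal names are all hypothetical:

```systemverilog
// Hypothetical checker module: assertions only, no stimulus, so the same
// file works unchanged at block and core level.
module fifo_checks (input logic clk, rst_n, push, pop, full, empty);
  a_no_push_when_full : assert property
    (@(posedge clk) disable iff (!rst_n) full |-> !push);
  a_no_pop_when_empty : assert property
    (@(posedge clk) disable iff (!rst_n) empty |-> !pop);
endmodule

// Block-level TB (port names assumed to match nets inside u_fifo):
bind blk_tb.u_fifo fifo_checks u_chk (.*);

// Core-level TB reuses the same checker with only a deeper path:
// bind core_tb.u_core.u_fifo fifo_checks u_chk (.*);
```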
Why limit CRV at block level –

- Test simulation time at core level was expected to be 12-48 hours, throttling iterations and affecting schedule.
- Low probability of finding an RTL bug using CRV with focus on integration verification.
- Test cases coming from architecture validation team would cover most of the use case scenarios.
- Bring-up of a constrained random TB for the core would require a staged approach, i.e. appending one block at a time and stabilizing it. Verification closure in the given time was unpredictable.

EXECUTION

The estimated effort was 200 man-months and the team started off with one engineer per block, taking a CDV approach. After test planning, block level test benches were developed using a standard methodology, with code shared for similar functionalities. The test benches were brought up using system level tests executed at block level. As RTL debugging started, coverage models were developed to realize the test plan and hooked onto the TB. Next, CRV support was added to the test bench. The random tests were initially directed towards areas yet to be explored by the system tests, to exercise the DUT from all fronts. When the system tests stagnated for a data path within a block, the random tests continued to weed out bugs and hit scenarios for coverage closure.

By the time system tests started passing for individual blocks, the team was ready with the core level TB. Every engineer finishing block level started contributing at top level. Tests at core level consumed a lot of simulation cycles (i.e. idle time for engineers), so a couple of engineers focused on a CR TB for adjacent blocks (the choking point from a performance perspective) to regress the RTL further. No specific coverage goals were planned at this level; the emphasis was on hitting random scenarios. Finally, the team was able to deliver quality RTL with acceptable slippage in schedule.
CONCLUSIONS
Following CDV and a standard methodology for CRV helped in defining, tracking and achieving goals in an organized manner. Maintenance and rework claimed limited overhead. Reusing components from block to top level was smooth, with minimal effort.

With the CRV TB, the bug rate increased drastically, averaging 7 bugs per week per block for at least 6 weeks. Scenario generation grew exponentially, while the tests delivered by the architecture validation team continued to grow only linearly. The team was able to uncover hidden bugs and hit corner cases difficult to cover otherwise. Performance bottleneck scenarios were easily generated through CRV.
The CRV TB for adjacent blocks revealed an interesting outcome. Converging on the valid constraints was challenging: the set of valid constraints at block level required additional constraints guided by the introduction of a new block in the DUT data path. This TB uncovered 3 bugs, 2 with software workarounds and 1 critical, where the two blocks would hang in a particular scenario. The latter would have been difficult to catch through system tests.
Lesson learnt – if you are unable to introduce CRV at the top level, deploy it at whatever level you can!

Wednesday, November 23, 2011

Constrained Random Verification : + and -

In the last decade, adherence to Moore’s law demanded a ‘divide and conquer’ approach for developing SoCs/ASICs. The design cycle now requires developing or procuring IPs, building subsystems with them, and integrating those subsystems to realize the final product. Some IPs (networking protocols, graphics, video, DSP etc.) are complex enough that further division into blocks during design and verification is unavoidable. Amidst all this, verification is still the biggest challenge in meeting schedules and taping out bug-free products. The ever increasing design complexity has driven rapid development in ASIC verification. From the traditional directed verification approach to Constrained Random Verification (CRV), it has been a long way. CRV brought a paradigm shift in the way we verify our designs and enabled the development of Coverage Driven Verification (CDV) and standard methodologies (eRM, RVM, AVM, VMM, OVM & UVM).
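To make the paradigm concrete, here is a minimal sketch of CRV coupled with a CDV-style coverage model; the packet fields, constraints and bins are made up for illustration:

```systemverilog
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit [3:0]  len;
  // Keep stimulus inside the legal space; corner values remain reachable
  constraint c_legal { len inside {[1:8]}; addr != 8'hFF; }
endclass

module tb;
  packet pkt = new();

  covergroup cg;
    cp_len  : coverpoint pkt.len  { bins legal[] = {[1:8]}; }
    cp_addr : coverpoint pkt.addr { bins low  = {[8'h00:8'h7F]};
                                    bins high = {[8'h80:8'hFE]}; }
  endgroup
  cg cov = new();

  initial begin
    repeat (100) begin
      if (!pkt.randomize()) $fatal(1, "randomize failed");
      cov.sample();   // CDV: measure what randomization actually achieved
    end
    $display("functional coverage = %0.2f%%", cov.get_coverage());
  end
endmodule
```

Randomization explores the constrained space, while the covergroup records which parts of that space the tests actually reached; that record is what drives the ‘are we done’ answer in CDV.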

ADVANTAGES
+ Enables the test bench to hit corner cases & hidden (unplanned) bugs
+ When coupled with CDV, leads to faster convergence of the verification schedule
+ Fewer test cases to develop and maintain compared to directed testing
+ Updating the verification environment for specification changes during the design cycle is better organized
+ A methodical approach during development enables reusability from module to higher levels
+ Improves portability across projects
+ Test bench reviews are more structured and the probability of finding issues increases
+ Following a standard methodology helps in knowledge transfer across engineers & teams
+ Increases compute resource & license utilization

LIMITATIONS
- Requires a reference/golden model to predict the output
- Requires a lot of planning and structured code development (methodology is key)
- In the absence of CDV it is hard to answer ‘are we done yet?’
- During TB development the RTL bug rate is 0, and initially the TB bug rate may exceed the RTL bug rate
- Converging on valid constraints when moving from IP to higher levels involves iterations
- With increased design scope, debugging random scenarios is challenging
- Limited randomness is possible when verifying at SoC level with processor(s) in place
- Gate level simulations require a set of directed test cases where CRV doesn’t contribute much
- In the absence of adequate licenses and compute resources it fails to deliver the desired results

Undoubtedly CRV has been instrumental in improving the quality of verification and achieving organized verification closure. Most of the above limitations can be nullified with proper planning during the initial phase of a project. At IP level, CRV is a de-facto standard. However, as we move one level up, the limitations start to weigh in. Teams are forced to ask: will CRV be utilized to its true potential? Should we follow CRV for sub-system and top level? What are the bottlenecks in extending CRV to the increased scope of the design, and is it worth the investment?

Coming next is a quick case study. Stay tuned…

Thursday, October 6, 2011

Mythology & Verification

It’s festive season in India! During this period one can experience the rich & diverse culture across the country. Apart from the fun and celebrations, this is the period when Indian mythological stories come up frequently. While these stories convey subtle facts, rules and maxims to guide our daily lives, they also add meaning to the celebrations. These stories of Gods and demons are based on certain basic premises and are usually filled with some common concepts and ideas. It is believed that in every epoch, as time passes, demons become prominent, disturbing the regular functioning of the world. While these demons are on the rise, God himself arrives on earth in different avatars (incarnations). He masters the art of war, builds his arsenal with all sorts of weapons and finally starts his quest to hunt down the demons and restore the lost balance of the world.

While pondering over these stories, a thought struck me: isn’t this whole scheme the same in the ASIC world? In every ASIC design, as time passes and the RTL gets ready, bugs become prominent, disturbing the regular functionality of the ASIC. While these bugs are on the rise, the verification engineer enters the picture in different roles (IP verification, SoC verification, formal…). He has mastered the art of verification and comes ready with an arsenal of techniques (constrained random verification, coverage, assertions, power aware verification, AMS, GLS…) to hunt down the bugs hiding in the design. At the end of this cycle he restores the balance, i.e. the desired functionality of the ASIC, and moves on to the next one.

Verification surely has come a long way. I still remember the initial days of my career when every fresher (engineering graduate) wanted to be a designer and verification was considered a second option. In the past decade, however, the rising complexity of designs has elevated this role and placed it at the centre. During the recession (2009), my ASIC team had to face layoffs. Even during that period, the verification engineers managed more than one job offer in limited time. The demand for verification is continuously on the rise. The verification engineer now probably develops more lines of code than RTL to build a state-of-the-art test bench, and uses multiple techniques to get the job done. The verification cycle directly affects both the schedule and the quality (first-silicon success) of the product. The involvement has expanded to TLM modeling, virtual platform development, functional verification, GLS, ATE vector generation and bring-up, keeping the engineer busy throughout the ASIC design process.

So, to the members of the verification fraternity: while you are busy bringing balance to ASIC functionality, enjoying the festive season and the stories, make sure your arsenal is always well equipped with tricks and techniques in your pursuit of bug hunting. As Albert Einstein rightly said –

“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

Happy Dussehra! Happy Bug Hunting!

PS : This post is an experiment in bridging technical and off-topic subjects. Please provide your feedback by posting a comment or voting (just 2 clicks).

Saturday, September 10, 2011

Scoreboard

Recently, while reviewing a test bench architecture with a group of verification engineers, an old but interesting debate started. The engineer proposing the verification plan had a ‘scoreboard’ as well as ‘checkers’ as two different components in his test bench. The debate was about the precise definition of these two terms, as every engineer had his own understanding of them. Though the various verification methodologies name test bench components differently, names like driver, BFM, generator and sequencer still reveal a lot about their functionality, with the exception of the scoreboard. Engineers across the globe hold varied opinions on the history of this term, and its origin as a verification component is untraceable.
Here is a quick review of various definitions of ‘scoreboard’:
A scoreboard is a large board in a ballpark, sports arena, or the like for publicly displaying the score in a game or match, and often other statistics, such as the count of balls and strikes.
A scoreboard is a centralized hardware mechanism (first used in the CDC 6600, 1964) that keeps track of the activity inside a system (instructions being fetched, issued and executed, and the resources they use) to drive certain decisions (dynamically scheduling instructions). Here the activities are tracked by flipping bits in the scoreboard, which keeps a score of what is going on in the system.

These definitions are relevant to scoreboard usage in verification. There are examples of primitive test environments where the testing activity was tracked using an array of registers (the scoreboard) representing the tests to be executed. The registers would record whether a test ran and the status of its result. On completion of the test suite, the scoreboard would help generate the statistics and determine next steps. The scoreboard, however, has evolved from tracking & decision making into multiple avatars: it can be as simple as a data structure or as complicated as including the transfer function (predictor) and the compare function. So far in my experience I haven’t seen scoreboards performing protocol compliance checks, so I believe my colleague was right in proposing the scoreboard as an entity that predicts and compares, with the protocol checks handled by a separate component he referred to as a checker.
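As a side note, a minimal, methodology-neutral sketch of such a scoreboard could look like the one below; the transaction type and predictor are placeholders, and protocol compliance deliberately stays outside, in the checker:

```systemverilog
// A scoreboard reduced to its essence: predict, queue, compare.
// T is assumed integral/packed so '===' compares values; class payloads
// would need a compare() method instead.
class scoreboard #(type T = logic [31:0]);
  T expected[$];                       // predictions awaiting DUT output
  int unsigned match_cnt, mismatch_cnt;

  // Transfer function (predictor): identity here as a placeholder;
  // a real TB supplies the block-specific function.
  virtual function T predict(T in_txn);
    return in_txn;
  endfunction

  // Called by the input monitor.
  function void write_input(T in_txn);
    expected.push_back(predict(in_txn));
  endfunction

  // Called by the output monitor: in-order compare against prediction.
  function void write_output(T out_txn);
    T exp;
    if (expected.size() == 0) begin
      mismatch_cnt++;
      $error("scoreboard: DUT output with no prediction queued");
    end else begin
      exp = expected.pop_front();
      if (out_txn === exp) match_cnt++;
      else begin
        mismatch_cnt++;
        $error("scoreboard: mismatch (got %0h, expected %0h)", out_txn, exp);
      end
    end
  endfunction
endclass
```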
Later that day, while reading an intriguing article on context aware applications, I came across this definition of context, which states that –
“Context is any information (the parts of a written or spoken statement that precede or follow a specific word or passage, usually influencing its meaning or effect) that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.”
Given the dilemma about the real intent of whoever coined this word (scoreboard) in the verification domain, and where it was first used, I could only conclude that the definition of a scoreboard is relative to the context it is used in. The context could be a methodology that defines the term clearly, or the test bench architecture where it is used. If you know the functionality of the other components in the test bench (which you can fairly conclude from their names), the rest can be assumed to be taken care of by the scoreboard :)
Check out what Scoreboard means in UVM @ verification academy.
Drop in a comment to share your experiences with scoreboard definitions!

Tuesday, July 12, 2011

Gate Level Simulations : A Necessary Evil - Part 3

The response to the first two posts (Part 1, Part 2) clearly shows that GLS touches the career profile of almost every verification engineer. I am wrapping up this 3-part series on GLS with some recommendations that might be of help if you aren’t already deploying them. Please do drop in an email or comment to share your tips & tricks.

To start, it is important to identify the features targeted for GLS in the test plan upfront. Adding functional coverage/assertions for these features is a good practice. To set up GLS, develop the Verilog/VHDL TB (with DUT instantiation) from scratch, so as to avoid any exceptions/force statements leaking from RTL simulations into GLS. If there is a provision, set up a process to develop/tune test cases for GLS such that all HDL code (DUT, TB & test cases) is compiled once and different tests can be simulated using ‘plusargs’ (a sketch follows below). The SDF should be compiled once and used for all tests. For the initial netlist, whether you use zero delay, unit delay or pre-layout SDF, disable timing checks and run a few smoke tests for sanity checking the netlist. For large gate counts, the implementation team would be using hierarchical synthesis. If the hierarchy is logical, you may want to segregate GLS test points that are local to components in the hierarchy and try out GLS at that level. For designs that take some initial time to synchronize before the actual feature testing starts (particularly in the networking domain), tests can be developed such that multiple features are clubbed into one test. Some level of test case rewriting will save a lot of simulation time.
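For the compile-once flow above, here is a sketch of plusarg-driven test selection; the test names and tasks are hypothetical:

```systemverilog
// Compile DUT + TB + all tests once; select the test per run, e.g.
//   <simulator run command> +TESTNAME=boot_mode1   (vendor syntax varies)
module gls_tb;
  string testname;

  initial begin
    if (!$value$plusargs("TESTNAME=%s", testname))
      testname = "smoke";                  // default netlist sanity test
    case (testname)
      "smoke"      : run_smoke();
      "boot_mode0" : run_boot(0);
      "boot_mode1" : run_boot(1);
      default      : $fatal(1, "unknown test %s", testname);
    endcase
  end

  task run_smoke();        /* reset + minimal traffic  */ endtask
  task run_boot(int mode); /* boot sequence for 'mode' */ endtask
endmodule
```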

Following is a common baseline list of tests that most teams have in the GLS test suite –
1. Initialization tests i.e. chip coming out of reset and functional in different boot modes
2. Clock, Reset generation and clock dividers
3. Power aware simulations – covering features based on the power format specification
4. Test structures
5. Digital - Analog boundaries
6. Dedicated tests for timing exceptions in the STA
7. Stimulus to each block of the design at least once

In order to have controlled rework on the hierarchical paths used by force statements or monitors, pull all paths into one HDL file under compile directives (e.g. `define in Verilog). This file can be included in the TB and modified/reviewed for every netlist release, as sketched below.
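A sketch of what such a file and its usage can look like; all paths and names here are hypothetical:

```systemverilog
// hier_paths.vh -- the single home for every hierarchical reference
`define DUT_TOP    gls_tb.u_dut
`define CORE_CLK   `DUT_TOP.u_clkgen.u_div.clk_div2   // hypothetical post-CTS net
`define MODE_FLOP  `DUT_TOP.u_ctrl.mode_reg_q         // hypothetical flop

// In the TB, reference hierarchy only through the macros:
module gls_hooks;
  int unsigned clk_edges;
  initial force `MODE_FLOP = 1'b0;          // reworked per netlist release
  always @(posedge `CORE_CLK) clk_edges++;  // monitors probe via macros too
endmodule
```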

To control the SDF timing on synchronizers, enforce naming rules for such modules so that a perl script can turn off their timing in the SDF, or timing checks can be disabled on selected instances through the tool’s tcl interface (a sketch follows the paper list below). Where a naming convention is not followed, there are some interesting ideas in the following papers –
- My favorite DC/PT TCL tricks, SNUG San Jose 2003
- SoC Gate Level Simulations (GLS) cycle time reduction – Simulation flow enablers, SNUG India 2008.
- Identification of Non-resettable flops for faster Gate Level Simulation, SNUG India 2010.
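For the naming-rule approach above, a sketch with hypothetical names: if every CDC synchronizer is an instance of a single wrapper module with a mandated instance prefix, a script can match those instances and strip the corresponding setup/hold checks from the SDF (or disable timing checks on them via the simulator's tcl interface):

```systemverilog
// Single synchronizer wrapper: scripts key on the module name and the
// mandated "u_sync_" instance prefix to kill timing checks on these cells.
module sync_2ff (input logic clk, rst_n, d, output logic q);
  logic meta;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) {q, meta} <= 2'b00;
    else        {q, meta} <= {meta, d};  // first stage may go metastable
endmodule

module ctrl (input logic core_clk, rst_n, irq_async, output logic irq_sync);
  // Naming rule: every synchronizer instance starts with u_sync_
  sync_2ff u_sync_irq (.clk(core_clk), .rst_n(rst_n),
                       .d(irq_async), .q(irq_sync));
endmodule
```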

For debugging, simulation tools have some interesting features that can help narrow down a problem, for example tracing the source of X propagation and expanding a time step to show the order of execution. With some basic perl scripting you can triage the timing violation report into bins and then chase them accordingly.
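On the X propagation point, here is a small watcher, with hypothetical names and bind path, that reports the first time a suspect net goes X; this often localizes the offender without a full waveform dump:

```systemverilog
// Report the first X seen on a watched net, with time and location.
module x_watch #(parameter int W = 1)
               (input logic clk, input logic [W-1:0] sig);
  bit reported;
  always @(posedge clk)
    if (!reported && $isunknown(sig)) begin
      $display("[%0t] first X seen at %m", $time);
      reported = 1'b1;   // report once so logs stay readable
    end
endmodule

// Hypothetical usage: bind it onto a suspect bus inside the netlist.
// bind gls_tb.u_dut.u_bus x_watch #(.W(32)) u_xw (.clk(bus_clk), .sig(rdata));
```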

Most of the above recommendations have worked well across multiple projects. It would be interesting to know if they hit any limitations in the design space you work on, so that we can discuss those specific corner cases further.

Hope some of these would help you in your next tryst with GLS.
Happy BUG hunting!!!


Gate Level Simulations : A Necessary Evil - Part 1
Gate Level Simulations : A Necessary Evil - Part 2

Wednesday, June 29, 2011

Gate Level Simulations : A Necessary Evil - Part 2

In the previous post, we discussed why GLS is necessary. In this post we look at why it is challenging.

Having GLS in the design flow means it needs to be planned and started quite early in the verification cycle. GLS has to pass through various stages before sign-off and serves to check both functionality and timing (complementing STA & LEC). GLS setup starts when the preliminary netlist is released. Since this netlist is prone to functional and timing bugs, GLS bring-up uses selected functional tests with zero/unit-delay simulations (timing checks and specify blocks turned off). This helps in setting up the GLS flow and confirming that the netlist is hooked up properly. Later, with higher-confidence netlist releases, pre-layout SDF can be tried out until the final SDF is delivered. With the final netlist and post-layout SDF in place, GLS claims a lot of simulation time and debugging effort before sign-off.

Why Evil?

Some of the challenges for GLS closure include –

- GLS with timing is inherently slow. Careful planning is required to decide the list of vectors to be simulated on the netlist, the turnaround time for these tests, and the memory requirements for dumps and logs.
- Identifying the optimal list of tests to efficiently utilize GLS in discovering functional or timing bugs is tough. Coverage/assertions dedicated to GLS are required to assure the completeness of these tests.
- Prelim netlist simulations without timing are prone to race conditions. With an improperly balanced clock tree, data sometimes propagates to the next stage before the clock, leading to unusual results.
- During the initial stages of GLS, identifying the list of flops/latches that need to be initialized (forced into reset) is a big hurdle. Simulation pessimism drives an X onto the output of all such components without a proper reset, bringing GLS to a standstill (see the force/release sketch after this list).
- During clock tree synthesis, the clock port/net is replaced with a clock tree of buffers/inverters to balance the clocks. Monitors/assertions hooked to the clock port in the test bench during RTL simulations need to be revised for GLS to make sure the intended net is probed.
- A lot (hierarchy, net naming etc.) changes with each netlist release, leading to heavy rework on the force statements (if any remain) as well as the assertions/monitors.
- For netlist simulations with timing checks, one needs to remove the synchronizer setup/hold constraints from the SDF file or disable timing checks on these component instances. Getting a list of these synchronizers is a daunting task if certain coding conventions aren’t followed.
- Assertions are increasingly used in functional verification to improve observability and error detection. Reusing the assertions for GLS raises reliability concerns, as incorrect clock connections or negative hold times can cause assertions to sample at undesirable times.
- Debugging netlist simulations is one of the biggest challenges. In GLS, ‘X’s are generated when a timing requirement is violated on any netlist component, or when a given input condition is not declared in a UDP’s (user defined primitive’s) table. Identifying the source of the problem requires probing the waveforms at length, which means huge dump files or rerunning simulations multiple times to catch the right time window of the failure. Engineers tend to get lost easily while debugging the source of X propagation.
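For the initialization hurdle above, the usual workaround looks roughly like the sketch below; the paths and hold duration are hypothetical placeholders, ideally kept in a single reviewed include file:

```systemverilog
// Initialize non-resettable flops so X pessimism does not stall bring-up.
module gls_init;   // compiled alongside the TB
  initial begin
    force gls_tb.u_dut.u_timer.count_q = '0;   // flop with no reset pin
    force gls_tb.u_dut.u_dma.state_q   = '0;
    #100;   // hold through reset assertion (timescale dependent)
    release gls_tb.u_dut.u_timer.count_q;
    release gls_tb.u_dut.u_dma.state_q;
  end
endmodule
```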

Amidst all these challenges, GLS has been part of the ASIC design flow for decades making it a Necessary Evil!

Monday, June 20, 2011

Gate Level Simulations : A Necessary Evil - Part 1

Rising complexity, tightening schedules and ever demanding time-to-market pressure are pushing the industry to the next level of abstraction for design representation, viz. ESL (Electronic System Level). A similar push came when there was a need to move from gate level to RTL. Even after using RTL simulations efficiently for a couple of decades, the industry still relies on GLS (gate level simulation) before sign-off. Many organizations consider this effort so important that there are dedicated GLS teams verifying netlists for one project or another throughout the year. Advancements in static verification tools like STA (static timing analysis) and Equivalence Checking (EC) have reduced the reliance on GLS to some extent, but so far no tool has been able to abandon it. GLS still claims a significant portion of the verification cycle footprint.

On demand from some readers, here is a 3-part series that tries to address this less talked about but significant topic.

Why necessary?


GLS or netlist simulations typically need to start early, while functional verification is still progressing, to flush out the GLS flow and verify that the netlist is hooked up correctly. Timing at this point can be ‘zero delay’ or ‘unit delay’. Later, these simulations are performed after back-annotating first the pre-layout SDF and finally the post-layout SDF (a back-annotation sketch follows), with the goal of assuring that the design will run at the desired operating frequency. The following points summarize (in no specific order) the need for GLS in the ASIC design flow.
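As an illustration of the back-annotation step, here is a sketch using the standard $sdf_annotate system task; the file names, instance path and GLS_SDF switch are assumptions for this example:

```systemverilog
module dut;   /* compiled netlist replaces this stub */ endmodule

module gls_tb;
  dut u_dut ();

  initial begin
`ifdef GLS_SDF
    // Apply post-layout timing to the DUT instance; "MAXIMUM" picks the
    // max value of each min:typ:max triplet in the SDF.
    $sdf_annotate("chip_postlayout.sdf", u_dut, , "sdf_annotate.log",
                  "MAXIMUM");
`endif
    // zero/unit-delay runs simply skip the annotation
  end
endmodule
```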

GLS are a must –

- To verify critical timing paths of asynchronous designs that are skipped by STA.
- To validate the constraints used in STA and EC. Static verification tools are constraint based and are only as good as the constraints supplied. Careless use of wildcards, typos, design changes not propagating to constraints, or an incorrect understanding of the design all demand validation of these constraints.
- To verify any black boxes in EC. A multi-vendor EDA tool flow is common. Sometimes, to direct the synthesis, the RTL instantiates a module from the synthesis tool vendor’s library for which an equivalent is hard to find in a competitor’s EC tool.
- To verify the power up and reset operation of the design.
- To verify that the design doesn’t have any unintended dependence on initial conditions.
- To verify low power structures absent in RTL and added during synthesis (power aware simulations). Apart from the logical netlist, even the physical netlist (with VDD & GND) needs to be verified here.
- To collect switching activity for power estimation and correlation.
- To verify the integration of digital and analog netlist.
- To verify DFT structures absent in RTL and added during or after synthesis. Also required to simulate ATPG patterns.
- To generate ATE test vectors.
- To validate EDA tool flow change while moving from one vendor’s sign off tool to another.
- To validate that RTL simulations did not have any undesired force statements from the test bench masking bugs.

Finally, GLS is a great confidence booster for the quality of the netlist. The probability of ‘sound sleep’ after tape-out improves with GLS.

Suggested Reading -

1. ESNUG article - http://www.deepchip.com/items/0421-01.html
2. All my X's come from Texas...Not!! (SNUG 2004)

Gate Level Simulations : A Necessary Evil - Part 2
Gate Level Simulations : A Necessary Evil - Part 3

Tuesday, May 3, 2011

Tracking Verification : are we done?

Growing design complexity has turned verification into an NP-hard problem: achieving 100% verification is unrealistic, so, as with other intractable problems, the practical approach is to bound the problem and measure progress against that bound. Verification methodologies are continuously evolving to answer the most important questions before the verification team, i.e. where are we now, and are we done? Deploying different metrics limits the verification space for a project and helps in measuring progress. Some of these include –
- Functional coverage
- Code coverage
- Formal/Assertion based coverage
As design sizes grew, a divide and rule policy entered: divide the system into IPs or subsystems so as to verify them thoroughly as separate entities, then plug these entities into the system and verify the integration. The coverage metrics can track progress from the entity to the system level, and they can be queried directly from the test bench, as the sketch below shows.
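As a small illustration of how the first of these metrics feeds tracking, functional coverage can be queried at run time; the covergroup below is a made-up example:

```systemverilog
// Query functional coverage numbers that a tracking dashboard can consume.
module track;
  bit [1:0] mode;

  covergroup cg_mode;
    coverpoint mode;   // 4 auto bins: 0..3
  endgroup
  cg_mode cg = new();

  initial begin
    for (int i = 0; i < 3; i++) begin
      mode = i[1:0];
      cg.sample();
    end
    $display("cg_mode : %0.2f%%", cg.get_coverage());  // 75% here (3 of 4 bins)
    $display("overall : %0.2f%%", $get_coverage());    // across all covergroups
  end
endmodule
```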
Are these coverage metrics good enough indicators to measure progress or should the verification teams look out for more such pointers?
Other useful indicators include
- Tracking the bug rate on RTL, VIP/golden model and test bench
- Tracking version control update frequency
- Regression failure rate
- Correlating the above with past projects of similar complexity and predicting accordingly.
What else could point to the progress?
- Status of use case scenarios in functional simulation.
- For hardware assisted acceleration, results of simulations with real life data.
- For prototyping, results of executing drivers/APIs with real life data.
- Gate sims passing with post-layout SDF across the delay corners.
- Server load and license usage. 
More ….
:) Designer’s tasks - blessing coverage waivers instead of fixing bugs
:) Email frequency and subject/content  of emails between team members.
:) Engineers demanding compensatory off or planning vacations
Verification dashboards available in the market use some of the above pointers to indicate progress. However, there is a need for the next level, something like a cockpit that gives the user the control to extract progress from all possible indicators and mitigate the risk to schedule and quality.

Sunday, April 17, 2011

Verification Trends

DVCON 2011, held in San Jose from Feb 28th to Mar 3rd, was quite a success. While UVM dominated the conference, the keynote from Walden C. Rhines, CEO of Mentor Graphics, was very interesting. The presentation, “From Volume to Velocity”, touched upon what has been going on in verification for the past few years and the challenges ahead. Some of the interesting facts highlighted in the presentation were the outcome of one of the largest functional verification studies, carried out by Wilson Research Group in 2010 and commissioned by Mentor Graphics. Harry Foster has been writing a series of blogs summarizing this study.

The study segments the data into the following regions –
- North America       : Canada, United States
- Europe/Israel        : Finland, France, Germany, Israel, Italy, Sweden, UK
- Asia (minus India) : China, Korea, Japan, Taiwan
- India

Here are a few interesting items picked from the presentation and the blog. The comparisons are in reference to the 2007 Far West Research study.

DESIGN vs VERIFICATION in the last 3 years

New logic development reduced by 34% and External IP adoption increased by 69%.
New verification code reduced by 24% and External VIP adoption increased by 138%.
Design teams have grown by only 3.8%.
Verification teams have grown by 58.34%.
Mean time a designer spends on verification has increased from 46% to 50%.

Trends -
- SOC designs are the key to satisfying the appetite of next-gen electronic products.
- 'Need to Standardize' is directly proportional to 'Need to Reuse'.
- Challenges in verification increase multifold with increase in design complexity.
- Jobs in verification continue to rise in comparison to other ASIC skills.
- Considering the demand, the VIP business is alluring. The business models for VIP (mostly license based) vs IP (mostly royalty based) make it even more appealing.
 
FACTS about VERIFICATION

~66% of projects are still not on schedule, with functional bugs causing 50+% of respins.
Mean time a verification engineer spends on various activities –
   o 32% debugging
   o 28% test bench development
   o 27% writing and debugging tests
   o ~14% others
Median number of verification engineers engaged in related activities –
   o Formal analysis – 1.68 in 2007 → 1.84 in 2010
   o FPGA prototyping – 1.42 in 2007 → 2.04 in 2010
   o HW acceleration/emulation – 1.31 in 2007 → 1.86 in 2010

Trends –

- Random verification, which adds volume, is stagnating in the face of next-level verification challenges, mainly due to limits on hardware resources and debugging time.
- Random verification is still not the preferred approach at SOC level.
- HW assisted acceleration needs to evolve further to ease random verification debug.
- Mention of cloud computing was missing from the study as it is still evolving. However, it should provide a breather for the hardware resource constraint in random simulations.
- Adoption of formal verification should rebalance the time spent on various verification activities.
- Advancements in verification have been reactive, unable to control the functional failure rate as complexity increases.

Geographical TRENDS in VERIFICATION
Based on the adoption rate, the study data was restructured to rank the #1 & #2 regions (among North America, Europe, Asia minus India, and India) for each of the following items:

- Methodology adoption – code coverage, functional coverage, assertions, emulation, FPGA prototyping
- HVL adoption – SystemVerilog, Specman e, Vera, SystemC
- BCL methodology adoption – UVM, OVM, VMM, eRM, AVM, RVM
Trends -
- India is among the top 2 in 10 of these 15 items, and is only marginally lagging in 4 of the rest. This reflects the diverse skill set India has developed, particularly in verification. FPGA prototyping is the one area where it is last.
- Since functional coverage is not utilized as much at SOC level as at IP level, the data would make more sense if it separated IP verification and SOC verification categories for the above items.
- Adoption of SystemC as an HVL is debatable, as it is touted as a modeling language. In fact, with ESL gaining momentum, it will be interesting to watch the dynamics between SystemC and SystemVerilog adoption.
- Specman e adoption, though small, enjoys a strong base. If UVM adds multi-language support, it will stabilize ‘e’ adoption further.
Finally, the key to convergence in verification, as rightly pointed out by Wally Rhines, is a “shift from increasing cycles of verification to maximizing verification per cycle”, from both the EDA tools and the verification engineers.

NOTE - The data referenced from the study remains the sole property of Mentor Graphics.