Sunday, June 23, 2013

Leveraging Verification manager tools for objective closure

Shrinking schedules, coupled with expectations of ever more functionality, have opened the gates for reusability in the semiconductor industry. Reuse (internal or external) is constantly on the rise in both design & verification. Following are some of the reuse trends presented by Harry Foster at DVCLUB UK 2013, based on the 2012 Wilson Research Group study commissioned by Mentor Graphics.

Both IPs and SoCs now demand periodic releases targeting specific features for a customer or a particular product category. It is important to be objective in verifying the design for the given context, and to ensure the latest verification tools & methodologies do not dilute the required focus. With verification claiming most of the ASIC design schedule in terms of effort & time, conventional schemes fail at managing verification progress and delivering predictable closure. There is a need for a platform that helps direct the focus of constrained random verification (CRV), brings in automation around coverage, provides initial triaging of failures and aids methodical verification closure. While a lot of this has been done using in-house scripts, significant time goes into maintaining them. There are multiple solutions available in the market, and the beauty of being in consulting is that you get to play with most of them, depending on customer preference for a particular EDA flow.
 
QVM (Questa Verification Manager) is one such platform, provided by Mentor Graphics. I recently co-authored an article (QVM: Enabling Organized, Predictable and Faster Verification Closure) published in Verification Horizons, DAC 2013 edition. It is available on Verification Academy, or you can download the paper here too.

6 comments:

  1. Just read your Verification Horizons article. Very nice.

    With QVM, it looks like you are now capturing the coverage model in an XLS spreadsheet and transforming it to XML and then to UCDB. Does this allow you to skip writing the functional coverage in SystemVerilog?

    Your paper also concludes that "coverage grading helps avoid redundancy from random simulations." Do you have any metrics on the speedup of the tests?

    In Wally's DVCon 2011 talk, he mentioned that intelligent testbenches speed up verification by between 10X and 100X. Is the speedup anything like that?

    Figures 1 and 4 show an annotation process. Does the tool automatically annotate the XLS, or do you do it by hand?

    Thanks,
    Jim

  2. Jim,

    Thanks much for reading. The questions you have asked are great!

    #1 We aren't capturing the coverage model in the XLS but the test plan. However, there is a TAG that associates each test-plan item with coverage (functional/code) or assertions in the actual code. So yes, you still need to develop the coverage in SV; however, we went ahead and developed a script for that too. When the number of cover points is huge, automating part of it gives a faster turnaround. A small sketch of a tagged covergroup is shown below.
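
    Just to illustrate the idea (a minimal sketch only; the TP_TAG string, field names and use of the comment option below are hypothetical, not the exact QVM convention), the covergroup carries the test-plan tag so the annotation step can match it to the corresponding row:

      module cov_sketch;
        // Hypothetical packet fields, used only for this sketch
        covergroup cg_pkt with function sample(bit [1:0] pkt_type, bit [3:0] length);
          option.comment = "TP_TAG: 3.1.2_pkt_types"; // tag matching a test-plan row
          cp_type   : coverpoint pkt_type;
          cp_length : coverpoint length {
            bins short_pkt = {[0:3]};
            bins long_pkt  = {[4:15]};
          }
        endgroup

        cg_pkt cov = new();

        initial begin
          // In a real bench the monitor would call sample() per observed transaction
          cov.sample(2'd1, 4'd7);
        end
      endmodule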

    #2 Coverage grading is best used when, say, you have already reached a certain level of coverage and then change the RTL (bug fix) or the TB during the verification process. To validate the change, you can define a regression suite using coverage grading that hits all the cover points with the minimum number of tests, instead of re-running random simulations. Once you are back at 100% you can run further random tests to hit unanticipated corners/bugs. A rough illustration of the grading idea is sketched below.
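
    For intuition only (this is not how Questa implements it; the test/bin matrix is made-up data), grading boils down to a greedy set-cover over per-test coverage results: keep picking the test that adds the most not-yet-covered bins, and stop when nothing new is added. In practice the ranking runs on the merged coverage database inside the tool rather than by hand.

      module grading_sketch;
        localparam int NUM_TESTS = 4;
        localparam int NUM_BINS  = 6;

        // hits[t][b] = 1 if test t hits coverage bin b (made-up data)
        bit hits [NUM_TESTS][NUM_BINS] = '{
          '{1,1,0,0,0,0},
          '{0,1,1,1,0,0},
          '{0,0,0,1,1,1},
          '{1,0,0,0,0,1}
        };

        initial begin
          bit covered [NUM_BINS];
          bit used    [NUM_TESTS];
          int best, best_gain, gain;

          // Greedy pass: pick the test adding the most uncovered bins, repeat
          repeat (NUM_TESTS) begin
            best = -1; best_gain = 0;
            for (int t = 0; t < NUM_TESTS; t++) begin
              if (used[t]) continue;
              gain = 0;
              for (int b = 0; b < NUM_BINS; b++)
                if (hits[t][b] && !covered[b]) gain++;
              if (gain > best_gain) begin best = t; best_gain = gain; end
            end
            if (best == -1) break; // remaining tests add no new coverage
            used[best] = 1;
            for (int b = 0; b < NUM_BINS; b++)
              if (hits[best][b]) covered[b] = 1;
            $display("Pick test %0d (adds %0d new bins)", best, best_gain);
          end
        end
      endmodule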

    #3 Wally's reference to an intelligent testbench is different from QVM. Mentor also has a tool, inFact, that lets you map your coverage model & input constraints in the form of a graph to generate automated test scenarios and hit all possible conditions.

    #4 Annotation is achieved through the TAG, which should be the same in the test plan & the coverage model. If the TAG matches, annotation is automatic.

    Thanks & Regards,
    Gaurav Jalan

  3. Hi Gaurav,
    Thanks for your answers.

    You seem to have missed my middle two questions. Your article claims that "coverage grading helps avoid redundancy from random simulations." How much redundancy do you avoid? Do I get a 2X speedup?

    If you are not getting better than a 2X speedup, why don't we all switch to an intelligent testbench approach, which suggests we can get a 5X or better speedup?

    Jim

  4. Hi Jim,

    Thanks for following up.

    The total turnaround time for regressions (speed) with coverage grading varies; it is a function of the coverage model's complexity and how the constraints are written. In terms of productivity gains, yes, it is much higher than 2X.

    Intelligent testbenches are still in their infancy. I am yet to see results for a complex IP verified using intelligent TB solutions. Another point is the setup: an intelligent TB demands representing the constraints & coverage as a graph, while grading requires almost zero setup time. However, once intelligent TB solutions mature, I do believe they will see a good adoption rate.

    Let me know your opinion on this.

    Thanks & Regards,
    Gaurav Jalan

  5. Hi Gaurav,
    > Intelligent test benches ... Another point is the setup.
    In VHDL's OSVVM, we make it easy. Skip writing top-level randomization constraints. Write a coverage model. Randomize. If randomizing across input coverage, apply the transaction and repeat. If randomizing across output coverage, the desired result is decoded (case statement) and a sequence of transactions (replacing the graph) is generated (using constrained random or directed).

    Like other intelligent testbench approaches, coverage closure is the strength of the approach. Unlike other intelligent testbench approaches, everything is written in VHDL: no vendor-specific extensions or directives.

    Jim

  6. Hi Jim,

    Thanks for sharing insight into VHDL-based methods. My references have mainly been to the other intelligent testbench approaches promoted by vendors, which involve graph theory.

    I have a few questions on the approach you mentioned:
    1. How is the link established between the coverage output dumped by the simulator and the generation of the next set of coverage parameters on the input side, i.e.:
    (a) Coverage would be dumped in a file. How do you read that?
    (b) Do you modify the constraints based on the data read, or do you store the results somewhere and re-run randomization until a value is generated that is not yet covered?
    2. On the output side you mentioned that the result is decoded and a sequence of transactions is generated. Is this a manual process?
    3. What is the complexity of the design that you have tried this approach on?

    Thanks & Regards,
    Gaurav Jalan
