The explosive growth of the cellular market has affected the semiconductor industry like never before. Product life cycles have moved onto an accelerated track to meet time-to-market. In parallel, engineering teams are on a constant quest to pack more functionality onto a given die size with higher performance and lower power consumption. To manage this, the industry adopted reusability in design & verification, and IPs & VIPs have carved out a growing niche market. Whether reuse happens by borrowing from internal groups or buying from external vendors, the basic question that arises is: will the given IP/VIP meet the specifications of the SoC/ASIC? To ensure that an IP serves the requirements of multiple applications, thorough verification is required. Directed verification falls short of meeting this target, and that is where Constrained Random Verification (CRV) plays an important role.
A recent independent study conducted by the Wilson Research Group, commissioned by Mentor Graphics, revealed some interesting results on the deployment of CRV. In the past 5 years –
- SystemVerilog as a verification language has grown by 271%
- Adoption of CRV has increased by 51%
- Functional coverage by 65%
- Assertions by 70%
- Code coverage by 46%
- UVM is expected to grow by 46% in the next 12 months
- Half of the designs over 5M gates use UVM
A well-defined strategy with Coverage Driven Verification (CDV) riding on CRV can really be a game changer in this competitive industry scenario. Unfortunately, most groups have no answer to this strategy and pick ad hoc approaches, only to lose focus during execution. At a very basic level, the focus of CRV is to generate random legal scenarios to weed out corner cases and hidden bugs that are not easily anticipated otherwise. This is enabled by developing a verification environment that can generate test scenarios under the direction of constraints, automate the checking and provide guidance on progress. CDV, on the other hand, uses CRV as the base while defining Specific, Measurable, Achievable, Realistic and Time-bound (SMART) coverage goals. These goals are represented in the form of functional coverage, code coverage or assertions.
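A minimal SystemVerilog sketch of these three ingredients side by side (the design, signal names and value ranges here are purely hypothetical):

```systemverilog
// Minimal sketch (hypothetical design): constrained random stimulus,
// functional coverage and an assertion working together
module crv_cdv_sketch (input logic clk, input logic req, input logic gnt);

  // 1. Constrained random stimulus: random, but always legal
  class bus_txn;
    rand bit [31:0] addr;
    rand bit [7:0]  len;
    rand bit        write;
    // Constraints bound the random space to legal values only
    constraint c_legal {
      addr inside {[32'h0 : 32'hFFFF]};
      len  inside {[1 : 64]};
    }
  endclass

  // 2. Functional coverage: measurable goals as coverpoints and a cross
  covergroup txn_cg with function sample (bus_txn t);
    cp_len   : coverpoint t.len { bins single = {1}; bins burst = {[2:64]}; }
    cp_write : coverpoint t.write;
    cp_cross : cross cp_write, cp_len;
  endgroup

  // 3. Assertion: every request must be granted within 1 to 4 cycles
  a_req_gnt : assert property (@(posedge clk) req |-> ##[1:4] gnt);

  // Drive the loop: randomize a transaction, sample coverage, repeat
  initial begin
    bus_txn t  = new();
    txn_cg  cg = new();
    repeat (100) begin
      void'(t.randomize());
      cg.sample(t);
    end
  end
endmodule
```

The constraints keep every randomized transaction legal, the covergroup turns the coverage goals into something measurable, and the assertion automates part of the checking.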
The key to successful deployment of CDV+CRV demands avoiding redundant simulation cycles while ensuring that the overall goals, defined (coverage) and perceived (verified design), are met. Multiple approaches to enable this are in use –
- Run random regressions while observing coverage trend analysis, until incremental runs stop hitting additional coverage. Analyze the coverage results and feed them back into the constraints to hit the remaining scenarios (a sketch of this follows the list).
- Run random regressions and use coverage grading to derive a defined regression suite. Use this suite for faster turnarounds, with a set of directed tests hitting the rest.
- Look for advanced graph-based solutions that help attain 100% coverage with an optimal set of inputs.
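As mentioned in the first approach above, coverage results are fed back into the constraints. A hypothetical sketch, building on the bus_txn class above, for the case where analysis showed long bursts to high addresses were never hit:

```systemverilog
// Hypothetical sketch: closing a coverage hole by layering a constraint
// on top of the bus_txn class from the earlier sketch
class bus_txn_long_burst extends bus_txn;
  // Coverage analysis showed long bursts to high addresses were never hit;
  // constraints added in a derived class are ANDed with the parent's,
  // biasing randomization toward exactly the missing scenarios
  constraint c_target_hole {
    len  inside {[48 : 64]};
    addr inside {[32'hF000 : 32'hFFFF]};
  }
endclass
```

Because the parent constraints still apply, the targeted test stays legal while focusing the random engine on the coverage hole.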
To define a strategy, the team needs to understand the following –
- Size of the design, coverage goals and schedule?
- Availability of HW resources (server farm & licenses)?
- Any transition from simulator to accelerator during execution?
- Turnaround time for regressions given the above inputs?
- Room to run further random regressions after achieving coverage goals?
- Does the design services partner bring complementary skills to meet the objective?
- Does the identified EDA tool vendor support all requirements to enable the process, i.e. simulator, accelerator, verification planner, VIPs, and a verification manager to run regressions, coverage analysis, coverage grading, trend analysis and other graph-based technologies?
A sample flow using CRV is given below -
Interesting article, Gaurav Jalan. Especially the sample flow using CRV. The picture makes the message even more sound.
Hi Gaurav,
Thank you for sharing with us.
I have a quick question: which verification method is better, Constrained Random Verification or Coverage Driven Verification? Which is more useful in the industry?
What would you say about UVM?
Thank you.
Hi Yatin,
CRV is the base and CDV provides focus to it. If you consider any design today and try to verify it based on the design implementation, the state space you would need to verify would be huge. It might be unreasonable to cover that in the given time & resources. This is where CDV comes into the picture. It helps by providing a list of features that need to be verified to ensure the IP works for the defined applications. Sometimes it even helps in prioritizing the features to be covered for each release of the IP.
As for UVM, it is now the de facto standard, supported by all EDA vendors and driven by an active Accellera community.
Hope that helps.
Hi Gaurav,
Thank you for your quick reply.
I am working on an Ethernet RGMII verification project. I just started. :)
I have one more quick question: in the ASIC flow, which process is the most time consuming, and why? I mean chip design (RTL), synthesis, STA, layout or verification.
As per my knowledge, verification is the most time consuming, though depending on the chip it could be design (RTL).
Thank you.
Yatin,
You should refer to the post below.
http://whatisverification.blogspot.in/2012/04/verification-claims-70-of-chip-design.html
It certainly is verification!
I read a rule of: 10% documentation, 10% design, 80% verification. It might be off by some, but the order of magnitude is about right.
I would just observe that Formal gives huge verification value, and does so right away (with dramatically smaller debug time). Also, adding a lot of assertions into the DUT means you get more bug catching, plus faster debug (isolating the bug in time & space). Plus you can run the assertions in Formal.
Debugging constrained random takes a LOT of time, unless you have lots of tools to help speed it up.
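A tiny sketch of the kind of embedded assertions being described, with hypothetical module and signal names:

```systemverilog
// Hypothetical sketch: protocol assertions embedded alongside a FIFO so a
// violation is flagged at the exact cycle and signal where it occurs
module fifo_checks (input logic clk, rst_n, push, pop, full, empty);
  // Never push when full, never pop when empty; the same properties
  // can be handed to a formal tool as proof targets
  a_no_overflow  : assert property (@(posedge clk) disable iff (!rst_n)
                                    push |-> !full);
  a_no_underflow : assert property (@(posedge clk) disable iff (!rst_n)
                                    pop  |-> !empty);
endmodule
```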
Hi Gaurav,
You say, "The key to successful deployment of CDV+CRV demands avoiding redundant simulation cycles ..."
Best case, any CRV approach is a uniform distribution. Theory tells us that it will take O(N log N) randomizations to generate N unique test cases. You might improve this a small amount with the SV CDV approach of observing coverage and, on the next run, stopping simulations once they no longer increase functional coverage. However, that still leaves parts of your test plan uncovered. Perhaps you change the controls in your testbench and try again to get these.
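For reference, this is the classic coupon collector bound: with a uniform distribution over N target cases, the expected number of randomizations needed to hit all of them is

E[T] = N \sum_{k=1}^{N} \frac{1}{k} = N \, H_N \approx N \ln N

so for N = 1000 unique cases that works out to roughly 7,500 randomizations on average.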
Fortunately, with Intelligent Testbenches, there is a better way. Unfortunately for SV users, this is another big cost. The great news is that for VHDL users, this next step is free if they choose OSVVM. More at OSVVM.org or http://www.synthworks.com/blog/osvvm
Best Regards,
Jim