At the recent NASCUG (North American SystemC User Group) meeting, veteran EDA investor Jim Hogan shared that SoCs command gross margins of 40-60%, compared to 10-20% for discrete ICs. Alongside, at DVCon, Mentor Graphics CEO Wally Rhines shared survey results highlighting that 52% of chip failures are still due to functional problems.
Put these two together and you see that while the trend is moving towards SoCs, the onus of extracting those gross margins lies largely on verification teams.
SoC verification has always been a tough nut to crack. Here's an article I authored, published at EETimes India, that discusses a comprehensive strategy for SoC verification.
LinkedIn Groups: Semiconductor Professionals Group
ReplyDelete"great thoughts covering end-to-end aspects of chip level verif"
Posted by Manit Saluja
LinkedIn Groups: OVM Professionals Network
"Software Driven Verification"
Posted by Harunobu Miyashita
LinkedIn Groups: Design Verification Professionals
Check out the verification kit available from Cadence (KITSOCV), available in the IES installation path.
- Portability of vectors across the verification hierarchy (block, subsystem, SoC)
- Interface eVCs/UVCs should be able to integrate with components (checkers, reference models) written in other languages; OVM_ML is one solution
- Good to start HW/SW co-verification early in the verification cycle
- Create SW sequences instead of passing file commands (see the sketch below)
- Split the SoC into subsystems, attain verification closure there, and then run integration tests alone at the SoC level
- Good to use acceleration to flush out critical bugs earlier in the verification phase
...and much more.
Posted by Kesava viswanathan Sivanthi
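To illustrate the portable-vectors and SW-sequence points above, here is a minimal UVM SystemVerilog sketch; all names (dma_seq_item, soc_bringup_vseq, etc.) are hypothetical, not from any particular kit:

`include "uvm_macros.svh"
import uvm_pkg::*;

// Hypothetical DMA transaction; the same item is used at block and SoC level.
typedef enum {DMA_READ, DMA_WRITE} dma_kind_e;

class dma_seq_item extends uvm_sequence_item;
  rand dma_kind_e kind;
  rand bit [31:0] addr;
  `uvm_object_utils(dma_seq_item)
  function new(string name = "dma_seq_item");
    super.new(name);
  endfunction
endclass

// Block-level sequence, written once and reused unchanged at SoC level.
class dma_write_seq extends uvm_sequence #(dma_seq_item);
  `uvm_object_utils(dma_write_seq)
  function new(string name = "dma_write_seq");
    super.new(name);
  endfunction
  task body();
    dma_seq_item item;
    `uvm_do_with(item, { kind == DMA_WRITE; })
  endtask
endclass

// SoC-level "SW sequence": drives the reused block sequence from the SoC
// virtual sequencer instead of passing file-based command lists.
class soc_bringup_vseq extends uvm_sequence;
  `uvm_object_utils(soc_bringup_vseq)
  uvm_sequencer #(dma_seq_item) dma_sqr; // handle set by the SoC env
  function new(string name = "soc_bringup_vseq");
    super.new(name);
  endfunction
  task body();
    dma_write_seq wr = dma_write_seq::type_id::create("wr");
    wr.start(dma_sqr, this); // block-level stimulus, reused as-is at SoC level
  endtask
endclass

The SoC environment only has to point dma_sqr at the right sequencer; the block-level sequence itself never changes as it moves up the hierarchy.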
LinkedIn Groups: Low Power and Power Management Designs
One that guarantees first-time silicon success, and quickly enough to hit your market window.
Does it exist? Probably not.
IMO you need an AMS design flow for digital that is not currently supported, which leaves you doing sign-off verification with dumb digital (Verilog + SDF) or with overly detailed fast-SPICE (limiting your coverage).
Posted by Kevin Cameron
LinkedIn Groups: UVM Verification Professionals Group
Anything else to add apart from this article? :)
Posted by Pradeep Salla
LinkedIn Groups: UVM Verification Professionals Group
I like to draw a parallel with a similar question in the real estate business. One may ask oneself, "What makes an acquisition a good one?" A saying goes, "There are three important criteria: location, location and location." :-)
Similarly, in my opinion, an optimal SoC verification strategy rests on three criteria: "planning, planning and planning." Without planning you just cannot be successful on a complex SoC verification project.
Posted by Alain Gonier
Very true if you are given plenty of time. Time is the key. Based on timelines, I would say: reuse, targeting the new changes, and resources.
Posted by Pradeep Salla
LinkedIn Groups: Low Power and Power Management Designs
The majority of respins in ASICs are due to functional bugs that were not tested. This can happen due to a poor specification or implementation, or because verification simply missed them.
Quick verification is not exhaustive. So, briefly, the steps would be:
1. Spec it right, derive a reference C model from the spec, and do model simulation/analysis to flush out any architectural bugs (see the sketch after this list)
2. Follow a right-by-design philosophy
3. Exhaustive verification: functional and code coverage, with periodic review of the various scenarios
4. Consider formal verification (not equivalence checking) early
5. Use adequate resources; no cutting corners
Posted by Rajakeerthy Ramesh
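As a flavor of point 1, here is a minimal sketch of how a spec-derived reference C model can be hooked into a SystemVerilog scoreboard via DPI; the model's name and signature (ref_model_compute) and the transaction fields are hypothetical:

`include "uvm_macros.svh"
import uvm_pkg::*;

// Hypothetical reference C model entry point, derived from the spec and
// compiled alongside the testbench.
import "DPI-C" function int ref_model_compute(input int opcode,
                                              input int a, input int b);

// Illustrative transaction carrying the DUT result captured by a monitor.
class alu_txn extends uvm_sequence_item;
  rand int opcode, a, b;
  int result;
  `uvm_object_utils(alu_txn)
  function new(string name = "alu_txn");
    super.new(name);
  endfunction
endclass

// Scoreboard: every DUT result is checked against the C model, so any
// architectural divergence shows up long before silicon.
class alu_scoreboard extends uvm_subscriber #(alu_txn);
  `uvm_component_utils(alu_scoreboard)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void write(alu_txn t);
    int expected = ref_model_compute(t.opcode, t.a, t.b);
    if (t.result != expected)
      `uvm_error("SB", $sformatf("opcode %0d: DUT %0d != model %0d",
                                 t.opcode, t.result, expected))
  endfunction
endclass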
LinkedIn Groups: Low Power and Power Management Designs
I would like to echo the feedback from Ramesh.
For 1, with respect to low power, the spec is the power intent. You can write it in CPF or UPF, capturing power domains, power modes, and the associated low-power logic inferred from this intent.
For 2 & 4, the key is the methodology. One idea is to embrace closed-loop verification at each step of the design, including power intent verification, power-intent-aware RTL simulation, post-implementation power intent checks (every time a netlist changes), power intent equivalence checking (i.e., RTL + CPF vs. netlist + CPF), and a final power structure signoff check on the physical Verilog (a simplified flavor of such a check is sketched below). Here is a blog with some useful info: http://www.cadence.com/Community/blogs/lp/archive/2010/11/02/quot-cadence-low-power-verification-tear-down-these-walls-quot.aspx
For 3, what can be done is to deploy a "Metric-Driven Verification" methodology, starting with the verification plan and followed by testbench creation and coverage analysis. Refer to this blog for more info: http://www.cadence.com/Community/blogs/lp/archive/2010/10/19/digital-centric-mixed-signal-dynamic-power-verification-bringing-it-all-together.aspx
For 5, I cannot agree with Ramesh more. Once the methodology is established and the flow is constructed, you need to commit the investment to execute.
Posted by Qi Wang
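In a real flow the tool infers such checks from the CPF/UPF itself, but to make the power-intent-aware simulation idea concrete, here is a simplified SystemVerilog sketch with hypothetical signal names (pwr_on, iso_en, dom_out):

// Simplified flavor of a power-intent-aware check: when a switchable
// domain is off, isolation must be enabled and its outputs clamped.
module power_intent_checks (
  input logic clk,
  input logic pwr_on,  // domain power-switch state
  input logic iso_en,  // isolation enable from the power controller
  input logic dom_out  // a domain output seen by an always-on consumer
);
  // The domain must never be powered off without isolation enabled.
  a_iso_on_when_off: assert property (@(posedge clk) !pwr_on |-> iso_en)
    else $error("Domain powered off without isolation enabled");

  // While isolated, the output must hold its clamp value (clamp-to-0 assumed).
  a_clamp_value: assert property (@(posedge clk) iso_en |-> dom_out == 1'b0)
    else $error("Isolated output not clamped to 0");
endmodule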
If you use CPF/UPF + Verilog you are mostly defining intent for synthesis (and test), but the current synthesis flows just produce more Verilog rather than producing Verilog-AMS, which can actually handle power properly. Neither Verilog nor Verilog-AMS has a back-annotation methodology that supports logic or power wiring, which forces you into extraction and fast-SPICE. The reason this stuff doesn't work properly is mostly that Cadence has refused to provide or back proposals to fix it at the Accellera Verilog-AMS committee since the mid-90s, so buy into their methodology at your own risk; it's anything but smart.
In other words, Ramesh is correct, but the tools don't really help you.
Posted by Kevin Cameron
It seems that your information on Cadence tool support for Verilog-AMS may still be stuck in the mid-90s. Technically, there is nothing to stop a simulator from running AMS simulation along with power intent such as CPF or UPF. In fact, the latest simulator from Cadence can simulate Verilog-AMS with power shutoff in the digital portion of the circuit, automatically converting a logical corruption value into a voltage value to drive the analog portion, with help from the information specified in the power intent file. Nevertheless, pure digital simulation with a power intent file has become mainstream, now that every major EDA vendor supports the methodology.
Posted by Qi Wang
@Qi: technically you are correct, Verilog-AMS will simulate the stuff (it's a committee I still work on occasionally). My point is that post-synthesis the logic description should be Verilog-AMS versions of the cells, capable of back-annotation with SPEF, not Verilog & SDF (and "intent"). A Verilog-AMS/SPEF flow would be able to handle variable voltage supplies and back-biasing; Verilog & SDF can't.
None of the big EDA companies have a decent flow for handling power management.
Posted by Kevin Cameron
LinkedIn Groups: UVM Verification Professionals Group
Knowledge is the key. You need to know what the current strategy does to be able to optimize it. Everything, from optimizing tools, licenses, resources, training, methodologies, languages, flows, planning, failure rate, development cycles, code churn, etc., requires knowledge gained from measurements and calculations (aka profiling), so that you focus on improving the worst-performing areas of your SoC strategy.
Posted by Adiel Khan
LinkedIn Groups: UVM Verification Professionals Group
It all starts from the verification plan. SoC verification is inherently fractured: multiple levels of IP scope, TLM down to gate simulation, low-power verification, blending of formal and simulation, analog, and embedded software. Keeping track of all those niches is critical to SoC success. And it all stems from an executable plan that captures the overall SoC verification intent, tracks it through all levels of abstraction, and measures convergence against the plan.
Posted by Adam Sherer
LinkedIn Groups: Design Verification Professionals
I would also say: have a layered checker methodology, which allows you to encapsulate most of the complexity of your reference model design at one level and the transaction flow at another. This will also help you house your coverage points and coverage crosses at the appropriate place in the checker hierarchy (see the sketch below).
The second thing I would add is increased usage, and documentation of key issues, of VIPs. Your SoC verification strategy must focus on reusable third-party VIPs.
Posted by Kartik Subramanium
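To make the layering point concrete, here is a minimal UVM sketch in which transaction-level crosses live in the flow-level checker rather than in the interface monitors; the bus_txn fields are hypothetical:

`include "uvm_macros.svh"
import uvm_pkg::*;

// Illustrative bus transaction observed by a monitor lower in the hierarchy.
class bus_txn extends uvm_sequence_item;
  rand bit [3:0] master_id;
  rand bit [1:0] burst;
  `uvm_object_utils(bus_txn)
  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass

// Flow-level checker: the right home for crosses spanning transaction fields.
class bus_flow_checker extends uvm_subscriber #(bus_txn);
  `uvm_component_utils(bus_flow_checker)

  covergroup flow_cg with function sample(bus_txn t);
    cp_master: coverpoint t.master_id;
    cp_burst : coverpoint t.burst;
    master_x_burst: cross cp_master, cp_burst; // cross housed at this layer
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    flow_cg = new();
  endfunction

  function void write(bus_txn t);
    flow_cg.sample(t); // ordering/reference-model checks would also live here
  endfunction
endclass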
LinkedIn Group: Electronic System Realization
Excellent article!
Do you feel that the use of SystemC in verification will increase going forward, as verification is shifting to complete system verification (hardware + low-level software)? The latest version of UVM also supports SystemC/TLM 2.0.
SoC companies are in any case creating virtual platforms of their SoCs using SystemC/TLM 2.0 for embedded software development. The same models could be reused along with UVM for SoC verification.
What are your thoughts about this ?
Regards,
Umesh Sisodia
Hi Umesh,
SystemC is moving forward as a modeling language with support from the TLM 2.0 standard. I am sure that for defining a system-level platform (virtual prototyping) it offers a good base. Further to this, adoption would increase if the industry moves towards behavioral modeling. SystemC as an HVL is still debatable. It has to compete with SystemVerilog, and I wonder whether SystemVerilog, with its TLM 2.0 support, would instead start eating up the modeling part as well.
As for UVM, I suppose the support is mainly for TLM 2.0-compatible interfaces, and so far UVM supports only SystemVerilog (the SystemVerilog side of such a connection is sketched below). Hopefully, with multi-language support, more coherence will come in.
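As an illustration of that TLM 2.0 support, here is a minimal sketch of the SystemVerilog side (UVM 1.1+): a blocking initiator socket driving a generic payload, the same transaction style a SystemC virtual-platform model would receive; the component name and address are hypothetical:

`include "uvm_macros.svh"
import uvm_pkg::*;

// Blocking TLM-2.0 initiator sending a generic-payload write.
class vp_initiator extends uvm_component;
  `uvm_component_utils(vp_initiator)
  uvm_tlm_b_initiator_socket #(uvm_tlm_generic_payload) sock;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    sock = new("sock", this);
  endfunction

  task run_phase(uvm_phase phase);
    uvm_tlm_generic_payload gp  = new("gp");
    uvm_tlm_time            dly = new("dly");
    byte unsigned           data[] = new[4];

    phase.raise_objection(this);
    gp.set_address('h1000_0000); // hypothetical register address
    gp.set_write();
    gp.set_data(data);
    gp.set_data_length(4);
    sock.b_transport(gp, dly);   // blocks until the target completes
    phase.drop_objection(this);
  endtask
endclass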
Your thoughts...
LinkedIn Groups: UVM Verification Professionals Group
From the designer side, "design for verification" should also be taken into account.
Posted by Enzo Chi
I agree that planning is the most important. But you also have to have a solid risk-mitigation plan to address both technical and non-technical risks.
Posted by Vijayabhaskar Sankaranarayanan
LinkedIn Group: UVM Verification Professionals Group
Verification architecture and strategy call for the same effort levels as design architecture. There are firms that think verification is an overhead cost. When someone is looking to get an SoC out, they have to think about verification resources the same way they think about design architecture. Functional bugs mostly stem from non-technical decision making by non-technical authorities in an organization.
Posted by Jothi