Wednesday, June 29, 2011

Gate Level Simulations : A Necessary Evil - Part 2

In the previous post, we discussed why GLS is necessary. In this post, we take a look at why it is challenging.

Having GLS in the design flow means it needs to be planned and started quite early in the verification cycle. GLS has to pass through various stages before sign off and serves to check both functionality and timing (complementing STA & LEC). The setup of GLS starts off when the prelim netlist is released. Since this netlist is prone to functional and timing bugs, the GLS bring up uses selected functional tests with zero/unit delay simulations (timing checks and specify blocks turned off). This helps in setting up the flow for GLS and confirms that the netlist is properly hooked up. Later, with higher-confidence netlist releases, pre-layout SDF can be tried out until the final SDF is delivered. With the final netlist and post-layout SDF in place, GLS claims a lot of simulation time and debugging effort before sign off.
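As a rough sketch, the bring-up (zero/unit delay) and final (SDF-annotated) modes described above can live in one testbench wrapper. All module, instance, and file names below are hypothetical; only `$sdf_annotate` is a standard Verilog (IEEE 1364) system task, and the exact compile switches vary per simulator:

```verilog
// Hypothetical GLS testbench wrapper -- names are illustrative.
module gls_tb;
  reg clk = 1'b0;
  reg rst_n;
  wire [7:0] data_out;

  always #5 clk = ~clk;  // free-running testbench clock

  // The DUT here is the synthesized netlist, not the RTL.
  chip_top dut (.clk(clk), .rst_n(rst_n), .data_out(data_out));

  initial begin
`ifdef SDF_ANNOTATE
    // Final GLS: back-annotate post-layout delays onto the netlist.
    $sdf_annotate("chip_top.postlayout.sdf", dut);
`endif
    // Bring-up runs instead compile with timing checks off
    // (e.g. +notimingcheck) and a zero/unit delay mode, so no SDF
    // is annotated and only connectivity/functionality is exercised.
    rst_n = 1'b0;
    #100 rst_n = 1'b1;
  end
endmodule
```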

Why Evil?

Some of the challenges for GLS closure include –

- GLS with timing is inherently slow. Careful planning is required to decide the list of vectors to be simulated on the netlist, the turnaround time for these tests and the memory requirements for dumps and logs.
- Identifying the optimal list of tests to efficiently utilize GLS in discovering functional or timing bugs is tough. Coverage/Assertions dedicated for GLS are required to assure completeness of these tests.
- Prelim netlist simulations without timing are prone to race conditions. With an improperly balanced clock tree, data propagation to the next stage sometimes happens before the clock edge, leading to unusual results.
- During the initial stages of GLS, identifying the list of flops/latches that need to be initialized (forced reset) is a big hurdle. Simulation pessimism drives an X on the output of all such components without a proper reset, bringing GLS to a standstill.
- During clock tree synthesis, the clock port/net is replaced with a clock tree of buffers/inverters to balance the clocks. Monitors/Assertions hooked to the clock port in the test bench during RTL simulations need to be revised for GLS to make sure the intended net is getting probed.
- A lot (hierarchy, net naming etc.) changes with each netlist release, leading to a lot of rework on the force statements (if any remain) as well as on the assertions/monitors.
- For netlist simulations with timing checks, one needs to remove the synchronizer setup/hold constraints from the SDF file or disable timing checks on these component instances. Getting a list of these synchronizers is a daunting task if certain coding conventions aren’t followed.
- Assertions are increasingly used for functional verification to improve observability and error detection. Reusing these assertions for GLS raises reliability concerns due to issues arising from incorrect clock connections or negative hold times causing assertions to sample at undesirable times.
- Debugging netlist simulations is one of the biggest challenges. In GLS, ‘X’s are generated if there is a violation of timing requirements on any of the netlist components or when a given input condition is not declared in a UDP’s (User Defined Primitive’s) table. Identifying the right source of the problem requires probing the waveforms at length, which means huge dump files or rerunning simulations multiple times to get the right timing window of failure. Engineers tend to get lost easily while debugging the X propagation source.
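To make the X-generation mechanism in the last point concrete, here is a minimal sketch of how a library flop model typically turns a timing violation into an X, assuming a vendor-style cell built from a sequential UDP plus a specify block. All names and delay values are illustrative; real cell models come from the vendor library:

```verilog
// Illustrative sequential UDP for a rising-edge D flop.
primitive dff_udp (q, clk, d, notifier);
  output q; reg q;
  input clk, d, notifier;
  table
    //  clk  d  notifier : q : q+
        r    0  ?        : ? : 0 ;  // capture 0 on rising edge
        r    1  ?        : ? : 1 ;  // capture 1 on rising edge
        f    ?  ?        : ? : - ;  // falling edge: hold state
        ?    *  ?        : ? : - ;  // data change away from edge: hold
        ?    ?  *        : ? : x ;  // notifier toggles -> output goes X
  endtable
endprimitive

module dff_cell (output q, input clk, d);
  reg notifier;
  dff_udp u0 (q, clk, d, notifier);
  specify
    // A setup/hold violation toggles 'notifier', which the UDP
    // table maps to an X on q -- the X-propagation source in GLS.
    // Disabling timing checks on synchronizer instances suppresses
    // exactly this mechanism.
    $setuphold(posedge clk, d, 0.2, 0.1, notifier);
  endspecify
endmodule
```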

Amidst all these challenges, GLS has been part of the ASIC design flow for decades making it a Necessary Evil!


  1. nice to b with GLS team .....

    -Srikanth Kolli

  2. very good write up on GLS. mixed simulations and stubbed gls can further speed up sims.

  3. LinkedIn Groups : EDA SaaS Enthusiasts

    I think you maybe failed to mention that what you really test with GLS is mostly problems related to wiring that is generated by place & route. Most of the loading on gates is now wiring, and there may be related power distribution issues. Good results with Verilog depended on the back-annotation working properly (SDF generated from extraction), but that's a flow that was created for designs a couple of decades ago before wiring was dominant, so I'm highly skeptical that it actually works anymore. You're probably only going to get good results with 'fast' spice these days. I'm also skeptical that STA is reliable in power-managed flows.

    Posted by Kevin Cameron

    Hi Kevin,

You bring up a good point. Yes, there are issues coming from the wire loads that dominate the power dissipation more than ever. The intent of this post is to analyze the need for GLS for the issues that the industry has been facing for a couple of decades, with still no specific solution. Interestingly, most companies including the Top 20 in semiconductors are still using GLS for the traditional reasons too.
I deliberately skipped the power related issues as there are many; maybe material for a separate post under 'power simulations' :)

Thank you for sharing the alternate aspect.

    Posted by Gaurav Jalan

  4. LinkedIn Groups : EDA SaaS Enthusiasts

    The (/my) original design of Verilog-AMS was intended to tackle this problem - i.e. it allowed back annotation of SPEF level wiring descriptions into Verilog instead of SDF. Unfortunately Cadence broke it and have made no attempt to fix it. Likewise nobody has implemented proper power routing in Verilog either.

    I have since moved on to factoring in power-management and Silicon variance, unfortunately that really needs a fully functional Verilog-AMS too.

    So , yes you do need to do GLS, but if you want good tools that do it properly you might be out of luck.

    Posted by Kevin Cameron

    I agree we are out of luck on good tools/flow for GLS :(

    Posted by Gaurav Jalan

  5. LinkedIn Groups : Coffee with SOC design verification experts

    I have just started reading this blog and the first item already got my attention:
    "- To verify critical timing paths of asynchronous designs that are skipped by STA. "

I think this is not a meaningful reason. By definition asynchronous paths don't have timing paths which can be exhaustively checked, as timing conditions at the source and target continuously vary. Asynchronous blocks have to be correct by design and no amount of simulation will check the timing behavior of such paths.

    Posted by - Muzaffer Kal

Thanks for zooming into the sentence. Yes, simulation is not intended to check timing only. It is the interaction that we verify, covering both the functional and timing aspects, and not the timing path, as clearly stated by you: "asynchronous paths don't have timing paths". The end-to-end path spanning multiple clock domains needs to be verified in totality on the functional aspect, with timing introduced, to ensure that it still works fine.

    Posted by Gaurav Jalan

  6. LinkedIn Groups : EDA SaaS Enthusiasts

    I do have a plan for fixing the entire simulation stack, but it's mostly an open-source effort, and it isn't moving at any speed. My current approach is to tackle the ESL market with this -

    - and work down into the gate/transistor/ams stuff, with thin translators for Verilog/VHDL. The bulk of SystemVerilog was misguided and should have been done in C++ anyway.

    Unfortunately no customers are showing any enthusiasm for change or want to put money into EDA R&D, so I'm focusing on more lucrative activities just now.

    Posted by Kevin Cameron

  7. LinkedIn Groups : Coffee with SOC design verification experts

    I am fully behind GLS, however, I have a couple of concerns:

    "Since this netlist is prone to functional and timing bugs, the GLS bring up uses selected functional tests with zero/unit delay (timing checks and specify blocks turned off) simulations. This helps in setting up the flow for GLS and confirm that the netlist is properly hooked up."

    Since you mention later that zero/unit delay simulations are problematic, why do them? Is your concern the connectivity check? Wouldn't it be easier to add extra ports to the RTL design (for any BIST features) and use the same proven test harness rather than have a separate test harness for GLS?

    "- Identifying the optimal list of tests to efficiently utilize GLS in discovering functional or timing bugs is tough. Coverage/Assertions dedicated for GLS are required to assure completeness of these tests."
    Perhaps timing bugs deserve their own point? It really sets me off that you include timing bugs in the same sentence with the words identify, optimal, and efficient. My experience has been that they require a separate custom written set of vectors to go after the exceptions that were used in STA. Perhaps you can blog further on discovering timing bugs with GLS.

    Posted by Jim Lewis


    Thanks much for reading the post and sharing your thoughts.
    Addressing your concerns below.

    #1 Whatever I have mentioned are the trends in the industry. Some are there because nobody bothered to change them.
    For the test vectors at this stage they could be borrowed from RTL test vector list to execute over the GLS TB.
    We use this for 2 reasons -
    1. Initial GLS flow setup for that particular ASIC/SOC version which is an effort intensive process.
    2. Ensure Netlist hook up. As you mentioned that we could as well do the same in RTL but there are certain inhibitions -
    (a) Not everything can be done @ RTL. Implementation tools introduce changes throughout the netlist schema.
    (b) For power aware simulations, one may want to develop directed tests for which a GLS setup is required.
(c) Even the LEC flow is getting set up at this point, so a hook-up check with a different pair of eyes is always better.

    Yes, there are issues with unit/zero delay models. Some organizations even use an estimated SDF over these models.

    #2 The answer lies in your description itself. I guess the confusion is due to the word 'identifying'.
I meant, reusing test vectors from RTL and writing directed ones (particularly for timing exceptions) to hit the right scenario is tough.

    Posted by Gaurav Jalan

  8. Hi Gaurav,

    I am trying to understand the difference between Zero and Unit delay simulations. Are they both the same? Do they have different set of goals to achieve?


  9. Hi Sumanth,

The purpose is the same but the functionality differs. Both are typically used to set up GLS or to run simulations confirming that the netlist connectivity is correct in the absence of SDF. The difference shows up while simulating, i.e. zero delay assumes every logical component in the active datapath contributes zero delay, while unit delay assumes each of these components contributes 1 unit of delay. Both approaches have pros & cons. You can look for more details on them in the reference documentation of the simulator you are using. If you have any specific query do let me know.
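The difference can be sketched on a tiny netlist fragment. The gate delays and instance names below are illustrative, and the delay-mode switches mentioned in the comments (e.g. +delay_mode_zero, +delay_mode_unit) are simulator-specific options that override the delays written in the netlist:

```verilog
// Illustrative netlist fragment; names and delays are hypothetical.
// As written, each gate adds its distributed #-delay (2 and 3 units).
// Under a zero-delay mode every gate delay is treated as #0, so the
// whole path evaluates in the same simulation time step; under a
// unit-delay mode each gate contributes #1, preserving event ordering
// through the logic without modeling real timing.
module path (input a, b, c, output y);
  wire n1;
  and #2 g1 (n1, a, b);  // zero mode: #0, unit mode: #1
  or  #3 g2 (y, n1, c);  // zero mode: #0, unit mode: #1
endmodule
```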

  10. Hi Gaurav,

    I have one basic query for zero delay simulation.
> do we require the +notimingcheck switch for zero delay?
> or should we use "+delay_mode_zero" instead of +notimingcheck?


  11. Hi Vijay,

#1 For zero delay simulations there is no concept of timing checks, as the cells assume zero delay.
#2 Each simulator has its own way to enable zero delay mode; +delay_mode_zero could be specific to one simulator.

    Let me know if that answers your questions

Hi Gaurav,

Why is the netlist generated from RTL prone to bugs? Can you explain clearly?