Sunday, April 20, 2014

Sequence Library in UVM

Looking into the history of verification, we learn that test benches and test cases came into the picture when RTL was first represented in Verilog or VHDL. As complexity grew, there was a need for another pair of eyes to verify the code and relieve the designer of a task that kept growing into a humongous problem. The slow and steady pace of directed verification couldn't cope with the rising demand, and constrained random verification (CRV) stepped in to fill the gap. Over the years the industry experimented with different HVLs and methodologies before settling on SystemVerilog based UVM. The biggest advantage of CRV was the automatic generation of test cases, creating scenarios that the human mind couldn't comprehend. Coverage driven verification (CDV) further complemented CRV by helping converge an otherwise unbounded problem. In the struggle to hit user defined coverage goals, verification teams sometimes forget the core strength of CRV, i.e. finding hidden bugs by running random regressions. UVM provides an effective way to accomplish this aim through the use of a sequence library.

What is a Sequence Library?

A sequence library is a collection of registered sequence types derived from uvm_sequence.

A sequence, once registered, shows up in the sequence queue of that sequence library. Reusability demands that a given IP be configurable so that it plugs seamlessly into different SoCs catering to varied applications. The verification team can develop multiple sequence libraries to enable regressions for the various configurations of the IP. Each library can be configured to execute its sequences any number of times, in different orders, depending on the selected MODE. The available modes in UVM include –

UVM_SEQ_LIB_RAND  : Randomly select any sequence from the queue
UVM_SEQ_LIB_RANDC : Randomly select from the queue without repeating a sequence until all sequences are exhausted
UVM_SEQ_LIB_ITEM  : Generate and execute individual sequence items directly, without selecting sequences from the queue
UVM_SEQ_LIB_USER  : Call the select_sequence() method, whose definition can be overridden by the user

Steps to set up a Sequence Library

STEP 1 : Declare a sequence library. You can declare multiple libraries for a given verification environment.
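A minimal sketch of such a declaration (the library and transaction names, ahb_seq_lib and ahb_transaction, are illustrative):

class ahb_seq_lib extends uvm_sequence_library #(ahb_transaction);

  `uvm_object_utils(ahb_seq_lib)
  `uvm_sequence_library_utils(ahb_seq_lib)

  function new(string name = "ahb_seq_lib");
    super.new(name);
    init_sequence_library(); // populates the queue with the sequence types registered for this library
  endfunction

endclass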

STEP 2 : Add user defined & relevant sequences to the library. One sequence can be added to multiple sequence libraries.
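One way to register a sequence with a library is the `uvm_add_to_seq_lib macro; a minimal sketch with illustrative names:

class ahb_write_seq extends uvm_sequence #(ahb_transaction);

  `uvm_object_utils(ahb_write_seq)
  // Adds this sequence type to ahb_seq_lib's queue; repeating the macro with
  // another library type registers the same sequence there as well
  `uvm_add_to_seq_lib(ahb_write_seq, ahb_seq_lib)

  function new(string name = "ahb_write_seq");
    super.new(name);
  endfunction

  virtual task body();
    // generate WRITE transactions here
  endtask

endclass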

STEP 3 : Select one of the 4 modes given above based on the context, e.g. run fully random for regressions, run a basic sequence item for sanity testing, or use a user defined mode for something like a set of sequences targeting the part of the code where a bug was recently fixed. A way to configure the mode from the test is sketched below.
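Following the usage documented for uvm_sequence_library, the mode can be pushed from the test through the config DB (the sequencer path env.agent.sequencer is illustrative):

uvm_config_db #(uvm_sequence_lib_mode)::set(this,
                                            "env.agent.sequencer.main_phase",
                                            "default_sequence.selection_mode",
                                            UVM_SEQ_LIB_RANDC);

For the random modes, the number of sequences executed per run can additionally be bounded through the library's min_random_count and max_random_count fields.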

STEP 4 : Set the sequence library as the default sequence for a given sequencer from the test. This setting goes hand in hand with STEP 3, i.e. the context; a sketch follows.
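Assuming the same illustrative sequencer path, the library type is set as the default sequence for the chosen phase:

uvm_config_db #(uvm_object_wrapper)::set(this,
                                         "env.agent.sequencer.main_phase",
                                         "default_sequence",
                                         ahb_seq_lib::get_type());

The sequencer then creates, randomizes and starts the library in that phase, and the library in turn picks sequences from its queue as per the mode chosen in STEP 3.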


Advantage

The sequence library is one of the simplest and most effective, yet sparingly used, mechanisms in UVM. By just changing modes, a user can achieve sanity testing, mini regressions and constrained random regressions from the same test. Further, the library helps achieve the goal of CRV, generating complex scenarios by calling sequences randomly and thereby finding hidden bugs that would otherwise show up during SoC verification or in the field.

As Albert Einstein rightly said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong". It is important to make our experiments random enough to improve the probability of hitting that single experiment which proves the RTL is not an exact representation of the specification. The sequence library in UVM does this effectively!!!


Previous posts -

Sunday, April 13, 2014

Hierarchical Sequences in UVM

Rising design complexity is leading to a near exponential increase in verification effort. The industry has embraced verification reuse by adopting UVM, deploying VIPs and plugging block level env components into sub system or SoC level environments. According to a verification study conducted by Wilson Research in 2012 (commissioned by Mentor), engineers spend ~60% of their time developing/running tests and debugging failures. While the report doesn't zoom in further, the verification fraternity would agree that a considerable chunk of this bandwidth is consumed because the test developer didn't have flexible APIs (sequences) to create tests, and an ill planned verification environment led to extra debug. UVM provides a recommended framework that, if incorporated effectively, can overcome such challenges. Hierarchical sequences in UVM is one such concept, suggesting sequence development in a modular fashion to enable easier debug, maintenance and reuse of the code.

As in LEGO, hierarchical sequences postulate developing base structures and assembling them in an orderly fashion to build the desired structure. This process demands considerable planning right from the beginning, with the end goal in mind. The definition of the base sequences determines the flexibility offered for effective reuse. The aim is to develop atomic sequences such that any complex sequence can be broken down into a series of base sequences. A few aspects worth considering include –

TIP 1: Define a sequence that generates a basic transaction depending upon the constraints passed to it. To implement this, local variables corresponding to the sequence item members are defined in the sequence. These fields can be assigned values directly while calling the sequence from higher level sequences or the test. The values received by the sequence are passed as inline constraints while generating the transaction. This gives the user full control to generate the desired transaction, as sketched below.
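A minimal sketch of such a base sequence (the class, transaction and field names are illustrative):

class L0_AHB_WRITE extends uvm_sequence #(ahb_transaction);

  `uvm_object_utils(L0_AHB_WRITE)

  // Local knobs mirroring the sequence item members
  rand bit [31:0] addr;
  rand bit [31:0] data;

  function new(string name = "L0_AHB_WRITE");
    super.new(name);
  endfunction

  virtual task body();
    // The values assigned by the caller become inline constraints on the item
    `uvm_do_with(req, { addr == local::addr;
                        data == local::data; })
  endtask

endclass

A higher level sequence or test can assign addr/data directly, or constrain them inline while randomizing this sequence, to get exactly the transaction it needs.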

TIP 2: Bundle the fields to be controlled into a configuration object and use the values set in this object to derive the inline constraints of the transaction to be generated. This is a more elaborate version of TIP 1, particularly useful when either the transaction class has a large number of variables or multiple base sequences share similar variables to be configured. This class can have fields directly linked to the sequence item plus additional variables to control the sequence behavior. It is best to define a method to set the config object. Note that this config object has nothing to do with the config DB in UVM; a sketch follows.
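A hedged sketch of this pattern (class and field names are again illustrative):

class ahb_seq_cfg extends uvm_object;

  `uvm_object_utils(ahb_seq_cfg)

  rand bit [31:0]   base_addr;
  rand int unsigned burst_len;
  bit               insert_idle_cycles; // extra knob controlling sequence behavior

  function new(string name = "ahb_seq_cfg");
    super.new(name);
  endfunction

endclass

class L0_AHB_READ extends uvm_sequence #(ahb_transaction);

  `uvm_object_utils(L0_AHB_READ)

  ahb_seq_cfg cfg;

  function new(string name = "L0_AHB_READ");
    super.new(name);
  endfunction

  // Setter keeps the config handle assignment explicit for the caller
  function void set_cfg(ahb_seq_cfg c);
    cfg = c;
  endfunction

  virtual task body();
    // Derive the inline constraints from the configuration object
    `uvm_do_with(req, { addr inside {[cfg.base_addr : cfg.base_addr + cfg.burst_len]}; })
  endtask

endclass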

TIP 3: UVM provides the configuration DB to program the components of the environment for a given scenario. As complexity increases, it is often desired to read the configuration of a given component, or to understand its current state, and use this information while generating the transaction. Having a handle to the UVM component hierarchy in the base sequence facilitates the development of sequences in such cases; one common way is shown below.
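One common way to get such a handle is the p_sequencer mechanism; in the sketch below, ahb_sequencer and its env_cfg member are assumptions standing in for whatever the environment actually publishes on the sequencer:

class L0_STATUS_READ extends uvm_sequence #(ahb_transaction);

  `uvm_object_utils(L0_STATUS_READ)
  // Provides a typed handle (p_sequencer) to the sequencer and, through it,
  // to the component/configuration handles the environment placed there
  `uvm_declare_p_sequencer(ahb_sequencer)

  function new(string name = "L0_STATUS_READ");
    super.new(name);
  endfunction

  virtual task body();
    // status_reg_addr is assumed to be set on the sequencer's env_cfg by the env
    `uvm_do_with(req, { addr == p_sequencer.env_cfg.status_reg_addr; })
  endtask

endclass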

Once the base sequences are defined properly, a hierarchy can be developed, enabling reuse and reducing the margin of error. Let's apply this to a bus protocol, e.g. AMBA AHB. Irrespective of the scenario, an AHB master finally drives either a READ or a WRITE transaction on the bus, so our base sequences can be L0_AHB_READ and L0_AHB_WRITE. These sequences generate an AHB transaction based on the constraints provided to them, as in TIP 1. The next level (L1) calls these sequences in a loop wherein the number of iterations is user defined. Further, we can develop a READ_AFTER_WRITE sequence wherein the sequences L1_LOOP_WRITE and L1_LOOP_READ are called within another loop such that it can generate a single write followed by a read OR a bulk write followed by reads. Using the above set of sequences, any scenario can be generated, such as configuring an IP, reading data from an array/file and converting it into AHB transactions, or interrupt/DMA sequences; a sketch of one such layer is given below.
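A minimal sketch of one such layer, reusing the L0 sequence from TIP 1 (the loop bound is illustrative):

class L1_LOOP_WRITE extends uvm_sequence #(ahb_transaction);

  `uvm_object_utils(L1_LOOP_WRITE)

  rand int unsigned num_iter; // user defined number of iterations
  constraint c_num_iter { num_iter inside {[1:16]}; }

  function new(string name = "L1_LOOP_WRITE");
    super.new(name);
  endfunction

  virtual task body();
    L0_AHB_WRITE wr_seq;
    repeat (num_iter) begin
      // Each iteration creates, randomizes and runs the atomic L0 sequence
      `uvm_do(wr_seq)
    end
  endtask

endclass

A READ_AFTER_WRITE sequence is built the same way, calling L1_LOOP_WRITE and L1_LOOP_READ inside another loop.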

Deploying UVM is a first step towards reuse. To control the effort spent on developing tests and debugging verification code, UVM needs to be applied effectively, and that goes beyond the syntax or the proposed base class library. Hierarchical sequences demand proper planning and a disciplined approach, but the observed returns are certainly multi-fold. This reminds me of a famous quote from Einstein – "Everything should be made as simple as possible, but not simpler!"


Suggested Reading -

Sunday, April 6, 2014

UVM : Just do it OR Do it right!

No, the title is not a comparison of captions but a gist of the discussions at UVM 1.2 day – a conference hosted by CVC Bangalore celebrating their decade long journey. The day started with a keynote from Vinay Shenoy, MD, Infineon Technologies India, where he discussed the evolution of the Indian industry over the last 30+ years and why India is a role model for services but lags in manufacturing. He also shared some rich insights into the Indian government initiatives to bridge this gap. Vikas Gautam, Senior Director, Verification Group, Synopsys, delivered the next keynote, raising a question on the current state of verification and what comes next after UVM. Later in the day, Anil Gupta, MD, Applied Micro India, in his keynote discussed the growth of the semiconductor industry, emphasizing the need for value creation and the expectations from verification teams adopting UVM. Pradeep captured the summary of these keynotes here – Vinay, Vikas and Anil. Further to this, Dennis Brophy, Director of Strategic Business Development, Mentor & Vice Chairman, Accellera, unleashed UVM 1.2, inviting engineers to participate in the open review before the final release.

Begin with an end in mind

While the phenomenal adoption rate of UVM has alleviated obvious worries, applying it effectively to address verification at various levels is still challenging. Deploying UVM in the absence of proper planning and a systematic approach is a perfect recipe for schedule abuse. This point was beautifully illustrated by Anil Gupta with a reference to carpentry as a profession. UVM is a toolset, similar to the hammer, chisel and lathe found with every carpenter. Successful accomplishment of the work depends upon a focussed effort towards the end goal. If a carpenter is building the leg of a chair, he needs to build that leg in the context of that specific chair. This ensures that when the chair is finally assembled it comes into shape instantly. Ignoring this leads to rework, i.e. delay in schedule, while affecting the stability and longevity of the chair. Similarly, while developing a UVM based environment, it is critical for the verification engineer to define and build it at block level such that it integrates seamlessly at the sub system or SoC level.

My 2 cents

Verification today demands a disciplined approach of marching towards the end goal. To start with, the verification architecture document at block level needs to address the reuse aspect of UVM components. Next comes the definition of sequence items and config DBs, avoiding glue logic while knitting components together at block level, and extending clear APIs to enable effective reuse. Performance of the SoC test bench depends on the slowest component integrated into it. Memory and CPU cycles consumed by each entity need to be profiled, analyzed and corrected at block level to weed out such bottlenecks. It is crucial to involve block level owners to understand, discuss and debate the potential pitfalls when aiming to verify an SoC that is nothing less than a beast. Breaking the end goal into milestones that can be addressed at block level ensures verification cycles at SoC level are utilized efficiently. An early start on test bench integration at subsystem/SoC level, to flush the flow and provide feedback to block owners, is an added advantage.

Following is a 5 point rating scale for measuring the effectiveness of the overall verification approach, particularly at IP level –


Remember to revisit this scale frequently while developing the leg of the chair.... I mean any UVM based entity that is scheduled for reuse further.... :)

Related articles –


Sunday, March 2, 2014

Back to the basics : Power Aware Verification - 1

The famous quote from the movie 'Spiderman' – “With great power comes great responsibility” – gets convoluted when applied to the semiconductor industry, where “with LOW power comes great responsibility”. The design cycle that once focused on miniaturization shifted gears to achieve higher performance in the PC era, and now, in the world of connected devices, it is POWER that commands the most attention. Energy consumption today drives the operational expense of servers and data centers, so much so that companies like Facebook are setting up data centers close to the Arctic Circle, where the cold weather reduces cooling expenses. On the other hand, for consumer devices, 'mean time between charging' is considered one of the key factors defining user experience. This means that environment friendly, green products resulting from attention to low power in the design cycle can bring down the overall cooling/packaging cost of a system, reduce the probability of system failure and conserve energy.

DESIGN FOR LOW POWER

Existing hardware description languages like Verilog and VHDL fall short of semantics to describe the low power intent of a design; these languages were primarily defined to represent the functional intent of a circuit. Early adopters of low power design methodology had to manually insert cells during the development phase to achieve the desired results. This process was error prone, with limited automation and almost no EDA support. ArchPro, an EDA startup (acquired by Synopsys in 2007), provided one of the early solutions in this space. Unified Power Format (UPF – IEEE 1801) and Common Power Format (CPF from Si2) are the two Tcl based representation approaches available today to define low power design intent. All EDA vendors support either or both formats to aid the development of power aware silicon.

VERIFYING LOW POWER DESIGN

Traditional simulators are tuned to HDLs, i.e. logic 1/0, and have no notion of voltage or of power turning on/off. For the first generation of low power designs, where cells were introduced manually, verification used to be mostly script based, forcing X (unknown) on internal nodes of the design and checking the outcome. Later, when power formats were adopted for design representation, verifying this intent demanded additional support from the simulators, such as –

- Emulating cells like isolation, state retention etc. during RTL simulations to verify the power sequencing features of the design. These cells are otherwise inserted into the netlist by the synthesis tool, taking the power format representation as the base
- Simulating power ON/OFF scenarios such that the block that is turned off has all outputs driven to X (unknown)
- Simulating the voltage ramp up cycle, i.e. once power is turned ON it takes some time for the voltage to ramp up to the desired level, and during this period the functionality is not guaranteed
- Simulating multi voltage scenarios in the design, where, in the absence of level shifter cells, the relevant signals are corrupted
- Simulating all of the above together, resulting in a real use case scenario

Tools from ArchPro (MVRC & MVSIM) worked with industry standard simulators through PLI to simulate power aware designs. Today all industry standard simulators can verify such designs with limited or complete support for the UPF & CPF feature lists. Formal/static tools are available to perform quick structural checks on the design, targeting correct placement of cells, correct choice of cells, correct connections to the cells, and ensuring power integrity of the design based on the power format definition at RTL and netlist level. Dynamic simulations further ensure that the power intent is implemented as per the architecture by simulating the power states of the design as functional scenarios.

CONCLUSION

In the last decade, the industry has collaborated at different levels to realize power aware designs for different applications. Low power was initially adopted for products targeting the consumer and handheld markets, but today it is pervasive across all segments that the semiconductor industry serves. The key is to define the low power intent early and incorporate tools to validate that the intent is maintained all throughout. As a result, low power leads to greater responsibility for all stakeholders in the design cycle in general, and for verification engineers in particular!

The DVCON 2014 issue of Verification Horizons has an article, “Taming power aware bugs with Questa”, co-authored by me on this subject.


Drop in your questions and experiences with low power designs in the comment section or write to me at siddhakarana@gmail.com

Sunday, December 29, 2013

Sequence Layering in UVM

Another year has passed, and the verification world saw further advancements in how better to verify rising complexity. At the core of verification there are two pillars that have been simplifying this complexity all throughout. First is REUSABILITY at various levels, i.e. within a project from block to top level, across projects within an organization, and deploying VIPs or a methodology like UVM across the industry. Second is raising ABSTRACTION levels, i.e. from signals to transactions and from transactions to further abstraction layers, so as to deal with complexity in a better way. Sequence layering is one such concept that incorporates the best of both.
 
What is layering?
 
In simple terms, layering is where one encapsulates a detailed/complex function or task and moves it to a higher level. In day to day life layering is used everywhere, e.g. instead of describing a human being as an entity with 2 eyes, 1 nose, 2 ears, 2 hands, 2 legs etc., we simply say human being. When we go to a restaurant and ask for an item from the menu, say barbeque chicken, we don't explain the ingredients and the process to prepare it. The latest example is apps on handhelds, where with just the click of an icon everything relevant is available. In short, we avoid the complexity of implementation and the associated details while making use of generic stuff.
 
What is sequence layering?
 
Applying the concept of layering to sequences in UVM (or any methodology) improves code reusability by developing at a higher abstraction level. This is achieved by adding a layering agent derived from uvm_agent. The layering agent doesn't have a driver though; all it has is a sequencer and a monitor. The way it works is that a high level sequence item is now associated with this layering sequencer. It connects to the sequencer of the lower level protocol using the same mechanism as used by the sequencer and driver inside a uvm_agent. The lower level sequencer runs only one sequence (a translation sequence that takes the higher level sequence item and translates it into lower level sequence items) as a forever thread. Inside this sequence we call get_next_item, similar to what a uvm_driver does. The item is received from the higher level sequencer, translated by this lower level sequence and given to its driver. Once done, an item_done style response is passed back from this lower level sequence to the layering sequencer, indicating that it is ready for the next item.
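A hedged sketch of such a translation sequence (hi_packet, lo_transaction and the hi_sequencer handle are placeholders for the actual protocol classes; the environment is assumed to assign the handle before starting the sequence):

class hi_to_lo_translation_seq extends uvm_sequence #(lo_transaction);

  `uvm_object_utils(hi_to_lo_translation_seq)

  // Handle to the higher level (layering) sequencer, set by the environment
  uvm_sequencer #(hi_packet) hi_sequencer;

  function new(string name = "hi_to_lo_translation_seq");
    super.new(name);
  endfunction

  virtual task body();
    hi_packet      hi_item;
    lo_transaction lo_item;
    forever begin
      // Pull the next high level item, just like a driver would
      hi_sequencer.get_next_item(hi_item);
      lo_item = lo_transaction::type_id::create("lo_item");
      // ... map hi_item fields onto lo_item (possibly several lo items per hi item) ...
      start_item(lo_item);
      finish_item(lo_item);
      // Tell the layering sequencer that the item has been consumed
      hi_sequencer.item_done();
    end
  endtask

endclass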
 
Figure 1 : Concept diagram of the layering agent
 
On the analysis front, the same layering agent can also have a monitor at the higher level. This monitor is connected to the monitor of the lower layer protocol. Once a lower layer packet is received, it is passed on to this higher level monitor, wherein it is translated into a higher level packet based on the configuration information. The higher level packet is then given to the scoreboard for comparison. So we need only one scoreboard for all potential configurations of an IP, and the layering agent's monitor does the job of translation.
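A corresponding sketch of the higher level monitor, modeled here as a uvm_subscriber to the lower layer monitor's analysis port (all names are illustrative, and a real translation may accumulate several lower level transactions per packet):

class hi_layer_monitor extends uvm_subscriber #(lo_transaction);

  `uvm_component_utils(hi_layer_monitor)

  // Translated high level packets are published for the scoreboard
  uvm_analysis_port #(hi_packet) hi_ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    hi_ap = new("hi_ap", this);
  endfunction

  // Called through the lower layer monitor's analysis port
  virtual function void write(lo_transaction t);
    hi_packet pkt;
    pkt = hi_packet::type_id::create("pkt");
    // ... translate t into pkt based on the configuration information ...
    hi_ap.write(pkt);
  endfunction

endclass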
 
Example Implementation
 
I recently co-authored a paper on this subject at CDNLive 2013, Bangalore, and we received the Best Paper Award! The paper describes the application of sequence layering where our team was involved in verifying a highly configurable memory controller supporting multiple protocols on the processor side and a number of protocols on the DRAM memory controller front. A related blog post is here.
 
I would like to hear from you in case you implemented the above or similar concepts in any of your projects. If you would like to see any other topic covered through this blog do drop in an email to siddhakarana@gmail.com (Anonymity guaranteed if requested).
 
Wish you all a happy & healthy new year!!!
 
 

Monday, October 14, 2013

Trishool for verification

It’s the time of the year when I try to correlate mythology with verification. Yes, the festive season is back in India, and this is when we celebrate the fact that good prevails over evil. Given the diversity of Indian culture, there are a variety of mythological stories of demigods prevailing over evil. For us in the verification domain, it is the BUG that plays the evil, preventing us from achieving first silicon success. While consumerism of electronic devices worked wonders in growing the business, it also forced ASIC teams to gear up for developing better products in shrinking schedules. Given that verification is the long pole in achieving this goal, verification teams need a solution that ensures the functionality on chip is intact in a time bound fashion. The rising design complexity has further transformed verification into a problem so diverse that a single tool or methodology is unable to solve it. Clearly a multi pronged approach.... a TRISHOOL is required!
 
 
TRISHOOL is a Sanskrit word meaning 'three spears'. The symbol is polyvalent and is wielded by the Hindu god Shiva and goddess Durga. Even Poseidon, the Greek god of the sea, and Neptune, the Roman god of the sea, are known to carry it. The three points have various meanings and significance, one common explanation being that Lord Shiva uses the Trishool to destroy the three worlds: the physical world, the world of culture drawn from the past, and the world of the mind representing the processes of sensing and acting. In the physical sense, three spears would be far more fatal than a single one.
 
So we need a verification strategy equivalent to a Trishool to ensure that our efforts converge in rendering a fatal blow to the hidden bugs. The verification tools available today correlate well with the spears of the Trishool and, put together, would weed out bugs in a staged manner moving from IP to SoC verification. So which are the 3 main tools that should be part of this strategy to nail down the complex problem we are dealing with today?
 
Constrained Random Verification (CRV) is the workhorse for IP verification, and reuse of that effort at SoC level makes it the main, central spear of the verification Trishool strategy. CRV is complemented by the other two spears on either side, i.e. Formal Verification and Graph based Verification. The focus of CRV is more at the IP level, wherein the design is stressed under constraints to attack the thought through features (verification plan & coverage) and also hit corner cases or areas not comprehended. As we move from IP to SoC level, we face two major challenges. One is repetitive code, typically combinational in nature, where formal techniques can prove the functionality comprehensively and in limited simulation cycles. Second is developing SoC level scenarios that cover all aspects of the SoC and not just the integration of IPs. The third spear, graph based verification, comes to the rescue at this level, where multi processor integration, use cases, low power scenarios and performance simulations are enabled with much ease.
 
Hope this festive season, while we enjoy the food and sweets, the stories and enactments we experience around us will also give enough food for thought towards a Trishool strategy for verification.
 
Happy Dussehra!!!
 
Related posts -

Sunday, October 6, 2013

Essential ingredients for developing VIPs

The last post, Verification IP : Build or Buy?, initiated some good offline discussions over email and with verification folks during my customer visits. Given the interest, here is a quick summary of important items that need to be taken care of while developing a VIP or evaluating one. Hopefully they will further help you decide on Build vs. Buy :)
 
1. First & foremost is the quality of the VIP. Engineers would advocate that quality can be confirmed by extensive validation & reviews. However, nothing can beat a well defined & documented process that ensures predictability & repeatability. It has to be a closed loop, i.e. defining a process, documenting it & monitoring to ascertain that it is followed. This helps in bringing all team members in sync to carry out a given task, provides clarity to the schedule and acts as a training platform for new team members.
 
2. Next is the architecture of the VIP. Architecture means a blueprint that conveys what to place where. In the absence of a common architecture, different VIPs from the same vendor would assume different forms. This affects user productivity as he/she would need to ramp up separately for each VIP. Integration, debugging & developing additional code would consume extra time & effort. Due to inconsistency across the products, VIP maintenance would be tough for the vendor too. A good architecture is one that leads to automation of the skeleton while providing guidelines on making the VIP simulator and HVL+methodology agnostic.
 
3. With the wide adoption of accelerators, the need for having synthesizable transactors is rising. While transactors may not be available with initial releases of the VIP, having a process in place on how to add them when needed without affecting the core architecture of the VIP is crucial. The user level APIs shouldn’t change so that the same set of tests & sequences can be reused for either simulator or accelerator.
 
4. While architecture related stuff is generic, defining the basic transaction element, interface and configuration classes for different components of the VIP is protocol specific. This partitioning is essential for preserving the VIP development schedule & incorporating flexibility in the VIP for future updates based on customer requests or protocol changes.
 
5. Talking about protocol specific components, it is important to model the agents supporting a plug & play architecture. Given the introduction of layered protocols across domains, it is essential to provide flexible APIs at the agent level so as to morph it differently based on the use cases without affecting the core logic.
 
6. Scoreboard, assertions, protocol checkers & coverage model are essential ingredients of any VIP. While they are part of the release, user should be able to enable/disable them. For assertions, this control is required at a finer level. Also, the VIPs should not restrict use of vendor provided classes. The user should be able to override any or all of these classes as per requirement.
 
7. Debugging claims the majority of verification time. The log messages, debug & trace information generated by the VIP should all converge in aiding faster debug and root causing the issue at hand. Again, not all information is desired every time; controls for enabling different levels based on the focus of verification are required.
 
8. Events notify the user on what is happening and can be used to extend the checkers or coverage. While having a lot of events helps, too many of them affect simulator performance. Having control knobs to enable/disable events is desirable.
 
9. Talking about simulator performance, it is important to avoid JUGAAD (work around) in the code. There are tools available that can comment on code reusability & performance. Incorporating such tools as part of the development process is a key to clean code.
 
10. As in design, the VIP should be able to gracefully handle reset at any point during operation. It also needs to support error injection capabilities.
 
11. Finally, a detailed protocol compliance test suite with coverage model needs to accompany the VIP delivery.
 
These are essential ingredients. Fancy toppings are still possible to differentiate the VIP from alternate solutions though.
 
Looking for more comments & further discussions ....
 
Relevant posts -
 

Sunday, August 18, 2013

Verification IP : Build or Buy?

Consumerism of electronic products is driving SoC companies to tape out multiple product variants every year. Demand for faster, lower power, more functionality and interoperability is forcing the industry to come up with standard solutions for different interfaces on the SoC. In the past couple of years, tens of new protocols have shown up on silicon and an equal number have been revised, spreading their descriptions across thousands of pages. Reusability is the key to conquering this level of complexity, both for design and verification. The licensing models for IPs & VIPs vary, and many design houses are still in a dilemma over 'Build vs. Buy' for verification.
 
Why BUILD?
 
Points that run in favour of developing in-house VIP solutions include –
 
- Cost of licensing the VIP front loads the overall design cost for a given project.
- The VIP may not be available for the chosen HVL, methodology & simulator.
- Encrypted VIP code aggravates the debug cycle, delaying already aggressive schedules.
- VIP & simulator from different vendors lead to further delays in root causing issues.
- A verification environment developed around a VIP ties you to that vendor.
- DUT specific customizations need to be developed around the VIP. Absence of adequate configurability in available solutions poses a high risk to verification.
 
Why BUY?
 
While obvious, reasons why to procure the VIP include –
 
- Reusability advocates focusing on the features that differentiate the final product, leaving innovation on standard solutions to the relevant experts.
- Developing a VIP comes with a cost. A team needs to be identified, built and maintained all throughout, with the risk that attrition hits at critical times.
- Time to market is important. Developing/upgrading in house VIP may delay the product itself.
- For new protocols or upgrades to existing ones, there would be a ramp up associated with protocol knowledge and this increases the risk with internally developed solutions.
- The probability of finding a bug, and of the end product being interoperable, is higher with third party solutions that have been exercised against a variety of designs.
- Architecting a VIP is easier said than done. Absence of an architecture & process leads to multiple issues.
- In house solutions may not be reusable across product lines (different applications) or projects due to missing configurability at all levels. Remember, verification is all about JUGAAD, and that philosophy doesn’t work for VIP development.
- With the increasing adoption of hardware acceleration/emulation for SoC verification, there is a need to develop transactors to reuse the VIP, an additional effort which otherwise would be done by the vendor.
- A poorly developed VIP can badly affect simulator/accelerator performance in general and at SoC level in particular, which in turn affects the productivity of the team. To be competitive, vendors have to focus on this aspect, which is otherwise missing with internal solutions.
- External solutions come with example cases and ready to use env giving a jumpstart to verification. For in-house solutions the verification team may end up experimenting to bring up the environment adding to delays.
 
Clearly the points in favour of BUY outweigh the BUILD ones. In fact, the ecosystem around VIP is evolving, and solutions are now available for the issues favouring BUILD too. With standard HVLs and methodologies like UVM, simulator agnostic VIP is relatively easy to find. Multiple VIP vendors and design service providers with a VIP architecture platform are getting into co-development of VIPs to solve the problems of specific language, methodology, encryption and availability of transactors for acceleration. Customization of VIPs to address DUT specific features, or to enable transition from one vendor solution to another, is also on the rise through such engagements.
 
With this, the debate within the organization needs to move from BUILD vs. BUY to defining the selection criteria for COLLABORATION with vendors who can deliver the required solution with quality at the desired time.
 
In case you are still planning to build one, drop in your comments on why.
 
Relevant posts -

Sunday, June 23, 2013

Leveraging Verification manager tools for objective closure

Shrinking schedules, topped with increasing expectations in terms of more functionality, opened the gates for reusability in the semiconductor industry. Reusability (internal or external) is constantly on the rise, both in design and verification. Following are some of the trends on reusability presented by Harry Foster at DVCLUB UK 2013, based on the Wilson Research Group study of 2012 commissioned by Mentor Graphics.
 

Both IPs and SoCs now demand periodic releases targeting specific features for a customer or a particular product category. It is important to be objective in verifying the design for the given context, and to ensure the latest verification tools and methodologies do not dilute the required focus. With verification claiming most of the ASIC design schedule in terms of effort and time, conventional schemes fail to manage verification progress and extend predictable closure. There is a need for a platform that helps direct the focus of CRV, brings in automation around coverage, provides initial triaging of failures and aids methodical verification closure. While a lot of this has been done using in-house scripts, significant time is spent maintaining them. There are multiple solutions available in the market, and the beauty of being in consulting is that you get to play around with most of them, considering customer preferences towards a particular EDA flow.
 
QVM (Questa Verification Manager) is one such platform provided by Mentor Graphics. I recently co-authored an article (QVM : Enabling Organized, Predictable and Faster Verification Closure) published in Verification Horizons, DAC 2013 edition. It is available on Verification Academy, or you can download the paper here too.